902 results for Digital techniques
Abstract:
The Dark Ages are generally held to be a time of technological and intellectual stagnation in western development. But that is not necessarily the case. Indeed, from a certain perspective, nothing could be further from the truth. In this paper we draw historical comparisons, focusing especially on the thirteenth and fourteenth centuries, between the technological and intellectual ruptures in Europe during the Dark Ages, and those of our current period. Our analysis is framed in part by Harold Innis’s notion of "knowledge monopolies". We give an overview of how these were affected by new media, new power struggles, and new intellectual debates that emerged in thirteenth- and fourteenth-century Europe. The historical salience of our focus may seem elusive. Our world has changed so much, and history seems to be an increasingly far-from-favoured method for understanding our own period and its future potentials. Yet our seemingly distant historical focus provides some surprising insights into the social dynamics that are at work today: the fracturing of established knowledge and power bases; the democratisation of certain "sacred" forms of communication and knowledge, and, conversely, the "sacrosanct" appropriation of certain vernacular forms; challenges and innovations in social and scientific method and thought; the emergence of social world-shattering media practices; struggles over control of vast networks of media and knowledge monopolies; and the enclosure of public discursive and social spaces for singular, manipulative purposes. The period between the eleventh and fourteenth centuries in Europe prefigured what we now call the Enlightenment, perhaps more so than any other period before or after; it shaped what the Enlightenment was to become. We claim no knowledge of the future here. But in the "post-everything" society, where history is as much up for sale as it is for argument, we argue that our historical perspective provides a useful analogy for grasping the wider trends in the political economy of media, and for recognising clear and actual threats to the future of the public sphere in supposedly democratic societies.
Abstract:
Language is a unique aspect of human communication because it can be used to discuss itself in its own terms. For this reason, human societies potentially have greater capacities for co-ordination, reflexive self-correction, and innovation than other animal, physical or cybernetic systems. However, this analysis also reveals that language is interconnected with the economically and technologically mediated social sphere and hence is vulnerable to abstraction, objectification, reification, and therefore ideology – all of which are antithetical to its reflexive function, whilst paradoxically being a fundamental part of it. In particular, in capitalism, language is increasingly commodified within the social domains created and affected by ubiquitous communication technologies. The advent of the so-called ‘knowledge economy’ implicates exchangeable forms of thought (language) as the fundamental commodities of this emerging system. The historical point at which a ‘knowledge economy’ emerges, then, is the critical point at which thought itself becomes a commodified ‘thing’, and language becomes its “objective” means of exchange. However, the processes by which such commodification and objectification occur obscure the unique social relations within which these language commodities are produced. The latest economic phase of capitalism – the knowledge economy – and the obfuscating trajectory which accompanies it, we argue, are destroying the reflexive capacity of language, particularly through the process of commodification. This can be seen in the language practices that have emerged in conjunction with digital technologies, which are increasingly non-reflexive and therefore less capable of self-critical, conscious change.
Abstract:
Robust, affine-covariant feature extractors provide a means to extract correspondences between images captured by widely separated cameras. Advances in wide baseline correspondence extraction require looking beyond the robust feature extraction and matching approach. This study examines new techniques for extracting correspondences that take advantage of information contained in affine feature matches. Methods of improving the accuracy of a set of putative matches, eliminating incorrect matches and extracting large numbers of additional correspondences are explored. It is assumed that knowledge of the camera geometry is not available and not immediately recoverable. The new techniques are evaluated by means of an epipolar geometry estimation task. It is shown that these methods enable the computation of camera geometry in many cases where existing feature extractors cannot produce sufficient numbers of accurate correspondences.
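As a point of reference for the evaluation setup described above, the sketch below illustrates the conventional detect-match-RANSAC route to epipolar geometry with OpenCV. It is not the technique proposed in the study; the image file names, ratio-test threshold and RANSAC parameters are illustrative assumptions.

```python
# A minimal sketch of the standard baseline pipeline: detect local features in
# two widely separated views, form putative matches with a ratio test, then
# estimate the fundamental matrix with RANSAC. Inputs and thresholds are
# hypothetical.
import cv2
import numpy as np

img1 = cv2.imread("view_left.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input images
img2 = cv2.imread("view_right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Putative matches via Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Epipolar geometry estimation task: fit F with RANSAC and count inliers.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
print("putative matches:", len(good), "inliers:", int(inlier_mask.sum()))
```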
Abstract:
This chapter explores some of the practical and theoretical obstacles and opportunities for self-expression experienced by a group of Queer Digital Storytellers who primarily make and distribute their stories online. “Queer” in this chapter encompasses a diverse range of gender and sexual identities and perspectives on same, including the heterosexual children of queer parents and heterosexual parents of queer children. As such it is also used as a unifying moniker by participants in the Rainbow Family Tree case study that is examined in this chapter. The Digital Storytellers in this case study are largely motivated by a desire to have an impact on social attitudes towards gender and sexuality, both in their personal province of friends and family, and in public domains constituted of unknown or invisible audiences. The privacy and publicity dilemmas that will be considered arise out of positioning personal stories in the public domain and the quandaries that emerge from an activist desire to speak truth to power that is located across a wide cross section of audiences.
Abstract:
This article examines social, cultural and technological change in the systems and economies of educational information management. Since the Sumerians first collected, organized and supervised administrative and religious records some six millennia ago, libraries have been key physical depositories and cultural signifiers in the production and mediation of social capital and power through education. To date, the textual, archival and discursive practices perpetuating libraries have remained exempt from inquiry. My aim here is to remedy this hiatus by making the library itself the terrain and object of critical analysis and investigation. The paper argues that in the three dominant communications eras—namely, oral, print and digital cultures—society’s centres of knowledge and learning have resided in the ceremony, the library and the cybrary respectively. In a broad-brush historical grid, each of these key educational institutions—the ceremony in oral culture, the library in print culture and the cybrary in digital culture—is mapped against the social, cultural and technological orders pertaining to its era. Following a description of these shifts in society’s collective cultural memory, the paper then examines what the development of global information systems and economies means for schools and libraries today, and for teachers and learners as knowledge consumers and producers.
Abstract:
In this paper we examine the problem of prediction with expert advice in a setup where the learner is presented with a sequence of examples coming from different tasks. In order for the learner to be able to benefit from performing multiple tasks simultaneously, we make assumptions of task relatedness by constraining the comparator to use fewer best experts than there are tasks. We show how this corresponds naturally to learning under spectral or structural matrix constraints, and propose regularization techniques to enforce the constraints. The regularization techniques proposed here are interesting in their own right, and multitask learning is just one application for the ideas. A theoretical analysis of one such regularizer is performed, and a regret bound that shows the benefits of this setup is reported.
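For orientation, the sketch below shows the standard single-task building block behind this setting: the exponentially weighted average (Hedge) forecaster over a fixed pool of experts. It does not implement the multitask matrix regularization proposed in the paper; the loss matrix and learning rate are illustrative assumptions.

```python
# A minimal sketch of the exponentially weighted average (Hedge) forecaster,
# the basic prediction-with-expert-advice routine that multitask variants
# build on. Losses are assumed to lie in [0, 1].
import numpy as np

def hedge_regret(expert_losses, eta=0.5):
    """expert_losses: array of shape (T, K), per-round loss of each of K experts."""
    T, K = expert_losses.shape
    log_w = np.zeros(K)                       # uniform prior over experts
    learner_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                          # current distribution over experts
        learner_loss += p @ expert_losses[t]  # learner's expected loss this round
        log_w -= eta * expert_losses[t]       # exponential weight update
    # Regret against the best single expert in hindsight.
    return learner_loss - expert_losses.sum(axis=0).min()

rng = np.random.default_rng(0)
print(hedge_regret(rng.uniform(size=(1000, 8))))   # illustrative random losses
```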
Abstract:
We consider the problem of prediction with expert advice in the setting where a forecaster is presented with several online prediction tasks. Instead of competing against the best expert separately on each task, we assume the tasks are related, and thus we expect that a few experts will perform well on the entire set of tasks. That is, our forecaster would like, on each task, to compete against the best expert chosen from a small set of experts. While we describe the "ideal" algorithm and its performance bound, we show that the computation required for this algorithm is as hard as computing a matrix permanent. We present an efficient algorithm based on mixing priors, and prove a bound that is nearly as good for the sequential task presentation case. We also consider a harder case where the task may change arbitrarily from round to round, and we develop an efficient approximate randomized algorithm based on Markov chain Monte Carlo techniques.
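To make the comparator concrete, the sketch below brute-forces the benchmark implied above: choose one shared subset of m experts and let each task be served by its best expert from that subset. It is meant only to illustrate why the "ideal" benchmark is combinatorial; it is not the mixing-priors or MCMC algorithm, and the loss matrix and subset size are illustrative assumptions.

```python
# A sketch of the shared-subset comparator: exhaustively search over subsets
# of m experts and, for each candidate subset, let every task use its best
# expert within that subset. Exhaustive search is exponential in m, which
# hints at why exact computation is expensive.
import numpy as np
from itertools import combinations

def best_shared_subset(task_expert_loss, m):
    """task_expert_loss: (num_tasks, num_experts) total loss of each expert on each task."""
    num_tasks, num_experts = task_expert_loss.shape
    best_subset, best_total = None, np.inf
    for subset in combinations(range(num_experts), m):
        cols = list(subset)
        # Each task picks its best expert within the candidate subset.
        total = task_expert_loss[:, cols].min(axis=1).sum()
        if total < best_total:
            best_subset, best_total = subset, total
    return best_subset, best_total

rng = np.random.default_rng(1)
losses = rng.uniform(size=(5, 10))            # 5 tasks, 10 experts (illustrative)
print(best_shared_subset(losses, m=2))
```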
Abstract:
Facial expression recognition (FER) algorithms mainly focus on classification into a small discrete set of emotions or representation of emotions using facial action units (AUs). Dimensional representation of emotions as continuous values in an arousal-valence space is relatively less investigated. It is not fully known whether fusion of geometric and texture features will result in better dimensional representation of spontaneous emotions. Moreover, the performance of many previously proposed approaches to dimensional representation has not been evaluated thoroughly on publicly available databases. To address these limitations, this paper presents an evaluation framework for dimensional representation of spontaneous facial expressions using texture and geometric features. SIFT, Gabor and LBP features are extracted around facial fiducial points and fused with FAP distance features. The CFS algorithm is adopted for discriminative texture feature selection. Experimental results evaluated on the publicly accessible NVIE database demonstrate that fusion of texture and geometry does not lead to a much better performance than using texture alone, but does result in a significant performance improvement over geometry alone. LBP features perform the best when fused with geometric features. Distributions of arousal and valence for different emotions obtained via the feature extraction process are compared with those obtained from subjective ground truth values assigned by viewers. Predicted valence is found to have a more similar distribution to the ground truth than arousal in terms of covariance or Bhattacharyya distance, but it shows a greater distance between the means.
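As an illustration of the distribution comparison mentioned above, the sketch below computes the Bhattacharyya distance between Gaussian approximations of predicted and ground-truth (valence, arousal) samples. The Gaussian assumption and the sample data are illustrative only, not the paper's exact procedure.

```python
# A sketch of comparing two sets of (valence, arousal) values by fitting a
# Gaussian to each and computing the Bhattacharyya distance between them.
import numpy as np

def bhattacharyya_gaussian(x, y):
    """x, y: arrays of shape (n, 2) holding (valence, arousal) samples."""
    mu1, mu2 = x.mean(axis=0), y.mean(axis=0)
    s1, s2 = np.cov(x, rowvar=False), np.cov(y, rowvar=False)
    s = 0.5 * (s1 + s2)                       # average covariance
    diff = mu1 - mu2
    term_mean = 0.125 * diff @ np.linalg.solve(s, diff)
    term_cov = 0.5 * np.log(np.linalg.det(s) /
                            np.sqrt(np.linalg.det(s1) * np.linalg.det(s2)))
    return term_mean + term_cov

rng = np.random.default_rng(2)
predicted = rng.normal([0.2, 0.1], [0.3, 0.4], size=(200, 2))      # hypothetical predictions
ground_truth = rng.normal([0.25, 0.3], [0.3, 0.5], size=(200, 2))  # hypothetical ratings
print(bhattacharyya_gaussian(predicted, ground_truth))
```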
Understanding the mechanisms of graft union formation in Solanaceae plants using in vitro techniques