4 results for "Uncover the meaning"
in Glasgow Theses Service
Abstract:
When we take a step back from the imposing figure of physical violence, it becomes possible to examine other structurally violent forces that constantly shape our cultural and political landscapes. One of the driving interests in the “turn to Paul” in recent continental philosophy stems from wrestling with questions about the real nature of contemporary violence. Paul is positioned as a thinker whose messianic experience began to cut through the violent masquerade of the existing order. The crucifixion and resurrection of the Messiah (a slave and a God co-existing in one body) exposed the empty grounding upon which power resided. The Christ-event signifies a moment of violent interruption in the existing order, which Paul enjoins the Gentiles to participate in through a dedication of love for the neighbour. This divine violence aims to reveal and subvert the “powers,” epitomised in the Roman Empire, in order to fulfil the labour of the Messianic now-time which had arrived. The impetus behind this research comes from a typically enigmatic and provocative section of text by the Slovene philosopher, cultural critic, and Christian atheist Slavoj Žižek. He claims that 'the notion of love should be given here all its Paulinian weight: the domain of pure violence… is the domain of love' (2008a, 173). In this move he links Paul’s idea of love to Walter Benjamin’s notion of divine violence; the sublime and the cataclysmic come together in this seemingly perverse notion. At stake here is the way in which uncovering violent forces in the “zero-level” of our narrative worldviews aids the diagnosis of contemporary political and ethical issues. It is not enough to imagine Paul’s encounter with the Christ-event as non-violent. This Jewish apocalyptic movement was engaged in a violent struggle within an existing order that God’s wrath would soon dismantle.
Paul’s weak violence, inspired by his fidelity to the Christ-event, places all responsibility for creation in the hands of the individual within the collective body. The centrepiece of this re-imagined construction of the Pauline narrative comes in Romans 13: the violent dedication to love understood in the radical nature of the now-time. This research examines the role that narratives play in the creation and diagnosis of these violent forces. In order to construct a new genealogy of violence in Christianity it is crucial to understand the role of the slave of Christ (the revolutionary messianic subject). This turn in the Symbolic is examined through creating a literary structure in which we can approach a radical Nietzschean shift in Pauline thought. The claim here, a claim which is also central to Paul’s letters, is that when the symbolic violence which manipulates our worldviews is undone by a divine violence, if even for a moment, new possibilities are created in the opening for a transvaluation of values. Through this we uncover the nature of original sin: the consequences of the interconnected reality of our actions. The role of literature is vital in the construction of this narrative; starting with Cormac McCarthy’s No Country for Old Men, and continuing through works such as Melville’s Bartleby the Scrivener, this thesis draws upon the power of literature in the shaping of our narrative worlds. Typical of the continental philosophy at the heart of this work, a diverse range of illustrations and inspirations from fiction is pulled into its narrative to reflect the symbolic universe through which this work was forged. This work attempts to give this theory a greater grounding in Paul’s letters by demonstrating the radical kenotic power at the heart of the Christ-event.
Romans 13 reveals, in a way that has not yet been picked up by Critchley, Žižek, and others, that Paul opposed the biopolitical power of the Roman Empire through the weak violence of love that is the labour of the slaves of Christ in the “now-time” that had arrived.
Abstract:
Many exchange rate papers articulate the view that instabilities constitute a major impediment to exchange rate predictability. In this thesis we implement Bayesian and other techniques to account for such instabilities, and examine some of the main obstacles to exchange rate models' predictive ability. We first consider in Chapter 2 a time-varying parameter model in which fluctuations in exchange rates are related to short-term nominal interest rates ensuing from monetary policy rules, such as Taylor rules. Unlike the existing exchange rate studies, the parameters of our Taylor rules are allowed to change over time, in light of the widespread evidence of shifts in fundamentals, for example in the aftermath of the Global Financial Crisis. Focusing on quarterly data from the crisis onwards, we detect forecast improvements upon a random walk (RW) benchmark for at least half, and for as many as seven out of ten, of the currencies considered. Results are stronger when we allow the time-varying parameters of the Taylor rules to differ between countries. In Chapter 3 we look closely at the role of time-variation in parameters and other sources of uncertainty in hindering exchange rate models' predictive power. We apply a Bayesian setup that incorporates the notion that the relevant set of exchange rate determinants, and their corresponding coefficients, change over time. Using statistical and economic measures of performance, we first find that predictive models which allow for sudden, rather than smooth, changes in the coefficients yield significant forecast improvements and economic gains at horizons beyond one month. At shorter horizons, however, our methods fail to forecast better than the RW. We identify uncertainty in coefficient estimation, and uncertainty about the precise degree of coefficient variability to incorporate in the models, as the main factors obstructing predictive ability.
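The random walk benchmark used throughout these chapters predicts no change in the exchange rate, so a model "beats the RW" when the ratio of its forecast error to the RW's falls below one. As a rough illustrative sketch (not code from the thesis), the comparison for one-step-ahead forecasts can be written as:

```python
import numpy as np

def rmse(forecasts, actuals):
    """Root mean squared forecast error."""
    return np.sqrt(np.mean((np.asarray(forecasts) - np.asarray(actuals)) ** 2))

def theil_ratio(model_forecasts, rates):
    """Ratio of a model's RMSE to the random-walk RMSE for one-step-ahead
    exchange rate forecasts. The driftless RW forecasts next period's rate
    as simply the current rate, so a ratio below 1 means the model
    outperforms the RW benchmark."""
    rates = np.asarray(rates, dtype=float)
    actuals = rates[1:]          # realised rates at t+1
    rw_forecasts = rates[:-1]    # RW forecast: no change from t
    return rmse(model_forecasts, actuals) / rmse(rw_forecasts, actuals)
```

The thesis evaluates such ratios (alongside economic performance measures) per currency; here `model_forecasts` is any sequence of one-step-ahead predictions aligned with `rates[1:]`.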
Chapter 4 focuses on the problem of the time-varying predictive ability of economic fundamentals for exchange rates. It uses bootstrap-based methods to uncover the time-specific conditioning information for predicting fluctuations in exchange rates. Employing several metrics for statistical and economic evaluation of forecasting performance, we find that our approach, based on pre-selecting and validating fundamentals across bootstrap replications, generates more accurate forecasts than the RW. The approach, known as bumping, robustly reveals parsimonious models with out-of-sample predictive power at the one-month horizon, and outperforms alternative methods, including Bayesian, bagging, and standard forecast combinations. Chapter 5 exploits the predictive content of daily commodity prices for monthly commodity-currency exchange rates. It builds on the idea that the effect of daily commodity price fluctuations on commodity currencies is short-lived, and therefore harder to pin down at low frequencies. Using MIxed DAta Sampling (MIDAS) models, and Bayesian estimation methods to account for time-variation in predictive ability, the chapter demonstrates the usefulness of suitably exploiting such short-lived effects in improving exchange rate forecasts. It further shows that the usual low-frequency predictors, such as money supplies and interest rate differentials, typically receive little support from the data at the monthly frequency, whereas MIDAS models featuring daily commodity prices receive high posterior support. The chapter also introduces the random walk Metropolis-Hastings technique as a new tool to estimate MIDAS regressions.
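The core idea of bumping is to fit candidate models on bootstrap replications of the data but validate each fitted model on the original sample, keeping the specification that survives resampling noise best. The following minimal sketch (a generic illustration with a hypothetical interface, not the thesis's implementation) applies that idea to selecting among candidate predictor subsets via OLS:

```python
import numpy as np

def bump_select(X, y, candidate_sets, n_boot=200, seed=0):
    """Bumping sketch for predictor pre-selection (hypothetical interface).
    For each bootstrap replication, fit OLS on every candidate subset of
    predictors; keep the (subset, coefficients) pair whose bootstrap fit
    best predicts the ORIGINAL sample. This favours parsimonious models
    that are robust to resampling noise."""
    rng = np.random.default_rng(seed)
    n = len(y)
    best_set, best_err = None, np.inf
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                  # bootstrap resample
        for cols in candidate_sets:
            Xb, yb = X[idx][:, cols], y[idx]
            beta, *_ = np.linalg.lstsq(Xb, yb, rcond=None)
            # validate on the original (non-resampled) data
            err = np.mean((y - X[:, cols] @ beta) ** 2)
            if err < best_err:
                best_set, best_err = cols, err
    return best_set, best_err
```

In the thesis the candidates are sets of exchange rate fundamentals and the evaluation is out-of-sample; the skeleton above only shows the resample-fit-then-validate loop that defines bumping.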
Abstract:
This thesis examines the manufacture, use, exchange (including gift exchange), collecting and commodification of German medals and badges from the early 18th century until the present day, with particular attention being given to the symbols that were deployed by the National Socialist German Workers’ Party (NSDAP) between 1919 and 1945. It does so by focusing in particular on the construction of value through insignia, and how such badges and their symbolic and monetary value changed over time. In order to achieve this, the thesis adopts a chronological structure, which encompasses the creation of Prussia in 1701, the Napoleonic wars and the increased democratisation of military awards such as the Iron Cross during the Great War. The collapse of the Kaiserreich in 1918 was the major factor that led to the creation of the NSDAP, a fundamentally racist and anti-Semitic movement that came under the eventual stranglehold of Hitler and continued the German tradition of awarding and wearing badges. The traditional symbols of Imperial Germany, such as the eagle, were then infused with the swastika, an emblem that was meant to signify anti-Semitism, thus creating a hybrid identity. This combination was then replicated en masse, and eventually eclipsed all the symbols that had possessed symbolic significance in Germany’s past. After Hitler was appointed Chancellor in 1933, millions of medals and badges were produced in an effort to create a racially based “People’s Community”, but the demand for steel and iron for munitions eventually led to substitute materials being developed and utilised in order to manufacture millions of politically oriented badges. The Second World War unleashed Nazi terror across Europe, and the conscripts and volunteers who took part in this fight for living-space were rewarded with medals that were modelled on those that had been instituted during Imperial times.
The colonial conquest and occupation of the East by the Wehrmacht, the Order Police and the Waffen-SS surpassed the brutality of former wars and finally culminated in the Holocaust, and the perpetrators of some of these horrific crimes were perversely rewarded with medals and badges. Despite Nazism being thoroughly discredited, many of the Allied soldiers who occupied Germany took part in the age-old practice of obtaining trophies of war, which reconfigured the meaning of Nazi badges as souvenirs, and began the process of their increased commodification on an emerging secondary collectors’ market. In order to analyse the dynamics of this market, a “basket” of badges is examined that enables a discussion of the role that aesthetics, scarcity and authenticity have in determining the price of the artefacts. In summary, this thesis demonstrates how the symbolic, socio-economic and exchange value of German military and political medals and badges has changed substantially over time, provides a stimulus for scholars to conduct research in this under-developed area, and encourages collectors to investigate the artefacts that they collect in a more historically contextualised manner.
Abstract:
With the rise of smartphones, lifelogging devices (e.g. Google Glass) and the popularity of image-sharing websites (e.g. Flickr), users are capturing and sharing every aspect of their life online, producing a wealth of visual content. Of these uploaded images, the majority are poorly annotated or exist in complete semantic isolation, making the process of building retrieval systems difficult, as one must first understand the meaning of an image in order to retrieve it. To alleviate this problem, many image-sharing websites offer manual annotation tools which allow the user to “tag” their photos; however, these techniques are laborious and as a result have been poorly adopted; Sigurbjörnsson and van Zwol (2008) showed that 64% of images uploaded to Flickr are annotated with fewer than four tags. Consequently, an entire body of research has focused on the automatic annotation of images (Hanbury, 2008; Smeulders et al., 2000; Zhang et al., 2012a), where one attempts to bridge the semantic gap between an image’s appearance and its meaning, e.g. the objects present. Despite two decades of research, the semantic gap still largely exists, and as a result automatic annotation models often offer unsatisfactory performance for industrial implementation. Further, these techniques can only annotate what they see, thus ignoring the “bigger picture” surrounding an image (e.g. its location, the event, the people present, etc.). Much work has therefore focused on building photo tag recommendation (PTR) methods which aid the user in the annotation process by suggesting tags related to those already present. These works have mainly focused on computing relationships between tags based on historical images, e.g. that NY and timessquare co-exist in many images and are therefore highly correlated. However, tags are inherently noisy, sparse and ill-defined, often resulting in poor PTR accuracy, e.g. does NY refer to New York or New Year?
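The co-occurrence idea behind such PTR methods (e.g. NY and timessquare appearing together in many images) can be sketched in a few lines. This is a generic illustration, not the thesis's model: it counts tag pairs over a historical photo collection and suggests the tags that co-occur most with the user's existing tags.

```python
from collections import Counter, defaultdict

def build_cooccurrence(tagged_photos):
    """Count how often each pair of tags appears on the same photo."""
    co = defaultdict(Counter)
    for tags in tagged_photos:
        for t in tags:
            for u in tags:
                if t != u:
                    co[t][u] += 1
    return co

def recommend(co, seed_tags, k=5):
    """Suggest the k tags co-occurring most often with the user's
    existing tags, excluding tags already present on the photo."""
    scores = Counter()
    for t in seed_tags:
        scores.update(co.get(t, Counter()))
    for t in seed_tags:
        scores.pop(t, None)
    return [t for t, _ in scores.most_common(k)]
```

A recommender this simple inherits exactly the ambiguity the abstract describes: if NY co-occurs with both timessquare and fireworks in the history, the seed tag alone cannot disambiguate New York from New Year, which is where the contextual signals of the thesis come in.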
This thesis proposes the exploitation of an image’s context which, unlike textual evidence, is always present, in order to alleviate this ambiguity in the tag recommendation process. Specifically, we exploit the “what, who, where, when and how” of the image capture process in order to complement textual evidence in various photo tag recommendation and retrieval scenarios. In Part II, we combine text, content-based (e.g. # of faces present) and contextual (e.g. day-of-the-week taken) signals for tag recommendation purposes, achieving up to a 75% improvement in precision@5 in comparison to a text-only TF-IDF baseline. We then consider external knowledge sources (i.e. Wikipedia & Twitter) as an alternative to (slower moving) Flickr on which to build recommendation models, showing that similar accuracy can be achieved on these faster moving, yet entirely textual, datasets. In Part II, we also highlight the merits of diversifying tag recommendation lists before discussing at length various problems with existing automatic image annotation and photo tag recommendation evaluation collections. In Part III, we propose three new image retrieval scenarios, namely “visual event summarisation”, “image popularity prediction” and “lifelog summarisation”. In the first scenario, we attempt to produce a rank of relevant and diverse images for various news events by (i) removing irrelevant images such as memes and visual duplicates, before (ii) semantically clustering images based on the tweets in which they were originally posted. Using this approach, we were able to achieve over 50% precision for images in the top 5 ranks. In the second retrieval scenario, we show that by combining contextual and content-based features from images, we are able to predict whether an image will become “popular” (or not) with 74% accuracy, using an SVM classifier.
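The precision@5 figures quoted above measure, for each photo, what fraction of the top-5 suggested tags are actually relevant. For reference, a minimal sketch of the standard metric (not the thesis's evaluation code) is:

```python
def precision_at_k(recommended, relevant, k=5):
    """Precision@k: fraction of the top-k recommended tags that appear
    in the photo's ground-truth tag set. By convention the denominator
    is k, so recommending fewer than k tags is penalised."""
    top_k = recommended[:k]
    relevant_set = set(relevant)
    hits = sum(1 for t in top_k if t in relevant_set)
    return hits / k
```

Scores are then averaged over all test photos; the "75% improvement in precision@5" claim compares this average for the combined-signal model against the text-only TF-IDF baseline.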
Finally, in Chapter 9 we employ blur detection and perceptual-hash clustering in order to remove noisy images from lifelogs, before combining visual and geo-temporal signals in order to capture a user’s “key moments” within their day. We believe that the results of this thesis mark an important step towards building effective image retrieval models when sufficient textual content is lacking (i.e. a cold start).
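Perceptual-hash clustering of the kind used for lifelog de-noising rests on hashes that change little when an image changes little. As a generic illustration (the thesis does not specify which perceptual hash it uses), the simple "average hash" works like this: downsample the image to a tiny grayscale grid, then set one bit per pixel according to whether it is brighter than the mean; near-duplicates then have a small Hamming distance between their hashes.

```python
def average_hash(pixels):
    """Average-hash sketch: `pixels` is a small 2-D grid of grayscale
    intensities (assumed already resized, e.g. to 8x8). Each bit records
    whether that pixel is brighter than the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes; small
    distances indicate near-duplicate images."""
    return sum(a != b for a, b in zip(h1, h2))
```

Clustering then amounts to grouping images whose pairwise Hamming distance falls below a threshold, keeping one representative per group.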