10 results for Wyoming Massacre, 1778.
in CentAUR: Central Archive University of Reading - UK
Abstract:
Much is made of the viscerally disturbing qualities embedded in The Texas Chain Saw Massacre – human bodies are traumatised, mutilated and distorted – and the way these are matched by close and often intense access to the performers involved. Graphic violence focused on the body specifically marks the film as a key contemporary horror text. Yet, for all this closeness to the performers, close analysis of the film soon makes clear that access to them is equally characterised by extreme distance, both spatially and cognitively. The issue of distance is particularly striking, not least because of its ramifications for engagement, which raises various aesthetic and methodological questions concerning performers’ expressive authenticity. This article considers the lack of access to performance in The Texas Chain Saw Massacre, paying particular attention to how this fits in with contemporaneous presentations of performance more generally, as seen in films such as Junior Bonner (Sam Peckinpah, 1972). As part of this investigation I consider the effect of such a severe disruption of access on engagement with, and discussion of, performance. At the heart of this investigation lie methodological considerations of the place of performance analysis in the post-studio period. How can we perceive anything of a character’s interior life, and therefore engage with performers to whom we fundamentally lack access? Does such an apparently significant difference in the way performers and their embodiment are treated mean that they can even be thought of as delivering a performance?
Abstract:
In order to fully evaluate an interconnection network, it is essential to estimate the minimum size of a largest connected component of the network, given that faulty vertices may break its connectedness. Star graphs are recognized as promising candidates for interconnection networks. This article addresses the size of a largest connected component of a faulty star graph. We prove that, in an n-star graph (n >= 3) with up to 2n-4 faulty vertices, all fault-free vertices but at most two form a connected component. Moreover, all fault-free vertices but exactly two form a connected component if and only if the set of all faulty vertices is equal to the neighbourhood of a pair of fault-free adjacent vertices. These results show that star graphs exhibit excellent fault tolerance, in the sense that a large functional network survives in a faulty star graph.
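The stated result can be sanity-checked computationally for a small case. The sketch below is an illustrative check (not the paper's proof): it builds the 4-star graph, marks as faulty the 2n-4 = 4 vertices forming the neighbourhood of a pair of adjacent fault-free vertices, and confirms that exactly two fault-free vertices are cut off while all the others remain connected.

```python
from itertools import permutations
from collections import deque

def star_graph(n):
    """n-star graph: vertices are permutations of 1..n; two vertices are
    adjacent if one is obtained from the other by swapping the first
    symbol with the symbol in position i (i = 2..n)."""
    verts = list(permutations(range(1, n + 1)))
    def nbrs(p):
        return [(p[i], *p[1:i], p[0], *p[i + 1:]) for i in range(1, n)]
    return verts, nbrs

def components(alive, nbrs):
    """Connected components of the subgraph induced by `alive` (BFS)."""
    alive, seen, comps = set(alive), set(), []
    for s in alive:
        if s in seen:
            continue
        comp, q = {s}, deque([s])
        seen.add(s)
        while q:
            v = q.popleft()
            for w in nbrs(v):
                if w in alive and w not in seen:
                    seen.add(w); comp.add(w); q.append(w)
        comps.append(comp)
    return comps

n = 4
verts, nbrs = star_graph(n)
u = tuple(range(1, n + 1))      # identity permutation
v = nbrs(u)[0]                  # a neighbour of u, so {u, v} is an edge
faulty = (set(nbrs(u)) | set(nbrs(v))) - {u, v}
assert len(faulty) == 2 * n - 4             # exactly 2n-4 faulty vertices
alive = [p for p in verts if p not in faulty]
sizes = sorted(len(c) for c in components(alive, nbrs))
print(sizes)    # → [2, 18]: the pair {u, v} is isolated, the rest connected
```

The output matches the theorem's boundary case: with the faulty set equal to the neighbourhood of the adjacent pair, all fault-free vertices but exactly two form a single connected component.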
Abstract:
A drag law accounting for Ekman rotation adjacent to a flat, horizontal boundary is proposed for use in a plume model that is written in terms of the depth-mean velocity. The drag law contains a variable turning angle between the mean velocity and the drag imposed by the turbulent boundary layer. The effect of the variable turning angle in the drag law is studied for a plume of ice shelf water (ISW) ascending and turning beneath an Antarctic ice shelf with draft decreasing away from the grounding line. As the ISW plume ascends the sloping ice shelf–ocean boundary, it can melt the ice shelf, which alters the buoyancy forcing driving the plume motion. Under these conditions, the typical turning angle is of order −10° over most of the plume area for a range of drag coefficients (the minus sign arises for the Southern Hemisphere). The rotation of the drag with respect to the mean velocity is found to be significant if the drag coefficient exceeds 0.003; in this case the plume body propagates farther along and across the base of the ice shelf than a plume with the standard quadratic drag law with no turning angle.
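One way to picture such a drag law is as the standard quadratic law applied to the depth-mean velocity rotated through the turning angle. The snippet below is a minimal, hypothetical sketch of that idea; the function name, sign conventions, and the omission of the density factor are my assumptions, not the paper's formulation.

```python
import math

def rotated_quadratic_drag(u, v, cd, theta_deg):
    """Quadratic drag turned by an angle theta relative to the mean
    velocity (illustrative sketch of the drag law described above).
    With theta = 0 this reduces to the standard quadratic drag law:
        (tau_x, tau_y) = -cd * |U| * (u, v)."""
    speed = math.hypot(u, v)
    th = math.radians(theta_deg)
    c, s = math.cos(th), math.sin(th)
    # rotate the velocity vector, then apply the quadratic law to it
    ur = c * u - s * v
    vr = s * u + c * v
    return (-cd * speed * ur, -cd * speed * vr)

# theta = 0 recovers the standard quadratic drag law
print(rotated_quadratic_drag(1.0, 0.0, 0.003, 0.0))
# a -10 degree turning angle (Southern Hemisphere) turns the drag
# off the velocity axis while preserving its magnitude
print(rotated_quadratic_drag(1.0, 0.0, 0.003, -10.0))
```

Note that the rotation changes only the direction of the drag, not its magnitude, which is why its dynamical effect emerges only once the drag coefficient is large enough for the cross-flow component to matter.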
Abstract:
We use a stratosphere–troposphere composition–climate model with interactive sulfur chemistry and aerosol microphysics, to investigate the effect of the 1991 Mount Pinatubo eruption on stratospheric aerosol properties. Satellite measurements indicate that shortly after the eruption, between 14 and 23 Tg of SO2 (7 to 11.5 Tg of sulfur) was present in the tropical stratosphere. Best estimates of the peak global stratospheric aerosol burden are in the range 19 to 26 Tg, or 3.7 to 6.7 Tg of sulfur assuming a composition of between 59 and 77 % H2SO4. In light of this large uncertainty range, we performed two main simulations with 10 and 20 Tg of SO2 injected into the tropical lower stratosphere. Simulated stratospheric aerosol properties through the 1991 to 1995 period are compared against a range of available satellite and in situ measurements. Stratospheric aerosol optical depth (sAOD) and effective radius from both simulations show good qualitative agreement with the observations, with the timing of peak sAOD and decay timescale matching well with the observations in the tropics and mid-latitudes. However, injecting 20 Tg yields a stratospheric aerosol mass burden a factor of 2 higher than the satellite data indicate, with consequent strong high biases in simulated sAOD and surface area density; the 10 Tg injection is in much better agreement. Our model cannot explain the large fraction of the injected sulfur that the satellite-derived SO2 and aerosol burdens indicate was removed within the first few months after the eruption. We suggest that either there is an additional alternative loss pathway for the SO2 not included in our model (e.g. via accommodation into ash or ice in the volcanic cloud) or that a larger proportion of the injected sulfur was removed via cross-tropopause transport than in our simulations.
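The SO2-to-sulfur and aerosol-to-sulfur conversions quoted above follow from molar masses alone. A back-of-envelope check (assuming, as the quoted compositions imply, that the non-H2SO4 fraction of the aerosol mass carries no sulfur):

```python
# Molar masses (g/mol, rounded): S = 32, SO2 = 64, H2SO4 = 98
S_PER_SO2 = 32.0 / 64.0      # sulfur mass fraction of SO2
S_PER_H2SO4 = 32.0 / 98.0    # sulfur mass fraction of H2SO4

# 14-23 Tg of SO2 corresponds to 7-11.5 Tg of sulfur
print(14.0 * S_PER_SO2, 23.0 * S_PER_SO2)        # → 7.0 11.5

# 19-26 Tg of aerosol at 59-77 % H2SO4 by mass -> Tg of sulfur
print(round(19.0 * 0.59 * S_PER_H2SO4, 1))       # → 3.7
print(round(26.0 * 0.77 * S_PER_H2SO4, 1))       # ~6.5 (close to the
# quoted 6.7; the small difference presumably reflects rounding in the
# burden and composition values as quoted in the abstract)
```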
We also critically evaluate the simulated evolution of the particle size distribution, comparing in detail to balloon-borne optical particle counter (OPC) measurements from Laramie, Wyoming, USA (41° N). Overall, the model captures remarkably well the complex variations in particle concentration profiles across the different OPC size channels. However, for the 19 to 27 km injection height-range used here, both runs have a modest high bias in the lowermost stratosphere for the finest particles (radii less than 250 nm), and the decay timescale is longer in the model for these particles, with a much later return to background conditions. Also, whereas the 10 Tg run compared best to the satellite measurements, a significant low bias is apparent in the coarser size channels in the volcanically perturbed lower stratosphere. Overall, our results suggest that, with appropriate calibration, aerosol microphysics models are capable of capturing the observed variation in particle size distribution in the stratosphere across both volcanically perturbed and quiescent conditions. Furthermore, additional sensitivity simulations suggest that predictions with the models are robust to uncertainties in sub-grid particle formation and nucleation rates in the stratosphere.
Abstract:
We propose a topological approach to the problem of determining a curve from its iterated integrals. In particular, we prove that a family of terms in the signature series of a two-dimensional closed curve with finite p-variation, 1≤p<2, are in fact moments of its winding number. This relation allows us to prove that the signature series of a class of simple non-smooth curves uniquely determine the curves. This implies that outside a chordal SLE_κ null set, where 0<κ≤4, the signature series of curves uniquely determine the curves. Our calculations also enable us to express the Fourier transform of the n-point functions of SLE curves in terms of the expected signature of SLE curves. Although the techniques used in this article are deterministic, the results provide a platform for studying SLE curves through the signatures of their sample paths.
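For intuition about the winding number invoked above: for a discretized closed curve it can be computed by summing signed angle increments around a base point. The snippet below is purely illustrative of that classical definition (the article itself works with iterated integrals, not this discrete formula).

```python
import math

def winding_number(points, z0=(0.0, 0.0)):
    """Winding number of a closed polygonal curve around z0:
    the sum of signed angle increments, divided by 2*pi."""
    total = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i][0] - z0[0], points[i][1] - z0[1]
        x1 = points[(i + 1) % n][0] - z0[0]
        y1 = points[(i + 1) % n][1] - z0[1]
        # signed angle between successive position vectors
        total += math.atan2(x0 * y1 - y0 * x1, x0 * x1 + y0 * y1)
    return total / (2.0 * math.pi)

# the unit circle traversed twice winds twice around the origin
circle2 = [(math.cos(t), math.sin(t))
           for t in (4 * math.pi * k / 200 for k in range(200))]
print(round(winding_number(circle2)))   # → 2
```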
Abstract:
A recent field campaign in southwest England used numerical modeling integrated with aircraft and radar observations to investigate the dynamic and microphysical interactions that can result in heavy convective precipitation. The COnvective Precipitation Experiment (COPE) was a joint UK-US field campaign held during the summer of 2013 in the southwest peninsula of England, designed to study convective clouds that produce heavy rain leading to flash floods. The clouds form along convergence lines that develop regularly due to the topography. Major flash floods have occurred in the past, most famously at Boscastle in 2004. It has been suggested that much of the rain was produced by warm rain processes, similar to some flash floods that have occurred in the US. The overarching goal of COPE is to improve quantitative convective precipitation forecasting by understanding the interactions of the cloud microphysics and dynamics and thereby to improve numerical weather prediction (NWP) model skill for forecasts of flash floods. Two research aircraft, the University of Wyoming King Air and the UK BAe 146, obtained detailed in situ and remote sensing measurements in, around, and below storms on several days. A new fast-scanning X-band dual-polarization Doppler radar made 360-deg volume scans over 10 elevation angles approximately every 5 minutes, and was augmented by two UK Met Office C-band radars and the Chilbolton S-band radar. Detailed aerosol measurements were made on the aircraft and on the ground. This paper: (i) provides an overview of the COPE field campaign and the resulting dataset; (ii) presents examples of heavy convective rainfall in clouds containing ice and also in relatively shallow clouds through the warm rain process alone; and (iii) explains how COPE data will be used to improve high-resolution NWP models for operational use.