Abstract:
In four chapters, various aspects of the earthquake source are studied.
Chapter I
Surface displacements that followed the Parkfield, 1966, earthquakes were measured for two years with six small-scale geodetic networks straddling the fault trace. The logarithmic rate and the periodic nature of the creep displacement recorded on a strain meter made it possible to predict creep episodes on the San Andreas fault. Some individual earthquakes were related directly to surface displacement, while in general, slow creep and aftershock activity were found to occur independently. The Parkfield earthquake is interpreted as a buried dislocation.
Chapter II
The source parameters of earthquakes between magnitude 1 and 6 were studied using field observations, fault plane solutions, and surface wave and S-wave spectral analysis. The seismic moment, M0, was found to be related to local magnitude, ML, by log M0 = 1.7 ML + 15.1. The source length vs. magnitude relation for the San Andreas system was found to be ML = 1.9 log L - 6.7. The surface wave envelope parameter AR gives the moment according to log M0 = log AR300 + 30.1, and the stress drop, τ, was found to be related to the magnitude by τ = 0.54 M - 2.58. The relation between surface wave magnitude MS and ML is proposed to be MS = 1.7 ML - 4.1. It is proposed to estimate the relative stress level (and possibly the strength) of a source region by the amplitude ratio of high-frequency to low-frequency waves. An apparent stress map for Southern California is presented.
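For concreteness, the scaling relations above are straightforward to evaluate. The following minimal Python sketch applies them directly; the abstract does not state units, so dyne-cm for M0 and cm for L are assumed here (these choices give physically plausible outputs, e.g. a source length near 50 km at ML 6):

    def seismic_moment(ml):
        """Seismic moment M0 (dyne-cm assumed) from local magnitude ML:
        log M0 = 1.7 ML + 15.1."""
        return 10 ** (1.7 * ml + 15.1)

    def source_length_km(ml):
        """Source length from ML = 1.9 log L - 6.7 (San Andreas system).
        L is assumed to be in cm, so divide by 1e5 to report km."""
        return 10 ** ((ml + 6.7) / 1.9) / 1e5

    def surface_wave_magnitude(ml):
        """MS from the proposed relation MS = 1.7 ML - 4.1."""
        return 1.7 * ml - 4.1

    for ml in (3.0, 4.5, 6.0):
        print(f"ML {ml}: M0 = {seismic_moment(ml):.1e} dyne-cm, "
              f"L = {source_length_km(ml):.1f} km, "
              f"MS = {surface_wave_magnitude(ml):.1f}")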
Chapter III
Seismic triggering and seismic shaking are proposed as two closely related mechanisms of strain release which explain observations of the character of the P wave generated by the Alaskan earthquake of 1964, and distant fault slippage observed after the Borrego Mountain, California earthquake of 1968. The Alaska, 1964, earthquake is shown to be adequately described as a series of individual rupture events. The first of these events had a body wave magnitude of 6.6 and is considered to have initiated or triggered the whole sequence. The propagation velocity of the disturbance is estimated to be 3.5 km/sec. On the basis of circumstantial evidence it is proposed that the Borrego Mountain, 1968, earthquake caused release of tectonic strain along three active faults at distances of 45 to 75 km from the epicenter. It is suggested that this mechanism of strain release is best described as "seismic shaking."
Chapter IV
The changes of apparent stress with depth are studied in the South American deep seismic zone. For shallow earthquakes the apparent stress is 20 bars on the average, the same as for earthquakes in the Aleutians and on Oceanic Ridges. At depths between 50 and 150 km the apparent stresses are relatively high, approximately 380 bars, and around 600 km depth they are again near 20 bars. The seismic efficiency is estimated to be 0.1. This suggests that the true stress is obtained by multiplying the apparent stress by ten. The variation of apparent stress with depth is explained in terms of the hypothesis of ocean floor consumption.
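As a worked illustration of the last two statements (a sketch using only the numbers quoted above, and the standard definition of apparent stress as efficiency times true average stress): with seismic efficiency η ≈ 0.1, true stress ≈ apparent stress / 0.1, i.e. roughly 200 bars for the shallow average of 20 bars and roughly 3800 bars for the intermediate-depth value of 380 bars.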
Abstract:
"The Structure and Interpretation of Computer Programs" is the entry-level subject in Computer Science at the Massachusetts Institute of Technology. It is required of all students at MIT who major in Electrical Engineering or in Computer Science, as one fourth of the "common core curriculum," which also includes two subjects on circuits and linear systems and a subject on the design of digital systems. We have been involved in the development of this subject since 1978, and we have taught this material in its present form since the fall of 1980 to approximately 600 students each year. Most of these students have had little or no prior formal training in computation, although most have played with computers a bit and a few have had extensive programming or hardware design experience. Our design of this introductory Computer Science subject reflects two major concerns. First we want to establish the idea that a computer language is not just a way of getting a computer to perform operations, but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute. Secondly, we believe that the essential material to be addressed by a subject at this level, is not the syntax of particular programming language constructs, nor clever algorithms for computing particular functions of efficiently, not even the mathematical analysis of algorithms and the foundations of computing, but rather the techniques used to control the intellectual complexity of large software systems.
Abstract:
"Facts and Fictions: Feminist Literary Criticism and Cultural Critique, 1968-2012" is a critical history of the unfolding of feminist literary study in the US academy. It contributes to current scholarly efforts to revisit the 1970s by reconsidering often-repeated narratives about the critical naivety of feminist literary criticism in its initial articulation. As the story now goes, many of the most prominent feminist thinkers of the period engaged in unsophisticated literary analysis by conflating lived social reality with textual representation when they read works of literature as documentary evidence of real life. As a result, the work of these "bad critics," particularly Kate Millett and Andrea Dworkin, has not been fully accounted for in literary critical terms.
This dissertation returns to Dworkin and Millett's work to argue for a different history of feminist literary criticism. Rather than dismiss their work for its conflation of fact and fiction, I pay attention to the complexity at the heart of it, yielding a new perspective on the history and persistence of the struggle to use literary texts for feminist political ends. Dworkin and Millett established the centrality of reality and representation to the feminist canon debates of "the long 1970s," the sex wars of the 1980s, and the more recent feminist turn to memoir. I read these productive periods in feminist literary criticism from 1968 to 2012 through their varied commitment to literary works.
Chapter One begins with Millett, who de-aestheticized male-authored texts to treat patriarchal literature in relation to culture and ideology. Her mode of literary interpretation was so far afield from the established methods of New Criticism that she was not understood as a literary critic. She was repudiated in the feminist literary criticism that followed her and sought sympathetic methods for reading women's writing. In that decade, the subject of Chapter Two, feminist literary critics began to judge texts on the basis of their ability to accurately depict the reality of women's experiences.
Their vision of the relationship between life and fiction shaped arguments about pornography during the sex wars of the 1980s, the subject of Chapter Three. In this context, Dworkin was feminism's "bad critic." I focus on the literary critical elements of Dworkin's theories of pornographic representation and align her with Millett as a miscategorized literary critic. In the decades following the sex wars, many of the key feminist literary critics of the founding generation (including Dworkin, Jane Gallop, Carolyn Heilbrun, and Millett) wrote memoirs that recounted, largely in experiential terms, the history this dissertation examines. Chapter Four considers the story these memoirists told about the rise and fall of feminist literary criticism. I close with an epilogue on the place of literature in a feminist critical enterprise that has shifted toward privileging theory.
Abstract:
Introduction: Dominant ideas of modern study: unity, induction, evolution.
Book I. Literary morphology: varieties of literature and their underlying principles.
Book II. The field and scope of literary study.
Book III. Literary evolution as reflected in the history of world literature.
Book IV. Literary criticism: the traditional confusion and the modern reconstruction.
Book V. Literature as a mode of philosophy.
Book VI. Literature as a mode of art.
Conclusion: The traditional and the modern study of literature.
Syllabus. Works of the author. General index.
Seventh impression, June 1928.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
How is contemporary culture 'framed' - understood, promoted, dissected and defended - in the new approaches being employed in university education today? How do these approaches compare with those seen in the public policy process? What are the implications of these differences for future directions in theory, education, activism and policy? Framing Culture looks at cultural and media studies, which are rapidly growing fields through which students are introduced to contemporary cultural industries such as television, film and video. It compares these approaches with those used to frame public policy and finds a striking lack of correspondence between them. Issues such as Australian content on commercial television and in advertising, new technologies and new media, and violence in the media all highlight the gap between contemporary cultural theories and the way culture and communications are debated in public policy. The reasons for this gap must be investigated before closer relations can be established. Framing Culture brings together cultural studies and policy studies in a lively and innovative way. It suggests avenues for cultural activism that have been neglected in cultural theory and practice, and it will provoke debates which are long overdue.
Abstract:
This paper discusses the principal domains of auto- and cross-trispectra. It is shown that the cumulant and moment based trispectra are identical except on certain planes in trifrequency space. If these planes are avoided, their principal domains can be derived by considering the regions of symmetry of the fourth order spectral moment. The fourth order averaged periodogram will then serve as an estimate for both cumulant and moment trispectra. Statistics of estimates of normalised trispectra or tricoherence are also discussed.
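The estimator named in the last two sentences is easy to sketch: averaging the fourth-order periodogram over segments gives an estimate of the trispectrum on a grid of discrete trifrequencies. The following minimal numpy sketch is illustrative only; it applies no windowing, bias correction, or principal-domain reduction, and the function name is ours, not the paper's:

    import numpy as np

    def averaged_fourth_order_periodogram(x, nseg, nfft=32):
        """Average the fourth-order periodogram
            X(f1) X(f2) X(f3) conj(X(f1 + f2 + f3))
        over nseg non-overlapping segments of x, estimating the
        auto-trispectrum on an nfft**3 grid of trifrequency indices."""
        x = np.asarray(x, dtype=float)
        seglen = len(x) // nseg
        f = np.arange(nfft)
        # frequency index f1 + f2 + f3 (mod nfft) for the conjugated factor
        s = (f[:, None, None] + f[None, :, None] + f[None, None, :]) % nfft
        acc = np.zeros((nfft, nfft, nfft), dtype=complex)
        for k in range(nseg):
            seg = x[k * seglen:(k + 1) * seglen]
            X = np.fft.fft(seg - seg.mean(), nfft)
            acc += (X[:, None, None] * X[None, :, None]
                    * X[None, None, :] * np.conj(X[s]))
        return acc / nseg

    # e.g. T = averaged_fourth_order_periodogram(np.random.randn(4096), nseg=128)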
Abstract:
Urban maps discusses new ways and tools to read and navigate the contemporary city. Each chapter investigates a possible approach to unravelling the complexity of contemporary urban forms. Each tool is first defined, introducing its philosophical background, and is then discussed through case studies, showing its relevance for the navigation of the built environment. Urbanism classics such as the work of Lynch, Jacobs, Venturi and Scott Brown, Lefebvre and Walter Benjamin are fundamental in setting the framework of the volume. In the introduction, cities and mapping are first discussed; the former are illustrated as ‘a composite of invisible networks devoid of landmarks and overrun by nodes’ (p. 3), and ‘a series of unbounded spaces where mass production and mass consumption reproduce a standardised quasi-global culture’ (p. 6).
Abstract:
Cell trajectory data is often reported in the experimental cell biology literature to distinguish between different types of cell migration. Unfortunately, there is no accepted protocol for designing or interpreting such experiments, and this makes it difficult to quantitatively compare different published data sets and to understand how changes in experimental design influence our ability to interpret different experiments. Here, we use an individual-based mathematical model to simulate the key features of a cell trajectory experiment. This shows that our ability to correctly interpret trajectory data is extremely sensitive to the geometry and timing of the experiment, the degree of motility bias and the number of experimental replicates. We show that cell trajectory experiments produce data that is most reliable when the experiment is performed in a quasi-1D geometry, with a large number of identically prepared experiments conducted over a relatively short time interval rather than a few trajectories recorded over particularly long time intervals.
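As a purely illustrative example of the kind of individual-based simulation described (the abstract gives no model details, so every parameter name and value below is ours, not the paper's):

    import numpy as np

    def simulate_trajectories(n_cells=50, n_steps=100, step=1.0, bias=0.2, seed=0):
        """Individual-based sketch of a cell trajectory experiment: each cell
        takes fixed-length steps in 2D; with probability `bias` a step goes
        straight along +x (the biased direction), otherwise its direction is
        uniformly random. bias=0 recovers an unbiased walk."""
        rng = np.random.default_rng(seed)
        pos = np.zeros((n_cells, n_steps + 1, 2))
        for t in range(n_steps):
            theta = rng.uniform(-np.pi, np.pi, n_cells)
            theta[rng.random(n_cells) < bias] = 0.0
            pos[:, t + 1] = pos[:, t] + step * np.column_stack(
                [np.cos(theta), np.sin(theta)])
        return pos

    # Mean net displacement along the bias axis over many replicates is one
    # simple statistic for telling biased from unbiased migration.
    print(simulate_trajectories(bias=0.3)[:, -1, 0].mean())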
Abstract:
An updated version, this excellent text is a timely addition to the library of any nurse researching in oncology or other settings where individuals’ quality of life must be understood. Health-related quality of life should be a central aspect of studies concerned with health and illness. Indeed, considerable evidence has recently emerged in oncology and other research settings that self-reported quality of life is of great prognostic significance and may be the most reliable predictor of subsequent morbidity and mortality. From a nursing perspective, it is also gratifying to note that novel therapy and other oncology studies increasingly recognize the importance of understanding patients’ subjective experiences of an intervention over time and of ascertaining whether patients perceive that a new intervention makes a difference to their quality of life and treatment outcomes. Measurements of quality of life are now routine in clinical trials of chemotherapy drugs and are often considered the prime outcome of interest in the cost/benefit analyses of these treatments. The authors have extensive experience in quality-of-life assessment in cancer clinical trials, where most of the pioneering work into quality of life has been conducted. That said, many of the health-related quality-of-life issues discussed are common to many illnesses, and researchers outside of cancer should find the book equally helpful.