916 results for NATURE, HEALING POWER OF
Abstract:
The purpose of this hermeneutic phenomenological study was to explore students’ experiences with the power of their instructors in a higher education classroom. This study provides a deeper understanding of instructor power from student perspectives to inform teaching practices in the higher education classroom.
Abstract:
Background: Connectomics, the mapping of neuronal connections, is a rapidly evolving field of neuroscience that promises major advances in our understanding of brain function. The formation of neuronal circuits in response to environmental stimuli is an emergent property of the brain, yet our knowledge of the precise nature of these networks remains limited. Even in the visual cortex, the most studied brain area, how information is transmitted from neuron to neuron remains an open question. This invites the study of the emergence of microcircuits in response to visual stimuli; in other words, how is the interaction between a stimulus and a cell assembly established and modulated?
Methods: Neuronal ensembles were recorded with tungsten multi-electrodes in layer II/III (area 17) of the primary visual cortex of anesthetized cats during the presentation of drifting sinusoidal gratings. Cross-correlations were computed between the activity of each pair of simultaneously recorded neurons to reveal functional links of quasi-synchrony (± 5 ms window on the corrected cross-correlograms). These functional links indicate putative synaptic connections between neurons. Peri-stimulus time histograms (PSTHs) of the neurons were then compared to highlight the temporal synergistic collaboration within the revealed functional networks. Finally, firing-rate-dependent and stimulus-dependent spectrograms were computed to observe gamma oscillations in the emerging microcircuits, and a correlation index (Rsc) was calculated for connected and unconnected neurons.
Results: Functionally linked neurons show increased activity during a 50 ms period, in contrast to functionally unconnected neurons, suggesting that connections between neurons lead to a synergy of their mutual excitability. Moreover, analysis of the firing-rate-dependent spectrogram reveals that connected neurons exhibit stronger gamma activity than unconnected neurons during a 50 ms window of opportunity. Low-frequency gamma activity (20-40 Hz) was associated with regular-spiking (RS) neurons, and high-frequency activity (60-80 Hz) with fast-spiking (FS) neurons. Functionally connected neurons also systematically show a higher Rsc than unconnected neurons. Finally, cross-correlogram analysis reveals that, within a neuronal assembly, the functional network changes with the orientation of the grating. We thus demonstrate that the strength of functional relationships depends on the orientation of the sinusoidal grating. This relationship led us to propose the following hypothesis: beyond the selectivity of neurons to specific stimulus features, there is also a selectivity of the connectome. In short, "signature" functional networks are activated within an assembly that is strictly associated with the presented orientation and, more generally, with the properties of the stimuli.
Conclusion: This study underscores that the cell assembly, rather than the single neuron, is the fundamental functional unit of the brain. This dilutes the importance of each neuron working in isolation, i.e. the classical firing-rate paradigm traditionally used to study stimulus encoding. The study also advances the debate on gamma oscillations, in that they arise systematically between connected neurons within assemblies, as a consequence of added coherence.
Although the size of the recorded assemblies is relatively small, this study nevertheless suggests an intriguing functional specificity between interacting neurons in an assembly responding to visual stimulation. It can be regarded as a premise for large-scale computational modeling of functional connectomes.
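The link-detection step described in this abstract (corrected cross-correlograms with a ±5 ms quasi-synchrony window) can be sketched as follows. This is a minimal illustration, not the authors' analysis code: the binary spike trains, 1 ms bins, circular-shift surrogate correction, and z-score threshold are all assumptions made here for clarity.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_ms=1, max_lag_ms=50):
    """Raw cross-correlogram between two spike trains binned at bin_ms."""
    lags = np.arange(-max_lag_ms, max_lag_ms + 1, bin_ms)
    cc = np.array([np.sum(spikes_a * np.roll(spikes_b, lag)) for lag in lags])
    return lags, cc

def functional_link(spikes_a, spikes_b, window_ms=5, n_surrogates=100, z_thresh=4.0):
    """Flag a putative synaptic link when the corrected correlogram peak
    inside +/- window_ms exceeds z_thresh standard deviations of a
    surrogate baseline (large circular shifts destroy fine timing)."""
    rng = np.random.default_rng(0)
    lags, cc = cross_correlogram(spikes_a, spikes_b)
    surrogate = np.array([
        cross_correlogram(
            spikes_a,
            np.roll(spikes_b, int(rng.integers(100, len(spikes_b) - 100)))
        )[1]
        for _ in range(n_surrogates)
    ])
    corrected = cc - surrogate.mean(axis=0)   # shift-predictor-style correction
    z = corrected / (surrogate.std(axis=0) + 1e-12)
    center = np.abs(lags) <= window_ms        # quasi-synchrony window
    return bool(z[center].max() > z_thresh)
```

With two trains where one is a copy of the other delayed by 2 ms, the corrected peak falls inside the ±5 ms window and a link is flagged; two independent random trains are not.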
Abstract:
In this dissertation, I offer a pedagogical proposal for learning the Christian Scriptures guided by respect for the nature of the reader and the integrity of the biblical text. Christian educators have profitably applied recent theoretical interest in the body’s role in human meaning to worship and praxis methodologies, but the implications of this research for communal study of the biblical text merit further development. I make the case for adopting scriptural imagination as the goal of pedagogically constructed encounters with the Christian Scriptures. The argument proceeds through a series of questions addressing both sides of the text/reader encounter.
Chapter one considers the question “what is the nature of the reader and, subsequently, the shape of the reader’s ways of knowing?” This investigation into recent literature on the body’s involvement in human knowing includes related epistemological shifts within Christian education. On the basis of this survey, imagination emerges as a compelling designator of an incorporative, constructive creaturely capacity that gives rise to a way of being in the world. Teachers of Scripture who intend to participate in Christian formation should account for the imagination’s centrality to all knowing. After briefly situating this proposal within a theological account of creatureliness, I make the initial case for scriptural imagination as a pedagogical aim.
Imagination as creaturely capacity addresses the first guiding value, but does this proposal also respect the integrity and nature of the biblical text, and specifically of biblical narratives? In response, in chapter two I take up the Acts of the Apostles as a potential test case and exemplar for the dynamics pertinent to the formation of imagination. Drawing on secondary literature on the genre and literary features of Acts, I conclude that Acts coheres with this project’s explicit interest in imagination as a central component of the process of Christian formation in relationship to the Scriptures.
Chapters three and four each take up a pericope from Acts to assess whether the theoretical perspectives developed in prior chapters generate any interpretive payoff. In each of these chapters, a particular story within Acts functions as a test case for readings of biblical narratives guided by a concern for scriptural imagination. Each of these chapters begins with further theoretical development of some element of imaginal formation. Chapter three provides a theoretical account of practices as they relate to imagination, bringing that theory into conversation with Peter’s engagement in hospitality practices with Cornelius in Acts 10:1-11:18. Chapter four discusses the formative power of narratives, with implications for the analysis of Paul’s shipwreck in Acts 27:1-28:16.
In the final chapter, I offer a two-part constructive pedagogical proposal for reading scriptural narratives in Christian communities. First, I suggest adopting resonance above relevance as the goal of pedagogically constructed encounters with the Scriptures. Second, I offer three ways of reading with the body, including the physical, ecclesial, and social bodies that shape all learning. I conclude by identifying the importance of scriptural imagination for Christian formation and witness in the twenty-first century.
Abstract:
Countering the trend in contemporary ecocriticism to advance realism as an environmentally responsible mode of representation, this essay argues that the anti-realist aesthetics of literary modernism were implicitly “ecological.” In order to make this argument I distinguish between contemporary and modernist ecological culture (both of which I differentiate in turn from ecological science); while the former is concerned primarily with the practical reform characteristic of what we now call “environmentalism,” the latter demanded an all-encompassing reimagination of the relationship between humanity and nature. “Modernist ecology,” as I call it, attempted to envision this change, which would be ontological or metaphysical rather than simply social, through thematically and formally experimental works of art. Its radical vision, suggestive in some ways of today’s “deep” ecology, repudiated modern accounts of nature as a congeries of inert objects to be manipulated by a sovereign subject, and instead foregrounded the chiasmic intertexture of the subject/object relationship. In aesthetic modernism we encounter not “objective” nature, but “nature-being” – a blank substratum beneath the solid contours of what philosopher Kate Soper calls “lay nature” – the revelation of which shatters historical constructions of nature and alone allows for radical alternatives. This essay looks specifically at modernist ecology as it appears in the works of W. B. Yeats, D. H. Lawrence, and Samuel Beckett, detailing their attempts to envision revolutionary new ecologies, but also their struggles with the limited capacity of esoteric modernist art to effect significant ecological change on a collective level.
Abstract:
Recently, resilience has become a catchall solution for some of the world’s most pressing ecological, economic and social problems. This dissertation analyzes the cultural politics of resilience in Kingston, Jamaica by examining resilience through its purported universal principles of adaptation and flexibility. On the one hand, mainstream development regimes conceptualize resilience as a necessary and positive attribute of economies, societies and cultures if we are to survive any number of disasters or disturbances. Accordingly, in Jamaican cultural and development policy resilience is championed as both a means and an end of development. On the other hand, critics of resilience see the new rollout of resilience projects as deepening neoliberalism, capitalism and new forms of governmentality, because resilience projects provide the terrain for new forms of securitization and surveillance practices. These scholars argue that resilience often forecloses the possibility of resisting that which threatens us. However, rather than dismissing resilience as solely a sign of domination and governmentality, this dissertation argues that resilience must be understood as far more ambiguous and complex than binaries such as subversion vs. neoliberalism and resistance vs. resilience allow. Overly simplistic dualities of this nature have dominated the scholarship thus far. This dissertation provides a close analysis of resilience in both multilateral and Jamaican government policy documents, while exploring the historical and contemporary production of resilience in the lives of marginalized populations. Through three sites within Kingston, Jamaica (dancehall and street dances, WMW-Jamaica, and the activist platform SO((U))L HQ), this dissertation demonstrates that “resilience” is best understood as an ambiguous site of power negotiations, social reproduction and survival in Jamaica today.
It is often precisely this ambiguous power of ordinary resilience that is capitalized on and exploited to the detriment of vulnerable groups. By showing at once creative negotiation and the reproduction of colonial capitalist social relations within NGO work, activism and cultural production, this dissertation demonstrates the complexity of resilience. Ultimately, it draws attention to the importance of studying spaces of cultural production in order to understand the power and limits of contemporary policy discourses and political economy.
Abstract:
This paper examines the social dynamics of electronic exchanges in the human services, particularly in social work. It focuses on the observable effects that email and texting have on the linguistic, relational and clinical rather than managerial aspects of the profession. It highlights how electronic communication is affecting professionals in their practice and learners as they become acculturated to social work. What are the gains and losses of the broad use of electronic devices in daily lay and professional, verbal and non-verbal communication? Will our current situation be seriously detrimental to the demeanor of future practitioners, their use of language, and their ability to establish close personal relationships? The paper analyzes social work linguistic and behavioral changes in light of the growth of electronic communication and offers a summary of merits and demerits viewed through a prism emerging from Baron’s (2000) analysis of human communication.
Abstract:
This article offers a critical reconstruction of Keynes’s view of the relationship between public spending, the interest rate, wages and unemployment, as formulated in his Treatise on Money. The paper argues that Keynes’s approach leads to economic policy proposals that emphasize the need for direct state intervention in the provision of goods and services. This conclusion is derived from a circuitist interpretation of his work.
Abstract:
In Marxist frameworks “distributive justice” depends on extracting value through a centralized state. Many new social movements—peer to peer economy, maker activism, community agriculture, queer ecology, etc.—take the opposite approach, keeping value in its unalienated form and allowing it to freely circulate from the bottom up. Unlike Marxism, there is no general theory for bottom-up, unalienated value circulation. This paper examines the concept of “generative justice” through an historical contrast between Marx’s writings and the indigenous cultures that he drew upon. Marx erroneously concluded that while indigenous cultures had unalienated forms of production, only centralized value extraction could allow the productivity needed for a high quality of life. To the contrary, indigenous cultures now provide a robust model for the “gift economy” that underpins open source technological production, agroecology, and restorative approaches to civil rights. Expanding Marx’s concept of unalienated labor value to include unalienated ecological (nonhuman) value, as well as the domain of freedom in speech, sexual orientation, spirituality and other forms of “expressive” value, we arrive at an historically informed perspective for generative justice.
Abstract:
The notion of sediment-transport capacity has been ingrained in geomorphological and related literature for over 50 years, although its earliest roots date back explicitly to Gilbert in fluvial geomorphology in the 1870s and implicitly to eighteenth- to nineteenth-century developments in engineering. Despite cross-fertilization between different process domains, there seem to have been independent inventions of the idea in aeolian geomorphology by Bagnold in the 1930s and in hillslope studies by Ellison in the 1940s. Here we review the invention and development of the idea of transport capacity in the fluvial, aeolian, coastal, hillslope, debris-flow, and glacial process domains. As these various developments have occurred, different definitions have been used, which makes transport capacity both a difficult concept to test and one that may lead to poor communication between those working in different domains of geomorphology. We argue that the original relation between the power of a flow and its ability to transport sediment can be challenged for three reasons. First, as sediment becomes entrained in a flow, the nature of the flow changes, so it is unreasonable to link the capacity of the water or wind only to the ability of the fluid to move sediment. Second, environmental sediment transport is complicated, and the range of processes involved in most movements means that simple relationships are unlikely to hold, not least because the movement of sediment often changes the substrate, which in turn affects the flow conditions. Third, the inherently stochastic nature of sediment transport means that any capacity relationships do not scale either in time or in space. Consequently, new theories of sediment transport are needed to improve understanding and prediction and to guide measurement and management of all geomorphic systems.
Abstract:
Land-use change is a major source of greenhouse gas emissions. Converting ecosystems with permanent natural vegetation to cropping systems with temporarily bare soil (e.g. after tillage before sowing) frequently increases greenhouse gas emissions and reduces carbon sequestration. Worldwide, cropping is expanding in both smallholder and agro-industrial systems, often into neighboring semi-arid to sub-humid rangeland ecosystems. This thesis examines trends of land-use change in the Borana rangelands of southern Ethiopia. Population growth, land privatization and the associated fencing, changing land-use policies, and increasing climate variability are driving rapid changes in the traditionally livestock-based pastoral systems. Based on a literature review of case studies in East African rangelands, a schematic model of the interactions between land use, greenhouse gas emissions and carbon sequestration was developed in this study. Using satellite data and household survey data, the type and extent of land-use and vegetation change between 1985 and 2011 were analyzed at five study sites (Darito/Yabelo district; Soda, Samaro, Haralo and Did Mega/all in Dire district). In Darito, cropland expanded by 12%, mostly at the expense of bushland. At the other sites cropland area remained relatively constant, but grassland vegetation increased by 16-28% while bushland decreased by 23-31%. Only at Haralo did bare, vegetation-free land also increase, by 13%. The factors driving cropland expansion were examined in more detail at the Darito site.
GPS data and cropping-history data from 108 fields on 54 farms were overlaid in a geographic information system (GIS) with thematic soil, rainfall and slope maps as well as a digital elevation model. Multiple linear regression identified slope and elevation as significant explanatory variables for the expansion of cropping into lower-lying areas, whereas soil type, distance to the seasonal river course and rainfall were not significant. The low coefficient of determination (R² = 0.154) indicates that further explanatory variables, not captured here, influence the direction of spatial cropland expansion. Scatter plots of field size and years of cultivation against elevation show an expansion of cropping into areas below 1620 m a.s.l. and an increase in field size (>3 ha) since 2000. Analysis of crop phenology over the year, combined with rainfall data and normalized difference vegetation index (NDVI) time series, served to identify periods of particularly high (green-up before harvest) or low (after tillage) plant biomass on cropland, in order to distinguish cropland and its expansion from other vegetation types by remote sensing. Using NDVI spectral profiles, cropland could be distinguished well from forest but less well from grassland and bushland. The coarse resolution (250 m) of the Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI data led to a mixed-pixel effect: a single pixel often contained several vegetation types in varying proportions, which hampered their discrimination. Developing a real-time monitoring system for cropland expansion would require higher-resolution NDVI data (e.g. multispectral bands, Hyperion EO-1 sensor) to better differentiate cropland from natural rangeland vegetation at small spatial scales. Developing and deploying such methods as decision-support tools for land- and resource-use planning could help reconcile the production and development goals of Borana land users with national climate-change mitigation efforts through increased carbon sequestration in rangelands.
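The multiple linear regression step described above (slope and elevation as predictors, R² reported as 0.154) can be sketched as follows. The data here are synthetic and the response variable is hypothetical; only the least-squares fitting and R² computation mirror the described analysis.

```python
import numpy as np

def fit_r2(X, y):
    """Ordinary least squares with intercept; returns coefficients and R^2."""
    A = np.column_stack([np.ones(len(y)), X])    # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return coef, 1.0 - ss_res / ss_tot

# Hypothetical data for 108 fields: slope (%) and elevation (m a.s.l.)
rng = np.random.default_rng(1)
slope = rng.uniform(0.0, 15.0, 108)
elevation = rng.uniform(1500.0, 1750.0, 108)
# Synthetic response with weak dependence plus noise, mimicking a low R^2
y = -0.01 * slope - 0.002 * elevation + rng.normal(0.0, 0.5, 108)
coef, r2 = fit_r2(np.column_stack([slope, elevation]), y)
```

A low R² like the one reported simply means most of the variance in the response remains unexplained by the two predictors, which is what the thesis concludes.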
Abstract:
In Germany the upscaling algorithm is currently the standard approach for evaluating the PV power produced in a region. This method involves spatially interpolating the normalized power of a set of reference PV plants to estimate the power production of another set of unknown plants. As little information on the performance of this method could be found in the literature, the first goal of this thesis is to analyze the uncertainty associated with it. It was found that the method can lead to large errors when the set of reference plants has different characteristics or weather conditions than the set of unknown plants, and when the set of reference plants is small. Based on these preliminary findings, an alternative method is proposed for calculating the aggregate power production of a set of PV plants. A probabilistic approach has been chosen, by which a power production is calculated at each PV plant from corresponding weather data. The probabilistic approach consists of evaluating the power for each frequently occurring value of the parameters and estimating the most probable value by averaging these power values weighted by their frequency of occurrence. The most frequent parameter sets (e.g. module azimuth and tilt angle) and their frequencies of occurrence have been assessed on the basis of a statistical analysis of the parameters of approx. 35,000 PV plants. It has been found that the plant parameters are statistically dependent on the size and location of the PV plants. Accordingly, separate statistical values have been assessed for 14 classes of nominal capacity and 95 regions in Germany (two-digit zip-code areas). The performance of the upscaling and probabilistic approaches has been compared on the basis of 15-minute power measurements from 715 PV plants provided by the German distribution system operator LEW Verteilnetz.
It was found that the error of the probabilistic method is smaller than that of the upscaling method when the number of reference plants is sufficiently large (>100 reference plants in the case study considered in this chapter). When the number of reference plants is limited (<50 reference plants for the considered case study), it was found that the proposed approach provides a noticeable gain in accuracy with respect to the upscaling method.
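The frequency-weighted averaging at the heart of the probabilistic method can be sketched as follows. The toy PV power model and the (tilt, azimuth) frequency table are illustrative assumptions, not the thesis's actual simulation model or the statistics derived from the 35,000-plant dataset.

```python
import numpy as np

def pv_power(irradiance_wm2, tilt_deg, azimuth_deg, p_nominal_kw):
    """Toy PV model: nominal power scaled by irradiance and a crude
    orientation factor (stand-in for a real PV simulation model)."""
    orientation = np.cos(np.radians(tilt_deg - 35.0)) * np.cos(np.radians(azimuth_deg))
    return p_nominal_kw * (irradiance_wm2 / 1000.0) * max(float(orientation), 0.0)

def probabilistic_power(irradiance_wm2, p_nominal_kw, param_freq):
    """Most probable power: average the power computed for each frequently
    occurring parameter set, weighted by its frequency of occurrence."""
    total = sum(param_freq.values())
    return sum(
        freq / total * pv_power(irradiance_wm2, tilt, az, p_nominal_kw)
        for (tilt, az), freq in param_freq.items()
    )

# Hypothetical (tilt, azimuth) frequencies for one capacity class and region
param_freq = {(30, 0): 50, (45, 0): 30, (30, 20): 20}
estimate = probabilistic_power(800.0, 10.0, param_freq)
```

In the thesis, one such frequency table would exist per capacity class and two-digit zip-code region, and the power model would be a full PV simulation rather than this placeholder.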
Abstract:
Cognitive radio (CR) is fast emerging as a promising technology that can meet the machine-to-machine (M2M) communication requirements for spectrum utilization and power control for the large number of machines/devices expected to be connected to the Internet of Things (IoT). Power control in a CR acting as a secondary user can be modelled with a non-cooperative game cost function to quantify and reduce its interference while occupying the same spectrum as the primary user, without adversely affecting the required quality of service (QoS) in the network. In this paper, a power loss exponent that factors in diverse operating environments for IoT is employed in the non-cooperative game cost function to quantify the required transmission power in the network. The approach would enable various CRs to transmit with lower power, thereby saving battery consumption, or to increase the number of secondary users, thereby using network resources efficiently.
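A distributed power-control iteration of the kind this abstract alludes to can be sketched as follows, assuming a simple path-loss model with power loss exponent alpha and a Foschini-Miljanac-style target-SINR update; the paper's own cost function and parameters are not reproduced here, so everything below is an illustrative stand-in.

```python
import numpy as np

def sinr(p, gains, noise, i):
    """SINR of user i: own received power over interference plus noise
    (all users are assumed to share one receiver for simplicity)."""
    interference = sum(gains[j] * p[j] for j in range(len(p)) if j != i)
    return gains[i] * p[i] / (interference + noise)

def power_control(distances_m, alpha=3.5, gamma_target=0.25, noise=1e-9,
                  p_max=1.0, iters=200):
    """Distributed target-SINR power control: each user scales its power
    by gamma_target / current SINR, capped at p_max. Channel gains follow
    a path-loss model with power loss exponent alpha; the target must be
    feasible for the user set for the iteration to converge."""
    gains = distances_m ** (-alpha)
    p = np.full(len(distances_m), 0.01)
    for _ in range(iters):
        p = np.minimum(p_max, np.array([
            p[i] * gamma_target / sinr(p, gains, noise, i)
            for i in range(len(p))
        ]))
    return p

# Three secondary users at different distances converge to the target SINR
powers = power_control(np.array([100.0, 150.0, 200.0]))
```

Each user reaches the target SINR with the minimum power that supports it, which is the sense in which such schemes save battery and admit more secondary users.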
Abstract:
The thesis is an investigation of the principle of least effort (Zipf 1949 [1972]). The principle is simple (all effort should be least) and universal (it governs the totality of human behavior). Since the principle is also functional, the thesis adopts a functional theory of language as its theoretical framework, i.e. Natural Linguistics. The explanatory system of Natural Linguistics posits that higher principles govern preferences, which, in turn, manifest themselves as concrete, specific processes in a given language. Therefore, the thesis’ aim is to investigate the principle of least effort on the basis of external evidence from English. The investigation falls into the three following strands: the investigation of the principle itself, the investigation of its application in articulatory effort and the investigation of its application in phonological processes. The structure of the thesis reflects the division of its broad aims. The first part of the thesis presents its theoretical background (Chapter One and Chapter Two), the second part of the thesis deals with application of least effort in articulatory effort (Chapter Three and Chapter Four), whereas the third part discusses the principle of least effort in phonological processes (Chapter Five and Chapter Six). Chapter One serves as an introduction, examining various aspects of the principle of least effort such as its history, literature, operation and motivation. It overviews various names which denote least effort, explains the origins of the principle and reviews the literature devoted to the principle of least effort in a chronological order. The chapter also discusses the nature and operation of the principle, providing numerous examples of the principle at work. 
It emphasizes the universal character of the principle with examples from both the linguistic field (low-level phonetic processes and language universals) and non-linguistic fields (physics, biology, psychology and the cognitive sciences), showing that the principle governs human behavior and choices. Chapter Two provides the theoretical background of the thesis in terms of its theoretical framework and discusses the terms used in the thesis’ title, i.e. hierarchy and preference. It justifies the selection of Natural Linguistics as the thesis’ theoretical framework by outlining its major assumptions and demonstrating its explanatory power. As far as the concepts of hierarchy and preference are concerned, the chapter provides their definitions and reviews their various understandings via decision theories and linguistic preference-based theories. Since the thesis investigates the principle of least effort in language and speech, Chapter Three considers the articulatory aspect of effort. It reviews the notion of easy and difficult sounds and discusses the concept of articulatory effort, overviewing its literature as well as various understandings in a chronological fashion. The chapter also presents the concept of articulatory gestures within the framework of Articulatory Phonology. The thesis’ aim is to investigate the principle of least effort on the basis of external evidence; therefore, Chapters Four and Six provide evidence in terms of three experiments, text message studies (Chapter Four) and phonological processes in English (Chapter Six). Chapter Four contains evidence for the principle of least effort in articulation on the basis of experiments. It describes the experiments in terms of their predictions and methodology. In particular, it discusses the adopted measure of effort established by means of the effort parameters as well as their status. The statistical methods of the experiments are also clarified.
The chapter reports on the results of the experiments, presenting them graphically, and discusses their relation to the tested predictions. Chapter Four establishes a hierarchy of speakers’ preferences with reference to articulatory effort (Figures 30, 31). The thesis investigates the principle of least effort in phonological processes; thus, Chapter Five is devoted to the discussion of phonological processes in Natural Phonology. The chapter explains the general nature and motivation of processes as well as the development of processes in child language. It also discusses the organization of processes in terms of their typology as well as the order in which processes apply. The chapter characterizes the semantic properties of processes and overviews Luschützky’s (1997) contribution to Natural Phonology with respect to processes, in terms of their typology and the incorporation of articulatory gestures into the concept of a process. Chapter Six investigates phonological processes. In particular, it addresses the issues of lenition/fortition definition and process typology by presenting the current approaches to process definitions and their typology. Since the chapter concludes that no coherent definition of lenition/fortition exists, it develops alternative lenition/fortition definitions. The chapter also revises the typology of phonological processes under effort management, which is an extended version of the principle of least effort. Chapter Seven concludes the thesis with a list of the concepts discussed, enumerates the proposals made in discussing them, and presents some questions for future research which have emerged in the course of the investigation. The chapter also specifies the extent to which the investigation of the principle of least effort is a meaningful contribution to phonology.
Abstract:
In the three narratives that make up The New York Trilogy – City of Glass, Ghosts and The Locked Room – Paul Auster deploys and deconstructs the conventional elements of the detective novel, developing a recurring investigation of the nature, function and meaning of language, but also of solitude, confinement and the problem of identity. Auster’s narrative production evokes traditional logocentric ghosts (such as presence, reality and truth), which echo the principle of the indissolubility of word and meaning, only to frustrate this identity through a textual orientation toward the marking of fictionality and the consequent reinforcement of illusory effects of signification. Auster alters these mechanisms, thwarting the reader’s expectations of the epilogue and of the textual transparency that a mimetic pact would presuppose. The resulting empty space gives the text a plurisignificative freedom that disperses all certainties and conveys the power of creating chaos. Beneath an appearance of narrative fluency, Paul Auster’s writing conceals a subversion of the basic premises of realist literature and of referential signs, in a fictionalized self-reflexive poetics about the structuring of imaginary universes.