886 results for rhetorical structure theory
Abstract:
Martin Heidegger is generally regarded as one of the most significant—if also the most controversial—philosophers of the 20th century. Most scholarly engagement with Heidegger’s thought on Modernity approaches his work with a special focus on either his critique of technology, or on his more general critique of subjectivity. This dissertation project attempts to elucidate Martin Heidegger’s diagnosis of modernity, and, by extension, his thought as a whole, from the neglected standpoint of his understanding of mathematics, which he explicitly identifies as the essence of modernity.
Accordingly, our project attempts to work through the development of Modernity, as Heidegger understands it, on the basis of what we call a “mathematical dialectic.” The basis of our analysis is that Heidegger’s understanding of Modernity, both on its own terms and in the context of his theory of history [Seinsgeschichte], is best understood in terms of the interaction between two essential, “mathematical” characteristics, namely, self-grounding and homogeneity. This project first investigates the mathematical qualities of these components of Modernity individually, and then attempts to trace the historical and philosophical development of Modernity on the basis of the interaction between these two components—an interaction that is, we argue, itself regulated by the structure of the mathematical, according to Heidegger’s understanding of the term.
The project undertaken here intends not only to serve the interpretive, scholarly function of elucidating Heidegger’s understanding of Modernity, but also to advance the larger aim of defending the prescience, structural coherence, and relevance of Heidegger’s diagnosis of Modernity as such.
Abstract:
Into the Bends of Time is a 40-minute work in seven movements for a large chamber orchestra with electronics, utilizing real-time computer-assisted processing of music performed by live musicians. The piece explores various combinations of interactive relationships between players and electronics, ranging from relatively basic processing effects to musical gestures achieved through stages of computer analysis, in which resulting sounds are crafted according to parameters of the incoming musical material. Additionally, some elements of interaction are multi-dimensional, in that they rely on the participation of two or more performers fulfilling distinct roles in the interactive process with the computer in order to generate musical material. Through processes of controlled randomness, several electronic effects introduce elements of chance into their realization so that no two performances of this work are exactly alike. The piece gets its name from the notion that real-time computer-assisted processing, in which sound pressure waves are transduced into electrical energy, converted to digital data, artfully modified, converted back into electrical energy and transduced into sound waves, represents a “bending” of time.
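The processing chain and the role of controlled randomness described above can be illustrated with a minimal offline sketch in Python; the effect, parameter ranges, and test signal below are illustrative assumptions, not the actual patch used in the piece.

```python
import numpy as np

rng = np.random.default_rng()

def controlled_random_delay(x: np.ndarray, sr: int,
                            delay_range=(0.1, 0.4),
                            feedback_range=(0.2, 0.5)) -> np.ndarray:
    """Apply a feedback delay whose parameters are drawn anew on each run,
    so no two renderings of the same input are exactly alike."""
    delay_s = rng.uniform(*delay_range)        # chance element: delay time
    feedback = rng.uniform(*feedback_range)    # chance element: feedback amount
    d = int(delay_s * sr)
    y = np.copy(x).astype(float)
    for n in range(d, len(y)):
        y[n] += feedback * y[n - d]
    return y / np.max(np.abs(y))               # normalize before conversion back to audio

# A stand-in for an incoming live signal: one second of a 440 Hz tone.
sr = 44_100
t = np.arange(sr) / sr
live_input = np.sin(2 * np.pi * 440 * t)
processed = controlled_random_delay(live_input, sr)
```

In an actual real-time setting the same idea would run on successive input buffers rather than a complete recorded signal.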
The Bill Evans Trio featuring bassist Scott LaFaro and drummer Paul Motian is widely regarded as one of the most important and influential piano trios in the history of jazz, lauded for its unparalleled level of group interaction. Most analyses of Bill Evans’ recordings, however, focus on his playing alone and fail to take group interaction into account. This paper examines one performance in particular, of Victor Young’s “My Foolish Heart” as recorded in a live performance by the Bill Evans Trio in 1961. In Part One, I discuss Steve Larson’s theory of musical forces (expanded by Robert S. Hatten) and its applicability to jazz performance. I examine other recordings of ballads by this same trio in order to draw observations about normative ballad performance practice. I discuss meter and phrase structure and show how the relationship between the two is fixed in a formal structure of repeated choruses. I then develop a model of perpetual motion based on the musical forces inherent in this structure. In Part Two, I offer a full transcription and close analysis of “My Foolish Heart,” showing how elements of group interaction work with and against the musical forces inherent in the model of perpetual motion to achieve an unconventional, dynamic use of double-time. I explore the concept of a unified agential persona and discuss its role in imparting the song’s inherent rhetorical tension to the instrumental musical discourse.
Abstract:
As the world population grows past seven billion people and global challenges persist, including resource availability, biodiversity loss, climate change and human well-being, a new science is required that can address the integrated nature of these challenges and the multiple scales on which they are manifest. Sustainability science has emerged to fill this role. In the fifteen years since it was first called for in the pages of Science, it has rapidly matured; however, its place in the history of science and the way it is practiced today must be continually evaluated. In Part I, two chapters address this theoretical and practical grounding. Part II transitions to the applied practice of sustainability science in addressing the urban heat island (UHI) challenge, wherein urban areas are warmer than their surrounding rural environs. The UHI has become increasingly important within the study of earth sciences, given the increased focus on climate change and the fact that the majority of humans now live in urban areas.
Chapter 2 argues for a novel contribution to the historical context of sustainability. Sustainability, as a concept characterizing the relationship between humans and nature, emerged in the mid-to-late 20th century as a response to findings also used to characterize the Anthropocene. Evidence is provided suggesting that sustainability, emerging from the human-nature relationships that came before it, was enabled by technology and a reorientation of world-view, and that it is unique in its global boundary, systematic approach, and ambition for both well-being and the continued availability of resources and Earth system function. Sustainability is, further, an ambition with wide appeal, making it one of the first normative concepts of the Anthropocene.
Despite its widespread emergence and adoption, sustainability science continues to suffer from definitional ambiguity within the academy. In Chapter 3, a review of efforts to provide direction and structure to the science reveals a continuum of approaches anchored at either end by differing visions of how the science interfaces with practice (solutions). At one end, basic science of societally defined problems informs decisions about possible solutions and their application. At the other end, applied research directly affects the options available to decision makers. While this dichotomy is clear in the literature, survey data suggest that it is far less apparent in the minds of practitioners.
In Chapter 4, the UHI is first addressed at the synoptic, mesoscale level. Urban climate is the most immediate manifestation of the warming global climate for the majority of people on earth. Nearly half of those people live in small to medium-sized cities, an understudied scale in urban climate research. Widespread characterization would be useful to decision makers in planning and design. Using a multi-method approach, the mesoscale UHI in the study region is characterized and its secular trend over the last sixty years evaluated. Under isolated, ideal conditions, the findings indicate a UHI of 5.3 ± 0.97 °C in the study area, the magnitude of which is growing over time.
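A minimal sketch of the kind of calculation implied here: UHI intensity computed as the urban-minus-rural temperature difference, with a least-squares secular trend. The data are synthetic stand-ins and the simple linear trend is an assumption, not the dissertation's actual multi-method workflow.

```python
import numpy as np

# Illustrative only: station values and the trend model are assumptions.
years = np.arange(1955, 2015)                      # a sixty-year record
rural_t = 14.0 + 0.01 * (years - 1955) + np.random.default_rng(0).normal(0, 0.3, years.size)
urban_t = rural_t + 3.5 + 0.03 * (years - 1955)    # synthetic urban series with a growing excess

uhi_intensity = urban_t - rural_t                  # UHI = urban minus rural air temperature
slope, intercept = np.polyfit(years, uhi_intensity, 1)
print(f"mean UHI: {uhi_intensity.mean():.2f} °C, trend: {slope * 10:.2f} °C per decade")
```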
Although urban heat islands (UHI) are well studied, there remain no panaceas for local-scale mitigation and adaptation; continued attention to characterization of the phenomenon in urban centers of different scales around the globe is therefore required. In Chapter 5, a local-scale analysis of the canopy layer and surface UHI in a medium-sized city in North Carolina, USA is conducted using multiple methods, including stationary urban sensors, mobile transects and remote sensing. Focusing on the ideal conditions for UHI development during an anticyclonic summer heat event, the study observes a range of UHI intensities depending on the method of observation: 8.7 °C from the stationary urban sensors; 6.9 °C from mobile transects; and 2.2 °C from remote sensing. Additional attention is paid to the diurnal dynamics of the UHI and its correlation with vegetation indices, dewpoint and albedo. Evapotranspiration is shown to drive these dynamics in the study region.
Finally, recognizing that a bridge must be established between the physical science community studying the urban heat island (UHI) effect and the planning community and decision makers implementing urban form and development policies, Chapter 6 evaluates multiple urban form characterization methods. The methods evaluated include local climate zones (LCZ), national land cover database (NLCD) classes and urban cluster analysis (UCA), assessed for their utility in describing the distribution of the UHI based on three standard observation types: 1) fixed urban temperature sensors, 2) mobile transects and 3) remote sensing. Bivariate, regression and ANOVA tests are used to conduct the analyses. Findings indicate that the NLCD classes are best correlated with UHI intensity and distribution in the study area. Further, while the UCA method is not directly useful, the variables included in the method are predictive based on regression analysis, so the potential for better model design exists. Land cover variables including albedo, impervious surface fraction and pervious surface fraction are found to dominate the distribution of the UHI in the study area regardless of observation method.
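A minimal sketch of how such ANOVA and regression tests might be set up; the class labels, variables, and synthetic data below are hypothetical and only illustrate the analysis pattern, not the study's actual dataset or model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical observations: each row is a sensor location with its UHI
# intensity, an NLCD-style land cover class, and land cover variables.
rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "uhi": rng.normal(4.0, 1.5, n),
    "lc_class": rng.choice(["developed_high", "developed_low", "forest"], n),
    "impervious_frac": rng.uniform(0, 1, n),
    "albedo": rng.uniform(0.05, 0.35, n),
})

# One-way ANOVA: does mean UHI intensity differ across land cover classes?
anova_fit = smf.ols("uhi ~ C(lc_class)", data=df).fit()
print(anova_lm(anova_fit))

# Regression: which continuous land cover variables predict UHI intensity?
reg_fit = smf.ols("uhi ~ impervious_frac + albedo", data=df).fit()
print(reg_fit.params)
```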
Chapter 7 provides a summary of findings and offers a brief analysis of their implications both for the scientific discourse generally and for the study area specifically. In general, the work undertaken does not achieve the full ambition of sustainability science; additional work is required to translate findings into practice and to more fully evaluate their adoption. The implications for planning and development in the local region are addressed in the context of a major light-rail infrastructure project, including several systems-level considerations such as human health and development. Finally, several avenues for future work are outlined. Within the theoretical development of sustainability science, these pathways include more robust evaluations of the theoretical and actual practice. Within the UHI context, they include development of an integrated urban form characterization model, application of the study methodology in other geographic areas and at different scales, and use of novel experimental methods including distributed sensor networks and citizen science.
Abstract:
This thesis presents a theory of formal function and phrase structure in contemporary music, a theory that can be used both as an analytical tool and as a means of creating new works. Two current theoretical concepts help clarify phrase structure: Christopher Hasty's temporal projections and William Caplin's theory of formal functions, which includes the concept of tight-knit versus loose formal organization. Temporal projections are made perceptible through emphasis on secondary parameters such as playing style, articulation, and timbre. Sections with tight-knit formal organization have clear temporal projections, created by the juxtaposition of distinct motives, generally in the form of a two-part basic idea. These projections organize the music into presentation phrases, continuation phrases, and finally, at pivotal formal moments, cadential phrases. Sections with looser organization tend to present less clear projections and harmonic motion, and less motivic uniformity. The phrase structure of three late solo pieces by Pierre Boulez is analyzed: Anthèmes I for violin (1991-1992) and two piano pieces, Incises (2001) and une page d'éphéméride (2005). The ideas proposed in this document follow from an analysis of these works and have strongly influenced my own compositions, in particular Lucretia Overture for orchestra and 4 Impromptus for flute, soprano saxophone, and piano, which are also analyzed in detail. Several additional compositional techniques can be discerned in these two works, including the use of melodic sequences to control harmonic rhythm; passages composed of several musical layers, each with its own distinct phrase structure; and the loosening of the formal organization of recurring material. Finally, the composition of several other, earlier works gave rise to techniques used in these two pieces, and these are briefly discussed in the final section.
Abstract:
This thesis proposes the emergence of a poetics of the in-between in experimental literature, following its developments from the mid-twentieth century to the beginning of the twenty-first. This notion of a poetic in-between is grounded in a theory of the neutral (Barthes, Blanchot) as that which lies beyond or between opposition and mediation. The first chapter traces the concept of monotony in aesthetic theory from the Romantic period, where it is seen as the antithesis of variability or poetic tension, to the emergence of conceptual art in the twentieth century, where it is deployed without interruption. The chapter then examines the relation of monotony to melancholy through an analysis of “The Anatomy of Monotony,” a poem by Wallace Stevens from the collection Harmonium, and Inger Christensen's poetic work alphabet. The second chapter addresses the realization of a poetics of the in-between through an analysis of four poetic works that revisit the use of the paratextual book index: the index to Louis Zukofsky's long poem “A”, Alan Halsey's “Index to Shelley's Death,” which appears at the end of The Text of Shelley's Death, Lisa Robertson's Cinema of the Present, and Carolyn Bergvall's multimedia work Via. The third chapter traces the politics of neutrality in translation theory. Against the oppositional logic of the original versus the translation, it hypothetically proposes the realization of a third text, or “in-between,” which also serves to disturb the familiar narratives of appropriation, absorption, and assimilation that erase the difference of the writing subject. It examines the hybrid work Secession with Insecession by Chus Pato and Erin Moure as an example of a poetics of the in-between. For both Maurice Blanchot and Roland Barthes, the neutral represents a potential third term that challenges the paradigm of oppositional thought. For Blanchot, the neutral is difference brought to the point of indifference and the opacity of transparency, while Barthes's desire for the neutral is a lyrical utopia situated beyond the constraints of purpose and marking. The conclusion examines how the neutral corresponds to the conditions of freedom governing the principle of creativity in poetry as the act of making without intention or reason.
Abstract:
Sadistic sexual offenders are generally described as a distinct clinical entity committing serious offenses. Yet the very notion of sexual sadism poses significant problems, among them problems of validity and reliability. Perceived as an illness that one either has or does not have, sadism has been studied as if sadists were fundamentally different. At present, a growing body of work suggests that the majority of psychological disorders present as differences of intensity (dimensions) rather than differences in kind (taxa). Even though the medical conception still prevails with respect to sexual sadism, several authors have raised the idea that it might be better conceptualized using a dimensional approach. In parallel, our knowledge of the factors contributing to the development of sexual sadism is limited and rests on weak empirical support. To date, very few studies have examined the factors leading to the development of sexual sadism, and even fewer have attempted to validate their theories. Moreover, our knowledge comes mostly from case studies of sexual murderers, a very particular subgroup of offenders frequently motivated by sadistic sexual interests. To our knowledge, no study has yet proposed a developmental model dealing specifically with sexual sadism. Yet identifying the factors linked to the development of sexual sadism is essential to our understanding of it and to the design of effective intervention strategies. This thesis is part of an effort to clarify the concept of sexual sadism. More specifically, we are interested in its latent structure, its measurement, and its developmental origins. Using a sample of 514 sexual offenders assessed at the Massachusetts Treatment Center, the viability of a dimensional conception of sexual sadism is tested with taxometric analyses, which allow the latent structure of a construct to be studied. In a second step, using Rasch analyses and two-parameter item response theory analyses, we develop the MTC Sadism Scale (MTCSS), a dimensional measure of sexual sadism. In a third and final step, a developmental model is built using structural equation modeling. This thesis contributes to the clarification of the concept of sexual sadism. Clarifying its latent structure and developmental factors will help identify the research designs best able to capture its essential aspects, and will make it possible to identify the factors for which intervention is most appropriate to reduce recidivism or its severity.
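For orientation, the item response models mentioned above have the following standard textbook forms; the notation is generic and not drawn from the thesis itself:

$$P(X_{ij}=1\mid\theta_i)=\frac{\exp(\theta_i-b_j)}{1+\exp(\theta_i-b_j)}\ \text{(Rasch)},\qquad P(X_{ij}=1\mid\theta_i)=\frac{1}{1+\exp\!\big(-a_j(\theta_i-b_j)\big)}\ \text{(2PL)},$$

where $\theta_i$ is the latent trait level of offender $i$, $b_j$ the difficulty (severity) of sadism indicator $j$, and, in the two-parameter model, $a_j$ its discrimination.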
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
One of the most disputed matters in the theory of finance has been the theory of capital structure. The seminal contributions of Modigliani and Miller (1958, 1963) gave rise to a multitude of studies and debates. Since the initial spark, the financial literature has offered two competing theories of the financing decision: the trade-off theory and the pecking order theory. The trade-off theory suggests that firms have an optimal capital structure balancing the benefits and costs of debt. The pecking order theory approaches firm capital structure from an information asymmetry perspective and assumes a hierarchy of financing, with firms using internal funds first, followed by debt and, as a last resort, equity. This thesis analyses the trade-off and pecking order theories and their predictions on panel data consisting of 78 Finnish firms listed on the OMX Helsinki stock exchange. Estimations are performed for the period 2003–2012. The data are collected from the Datastream system and consist of financial statement data. A number of capital structure determinants are identified: firm size, profitability, firm growth opportunities, risk, asset tangibility and taxes, speed of adjustment and financial deficit. Regression analysis is used to examine the effects of these firm characteristics on capital structure, with the regression models formed on the basis of the relevant theories. The general capital structure model is estimated with a fixed effects estimator. Dynamic models play an important role in several areas of corporate finance, but with the combination of fixed effects and lagged dependent variables the estimation is more complicated; a dynamic partial adjustment model is therefore estimated using the Arellano and Bond (1991) first-differenced generalized method of moments estimator as well as ordinary least squares and fixed effects estimators. The results for Finnish listed firms show support for the predicted effects of profitability, firm size and non-debt tax shields. No conclusive support for the pecking order theory is found; however, the effect of pecking order behaviour cannot be fully ignored, and it is concluded that, rather than being substitutes, the trade-off and pecking order theories appear to complement each other. For the partial adjustment model, the results show that Finnish listed firms adjust towards their target capital structure at a speed of 29% per year, measured using the book debt ratio.
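As a point of reference, the partial adjustment mechanism described above is conventionally written as follows; the notation is generic rather than the thesis's own:

$$D_{i,t}-D_{i,t-1}=\lambda\,\big(D^{*}_{i,t}-D_{i,t-1}\big)+\varepsilon_{i,t},$$

where $D_{i,t}$ is firm $i$'s book debt ratio in year $t$, $D^{*}_{i,t}$ its target ratio (typically modeled as a linear function of the firm characteristics listed above), and $\lambda$ the speed of adjustment; the reported estimate corresponds to $\lambda \approx 0.29$ per year.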
Abstract:
We present solutions of the Yang–Mills equation on cylinders $\mathbb{R}\times G/H$ over coset spaces of odd dimension $2m+1$ with Sasakian structure. The gauge potential is assumed to be $SU(m)$-equivariant, parameterized by two real, scalar-valued functions. Yang–Mills theory with torsion in this setup reduces to the Newtonian mechanics of a point particle moving in $\mathbb{R}^2$ under the influence of an inverted potential. We analyze the critical points of this potential and present an analytic as well as several numerical finite-action solutions. Apart from the Yang–Mills solutions that constitute $SU(m)$-equivariant instanton configurations, we construct periodic sphaleron solutions on $S^1\times G/H$ and dyon solutions on $i\mathbb{R}\times G/H$.
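Schematically, and in generic notation not taken from the abstract, the reduction described above replaces the field equations by Newton's equations for the two scalar functions $(\phi_1,\phi_2)$ parameterizing the gauge potential, i.e. motion in the inverted potential $-V$ (any additional torsion-induced terms are suppressed here):

$$\ddot{\phi}_a=+\frac{\partial V}{\partial\phi_a},\qquad a=1,2,$$

so that, heuristically, finite-action configurations correspond to trajectories that begin and end at critical points of $V$.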
Abstract:
This PhD thesis contains three main chapters on macro finance, with a focus on the term structure of interest rates and applications of state-of-the-art Bayesian econometrics. Except for Chapter 1 and Chapter 5, which set out the general introduction and conclusion, each of the chapters can be considered as a standalone piece of work. In Chapter 2, we model and predict the term structure of US interest rates in a data-rich environment. We allow the model dimension and parameters to change over time, accounting for model uncertainty and sudden structural changes. The proposed time-varying parameter Nelson-Siegel Dynamic Model Averaging (DMA) model predicts yields better than standard benchmarks. DMA performs better because it incorporates more macro-finance information during recessions. The proposed method allows us to estimate plausible real-time term premia, whose countercyclicality weakened during the financial crisis. Chapter 3 investigates global term structure dynamics using a Bayesian hierarchical factor model augmented with macroeconomic fundamentals. More than half of the variation in the bond yields of seven advanced economies is due to global co-movement. Our results suggest that global inflation is the most important factor among global macro fundamentals. Non-fundamental factors are essential in driving global co-movements, and are closely related to sentiment and economic uncertainty. Lastly, we analyze asymmetric spillovers in global bond markets connected to diverging monetary policies. Chapter 4 proposes a no-arbitrage framework of term structure modeling with learning and model uncertainty. The representative agent considers parameter instability, as well as uncertainty about learning speed and model restrictions. The empirical evidence shows that, apart from observational variance, parameter instability is the dominant source of predictive variance when compared with uncertainty about learning speed or model restrictions. When accounting for ambiguity aversion, the out-of-sample predictability of excess returns implied by the learning model can be translated into significant and consistent economic gains over the Expectations Hypothesis benchmark.
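For context, the factor structure underlying dynamic Nelson-Siegel models is recalled here in its standard textbook form (this is the generic specification, not necessarily the exact one used in the chapter):

$$y_t(\tau)=L_t+S_t\,\frac{1-e^{-\lambda\tau}}{\lambda\tau}+C_t\left(\frac{1-e^{-\lambda\tau}}{\lambda\tau}-e^{-\lambda\tau}\right),$$

where $y_t(\tau)$ is the yield of maturity $\tau$ at time $t$, $L_t$, $S_t$ and $C_t$ are level, slope and curvature factors, and $\lambda$ governs the decay of the loadings; dynamic model averaging then allows the set of macro-finance predictors and the parameters themselves to change over time.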
Abstract:
We show that the multiscale entanglement renormalization ansatz (MERA) can be reformulated in terms of a causality constraint on discrete quantum dynamics. This causal structure is that of de Sitter space with a flat space-like boundary, where the volume of a spacetime region corresponds to the number of variational parameters it contains. This result clarifies the nature of the ansatz, and suggests a generalization to quantum field theory. It also constitutes an independent justification of the connection between MERA and hyperbolic geometry, which was proposed as a concrete implementation of the AdS/CFT correspondence.
Abstract:
We study the relations of shift equivalence and strong shift equivalence for matrices over a ring $\mathcal{R}$, and establish a connection between these relations and algebraic K-theory. We utilize this connection to obtain results in two areas where the shift and strong shift equivalence relations play an important role: the study of finite group extensions of shifts of finite type, and the Generalized Spectral Conjectures of Boyle and Handelman for nonnegative matrices over subrings of the real numbers. We show that the refinement of the shift equivalence class of a matrix $A$ over a ring $\mathcal{R}$ by strong shift equivalence classes over the ring is classified by a quotient $NK_{1}(\mathcal{R}) / E(A,\mathcal{R})$ of the algebraic K-group $NK_{1}(\mathcal{R})$. We use the K-theory of non-commutative localizations to show that in certain cases the subgroup $E(A,\mathcal{R})$ must vanish, including the case in which $A$ is invertible over $\mathcal{R}$. We use the K-theory connection to clarify the structure of algebraic invariants for finite group extensions of shifts of finite type. In particular, we give a strong negative answer to a question of Parry, who asked whether the dynamical zeta function determines, up to finitely many topological conjugacy classes, the extensions by a finite group $G$ of a fixed mixing shift of finite type. We apply the K-theory connection to prove the equivalence of a strong and a weak form of the Generalized Spectral Conjecture of Boyle and Handelman for primitive matrices over subrings of $\mathbb{R}$. We construct explicit matrices whose class in the algebraic K-group $NK_{1}(\mathcal{R})$ is non-zero for certain rings $\mathcal{R}$ motivated by applications. We also study the possible dynamics of the restriction of a homeomorphism of a compact manifold to an isolated zero-dimensional set. We prove that for $n \ge 3$ every compact zero-dimensional system can arise as an isolated invariant set for a homeomorphism of a compact $n$-manifold. In dimension two, we provide obstructions and examples.
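For reference, the two relations studied here can be recalled in their standard form (generic notation, not quoted from the thesis): square matrices $A$ and $B$ over $\mathcal{R}$ are elementary strong shift equivalent if $A=RS$ and $B=SR$ for some matrices $R,S$ over $\mathcal{R}$; strong shift equivalence is the transitive closure of this elementary relation; and $A$ and $B$ are shift equivalent with lag $\ell$ if there exist matrices $R,S$ over $\mathcal{R}$ satisfying

$$AR=RB,\qquad SA=BS,\qquad A^{\ell}=RS,\qquad B^{\ell}=SR.$$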
Abstract:
A detailed non-equilibrium state diagram of shape-anisotropic particle fluids is constructed. The effects of particle shape are explored using Naive Mode Coupling Theory (NMCT) and a single-particle Non-linear Langevin Equation (NLE) theory. The dynamical behavior of non-ergodic fluids is discussed. We employ a rotationally frozen approach to NMCT in order to determine a transition to center-of-mass (translational) localization. Both ideal and kinetic glass transitions are found to be highly shape dependent, and uniformly increase with particle dimensionality. The glass transition volume fraction of quasi 1- and 2-dimensional particles falls monotonically with the number of sites (aspect ratio), while 3-dimensional particles display a non-monotonic dependence of glassy vitrification on the number of sites. Introducing interparticle attractions results in a far more complex state diagram. The ideal non-ergodic boundary shows a glass-fluid-gel re-entrance previously predicted for spherical particle fluids. The non-ergodic region of the state diagram presents qualitatively different dynamics in different regimes, distinguished by the different behaviors of the NLE dynamic free energy. The caging-dominated, repulsive glass regime is characterized by long localization lengths and distant barrier locations, dictated by repulsive hard core interactions, while the bonding-dominated gel region has short localization lengths (commensurate with the attraction range) and nearby barrier locations. There exists a small region of the state diagram in which the dynamic free energy displays both glassy and gel localization lengths. A much larger region of phase space (high volume fraction and high attraction strength) is characterized by short, gel-like localization lengths and distant barrier locations. This region is called the attractive glass and represents a two-step relaxation process whereby a particle first breaks attractive physical bonds and then escapes its topological cage. The dynamic fragility of these fluids is highly particle-shape dependent: it increases with particle dimensionality and falls with aspect ratio for quasi 1- and 2-dimensional particles. An ultralocal limit analysis of the NLE theory predicts universalities in the behavior of relaxation times and elastic moduli. The equilibrium phase diagrams of chemically anisotropic Janus spheres and Janus rods are calculated employing a mean field Random Phase Approximation; the calculations for Janus rods are corroborated by the full liquid state Reference Interaction Site Model theory. The Janus particles consist of attractive and repulsive regions, and both rods and spheres display rich phase behavior. The phase diagrams of these systems display fluid, macrophase-separated, attraction-driven microphase-separated, repulsion-driven microphase-separated and crystalline regimes. Macrophase separation is predicted in highly attractive, low volume fraction systems. Attraction-driven microphase separation is characterized by long-length-scale divergences, where the ordering length scale determines the microphase-ordered structures. The ordering length scale of repulsion-driven microphase separation is determined by the repulsive range. At high volume fractions, particles forgo the enthalpic considerations of attractions and repulsions to satisfy hard core constraints and maximize vibrational entropy. This results in ordering at the site length scale in rods and at the sphere length scale in Janus spheres, i.e., crystallization.
A change in the Janus balance of both rods and spheres results in quantitative changes in spinodal temperatures and in the position of phase boundaries. However, a change in the block sequence of Janus rods causes qualitative changes in the type of microphase-ordered state and induces prominent features (such as the Lifshitz point) in the phase diagrams of these systems. A detailed study of the number of nearest neighbors in Janus rod systems reflects a deep connection between this local measure of structure and the structure factor, which represents the most global measure of order.