894 results for Moduli in modern mapping theory
Abstract:
This paper reports findings on features of task-specific reformulation observed in university students midway through the Psychology degree course (N=58) and in a reference group of students from the degree courses in Modern Languages, Spanish and Library Studies (N=33) at the National University of La Plata (Argentina). Three types of reformulation were modeled: summary reformulation, comprehensive reformulation and productive reformulation. The study was based on a corpus of 621 reformulations rendered from different kinds of text. The versions obtained were categorised according to the following criteria: presence or absence of normative, morphosyntactic and semantic difficulties. Findings show that problems arise particularly with paraphrase and summary writing. Observation showed difficulties concerning punctuation, text cohesion and coherence, and semantic distortion or omission as regards extracting and/or substituting gist, with limited lexical resources and confusion as to suitability of style/register in writing. The findings in this study match those of earlier, more comprehensive research on the issue and report on problems experienced by a significant number of university students when interacting with both academic texts and others of a general nature. Moreover, they raise questions, on the one hand, as to the nature of such difficulties, which appear to be production-related problems that indirectly account for inadequate text comprehension, and, on the other hand, as to the features of university tuition when it comes to text handling.
Abstract:
We examine the link between organic matter degradation, anaerobic methane oxidation (AMO), and sulfate depletion and explore how these processes potentially influence dolomitization. We determined rates and depths of AMO and dolomite formation for a variety of organic-rich sites along the west African Margin using data from Ocean Drilling Program (ODP) Leg 175. Rates of AMO are calculated from the diffusive fluxes of CH4 and SO4, and rates of dolomite formation are calculated from the diffusive flux of Mg. We find that the rates of dolomite formation are relatively constant regardless of the depth at which it is forming, indicating that the diffusive fluxes of Mg and Ca are not limiting. Based on the calculated log IAP values, log Ksp values for dolomite were found to range narrowly between -16.1 and -16.4. Dolomite formation is controlled in part by competition between AMO and methanogenesis, which controls the speciation of dissolved CO2. AMO increases the concentration of CO3^2- through sulfate reduction, favoring dolomite formation, while methanogenesis increases the pCO2 of the pore waters, inhibiting dolomite formation. By regulating the pCO2 and alkalinity, methanogenesis and AMO can regulate the formation of dolomite in organic-rich marine sediments. In addition to providing a mechanistic link between AMO and dolomite formation, our findings provide a method by which the stability constant of dolomite can be calculated in modern sediments and allow prediction of the regions and depth domains in which dolomite may be forming.
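As a worked illustration of the two calculations this abstract describes, the hedged Python sketch below computes a Fick's-law diffusive flux and a dolomite saturation index (log IAP - log Ksp). The log Ksp of -16.2 sits within the paper's reported range; the porosity, diffusion coefficient, concentration gradient, and ion activities are illustrative placeholders, not values from the study.

```python
# Hedged sketch: Fick's-law diffusive flux and dolomite saturation index.
# All numerical values are illustrative placeholders, not data from the study.
import math

def diffusive_flux(d_conc, d_depth, diffusion_coeff, porosity):
    """Fick's first law for pore waters: J = -phi * D * dC/dz.
    d_conc in mol/m^3, d_depth in m, D in m^2/s -> flux in mol m^-2 s^-1."""
    return -porosity * diffusion_coeff * (d_conc / d_depth)

# Illustrative sulfate gradient: 28 mol/m^3 decrease over 20 m of sediment.
j_so4 = diffusive_flux(d_conc=-28.0, d_depth=20.0,
                       diffusion_coeff=5e-10, porosity=0.7)

def dolomite_saturation_index(a_ca, a_mg, a_co3, log_ksp=-16.2):
    """SI = log IAP - log Ksp for CaMg(CO3)2; positive favors precipitation."""
    log_iap = math.log10(a_ca) + math.log10(a_mg) + 2 * math.log10(a_co3)
    return log_iap - log_ksp

# Illustrative pore-water ion activities (mol/kg).
print(f"SO4 flux into AMO zone: {j_so4:.3e} mol m^-2 s^-1")
print(f"Dolomite SI: {dolomite_saturation_index(5e-3, 3e-2, 1e-5):+.2f}")
```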
Abstract:
Lower ocean crust is primarily gabbroic, although 1-2% felsic igneous rocks, referred to collectively as plagiogranites, occur locally. Recent experimental evidence suggests that plagiogranite magmas can form by hydrous partial melting of gabbro triggered by seawater-derived fluids, and thus they may indicate early, high-temperature hydrothermal fluid circulation. To explore seawater-rock interaction prior to and during the genesis of plagiogranite and other late-stage magmas, oxygen-isotope ratios preserved in igneous zircon have been measured by ion microprobe. A total of 197 zircons from 43 plagiogranite, evolved gabbro, and hydrothermally altered fault rock samples have been analyzed. Samples originate primarily from drill core acquired during Ocean Drilling Program and Integrated Ocean Drilling Program operations near the Mid-Atlantic and Southwest Indian Ridges. With the exception of rare, distinctively luminescent rims, all zircons from ocean crust record remarkably uniform δ18O with an average value of 5.2 ± 0.5 per mil (2SD). The average δ18O(Zrc) would be in magmatic equilibrium with unaltered MORB [δ18O(WR) ~5.6-5.7 per mil], and is consistent with the previously determined value for equilibrium with the mantle. The narrow range of measured δ18O values is predicted for zircon crystallization from variable parent melt compositions and temperatures in a closed system, and provides no indication of any interaction between altered rocks or seawater and the evolved parent melts. If plagiogranite forms by hydrous partial melting, the uniform mantle-like δ18O(Zrc) requires melting and zircon crystallization prior to significant water-rock interaction that would alter the protolith δ18O. Zircons from ocean crust have been proposed by multiple workers as a tectonic analog for >3.9 Ga detrital zircons from the earliest (Hadean) Earth. However, zircons from ocean crust are readily distinguished geochemically from zircons formed in continental crustal environments. Many of the >3.9 Ga zircons have mildly elevated δ18O (6.0-7.5 per mil), but such values have not been identified in any zircons from the large sample suite examined here. The difference in δ18O, in combination with newly acquired lithium concentrations and published trace element data, clearly shows that the >3.9 Ga detrital zircons did not originate by processes analogous to those in modern mid-ocean ridge settings.
Abstract:
Vast areas of the Tibetan Plateau are covered by alpine sedge mats consisting of different species of the genus Kobresia. These mats have topsoil horizons rich in rhizogenic organic matter, which creates turfs. As the turfs have recently been affected by a complex destruction process, knowledge concerning their soil properties, age and pedogenesis is needed. In the core area of Kobresia pygmaea mats around Nagqu (central Tibetan Plateau, ca. 4500 m a.s.l.), four profiles were subjected to pedological, paleobotanical and geochronological analyses concentrating on soil properties, phytogenic composition and dating of the turf. The turf of both dry K. pygmaea sites and wet Kobresia schoenoides sites is characterised by an enrichment of living (the dominant portion) and dead root biomass. In terms of humus forms, K. pygmaea turfs can be classified as Rhizomulls mainly developed from Cambisols. Wet-site K. schoenoides turfs, however, can be classified as Rhizo-Hydromors developed from Histic Gleysols. At the dry sites studied, the turnover of soil organic matter is controlled by a non-permafrost cold thermal regime. Below-ground remains from sedges are the most frequent macroremains in the turf. Only a few pollen types of vascular plants occur, predominantly originating from sedges and grasses. Large amounts of microscopic charcoal (indeterminate) are present. Macroremains and pollen extracted from the turfs predominantly have negative AMS 14C ages, giving evidence of modern turf genesis. Bulk-soil datings from the lowermost part of the turfs have a Late Holocene age comprising the last ca. 2000 years. The development of K. pygmaea turfs was most probably caused by an anthropo(zoo)genetically initiated growth of sedge mats replacing former grass-dominated vegetation ('steppe'). Thus the turfs result from the transformation of pre-existing topsoils through a secondary penetration and accumulation of roots. K. schoenoides turfs, however, are characterised by a combined process of peat formation and root penetration/accumulation, probably representing a (quasi-)natural wetland vegetation.
Uranium and radioactive isotopes in bottom sediments and Fe-Mn nodules and crusts of seas and oceans
Abstract:
The book considers the main stages of the sedimentary cycle of uranium in modern marine basins. Annually, about 18 thousand tons of dissolved and suspended uranium enter the ocean with river runoff. Depending on the type of marine basin, uranium accumulates either in sediments of deep-sea basins or in sediments of continental shelves and slopes. In the surface layer of marine sediments, hydrogenic uranium is predominantly bound with organic matter, and in ocean sediments also with iron, manganese and phosphorus. During diagenesis, uranium is partially redistributed within the sediments and concentrated in iron-manganese, phosphate and carbonate nodules and in biogenic phosphate detritus. The concentration of uranium in marine sediments of various types depends on their composition, on the forms in which uranium enters the basin, on the degree of sediment differentiation and on sedimentation rates, on the hydrochemical regime and water circulation, and on the intensity of diagenetic processes.
Abstract:
Shell chemistry of planktic foraminifera and the alkenone unsaturation index in 69 surface sediment samples from the tropical eastern Indian Ocean off West and South Indonesia were studied. Results were compared to modern hydrographic data in order to assess how modern environmental conditions are preserved in the sedimentary record, and to determine the best possible proxies for reconstructing seasonality, thermal gradient and upper water column characteristics in this part of the world ocean. Our results imply that alkenone-derived temperatures record annual mean temperatures in the study area. However, this finding might be an artifact of the temperature limitation of this proxy above 28°C. A combined study of shell stable oxygen isotopes and Mg/Ca ratios of planktic foraminifera suggests that Globigerinoides ruber sensu stricto (s.s.), G. ruber sensu lato (s.l.), and G. sacculifer calcify within the mixed layer between 20 m and 50 m, whereas Globigerina bulloides records mixed-layer conditions at ~50 m depth during boreal summer. Mean calcification of Pulleniatina obliquiloculata, Neogloboquadrina dutertrei, and Globorotalia tumida occurs at the top of the thermocline during boreal summer, at ~75 m, 75-100 m, and 100 m, respectively. Shell Mg/Ca ratios of all species show a significant correlation with temperature at their apparent calcification depths and validate the application of previously published temperature calibrations, except for G. tumida, which requires a regional Mg/Ca-temperature calibration (Mg/Ca = 0.41 exp(0.068 × T)). We show that the difference in Mg/Ca temperatures between the mixed-layer species and the thermocline species, particularly between G. ruber s.s. (or s.l.) and P. obliquiloculata, can be applied to track changes in upper water column stratification. Our results provide critical tools for reconstructing past changes in the hydrography of the study area and their relation to the monsoon, El Niño-Southern Oscillation, and the Indian Ocean Dipole Mode.
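The quoted calibration can be inverted to recover temperature from a measured Mg/Ca ratio, and differencing the temperatures of a mixed-layer and a thermocline species gives the stratification proxy described above. The sketch below applies the single quoted G. tumida calibration to both species purely for illustration (in practice each species uses its own published calibration), and the Mg/Ca values are invented, not study data.

```python
# Hedged sketch: inverting an exponential Mg/Ca-temperature calibration.
# The G. tumida calibration Mg/Ca = 0.41 * exp(0.068 * T) is quoted in the
# abstract; the sample Mg/Ca values below are illustrative, not study data.
import math

def temperature_from_mgca(mgca, a=0.41, b=0.068):
    """Invert Mg/Ca = a * exp(b * T) to T = ln(Mg/Ca / a) / b (degrees C)."""
    return math.log(mgca / a) / b

# Illustrative shell Mg/Ca ratios (mmol/mol) for a mixed-layer and a
# thermocline species; their temperature difference tracks stratification.
t_mixed_layer = temperature_from_mgca(2.9)   # e.g. G. ruber s.s.
t_thermocline = temperature_from_mgca(2.1)   # e.g. P. obliquiloculata

print(f"Mixed layer: {t_mixed_layer:.1f} C, thermocline: {t_thermocline:.1f} C")
print(f"Upper-ocean thermal gradient: {t_mixed_layer - t_thermocline:.1f} C")
```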
Abstract:
Three ice type regimes at Ice Station Belgica (ISB), during the 2007 International Polar Year SIMBA (Sea Ice Mass Balance in Antarctica) expedition, were characterized and assessed for elevation, snow depth, ice freeboard and thickness. Analyses of the probability distribution functions showed great potential for satellite-based altimetry for estimating ice thickness. In question is the altimeter sampling density required for reasonably accurate estimation of snow surface elevation, given inherent spatial averaging. This study assesses an effort to determine the number of laser altimeter 'hits' of the ISB floe, as a representative Antarctic floe of mixed first- and multi-year ice types, needed to statistically recreate the in situ-determined ice-thickness and snow depth distribution based on the fractional coverage of each ice type. Estimates of the fractional coverage and spatial distribution of the ice types, referred to as ice 'towns', for the 5 km² floe were obtained by in situ mapping and photo-visual documentation. Simulated ICESat altimeter tracks, with spot size ~70 m and spacing ~170 m, sampled the floe's towns, generating a buoyancy-derived ice thickness distribution. 115 altimeter hits were required to statistically recreate the regional thickness mean and distribution for a three-town assemblage of mixed first- and multi-year ice, and 85 hits for a two-town assemblage of first-year ice only: equivalent to 19.5 and 14.5 km, respectively, of continuous altimeter track over a floe region of similar structure. The results have significant implications for modeling the sea-ice sampling performance of the ICESat laser altimeter record, as well as for maximizing the sampling characteristics of satellite and airborne laser and radar altimetry missions for sea-ice thickness.
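The sampling question lends itself to a small Monte Carlo experiment: draw simulated altimeter hits from ice 'towns' in proportion to their fractional coverage and watch how the sample mean converges on the floe mean as hits accumulate. The sketch below does exactly that; the town fractions and thickness statistics are illustrative placeholders, not the ISB measurements.

```python
# Hedged sketch: how many altimeter 'hits' recreate a floe's mean thickness?
# Town fractions and thickness statistics are illustrative placeholders,
# not the SIMBA/ISB measurements.
import numpy as np

rng = np.random.default_rng(0)

# Three ice 'towns': (fractional coverage, mean thickness m, std m).
towns = [(0.5, 0.6, 0.15),   # first-year ice
         (0.3, 1.8, 0.40),   # multi-year ice
         (0.2, 1.1, 0.25)]   # deformed/mixed ice
fractions = np.array([t[0] for t in towns])
true_mean = sum(f * m for f, m, _ in towns)

def sample_hits(n_hits):
    """Simulate n altimeter spots landing on towns by fractional coverage."""
    idx = rng.choice(len(towns), size=n_hits, p=fractions)
    return np.array([rng.normal(towns[i][1], towns[i][2]) for i in idx])

for n in (25, 85, 115, 500):
    means = [sample_hits(n).mean() for _ in range(1000)]
    print(f"{n:4d} hits: mean {np.mean(means):.2f} m "
          f"(true {true_mean:.2f}), spread +/- {np.std(means):.2f} m")
```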
Abstract:
Accumulation of an intracellular pool of carbon (Ci pool) is one strategy by which marine algae overcome the low abundance of dissolved CO2 (CO2(aq)) in modern seawater. To identify the environmental conditions under which algae accumulate an acid-labile Ci pool, we applied a 14C pulse-chase method, used originally in dinoflagellates, to two new classes of algae, coccolithophorids and diatoms. This method measures carbon accumulation inside the cells without altering the medium carbon chemistry or culture cell density. We found that the diatom Thalassiosira weissflogii [(Grunow) G. Fryxell & Hasle] and a calcifying strain of the coccolithophorid Emiliania huxleyi [(Lohmann) W. W. Hay & H. P. Mohler] develop significant acid-labile Ci pools. Ci pools are measurable in cells cultured in media with 2-30 µmol/l CO2(aq), corresponding to a medium pH of 8.6-7.9. The absolute Ci pool was greater for the larger-celled diatoms. For both algal classes, the Ci pool became a negligible contributor to photosynthesis once CO2(aq) exceeded 30 µmol/l. Combining the 14C pulse-chase method and the 14C disequilibrium method enabled us to assess whether E. huxleyi and T. weissflogii exhibited thresholds for foregoing accumulation of DIC or reduced their reliance on bicarbonate uptake with increasing CO2(aq). We showed that the Ci pool decreases with higher CO2:HCO3- uptake rates.
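For orientation, the sketch below shows the kind of bookkeeping a 14C pulse-chase estimate of the Ci pool implies: acid-labile counts normalized by the specific activity of the medium DIC and by cell number. This normalization is a standard assumption on our part, not the authors' exact protocol, and all numbers are invented.

```python
# Hedged sketch: bookkeeping behind a 14C pulse-chase estimate of the
# intracellular carbon (Ci) pool. Normalizing acid-labile counts by the
# medium DIC specific activity is an assumed, standard step, not the
# authors' exact protocol; all numbers are illustrative.

def ci_pool_per_cell(acid_labile_dpm, specific_activity_dpm_per_mol, n_cells):
    """Convert acid-labile 14C counts into mol C per cell.

    acid_labile_dpm: counts lost on acidification (the labile Ci pool)
    specific_activity_dpm_per_mol: 14C label per mol of medium DIC
    n_cells: cells in the assayed volume
    """
    return acid_labile_dpm / specific_activity_dpm_per_mol / n_cells

# Illustrative numbers: 5e4 DPM of acid-labile label, medium DIC labeled
# at 2e12 DPM per mol C, 1e6 cells in the sample.
pool = ci_pool_per_cell(5e4, 2e12, 1e6)
print(f"Ci pool: {pool * 1e15:.1f} fmol C per cell")
```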
Abstract:
Poland and Hungary have recently maintained steady economic growth rates. Money supplies are growing rather rapidly in both economies, and exchange rates, by and large, show depreciating trends; exports and prices likewise grow steadily. Per capita GDPs are at similar levels, and the two countries are at similar stages of development. It is assumed that the two economies share the same export market, in which their export goods compete. If one country expands its monetary policy, its prices rise and its interest rate falls; the exchange rate then depreciates, and exports and GDP increase as a result. At the same time, this monetary expansion affects the other country through trade. This mutual relationship between the two countries can be expressed as a Nash equilibrium in game theory. In this paper, macro-econometric models of the Polish and Hungarian economies are built and the Nash equilibrium is introduced into them.
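A toy version of the policy interaction makes the Nash equilibrium concrete: give each country a quadratic payoff over its money-growth choice with a spillover from the other country's choice, and iterate best responses to a fixed point. The payoffs below are illustrative stand-ins, not the paper's macro-econometric models.

```python
# Hedged sketch: a Nash equilibrium for a toy two-country monetary-policy
# game found by best-response iteration. The quadratic payoffs are
# illustrative stand-ins, not the paper's macro-econometric models.

def best_response(other_policy, own_bias=1.0, spillover=0.4):
    """Maximize U(m, m_other) = -(m - own_bias - spillover * m_other)^2.
    The optimum offsets the rival's spillover exactly."""
    return own_bias + spillover * other_policy

m_pl, m_hu = 0.0, 0.0           # initial money-growth choices
for _ in range(100):             # iterate best responses to a fixed point
    m_pl, m_hu = best_response(m_hu), best_response(m_pl)

# At the fixed point neither country gains by deviating: a Nash equilibrium.
print(f"Equilibrium money growth: Poland {m_pl:.3f}, Hungary {m_hu:.3f}")
```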
Abstract:
Species selection for forest restoration is often supported by expert knowledge of local distribution patterns of native tree species. This approach is not applicable to largely deforested regions unless enough data on pre-human tree species distribution are available. In such regions, ecological niche models may provide essential information to support species selection in the framework of forest restoration planning. In this study we used ecological niche models to predict habitat suitability for native tree species in the "Tierra de Campos" region, an almost totally deforested area of the Duero Basin (Spain). Previously available models provide habitat suitability predictions for dominant native tree species, but including non-dominant tree species in forest restoration planning may be desirable to promote biodiversity, especially in largely deforested areas where nearby seed sources cannot be expected. We used the Forest Map of Spain as the species occurrence data source in order to maximize the number of modeled tree species. Penalized logistic regression was used to train models on climate and lithological predictors. Using the model predictions, a set of tools was developed to support species selection in forest restoration planning. Model predictions were used to build ordered lists of suitable species for each cell of the study area. The suitable species lists were summarized by drawing maps that showed the two most suitable species for each cell. Additionally, potential distribution maps of the suitable species for the study area were drawn. For a scenario with two dominant species, the models predicted a mixed forest (Quercus ilex and a coniferous tree species) for almost one half of the study area. According to the models, 22 non-dominant native tree species are suitable for the study area, with up to six suitable species per cell. The model predictions pointed to Crataegus monogyna, Juniperus communis, J. oxycedrus and J. phoenicea as the most suitable non-dominant native tree species in the study area. Our results encourage further use of ecological niche models for forest restoration planning in largely deforested regions.
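Below is a minimal sketch of the modeling step, assuming synthetic occurrence data and scikit-learn's L2-penalized LogisticRegression as a stand-in for whatever penalization the authors actually used: fit presence/absence against climate and lithology predictors, then score cells by predicted suitability.

```python
# Hedged sketch: penalized logistic regression as a habitat-suitability
# model, in the spirit of the paper. Data are synthetic; scikit-learn's
# L2 penalty stands in for the authors' actual penalization scheme.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic cells: two climate predictors and one lithology indicator.
n = 500
X = np.column_stack([
    rng.normal(12, 3, n),        # mean annual temperature (C)
    rng.normal(450, 120, n),     # annual precipitation (mm)
    rng.integers(0, 2, n),       # calcareous substrate (0/1)
])
# Synthetic presence: a species favoring warm, dry, calcareous cells.
logit = 0.5 * (X[:, 0] - 12) - 0.01 * (X[:, 1] - 450) + 1.2 * X[:, 2] - 0.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, y)

# Score one target cell; ranking such scores across species would yield
# the per-cell ordered suitability lists the paper describes.
cell = np.array([[14.0, 380.0, 1.0]])
print(f"Predicted habitat suitability: {model.predict_proba(cell)[0, 1]:.2f}")
```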
Abstract:
The twentieth century brought a new sensibility characterized by the discredit of Cartesian rationality and the weakening of universal truths, related to aesthetic values such as order, proportion and harmony. In the middle of the century, theorists such as Theodor Adorno, Rudolf Arnheim and Anton Ehrenzweig warned about the transformation under way in the artistic field. Contemporary aesthetics seemed to have a new goal: to deny the idea of art as an organized, finished and coherent structure. Order had lost its privileged position. Disorder, probability, arbitrariness, accidentality, randomness, chaos, fragmentation, indeterminacy... Gradually, new terms were coined by aesthetic criticism to explain what had been happening since the beginning of the century. The first essays on the matter sought to provide new interpretative models based on, among other arguments, the phenomenology of perception, the recent discoveries of quantum mechanics, the deeper layers of the psyche, or information theory. Overall, these were worthy attempts to give theoretical content to a situation as obvious as it was devoid of a founding charter. Finally, in 1962, Umberto Eco brought together all these efforts by proposing a single theoretical frame in his book Opera Aperta. In his view, all the aesthetic production of the twentieth century had one characteristic in common: its capacity to express multiplicity. For this reason, he considered that the nature of contemporary art was, above all, ambiguous. The aim of this research is to clarify the consequences of the incorporation of ambiguity into architectural theoretical discourse. We should start by making an accurate analysis of this concept. However, this task is quite difficult, because ambiguity does not allow itself to be clearly defined: the concept has the disadvantage that its signifier is as imprecise as its signified. In addition, the negative connotations that ambiguity still has outside the aesthetic field stigmatize the term and make its use problematic. Another problem with ambiguity is that the contemporary subject is able to locate it in all situations: besides distinguishing ambiguity in contemporary productions, it does so in works belonging to remote ages and styles. For that reason, it could be said that everything is ambiguous. And that is correct, because in some way ambiguity is present in any creation of the imperfect human being. However, as Eco, Arnheim and Ehrenzweig pointed out, there are two major differences between the current and past contexts. One affects the subject and the other the object. First, it is the contemporary subject, and no other, who has acquired the ability to value and assimilate ambiguity. Secondly, ambiguity was an unexpected aesthetic result in former periods, while in the contemporary object it has been codified and is deliberately present. In any case, as Eco did, we consider it appropriate to use the term ambiguity to refer to the contemporary aesthetic field. Any other term with a more specific meaning would only show partial and limited aspects of a situation that is complex and difficult to diagnose. Contrary to what might normally be expected, in this case ambiguity is the term that fits best, precisely because of its particular lack of specificity. In fact, this lack of specificity is what allows a dynamic condition to be assigned to the idea of ambiguity, a condition that other terms could hardly sustain.
Thus, instead of trying to define the idea of ambiguity, we will analyze how it has evolved and what its consequences have been for the architectural discipline. Instead of trying to define what it is, we will examine what its presence has meant at each moment. We will deal with ambiguity as a constant presence that has always been latent in architectural production but whose nature has been modified over time. Eco, in the mid-twentieth century, distinguished between classical ambiguity and contemporary ambiguity. Now, half a century later, the challenge is to discern whether the idea of ambiguity has remained unchanged or has undergone a new transformation. What this research will demonstrate is that it is possible to detect a new transformation, one that has much to do with the cultural and aesthetic context of recent decades: the transition from modernism to postmodernism. This assumption leads us to establish two different levels of contemporary ambiguity, each related to one of these periods. The first level of ambiguity has been widely known for many years. Its main characteristics are a codified multiplicity, an interpretative freedom and an active subject who brings to conclusion an object that is incomplete or indefinite. This level of ambiguity is related to the idea of indeterminacy, a concept successfully introduced into contemporary aesthetic language. The second level of ambiguity has gone almost unnoticed by architectural criticism, although it has been identified and studied in other theoretical disciplines. Much of the work of Fredric Jameson and François Lyotard offers reasonable evidence that the aesthetic production of postmodernism has transcended modern ambiguity to reach a new level in which, despite the existence of multiplicity, the interpretative freedom and the active subject have been questioned and, at last, denied. In this period, ambiguity seems to have reached a level at which it is no longer possible to obtain a conclusive and complete interpretation of the object, because the object has become an unreadable device. Postmodern production offers a kind of inaccessible multiplicity, and its nature is deeply contradictory. This hypothetical transformation of the idea of ambiguity has an outstanding analogy in the poetic analysis made by William Empson, published in 1930 in his Seven Types of Ambiguity. Empson established different levels of ambiguity and classified them according to their poetic effect, in an arrangement that ascended logically towards incoherence. In the seventh level, where ambiguity is highest, he located the contradiction between irreconcilable opposites. It could be said that contradiction, once it undermines the coherence of the object, was the best way contemporary aesthetics found to confirm the Hegelian judgment according to which art would ultimately reject its capacity to express truth. Much of the transformation of architecture throughout the last century is related to the active involvement of ambiguity in its theoretical discourse. In modern architecture, ambiguity is present after the fact, in the critical review carried out by theoreticians such as Colin Rowe, Manfredo Tafuri and Bruno Zevi. The publication of several studies on Mannerism in the forties and fifties rescued certain virtues of a historical style that had been undervalued because of its deviation from the Renaissance canon. Rowe, Tafuri and Zevi, among others, pointed out the similarities between Mannerism and certain qualities of modern architecture, both devoted to breaking previous dogmas.
The recovery of Mannerism made it possible to join ambiguity and modernity for the first time in the same sentence. In postmodernism, on the other hand, ambiguity is present ex professo, playing a prominent role in the theoretical discourse of the period. The distance between its analytical identification and its operational use quickly disappeared thanks to structuralism, an analytical methodology that aspired to become a modus operandi. Under its influence, architecture began to be identified and studied as a language. Thus, the postmodern theoretical project distinguished between the components of architectural language and developed them separately. Consequently, there is not one but three projects related to postmodern contradiction: the semantic project, the syntactic project and the pragmatic project. Leading these projects are those prominent architects whose work manifested a special interest in exploring and developing the potential of the use of contradiction in architecture. Thus, it was Robert Venturi, Peter Eisenman and Rem Koolhaas who established the main features through which architecture developed the dialectics of ambiguity, in its last and extreme level, as a theoretical project in each component of architectural language. Robert Venturi developed a new interpretation of architecture based on its semantic component, Peter Eisenman did the same with its syntactic component, and Rem Koolhaas with its pragmatic component. With this approach, this research aims to establish a new reflection on the architectural transformation from modernity to postmodernity. It may also serve to illuminate certain still-unnoticed aspects that have shaped the architectural heritage of recent decades, the consequence of a fruitful relationship between architecture and ambiguity and of its provocative consummation in a contradictio in terminis. This research focuses its attention fundamentally on the repercussions of the incorporation of ambiguity, in the form of contradiction, into postmodern architectural discourse, through each of its three theoretical projects. It is therefore structured around a main chapter entitled Dialectics of Ambiguity as a Postmodern Theoretical Project, which is broken down into three chapters: Semantic Project. Robert Venturi; Syntactic Project. Peter Eisenman; and Pragmatic Project. Rem Koolhaas. The central chapter is complemented by two others placed at the beginning. The first, entitled Dialectics of Contemporary Ambiguity. An Approach, carries out a chronological analysis of the evolution of the idea of ambiguity in twentieth-century aesthetic theory, without yet entering into architectural questions. The second, entitled Dialectics of Ambiguity as a Critique of the Modern Project, examines the gradual incorporation of ambiguity into the critical review of modernity, which would prove vital in enabling its later operative introduction into postmodernity. A final chapter, placed at the end of the text, proposes a series of Projections which, in light of the preceding chapters, attempt a rereading of the current architectural context and its possible evolution, considering throughout that reflection on ambiguity still allows new discursive horizons to be glimpsed. Each double page of the thesis synthesizes the tripartite structure of the central chapter and, broadly speaking, the main methodological tool used in the research.
In this way, the triple semantic, syntactic and pragmatic dimension with which the postmodern theoretical project has been identified is reproduced here in a specific distribution of images, footnotes and main body text. The images accompanying the main text are placed in the left-hand column. Their distribution follows aesthetic and compositional criteria, qualifying, as far as possible, their semantic condition. Next, to their right, come the footnotes, arranged in a column with each note placed at the same height as its reference in the main text. Their regulated distribution, their value as notation and their possible equation with a deep structure allude to their syntactic condition. Finally, the main body of the text completely occupies the right half of each double page. Conceived as a continuous narrative with hardly any interruptions, its role in satisfying the discursive demands of a doctoral investigation corresponds to its pragmatic condition.
Abstract:
The research in this thesis concerns static cost and termination analysis. Cost analysis aims at estimating the amount of resources that a given program consumes during execution, and termination analysis aims at proving that the execution of a given program eventually terminates. These analyses are strongly related; indeed, cost analysis techniques rely heavily on techniques developed for termination analysis. Precision, scalability, and applicability are essential in static analysis in general. Precision concerns the quality of the inferred results, scalability the size of programs that can be analyzed, and applicability the class of programs that can be handled by the analysis (independently of precision and scalability issues). This thesis addresses these aspects in the context of cost and termination analysis, from both practical and theoretical perspectives. For cost analysis, we concentrate on the problem of solving cost relations (a form of recurrence relations) into closed-form upper and lower bounds, which is at the heart of most modern cost analyzers, and also where most of the precision and applicability limitations are found. We develop tools, and their underlying theoretical foundations, for solving cost relations that overcome the limitations of existing approaches, and we demonstrate superiority in both precision and applicability. A unique feature of our techniques is the ability to handle both lower and upper bounds smoothly, by reversing the corresponding notions in the underlying theory. For termination analysis, we study the hardness of deciding termination for a specific form of simple loops that arise in the context of cost analysis. This study gives a better understanding of the (theoretical) limits of scalability and applicability for both termination and cost analysis.
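To make "solving a cost relation into closed form" concrete, here is a hedged, minimal illustration (not the thesis's tool): a deterministic toy relation C(n) = C(n-1) + n solved symbolically with SymPy. Real cost relations are nondeterministic and therefore require upper and lower bounds rather than exact solutions.

```python
# Hedged illustration: solving a toy cost relation into closed form with
# SymPy. Real cost relations are nondeterministic and need upper/lower
# bounds; this deterministic example only shows what 'closed form' means.
from sympy import Function, rsolve, simplify, symbols

n = symbols("n", integer=True, positive=True)
C = Function("C")

# Cost relation of a loop doing n units of work, then recursing on n - 1:
#   C(n) = C(n-1) + n,  C(0) = 0
closed_form = rsolve(C(n) - C(n - 1) - n, C(n), {C(0): 0})
print(simplify(closed_form))   # -> n*(n + 1)/2, i.e. a quadratic bound
```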