927 results for Types of discourse. Writing production situations. Didactic collections
Abstract:
Coral reefs represent major accumulations of calcium carbonate (CaCO3). The particularly labyrinthine network of reefs in Torres Strait, north of the Great Barrier Reef (GBR), has been examined in order to estimate its gross CaCO3 productivity. The approach involved a two-step procedure: first characterising and classifying the morphology of reefs based on a classification scheme widely employed on the GBR, and then estimating gross CaCO3 productivity rates across the region using a regional census-based approach. This was undertaken by independently verifying published rates of coral reef community gross production for use in Torres Strait, based on site-specific ecological and morphological data. A total of 606 reef platforms were mapped and classified using classification trees. Despite the complexity of the maze of reefs in Torres Strait, there are broad morphological similarities with reefs in the GBR. The spatial distribution and dimensions of reef types across both regions are underpinned by similar geological processes, Holocene sea-level history and exposure to the same wind/wave energy regime, resulting in comparable geomorphic zonation. However, the strong tidal currents flowing through Torres Strait and the relatively shallow and narrow dimensions of the shelf exert a control on the local morphology and spatial distribution of the reef platforms. A total of 8.7 million tonnes of CaCO3 per year, at an average rate of 3.7 kg CaCO3 m-2 yr-1 (G), was estimated for the study area. Extrapolated production rates based on detailed and regional census-based approaches for geomorphic zones across Torres Strait were comparable to those reported elsewhere, particularly values for the GBR based on alkalinity-reduction methods. However, differences in mapping methodologies, and the impact of reduced calcification due to global trends in coral reef ecological decline and changing oceanic physical conditions, warrant further research.
The novel method proposed in this study to characterise the geomorphology of reef types based on classification trees provides an objective and repeatable data-driven approach that, combined with regional census-based approaches, has the potential to be adapted and transferred to different coral reef regions, yielding a more accurate picture of the interactions between reef ecology and geomorphology.
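At its core, a regional census-based estimate of this kind is an area-weighted sum of zone-specific production rates. The sketch below illustrates the arithmetic with hypothetical geomorphic zones; the zone areas and rates are invented for illustration and are not the study's data.

```python
# Hypothetical geomorphic zones with illustrative areas (km^2) and
# gross carbonate production rates G (kg CaCO3 m^-2 yr^-1).
zones = {
    "reef flat":  {"area_km2": 900.0, "rate_G": 3.0},
    "reef crest": {"area_km2": 150.0, "rate_G": 7.5},
    "fore reef":  {"area_km2": 400.0, "rate_G": 5.0},
    "lagoon":     {"area_km2": 950.0, "rate_G": 1.5},
}

M2_PER_KM2 = 1_000_000  # square metres in one square kilometre

def gross_production(zones):
    """Return (total tonnes CaCO3 per year, area-weighted mean G)."""
    total_kg = sum(z["area_km2"] * M2_PER_KM2 * z["rate_G"] for z in zones.values())
    total_area_m2 = sum(z["area_km2"] * M2_PER_KM2 for z in zones.values())
    return total_kg / 1000.0, total_kg / total_area_m2

tonnes_per_year, mean_G = gross_production(zones)
```

The study's region-wide figures (8.7 million tonnes per year at an average of 3.7 G) are the result of exactly this kind of summation, performed over the mapped reef platforms rather than the toy zones above.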
Abstract:
Providing price incentives to farmers is usually considered essential for agricultural development. Although such incentives are important, regarding price as the sole explanatory factor is far from satisfactory in understanding the complex realities of agricultural production in Africa. By analyzing the share contracts widely practiced in Ghana, this article argues that local institutions such as land tenure systems and agrarian contracts provide strong incentives and disincentives for agricultural production. Based on data derived from fieldwork in the 1990s, the study analyzes two types of share contracts and the incentive structures embedded in them. The analysis reveals that farmers' investment behavior needs to be understood in terms of both the short-term incentive to increase yields and the long-term incentive to strengthen land rights. The study concludes that the role of price incentives in agricultural production needs to be reconsidered by placing it within the wider incentive structures embedded in local institutions.
Abstract:
This paper is an empirical investigation of the relationship between exchange rate volatility and international trade, focusing on East Asia. It finds that intra-East Asian trade is discouraged by exchange rate volatility more seriously than trade in other regions because trade in intermediate goods within production networks, which is quite sensitive to exchange rate volatility compared with other types of trade, occupies a significant fraction of the region's trade. In addition, this negative effect of volatility is mainly induced by unanticipated volatility and has an even greater impact than that of tariffs.
Abstract:
This paper examines the effects of preferential trade agreements (PTAs) in facilitating the international trade flows connecting production networks. We consider over 250 PTAs, with trade flows distinguished into parts and components and final goods, for the period 1979-2008. The gravity equation estimates suggest that the concurrent-year effects of PTA formation on trade in parts and components are not observed, whereas PTAs have positive and pervasive effects on both types of trade flows 6 and 9 years after PTA formation.
Abstract:
A Probabilistic Safety Assessment (PSA) is being developed for a steam-methane reforming hydrogen production plant linked to a High-Temperature Gas Cooled Nuclear Reactor (HTGR). This work is based on the Japan Atomic Energy Research Institute's (JAERI) High Temperature Test Reactor (HTTR) prototype in Japan. The study has two major objectives: to calculate the risk to onsite and offsite individuals, and to calculate the frequency of different types of damage to the complex. A simplified HAZOP study was performed to identify initiating events, based on existing studies. The initiating events presented here are methane pipe break, helium pipe break, and PPWC heat exchanger pipe break. Generic data were used for the fault tree analysis and the initiating event frequencies. Saphire was used for the PSA analysis. The results show that the average frequency of an accident at this complex is 2.5E-06/year, divided among the various end states. The dominant sequences result in graphite oxidation, which does not pose a health risk to the population. The dominant sequences that could affect the population are those resulting in a methane explosion, which occur at a frequency of 6.6E-8/year, while the other sequences are much less frequent. A health risk arises only if there are people in the vicinity who could be affected by the explosion. The analysis also demonstrates that an accident in one of the plants has little effect on the other. This holds given the design-basis distance between the plants, the fact that the reactor is underground, and other safety characteristics of the HTGR. Sensitivity studies are being performed to determine where additional and improved data are needed.
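In a PSA of this kind, each accident sequence frequency is the product of an initiating event frequency and the conditional failure probabilities along its event-tree branch, and end-state frequencies are sums over the contributing sequences. A minimal sketch of that quantification, with placeholder numbers rather than values from this study:

```python
# A minimal event-tree quantification sketch. Each sequence pairs an
# initiating event (IE) frequency (per year) with the conditional failure
# probabilities of the branches leading to an end state.
# All numbers are illustrative placeholders, not values from the study.
sequences = [
    # (initiating event, IE frequency /yr, branch failure probs, end state)
    ("methane pipe break", 1.0e-3, [5.0e-3, 1.3e-2], "methane explosion"),
    ("helium pipe break",  2.0e-4, [1.0e-2],         "graphite oxidation"),
    ("PPWC HX pipe break", 5.0e-4, [4.0e-3],         "graphite oxidation"),
]

def end_state_frequencies(sequences):
    """Sum sequence frequencies (IE freq x product of branch probs) per end state."""
    totals = {}
    for _name, ie_freq, branch_probs, end_state in sequences:
        freq = ie_freq
        for p in branch_probs:
            freq *= p
        totals[end_state] = totals.get(end_state, 0.0) + freq
    return totals

freqs = end_state_frequencies(sequences)
```

Tools such as Saphire perform this same multiplication and summation, with the branch probabilities themselves obtained from fault tree analysis of the mitigating systems.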
Abstract:
The twentieth century brought a new sensibility characterized by the discredit of Cartesian rationality and the weakening of universal truths, associated with aesthetic values such as order, proportion and harmony. In the middle of the century, theorists such as Theodor Adorno, Rudolf Arnheim and Anton Ehrenzweig warned of the transformation under way in the artistic field. Contemporary aesthetics seemed to have a new goal: to deny the idea of art as an organized, finished and coherent structure. Order had lost its privileged position. Disorder, probability, arbitrariness, accidentality, randomness, chaos, fragmentation, indeterminacy... Gradually, new terms were coined by aesthetic criticism to explain what had been happening since the beginning of the century. The first essays on the matter sought to provide new interpretative models based on, among other arguments, the phenomenology of perception, the recent discoveries of quantum mechanics, the deeper layers of the psyche or information theory. Overall, these were worthy attempts to give theoretical content to a situation as obvious as it was devoid of a founding charter. Finally, in 1962, Umberto Eco brought all these efforts together by proposing a single theoretical framework in his book Opera Aperta. From his point of view, all the aesthetic production of the twentieth century had one characteristic in common: its capacity to express multiplicity. For this reason, he considered that the nature of contemporary art was, above all, ambiguous. The aim of this research is to clarify the consequences of the incorporation of ambiguity into architectural theoretical discourse. We should start by making an accurate analysis of this concept. However, this task is quite difficult, because ambiguity does not allow itself to be clearly defined. The concept has the disadvantage that its signifier is as imprecise as its signified.
In addition, the negative connotations that ambiguity still has outside the aesthetic field stigmatize the term and make its use problematic. Another problem with ambiguity is that the contemporary subject is able to locate it in all situations: in addition to distinguishing ambiguity in contemporary productions, the subject also finds it in works belonging to remote ages and styles. For that reason, it could be said that everything is ambiguous. And that is correct, because somehow ambiguity is present in any creation of the imperfect human being. However, as Eco, Arnheim and Ehrenzweig pointed out, there are two major differences between current and past contexts. One affects the subject and the other the object. First, it is the contemporary subject, and no other, who has acquired the ability to value and assimilate ambiguity. Secondly, ambiguity was an unexpected aesthetic result in former periods, while in the contemporary object it has been codified and is deliberately present. In any case, as Eco did, we consider the use of the term ambiguity appropriate to refer to the contemporary aesthetic field. Any other term with a more specific meaning would only show partial and limited aspects of a situation that is complex and difficult to diagnose. Contrary to what might normally be expected, in this case ambiguity is the term that fits best, precisely because of its particular lack of specificity. In fact, this lack of specificity is what allows us to assign a dynamic condition to the idea of ambiguity that other terms could hardly sustain. Thus, instead of trying to define the idea of ambiguity, we will analyze how it has evolved and what its consequences have been for the architectural discipline. Instead of trying to define what it is, we will examine what its presence has meant at each moment. We will deal with ambiguity as a constant presence that has always been latent in architectural production but whose nature has been modified over time.
Eco, in the mid-twentieth century, distinguished between classical ambiguity and contemporary ambiguity. Currently, half a century later, the challenge is to discern whether the idea of ambiguity has remained unchanged or has undergone a new transformation. What this research will demonstrate is that it is possible to detect a new transformation that has much to do with the cultural and aesthetic context of recent decades: the transition from modernism to postmodernism. This assumption leads us to establish two different levels of contemporary ambiguity, each related to one of these periods. The first level of ambiguity has been widely known for many years. Its main characteristics are a codified multiplicity, an interpretative freedom and an active subject who brings to conclusion an object that is incomplete or indefinite. This level of ambiguity is related to the idea of indeterminacy, a concept successfully introduced into contemporary aesthetic language. The second level of ambiguity has gone almost unnoticed by architectural criticism, although it has been identified and studied in other theoretical disciplines. Much of the work of Fredric Jameson and Jean-François Lyotard offers reasonable evidence that the aesthetic production of postmodernism has transcended modern ambiguity to reach a new level in which, despite the existence of multiplicity, the interpretative freedom and the active subject have been questioned, and ultimately denied. In this period ambiguity seems to have reached a stage at which it is no longer possible to obtain a conclusive and complete interpretation of the object, because it has become an unreadable device. Postmodern production offers a kind of inaccessible multiplicity, and its nature is deeply contradictory. This hypothetical transformation of the idea of ambiguity has an outstanding analogy in the poetic analysis made by William Empson in his Seven Types of Ambiguity, first published in 1930.
Empson established different levels of ambiguity and classified them according to their poetic effect, in an arrangement with an ascending logic towards incoherence. In the seventh level, where ambiguity is highest, he located the contradiction between irreconcilable opposites. It could be said that contradiction, once it undermines the coherence of the object, was the best way contemporary aesthetics found to confirm the Hegelian judgment according to which art would ultimately reject its capacity to express truth. Much of the transformation of architecture throughout the last century is related to the active involvement of ambiguity in its theoretical discourse. In modern architecture, ambiguity is present after the fact, in the critical review carried out by theoreticians such as Colin Rowe, Manfredo Tafuri and Bruno Zevi. The publication of several studies on Mannerism in the forties and fifties rescued certain virtues of a historical style that had been undervalued because of its deviation from the Renaissance canon. Rowe, Tafuri and Zevi, among others, pointed out the similarities between Mannerism and certain qualities of modern architecture, both devoted to breaking previous dogmas. The recovery of Mannerism made it possible to join ambiguity and modernity in the same sentence for the first time. In postmodernism, on the other hand, ambiguity is present ex professo, playing a prominent role in the theoretical discourse of the period. The distance between its analytical identification and its operational use quickly disappeared under the influence of structuralism, an analytical methodology with the aspiration of becoming a modus operandi. Under its influence, architecture began to be identified and studied as a language. Thus, the postmodern theoretical project distinguished between the components of architectural language and developed them separately. Consequently, there are not one but three projects related to postmodern contradiction: the semantic project, the syntactic project and the pragmatic project.
Leading these projects are those prominent architects whose work manifested a special interest in exploring and developing the potential of the use of contradiction in architecture. Thus, Robert Venturi, Peter Eisenman and Rem Koolhaas were the ones who established the main features through which architecture developed the dialectics of ambiguity, in its last and extreme level, as a theoretical project within each component of architectural language. Robert Venturi developed a new interpretation of architecture based on its semantic component, Peter Eisenman did the same with its syntactic component, and Rem Koolhaas with its pragmatic component. With this approach, this research aims to establish a new reflection on the architectural transformation from modernity to postmodernity. It may also serve to shed light on certain still-unnoticed aspects that have shaped the architectural heritage of recent decades, the consequence of a fruitful relationship between architecture and ambiguity and its provocative consummation in a contradictio in terminis. This research focuses primarily on the repercussions of the incorporation of ambiguity, in the form of contradiction, into postmodern architectural discourse, through each of its three theoretical projects. It is therefore structured around a main chapter entitled Dialectics of Ambiguity as a Postmodern Theoretical Project, which is divided into three parts: Semantic Project. Robert Venturi; Syntactic Project. Peter Eisenman; and Pragmatic Project. Rem Koolhaas. The central chapter is complemented by two others placed at the beginning. The first, entitled Dialectics of Contemporary Ambiguity. An Approach, carries out a chronological analysis of the evolution that the idea of ambiguity has undergone in twentieth-century aesthetic theory, without yet entering into architectural questions.
The second, entitled Dialectics of Ambiguity as a Critique of the Modern Project, examines the gradual incorporation of ambiguity into the critical review of modernity, which would be of vital importance in enabling its later operational introduction into postmodernity. A final chapter, placed at the end of the text, proposes a series of Projections that, in light of the analysis in the preceding chapters, attempt to establish a rereading of the current architectural context and its possible evolution, considering at all times that reflection on ambiguity still allows new discursive horizons to be glimpsed. Each double page of the thesis synthesises the tripartite structure of the central chapter and, broadly speaking, the main methodological tool used in the research. Thus the triple semantic, syntactic and pragmatic dimension with which the postmodern theoretical project has been identified is reproduced here in a specific arrangement of images, footnotes and the main body of the text. The images accompanying the main text are placed in the left-hand column. Their arrangement follows aesthetic and compositional criteria, qualifying, as far as possible, their semantic condition. Next, to their right, are the footnotes. They are arranged in a column, each note placed at the same height as its corresponding reference in the main text. Their regulated arrangement, their value as notation and their possible equation with a deep structure allude to their syntactic condition. Finally, the main body of the text occupies the entire right half of each double page. Conceived as a continuous narrative, with hardly any interruptions, its role in meeting the discursive demands of doctoral research corresponds to its pragmatic condition.
Abstract:
Babassu and camelina oils have been transesterified with methanol by the classical homogeneous basic catalysis method with good yields. The babassu fatty acid methyl ester (FAME) was subjected to fractional distillation under vacuum, and the low-boiling-point fraction was blended with two types of fossil kerosene: a straight-run atmospheric distillation cut (hydrotreated) and a commercial Jet-A1. The camelina FAME was blended with the fossil kerosene without previous distillation. The blends of babassu biokerosene and Jet-A1 met several of the ASTM D1655 specifications selected for study: smoke point, density, flash point, cloud point, kinematic viscosity, oxidative stability and lower heating value. By contrast, the blends of babassu biokerosene and the atmospheric distillation cut met only the density and oxidative stability parameters. The blends of camelina FAME and the atmospheric distillation cut met the following specifications: density, kinematic viscosity at −20 °C, and lower heating value. From these preliminary results, it can be concluded that it would be feasible to blend babassu and camelina biokerosenes prepared in this way with commercial Jet-A1, up to 10 vol % of the former, provided these blends prove to meet all the ASTM D1655-09 specifications.
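As a first screening of such blends, properties like density are often estimated with an ideal volume-weighted mixing rule before measurement. The sketch below applies it to a hypothetical 10 vol % blend; the component densities are assumed values, not measurements from this work, and the 775-840 kg/m3 window is the commonly cited ASTM D1655 density range at 15 °C.

```python
# Volume-weighted mixing rule for blend density (a common first-order
# approximation; real blends can deviate slightly from ideal mixing).
# Component densities are illustrative values in kg/m^3 at 15 C.
RHO_JET_A1  = 800.0   # assumed density of the fossil Jet-A1
RHO_BIOKERO = 875.0   # assumed density of the biokerosene (FAME) fraction

def blend_density(rho_a, rho_b, vol_frac_b):
    """Ideal volume-weighted density of a two-component blend."""
    return (1.0 - vol_frac_b) * rho_a + vol_frac_b * rho_b

rho_blend = blend_density(RHO_JET_A1, RHO_BIOKERO, 0.10)  # 10 vol % biokerosene
within_spec = 775.0 <= rho_blend <= 840.0  # commonly cited ASTM D1655 window
```

A check like this only screens candidates; conformity with the standard still requires measuring each property (smoke point, flash point, viscosity, and so on) on the actual blend, as the study does.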
Abstract:
This paper analyzes the relationship among research collaboration, number of documents and number of citations in computer science research activity. It analyzes the number of documents and citations and how they vary with the number of authors. These are also analyzed (according to author-set cardinality) under different circumstances: when documents are written in different types of collaboration, when documents are published in different document types, when documents are published in different computer science subdisciplines, and, finally, when documents are published in journals in different impact factor quartiles. To investigate these relationships, the paper analyzes the publications listed in the Web of Science produced by active Spanish university professors working in the computer science field between 2000 and 2009. Analyzing all documents, we show that the highest percentage of documents is published by three authors, whereas single-authored documents account for the lowest percentage. In terms of number of citations, there is no positive association between author cardinality and citation impact. Statistical tests show that documents written by two authors receive more citations per document and year than documents published by more authors. In contrast, the results do not show statistically significant differences between documents published by two authors and by one author. The research findings suggest that international collaboration results, on average, in publications with higher citation rates than national and institutional collaboration. As expected, we also find differences in citation rates between journals and conferences, across computer science subdisciplines and across journal quartiles. Finally, our impression is that the collaborative level (number of authors per document) will increase in the coming years, and documents published by three or four authors will be the trend in the computer science literature.
Abstract:
Biodiesel is currently produced by a catalytic transesterification reaction of various types of edible and non-edible oils with methanol. The use of waste animal tallow instead of edible oils opens a route to recycling this waste. This material has the advantage of lower cost but the problem of a high content of free fatty acids, making a pre-esterification reaction necessary and increasing the cost of the catalytic process. The production of biodiesel using supercritical alcohols is appropriate for materials with high acidity and water content, so the use of this process with animal fat is a promising alternative. Ethanol has been used because it can be produced from biomass via fermentation, resulting in a completely renewable biodiesel, unlike methanol, which derives from fossil feedstocks. Two different processes have been studied: first, the direct transesterification of animal fat using supercritical ethanol and, second, a two-step process in which the first step is hydrolysis of the animal fat and the second step is esterification of the resulting fatty acids. The temperature, the ethanol:fat molar ratio and the reaction time were varied in the different reactions to study their effect on the final conversion and on the degradation of the unsaturated fatty acid esters, the main drawback of these high-temperature, high-pressure processes.
Abstract:
Hyaluronan (HA) plays an important role in lung pathophysiology. For this reason it has attracted great attention both as an active ingredient and as an excipient in treating lung diseases by direct pulmonary HA administration. The aim was the production of highly respirable and flowable HA powders, either as a potential carrier for drug delivery or for direct delivery by inhalation. Engineered sodium hyaluronate powders were produced by the spray-drying technique. All the spray-dried powders were characterised in terms of particle size distribution, drug content, morphology and in vitro respirability. HA was successfully formulated with salbutamol sulphate in combination with leucine and showed remarkable aerodynamic performance (emitted dose of 83% and FPF of 97.1%). Moreover, HA colloidal solutions were designed and spray-dried. In order to improve particle aerodynamic characteristics, different types of excipients were investigated; in particular, stearylamine (5% w/w) gave the best performance across the experimental set. Finally, in vitro biocompatibility was assessed by MTT assay and High Content Analysis for selected dry powder formulations and starting materials. The assays gave the same outcome, confirming the biocompatibility of HA and producing the same ranking of toxicity for the surfactants. The general conclusion of the project is that the formulation containing HA and stearyl alcohol was the best-performing formulation.
Abstract:
Mutual support is an interactional communication process. Taking an interactional approach to support requires that group participants be viewed not only as targets and recipients but also as sources and providers of various types of support. An analysis was performed on the interactions of a group listserv serving as a model of online interactional support. The aim was to explore the communication process that children follow. The analysis revealed that self-disclosure was used in the support group in three distinct ways. Its function for the support recipient is to initiate a transactional relationship with another member for the purpose of attracting social support through the open expression of concerns and frustrations. It is then used by the support provider to demonstrate that coping is possible for the recipient, through the reciprocal self-disclosure of similar concerns and situations with which the member has successfully dealt. The third use of self-disclosure was to establish reciprocal relationships of social companionship.
Abstract:
Current models of word production assume that words are stored as linear sequences of phonemes which are structured into syllables only at the moment of production, because syllable structure is always recoverable from the sequence of phonemes. In contrast, we present theoretical and empirical evidence that syllable structure is lexically represented. Storing syllable structure would have the advantage of making representations more stable and resistant to damage. On the other hand, re-syllabifications affect only a minimal part of phonological representations and occur only in some languages and speech registers. Evidence for these claims comes from analyses of aphasic errors, which not only respect phonotactic constraints but also avoid transformations that move the syllabic structure of the word further away from the original structure, even when equated for segmental complexity. This is true across tasks, types of errors and, crucially, types of patients. The same syllabic effects are shown by apraxic patients and by phonological patients who have more central difficulties in retrieving phonological representations. If syllable structure were only computed after phoneme retrieval, it would have no way to influence the errors of phonological patients. Our results have implications for psycholinguistic and computational models of language as well as for clinical and educational practice.
Abstract:
This study uses a purpose-built corpus to explore the linguistic legacy of Britain's maritime history, found in the form of hundreds of specialised 'Maritime Expressions' (MEs), such as TAKEN ABACK, ANCHOR and ALOOF, that permeate modern English. Selecting just those expressions commencing with 'A', it analyses 61 MEs in detail and describes the processes by which these technical expressions, from a highly specialised occupational discourse community, have made their way into modern English. The Maritime Text Corpus (MTC) comprises 8.8 million words, encompassing a range of text types and registers selected to provide a cross-section of 'maritime' writing. It is analysed using WordSmith analytical software (Scott, 2010), with the 100-million-word British National Corpus (BNC) as a reference corpus. Using the MTC, a list of keywords of specific salience within maritime discourse has been compiled and, using frequency data, concordances and collocations, these MEs are described in detail and their use and form in the MTC and the BNC are compared. The study examines the transformation from ME to figurative use in general discourse, in terms of form and metaphoricity. MEs are classified according to their metaphorical strength and their transference from maritime usage into new registers and domains such as business, politics, sport and reportage. A revised model of metaphoricity is developed and a new category of figurative expression, the 'resonator', is proposed. Additionally, developing the work of Lakoff and Johnson, Kövecses and others on Conceptual Metaphor Theory (CMT), a number of Maritime Conceptual Metaphors are identified and their cultural significance is discussed.
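A keyword list of this kind is typically derived by comparing each word's frequency in the study corpus against the reference corpus using a keyness statistic such as Dunning's log-likelihood, one of the measures keyword tools like WordSmith offer. The sketch below shows the calculation with hypothetical counts; the corpus sizes echo the 8.8-million-word MTC and the 100-million-word BNC, but the word frequencies are invented for illustration.

```python
import math

def log_likelihood(freq_study, freq_ref, size_study, size_ref):
    """Dunning's log-likelihood keyness score for one word, comparing its
    observed frequencies in a study and a reference corpus against the
    frequencies expected if the word were evenly distributed."""
    total = freq_study + freq_ref
    expected_study = size_study * total / (size_study + size_ref)
    expected_ref = size_ref * total / (size_study + size_ref)
    ll = 0.0
    if freq_study > 0:
        ll += freq_study * math.log(freq_study / expected_study)
    if freq_ref > 0:
        ll += freq_ref * math.log(freq_ref / expected_ref)
    return 2.0 * ll

# Hypothetical counts: a maritime term frequent in an 8.8M-word study
# corpus but comparatively rare in a 100M-word reference corpus.
score = log_likelihood(freq_study=450, freq_ref=120,
                       size_study=8_800_000, size_ref=100_000_000)
```

Words are then ranked by this score, and those exceeding a significance threshold form the keyword list from which candidate MEs can be drawn.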
Abstract:
It is generally difficult to produce the economically optimal level of public goods in a market environment. Many different types of public goods are available in many countries today, and in the literature even some elements of the pension system have been argued to have the character of public goods. In this connection, it is an interesting question under what circumstances rational individual decision-makers, through their individually optimal decisions, may bring about the optimal level of a public good. The present paper addresses this question.
Abstract:
This case study follows eleven non-English-speaking students as they adapt to community college content courses. The three classes examined are required freshman classes: Humanities, Social Environment, and Individual in Transition. In order to cope with the demands of these classes, students must penetrate the academic discourse community and have effective relationships with their instructors and their peers. The results of the study are based on interviews with eleven non-native-speaking (NNS) students and their instructors and on an analysis of student writing assignments, course syllabi, and exams. Three general areas are examined: (a) students' first-language (L1) education, (b) the requirements of their content classes, and (c) the affective factors which influence their adaptation process. The case of these students reveals that: (1) students draw on their L1 education, especially in terms of content, as they cope with the demands of these content classes; (2) in some areas L1 educational experiences interfere with students' ability to adapt; (3) the content classes require students to have well-developed reading, writing, oral, and aural skills; (4) students must use higher-level cognitive skills to be successful in content classes; (5) affective factors play a role in students' success in content classes. The discussion section includes possible implications of these data for college-level English as a Second Language courses.