655 results for recursive detrending


Relevance: 10.00%

Abstract:

Metamorphosis, following Jean Chevalier and Alain Gheerbrant, is defined in this study as the physical and/or behavioural transformation of one being into another without the loss of the first being's identity and awareness. This transformation is a recurrent phenomenon across many mythologies and cultures. The present work aims to establish, through a comparative approach, the correlations and differences between the theme of metamorphosis as it recurs in the Greek myths related by Homer in his Odyssey, in the Greek myths described by the Latin poet Publius Ovidius Naso, known as Ovid, in the first five books of his Metamorphoses, and in the oral narratives reporting cases of metamorphosis in the municipality of Belém do Pará, collected between 1994 and 2004. The Odyssey and the Metamorphoses were chosen because both carry the theme of metamorphosis and are exponents of Western literature, portraying, respectively, the Greece of the eighth to seventh centuries BC and the Greek world of the first century AD as depicted by the Latin poet Ovid. Preliminary study confirms the formation of mythical indices not only in the narratives of Greek mythology but also in the cases of metamorphosis originating in Belém. In every case, the spatio-temporal configuration stands out as an entity that sediments and organizes the mythical world, articulating these dimensions with representations in the physical-spiritual world. The theme of metamorphosis, however, is shaped differently according to the historical-cultural context of each narrative, which is reflected in the multiplicity of symbols and meanings each narrative pursues. To enrich the study of the symbols and of the historical-geographical context of the Greek myths discussed, Junito Brandão's handbooks are used as complementary sources, namely the three volumes of his Mitologia and the two volumes of the Dicionário Mítico-Etimológico da Mitologia Grega. For a more effective comparative analysis it was necessary to go beyond the contextual study of the production and representation of the codes underlying each narrative, for myth, in the words of Ernst Cassirer, is experienced in consciousness yet precedes it; man lives the myth, hence the myth precedes man, since, as he becomes conscious of his existence and of the relations he weaves with the world, man draws on myth to establish relations of value and meaning, as well as representations that singularize his experiences. This is therefore a philosophical question of vital importance, and for this literary-narratological study we drew on the foundations of the philosophy of mythology, together with considerations from cultural anthropology and a contextual-historical survey of the cosmos that constitutes each narrative, in order to lay elucidating groundwork on man's relations with his world through certain transformations. From this standpoint, preliminary research on the narratives to be analyzed showed that metamorphoses occurred most frequently when they: 1) symbolized evil in the figure of the transformed; 2) had motivations of a sexual nature; and 3) served as explanations for events in the physical-spiritual world. This is a methodological division intended to make the comparative study easier to organize and visualize.
It is concluded, then, that besides enabling the reading and knowledge of Greek myths and Amazonian accounts through the symbols constituted in mythical consciousness, this study can serve as a basis for examining the literary exercise of creative language through narration, as well as broaden our understanding of what human consciousness is and does as a mainstay for the diffusion of behaviours and beliefs shared by the individual in society.

Relevance: 10.00%

Abstract:

We present an alternative methodology for modelling induction logging tools directly in the time domain. The work consists in solving the diffusion equation for the electromagnetic field by the finite-difference method. Our model is a horizontally stratified medium through which we simulate the displacement of the tool in the direction perpendicular to the interfaces. The source is a coil excited by a step function of current, and the field induced in the medium is recorded by a receiver coil located above the transmitter coil. In solving the diffusion equation we determine the primary and secondary fields separately. The primary field is obtained analytically, while the secondary field is computed with the Alternating Direction Implicit method, which yields a tridiagonal system solved by the recursive method proposed by Claerbout. Finally, the maximum value of the secondary electric field is determined at each tool position along the formation, producing a time-domain log. The results show that the method is quite efficient at locating the contacts between layers, even for thin layers.
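
The tridiagonal recursion at the heart of such a scheme is compact enough to sketch. Below is a minimal Python illustration of the standard forward-elimination/back-substitution recursion for a tridiagonal system, the kind each ADI half-step produces (Claerbout's method is essentially a forward/backward recursion of this type); the coefficient values are placeholders, not the paper's actual discretization.

```python
import numpy as np

def solve_tridiagonal(a, b, c, d):
    """Solve M x = d for tridiagonal M, with a the sub-diagonal, b the main
    diagonal and c the super-diagonal: a forward-elimination recursion
    followed by a back-substitution recursion (Thomas algorithm)."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward recursion
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = (c[i] / denom) if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # backward recursion
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Tiny check against a dense solve (illustrative values only).
n = 5
a = np.r_[0.0, -np.ones(n - 1)]   # sub-diagonal (a[0] unused)
b = 2.0 * np.ones(n)              # main diagonal
c = np.r_[-np.ones(n - 1), 0.0]   # super-diagonal (c[-1] unused)
d = np.arange(1.0, n + 1)
M = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(solve_tridiagonal(a, b, c, d), np.linalg.solve(M, d))
```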

Relevance: 10.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 10.00%

Abstract:

Carbon nanotubes have been at the forefront of nanotechnology, leading not only to a better understanding of the basic properties of charge transport in one-dimensional materials but also to the prospect of a variety of possible applications, including highly sensitive sensors. Practical issues, however, have led to the use of bundles of nanotubes in devices instead of isolated single nanotubes. From a theoretical perspective, the understanding of charge transport in such bundles, and of how it is affected by the adsorption of molecules, has been very limited, one reason being the sheer size of the calculations. A frequent option has been to extrapolate knowledge gained from single tubes to the properties of bundles. In the present work we show that such a procedure is not correct, and that there are qualitative differences between the effects caused by molecules on charge transport in bundles and in isolated nanotubes. Using a combination of density functional theory and recursive Green's function techniques, we show that the adsorption of molecules randomly distributed on the walls of carbon nanotube bundles leads to changes in the charge density and consequently to significant alterations in the conductance, even in pristine tubes. We show that this effect is driven by confinement, which is not present in isolated nanotubes. Furthermore, a low concentration of dopants randomly adsorbed along a 200-nm-long bundle drives a change in the transport regime, from ballistic to diffusive, which can account for the high sensitivity to different molecules.
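
The recursive Green's function idea itself can be illustrated on a much smaller scale. Below is a minimal Python sketch for a single 1-D tight-binding chain with wide-band leads (illustrative parameters, not the DFT-derived Hamiltonians used in the work): the chain is built up one site at a time through a Dyson-equation recursion, and the Landauer transmission follows from the accumulated end-to-end Green's function element.

```python
import numpy as np

def transmission(energies, onsite, t=1.0, gamma=0.5):
    """Landauer transmission of a 1-D tight-binding chain via the recursive
    Green's function (RGF) method.  Wide-band leads are assumed, so each lead
    self-energy is the constant -i*gamma/2 (an illustrative simplification)."""
    sigma = -0.5j * gamma                 # wide-band lead self-energy
    T = np.empty(len(energies))
    for k, E in enumerate(energies):
        G_nn = 1.0 / (E - onsite[0] - sigma)   # site 0 with the left lead attached
        G_1n = G_nn
        for n in range(1, len(onsite)):        # add one site per step (Dyson recursion)
            sR = sigma if n == len(onsite) - 1 else 0.0
            G_nn = 1.0 / (E - onsite[n] - sR - t**2 * G_nn)
            G_1n = G_1n * t * G_nn
        T[k] = gamma**2 * abs(G_1n)**2         # T = Gamma_L * Gamma_R * |G_1N|^2
    return T

# Pristine chain vs. one with a few random "adsorbate" on-site shifts.
rng = np.random.default_rng(0)
E = np.linspace(-1.5, 1.5, 301)
clean = np.zeros(40)
dirty = clean.copy()
dirty[rng.choice(40, size=4, replace=False)] = 0.8   # illustrative impurity strength
print(transmission(E, clean).mean(), transmission(E, dirty).mean())
```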

Relevance: 10.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 10.00%

Abstract:

This paper addresses the investment decisions of 373 large Brazilian firms from 1997 to 2004 in the presence of financial constraints, using panel data. A Bayesian econometric model with ridge regression was used to deal with multicollinearity among the variables in the model. Prior distributions are assumed for the parameters, classifying the model into random or fixed effects. We used a Bayesian approach to estimate the parameters, considering normal and Student-t distributions for the errors, and assumed that the initial values of the lagged dependent variable are not fixed but generated by a random process. The recursive predictive density criterion was used for model comparison. Twenty models were tested, and the results indicate that multicollinearity does influence the values of the estimated parameters. Controlling for capital intensity, financial constraints are found to be more important for capital-intensive firms, probably due to their lower profitability indexes, higher fixed costs and higher degree of property diversification.
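
The ridge ingredient can be made concrete with a small sketch. Under a Gaussian prior beta ~ N(0, (sigma^2/lambda) I), the posterior mean is the familiar shrinkage estimator; the following numpy illustration (synthetic data, not the paper's panel) shows how it stabilizes coefficients that ordinary least squares cannot separate under near-collinearity.

```python
import numpy as np

def bayesian_ridge_mean(X, y, lam):
    """Posterior mean of beta under y = X beta + eps, eps ~ N(0, s2 I) and a
    Gaussian (ridge) prior beta ~ N(0, (s2/lam) I): the shrinkage estimator
    (X'X + lam I)^(-1) X'y that tames multicollinearity."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

# Two nearly collinear regressors (illustrative data only).
rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=200)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=200)
print(bayesian_ridge_mean(X, y, lam=0.0))   # OLS: wildly unstable coefficients
print(bayesian_ridge_mean(X, y, lam=5.0))   # ridge: shrunk toward stability
```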

Relevance: 10.00%

Abstract:

We use Hirota's method, formulated as a recursive scheme, to construct a complete set of soliton solutions for the affine Toda field theory based on an arbitrary Lie algebra. Our solutions include a new class of solitons connected with two different types of degeneracies encountered in Hirota's perturbation approach. We also derive a universal mass formula for all Hirota solutions to the affine Toda model, valid for all underlying Lie groups. The embedding of the affine Toda model in the conformal affine Toda model plays a crucial role in this analysis.
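
As a reminder of the mechanism (a generic illustration of the method, not the paper's affine Toda equations), Hirota's recursive scheme expands the tau function in a formal parameter and solves the bilinear equation order by order; for soliton solutions the expansion truncates:

```latex
% Hirota's perturbation scheme for a generic bilinear equation B(\tau,\tau) = 0
\tau = 1 + \epsilon\,\tau^{(1)} + \epsilon^{2}\,\tau^{(2)} + \dots + \epsilon^{N}\,\tau^{(N)},
\qquad
\tau^{(1)} = \sum_{i=1}^{N} e^{\eta_i},
\quad
\eta_i = k_i x - \omega_i t + \eta_i^{(0)} .
```

Collecting powers of epsilon gives, at each order, a linear equation for tau^(n) sourced by the lower orders; the degeneracies mentioned in the abstract arise within precisely this order-by-order determination.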

Relevance: 10.00%

Abstract:

Graduate Program in Physics - IFT

Relevance: 10.00%

Abstract:

Chain topology, including the branch-node, chain-link and cross-link dynamics that contribute to the number of elastically active strands and junctions, is calculated using purely deterministic derivations. The solutions are not coupled to population density distributions. An eigenzeit transformation assists in converting expressions derived from chemical reaction principles from time to conversion space, yielding transport-phenomena-type expressions in which the rate of change of the molar concentrations of branch nodes with respect to conversion is expressed as a function of the fraction of reactive sites on precursors and reactants. Analogies are hypothesized to exist in cross-linking space that effectively distribute branch nodes with i reacted moieties among cross-links having j bonds extending to the gel. To obtain solutions, reacted sites on nodes or links with finite chain extensions are examined in terms of the stoichiometry associated with covalent bonding. The solutions replicate published results based on Miller and Macosko's recursive procedure, as well as results obtained from truncated weighted sums of population density distributions, as suggested by Flory.
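
The recursive procedure referenced here admits a compact illustration. For an f-functional polycondensation at conversion p, Miller and Macosko's relation determines the probability that an arm "looking out" of a node is finite; active-junction and strand counts follow from it. The sketch below iterates that fixed point for a textbook special case, not the paper's own derivation.

```python
def miller_macosko(f, p, iters=400):
    """Miller-Macosko recursion for an f-functional polycondensation at
    conversion p (minimal sketch).  P_out is the probability that the chain
    seen 'looking out' of a randomly chosen reactive group is finite:
        P_out = (1 - p) + p * P_out**(f - 1).
    Iterating from 0 picks the physical root beyond the gel point."""
    P_out = 0.0
    for _ in range(iters):
        P_out = (1.0 - p) + p * P_out ** (f - 1)
    return P_out

f, p = 3, 0.7                  # gel point for f = 3 is p_c = 1/(f-1) = 0.5
P_out = miller_macosko(f, p)
print(P_out)                   # ~0.4286, i.e. (1-p)/p for f = 3
# A node is an elastically active junction when at least three of its arms
# extend to the gel; for f = 3 that means all three do:
print((1.0 - P_out) ** 3)
```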

Relevance: 10.00%

Abstract:

Graphene has received great attention due to its exceptional properties, which include charge carriers with zero effective mass and extremely large mobilities; these could render it the template for the next generation of electronic devices. Furthermore, the weak spin-orbit interaction that follows from the low atomic number of carbon results in long spin-coherence lengths. Graphene is therefore also a promising material for future applications in spintronic devices, which exploit the electron's spin degrees of freedom instead of its charge. Graphene can be engineered into a number of different structures. In particular, by cutting it appropriately one can obtain one-dimensional systems, only a few nanometers in width, known as graphene nanoribbons (GNRs), whose properties depend strongly on the width of the ribbon and on the atomic structure along the edges. Such GNR-based systems have shown great application potential, especially as connectors for integrated circuits. Impurities and defects can play an important role in the coherence of these systems; in particular, the presence of transition-metal atoms can lead to significant spin-flip processes of the conduction electrons. Understanding this effect is of utmost importance for applied spintronics design. In this work we focus on the electronic transport properties of armchair graphene nanoribbons with adsorbed transition-metal atoms as impurities, taking the spin-orbit effect into account. Our calculations were performed with a combination of density functional theory and non-equilibrium Green's functions. Employing a recursive method, we also consider a large number of impurities randomly distributed along the nanoribbon in order to infer, for different defect concentrations, the spin-coherence length.
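
The role of the recursive treatment of randomly placed impurities can be conveyed with a toy model. The sketch below replaces the DFT and Green's function machinery with a 1-D Anderson chain: impurities are scattered with a given concentration, and the Lyapunov exponent of the transfer-matrix product gives the inverse of a transport decay length, the same length-versus-concentration question asked of the spin-coherence length here (all parameters are illustrative).

```python
import numpy as np

def decay_length(E, eps_imp, conc, n_sites=4000, t=1.0, n_avg=50, seed=0):
    """Toy decay-length estimate for a 1-D tight-binding chain with impurity
    on-site energy eps_imp placed at random with concentration conc: the
    Lyapunov exponent of the transfer-matrix product is the inverse decay
    length (an illustrative stand-in for the full recursive calculation)."""
    rng = np.random.default_rng(seed)
    lyap = 0.0
    for _ in range(n_avg):                         # average over disorder realizations
        eps = np.where(rng.random(n_sites) < conc, eps_imp, 0.0)
        v = np.array([1.0, 0.0])                   # (psi_n, psi_{n-1})
        acc = 0.0
        for e in eps:                              # psi_{n+1} = (E-e)/t psi_n - psi_{n-1}
            v = np.array([(E - e) / t * v[0] - v[1], v[0]])
            norm = np.linalg.norm(v)
            acc += np.log(norm)                    # renormalize to avoid overflow
            v /= norm
        lyap += acc / n_sites
    return 1.0 / (lyap / n_avg)                    # in units of the lattice constant

print(decay_length(E=0.5, eps_imp=1.0, conc=0.05))
```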

Relevance: 10.00%

Abstract:

Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis, using three different samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was optimized to test the average hard X-ray (E above 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was optimized to compare the properties of low-luminosity sources with those of higher luminosity and was thus also used to test emission-mechanism models; finally, the XMM-Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well-defined sample with which to define the average properties of Seyfert galaxies. Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (~2-100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies, in particular the photon index (~1.8), the high-energy cut-off (~290 keV) and the relative amount of cold reflection (~1.0). Moreover, the unified scheme for active galactic nuclei was positively tested: the distributions of the isotropic indicators used here (photon index, relative amount of reflection, high-energy cut-off and narrow FeK energy centroid) are similar in type I and type II objects, while the absorbing column and the iron-line equivalent width differ significantly between the two classes, with type II objects displaying larger absorbing columns. Taking advantage of the XMM-Newton and X-CfA samples, I also deduced from measurements that 30 to 50% of type II Seyfert galaxies are Compton thick. Confirming previous results, the narrow FeK line in Seyfert 2 galaxies is consistent with being produced in the same matter responsible for the observed obscuration. These results support the basic picture of the unified model. Moreover, the presence of an X-ray Baldwin effect in type I sources has been measured using, for the first time, the 20-100 keV luminosity (EW proportional to L(20-100)^(-0.22±0.05)). This finding suggests that the torus covering factor may be a function of source luminosity, thereby suggesting a refinement of the baseline version of the unified model itself. Using the BeppoSAX sample, a possible correlation between the photon index and the amount of cold reflection has also been recorded in both type I and type II sources. At first glance this confirms thermal Comptonization as the most likely origin of the high-energy emission of active galactic nuclei. This relation, in fact, emerges naturally if one supposes that the accretion disk penetrates the central corona at different depths depending on the accretion rate (Merloni et al. 2006): the higher-accreting systems host disks extending down to the last stable orbit, while the lower-accreting systems host truncated disks. On the contrary, the study of the well-defined X-CfA sample of Seyfert galaxies has shown that the intrinsic X-ray luminosity of nearby Seyfert galaxies can span values between 10^(38-43) erg s^-1, i.e. covering a huge range of accretion rates. The less efficient systems have been supposed to host ADAF systems without an accretion disk.
However, the study of the X-CfA sample has also proved the existence of correlations between optical emission lines and X-ray luminosity over the entire range of L_X covered by the sample. These relations are similar to the ones obtained when only high-luminosity objects are considered; the emission mechanism must therefore be similar in luminous and weak systems. A possible scenario to reconcile these somewhat opposite indications is that the ADAF and the two-phase mechanism co-exist, with a relative importance that shifts from low- to high-accretion systems (as suggested by the Gamma vs. R relation). The present data require that no abrupt transition between the two regimes be present. As mentioned above, the possible presence of an accretion disk has been tested using samples of nearby Seyfert galaxies. Here, to investigate in depth the flow patterns close to super-massive black holes, three case-study objects for which sufficient count statistics are available have been analysed using deep X-ray observations taken with XMM-Newton. The results show that the accretion flow can differ significantly between objects when analysed in appropriate detail. For instance, the accretion disk is well established down to the last stable orbit in a Kerr system for IRAS 13197-1627, where strong light-bending effects have been measured. The accretion disk seems to form, spiraling in, within the inner ~10-30 gravitational radii in NGC 3783, where time-dependent and recurrent modulations have been measured both in the continuum emission and in the broad emission-line component. Finally, the accretion disk seems to be only weakly detectable in Mrk 509, with its weak broad emission-line component. Blueshifted resonant absorption lines have been detected in all three objects. This seems to demonstrate that, around super-massive black holes, there is matter not confined to the accretion disk that moves along the line of sight with velocities as large as v~0.01-0.4c (where c is the speed of light). Whether this matter forms winds or blobs is still a matter of debate, together with the assessment of the real statistical significance of the measured absorption lines. Nonetheless, if confirmed, these phenomena are of outstanding interest because they offer new potential probes of the dynamics of the innermost regions of accretion flows, of the formation of ejecta/jets, and of the rate of kinetic energy injected by AGNs into the ISM and IGM. Future high-energy missions (such as the planned Simbol-X and IXO) will likely allow an exciting step forward in our understanding of flow dynamics around black holes and of the formation of the highest-velocity outflows.

Relevance: 10.00%

Abstract:

The "sustainability" concept relates to the prolonging of human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship and practices that conserve resources in a manner that allow growth and development to be sustained for the long-term without degrading the environment are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe as well as in the U.S. are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post construction properties of traditional HMA. Lowering the production temperature reduce the fuel usage and the production of emissions therefore and that improve conditions for workers and supports the sustainable development. Even the crumb-rubber modifier (CRM), with shredded automobile tires and used in the United States since the mid 1980s, has proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is not only relevant in an environmental aspect but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project is aimed to demonstrate the dual value of these Asphalt Mixes in regards to the environmental and mechanical performance and to suggest a low environmental impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design but it cannot be the only step. The eco-compatible approach should be extended also to the design method and material characterization because only with these phases is it possible to exploit the maximum potential properties of the used materials. Appropriate asphalt concrete characterization is essential and vital for realistic performance prediction of asphalt concrete pavements. Volumetric (Mix design) and mechanical (Permanent deformation and Fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to correctly use the material. A design method such as a Mechanistic-Empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under the different traffic and environmental conditions, was the application of choice. In particular this study focus on the CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain related to the surface cracking and to the rutting respectively. It works in increments of time and, using the output from one increment, recursively, as input to the next increment, predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between surface layer and pavement structure in terms of fatigue and permanent deformation with defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as surface layer of 60 mm thickness. 
The performance of the pavement was compared to the performance of the same pavement structure where different kinds of asphalt concrete were used as surface layer. In comparison to a conventional asphalt concrete, three eco-friendly materials, two warm mix asphalt and a rubberized asphalt concrete, were analyzed. The First Two Chapters summarize the necessary steps aimed to satisfy the sustainable pavement design procedure. In Chapter I the problem of asphalt pavement eco-compatible design was introduced. The low environmental impact materials such as the Warm Mix Asphalt and the Rubberized Asphalt Concrete were described in detail. In addition the value of a rational asphalt pavement design method was discussed. Chapter II underlines the importance of a deep laboratory characterization based on appropriate materials selection and performance evaluation. In Chapter III, CalME is introduced trough a specific explanation of the different equipped design approaches and specifically explaining the I-R procedure. In Chapter IV, the experimental program is presented with a explanation of test laboratory devices adopted. The Fatigue and Rutting performances of the study mixes are shown respectively in Chapter V and VI. Through these laboratory test data the CalME I-R models parameters for Master Curve, fatigue damage and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of the asphalt pavement structures simulations with different surface layers were reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rutting depth in each bound layer were analyzed.
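
The I-R logic is easy to state in code. The sketch below (illustrative damage and rutting laws, not CalME's calibrated models) shows the essential feedback: each traffic increment is evaluated with the layer modulus left over from the previous increment, so damage softens the layer, which raises strains, which accelerates further damage and rutting.

```python
# Minimal incremental-recursive (I-R) sketch: the pavement state (here just a
# surface-layer modulus E and a rut depth) is updated one traffic increment at
# a time, and the damaged state of increment i is the input to increment i+1.

E0, E = 3000.0, 3000.0          # intact and current stiffness (MPa)
rut, damage = 0.0, 0.0
axles_per_inc = 1e4

for inc in range(120):          # e.g. monthly increments over 10 years
    strain = 200e-6 * (E0 / E) ** 0.5      # softer layer -> larger strain (toy response)
    damage = min(1.0, damage + axles_per_inc * (strain / 400e-6) ** 4 / 1e6)
    E = E0 * (1.0 - 0.6 * damage)          # fatigue damage degrades the modulus...
    rut += axles_per_inc * 2e-6 * (strain / 200e-6) ** 2   # ...and feeds into rutting

print(f"final modulus {E:.0f} MPa, damage {damage:.2f}, rut {rut:.2f} mm")
```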

Relevance: 10.00%

Abstract:

Visual tracking is the problem of estimating variables related to a target given a video sequence depicting the target. It is key to the automation of many tasks, such as visual surveillance, autonomous robot or vehicle navigation, and automatic video indexing in multimedia databases. Despite many years of research, long-term tracking of generic targets in real-world scenarios remains unaccomplished. The main contribution of this thesis is the definition of effective algorithms that can foster a general solution to visual tracking by letting the tracker adapt to mutating working conditions. In particular, we propose to adapt two crucial components of visual trackers: the transition model and the appearance model. The less general but widespread case of tracking from a static camera is also considered, and a novel change-detection algorithm robust to sudden illumination changes is proposed. Based on this, a principled adaptive framework to model the interaction between Bayesian change detection and recursive Bayesian trackers is introduced. Finally, the problem of automatic tracker initialization is considered; in particular, a novel solution for the categorization of 3D data is presented, based on a new 3D descriptor that achieves state-of-the-art performance in several surface-matching applications.
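
The canonical recursive Bayesian tracker is the Kalman filter, whose predict/update cycle mirrors the transition-model/appearance-model split discussed above. The following is a minimal sketch with a constant-velocity transition model and a position-only measurement (all parameters illustrative).

```python
import numpy as np

# A minimal recursive Bayesian tracker: a Kalman filter with a constant-
# velocity transition model, the linear-Gaussian special case of the
# recursive Bayesian filters discussed above.

dt = 1.0
F = np.array([[1, dt], [0, 1]], float)     # transition model: position, velocity
H = np.array([[1.0, 0.0]])                 # we only measure position
Q = 0.01 * np.eye(2)                       # process noise
R = np.array([[0.25]])                     # measurement noise

x = np.zeros(2)                            # state estimate
P = np.eye(2)                              # its covariance

def step(x, P, z):
    # predict with the transition model ...
    x, P = F @ x, F @ P @ F.T + Q
    # ... then update with the measurement (detection/appearance stage)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(2)
for t in range(20):
    z = np.array([0.7 * t + rng.normal(scale=0.5)])   # noisy target position
    x, P = step(x, P, z)
print(x)   # estimated position and velocity
```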

Relevance: 10.00%

Abstract:

This thesis addresses the problem of scaling reinforcement learning to high-dimensional and complex tasks. Reinforcement learning here denotes a class of learning methods based on approximate dynamic programming that finds particular application in artificial intelligence and can be used for the autonomous control of simulated agents or real hardware robots in dynamic and unpredictable environments. To this end, regression on samples is used to determine a function that solves an optimality equation (Bellman) and from which approximately optimal decisions can be derived. A major hurdle is the dimensionality of the state space, which is often high and therefore poorly accessible to traditional grid-based approximation schemes. The goal of this thesis is to make reinforcement learning applicable to problems of, in principle, arbitrarily high dimension by means of non-parametric function approximation (more precisely, regularization networks). Regularization networks are a generalization of ordinary basis-function networks that parameterize the sought solution by the data; the explicit choice of nodes/basis functions is thereby avoided, so the "curse of dimensionality" can be circumvented for high-dimensional inputs. At the same time, regularization networks are linear approximators that are technically easy to handle and for which the existing convergence guarantees of reinforcement learning remain valid (unlike, for instance, feed-forward neural networks). All these theoretical advantages, however, are offset by a very practical problem: the computational cost of regularization networks inherently scales as O(n^3), where n is the number of data points. This is especially problematic because in reinforcement learning the learning process is online: the samples are generated by an agent/robot while it interacts with the environment, so adjustments to the solution must be made immediately and with little computational effort. The contribution of this thesis therefore falls into two parts. In the first part we formulate an efficient learning algorithm for regularization networks for solving general regression tasks, tailored specifically to the requirements of online learning. Our approach is based on the recursive least-squares procedure but can, in constant time, insert not only new data but also new basis functions into the existing model. This is made possible by the "Subset of Regressors" approximation, in which the kernel is approximated by a strongly reduced selection of training data, and by a greedy selection procedure that picks these basis elements directly from the data stream at run time. In the second part we carry this algorithm over to approximate policy evaluation via least-squares-based temporal-difference learning and integrate this building block into a complete system for the autonomous learning of optimal behaviour. Overall, we develop a highly data-efficient method particularly suited to learning problems from robotics with continuous, high-dimensional state spaces and stochastic state transitions.
In doing so we do not depend on a model of the environment, work largely independently of the dimension of the state space, achieve convergence with relatively few agent-environment interactions and, thanks to the efficient online algorithm, can also operate in the context of time-critical real-time applications. We demonstrate the power of our approach on two realistic and complex application examples: the RoboCup-Keepaway problem and the control of a (simulated) octopus tentacle.
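
The recursive least-squares building block mentioned above can be sketched compactly. The class below implements generic RLS, where each sample updates the weights in O(k^2) through the matrix-inversion lemma rather than refitting from scratch; the thesis's "Subset of Regressors" kernel approximation and greedy basis selection are omitted here.

```python
import numpy as np

class RecursiveLeastSquares:
    """Online regression in the spirit described above: each new sample
    updates the weights in O(k^2) via the matrix-inversion lemma instead of
    refitting from scratch (a generic RLS sketch)."""

    def __init__(self, k, lam=1.0):
        self.w = np.zeros(k)
        self.P = np.eye(k) / lam      # inverse "covariance" of the regressors

    def update(self, phi, y):
        Pphi = self.P @ phi
        gain = Pphi / (1.0 + phi @ Pphi)
        self.w += gain * (y - phi @ self.w)    # correct by the prediction error
        self.P -= np.outer(gain, Pphi)         # rank-1 downdate of P

# Stream of samples from a noisy linear target (illustrative).
rng = np.random.default_rng(3)
rls = RecursiveLeastSquares(k=3)
w_true = np.array([2.0, -1.0, 0.5])
for _ in range(500):
    phi = rng.normal(size=3)
    rls.update(phi, phi @ w_true + 0.1 * rng.normal())
print(rls.w)        # close to w_true
```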

Relevance: 10.00%

Abstract:

The essay, the border-crossing genre (Grenzgänger-Textsorte) par excellence, whose undefinability is itself a topos, still stands as terra incognita within translation studies. The present research aims to work out a holistic translation-studies model for translating the essay. As a fruitful alternative to the dichotomy between the hermeneutic-literary and the linguistic approach, the theoretical-methodological perspective of the work integrates philological-literary and text-linguistic lines of research. Only such a multi-perspective view can do justice to the complexity of the genre; it makes it possible to place the essay and its main text-type variants (Textsortenvarianten), from the specialist essay (fachlicher Essay) to the poetic essay (poetischer Essay), on the continuum of textual forms comprised within the dimensions (scientific, pragmatic, aesthetic) of Denkhandeln. From the productive intersection between classical and contemporary Essayforschung and the most recent text-linguistic investigations into the forms of scholarly essay writing, a definitio per proprietates of the essay is arrived at. There follows the development of a holistic translation model that capitalizes on its anthropological paradigm, on philosophical-hermeneutic reflection and on the findings of text linguistics, articulated through the recursive and interacting phases of holistic reception, poetic-hermeneutic and rhetorical-stylistic analysis, linguistic-cognitive planning, formulation and revision. The holistic approach thus outlined is then tested for its fruitfulness in application. The testing ground is a true limit case in complexity and literary quality, namely the "poetischer Essay" of the poet, essayist and translator Durs Grünbein, one of the most acclaimed voices on the contemporary scene. The practical section finally presents the previously unpublished Italian translation of Grünbein's essays Den Körper zerbrechen and Die Bars von Atlantis.