973 results for Field Admitting (one-dimensional) Local Class Field Theory
Abstract:
Conventionally taught learning practices often have difficulty keeping students motivated and engaged. Video games, however, are very successful at sustaining high levels of motivation and engagement through a set of tasks for hours without apparent loss of focus. In addition, gamers solve complex problems within a gaming environment without the fatigue or frustration they would typically feel with a comparable learning task. Based on this notion, the academic community is keen on exploring methods that can deliver deep learner engagement and has shown increased interest in adopting gamification – the integration of gaming elements, mechanics, and frameworks into non-game situations and scenarios – as a means to increase student engagement and improve information retention. Its effectiveness when applied to education has been debatable, though, as attempts have generally been restricted to one-dimensional approaches such as transposing a trivial reward system onto existing teaching materials and/or assessments. Nevertheless, a gamified, multi-dimensional, problem-based learning approach can yield improved results even when applied to a very complex and traditionally dry task like the teaching of computer programming, as shown in this paper. The presented quasi-experimental study used a combination of instructor feedback, a real-time sequence of scored quizzes, and live coding to deliver a fully interactive learning experience. More specifically, the “Kahoot!” Classroom Response System (CRS), the classroom version of the TV game show “Who Wants To Be A Millionaire?”, and Codecademy’s interactive platform formed the basis for a learning model applied to an entry-level Python programming course. Students were thus allowed to experience multiple interlocking methods similar to those commonly found in a top-quality game experience.
To assess gamification’s impact on learning, empirical data from the gamified group were compared to those from a control group that was taught through a traditional learning approach, similar to the one used with previous cohorts. Despite this being a relatively small-scale study, the results and findings for a number of key metrics, including attendance, downloading of course material, and final grades, were encouraging and showed that the gamified approach was motivating and enriching for both students and instructors.
Abstract:
Performance-based design (PBD), in a deterministic approach, characterizes performance objectives with respect to desired performance levels. The performance objectives are then associated with the damage state and the established level of seismic risk. Despite this rational approach, its application remains difficult. Reliable tools for capturing the evolution, distribution, and quantification of damage are therefore needed. Moreover, all phenomena related to nonlinearity (materials and deformations) must also be taken into account. This research thus shows how damage mechanics can contribute to solving this problem through an adaptation of the modified compression field theory and other complementary theories. The proposed formulation, adapted for monotonic, cyclic, and pushover loading, makes it possible to account for nonlinear shear effects coupled with flexural and axial load mechanisms. The formulation is specifically applied to the nonlinear analysis of concrete structural elements subjected to non-negligible shear effects. This new approach, implemented in EfiCoS (a damage-mechanics-based finite element program), is presented here along with its modeling criteria. Calibrations of the new approach, comparing predictions with experimental data, were carried out for reinforced concrete shear walls as well as for bridge beams and piers where shear effects must be taken into account.
This new, improved version of the EfiCoS software proved capable of accurately evaluating the parameters associated with global performance, such as displacements, system strength, effects related to the cyclic response, and the quantification, evolution, and distribution of damage. Remarkable results were also obtained regarding the proper detection of engineering limit states such as cracking, unit strains, cover spalling, core crushing, local yielding of reinforcing bars, and system degradation, among others. As a practical tool for applying PBD, relationships between predicted damage indices and performance levels were obtained and expressed in the form of charts and tables. The charts were developed as a function of relative displacement and displacement ductility. A specific table was developed to relate engineering limit states, damage, relative displacement, and traditional performance levels. The results showed excellent agreement with the experimental data, making the proposed formulation and the new version of EfiCoS powerful tools for applying the PBD methodology in a deterministic approach.
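The scalar core of a damage-mechanics constitutive law of the kind this abstract builds on can be sketched in a few lines; here a Mazars-type exponential evolution law is used purely for illustration (the parameter values, and the law itself, are assumptions for the sketch, not the EfiCoS formulation):

```python
import math

def damaged_stress(strain, E=30e9, strain_0=1e-4, A=0.8, B=1e4):
    """Scalar damage-mechanics sketch: the secant stress is
    sigma = (1 - D) * E * eps, with a Mazars-type exponential evolution law
    for the damage variable D once the threshold strain strain_0 is exceeded.
    Parameter values (and the law itself) are purely illustrative.
    Returns (stress, D)."""
    eps = abs(strain)
    if eps <= strain_0:
        D = 0.0  # below the threshold the material stays elastic and intact
    else:
        D = 1.0 - strain_0 * (1.0 - A) / eps - A * math.exp(-B * (eps - strain_0))
    D = min(max(D, 0.0), 1.0)  # damage is bounded: 0 (intact) to 1 (fully damaged)
    return (1.0 - D) * E * strain, D
```

The damage variable D degrades the secant stiffness, which is what lets a finite element program of this kind track the evolution and distribution of damage element by element.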
Abstract:
This paper reviews the construction of quantum field theory on a 4-dimensional spacetime by combinatorial methods, and discusses the recent developments in the direction of a combinatorial construction of quantum gravity.
Abstract:
Single-walled carbon nanotubes (SWNTs) have been studied as a prominent class of high-performance electronic materials for next-generation electronics. Their geometry-dependent electronic structure, ballistic transport and low power dissipation due to quasi-one-dimensional transport, and their capability of carrying high current densities are some of the main reasons for the optimistic expectations for SWNTs. However, device applications of individual SWNTs have been hindered by uncontrolled variations in characteristics and a lack of scalable methods to integrate SWNTs into electronic devices. One relatively new direction in SWNT electronics, which avoids these issues, is using arrays of SWNTs, where the ensemble average may provide uniformity from device to device, and this new breed of electronic material can be integrated into electronic devices in a scalable fashion. This dissertation describes (1) methods for characterization of SWNT arrays, (2) how the electrical transport in these two-dimensional arrays depends on length scales and spatial anisotropy, (3) the interaction of aligned SWNTs with the underlying substrate, and (4) methods for scalable integration of SWNT arrays into electronic devices. The electrical characterization of SWNT arrays has been realized with polymer electrolyte-gated SWNT thin film transistors (TFTs). Polymer electrolyte-gating addresses many technical difficulties inherent to electrical characterization by gating through oxide dielectrics. Having shown that polymer electrolyte-gating can be successfully applied to SWNT arrays, we have studied the length-scaling dependence of electrical transport in SWNT arrays. Ultrathin films formed by sub-monolayer surface coverage of SWNT arrays are very interesting systems in terms of the physics of two-dimensional electronic transport. We have observed that they behave qualitatively differently from classical conducting films, which obey Ohm’s law.
The resistance of an ultrathin film of SWNT arrays is indeed non-linear in the length of the film across which the transport occurs. More interestingly, a transition between conducting and insulating states is observed at a critical surface coverage, called the percolation limit. The surface coverage of conducting SWNTs can be manipulated by turning the semiconductors in the SWNT array on and off, which is the operating principle of SWNT TFTs. The percolation limit also depends on the length and the spatial orientation of the SWNTs. We have also observed that the percolation limit increases abruptly for aligned arrays of SWNTs, which are grown on single-crystal quartz substrates. In this dissertation, we also compare our experimental results with a two-dimensional stick network model, which gives a good qualitative picture of the electrical transport in SWNT arrays in terms of surface coverage, length scaling, and spatial orientation, and briefly discuss the validity of this model. However, the electronic properties of SWNT arrays are not determined by geometrical arguments alone. The contact resistances at the nanotube-nanotube and nanotube-electrode (bulk metal) interfaces, and interactions with local chemical groups and the underlying substrates, are among the other issues related to electronic transport in SWNT arrays. Different aspects of these factors have been studied in detail by many groups. In fact, I have also included a brief discussion of electron injection into semiconducting SWNTs by polymer dopants. In addition, we have compared the substrate-SWNT interactions for isotropic (in two dimensions) arrays of SWNTs grown on Si/SiO2 substrates and horizontally (on substrate) aligned arrays of SWNTs grown on single-crystal quartz substrates.
The anisotropic quartz-SWNT interactions associated with the quartz lattice, which allow near-perfect horizontal alignment on the substrate along a particular crystallographic direction, are examined by Raman spectroscopy and shown to lead to uniaxial compressive strain in as-grown SWNTs on single-crystal quartz. This is the first experimental demonstration of the hard-to-achieve uniaxial compression of SWNTs. Temperature dependence of the Raman G-band spectra along the length of individual nanotubes reveals that the compressive strain is non-uniform and can locally exceed 1% at room temperature. Effects of device fabrication steps on the non-uniform strain are also examined, and implications for electrical performance are discussed. Based on our findings, discussions of device performance and design are included in this dissertation. The channel-length dependences of device mobilities and on/off ratios are included for SWNT TFTs. The time response of polymer electrolyte-gated SWNT TFTs has been measured to be ~300 Hz, and a proof-of-concept logic inverter has been fabricated using polymer electrolyte-gated SWNT TFTs for macroelectronic applications. Finally, I dedicate a chapter to scalable device designs based on aligned arrays of SWNTs, including a design for SWNT memory devices.
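The two-dimensional stick network picture behind the percolation limit can be sketched with a small Monte Carlo: random sticks are dropped in a box and a union-find structure detects whether a connected cluster bridges the two electrodes. This is a minimal sketch under stated assumptions (the stick counts, lengths, and the strict segment-intersection test are illustrative, not the parameters of the dissertation's model):

```python
import random
import math

def segments_intersect(p1, p2, p3, p4):
    # Orientation-based test for proper intersection of segments p1p2 and p3p4
    # (collinear overlaps are ignored, which is fine for random sticks).
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def percolates(n_sticks, stick_len, size=1.0, max_angle=math.pi, rng=None):
    """Drop n_sticks of length stick_len with random centers and orientations
    in a size x size box; return True if a connected cluster of touching
    sticks bridges the left and right edges (the 'electrodes')."""
    rng = rng or random.Random(0)
    sticks = []
    for _ in range(n_sticks):
        x, y = rng.uniform(0, size), rng.uniform(0, size)
        t = rng.uniform(0, max_angle)
        dx, dy = 0.5 * stick_len * math.cos(t), 0.5 * stick_len * math.sin(t)
        sticks.append(((x - dx, y - dy), (x + dx, y + dy)))
    # Union-find over sticks plus two virtual electrode nodes L and R.
    parent = list(range(n_sticks + 2))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    L, R = n_sticks, n_sticks + 1
    for i, (a, b) in enumerate(sticks):
        if min(a[0], b[0]) <= 0.0:
            union(i, L)          # stick touches the left electrode
        if max(a[0], b[0]) >= size:
            union(i, R)          # stick touches the right electrode
        for j in range(i):
            if segments_intersect(a, b, *sticks[j]):
                union(i, j)      # crossing sticks are electrically connected
    return find(L) == find(R)
```

Sweeping the stick density with such a model reproduces the qualitative picture in the text: below a critical coverage no spanning cluster exists (insulating), above it one almost always does (conducting), and restricting `max_angle` (alignment) shifts the threshold.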
Abstract:
Current space exploration has transpired through the use of chemical rockets, and they have served us well, but they have their limitations. Exploration of the outer solar system, Jupiter and beyond, will most likely require a new generation of propulsion systems. One potential technology class for providing spacecraft propulsion and power systems involves thermonuclear fusion plasma systems. In this class it is well accepted that d-3He fusion is the most promising of the fuel candidates for spacecraft applications, as the 14.7 MeV protons carry up to 80% of the total fusion power while the α particles have energies less than 4 MeV. The other minor fusion products from secondary d-d reactions, consisting of 3He, n, p, and 3H, also have energies less than 4 MeV. Furthermore, there are two main fusion subsets, namely Magnetic Confinement Fusion devices and Inertial Electrostatic Confinement (IEC) Fusion devices. Magnetic Confinement Fusion devices are characterized by complex geometries and prohibitive structural mass, compromising spacecraft use at this stage of exploration. While generating energy from a lightweight and reliable fusion source is important, another critical issue is harnessing this energy into usable power and/or propulsion. IEC fusion is a method of fusion plasma confinement that uses a series of biased electrodes to accelerate a uniform spherical beam of ions into a hollow cathode, typically comprised of a gridded structure with high transparency. The inertia of the imploding ion beam compresses the ions at the center of the cathode, increasing the density to the point where fusion occurs. Since the velocity distributions of fusion particles in an IEC are essentially isotropic and carry no net momentum, a means of redirecting the velocity of the particles is necessary to efficiently extract energy and provide power or create thrust.
There are classes of advanced-fuel fusion reactions where direct energy conversion based on electrostatically biased collector plates is impossible due to potential limits, material structure limitations, and IEC geometry. Thermal conversion systems are also inefficient for this application. A method of converting the isotropic IEC output into a collimated flow of fusion products solves these issues and allows direct energy conversion. An efficient traveling-wave direct energy converter has been proposed and studied by Momota and Shu, and further evaluated with numerical simulations by Ishikawa and others. One of the conventional methods of collimating charged particles is to surround the particle source with an applied magnetic channel. Charged particles are trapped and move along the lines of flux. By gradually introducing expanding lines of force along the magnetic channel, the velocity component perpendicular to the lines of force is transferred to the parallel one. However, efficient operation of the IEC requires a null magnetic field at the core of the device. In order to achieve this, Momota and Miley have proposed a pair of magnetic coils anti-parallel to the magnetic channel, creating the null hexapole magnetic field region necessary for the IEC fusion core. Numerically, collimation of 300 eV electrons without a stabilization coil was demonstrated to approach 95% at a profile corresponding to Vsolenoid = 20.0 V, Ifloating = 2.78 A, Isolenoid = 4.05 A, while collimation of electrons with the stabilization coil present was demonstrated to reach 69% at a profile corresponding to Vsolenoid = 7.0 V, Istab = 1.1 A, Ifloating = 1.1 A, Isolenoid = 1.45 A.
Experimentally, collimation of electrons with the stabilization coil present was demonstrated to be 35% at 100 eV, reaching a peak of 39.6% at 50 eV with a profile corresponding to Vsolenoid = 7.0 V, Istab = 1.1 A, Ifloating = 1.1 A, Isolenoid = 1.45 A, and collimation of 300 eV electrons without a stabilization coil was demonstrated to approach 49% at a profile corresponding to Vsolenoid = 20.0 V, Ifloating = 2.78 A, Isolenoid = 4.05 A. 6.4% of the 300 eV electrons’ initial velocity is directed to the collector plates. The remaining electrons are trapped by the collimator’s magnetic field. These particles oscillate around the null-field region several hundred times and eventually escape to the collector plates. At a solenoid voltage profile of 7 V, 100 eV electrons are collimated with wall and perpendicular-component losses of 31%. Increasing the electron energy beyond 100 eV increases the wall losses by 25% at 300 eV. Ultimately it was determined that a field strength deriving from 9.5 MAT/m would be required to collimate the 14.7 MeV fusion protons from a d-3He fueled IEC fusion core. The concept of the proton collimator has been shown to be effective at transforming an isotropic source into a collimated flow of particles ripe for direct energy conversion.
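The transfer of perpendicular to parallel velocity in the expanding magnetic channel can be sketched with the adiabatic invariant of a charged particle in a slowly varying field. This is a minimal single-particle sketch, assuming conservation of the magnetic moment mu = v_perp^2/(2B) and of kinetic energy (the function and its arguments are illustrative, not quantities from the study):

```python
import math

def collimate(v_perp0, v_par0, B0, B1):
    """Adiabatic-invariant sketch of the expanding magnetic channel: the
    magnetic moment mu = v_perp**2 / (2*B) is conserved as a trapped particle
    drifts into the weaker field B1 < B0, so perpendicular velocity is
    transferred to the parallel component while kinetic energy is conserved.
    Returns (v_perp1, v_par1); all quantities are in arbitrary units."""
    energy = v_perp0 ** 2 + v_par0 ** 2      # twice the kinetic energy per unit mass
    v_perp1_sq = v_perp0 ** 2 * (B1 / B0)    # from mu conservation
    v_par1 = math.sqrt(energy - v_perp1_sq)  # from energy conservation
    return math.sqrt(v_perp1_sq), v_par1
```

For example, a particle entering a hundredfold weaker field keeps only 1% of its perpendicular energy, so an initially isotropic velocity becomes almost fully parallel to the field lines, which is the collimation effect the converter exploits.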
Abstract:
In this dissertation I draw a connection between quantum adiabatic optimization, spectral graph theory, heat diffusion, and sub-stochastic processes through the operators that govern these processes and their associated spectra. In particular, we study Hamiltonians which have recently become known as ``stoquastic'' or, equivalently, the generators of sub-stochastic processes. The operators corresponding to these Hamiltonians are of interest in all of the settings mentioned above. I predominantly explore the connection between the spectral gap of an operator, i.e., the difference between the two lowest energies of that operator, and certain equilibrium behavior. In the context of adiabatic optimization, this corresponds to the likelihood of solving the optimization problem of interest. I will provide an instance of an optimization problem that is easy to solve classically, but leaves open the possibility of being difficult adiabatically. Aside from this concrete example, the work in this dissertation is predominantly mathematical and we focus on bounding the spectral gap. Our primary tool for doing this is spectral graph theory, which provides the most natural approach to this task by simply considering Dirichlet eigenvalues of subgraphs of host graphs. I will derive tight bounds for the gap of one-dimensional, hypercube, and general convex subgraphs. The techniques used will also adapt methods recently used by Andrews and Clutterbuck to prove the long-standing ``Fundamental Gap Conjecture''.
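In the one-dimensional case, the Dirichlet eigenvalues of a path subgraph are available in closed form, so the spectral gap can be written down directly. The sketch below states the closed form and cross-checks it with a stdlib-only Sturm-sequence eigenvalue counter for the same tridiagonal matrix (function names and the normalization of the Laplacian are illustrative conventions, not those of the dissertation):

```python
import math

def dirichlet_eigenvalue(k, n):
    """k-th Dirichlet eigenvalue of the path graph on n interior vertices.
    The Dirichlet Laplacian is the n x n tridiagonal matrix with 2 on the
    diagonal and -1 off it; its spectrum is known in closed form:
    lambda_k = 4 * sin^2(k * pi / (2 * (n + 1)))."""
    return 4.0 * math.sin(k * math.pi / (2.0 * (n + 1))) ** 2

def dirichlet_path_gap(n):
    """Spectral gap: difference between the two lowest Dirichlet eigenvalues."""
    return dirichlet_eigenvalue(2, n) - dirichlet_eigenvalue(1, n)

def count_below(x, n):
    """Sturm-sequence count of eigenvalues of the same tridiagonal matrix
    lying strictly below x -- a stdlib-only numerical cross-check of the
    closed form (valid for x that is not itself an eigenvalue)."""
    count, d = 0, 0.0
    for i in range(n):
        # LDL^T pivot recurrence for the shifted matrix T - x*I.
        d = (2.0 - x) if i == 0 else (2.0 - x) - 1.0 / d
        if d < 0.0:
            count += 1
    return count
```

For large n the gap shrinks like 3*pi^2/n^2, the kind of polynomially small gap whose tight bounds the dissertation pursues for more general convex subgraphs.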
Abstract:
In this work the split-field finite-difference time-domain method (SF-FDTD) has been extended to the analysis of two-dimensionally periodic structures with third-order nonlinear media. The accuracy of the method is verified by comparisons with the nonlinear Fourier Modal Method (FMM). Once the formalism has been validated, examples of one- and two-dimensional nonlinear gratings are analysed. Regarding the 2D case, the resonance shift in resonant waveguides is corroborated. Here, not only the scalar Kerr effect is considered; the tensorial nature of the third-order nonlinear susceptibility is also included. The use of nonlinear materials in this kind of device makes it possible to design tunable devices such as variable band filters. However, the third-order nonlinear susceptibility is usually small, and high intensities are needed in order to trigger the nonlinear effect. Here, a one-dimensional CBG is analysed in both the linear and nonlinear regimes, and the shifting of the resonance peaks in both TE and TM is obtained numerically. The application of a numerical method based on the finite-difference time-domain method makes it possible to analyse this issue in the time domain; bistability curves are thus also computed by means of the numerical method. These curves show how the nonlinear effect modifies the properties of the structure as a function of a variable input pump field. When taking the nonlinear behaviour into account, the estimation of the electric field components becomes more challenging. In this paper, we present a set of acceleration strategies based on parallel software and hardware solutions.
Abstract:
The original, or regular, Temperley-Lieb algebras appear in many two-dimensional statistical lattice models: the Ising, Potts, dimer, and Fortuin-Kasteleyn models, among others. The Hilbert space of the quantum Hamiltonian corresponding to each of these models is a module for this algebra, and its representation theory can be used to ease the decomposition of the space into blocks; the diagonalization of the Hamiltonian is then greatly simplified. The dilute Temperley-Lieb algebra plays a similar role for dilute statistical models, for example a lattice model where some sites may be empty; its representations can then be used to simplify the analysis of the model, as in the original case. However, this requires knowledge of the modules of this algebra and of their structure; a first paper gives a complete list of the indecomposable projective modules of the dilute algebra, and a second uses them to construct a complete list of all indecomposable modules of the original and dilute algebras. The structure of the modules is described in terms of composition factors and of their homomorphism groups. The fusion product on the original Temperley-Lieb algebra makes it possible to "multiply" two modules over this algebra to obtain another one. This product has been shown to be useful in the diagonalization of Hamiltonians and, according to certain conjectures, it could also be used to study the behaviour of lattice models in the continuum limit. A third paper constructs a generalization of the fusion product for the dilute algebras and then presents a method for computing it.
The fusion product is then computed for the most common classes of indecomposable modules for both families, original and dilute, adding to the incomplete list of fusion products already computed by other researchers for the original family. Finally, it turns out that the Temperley-Lieb algebras can be associated with a braided monoidal category whose structure is compatible with the fusion product described above. The fourth paper computes this braiding explicitly, first on the category of algebras and then on the category of modules over these algebras. It also shows how this braiding yields solutions of the Yang-Baxter equations, which can then be used to construct integrable lattice models.
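For reference, the regular Temperley-Lieb algebra $TL_n(\beta)$ discussed above is the associative algebra generated by the identity and $e_1, \dots, e_{n-1}$, subject to the standard defining relations (given here in one common convention; the dilute family enlarges this presentation):

```latex
e_i^2 = \beta\, e_i, \qquad
e_i e_{i\pm 1} e_i = e_i, \qquad
e_i e_j = e_j e_i \quad (|i-j| \ge 2).
```

In the lattice models listed above, the Hamiltonian is typically a sum $H = -\sum_i e_i$ acting on the Hilbert space, which is why the block decomposition of that space as a $TL_n(\beta)$-module simplifies its diagonalization.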
Abstract:
Humans use their grammatical knowledge in more than one way. On one hand, they use it to understand what others say. On the other hand, they use it to say what they want to convey to others (or to themselves). In either case, they need to assemble the structure of sentences in a systematic fashion, in accordance with the grammar of their language. Despite the fact that the structures that comprehenders and speakers assemble are systematic in an identical fashion (i.e., obey the same grammatical constraints), the two ‘modes’ of assembling sentence structures might or might not be performed by the same cognitive mechanisms. Currently, the field of psycholinguistics implicitly adopts the position that they are supported by different cognitive mechanisms, as is evident from the fact that most psycholinguistic models seek to explain either comprehension or production phenomena. The potential existence of two independent cognitive systems underlying linguistic performance doubles the problem of linking the theory of linguistic knowledge and the theory of linguistic performance, making the integration of linguistics and psycholinguistics harder. This thesis thus aims to unify the structure building system in comprehension, i.e., the parser, and the structure building system in production, i.e., the generator, into one, so that the linking theory between knowledge and performance can also be unified into one. I will discuss and unify both existing and new data pertaining to how structures are assembled in understanding and speaking, and attempt to show that the unification of parsing and generation is at least a plausible research enterprise. In Chapter 1, I will discuss the previous and current views on how parsing and generation are related to each other. I will outline the challenges for the view that the parser and the generator are the same cognitive mechanism. This single system view is discussed and evaluated in the rest of the chapters.
In Chapter 2, I will present new experimental evidence suggesting that the grain size of the pre-compiled structural units (henceforth simply structural units) is rather small, contrary to some models of sentence production. In particular, I will show that the internal structure of the verb phrase in a ditransitive sentence (e.g., The chef is donating the book to the monk) is not specified at the onset of speech, but is specified before the first internal argument (the book) needs to be uttered. I will also show that this timing of structural processes with respect to the verb phrase structure is earlier than that of the lexical processes for the verb's internal arguments. These two results in concert show that the size of structure building units in sentence production is rather small, contrary to some models of sentence production, yet structural processes still precede lexical processes. I argue that this view of generation resembles the widely accepted model of parsing that utilizes both top-down and bottom-up structure building procedures. In Chapter 3, I will present new experimental evidence suggesting that the structural representation strongly constrains the subsequent lexical processes. In particular, I will show that conceptually similar lexical items interfere with each other only when they share the same syntactic category in sentence production. A mechanism that I call syntactic gating will be proposed, and this mechanism characterizes how the structural and lexical processes interact in generation. I will present two Event Related Potential (ERP) experiments showing that lexical retrieval in (predictive) comprehension is also constrained by syntactic categories. I will argue that the syntactic gating mechanism is operative both in parsing and generation, and that the interaction between structural and lexical processes in both parsing and generation can be characterized in the same fashion.
In Chapter 4, I will present a series of experiments examining the timing at which verbs’ lexical representations are planned in sentence production. It will be shown that verbs are planned before the articulation of their internal arguments, regardless of the target language (Japanese or English) and regardless of the sentence type (active object-initial sentence in Japanese, passive sentences in English, and unaccusative sentences in English). I will discuss how this result sheds light on the notion of incrementality in generation. In Chapter 5, I will synthesize the experimental findings presented in this thesis and in previous research to address the challenges to the single system view I outlined in Chapter 1. I will then conclude by presenting a preliminary single system model that can potentially capture both the key sentence comprehension and sentence production data without assuming distinct mechanisms for each.
Abstract:
The present work was carried out with the aim of gaining a complete view of leadership theories, conceiving leadership as a process, and examining the various ways it is applied in contemporary organizations. The topic is approached from the organizational perspective, an equally complex world, without ignoring its importance in other spheres such as education, politics, or the administration of the state. Its focus stems from the academic programme of which it is the culmination, and it is framed within the constitutional perspective of the Colombian Political Charter, which recognizes the capital importance of economic activity and private initiative in the creation of enterprises. The various visions of leadership have been applied in different ways in contemporary organizations and have produced diverse results. Today it is impossible to imagine an organization that has not defined its form of leadership, and consequently a multitude of theories converge in the business field, without it being possible to claim that any single one of them enables adequate management and the fulfilment of an organization's mission objectives. For this reason leadership has come to be conceived as a complex function, in a world where organizations themselves are characterized not only by the complexity of their actions and their makeup, but also because this complexity belongs to the world of globalization. Organizations, conceived metaphorically as machines, manage to reconstitute their structures as they interact with others in the globalized world. Adapting to changing circumstances makes organizations conglomerates in permanent dynamism and evolution. In this sphere it can be said that leadership is also complex, and that transformational leadership is the approach that comes closest to the sense of this complexity.
Abstract:
This report was produced in the scope of the Supervised Teaching Practice curricular unit, which is part of the Master's Degree in Teaching Portuguese for the 3rd stage of Primary Education and Secondary Education, and Spanish Foreign Language Teaching for Primary and Secondary Education, under the supervision of Professor Ângela Maria Franco Martins Coelho de Paiva Balça. In its basic essence, this is a reflective and descriptive paper about practices applied and performed during the 2015-2016 school year to teach Portuguese, in two tenth grade classes, and Spanish as a Foreign Language, in one seventh grade class, at the Rainha Santa Isabel Secondary School of Estremoz. Furthermore, it outlines the entire process carried out during the two years of the Master's Degree, which provided and led to review, change, breakthrough, and advancement regarding concepts, ideas, assumptions, and theories, whether pre-existing or more recent. This is the final product and the contribution towards development and improvement in personal and professional terms. Through knowledge of the theoretical literature and its application in practice, reflection commits us to a practice grounded in and supported by all the worldwide, European, and Portuguese normative and reference documentation, for the most complete and effective exercise of the teaching profession.
More than a report, this is an evaluation of transformation in teaching performance that structurally examines the following: observation and its recording; observation in the field; lesson design; guidance and monitoring; the teaching component of the lessons taught (analysis, learning, and improvements) with a reflective element based on the results of the surveys administered to the Portuguese and Spanish classes; and, finally, a reflective approach to the formative assessment of student learning carried out with the tenth grade classes in the Portuguese course.
Abstract:
This dissertation explores the entanglement between the visionary capacity of feminist theory to shape sustainable futures and the active contribution of feminist speculative fiction to the conceptual debate about the climate crisis. Over the last few years, increasing critical attention has been paid to ecofeminist perspectives on climate change, which identify the patriarchal domination of nature, considered to go hand in hand with the oppression of women, as a core cause of the climate crisis. What remains to be thoroughly scrutinised is the linkage between ecofeminist theories and other ethical stances capable of countering colonising epistemologies of mastery and dominion over nature. This dissertation intervenes in the debate about the master narrative of the Anthropocene, and about the one-dimensional perspective that often characterises its literary representations, from a feminist perspective that also aims at decolonising the imagination; it looks at literary texts that consider the patriarchal domination of nature in its intersections with other injustices that play out within the Anthropocene, with a particular focus on race, colonialism, and capitalism. After an overview of the linkages between gender and climate change and between feminism and the environmental humanities, it introduces the genre of climate fiction, examining its main tropes. In an attempt to find alternatives to the mainstream narrative of the Anthropocene (namely to its gender-neutrality, colour-blindness, and anthropocentrism), it focuses on contemporary works of speculative fiction by four Anglophone women authors that particularly address the inequitable impacts of climate change experienced not only by women, but also by sexualised, racialised, and naturalised Others.
These texts were chosen because of their specific engagement with the relationship between climate change, global capitalism, and an uncritical trust in techno-fixes on the one hand, and the structural inequalities generated by patriarchy, racism, and intersecting systems of oppression on the other.
Abstract:
In this thesis I present a new triple connection we found between quantum integrability, N=2 supersymmetric gauge theories, and black-hole perturbation theory. I use the approach of the ODE/IM correspondence between Ordinary Differential Equations (ODE) and Integrable Models (IM), first to connect basic integrability functions - Baxter’s Q, T, and Y functions - to the gauge-theory periods. This fundamental identification yields several new results for both theories, for example: an exact nonlinear integral equation (Thermodynamic Bethe Ansatz, TBA) for the gauge periods; an interpretation of the integrability functional relations as new exact R-symmetry relations for the periods; and new formulas for the local integrals of motion in terms of gauge periods. I develop this in full detail for the SU(2) gauge theory with Nf=0,1,2 matter flavours. Again through the ODE/IM correspondence, I connect the mathematically precise definition of quasinormal modes of black holes (which play an important role in gravitational-wave observations) to quantization conditions on the Q and Y functions. In this way I also give a mathematical explanation of the recently found connection between quasinormal modes and N=2 supersymmetric gauge theories. Moreover, a new, simple, and effective method to compute the quasinormal modes numerically follows - the TBA - which I compare with other standard methods. The spacetimes for which I show this in full detail are, in the simplest Nf=0 case, the D3 brane and, in the Nf=1,2 cases, generalizations of the extremal Reissner-Nordström (charged) black hole. I then begin to treat the Nf=3,4 theories as well and argue how our integrability-gauge-gravity correspondence can generalize to other types of black holes in either asymptotically flat (Nf=3) or Anti-de-Sitter (Nf=4) spacetime. Finally, I begin to show the extension to a four-fold correspondence that also includes Conformal Field Theory (CFT), through the renowned AdS/CFT correspondence.
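For orientation, TBA equations of the kind invoked in the abstract above take the following generic schematic form (with masses m_a, scale R, kernels φ_ab, and pseudoenergies ε_a; the precise identification of these objects with the gauge periods is specific to the thesis and not reproduced here):

```latex
\varepsilon_a(\theta) = m_a R \cosh\theta
  \;-\; \sum_b \int_{-\infty}^{+\infty} \frac{d\theta'}{2\pi}\,
  \varphi_{ab}(\theta-\theta')\,
  \ln\!\left(1 + e^{-\varepsilon_b(\theta')}\right)
```

Solving this coupled system iteratively is what makes the TBA usable as a numerical method, e.g. for the quasinormal-mode computations mentioned above.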
Abstract:
A three-dimensional Direct Finite Element procedure is presented here which takes into account most of the factors affecting the interaction problem of the dam-water-foundation system, while keeping the computational cost at a reasonable level by introducing some simplifying hypotheses. A truncated domain is defined, and the dynamic behaviour of the system is treated as a wave-scattering problem in which the presence of the dam perturbs an original free-field system. The truncated boundaries of the rock foundation are enclosed by a set of free-field one-dimensional and two-dimensional systems which transmit the effective forces to the main model and apply absorbing viscous boundaries to ensure radiation damping. The water domain is treated as an added mass moving with the dam. A strategy is proposed to keep the viscous dampers at the boundaries unloaded during the initial phases of the analysis, when the static loads are initialised, thus avoiding spurious displacements. Particular attention is given to the nonlinear behaviour of the rock foundation, with concentrated plasticity along the natural discontinuities of the rock mass, immersed in an otherwise linear elastic medium with Rayleigh damping. The entire procedure is implemented in the commercial software Abaqus®, whose base code is enriched with specific user subroutines where needed. All the extra coding is attached to the Thesis and tested against analytical results and simple examples. Possible rock-wedge instabilities induced by intense ground motion, which are not easily investigated within a comprehensive model of the dam-water-foundation system, are treated separately with a simplified decoupled dynamic approach derived from the classical Newmark method, integrated with FE calculation of the dam thrust on the wedges during the earthquake.
Both of the described approaches are applied to the case study of the Ridracoli arch-gravity dam (Italy) in order to investigate its seismic response to the Maximum Credible Earthquake (MCE) under full-reservoir conditions.
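As a minimal sketch of the decoupled wedge analysis mentioned above, the classical Newmark rigid-block scheme can be written in a few lines. The yield acceleration and the input accelerogram below are hypothetical placeholders, and the thesis's actual procedure additionally feeds in the FE-computed dam thrust on the wedges, which this sketch omits:

```python
import numpy as np

def newmark_displacement(accel, dt, a_yield):
    """Classical Newmark sliding-block analysis (one-directional sliding).

    accel   : ground acceleration time history [m/s^2]
    dt      : time step [s]
    a_yield : yield acceleration of the wedge [m/s^2] (hypothetical value)

    Returns the cumulative permanent displacement [m].
    """
    v = 0.0  # relative sliding velocity of the block
    d = 0.0  # accumulated permanent displacement
    for a in accel:
        # sliding starts when ground acceleration exceeds the yield value,
        # and continues while the block still has relative velocity
        if v > 0.0 or a > a_yield:
            v += (a - a_yield) * dt   # relative acceleration drives the slide
            v = max(v, 0.0)           # sliding stops when velocity vanishes
            d += v * dt
    return d

# Illustrative use: a 1-second constant pulse of 2 m/s^2 against a
# 1 m/s^2 yield acceleration (purely synthetic input).
disp = newmark_displacement(np.full(100, 2.0), dt=0.01, a_yield=1.0)
```

In practice the accelerogram would be the MCE record at the wedge location, and the yield acceleration would be updated with the time-varying dam thrust.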
Abstract:
Land subsidence in urban areas represents a widespread geological hazard and a pressing challenge for modern society. This research focuses on the subsidence process affecting the city of Bologna (Italy). Since the 1960s, Bologna has experienced ground deformation due to aquifer overexploitation, which peaked during the 1970s with rates of 10 cm/year. Although these rates generally declined over the subsequent decades thanks to groundwater regulation policies, recent data underscore a substantial resurgence of subsidence. To reconstruct the subsurface stratigraphic architecture of Bologna’s urban area and generate a 3D geological model, a multidisciplinary approach centred on a stratigraphic analysis based on the lithofacies criterion was adopted. The convergence of the analyses within this framework resulted in partitioning the study area into three geological domains exhibiting distinct morphological features and depositional stacking patterns. Subsequently, since long-term data are crucial for a comprehensive understanding of ongoing subsidence, a methodology was developed to generate cumulative ground-displacement time series and maps by integrating ground-based and remotely sensed measurements. While the reconstructed long-term subsidence field aligns consistently with the primary geological variations summarised in the 3D model, the generated cumulative displacement curves systematically match pluriannual trends observed in groundwater-level and pumping monitoring data. Lastly, to evaluate the observed relationships from a geotechnical perspective, a series of one-dimensional subsidence calculations was conducted considering the mechanical properties of the investigated deposits and piezometric data. These analyses provided valuable insight into the overall mechanical behaviour of the existing soils, as well as the post-pumping groundwater-level and pore-pressure distributions, consistent with field data.
The methodological approach employed enables a comprehensive analysis of land subsidence in urban areas, allowing the exploration of individual factors governing the deformation process and their interactions, even within complex stratigraphic and hydrogeological environments such as Bologna’s urban area.
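A minimal sketch of a one-dimensional subsidence calculation of the kind described above, assuming hypothetical normally consolidated layers and the standard Terzaghi log-linear compression law; all parameter values are illustrative placeholders, not the study's data:

```python
import math

def layer_settlement(thickness, e0, Cc, sigma0, d_sigma):
    """Primary consolidation settlement [m] of one normally consolidated
    layer: s = H * Cc / (1 + e0) * log10((sigma0 + d_sigma) / sigma0).

    thickness : layer thickness H [m]
    e0        : initial void ratio
    Cc        : compression index
    sigma0    : initial vertical effective stress [kPa]
    d_sigma   : effective-stress increase from groundwater drawdown [kPa]
    """
    return thickness * Cc / (1.0 + e0) * math.log10((sigma0 + d_sigma) / sigma0)

def total_subsidence(layers, d_sigma):
    """Sum the settlements over a layered soil column.

    layers  : list of (thickness, e0, Cc, sigma0) tuples (hypothetical values)
    d_sigma : stress increase, roughly gamma_w * head drop when the
              pore-pressure change is fully transmitted to the layer
    """
    return sum(layer_settlement(H, e0, Cc, s0, d_sigma)
               for H, e0, Cc, s0 in layers)

# Illustrative column with two clay layers and a 10 m head drop
# (d_sigma ~ 9.81 kPa/m * 10 m); purely synthetic input.
s = total_subsidence([(5.0, 0.9, 0.30, 100.0),
                      (3.0, 0.8, 0.25, 150.0)], d_sigma=98.1)
```

A full analysis of the Bologna case would additionally account for overconsolidation, time-dependent consolidation, and the measured piezometric histories, as the abstract indicates.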