872 results for Space and time.
Abstract:
Key technology applications such as magnetoresistive sensors or Magnetic Random Access Memory (MRAM) require reproducible magnetic switching mechanisms, i.e. predefined remanent states. At the same time, advanced magnetic recording schemes push the magnetic switching time into the gyromagnetic regime. Within the Landau-Lifshitz-Gilbert formalism, the relevant questions concern magnetic excitations (eigenmodes) and damping processes in confined magnetic thin-film structures.

The objects of study in this thesis are antiparallel-pinned synthetic spin valves, which are extensively used as read heads in today's magnetic storage devices. In such devices a ferromagnetic layer of high coercivity is stabilized via an exchange-bias field by an antiferromagnet. A second hard-magnetic layer, separated by a non-magnetic spacer of defined thickness, aligns antiparallel to the first. The orientation of the magnetization vector in the third ferromagnetic NiFe layer of low coercivity - the free layer - is then sensed via the Giant MagnetoResistance (GMR) effect. This thesis reports results of element-specific Time-Resolved Photo-Emission Electron Microscopy (TR-PEEM) used to image the magnetization dynamics of the free layer alone via X-ray Magnetic Circular Dichroism (XMCD) at the Ni L3 X-ray absorption edge.

The ferromagnetic systems, i.e. micron-sized spin-valve stacks with a typical GMR ratio of ΔR/R = 15% and Permalloy single layers, were deposited onto the pulse-carrying centre stripe of coplanar waveguides fabricated in thin-film wafer technology. The ferromagnetic platelets were prepared with varying geometry (rectangles, ellipses and squares), lateral dimension (in the range of several micrometres) and orientation with respect to the magnetic field pulse, in order to study the magnetization behaviour as a function of these parameters. The observation of magnetic switching processes in the gigahertz range became possible only through the joint effort of producing ultra-short X-ray pulses at the synchrotron source BESSY II (operated in the so-called low-alpha mode) and optimizing the waveguide design of the samples for high-frequency electromagnetic excitation (pulses of typically several 100 ps FWHM). The spatial and temporal resolution of the experiment reached d = 100 nm and Δt = 15 ps, respectively.

In conclusion, it could be shown that the magnetization dynamics of the free layer of a synthetic GMR spin-valve stack deviates significantly from a simple phase-coherent rotation. In fact, the dynamic response of the free layer is a superposition of an averaged, critically damped precessional motion and localized higher-order spin-wave modes. In a square platelet a standing spin wave with a period of 600 ps (1.7 GHz) was observed. At first glance, the damping coefficient was found to be independent of the shape of the spin-valve element, thus favouring the model of homogeneous rotation and damping. Only by taking the difference in the magnetic rotation between the central region and the outer rim of the platelet does the spin wave become visible. As they provide an additional efficient channel for energy dissipation, spin waves contribute to a higher effective damping coefficient (alpha = 0.01). Damping and magnetic switching behaviour in spin valves thus depend on the geometry of the element. Micromagnetic simulations reproduce the observed higher-order spin-wave mode.

Besides the short-time behaviour of the magnetization of spin valves, Permalloy single layers with thicknesses ranging from 3 to 40 nm have been studied.
The phase velocity of a spin wave in a 3 nm thick ellipse was determined to be 8,100 m/s. In a rectangular structure exhibiting a Landau-Lifshitz-like domain pattern, the speed of the field-pulse-induced displacement of a 90° Néel wall was determined to be 15,000 m/s.
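For reference, the Landau-Lifshitz-Gilbert equation this analysis builds on, written in its standard Gilbert form: the first term describes precession of the magnetization M about the effective field H_eff (with gyromagnetic ratio gamma), the second the damping parametrized by the coefficient alpha quoted above (M_s is the saturation magnetization):

```latex
\frac{\partial \mathbf{M}}{\partial t}
  = -\gamma\, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}}
  + \frac{\alpha}{M_s}\, \mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t}
```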
Abstract:
The medieval town of Leopoli-Cencelle (founded by Pope Leo IV in 854 AD, not far from Civitavecchia) has been the object of study and of periodic excavation campaigns since 1994. The stratigraphies, investigated with traditional methods, have brought to light the numerous transformations the town underwent over the course of its existence. Houses, towers, workshops and occupation layers have been interpreted since the beginning of the excavation on the basis of traditional, two-dimensional documentation tied to paper records and drawings. The present work sets out to re-interpret the excavation data with the aid of digital technologies. The project employed a laser scanner, Computer Vision techniques and 3D modelling. The three methods were combined so as to visualize the excavated dwellings three-dimensionally, with the possibility of overlaying simple 3D models that allow different hypotheses about the form and use of the spaces to be formulated. Modelling space and time while offering a range of choices makes it possible to combine real three-dimensional data, acquired with a laser scanner, with simple philological 3D models, and offers the opportunity to evaluate different possible interpretations of a building's characteristics in terms of spaces, materials and construction techniques. The aim of the project is to go beyond Virtual Reality, with the possibility of analysing the remains and re-interpreting the function of a building, both during the excavation and once it has concluded. From the research point of view, the possibility of visualizing hypotheses in the field fosters a deeper understanding of the archaeological context. A second objective is communication to an audience of "non-archaeologists": ordinary visitors are offered the possibility of understanding and experiencing the interpretative process, providing them with something more than a single definitive hypothesis.
Abstract:
Architecture and music. Space and time. Sound. Experience. These are the keywords from which my research set out. Everything began with the intuition that a link exists between two disciplines to which I have devoted much time and study, completing two parallel academic paths, the Faculty of Architecture and the Conservatory. After identifying and analysing the countless points of reflection the topic offered, I focused my attention on one of the most emblematic examples of collaboration between an architect and a musician realized in the twentieth century: Prometeo, tragedia dell'ascolto (1984), composed by Luigi Nono with the collaboration of Massimo Cacciari and Renzo Piano. Through the study of Prometeo I was able to address many of the possible declinations of the interdisciplinary relationship between music and architecture. The research was based mainly on the study of the materials held at the Archivio Luigi Nono and the archive of the Fondazione Renzo Piano. The thesis is organized in three parts: a first part addressing the role of space in Nono's works preceding Prometeo, bringing out the importance of the Venetian cultural and sonic environment; a second part examining in depth the compositional process that led to the performances of Prometeo in Venice, Milan and Paris; and a third part considering what happened after Prometeo and reflecting on the contributions this experience can bring to the design of spaces for music, analysing various stagings of the work without the arca and considering the designs for the auditorium of the International Art Village in Akiyoshidai and the hall of the new Philharmonie in Paris. The study of the Prometeo experience is intended to stimulate curiosity towards research and experimentation with those infinite possibilities of architectural and musical composition of which Nono speaks.
Abstract:
The 1-D spin-1/2 XXZ model with a staggered external magnetic field can, in the low-field limit, be mapped onto the quantum sine-Gordon model through bosonization: this guarantees the presence of soliton, antisoliton and breather excitations in it. In particular, the action of the staggered field opens a gap, so that these physical objects are stable against energetic fluctuations. In the present work, this model is studied both analytically and numerically. On the one hand, analytical calculations are carried out to solve the model exactly through the Bethe ansatz: the solution for the XX + h staggered model is found first by means of the Jordan-Wigner transformation and then through the Bethe ansatz; after this stage, efforts are made to extend the latter approach to the XXZ + h staggered model (without finding its exact solution). On the other hand, the energies of the elementary soliton excitations are pinpointed through static DMRG (Density Matrix Renormalization Group) for different values of the parameters in the Hamiltonian. Breathers are found to exist in the antiferromagnetic region only, while solitons and antisolitons are present in both the ferromagnetic and antiferromagnetic regions. Their single-site z-magnetization expectation values are also computed to see how they appear in real space, and time-dependent DMRG is employed to realize quenches of the Hamiltonian parameters and monitor their time evolution. The results obtained reveal the quantum nature of these objects and provide some information about their features. Further studies and a better understanding of their properties could lead to the realization of a two-level system from a soliton-antisoliton pair, in order to implement a qubit.
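For orientation, a common way of writing the model studied here is the following (a sketch only; sign and normalization conventions for J, Delta and the staggered field h vary between works):

```latex
H = J \sum_i \left( S^x_i S^x_{i+1} + S^y_i S^y_{i+1}
    + \Delta\, S^z_i S^z_{i+1} \right)
    + h \sum_i (-1)^i S^z_i
```

The staggered term is the perturbation that, after bosonization, produces the sine-Gordon cosine and opens the gap stabilizing the soliton, antisoliton and breather excitations mentioned above.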
Abstract:
Climate monitoring requires an operational, spatio-temporal analysis of climate variability. With the goal of producing ready-to-use maps on a regular basis, it is helpful to display at a glance the spatial variability of the climate elements and their changes over time. For current and recent years, the Deutscher Wetterdienst developed a standard procedure for producing such maps. The method of map production varies for the different climate elements depending on the data basis, the natural variability and the availability of in-situ data.

Within the analysis of spatio-temporal variability in this dissertation, various interpolation methods are applied to the mean temperature of the five decades of the years 1951-2000 for a relatively large area, Region VI of the World Meteorological Organization (Europe and the Middle East). Climatologically, the region covers a rather heterogeneous study area, from Greenland in the north-west to Syria in the south-east.

The central aim of the dissertation is to develop a method for the spatial interpolation of the mean decadal temperature values for Region VI. This method should be suitable for operational monthly climate-map production in the future. The unified method should be transferable to other climate elements and, with the corresponding software, applicable anywhere. Two central databases are used in this dissertation: so-called CLIMAT data over land and ship data over the sea.

In essence, the transfer of the point temperature values to the area by spatial interpolation is carried out in three steps. The first step comprises a multiple regression that reduces the station values to a common level using the four predictors latitude, elevation above sea level, annual temperature amplitude and thermal continentality. In the second step, the reduced temperature values, so-called residuals, are interpolated with the radial basis function method from the group of neural network models (NNM). In the last step, the interpolated temperature grids are scaled back to their original level by inverting the multiple regression from step one, again using the four predictors.

For all station values, the difference between the value estimated by the interpolation and the true measured value is computed and reported via the geostatistical measure of the root mean square error (RMSE). The central advantages are the faithful reproduction of the measured values, the absence of generalization and the avoidance of interpolation islands. The developed procedure is transferable to other climate elements such as precipitation, snow depth or sunshine duration.
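The three-step procedure lends itself to a compact illustration. The following is a minimal sketch, assuming NumPy arrays for station coordinates, the four predictors and the measured temperatures (all names are hypothetical; the operational implementation is not part of this abstract):

```python
# Sketch of the three-step interpolation: regression reduction,
# RBF interpolation of the residuals, and back-transformation.
# Assumed inputs: xy (n, 2) station coordinates, X (n, 4) predictors
# (latitude, elevation, annual amplitude, continentality), t (n,) temps;
# grid_xy and grid_X are the same quantities on the target grid.
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.interpolate import RBFInterpolator

def interpolate_temperature(xy, X, t, grid_xy, grid_X):
    # Step 1: multiple regression on the four predictors,
    # reducing the station temperatures to a common level (residuals).
    reg = LinearRegression().fit(X, t)
    residuals = t - reg.predict(X)

    # Step 2: interpolate the residuals spatially with
    # radial basis functions.
    rbf = RBFInterpolator(xy, residuals, kernel="thin_plate_spline")
    grid_residuals = rbf(grid_xy)

    # Step 3: invert the regression on the grid, i.e. add back the
    # level predicted from the gridded predictor fields.
    return grid_residuals + reg.predict(grid_X)

# The RMSE at the stations quantifies the fit, e.g. in-sample:
# est = interpolate_temperature(xy, X, t, xy, X)
# rmse = np.sqrt(np.mean((est - t) ** 2))
```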
Abstract:
Learning by reinforcement is important in shaping animal behavior, and in particular in behavioral decision making. Such decision making is likely to involve the integration of many synaptic events in space and time. However, when a single reinforcement signal is used to modulate synaptic plasticity, as suggested in classical reinforcement learning algorithms, a twofold problem arises: different synapses will have contributed differently to the behavioral decision, and even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward, but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference (TD) based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity, and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second task involves an action sequence which is itself extended in time, and reward is only delivered at the last action, as is the case in any type of board game. The third task is the inspection game that has been studied in neuroeconomics, where an inspector tries to prevent a worker from shirking. Applying our algorithm to this game yields learning behavior that is consistent with behavioral data from humans and monkeys, which themselves reveal properties of a mixed Nash equilibrium. The examples show that our neuronal implementation of reward-based learning copes with delayed and stochastic reward delivery, and also with the learning of mixed strategies in two-opponent games.
Abstract:
Learning by reinforcement is important in shaping animal behavior. But behavioral decision making is likely to involve the integration of many synaptic events in space and time. So when a single reinforcement signal is used to modulate synaptic plasticity, a twofold problem arises: different synapses will have contributed differently to the behavioral decision and, even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward but by a population feedback signal as well. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal-difference-based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity, and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second one involves an action sequence which is itself extended in time, and reward is only delivered at the last action, as is the case in any type of board game. The third is the inspection game that has been studied in neuroeconomics. It has only a mixed Nash equilibrium and exemplifies that the model also copes with stochastic reward delivery and the learning of mixed strategies.
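The two abstracts above describe the same mechanism, and its structure admits a compact illustration. The following is a minimal sketch only: all names, time constants and the pre/post coincidence term are illustrative assumptions, whereas the actual rule is derived as a stochastic gradient of the expected reward, which this toy update does not reproduce.

```python
# Minimal sketch of reward-modulated plasticity with synaptic
# eligibility traces and a population feedback signal.
import numpy as np

def plasticity_step(w, e, pre, post, pop_feedback, reward,
                    tau_e=20.0, lr=0.01, dt=1.0):
    """One time step of the sketched rule.

    w: (n_post, n_pre) synaptic weights; e: matching eligibility traces;
    pre, post: 0/1 spike indicators for this time step;
    pop_feedback: scalar population feedback signal;
    reward: scalar, possibly delayed, reward."""
    coincidence = np.outer(post, pre)        # Hebbian-like pre/post term
    e = e + dt * (-e / tau_e + coincidence)  # trace integrates and decays
    w = w + lr * reward * pop_feedback * e   # reward- and population-gated update
    return w, e
```

Because the eligibility trace decays over tau_e rather than being consumed immediately, a reward arriving several steps after the causative spikes can still credit the responsible synapses; the population feedback signal distributes that credit across neurons.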
Abstract:
Climate change is expected to profoundly influence the hydrosphere of mountain ecosystems. The focus of current process-based research is centered on the reaction of glaciers and runoff to climate change; spatially explicit impacts on soil moisture remain widely neglected. We spatio-temporally analyzed the impact of the climate on soil moisture in a mesoscale high-mountain catchment to facilitate the development of mitigation and adaptation strategies at the level of vegetation patterns. Two regional climate models were downscaled using three different approaches (statistical downscaling, delta change, and direct use) to drive a hydrological model (WaSiM-ETH) for the reference and scenario periods (1960–1990 and 2070–2100), resulting in an ensemble forecast of six members. For all ensemble members we found large changes in temperature, resulting in decreasing snow and ice storage and earlier runoff, but only small changes in evapotranspiration. The occurrence of downscaled dry spells was found to fluctuate greatly, causing soil moisture depletion and drought stress potential to show high variability in both space and time. In general, the choice of the downscaling approach had a stronger influence on the results than the applied regional climate model. All of the results indicate that summer soil moisture decreases, which leads to more frequent declines below a critical soil moisture level and an advanced evapotranspiration deficit. Forests up to an elevation of 1800 m a.s.l. are likely to be threatened the most, while alpine areas and most pastures remain nearly unaffected. Nevertheless, the ensemble variability was found to be extremely high and should be interpreted as a bandwidth of possible future drought stress situations.
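Of the three downscaling approaches mentioned, delta change is the simplest to illustrate. A minimal sketch, assuming hypothetical arrays of monthly means for the observations and for the regional climate model in the reference and scenario periods:

```python
# Sketch of the delta-change approach: observed series are perturbed
# by the climate model's change signal. Shapes are hypothetical:
# obs, rcm_ref, rcm_scen are (n_years, 12) arrays of monthly means
# for the observations, the RCM reference run (1960-1990) and the
# RCM scenario run (2070-2100), respectively.
import numpy as np

def delta_change(obs, rcm_ref, rcm_scen, additive=True):
    """Apply the monthly RCM change signal to the observations.
    Additive deltas are the usual choice for temperature;
    multiplicative factors are common for precipitation."""
    if additive:
        delta = rcm_scen.mean(axis=0) - rcm_ref.mean(axis=0)
        return obs + delta
    factor = rcm_scen.mean(axis=0) / rcm_ref.mean(axis=0)
    return obs * factor
```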
Abstract:
Unraveling the intra- and inter-cellular signaling networks that manage cell-fate control, coordinate complex differentiation regulatory circuits and shape tissues and organs in living systems remains a major challenge in the post-genomic era. Resting on the laurels of past-century monolayer culture technologies, the cell culture community has only recently begun to appreciate the potential of three-dimensional mammalian cell culture systems to reveal the full scope of mechanisms orchestrating the tissue-like cell quorum in space and time. Capitalizing on the gravity-enforced self-assembly of monodispersed primary embryonic mouse cells in hanging drops, we designed and characterized a three-dimensional cell culture model for ganglion-like structures. Within 24 h, a mixture of mouse embryonic fibroblasts (MEF) and dorsal root ganglion (DRG)-derived cells (sensory neurons and Schwann cells) grown in hanging drops assembled into coherent spherical microtissues characterized by a MEF feeder core and a peripheral layer of DRG-derived cells. In a time-dependent manner, sensory neurons formed a polar ganglion-like cap structure, which coordinated guided axonal outgrowth and innervation of the distal pole of the MEF feeder spheroid. Schwann cells, present in embryonic DRG isolates, tended to align along axonal structures and myelinate them in an in vivo-like manner. Whenever cultivation exceeded 10 days, DRG:MEF-based microtissues disintegrated due to an as yet unknown mechanism. Using a transgenic MEF feeder spheroid engineered for gaseous acetaldehyde-inducible interferon-beta (ifn-beta) production by cotransduction of retro-/lentiviral particles, a short 6 h ifn-beta induction was sufficient to rescue the integrity of DRG:MEF spheroids and enable long-term cultivation of these microtissues. In hanging drops, such microtissues fused into higher-order macrotissue-like structures, which may pave the way for sophisticated bottom-up tissue engineering strategies. DRG:MEF-based artificial micro- and macrotissue design accurately reproduced key morphological aspects of ganglia and exemplified the potential of self-assembled, scaffold-free multicellular micro-/macrotissues to provide new insight into organogenesis.
Abstract:
The last two decades have seen intense scientific and regulatory interest in the health effects of particulate matter (PM). Influential epidemiological studies that characterize chronic exposure of individuals rely on monitoring data that are sparse in space and time, so they often assign the same exposure to participants in large geographic areas and across time. We estimate monthly PM during 1988-2002 in a large spatial domain for use in studying health effects in the Nurses' Health Study. We develop a conceptually simple spatio-temporal model that uses a rich set of covariates. The model is used to estimate concentrations of PM10 for the full time period and PM2.5 for a subset of the period. For the earlier part of the period, 1988-1998, few PM2.5 monitors were operating, so we develop a simple extension to the model that represents PM2.5 conditionally on PM10 model predictions. In the epidemiological analysis, model predictions of PM10 are more strongly associated with health effects than exposure estimates from simpler approaches. Our modeling approach supports the application by estimating both fine-scale and large-scale spatial heterogeneity and by capturing space-time interaction through the use of monthly-varying spatial surfaces. At the same time, the model is computationally feasible, implementable with standard software, and readily understandable to the scientific audience. Despite simplifying assumptions, the model has good predictive performance and uncertainty characterization.
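A stylized sketch of the conditional extension described above, under placeholder names (the actual model uses a rich covariate set and monthly-varying spatial surfaces, which are not shown here):

```python
# Stylized sketch of the conditional stage: where PM2.5 monitors are
# sparse (1988-1998), observed PM2.5 is regressed on the PM10 model's
# predictions plus covariates, so PM2.5 can then be predicted wherever
# PM10 predictions exist. All names and inputs are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_conditional_pm25(pm25_obs, pm10_pred, covariates):
    """Fit PM2.5 | PM10 prediction + covariates at colocated sites."""
    X = np.column_stack([pm10_pred, covariates])
    return LinearRegression().fit(X, pm25_obs)

def predict_pm25(model, pm10_pred, covariates):
    return model.predict(np.column_stack([pm10_pred, covariates]))
```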
Abstract:
High-density spatial and temporal sampling of EEG data enhances the quality of results of electrophysiological experiments. Because EEG sources typically produce widespread electric fields (see Chapter 3) and operate at frequencies well below the sampling rate, increasing the number of electrodes and time samples will not necessarily increase the number of observed processes, but will mainly increase the accuracy of the representation of these processes. This is particularly the case when inverse solutions are computed. As a consequence, increasing the sampling in space and time increases the redundancy of the data (in space, because electrodes are correlated due to volume conduction; in time, because neighboring time points are correlated), while the degrees of freedom of the data change only little. This has to be taken into account when statistical inferences are to be made from the data. However, in many ERP studies the intrinsic correlation structure of the data has been disregarded. Often, some electrodes or groups of electrodes are selected a priori as the analysis entity and treated as repeated (within-subject) measures that are analyzed using standard univariate statistics. The increased spatial resolution obtained with more electrodes is thus poorly represented by the resulting statistics. In addition, the assumptions made (e.g. in terms of what constitutes a repeated measure) are not supported by what we know about the properties of EEG data. From the point of view of physics (see Chapter 3), the natural "atomic" analysis entity of EEG and ERP data is the scalp electric field.
Abstract:
Many applications, such as telepresence, virtual reality, and interactive walkthroughs, require a three-dimensional (3D) model of real-world environments. Methods such as light fields, geometric reconstruction and computer vision use cameras to acquire visual samples of the environment and construct a model. Unfortunately, obtaining models of real-world locations is a challenging task. In particular, important environments are often actively in use, containing moving objects such as people entering and leaving the scene. The methods listed above have difficulty capturing the color and structure of the environment in the presence of moving and temporary occluders. We describe a class of cameras called lag cameras. The main concept is to generalize a camera to take samples over space and time. Such a camera can easily and interactively detect moving objects while continuously moving through the environment. Moreover, since both the lag camera and the occluder are moving, the scene behind the occluder is captured by the lag camera even from viewpoints where the occluder lies between the lag camera and the hidden scene. We demonstrate an implementation of a lag camera, complete with analysis and captured environments.
Abstract:
From its original formulation in 1990, the International Trans-Antarctic Scientific Expedition (ITASE) has had as its primary aim the collection and interpretation of a continent-wide array of environmental parameters assembled through the coordinated efforts of scientists from several nations. ITASE offers the ground-based opportunities of traditional-style traverse travel coupled with the modern technology of GPS, crevasse-detecting radar, satellite communications and multidisciplinary research. By operating predominantly in the mode of an oversnow traverse, ITASE offers scientists the opportunity to experience the dynamic range of the Antarctic environment. ITASE also offers an important interactive venue for research, similar to that afforded by oceanographic research vessels and large polar field camps, without the cost of the former or the lack of mobility of the latter. More importantly, the combination of disciplines represented by ITASE provides a unique, multidimensional (space and time) view of the ice sheet and its history. ITASE has now collected >20,000 km of snow radar, recovered more than 240 firn/ice cores (total length 7000 m), remotely penetrated to ~4000 m into the ice sheet, and sampled the atmosphere to heights of >20 km.
Abstract:
Time-space relations of extension and volcanism place critical constraints on models of Basin and Range extensional processes. This paper addresses such relations in a 130-km-wide transect in the eastern Great Basin, bounded on the east by the Ely Springs Range and on the west by the Grant and Quinn Canyon ranges. Stratigraphic and structural data, combined with 40Ar/39Ar isotopic ages of volcanic rocks, document a protracted but distinctly episodic extensional history. Field relations indicate four periods of faulting. Only one of these periods was synchronous with nearby volcanic activity, which implies that volcanism and faulting need not be associated closely in space and time. Based on published dates and the analyses reported here, the periods of extension were (1) prevolcanic (pre-32 Ma), (2) early synvolcanic (30 to 27 Ma), (3) immediately postvolcanic (about 16 to 14 Ma), and (4) Pliocene to Quaternary. The break between the second and third periods is distinct. The minimum gap between the first two periods is 2 Ma, but the separation may be much larger. Temporal separation of the last two periods is only suggested by the stratigraphic record and cannot be rigorously demonstrated with present data. The three younger periods of faulting apparently occurred across the entire transect. The oldest period is recognized only at the eastern end of the transect, but appears to correlate about 150 km northward along strike with extension in the Northern Snake Range-Kern Mountains area. Therefore the oldest period also is regional in extent, but affected a different area than that affected by younger periods. This relation suggests that distinct extensional structures and master detachment faults were active at different times. The correlation of deformation periods of a few million years duration across the Railroad Valley-Pioche transect suggests that the scale of active extensional domains in the Great Basin may be greater than 100 km across strike.
Abstract:
Jakobshavn Isbrae is a major ice stream that drains the west-central Greenland ice sheet and becomes afloat in Jakobshavn Isfiord (69° N, 49° W), where it has maintained the world's fastest-known sustained velocity and calving rate (7 km a⁻¹) for at least four decades. The floating portion is approximately 12 km long and 6 km wide. Surface elevations and motion vectors were determined photogrammetrically for about 500 crevasses on the floating ice, and on adjacent grounded ice, using aerial photographs obtained 2 weeks apart in July 1985. Surface strain rates were computed from a mesh of 399 quadrilateral elements having velocity measurements at each corner. It is shown that heavy crevassing of floating ice invalidates the assumptions of linear strain theory that (i) surface strain in the floating ice is homogeneous in both space and time, (ii) the squares and products of strain components are negligible, and (iii) first- and second-order rotation components are small compared to strain components. Therefore, strain rates and rotation rates were also computed using non-linear strain theory. The percentage difference between the computed linear and non-linear second invariants of strain rate per element was greatest (mostly in the range 40-70%) where crevassing is greatest. Isopleths of strain rate parallel and transverse to flow, together with elevation isopleths, relate crevassing to known and inferred pinning points.
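For reference, the second invariant of the strain rate compared above can be written, in one standard convention (the paper's exact definition may differ), together with the linear-theory strain-rate tensor whose assumptions the heavy crevassing is shown to violate:

```latex
\dot{\varepsilon}_{ij}
  = \frac{1}{2}\left(\frac{\partial v_i}{\partial x_j}
  + \frac{\partial v_j}{\partial x_i}\right),
\qquad
\dot{\varepsilon}_{\mathrm{II}}
  = \sqrt{\tfrac{1}{2}\,\dot{\varepsilon}_{ij}\,\dot{\varepsilon}_{ij}}
```

Here v is the surface velocity field measured at the element corners, and summation over repeated indices is implied.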