10 results for Long memory stochastic process
in ArchiMeD - Elektronische Publikationen der Universität Mainz - Germany
Abstract:
The aim of this thesis was a detailed analysis of the migration dynamics of epithelial monolayers by means of two novel in vitro biosensors: electric cell-substrate impedance sensing (ECIS) and the quartz crystal microbalance (QCM). Both methods proved sensitive to cell motility and to nanocytotoxicity.

In the first project, cancer cells were fingerprinted on the basis of their motility dynamics and the electrical or acoustic fluctuations they generate on the ECIS or QCM platform; these real-time sensors were validated against classical in vitro Boyden chamber migration and invasion assays. Fluctuation signatures, i.e. long-term correlations or fractal self-similarity arising from collective cell movement, were quantified by variance analysis, Fourier analysis, and detrended fluctuation analysis. Stochastic long-memory phenomena proved to be a major contribution to the response of adherent cells on the QCM and ECIS sensors. Furthermore, the influence of low-molecular-weight toxins on cytoskeletal dynamics was followed: the effects of cytochalasin D, phalloidin, and blebbistatin as well as taxol, nocodazole, and colchicine were captured via QCM and ECIS fluctuation analysis.

In the second project, adhesion processes as well as cell-cell and cell-substrate degradation processes upon nanoparticle exposure were characterized in order to obtain a measure of nanocytotoxicity as a function of particle shape, functionalization, stability, or charge.

In conclusion, the novel real-time biosensors QCM and ECIS possess high cell specificity, respond to cytoskeletal dynamics, and can serve as sensitive detectors of cell vitality.
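As an illustration of the detrended fluctuation analysis named above, the following is a minimal Python sketch; the input series and window sizes are illustrative placeholders, not the thesis's actual QCM/ECIS analysis pipeline. An exponent alpha above 0.5 would indicate long-term correlations of the kind reported for the cell-generated fluctuations.

```python
# Minimal sketch of detrended fluctuation analysis (DFA-1).
# Placeholder input series; not the actual sensor data pipeline.
import numpy as np

def dfa(x, windows):
    """Return the fluctuation function F(n) for each window size n."""
    y = np.cumsum(x - np.mean(x))          # integrated (profile) series
    F = []
    for n in windows:
        m = len(y) // n                    # number of full windows
        segs = y[:m * n].reshape(m, n)
        t = np.arange(n)
        rms = []
        for seg in segs:                   # detrend each window linearly
            coef = np.polyfit(t, seg, 1)
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

x = np.random.randn(4096)                  # placeholder: uncorrelated noise
windows = np.array([16, 32, 64, 128, 256])
F = dfa(x, windows)
alpha = np.polyfit(np.log(windows), np.log(F), 1)[0]
print("DFA exponent alpha ~", alpha)       # ~0.5 for white noise
```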
Abstract:
This dissertation consists of three self-contained papers related to two main topics: the first and third studies focus on labor market modeling, whereas the second essay presents a dynamic international trade setup.

In the chapter "Expenses on Labor Market Reforms during Transitional Dynamics", we investigate the costs of a potential labor market reform from a government point of view. To analyze various effects of changes to the unemployment benefits system, this chapter develops a dynamic model with heterogeneous employed and unemployed workers.

In the chapter "Endogenous Markup Distributions", we study how markup distributions adjust when a closed economy opens up. To perform this analysis, we first present a closed-economy general-equilibrium industry dynamics model, in which firms enter and exit markets, and then extend our analysis to the open-economy case.

In the chapter "Unemployment in the OECD - Pure Chance or Institutions?", we examine the effects of aggregate shocks on the distribution of unemployment rates across OECD member countries.

In all three chapters we model systems that behave randomly and are driven by stochastic processes. We therefore rely on stochastic calculus, which establishes clear methodological links between the chapters.
Abstract:
A study of maar-diatreme volcanoes has been performed by inversion of gravity and magnetic data. The geophysical inverse problem has been solved by means of the damped nonlinear least-squares method. To ensure stability and convergence of the solution of the inverse problem, a mathematical tool consisting of data weighting and model scaling has been worked out. Theoretical gravity and magnetic modeling of maar-diatreme volcanoes has been conducted in order to obtain information that can be used for a simple, rough qualitative and/or quantitative interpretation. This information also serves as a priori information to design models for the inversion and/or to assist the interpretation of inversion results. The results of theoretical modeling have been used to roughly estimate the heights and the dip angles of the walls of eight Eifel maar-diatremes, each taken as a whole. Inverse modeling has been conducted for the Schönfeld Maar (magnetics) and the Hausten-Morswiesen Maar (gravity and magnetics). The geometrical parameters of these maars, as well as the density and magnetic properties of the rocks filling them, have been estimated. For a reliable interpretation of the inversion results, besides the knowledge from theoretical modeling, other tools such as field transformations and spectral analysis were used for complementary information. Geologic models, based on the synthesis of the respective interpretation results, are presented for the two maars mentioned above. The results give more insight into the genesis, physics, and post-eruptive development of maar-diatreme volcanoes. A classification of maar-diatreme volcanoes into three main types has been elaborated. Relatively high magnetic anomalies are indicative of scoria cones embedded within maar-diatremes, provided they are not caused by a strong remanent component of the magnetization. Smaller (weaker) secondary gravity and magnetic anomalies superimposed on the main anomaly of a maar-diatreme, especially in the boundary areas, are indicative of subsidence processes, which probably occurred in the late sedimentation phase of the post-eruptive development. Contrary to postulates for kimberlite pipes, there is no general systematic relationship between diameter and height, nor between the geophysical anomaly and the dimensions of maar-diatreme volcanoes. Although both maar-diatreme volcanoes and kimberlite pipes are products of phreatomagmatism, they probably formed in different thermodynamic and hydrogeological environments. In the case of kimberlite pipes, large amounts of magma and groundwater, certainly supplied by deep and large reservoirs, interacted under high-pressure and high-temperature conditions. This led to a prolonged phreatomagmatic process and hence to the formation of large structures. In maar-diatreme and tuff-ring-diatreme volcanoes, the phreatomagmatic process takes place through an interaction between magma from small, shallow magma chambers (probably segregated magmas) and small amounts of near-surface groundwater under low-pressure and low-temperature conditions. This leads to shorter eruptions and consequently to structures of smaller size in comparison with kimberlite pipes. Nevertheless, the results show that the diameter-to-height ratio for 50% of the studied maar-diatremes is around 1, while the dip angle of the diatreme walls is similar to that of kimberlite pipes and lies between 70° and 85°.
Note that these numerical characteristics, especially the dip angle, hold for the maars whose diatremes, as estimated by modeling, have the shape of a truncated cone. This indicates that the diatreme cannot be completely resolved by inversion.
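The damped nonlinear least-squares method mentioned above can be sketched as the Marquardt-Levenberg iteration below; this is a minimal illustration with a toy exponential forward model, not the actual gravity/magnetic forward modeling, and the fixed damping parameter lam stands in for the data-weighting and model-scaling scheme developed in the study.

```python
# Minimal sketch of damped nonlinear least squares:
# at each step solve (J^T J + lambda I) dm = J^T (d_obs - f(m)).
import numpy as np

def damped_lsq(f, jacobian, d_obs, m0, lam=1e-2, iters=20):
    """Iteratively update model m to fit observed data d_obs."""
    m = m0.astype(float)
    for _ in range(iters):
        r = d_obs - f(m)                     # data residual
        J = jacobian(m)
        # the damping term lam*I stabilizes the normal equations
        dm = np.linalg.solve(J.T @ J + lam * np.eye(m.size), J.T @ r)
        m += dm
    return m

# Toy forward model with two parameters: f(m) = m0 * exp(-m1 * x)
x = np.linspace(0, 1, 50)
f = lambda m: m[0] * np.exp(-m[1] * x)
jac = lambda m: np.column_stack([np.exp(-m[1] * x),
                                 -m[0] * x * np.exp(-m[1] * x)])
d_obs = f(np.array([2.0, 3.0])) + 0.01 * np.random.randn(x.size)
print(damped_lsq(f, jac, d_obs, m0=np.array([1.0, 1.0])))
```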
Abstract:
The neurobiological processes underlying learning and memory are still only insufficiently understood, and the role of learning-dependent gene expression in particular remains unclear. Repetition during the learning process promotes the formation of stable memory traces. Learning efficiency can be further increased by learning-free intervals, in particular by interposed sleep periods. Accordingly, the multi-day Morris water maze (MWM) test with a hidden platform can be regarded as a multi-stage spatial learning process. This test is hippocampus-dependent and produces long-term memory traces in rodents. For this study, FVB/NxC57Bl/6 mice of the F1 generation were trained in the MWM over four days, the acquired knowledge was assessed in a probe trial on day 5, and the animals were divided according to their learning performance into the two groups of "good" and "poor" learners. Hippocampal expression of candidate genes was analyzed by microarray and real-time PCR one, six, and 24 hours after the last training run of each day. Comparing swim controls with test-naive mice revealed a uniform, unspecific gene expression associated with the animals' implicit learning of the MWM experience. Comparing the swim controls (no platform) with the trained animals (hidden platform at a constant location) revealed, in good learners at certain time points, an upregulation of genes associated with learning and memory (PP1, Kibra), neuronal activity (mt-CO1), epigenetics (Dnmt3a, Dnmt3b), and neurodegenerative diseases (Mapt, Sorl1). In the hippocampus of the poor learners, mRNA synthesis of genes associated with learning and memory (Reelin, PP1, Kibra), epigenetics (Dnmt1, Dnmt3a, Dnmt3b), and neurodegenerative diseases (Mapt, Sorl1, APP) was increased compared to the good learners. This study thus provides the first evidence that abnormally elevated de novo mRNA synthesis during a multi-day MWM learning process is associated with reduced learning performance.
Abstract:
During this thesis a new telemetric recording system was developed that allows ECoG/EEG recordings in freely behaving rodents (Lapray et al., 2008; Lapray et al., in press). This unit was shown not to cause any discomfort in the implanted animals and to allow recordings in a wide range of environments. In the second part of this work the developed technique was used to investigate which cortical activity is related to the process of novelty detection in the rat barrel cortex. We showed that the detection of a novel object is accompanied in the barrel cortex by a transient burst of activity in the γ frequency range (40-47 Hz) around 200 ms after the whiskers' contact with the object (Lapray et al., accepted). This activity was associated with a decrease in the lower range of γ frequencies (30-37 Hz). This network activity may represent the optimal oscillatory pattern for the propagation and storage of new information in memory-related structures. The frequency as well as the timing of appearance correspond well with other studies on novelty-detection-related bursts of activity in other sensory systems (Barcelo et al., 2006; Haenschel et al., 2000; Ranganath & Rainer, 2003). Such a burst of activity is well suited to induce plastic and long-lasting modifications in neuronal circuits (Harris et al., 2003). The debate is still open whether synchronised activity in the brain is part of information processing or an epiphenomenon (Shadlen & Movshon, 1999; Singer, 1999). The present work provides further evidence that neuronal network activity in the γ frequency range plays an important role in the neocortical processing of sensory stimuli and in higher cognitive functions.
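As a rough illustration of how power in the two γ sub-bands named above could be compared, here is a minimal sketch using Welch's method from SciPy; the sampling rate, the synthetic trace, and the analysis window are placeholders, not the study's actual recording or analysis parameters.

```python
# Minimal sketch: band-limited power in the 30-37 Hz and 40-47 Hz bands.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
# placeholder trace: a 43 Hz burst in the second half, on background noise
ecog = np.sin(2 * np.pi * 43.0 * t) * (t > 1.0) + np.random.randn(t.size)

def band_power(x, fs, lo, hi):
    """Average Welch PSD within the band [lo, hi] Hz."""
    f, psd = welch(x, fs=fs, nperseg=512)
    mask = (f >= lo) & (f <= hi)
    return psd[mask].mean()

print("30-37 Hz:", band_power(ecog, fs, 30, 37))
print("40-47 Hz:", band_power(ecog, fs, 40, 47))
```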
Abstract:
Within this work, a particle-polymer surface system is studied with respect to particle-surface interactions. The latter are governed by micromechanics and are an important aspect of a wide range of industrial applications. Here, a new methodology for understanding the adhesion process and measuring the relevant forces is developed, based on the quartz crystal microbalance (QCM).

The potential of the QCM technique for studying particle-surface interactions and reflecting the adhesion process is evaluated with a custom-made setup, consisting of the QCM with a 160 nm thick film of polystyrene (PS) spin-coated onto the quartz and of glass particles of different diameters (5-20 µm) deposited onto the polymer surface. Shifts in the QCM resonance frequency are monitored as a function of the oscillation amplitude. The induced frequency shifts of the 3rd overtone are found to decrease or increase, depending on the type of particle-surface coupling and the applied oscillation (frequency and amplitude). For strong coupling the 3rd harmonic decreases, corresponding to an "added mass" on the quartz surface. However, positive frequency shifts are observed in some cases and are attributed to weak coupling between particle and surface. Higher overtones, i.e. the 5th and 7th, were used to derive additional information about the interactions taking place. For small particles, the shift of specific overtones can increase after annealing, while for large particle diameters annealing causes a negative frequency shift. The lower overtones correspond to a generally strong-coupling regime with mainly negative frequency shifts, while the 7th appears to be sensitive to contact break-down, with positive recorded shifts.

During oscillation, the motion of the particles and the induced frequency shift of the QCM are governed by a balance between inertial forces and contact forces. The adherence of the particles can be increased by annealing the PS film at 150°C, which leads to the formation of a PS meniscus. For the interpretation, the Hertz, Johnson-Kendall-Roberts, and Derjaguin-Müller-Toporov models as well as the Mindlin theory of partial slip are considered. The Mindlin approach is used to describe partial slip: when partial slip takes place under an oscillating load, part of the contact ruptures, which results in a decrease of the effective contact stiffness. Additionally, there are long-term memory effects due to consolidation, which, along with the QCM vibrations, increase the coupling. However, the vibrations can also break the contact and lead to detachment and even to surface damage and deformation due to inertia. For strong coupling the particles move with the vibrations and simply act as added effective mass, leading to a decrease of the resonance frequency, in agreement with the Sauerbrey equation that is commonly used to calculate the added mass on a QCM. When the system enters the weak-coupling regime, the particles are no longer able to follow the fast movement of the QCM surface. Hence, they effectively act as an additional "spring" with its own coupling constant and increase the resonance frequency. The frequency shift, however, is not a unique function of the coupling constant. Furthermore, the critical oscillation amplitude is determined above which particles detach. No movement is detected at much lower amplitudes, while for intermediate values lateral particle displacement is observed.
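For reference, a minimal sketch of the Sauerbrey relation invoked above, Δf = -2 n f0² Δm / (A √(ρq μq)); the resonance frequency and electrode area below are illustrative placeholders, while the quartz constants are textbook values for AT-cut quartz.

```python
# Minimal sketch of the Sauerbrey relation for a rigidly coupled added mass.
import math

f0 = 5e6              # fundamental resonance frequency (Hz), placeholder
A = 1.0e-4            # active electrode area (m^2), placeholder
rho_q = 2648.0        # density of quartz (kg/m^3)
mu_q = 2.947e10       # shear modulus of AT-cut quartz (Pa)

def sauerbrey_mass(delta_f, n=1):
    """Added mass (kg) for a shift delta_f (Hz) of the n-th overtone."""
    return -delta_f * A * math.sqrt(rho_q * mu_q) / (2.0 * n * f0**2)

print(sauerbrey_mass(-10.0))   # mass corresponding to a -10 Hz shift
```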
In order to validate the QCM results and study the particle effects on the surface, atomic force microscopy (AFM) is additionally used to image the surfaces and measure surface forces. By examining the surface of the polymer film after excitation and particle removal, AFM imaging revealed three different meniscus types in the contact area: "full contact", "asymmetrical", and a third one including a "homocentric smaller meniscus". The different meniscus forms result in varying bond strength between particles and polymer film, which could explain the deviation between the number of particles per surface area measured by imaging and the values provided by the QCM frequency-shift analysis. The asymmetric and homocentric contact types are suggested to be responsible for the positive frequency shifts observed for all three measured overtones, i.e. for the weak-coupling regime, while the "full contact" type results in a negative frequency shift by effectively contributing to the mass increase of the quartz.

The interplay between inertia and contact forces in the particle-surface system leads to strong or weak coupling, with the particles affecting the polymer surface in the three ways mentioned above. This is manifested in the frequency shifts of the harmonics of the QCM system, which are used to differentiate between the two interaction types and reflect the overall state of adhesion for particles of different size.
Abstract:
A synthetic route was designed for the incorporation of inorganic materials into water-based miniemulsions with a complex and adjustable polymer composition. This involved co-homogenization of two inverse miniemulsions containing precursors of the desired inorganic salt dispersed within a polymerizable continuous phase, followed by transfer to a direct miniemulsion via addition to an o/w surfactant solution with subsequent homogenization and radical polymerization. To our knowledge, this is the first work in which a polymerizable continuous phase has been used to form an inverse (mini)emulsion that is then transferred to a direct miniemulsion and polymerized, so that the result is a water-based dispersion. The versatility of the process was demonstrated by the synthesis of different inorganic pigments, and also by the use of an unconventional mixture of vinylic monomers and epoxy resin as the polymerizable phase (unconventional as a miniemulsion continuous phase, but a typical combination for coating applications). Zinc phosphate, calcium carbonate, and barium sulfate were all successfully incorporated into the polymer-epoxy matrix. The choice of the system was based on a typical functional coatings system, but is not limited to it: the approach can be extended to incorporate various other materials as long as the starting materials are water-soluble or hydrophilic.

The hybrid zinc phosphate-polymer water-based miniemulsion prepared by the above route was then applied to steel panels using an autodeposition process. This is considered the first autodeposition coatings process carried out from a miniemulsion system containing zinc phosphate particles. The steel panels were then tested for corrosion protection using salt spray tests. These tests showed that the hybrid particles can protect the substrate from corrosion and even improve corrosion protection compared to a control sample in which corrosion protection was applied in a separate step. Last but not least, it is suggested that the corrosion protection mechanism is related to zinc phosphate mobility across the coating film, which was demonstrated using electron microscopy techniques.
Abstract:
The field of computational neuroscience develops mathematical models to describe neuronal systems, with the aim of better understanding the nervous system. Historically, the integrate-and-fire model, developed by Lapicque in 1907, was the first model describing a neuron. In 1952 Hodgkin and Huxley [8] introduced the so-called Hodgkin-Huxley model in the article "A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve". The Hodgkin-Huxley model is one of the most successful and widely used biological neuron models. Based on experimental data from the squid giant axon, Hodgkin and Huxley formulated their model as a four-dimensional system of first-order ordinary differential equations. One of these equations characterizes the membrane potential as a process in time, whereas the other three describe the opening and closing states of the sodium and potassium ion channels. The rate of change of the membrane potential is proportional to the sum of the ionic currents flowing across the membrane and an externally applied current. For various types of external input the membrane potential behaves differently. This thesis considers the following three types of input: (i) Rinzel and Miller [15] calculated an interval of amplitudes of a constant applied current for which the membrane potential spikes repetitively; (ii) Aihara, Matsumoto and Ikegaya [1] showed that, depending on the amplitude and the frequency of a periodic applied current, the membrane potential responds periodically; (iii) Izhikevich [12] stated that brief pulses of positive and negative current with different amplitudes and frequencies can lead to a periodic response of the membrane potential. In chapter 1 the Hodgkin-Huxley model is introduced following Izhikevich [12]. Besides the definition of the model, several biological and physiological notes are made, and further concepts are illustrated by examples. Moreover, the numerical methods used to solve the equations of the Hodgkin-Huxley model in the computer simulations of chapters 2 and 3 are presented. In chapter 2 the statements for the three different inputs (i), (ii) and (iii) are verified, and the periodic behavior for inputs (ii) and (iii) is investigated. In chapter 3 the inputs are embedded in an Ornstein-Uhlenbeck process to study the influence of noise on the results of chapter 2.
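To make the structure of the four-dimensional system concrete, here is a minimal forward-Euler sketch of the Hodgkin-Huxley equations with the standard squid-axon parameters; the applied current I_ext, time step, and duration are illustrative, and the thesis's own numerical methods may differ.

```python
# Minimal sketch of the classical Hodgkin-Huxley model (1952 squid-axon
# parameters, modern sign convention), integrated with forward Euler.
import numpy as np

C_m = 1.0                                  # membrane capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3          # max conductances (mS/cm^2)
E_Na, E_K, E_L = 50.0, -77.0, -54.4        # reversal potentials (mV)

def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    """Integrate the four HH ODEs and return the voltage trace V(t)."""
    steps = int(T / dt)
    V, n, m, h = -65.0, 0.317, 0.053, 0.596     # resting state
    trace = np.empty(steps)
    for i in range(steps):
        I_Na = g_Na * m**3 * h * (V - E_Na)     # ionic currents
        I_K  = g_K * n**4 * (V - E_K)
        I_L  = g_L * (V - E_L)
        # C_m dV/dt = I_ext - (I_Na + I_K + I_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        trace[i] = V
    return trace

V = simulate()              # constant current, cf. input (i)
print(V.max(), V.min())
```

For a constant current in the repetitively spiking amplitude interval of Rinzel and Miller [15] (input (i)), this simulation produces a periodic spike train.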
Abstract:
We consider stochastic individual-based models for the social behaviour of groups of animals. In these models the trajectory of each animal is given by a stochastic differential equation with interaction; the social interaction is contained in the drift term of the SDE. We consider a global aggregation force and a short-range repulsion force, where the repulsion range and strength are rescaled with the number of animals N. We show that as N tends to infinity the stochastic fluctuations disappear and a smoothed version of the empirical process converges uniformly to the solution of a nonlinear, nonlocal partial differential equation of advection-reaction-diffusion type. The rescaling of the repulsion in the individual-based model implies that the corresponding term in the limit equation is local, while the aggregation term remains non-local. Moreover, we discuss the effect of a predator on the system and derive an analogous convergence result. The predator acts as a repulsive force; different laws of motion for the predator are considered.
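A minimal Euler-Maruyama sketch of such an interacting-particle SDE system is given below; the drift combines a global aggregation force and a short-range repulsion as described above, but all coefficients (a, r, eps, sigma) and the specific force shapes are illustrative assumptions, not the rescaling studied in the thesis.

```python
# Minimal Euler-Maruyama sketch of an interacting-particle SDE system:
# global attraction, short-range repulsion, independent Brownian noise.
import numpy as np

rng = np.random.default_rng(0)
N, dim = 200, 2                       # number of animals, spatial dimension
a, r, eps, sigma = 1.0, 5.0, 0.1, 0.3 # illustrative coefficients
dt, steps = 1e-3, 5000

X = rng.normal(scale=2.0, size=(N, dim))     # initial positions

def drift(X):
    """Pairwise interaction drift: attraction to all, repulsion nearby."""
    diff = X[None, :, :] - X[:, None, :]     # diff[i, j] = X_j - X_i
    dist = np.linalg.norm(diff, axis=-1) + np.eye(N)  # avoid /0 on diagonal
    attract = a * diff.mean(axis=1)          # global aggregation force
    rep = np.where(dist < eps, r / dist, 0.0)  # active only within range eps
    repel = -(rep[:, :, None] * diff).mean(axis=1)
    return attract + repel

for _ in range(steps):
    X += drift(X) * dt + sigma * np.sqrt(dt) * rng.normal(size=(N, dim))

print("mean distance from centroid:",
      np.linalg.norm(X - X.mean(0), axis=1).mean())
```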
Abstract:
In many industries, for example the automotive industry, digital mock-ups are used to verify the design and the function of a product on a virtual prototype. One use case is checking the safety clearances of individual parts, the so-called distance analysis. Engineers determine for specific parts whether they maintain a given safety distance to the surrounding parts, both in their rest position and during a motion. If parts fall below the safety distance, their shape or position must be changed. For this, it is important to know exactly which regions of the parts violate the safety distance.

In this thesis we present a solution for the real-time computation of all regions of two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g. triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety distance and call it the set of all tolerance-violating primitives. We present a complete solution, which can be divided into the following three major topics.

In the first part of this thesis we investigate algorithms that check for two triangles whether they are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that specialized tolerance tests are considerably faster than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach proves to be the fastest.

The second part of this thesis deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure composed of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is particularly important to take the required safety distance into account in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. In addition, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions can compute, in real time, all tolerance-violating primitives between two complex geometric objects consisting of many hundreds of thousands of primitives each.

In the third part we present a novel, memory-optimized data structure, which we call Shrubs, for managing the cell contents of the uniform grids used before. Previous approaches to reducing the memory footprint of uniform grids mainly rely on hashing methods, which, however, do not reduce the memory consumption of the cell contents. In our application, neighboring cells often have similar contents. Our approach losslessly compresses the cell contents of a uniform grid, exploiting these redundancies, to one fifth of the original size and decompresses them at run time.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Besides pure distance analysis, we present applications for various path-planning problems.
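As an illustration of the kind of cheap filter that avoids an expensive primitive-primitive tolerance test, here is a minimal bounding-sphere sketch; it is not the dual-space test developed in the thesis, and the function names and the tolerance d are hypothetical. Pairs whose bounding spheres are farther apart than d cannot violate the tolerance, and pairs whose spheres lie entirely within d can be reported as violating without an exact test.

```python
# Minimal sketch of bounding-sphere pre-filtering for a tolerance test
# with safety distance d between two triangles.
import numpy as np

def bounding_sphere(tri):
    """Crude bounding sphere: centroid plus max vertex distance."""
    c = tri.mean(axis=0)
    return c, np.linalg.norm(tri - c, axis=1).max()

def classify_pair(tri_a, tri_b, d):
    """Return 'cull', 'accept', or 'exact' for a triangle pair."""
    (ca, ra), (cb, rb) = bounding_sphere(tri_a), bounding_sphere(tri_b)
    dist = np.linalg.norm(ca - cb)
    if dist - ra - rb > d:      # spheres too far apart: cannot violate
        return "cull"
    if dist + ra + rb <= d:     # even the farthest points are within d
        return "accept"         # tolerance-violating, no exact test needed
    return "exact"              # only these pairs need the expensive test

tri1 = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0]])
tri2 = np.array([[0., 0, 5], [1, 0, 5], [0, 1, 5]])
print(classify_pair(tri1, tri2, d=1.0))   # -> 'cull'
```

Only pairs classified as "exact" would be passed on to a precise triangle-triangle tolerance test, which is what makes such filters effective in a broad-phase stage.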