642 results for fix


Relevance:

10.00%

Publisher:

Abstract:

The visual system is particularly sensitive to methylmercury (MeHg) exposure and therefore provides a useful model for investigating the fundamental mechanisms that direct its toxic effects. Over a period of 70 days, adults of the freshwater fish species Hoplias malabaricus were fed fish prey previously labeled with two different doses of methylmercury (0.075 and 0.75 µg g⁻¹) to determine the mercury distribution and morphological changes in the retina. Mercury deposits were found in the photoreceptor layer, the inner plexiform layer and the outer plexiform layer, demonstrating dose-dependent bioaccumulation. Ultrastructural analysis of the retina revealed cellular deterioration in the photoreceptor layer, morphological changes in the inner and outer segments of rods, structural changes in the plasma membrane of rods and double cones, changes in the process of removal of membranous discs, and a structural discontinuity. These results lead to the conclusion that methylmercury is able to cross the blood-retina barrier, accumulate in the cells and layers of the retina, and induce changes in the photoreceptors of H. malabaricus even under subchronic exposure. (c) 2012 Elsevier Inc. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

Background: The metabolic capacity for nitrogen fixation is known to be present in several prokaryotic species scattered across taxonomic groups. Experimental detection of nitrogen fixation in microbes requires species-specific conditions, making it difficult to obtain a comprehensive census of this trait. The recent and rapid increase in the availability of microbial genome sequences affords novel opportunities to re-examine the occurrence and distribution of nitrogen fixation genes. The current practice for computational prediction of nitrogen fixation is to use the presence of the nifH and/or nifD genes. Results: Based on a careful comparison of the repertoire of nitrogen fixation genes in known diazotroph species, we propose a new criterion for computational prediction of nitrogen fixation: the presence of a minimum set of six genes coding for structural and biosynthetic components, namely NifHDK and NifENB. Using this criterion, we conducted a comprehensive search in fully sequenced genomes and identified 149 diazotrophic species, including 82 known diazotrophs and 67 species not known to fix nitrogen. The taxonomic distribution of nitrogen fixation in Archaea was limited to the phylum Euryarchaeota; within the domain Bacteria we predict that nitrogen fixation occurs in 13 different phyla. Of these, seven phyla had not hitherto been known to contain species capable of nitrogen fixation. Our analyses also identified protein sequences that are similar to nitrogenase in organisms that do not meet the minimum-gene-set criterion. The existence of nitrogenase-like proteins lacking conserved co-factor ligands in both diazotrophs and non-diazotrophs suggests their potential for performing other, as yet unidentified, metabolic functions. Conclusions: Our predictions expand the known phylogenetic diversity of nitrogen fixation and suggest that this trait may be much more common in nature than currently thought. The diverse phylogenetic distribution of nitrogenase-like proteins indicates potential new roles for anciently duplicated and divergent members of this group of enzymes.
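The six-gene criterion lends itself to a straightforward computational check. The sketch below is purely illustrative (the function, variable names and toy gene lists are invented here; only the required gene set comes from the abstract):

```python
# Illustrative sketch (not the authors' code): apply the minimum-gene-set
# criterion for predicting diazotrophy from a genome's gene annotations.
# The required set follows the abstract: NifH, NifD, NifK, NifE, NifN, NifB.

REQUIRED_NIF_GENES = {"nifH", "nifD", "nifK", "nifE", "nifN", "nifB"}

def predicts_nitrogen_fixation(annotated_genes):
    """Return True if the genome contains the full six-gene minimum set."""
    present = {g.lower() for g in annotated_genes}
    return {g.lower() for g in REQUIRED_NIF_GENES} <= present

# Hypothetical usage with toy annotation lists:
genome_a = ["nifH", "nifD", "nifK", "nifE", "nifN", "nifB", "gyrA"]
genome_b = ["nifH", "nifD", "gyrA"]          # nifH/nifD alone no longer suffice
print(predicts_nitrogen_fixation(genome_a))  # True
print(predicts_nitrogen_fixation(genome_b))  # False
```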

Relevance:

10.00%

Publisher:

Abstract:

Duchenne muscular dystrophy (DMD) is a genetic disease characterized by progressive muscle weakness and irreversible degeneration, accompanied by sensory and neuropsychological impairments. The objectives of the study were to assess the behavioral profile of children/adolescents with DMD and the influence of motor impairment, of the age at the start of wheelchair use, and of the age at diagnosis. Thirty-four patients and 20 controls participated. The patients were divided into two groups according to intelligence quotient (IQ). Parents completed the Child Behavior Checklist (Inventário de Comportamentos da Infância e da Adolescência). Patients with DMD obtained lower scores on the Activities and Social scales (p < 0.01; ANCOVA). Patients with IQ < 80 showed lower School scores. Motor impairment and the ages related to wheelchair use and diagnosis correlated with psychiatric/somatic symptoms and school problems. The findings emphasize the need for educational programs about the disease as a basis for the development of social inclusion strategies.

Relevance:

10.00%

Publisher:

Abstract:

Cellular rheology has recently undergone rapid development, with particular attention to the mechanical properties of the cytoskeleton and its main components: actin filaments, intermediate filaments, microtubules and cross-linking proteins. However, it is not clear which cellular structural changes directly affect the cell's mechanical properties. Thus, in this work, we aimed to quantify the structural rearrangement of these fibers that may result in changes in cell mechanics. We created an image analysis platform to study smooth muscle cells from different arteries (aorta, mammary, renal, carotid and coronary), processing 31, 29, 31, 30 and 35 cell images, respectively, obtained by confocal microscopy. The platform was developed in Matlab (MathWorks) and uses the Sobel operator to determine the orientation of the actin fibers of the cell, labeled with phalloidin. The Sobel operator is used as a filter capable of calculating the pixel brightness gradient, point by point, in the image. The operator uses vertical and horizontal convolution kernels to calculate the magnitude and the angle of the pixel intensity gradient. The image analysis follows this sequence: (1) open a given set of cell images to be processed; (2) set a fixed threshold to eliminate noise, based on Otsu's method; (3) detect the fiber edges in the image using the Sobel operator; and (4) quantify the actin fiber orientation. Our first result is the probability distribution Π(Δθ) of finding a given fiber angle deviation Δθ from the main cell fiber orientation θ0. Π(Δθ) follows an exponential decay, Π(Δθ) = A exp(−αΔθ), with respect to θ0. We defined and determined a misalignment index α of the fibers for each artery type: coronary αCo = (1.72 ± 0.36) rad⁻¹; renal αRe = (1.43 ± 0.64) rad⁻¹; aorta αAo = (1.42 ± 0.43) rad⁻¹; mammary αMa = (1.12 ± 0.50) rad⁻¹; and carotid αCa = (1.01 ± 0.39) rad⁻¹. The α values of coronary and carotid are statistically different (p < 0.05) among all analyzed cells. We discuss our results by correlating the misalignment index with the experimental cell mechanical properties obtained by Optical Magnetic Twisting Cytometry on the same group of cells.
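As an illustration of the pipeline described above, here is a minimal sketch in Python rather than the authors' Matlab platform; the function name, the use of scikit-image/SciPy, and the binning choices are assumptions made for this example:

```python
# Hedged sketch (not the authors' Matlab platform): estimate actin fiber
# orientation from a fluorescence image with a Sobel gradient, then fit the
# exponential decay Pi(dtheta) = A * exp(-alpha * dtheta) to the deviations
# from the dominant orientation. Assumes numpy, scipy and scikit-image.
import numpy as np
from skimage import io, filters
from scipy.optimize import curve_fit

def misalignment_index(image_path, n_bins=36):
    img = io.imread(image_path, as_gray=True).astype(float)

    # (2) fixed threshold from Otsu's method to suppress background noise
    mask = img > filters.threshold_otsu(img)

    # (3) Sobel gradients give the direction of the intensity gradient
    gx, gy = filters.sobel_v(img), filters.sobel_h(img)
    angle = np.arctan2(gy, gx)[mask]                 # gradient direction, rad
    fiber_angle = np.mod(angle + np.pi / 2, np.pi)   # fibers run normal to the gradient

    # (4) deviation from the dominant (modal) fiber orientation theta_0
    hist, edges = np.histogram(fiber_angle, bins=n_bins, range=(0, np.pi))
    theta0 = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    dtheta = np.abs(fiber_angle - theta0)
    dtheta = np.minimum(dtheta, np.pi - dtheta)      # wrap to [0, pi/2]

    # fit Pi(dtheta) = A * exp(-alpha * dtheta) to the empirical distribution
    p, edges = np.histogram(dtheta, bins=n_bins, range=(0, np.pi / 2), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    (A, alpha), _ = curve_fit(lambda x, A, a: A * np.exp(-a * x), centers, p, p0=(1.0, 1.0))
    return alpha   # misalignment index, rad^-1
```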

Relevance:

10.00%

Publisher:

Abstract:

The atmospheric CO2 level is rising. Its greenhouse effect is partially mitigated by terrestrial (plants) and marine photosynthetic organisms (algae, phytoplankton), and also by the less well-known chemosynthetic bacteria. Within this group of bacteria, nitrifiers have a direct and indirect impact on carbon fixation because, on the one hand, they are autotrophs and, on the other, they release inorganic nitrogenous nutrients that feed other photoautotrophs. A new assay which simplifies the measurement of nitrification would improve our knowledge of the ocean's capacity to fix CO2. Knowing how to cultivate these microbes from marine water samples is a first step towards developing new nitrification detection techniques. During the last six months, we have isolated and cultured a natural assemblage of marine nitrifiers. Our larger objective is to develop a way to enzymatically detect nitrification. However, to do this, we need large quantities of nitrifiers. Consequently, at this point, culturing this marine nitrifier community is our priority. We have learned that pH, nutrient levels, air flow, temperature, low light and sterility are critical for growing healthy nitrifiers. With this knowledge we will now be able to conduct experiments with the nitrifiers and develop the methodology that we seek.

Relevance:

10.00%

Publisher:

Abstract:

Matita (Italian for "pencil") is a new interactive theorem prover under development at the University of Bologna. Compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects. The underlying calculus of the system, namely the Calculus of (Co)Inductive Constructions (CIC for short), is well known and is used as the basis of another mainstream proof assistant, Coq, with which Matita is to some extent compatible. In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script to store textual commands for the system. In the tradition of LCF, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script-management technique behind the popularity of the Proof General generic interface for interactive theorem provers: while editing a script, the user can move the execution point forward to deliver commands to the system, or backward to retract (or "undo") past commands. Matita has been developed from scratch over the past 8 years by several members of the Helm research group; the author of this thesis is one of those members. Matita is now a full-fledged proof assistant with a library of about 1,000 concepts. Several innovative solutions spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to user interaction with theorem provers and to which this thesis author was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below.

Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae. Since mathematical notation is ambiguous, parsing formulae typeset the way mathematicians like to write them on paper is a challenging task, one neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine which permits typing formulae in familiar mathematical notation.

Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics. With tacticals, scripts can be made shorter, more readable, and more resilient to changes. Unfortunately, they are de facto incompatible with state-of-the-art user interfaces based on script management. Such interfaces do not permit positioning the execution point inside complex tacticals, thus introducing a trade-off between the usefulness of structuring scripts and a tedious big-step execution behavior during script replay. In Matita we break this trade-off with tinycals: an alternative to a subset of LCF tacticals which can be evaluated in a more fine-grained manner.

Extensible yet meaningful notation. Proof assistant users often face the need to create new mathematical notation in order to ease the use of new concepts. The framework used in Matita for dealing with extensible notation both accounts for high-quality two-dimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms. Using our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae acting on their rendered forms is possible too.

Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners. Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of web services (called tutors) which can independently try to complete open sub-goals of the proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can interactively or automatically apply them to the current proof.

Another innovative aspect of Matita, only marginally touched by this thesis, is the embedded content-based search engine Whelp, which is exploited to various ends, from automatic theorem proving to avoiding duplicate work for the user. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.
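Tinycals are specific to Matita's proof language, but the idea of fine-grained evaluation of structured tacticals can be pictured abstractly. The following toy model is an assumption-laden sketch (the representation, all names, and the two toy tactics are invented here), not Matita's implementation:

```python
# Toy model of fine-grained evaluation of structured tacticals: a tactic maps
# a goal to a list of subgoals; a script is a nested structure of tactics; the
# interpreter advances one atomic tactic per step instead of evaluating a
# whole tactical atomically, which is the behaviour tinycals allow.
from dataclasses import dataclass, field
from typing import Callable, List, Union

Tactic = Callable[[str], List[str]]           # goal -> remaining subgoals
Script = Union[Tactic, List["Script"]]        # an atomic tactic or a sequence

@dataclass
class ProofState:
    goals: List[str]
    pending: List[Script] = field(default_factory=list)  # explicit execution point

    def load(self, script: Script) -> None:
        self.pending = script if isinstance(script, list) else [script]

    def step(self) -> None:
        """Execute a single atomic tactic; sequences are unfolded lazily."""
        while self.pending and isinstance(self.pending[0], list):
            head = self.pending.pop(0)
            self.pending = list(head) + self.pending      # unfold one level
        if not self.pending or not self.goals:
            return
        tactic = self.pending.pop(0)
        goal = self.goals.pop(0)
        self.goals = tactic(goal) + self.goals            # replace goal by its subgoals

# Hypothetical usage: two toy tactics combined in a sequence, stepped one by one.
split = lambda g: [g + ".left", g + ".right"]   # opens two subgoals
close = lambda g: []                            # closes the current subgoal
state = ProofState(goals=["G"])
state.load([split, [close, close]])
state.step(); print(state.goals)   # ['G.left', 'G.right']
state.step(); print(state.goals)   # ['G.right']
```

The point illustrated is only that the execution point can stop between the atomic tactics of a structured script, which whole-tactical evaluation in script-management interfaces does not allow.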

Relevance:

10.00%

Publisher:

Abstract:

This thesis comes after a strong contribution to the realization of the CMS computing system, which can be seen as a relevant part of the experiment itself. A physics analysis completes the road from Monte Carlo production and analysis-tool development to the final physics study, which is the actual goal of the experiment. The physics topic of this thesis is the study of the fully hadronic decay of tt events in the CMS experiment. A multi-jet trigger has been designed to fix a reasonable starting point, reducing the multi-jet sample to the nominal trigger rate. An offline selection has been developed to improve the S/B ratio, and b-tagging is applied to provide a further S/B improvement. The selection is applied to the background sample and to the samples generated at different top quark masses. The top quark mass candidate is reconstructed for all those samples using a kinematic fitter. The resulting distributions are used to build p.d.f.'s, interpolating them with a continuous arbitrary curve. These curves are used to perform the top mass measurement through a likelihood comparison.
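The final step, comparing the observed mass candidates against mass-dependent templates, can be sketched generically. The following is a toy illustration (Gaussian templates, invented numbers and names), not the analysis code, in which the templates come from the simulated samples and the kinematic fit:

```python
# Hedged sketch of a template-likelihood mass measurement: for each top-mass
# hypothesis we assume a p.d.f. of the reconstructed mass (here toy Gaussians),
# and the estimate is the hypothesis minimizing the negative log-likelihood
# of the observed candidates.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
observed = rng.normal(172.5, 12.0, size=500)       # toy reconstructed masses, GeV

def nll(mass_hypothesis, data, resolution=12.0):
    """Negative log-likelihood of the data under one template p.d.f."""
    pdf = norm.pdf(data, loc=mass_hypothesis, scale=resolution)
    return -np.sum(np.log(pdf + 1e-300))

hypotheses = np.arange(165.0, 180.5, 0.5)          # scanned top-mass points, GeV
scan = [nll(m, observed) for m in hypotheses]
print("best-fit mass:", hypotheses[int(np.argmin(scan))], "GeV")
```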

Relevance:

10.00%

Publisher:

Abstract:

Generic programming is likely to become a new challenge for a critical mass of developers. It is therefore crucial to refine the support for generic programming in mainstream object-oriented languages, both at the design and at the implementation level, as well as to suggest novel ways to exploit the additional degree of expressiveness made available by genericity. This study is meant to provide a contribution towards bringing Java genericity to a more mature stage with respect to mainstream programming practice, by increasing the effectiveness of its implementation and by revealing its full expressive power in real-world scenarios. With respect to the current research setting, the main contribution of the thesis is twofold. First, we propose a revised implementation of Java generics that greatly increases the expressiveness of the Java platform by adding reification support for generic types. Secondly, we show how Java genericity can be leveraged in a real-world case study in the context of multi-paradigm language integration. Several approaches have been proposed to overcome the lack of reification of generic types in the Java programming language. Existing approaches tackle the problem by defining new translation techniques which allow for a runtime representation of generics and wildcards. Unfortunately, most approaches suffer from several problems: heterogeneous translations are known to be problematic when considering reification of generic methods and wildcards; on the other hand, more sophisticated techniques requiring changes in the Java runtime support reified generics through a true language extension (where clauses), so that backward compatibility is compromised. In this thesis we develop a sophisticated type-passing technique for addressing the problem of reification of generic types in the Java programming language; this approach, first pioneered by the so-called EGO translator, is here turned into a full-blown solution which reifies generic types inside the Java Virtual Machine (JVM) itself, thus overcoming both the performance penalties and the compatibility issues of the original EGO translator. Java-Prolog integration. Integrating object-oriented and declarative programming has been the subject of several research efforts and corresponding technologies. Such proposals come in two flavours: either attempting to join the two paradigms, or simply providing an interface library for accessing Prolog declarative features from a mainstream object-oriented language such as Java. Both solutions, however, have drawbacks: in the case of hybrid languages featuring both object-oriented and logic traits, the resulting language is typically too complex, making mainstream application development a harder task; in the case of library-based integration approaches, there is no true language integration, and some "boilerplate code" has to be written to fix the paradigm mismatch. In this thesis we develop a framework called PatJ which promotes seamless exploitation of Prolog programming in Java. A sophisticated usage of generics/wildcards allows a precise mapping between object-oriented and declarative features to be defined. PatJ defines a hierarchy of classes where the bidirectional semantics of Prolog terms is modelled directly at the level of the Java generic type system.
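The type-passing idea itself is language-independent. Purely as a hypothetical illustration (not the thesis's JVM-level technique; all names here are invented), the effect of reification can be imitated by passing an explicit type token that remains available at run time, in contrast to erased generics:

```python
# Rough illustration only: the "type-passing" idea behind reified generics,
# imitated by handing a generic container an explicit type token that remains
# inspectable at run time (erased generics would discard it after checking).
from typing import Generic, List, Type, TypeVar

T = TypeVar("T")

class ReifiedList(Generic[T]):
    def __init__(self, element_type: Type[T]):
        self.element_type = element_type       # the reified type argument
        self._items: List[T] = []

    def add(self, item: T) -> None:
        # Because the type argument was passed, it can be checked at run time.
        if not isinstance(item, self.element_type):
            raise TypeError(f"expected {self.element_type.__name__}")
        self._items.append(item)

    def element_type_name(self) -> str:
        return self.element_type.__name__      # information erasure would lose

ints = ReifiedList(int)
ints.add(3)
print(ints.element_type_name())   # "int"
```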

Relevance:

10.00%

Publisher:

Abstract:

An archetype selected over the centuries. Adalberto Libera wrote little, showing more inclination to use the project itself as his only means of verification. This study surveys his projects from a purely compositional standpoint, in relation to the motif that returns most often, with continuity and consistency, throughout his work. "The fruit of a type selected over centuries", in Libera's words, is one of the most widely used and repeated spatial archetypes in the history of architecture. Defined by a few consolidated elements and characterized by geometric precision and absoluteness, the central space has served, over the course of the evolution of architecture, in both its constructive and its symbolic aspects, very different uses: from the historical period in which it coincided with sacred space par excellence, to others in which it lent itself to the expressive possibilities of a more "secular" kind. The central space was born from constructive premises, and that same rationale determined its structural transformations over the centuries, demanding from time to time, as technology advanced, the maximum possible extension and different applications, which almost always coincided with the motif of monumental space. But it is in the Roman world that the motif of the central space is defined, through a series of achievements that fixed its character in perpetuity. The Pantheon represents both its highest result and, at the same time, the indispensable archetype, to the point that it becomes difficult to discuss the central space without it. Yet in ancient Rome the central-space motif also informed, in equally exemplary ways, monuments, public spaces and buildings with very different implications. The Renaissance, for which Wittkower established once and for all the nature and interpretation of the centrally planned sacred space, and hence the symbolic meanings underlying the interpretations related to Humanism, fixed the theme by drawing it, through the study and direct observation carried out by the fifteenth- and sixteenth-century masters, from the ruins that, in those years of renewed interest in the classical world, the first great excavations of ancient Rome were bringing to light, to the great surprise of all. It is no accident, then, that the architectural work of Libera is investigated here through the motif of the central space. Examining his projects and built works, it turns out that this motif is particularly evident from the earliest to the latest work, crossing the war period which, for many authors and in different ways, marks the distinction between one phase and another, or the end of the last. The theme and the occasion, always distinct for Libera, are precisely the key through which to investigate his work, leading to the discovery that the first, in this case the central plan, is the constant underlying all his work, while the second is the variable that returns, each time different and each time the same. Libera, trained on the major works surviving from antiquity and on their constructive method, consciously understood that the characters of architectural works, if valid, outlast time and survive contingent uses and functions. As for the forms through which they are materialized, these are purely contingent, and therefore available to be transferred from one work to another, from one project to another, even by way of borrowing.
Using the same two words at issue, it becomes clear that the subject of this study is both Libera's method and the occasion offered by the study of the central space in his work. There is, however, one aspect that evolves, with respect to the centrally planned space, as Libera's work on the archetype progresses, and it is the rationale behind the whole path, precisely because the space is built entirely on a centric rationale. It is precisely the "center" of the space that, ultimately, reveals the real progression and the awareness that Libera matured and changed over the years. In the first phase, heavily laden with symbolic superstructure, even if used instrumentally by Libera, always ill-disposed to sacrifice the idea of architecture to a phantom, the center of the space is simply the figure that identifies it, the icon that represents the space itself: the cross, the flame or the statue are different representations of the same idea of a center built around an icon. In the second part of his work on the central space, as the size of the commissions and the demands of patronage grew, the centric image of the space expands, takes on a celebratory character and becomes, in a different way, itself a symbol. One sees, to take only two examples, how the project for the "Civiltà Italiana" or the symbolic arch exemplify this different attitude. From the same point of view, one understands how the two projects formulated on the reuse of the Mausoleum of Augustus are the key to his passage from the first to the second phase: in the second project the Ara Pacis, making itself the center of the composition, "breaks" the pattern of the symbolic figure at the center, because it is itself a piece of architecture. In doing so, a transition takes place whereby the building itself, the central space, becomes the center of the space that it creates and determines, extending the potential and the expressiveness of the enclosure (or cover) that defines the centered basin. In this second series of projects, whose apex and point of "crisis" is the Palazzo dei Congressi at the E42, the symbol is no longer received into the very geometry of the space; rather, it is the space itself and the "action" that will take place within it: an action involving a movement, in the case of the Arco simbolico and the "Civiltà Italiana", or, more frequently, a celebration, as in the great Sala dei Ricevimenti at the E42, which, in the first project proposal, is represented as a large hall populated by people in formal dress, at a reception, in fact. In other words, in this second phase the architecture is no longer a mere container but represents the shape of the space, representing that which it "contains". The next step, marked by the awareness matured in the transition to the post-war period, radically changes the conception of centric space, even though formally and compositionally Libera continues to work on the same elements, combined and related in a different way. In this last phase Libera puts man at the center: human beings, who in the two previous phases were already, latently, at the center of the composition, even if relegated to the role of spectators in the first period or of supporting actors in the second, now become the heart of the space. And it is, as we shall see, the very form of being together, in the form of the "assembly", in its different shades (up to the sacred), that determines the shape of the space and the way the parts that compose it relate to one another.
The reconstruction of the birth, evolution and development of the central-space motif in Libera's work, rooted in the study of the monuments of ancient Rome, intersecting fifty years of recent history, and honed by the constancy of a method practiced over a lifetime, thus becomes a project in itself, employing the same mechanisms adopted by Libera: decomposition and recomposition, the search for synthesis and unity of form are in fact the structure of this research. The road taken by Libera is, above all, a lesson in clarity and rationality, and this work aims to uncover at least a fragment of it.

Relevance:

10.00%

Publisher:

Abstract:

This thesis is concerned with the calculation of virtual Compton scattering (VCS) in manifestly Lorentz-invariant baryon chiral perturbation theory to fourth order in the momentum and quark-mass expansion. In the one-photon-exchange approximation, the VCS process is experimentally accessible in photon electroproduction and has been measured at the MAMI facility in Mainz, at MIT-Bates, and at Jefferson Lab. Through VCS one gains new information on the nucleon structure beyond its static properties, such as charge, magnetic moments, or form factors. The nucleon response to an incident electromagnetic field is parameterized in terms of 2 spin-independent (scalar) and 4 spin-dependent (vector) generalized polarizabilities (GPs). In analogy to classical electrodynamics, the two scalar GPs represent the induced electric and magnetic dipole polarizability of a medium. For the vector GPs, a classical interpretation is less straightforward. They are derived from a multipole expansion of the VCS amplitude. This thesis describes the first calculation of all GPs within the framework of manifestly Lorentz-invariant baryon chiral perturbation theory. Because of the comparatively large number of diagrams (100 one-loop diagrams need to be calculated), several computer programs were developed dealing with different aspects of Feynman diagram calculations. One can distinguish between two areas of development, the first concerning the algebraic manipulation of large expressions, and the second dealing with numerical instabilities in the calculation of one-loop integrals. In this thesis we describe our approach using Mathematica and FORM for the algebraic tasks, and C for the numerical evaluations. We use our results for real Compton scattering to fix the two unknown low-energy constants emerging at fourth order. Furthermore, we present the results for the differential cross sections and the generalized polarizabilities of VCS off the proton.
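The classical analogy invoked for the scalar polarizabilities can be made explicit with the standard textbook relations for real Compton scattering (not taken from the thesis; conventions for the factors of 4π vary between references):

```latex
% Static fields induce dipole moments proportional to the scalar
% polarizabilities, giving the leading polarizability interaction.
\vec{p} = 4\pi\,\alpha_{E1}\,\vec{E}, \qquad
\vec{m} = 4\pi\,\beta_{M1}\,\vec{H}, \qquad
H_{\mathrm{eff}} = -\tfrac{1}{2}\,4\pi\left(\alpha_{E1}\,\vec{E}^{\,2}
                 + \beta_{M1}\,\vec{H}^{\,2}\right).
```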

Relevance:

10.00%

Publisher:

Abstract:

This dissertation treats the anomalous sector, i.e. the sector of odd intrinsic parity, in mesonic chiral perturbation theory (mesonic ChPT) up to chiral order O(q^6). An introduction to quantum chromodynamics (QCD) and its connection with chiral symmetry is followed by a discussion of mesonic ChPT in the sectors of even and odd intrinsic parity up to order O(q^4). The so-called Wess-Zumino-Witten term, which reflects the influence of the axial anomaly within ChPT, is studied. Subsequently, the most general Lagrangian of order O(q^6) in the sector of odd intrinsic parity is analyzed in detail. In its SU(3) formulation it contains 23 low-energy constants (LECs). From the point of view of ChPT these LECs are free parameters that must be fixed in some way. We work out in which processes and in which combinations the individual LECs appear. We then attempt to estimate and fit as many of these LECs as possible by means of vector-meson dominance (VMD) and experimental data. To this end, the procedure for a consistent calculation in the sector of odd intrinsic parity up to order O(q^6) is studied first, followed by the calculation of a total of fourteen suitable processes within ChPT up to order O(q^6). Using experimental data, thirteen of the LECs are fitted, although experimental data are currently not available for all of the processes considered. The results are discussed, and differences and agreements with other calculations are worked out. In summary, one obtains a comprehensive insight into the sector of odd intrinsic parity in mesonic ChPT up to order O(q^6).
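The fitting step, determining LECs from calculated processes and data, can be pictured generically. The sketch below is hypothetical (the design matrix, errors and all numbers are invented; it only illustrates a weighted least-squares determination of LEC combinations, not the thesis's observables):

```python
# Generic, hypothetical sketch of fixing low-energy constants from data:
# each observable is modelled as a parameter-free (loop/VMD) prediction plus
# a linear combination of LECs, and the LECs are obtained by weighted least
# squares. All numbers below are invented for illustration.
import numpy as np

# rows: observables from different processes; columns: contributing LECs
design = np.array([[1.0, 0.5, 0.0],
                   [0.0, 1.0, 2.0],
                   [1.5, 0.0, 1.0],
                   [0.5, 0.5, 0.5]])
loop_prediction = np.array([0.10, 0.05, 0.20, 0.08])   # parameter-free part
measured = np.array([0.32, 0.41, 0.55, 0.28])
sigma = np.array([0.03, 0.05, 0.04, 0.03])             # experimental errors

# weighted least squares: minimize chi^2 = sum ((measured - model)/sigma)^2
A = design / sigma[:, None]
b = (measured - loop_prediction) / sigma
lecs, *_ = np.linalg.lstsq(A, b, rcond=None)
print("fitted LECs:", lecs)
```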

Relevance:

10.00%

Publisher:

Abstract:

In this thesis we have developed solutions to common issues regarding widefield microscopes, facing the problem of the intensity inhomogeneity of an image and dealing with two strong limitations: the impossibility of acquiring either highly detailed images representative of whole samples or images of deep 3D objects. First, we cope with the problem of the non-uniform distribution of the light signal inside a single image, known as vignetting. In particular, we propose, for both light and fluorescence microscopy, non-parametric multi-image-based methods in which the vignetting function is estimated directly from the sample without requiring any prior information. After obtaining flat-field-corrected images, we address the limited field of view of the camera, so as to be able to acquire large areas at high magnification. For this purpose, we developed mosaicing techniques capable of working online. Starting from a set of manually acquired overlapping images, we validated a fast registration approach to accurately stitch the images together. Finally, we worked to virtually extend the field of view of the camera in the third dimension, with the purpose of reconstructing a single, completely in-focus image of objects that have significant depth or are displaced across different focal planes. After studying the existing approaches for extending the depth of focus of the microscope, we proposed a general method that does not require any prior information. In order to compare the outcome of existing methods, different standard metrics are commonly used in the literature. However, no metric is available to compare different methods in real cases. First, we validated a metric able to rank the methods as the Universal Quality Index does, but without needing any reference ground truth. Second, we showed that the approach we developed performs better in both synthetic and real cases.
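A minimal sketch of the multi-image, non-parametric flat-field idea is given below; it is an assumption-laden illustration (median-based estimate, Gaussian smoothing, invented function names), not the method developed in the thesis:

```python
# Hedged sketch: a simple non-parametric, multi-image flat-field estimate.
# The vignetting function is approximated by the per-pixel median over many
# images of different fields of view, smoothed and normalized; each image is
# then divided by it to obtain a flat-field-corrected image.
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_vignetting(stack, smooth_sigma=25):
    """stack: array of shape (n_images, H, W) of raw images."""
    flat = np.median(stack.astype(float), axis=0)   # sample structure averages out
    flat = gaussian_filter(flat, smooth_sigma)      # keep only the slow shading
    return flat / flat.mean()                       # normalized vignetting field

def flat_field_correct(image, vignetting):
    return image.astype(float) / np.maximum(vignetting, 1e-6)

# Hypothetical usage with a toy stack of 50 images:
rng = np.random.default_rng(0)
stack = rng.uniform(100, 200, size=(50, 256, 256))
corrected = flat_field_correct(stack[0], estimate_vignetting(stack))
```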

Relevance:

10.00%

Publisher:

Abstract:

This thesis is motivated by biological questions concerning the behavior of membrane potentials in neurons. A widely considered model for spiking neurons is the following. Between spikes, the membrane potential behaves like a diffusion process X given by the SDE dX_t = β(X_t) dt + σ(X_t) dB_t, where (B_t) denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential X crosses a certain excitation threshold S, a spike occurs; afterwards the potential is reset to a certain value x_0. In applications it is sometimes possible to observe a diffusion process X between the spikes and to estimate the coefficients β(·) and σ(·) of the SDE. Nevertheless, the thresholds x_0 and S must be determined in order to fully specify the model. One way to approach this problem is to regard x_0 and S as parameters of a statistical model and to estimate them. In this thesis four different cases are discussed, in which we assume that the membrane potential X between spikes is a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process, or a Cox-Ingersoll-Ross process, respectively. In addition, we observe the times between consecutive spikes, which we regard as i.i.d. hitting times of the threshold S by X started in x_0. The first two cases are very similar, and in each of them the maximum likelihood estimator can be given explicitly. Moreover, using LAN theory, the optimality of these estimators is shown. In the OU and CIR cases we choose a minimum-distance method based on comparing the empirical and the true Laplace transform with respect to a Hilbert-space norm. We prove that all estimators are strongly consistent and asymptotically normally distributed. In the last chapter we examine the efficiency of the minimum-distance estimators on simulated data. Furthermore, applications to real data sets and their results are discussed in detail.
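The model can be illustrated with a short simulation. The sketch below (an Euler scheme with invented parameter values; not the thesis's estimation code) generates interspike intervals as hitting times of the threshold S by an Ornstein-Uhlenbeck process started at x_0:

```python
# Illustrative sketch: simulate the spiking model described above with an
# Ornstein-Uhlenbeck potential between spikes, X being reset to x0 whenever
# it crosses the threshold S. The recorded interspike intervals are the
# i.i.d. hitting times used for inference on x0 and S.
import numpy as np

def simulate_isi(n_spikes=200, x0=0.0, S=1.0, theta=1.0, mu=1.2, sigma=0.5, dt=1e-3):
    """OU dynamics dX = theta*(mu - X) dt + sigma dB, reset to x0 at threshold S."""
    rng = np.random.default_rng(1)
    intervals = []
    for _ in range(n_spikes):
        x, t = x0, 0.0
        while x < S:
            x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        intervals.append(t)        # first hitting time of S from x0
    return np.array(intervals)

isi = simulate_isi()
print("mean interspike interval:", isi.mean())
```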

Relevance:

10.00%

Publisher:

Abstract:

One of the major ecological challenges for Lake Victoria's resources is the existence of "hot spots" caused by human waste, urban runoff and industrial effluents. The lake is tending towards eutrophication, which is attributed to the increasing human population in its watershed. A report of the levels of perfluorooctane sulfonate and perfluorooctanoic acid in environmental matrices of Lake Victoria is presented, and the management implications of perfluorinated compounds and similar potential organic pollutants are examined. Two widely consumed and economically important fish species, namely Lates niloticus (Nile perch) and Oreochromis niloticus (Nile tilapia), were obtained from the Winam Gulf of Lake Victoria, Kenya, and analysed for perfluorooctane sulfonate and perfluorooctanoic acid in muscle and liver using liquid chromatography coupled with mass spectrometry. Variability in the concentrations of perfluorooctanoic acid and perfluorooctane sulfonate in river waters (perfluorooctanoic acid 0.4 – 96.4 ng/L and perfluorooctane sulfonate < 0.4 – 13.2 ng/L) was higher than in lake waters (perfluorooctanoic acid 0.4 – 11.7 ng/L and perfluorooctane sulfonate < 0.4 – 2.5 ng/L, respectively). Correlations between perfluorinated compound levels in sediments, fish and water were tested for significance. Wastewater treatment plants and other anthropogenic sources have been identified as significant sources or pathways for the introduction of perfluoroalkyl compounds into the Lake Victoria ecosystem. In this study, elevated concentrations of perfluorooctanoic acid and perfluorooctane sulfonate were found in two wastewater treatment plants (WWTPs) in Kisumu City, Kenya. An alternative analytical method to liquid chromatography/mass spectrometry for the analysis of perfluorocarboxylic acids in abiotic and biotic matrices where high concentrations are expected is also presented. Derivatisation of the acid group to form a suitable alkyl ester provides a compound suitable for mass spectrometric detection coupled with gas chromatography. The acid is esterified with an alkyl halide, i.e. benzyl bromide, as the alkylating agent for perfluorocarboxylic acid quantification. The study also involved degradability measurements of emerging substitutes for perfluorinated surfactants. The stability of these substitutes was tested by employing advanced oxidation processes, followed by conventional tests, among them an automated method based on the manometric respirometry test and a standardized fixed-bed bioreactor (FBBR), applied to perfluorobutane sulfonate (PFBS), a fluoroethylene polymer, a fluorosurfactant (Zonyl), two fluoroaliphatic esters (NOVEC™ FC4430 and NOVEC™ FC4432) and 10-(trifluoromethoxy)decane sulfonate. Most of these emerging surfactants are well established in the market and have been used in several applications as alternatives to PFOS- and PFOA-based surfactants. The results of this study can be used as pioneering information for further studies on the sources, behaviour and fate of PFOA, PFOS and other related compounds in both the abiotic and the biotic compartments of Lake Victoria and other lakes. Furthermore, an overview of the degradation of emerging substitutes for perfluorinated compounds is presented, together with a contribution to method development, especially for acid-group-based fluorosurfactants. The data obtained in this study can in particular be considered when formulating policies and management measures for the preservation and sustainability of Lake Victoria's resources.

Relevance:

10.00%

Publisher:

Abstract:

This thesis deals with algebraic cycles on complex abelian varieties of dimension 4. The aim of the thesis is to construct a non-trivial element in $\mathrm{Griff}^{3,2}(A^4)$. Here $A^4$ denotes the generic abelian variety of dimension 4 with a polarization of type $(1,2,2,2)$. The first three chapters are a review of elementary definitions and notions and thus fix the notation. In them we recall elementary properties of the filtrations $F_S$ and $Z$ on the Chow groups defined by Saito (cf. [Sa0] and [Sa]). We also recall a relation, originating from [Mu], between the $F_S$-filtration and Beauville's decomposition of the Chow groups (cf. [Be2] and [DeMu]). The most important notions in this part are the higher Griffiths groups and the higher-order infinitesimal invariants. We then deal with generalized Prym varieties associated with $(2:1)$ coverings of curves. We give their construction and important geometric properties and compute the type of their polarization. Chapter [p-moduli] contains a result from [BCV] on the dominance of the map $p(3,2)\colon \mathcal{R}(3,2)\longrightarrow \mathcal{A}_4(1,2,2,2)$. This result is relevant for us because it states that the generic abelian variety of dimension 4 with polarization of type $(1,2,2,2)$ is a generalized Prym variety associated with a $(2:1)$ covering of a curve of genus 7 over a curve of genus 3. The second part of the dissertation is the actual work and is structured as follows. Chapter [Deg] contains the construction of the degeneration of $A^4$; that is, in this chapter we construct a family $X\longrightarrow S$ of generalized Prym varieties such that the classifying map $S\longrightarrow \mathcal{A}_4(1,2,2,2)$ is dominant. Furthermore, a relative cycle $Y/S$ on $X/S$ is constructed, together with a subvariety $T\subset S$, such that we can give an explicit description of the embedding $Y\vert_T \hookrightarrow X\vert_T$. The last and most important chapter contains the following: we prove that the second-order infinitesimal invariant $\delta_2(\alpha)$ of $\alpha$ is non-trivial. Here $\alpha$ denotes the component of $Y$ in $\mathrm{Ch}^3_{(2)}(X/S)$ under the Beauville decomposition. With this, and with the help of the results of Chapter [Cohm], we can show that $0\neq [\alpha] \in \mathrm{Griff}^{3,2}(X/S)$. We can refine this statement and show (cf. Theorem [a4]) that for generic $s\in S$ we have $0\neq [\alpha_s]\in \mathrm{Griff}^{3,2}(A^4)$, where $A^4$ is the generic abelian variety of dimension 4 with polarization of type $(1,2,2,2)$.