873 results for multi-component and multi-site adsorption
Abstract:
This work provides a step forward in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we guided the reader through the fundamental notions of probability, stochastic processes, stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focused on fractional Brownian motion (fBm) and its discrete-time increment process, fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modeling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We studied LRD in depth, giving many real-data examples, providing statistical analyses and introducing parametric estimation methods. We then introduced the theory of fractional integrals and derivatives, which indeed turns out to be very appropriate for studying and modeling systems with long-memory properties. After introducing the basic concepts, we provided many examples and applications. For instance, we investigated the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems that deviates from the classical exponential Debye pattern. We then focused on generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations have been obtained by using fractional integrals and derivatives of distributed order.
In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduced and studied the generalized grey Brownian motion (ggBm), a parametric class of H-sssi processes whose marginal probability density function evolves in time according to a partial integro-differential equation of fractional type. The ggBm is, of course, non-Markovian. Throughout the work, we have remarked many times that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t). All these processes provide suitable stochastic models for the starting equation. In studying the ggBm, we focused on a subclass made up of processes with stationary increments. The ggBm has been defined canonically in the so-called grey noise space. However, we have been able to provide a characterization that is independent of the underlying probability space. We also pointed out that the generalized grey Brownian motion is a direct generalization of a Gaussian process; in particular, it generalizes both Brownian motion and fractional Brownian motion. Finally, we introduced and analyzed a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We started from the forward drift equation, which was made non-local in time by the introduction of a suitably chosen memory kernel K(t). The resulting non-Markovian equation can be interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then considered the subordinated process Y(t)=X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation involving the same memory kernel K(t).
We developed several applications and derived the exact solutions. Moreover, we considered different stochastic models for the given equations, providing path simulations.
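As an illustration of the processes discussed above, the following sketch samples fractional Gaussian noise exactly from its autocovariance γ(k) = ½(|k+1|^{2H} − 2|k|^{2H} + |k−1|^{2H}) via a Cholesky factorization and obtains fractional Brownian motion as its cumulative sum. This is a standard construction, not code from the thesis; function names and parameters are our own.

```python
import numpy as np

def fgn_cov(n, H):
    # Autocovariance of fGn: gamma(k) = 0.5*(|k+1|^2H - 2|k|^2H + |k-1|^2H)
    k = np.arange(n)
    g = 0.5 * (np.abs(k + 1)**(2 * H) - 2 * np.abs(k)**(2 * H)
               + np.abs(k - 1)**(2 * H))
    return g[np.abs(np.subtract.outer(k, k))]

def sample_fbm(n, H, rng):
    # Exact (Cholesky) sample of n fGn increments; fBm is their cumulative sum.
    L = np.linalg.cholesky(fgn_cov(n, H) + 1e-12 * np.eye(n))
    fgn = L @ rng.standard_normal(n)
    return np.concatenate(([0.0], np.cumsum(fgn)))
```

The self-similarity property Var[B_H(t)] = t^{2H} can then be checked empirically on repeated samples.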
Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of the present thesis is a distributed measurement system to be installed in medium-voltage power networks, together with the methods developed to analyze the data acquired by the measurement system and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines issued. The quality of the voltage delivered by utilities, and influenced by customers at the various points of a network, emerged as an issue only in recent years, in particular as a consequence of the liberalization of the energy market. Traditionally, the quality of the delivered energy has been associated mostly with its continuity; hence reliability was the main characteristic to be ensured in power systems. Nowadays, the number and duration of interruptions are the "quality indicators" commonly perceived by most customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that, although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can also be used to improve system reliability. Given the vast range of power-quality-degrading phenomena that can occur in distribution networks, the study has focused on electromagnetic transients affecting line voltages.
The outcome of this study has been the design and realization of a distributed measurement system that continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system, and they must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowledge of the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows. Chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. The state of the art concerning methods to detect and locate faults in distribution networks is then presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of this approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case. In this way the performance of the location procedure is tested first under ideal and then under realistic operating conditions.
Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of each acquisition channel in the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty of the estimated fault position in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numerical procedure. The last chapter describes a device, designed and realized during the PhD activity, aimed at replacing the commercial capacitive voltage divider in the conditioning block of the measurement chain. This study was carried out to provide an alternative to the transducer in use that could offer equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the application of the method much more feasible.
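Two ideas in this abstract, locating a transient source from arrival times recorded at remote stations and propagating device uncertainties into a combined uncertainty on the estimated position, can be sketched as follows. This is a generic two-ended time-of-arrival scheme with an assumed propagation speed, not the thesis's exact procedure; all names and numbers are illustrative.

```python
import numpy as np

def locate_fault(line_length, wave_speed, t_a, t_b):
    """Fault distance from station A on a line of length line_length (m),
    from transient arrival times t_a, t_b (s) at the two line ends.
    Since t_a - t_b = (2d - L)/v, it follows that d = (L + v*(t_a - t_b))/2."""
    return 0.5 * (line_length + wave_speed * (t_a - t_b))

def combined_uncertainty(line_length, wave_speed, t_a, t_b,
                         sigma_t, n=20000, seed=0):
    """GUM-style Monte Carlo propagation of a timing uncertainty sigma_t
    (assumed equal at both stations) into the estimated fault position."""
    rng = np.random.default_rng(seed)
    d = locate_fault(line_length, wave_speed,
                     t_a + sigma_t * rng.standard_normal(n),
                     t_b + sigma_t * rng.standard_normal(n))
    return d.mean(), d.std(ddof=1)

# Example: 10 km line, assumed wave speed 1.8e8 m/s, fault 3 km from A.
t_a, t_b = 3000 / 1.8e8, 7000 / 1.8e8
```

With uncorrelated timing errors the analytic standard uncertainty is (v/2)·√2·σ_t, which the Monte Carlo estimate reproduces.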
Abstract:
The Székesfehérvár Ruin Garden is a unique assemblage of monuments belonging to the cultural heritage of Hungary, owing to its important role in the Middle Ages as the coronation and burial church of the kings of the Hungarian Christian Kingdom. It has been nominated as a "National Monument", and its protection is therefore required now and in the future. Moreover, it was reconstructed and expanded several times throughout Hungarian history. A quick overview of the current state of the monument reveals several lithotypes among the remaining building and decorative stones. Research on these materials is therefore crucial not only for the conservation of this specific monument but also for other historic structures in Central Europe. The current research is divided into three main parts: i) description of the lithologies and their provenance, ii) testing of the physical properties of the historic material, and iii) durability tests of analogous stones obtained from active quarries. The survey of the National Monument of Székesfehérvár focused on the historical importance and architecture of the monument, the different construction periods, and the identification of the different building stones and their distribution in the remaining parts of the monument; it also included provenance analyses. The second part consisted of in situ and laboratory testing of the physical properties of the historic material. In the final phase, samples were taken from local quarries with physical and mineralogical characteristics similar to those of the stones used in the monument. The three studied lithologies are a fine oolitic limestone, a coarse oolitic limestone and a red compact limestone. These stones were used for rock-mechanical and durability tests under laboratory conditions.
The following techniques were used: a) in situ: Schmidt hammer values, moisture content measurements, DRMS, and mapping (construction ages, lithotypes, weathering forms); b) laboratory: petrographic analysis, XRD, determination of real density by helium pycnometer and bulk density by mercury pycnometer, pore size distribution by mercury intrusion porosimetry and by nitrogen adsorption, water absorption, determination of open porosity, DRMS, frost resistance, ultrasonic pulse velocity tests, uniaxial compressive strength tests and dynamic modulus of elasticity. The results show that the initial uniaxial compressive strength is not necessarily a clear indicator of stone durability. Bedding and other lithological heterogeneities can influence the strength and durability of individual specimens. In addition, long-term behaviour is influenced by the exposure conditions, the fabric and, especially, the pore size distribution of each sample. Therefore, a statistical evaluation of the results is highly recommended, and they should be evaluated in combination with other investigations of the internal structure and micro-scale heterogeneities of the material, such as petrographic observation, ultrasonic pulse velocity and porosimetry. Laboratory tests used to estimate the durability of natural stone may give good guidance on its short-term performance, but they should not be taken as an ultimate indication of the long-term behaviour of the stone. The interdisciplinary study of the results confirms that the stones in the monument show deterioration in terms of mineralogy, fabric and physical properties in comparison with the quarried stones. Moreover, the stone testing proves the compatibility between the quarried and historical stones. Good correlation is observed between the non-destructive techniques and the laboratory test results, which allows us to minimize sampling and to assess the condition of the materials.
In conclusion, this research contributes diagnostic knowledge for the further studies that are needed to evaluate the effect of recent and future protective measures.
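As a small worked example of one of the laboratory determinations listed above, open porosity and bulk density can be obtained from dry, water-saturated and hydrostatically weighed masses, as in standards such as EN 1936. The numbers used here are invented for illustration, not measurements from the thesis.

```python
def open_porosity(m_dry, m_sat, m_hydro):
    """Open porosity in vol%: pore volume (m_sat - m_dry) over
    bulk volume (m_sat - m_hydro), both proportional to water mass."""
    return 100.0 * (m_sat - m_dry) / (m_sat - m_hydro)

def bulk_density(m_dry, m_sat, m_hydro, rho_water=1000.0):
    """Bulk density (kg/m^3) from the same three weighings."""
    return m_dry / (m_sat - m_hydro) * rho_water
```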
Abstract:
The aim of this work was to quantify a set of life-history traits of the two tropical warbler species Sylvia boehmi and S. lugens (Aves: Sylviidae; formerly in the genus Parisoma). Thirteen breeding pairs of both species were observed in Kenya from 2000 to 2002. The data were analyzed with multivariate statistics and multistate mark-recapture models. Compared with the temperate Sylvia species, the life-history traits of the two studied species are characterized by small clutches of two eggs, long incubation periods (S. boehmi (b.) 15.0 days, S. lugens (l.) 14.5 days), long nestling periods (b. 12.9 days, l. 16.0 days), and low nest success rates (b. 19.4%, l. 33.2%). The period from fledging to independence of the young was comparatively long, at 58.5 days in S. boehmi and 37.5 days in S. lugens, and the survival rate of fledged young during this period was relatively high (b. 69.2%, l. 55.4%). The annual survival rate of breeding adults was 71.2% in S. boehmi and 57.2% in S. lugens. The seasonality of the habitat, driven by rainy and dry seasons, had no influence on the monthly survival rate over the course of a year. Despite high nest predation rates, there was no clear relationship between predation and feeding rate, nest guarding or nest site.
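Nest success rates of the kind reported above are commonly derived from daily nest survival; a minimal sketch of the classical Mayfield estimator follows. This is a standard method, not necessarily the exact model used in the study, and the numbers are invented for illustration.

```python
def mayfield_nest_success(failures, exposure_days, period_length):
    """Mayfield estimator: daily survival rate = 1 - failures/exposure-days,
    raised to the length of the nesting period (in days)."""
    dsr = 1.0 - failures / exposure_days
    return dsr ** period_length
```

For example, 10 failures over 200 nest-exposure days give a daily survival rate of 0.95, which over a 27-day nesting period compounds to roughly 25% nest success.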
Abstract:
This research comprised the design, assembly, and structural and functional characterization of supramolecular biofunctional architectures for optical biosensing applications. In the first part of the study, a class of interfaces based on the biotin-NeutrAvidin binding matrix was developed for the quantitative control of enzyme surface coverage and activity. Genetically modified β-lactamase was chosen as a model enzyme and attached through a biotinylated spacer to five different types of NeutrAvidin-functionalized chip surfaces. All matrices are suitable for achieving a controlled enzyme surface density. Data obtained by SPR are in excellent agreement with those derived from optical waveguide measurements. Among the various protein-binding strategies investigated in this study, the differences in stiffness and order between alkanethiol-based SAMs and PEGylated surfaces were found to be very important. Matrix D, based on a Nb2O5 coating, showed satisfactory regeneration possibilities. The surface-immobilized enzymes were found to be stable and sufficiently active for a catalytic activity assay. Many factors, such as the steric crowding of surface-attached enzymes, the electrostatic interaction between the negatively charged substrate (Nitrocefin) and the polycationic PLL-g-PEG/PEG-Biotin polymer, mass transport effects, and enzyme orientation, are shown to influence the kinetic parameters of the catalytic analysis. Furthermore, a home-built surface plasmon resonance (SPR) spectrometer and a commercial miniature fiber-optic absorbance spectrometer (FOAS) were combined into a set-up serving as affinity and catalytic biosensor, respectively. The parallel measurements offer the opportunity of on-line activity detection of surface-attached enzymes. The immobilized enzyme does not have to be in contact with the catalytic biosensor, and the SPR chip can easily be cleaned and reused.
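The kinetic parameters mentioned above are conventionally those of the Michaelis-Menten model, whose rate law can be written down in a few lines. This is a textbook relation used for illustration, with invented parameter values rather than data from the thesis.

```python
def michaelis_menten(s, v_max, k_m):
    """Initial catalytic rate v = v_max * [S] / (K_M + [S])
    for substrate concentration s; same units as v_max."""
    return v_max * s / (k_m + s)
```

A standard sanity check is that the rate is half-maximal at [S] = K_M and saturates toward v_max at high substrate concentration.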
Additionally, with regard to the application of the FOAS, the integrated SPR technique allows for quantitative control of the surface density of the enzyme, which is highly relevant for the enzymatic activity. Finally, the miniaturized portable FOAS device can easily be combined as an add-on with many other in situ interfacial detection techniques, such as optical waveguide lightmode spectroscopy (OWLS), quartz crystal microbalance (QCM) measurements, or impedance spectroscopy (IS). Surface plasmon field-enhanced fluorescence spectroscopy (SPFS) allows for an absolute determination of the intrinsic rate constants describing the true parameters that control interfacial hybridization. It thus also allows a study of the differences in surface coupling between OMCVD gold particles and the planar metal films presented in the second part. The multilayer growth process was found to proceed similarly to the way it occurs on planar metal substrates. In contrast to planar bulk metal surfaces, metal colloids exhibit a narrow UV-vis absorption band. This absorption band is observed when the incident photon frequency is resonant with the collective oscillation of the conduction electrons, and is known as the localized surface plasmon resonance (LSPR). LSPR excitation results in extremely large molar extinction coefficients, which are due to a combination of absorption and scattering. In metal-enhanced fluorescence, the absorption is expected to cause quenching and the scattering to cause enhancement. Our further work will therefore focus on developing a detection platform with larger gold particles, which display a dominant scattering component and should enhance the fluorescence signal. Furthermore, the results of sequence-specific detection of DNA hybridization based on OMCVD gold particles demonstrate the excellent application potential of the cheap, simple, and mild preparation protocol used in this gold fabrication method.
In the final chapter, the conformational changes of a commercial carboxymethyl dextran (CMD) substrate induced by pH and ionic strength variations were characterized in depth using surface plasmon resonance spectroscopy. The pH response of CMD is due to the changes in the electrostatics of the system between its protonated and deprotonated forms, while the ionic strength response is attributed to the charge-screening effect of the cations, which shield the charge of the carboxyl groups and prevent efficient electrostatic repulsion. Additional studies were performed using SPFS, with fluorophores labeling the carboxymethyl groups. The CMD matrices showed typical pH and ionic strength responses, such as swelling at high pH and low ionic strength. Furthermore, the effects of the surface charge and the crosslink density of the CMD matrix on the extent of the stimuli responses were investigated. The swelling/collapse ratio decreased with decreasing surface concentration of the carboxyl groups and with increasing crosslink density. The study of the CMD responses to external and internal variables provides valuable background information for practical applications.
Abstract:
On the pathway to synthetic model systems for human cartilage, macroinitiators for the ATRP of styrene sulfonate esters, with different chain lengths and initiation site densities from 10% to 100%, were synthesized. Polymer brushes of styrene sulfonate ethyl ester and styrene sulfonate dodecyl ester with varying grafting density, backbone length and side chain length were synthesized and characterized by 1H-NMR, AUC, AFM, TEM and, in the case of the ethyl esters, GPC-MALLS. Polyelectrolyte brushes of styrene sulfonate were synthesized from the corresponding esters. These brushes were characterized in solution (GPC-MALLS, static and dynamic light scattering, SANS, 1H-NMR) and at solid interfaces (AFM and TEM). It was shown that these brushes may form extended aggregates in solution. The aggregation behavior and the size and shape of the aggregates depend on the side chain length and the degree of saponification. For samples with identical backbone and side chain lengths but varying degrees of ester hydrolysis, marked differences in the aggregation behavior were observed. A functionalized ATRP macroinitiator with a positively charged head group was synthesized and employed for the synthesis of a functionalized polyelectrolyte brush. These brushes were found to form complexes with negatively charged latex particles and are thus suitable as proteoglycan models in the proteoglycan-hyaluronic acid complex.
Abstract:
Volatile organic compounds play a critical role in ozone formation and, together with OH radicals, drive the chemistry of the atmosphere. The simplest volatile organic compound, methane, is a climatologically important greenhouse gas that plays a key role in regulating water vapour in the stratosphere and hydroxyl radicals in the troposphere. The OH radical is the most important atmospheric oxidant, and knowledge of the atmospheric OH sink, together with the OH sources and ambient OH concentrations, is essential for understanding the oxidative capacity of the atmosphere. The oceanic emission and/or uptake of methanol, acetone, acetaldehyde, isoprene and dimethyl sulphide (DMS) was characterized as a function of photosynthetically active radiation (PAR) and a suite of biological parameters in a mesocosm experiment conducted in a Norwegian fjord. High-frequency (ca. 1 min⁻¹) methane measurements were performed using a gas chromatograph with flame ionization detector (GC-FID) in the boreal forests of Finland and the tropical forests of Suriname. A new on-line method (Comparative Reactivity Method, CRM) was developed to directly measure the total OH reactivity (sink) of ambient air. It was observed that under conditions of high biological activity and a PAR of ~450 μmol photons m⁻² s⁻¹, the ocean acted as a net source of acetone; if either of these criteria was not fulfilled, the ocean acted as a net sink of acetone. This new insight into the biogeochemical cycling of acetone at the ocean-air interface has helped to resolve discrepancies between earlier works such as Jacob et al. (2002), who reported the ocean to be a net acetone source (27 Tg yr⁻¹), and Marandino et al. (2005), who reported the ocean to be a net sink of acetone (−48 Tg yr⁻¹). The ocean acted as a net source of isoprene, DMS and acetaldehyde but as a net sink of methanol.
Based on these findings, it is recommended that compound-specific PAR and biological dependencies be used for estimating the influence of the global ocean on atmospheric VOC budgets. Methane was observed to accumulate within the nocturnal boundary layer, clearly indicating emissions from the forest ecosystems. There was a remarkable similarity between the time series of the boreal and tropical forest ecosystems. The averages of the median mixing ratios during a typical diel cycle were 1.83 μmol mol⁻¹ and 1.74 μmol mol⁻¹ for the boreal and tropical forest ecosystems, respectively. A flux value of (3.62 ± 0.87) × 10¹¹ molecules cm⁻² s⁻¹ (or 45.5 ± 11 Tg CH4 yr⁻¹ for the global boreal forest area) was derived, which highlights the importance of the boreal forest ecosystem for the global budget of methane (~600 Tg yr⁻¹). The newly developed CRM technique has a dynamic range of ~4 s⁻¹ to 300 s⁻¹ and an accuracy of ±25%. The system has been tested and calibrated with several single and mixed hydrocarbon standards, showing excellent linearity and agreement with the reactivity of the standards. Field tests at an urban and a forest site illustrate the promise of the new method. The results of this study have improved the current understanding of VOC emissions and uptake by ocean and forest ecosystems. Moreover, a new technique for directly measuring the total OH reactivity of ambient air has been developed and validated, which will be a valuable addition to the existing suite of atmospheric measurement techniques.
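In the published form of the CRM, the total OH reactivity is derived from three measured concentrations of a reagent (pyrrole): C1 without OH, C2 with OH in zero air, and C3 with OH in ambient air. The sketch below follows that published expression; treat both the formula's applicability and the assumed pyrrole + OH rate constant as assumptions for illustration, not as the exact data-processing chain of this thesis.

```python
def crm_total_oh_reactivity(c1, c2, c3, k_pyr=1.2e-10):
    """Total OH reactivity (s^-1) from pyrrole concentrations
    (molecules cm^-3): c1 = no OH, c2 = OH + zero air, c3 = OH + ambient air.
    k_pyr: assumed pyrrole + OH rate constant (cm^3 molecule^-1 s^-1).
    R = k_pyr * c1 * (c3 - c2) / (c1 - c3)."""
    return k_pyr * c1 * (c3 - c2) / (c1 - c3)
```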
Abstract:
Copper and Zn are essential micronutrients for plants, animals and humans; however, they may also be pollutants if they occur at high concentrations in soil. Knowledge of Cu and Zn cycling in soils is therefore required both to guarantee proper nutrition and to control possible risks arising from pollution.

The overall objective of my study was to test whether Cu and Zn stable isotope ratios can be used to investigate the biogeochemistry, sources and transport of these metals in soils. Stable isotope ratios might be especially suitable to trace long-term processes occurring during soil genesis and the transport of pollutants through the soil. In detail, I aimed to answer the questions of whether (1) Cu stable isotopes are fractionated during complexation with humic acid, (2) δ65Cu values can be a tracer for soil-forming processes in redoximorphic soils, (3) δ65Cu values can help to understand soil-forming processes under oxic weathering conditions, and (4) δ65Cu and δ66Zn values can act as tracers of the sources and transport of Cu and Zn in polluted soils.

To answer these questions, I ran adsorption experiments at different pH values in the laboratory and modelled Cu adsorption to humic acid. Furthermore, eight soils were sampled, representing different redox and weathering regimes: two influenced by stagnic water, two by groundwater, two by oxic weathering (Cambisols), and two by podzolation. In all horizons of these soils, I determined selected basic soil properties, partitioned Cu into seven operationally defined fractions, and determined Cu concentrations and Cu isotope ratios (δ65Cu values).
Finally, three additional soils were sampled along a deposition gradient at different distances from a Cu smelter in Slovakia and analyzed, together with bedrock and waste material from the smelter, for selected basic soil properties, Cu and Zn concentrations, and δ65Cu and δ66Zn values.

My results demonstrated the following. (1) Copper was fractionated during adsorption on humic acid, resulting in an isotope fractionation between the immobilized humic acid and the solution (Δ65Cu(IHA-solution)) of 0.26 ± 0.11‰ (2SD); the extent of fractionation was independent of pH and of the involved functional groups of the humic acid. (2) Soil genesis and plant cycling cause measurable Cu isotope fractionation in hydromorphic soils. The results suggested that an increasing number of redox cycles depleted 63Cu with increasing depth, resulting in heavier δ65Cu values. (3) Organic horizons usually had isotopically lighter Cu than mineral soils, presumably because of the preferred uptake and recycling of 63Cu by plants. (4) In a strongly developed Podzol, eluviation zones had lighter and illuviation zones heavier δ65Cu values because of the higher stability of organo-65Cu complexes compared with organo-63Cu complexes. In the Cambisols and a little-developed Podzol, oxic weathering caused increasingly lighter δ65Cu values with increasing depth, the opposite depth trend to that in redoximorphic soils, because of the preferential vertical transport of 63Cu. (5) The δ66Zn values were fractionated during the smelting process and isotopically light Zn was emitted, allowing source identification of the Zn pollution, while the δ65Cu values were unaffected by the smelting and the Cu emissions were isotopically indistinguishable from soil. The δ65Cu values in the polluted soils became lighter down to a depth of 0.4 m, indicating isotope fractionation during transport and a transport depth of 0.4 m in 60 years.
The δ66Zn values showed the opposite depth trend, becoming heavier with depth because of fractionation by plant cycling, speciation changes, and the mixing of native and smelter-derived Zn.

Copper showed a measurable isotope fractionation of approximately 1‰ in unpolluted soils, allowing conclusions to be drawn on plant cycling, transport and redox processes occurring during soil genesis, while δ65Cu and δ66Zn values in contaminated soils allow conclusions on the sources (in my study only possible for Zn), the biogeochemical behavior, and the depth of dislocation of Cu and Zn pollution in soil. I conclude that stable Cu and Zn isotope ratios are a suitable novel tool to trace long-term processes in soils that are difficult to assess otherwise.
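The δ65Cu and δ66Zn values used throughout this abstract are per-mil deviations of a measured isotope ratio from a reference standard, a standard definition that can be sketched in one line (the ratio values below are purely illustrative):

```python
def delta_per_mil(r_sample, r_standard):
    """Delta notation in per mil, e.g. d65Cu = (R_sample/R_std - 1) * 1000
    with R = 65Cu/63Cu (analogously d66Zn with R = 66Zn/64Zn)."""
    return (r_sample / r_standard - 1.0) * 1000.0
```

A sample with a ratio identical to the standard has δ = 0‰; "isotopically lighter" material has a lower ratio and hence a negative δ.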
Abstract:
This PhD thesis presents an analysis of the peak-to-peak output current ripple amplitude in multiphase and multilevel inverters. The current ripple is calculated on the basis of the alternating voltage component, and its peak-to-peak value is defined by the current slopes and the application times of the voltage levels within a switching period. Detailed analytical expressions for the distribution of the peak-to-peak current ripple over a fundamental period are given as a function of the modulation index. In all cases, reference is made to centered and symmetrical switching patterns, generated either by carrier-based or by space vector PWM. Starting from the definition and analysis of the output current ripple in three-phase two-level inverters, the theoretical developments have been extended to multiphase inverters, with emphasis on five- and seven-phase inverters. The instantaneous current ripple is introduced for a generic balanced multiphase load consisting of a series RL impedance and an ac back-emf (RLE). Simplified and effective expressions for evaluating the maximum of the output current ripple have been defined, and peak-to-peak current ripple diagrams are presented and discussed. The analysis of the output current ripple has also been extended to multilevel inverters, specifically three-phase three-level inverters. In this case too, the current ripple analysis is carried out for a balanced three-phase system consisting of a series RL impedance and an ac back-emf (RLE), representing both motor loads and grid-connected applications, and the corresponding peak-to-peak current ripple diagrams are presented and discussed. In addition, simulations and experiments have been carried out to prove the validity of the analytical developments in all cases. Configurations with different phase numbers and different numbers of levels are compared with one another, some useful conclusions are drawn, and several application examples are given.
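The ripple definition used above, current slopes set by the deviation of the applied voltage from its average value, integrated over the application times within one switching period, can be sketched for a purely inductive path as follows. This is a simplified single-phase illustration with invented values, not the thesis's multiphase derivation.

```python
def peak_to_peak_ripple(levels, times, inductance):
    """Peak-to-peak current ripple over one switching period for a
    sequence of applied voltage levels (V) and application times (s),
    neglecting resistance and back-emf dynamics: di/dt = (v - v_avg)/L."""
    period = sum(times)
    v_avg = sum(v * t for v, t in zip(levels, times)) / period
    i, trajectory = 0.0, [0.0]
    for v, t in zip(levels, times):
        i += (v - v_avg) / inductance * t   # piecewise-linear current
        trajectory.append(i)
    return max(trajectory) - min(trajectory)
```

For a two-level pattern with duty cycle D this reproduces the familiar result Δi_pp = Vdc·D·(1−D)·Ts/L.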
Abstract:
Regenerative medicine and tissue engineering attempt to repair or improve the biological functions of tissues that have been damaged or have ceased to perform their role, through three main components: a biocompatible scaffold, a cellular component and bioactive molecules. Nanotechnology provides a toolbox of innovative scaffold fabrication procedures for regenerative medicine. In fact, nanotechnology, using manufacturing techniques such as conventional and unconventional lithography, allows supports to be fabricated with different geometries and sizes, displaying physico-chemical properties tunable over different length scales. Soft lithography techniques make it possible to functionalize the support with specific molecules that promote adhesion and control the growth of cells. Understanding the cell response to the scaffold, and vice versa, is a key issue; here we present our investigation of the essential features required to improve the cell-surface interaction over different length scales. The main goal of this thesis has been to devise a nanotechnology-based strategy for the fabrication of scaffolds for tissue regeneration. We fabricated four types of scaffolds, which are able to accurately control cell adhesion and proliferation. For each scaffold, we chose properly designed materials, fabrication processes and characterization techniques.
Abstract:
The mixing of nanoparticles with polymers to form composite materials has been practised for decades. Nanocomposites combine the advantages of polymers (e.g., elasticity, transparency, or dielectric properties) and of inorganic nanoparticles (e.g., specific absorption of light, magnetoresistance effects, chemical activity, and catalysis), and they exhibit several new characteristics that single-phase materials do not have. Filling the polymeric matrix with an inorganic material requires its homogeneous distribution in order to achieve the highest possible synergetic effect. To fulfill this requirement, the incompatibility between the filler and the matrix, originating from their opposite polarities, has to be resolved. A very important parameter here is the strength and irreversibility of the adsorption of the surface-active compound on the inorganic material. In this work, isothermal titration calorimetry (ITC) was applied as a method to quantify and investigate the adsorption processes and binding efficiencies in organic-inorganic hybrid systems by determining the thermodynamic parameters (ΔH, ΔS, ΔG and the binding constant K_B, as well as the stoichiometry n). These values provide a quantification and a detailed understanding of the adsorption of surface-active molecules onto inorganic particles. In this way, a direct correlation between the adsorption strength and the structure of the surface-active compounds can be achieved. Above all, knowledge of the adsorption mechanism in combination with the structure should bring more rational design into the hitherto mainly empirical production and optimization of nanocomposites.
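The thermodynamic parameters listed above are linked by the standard relations ΔG = −RT·ln K_B and ΔG = ΔH − TΔS, so an ITC fit of ΔH and K_B fixes the remaining quantities. A minimal sketch with purely illustrative numbers:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def delta_g(k_b, temperature):
    """Gibbs energy of binding (J/mol) from the binding constant:
    dG = -R*T*ln(K_B)."""
    return -R * temperature * math.log(k_b)

def delta_s(delta_h, k_b, temperature):
    """Entropy of binding (J mol^-1 K^-1) from dG = dH - T*dS."""
    return (delta_h - delta_g(k_b, temperature)) / temperature
```

For example, K_B = 10^6 M⁻¹ at 298.15 K corresponds to ΔG ≈ −34 kJ/mol, and the sign of ΔS then indicates whether the adsorption is enthalpy- or entropy-driven.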
Resumo:
This thesis explores the effect of chemical nucleoside modification on the physicochemical and biological properties of nucleic acids. Positional alterations on the Watson-Crick edge of purines and pyrimidines, the "C-H" edge of pyrimidines, as well as both the Hoogsteen and sugar edges of purines, were attempted by means of copper-catalyzed azide-alkyne cycloaddition. For this purpose, nucleic acid building blocks carrying terminal alkynes were synthesized and introduced into oligonucleotides by solid-phase oligonucleotide chemistry. Of particular interest was the effect of nucleoside modification on hydrogen-bond formation with complementary nucleosides. The attachment of propargyl functionalities onto the N2 of guanosine and the N4 of 5-methylcytosine, respectively, followed by incorporation of the modified analogs into oligonucleotides, was successfully achieved. Temperature-dependent UV-absorption melting measurements on duplexes formed between the modified oligonucleotides and a variety of complementary strands yielded melting temperatures for the respective duplexes. As a result, the effect that both the nature and the site of nucleoside modification have on base-pairing properties could be assessed. To further explore the enzymatic recognition of chemically modified nucleosides, the oligonucleotide containing the N2-modified guanosine derivative at the 5'-end, which was clicked to a fluorescent dye, was subjected to knockdown analyses of the eGFP reporter gene in the presence of increasing concentrations of siRNA duplexes. These dose-dependent experiments showed a clear effect of 5'-labeling on the knockdown efficiency, whereas 3'-labeling was found to be relatively insignificant.
Resumo:
In many industries, for example the automotive industry, digital mock-ups are used to verify the design and the function of a product on a virtual prototype. One application is checking the safety clearances of individual components, the so-called clearance analysis. Engineers determine whether given components, both in their rest position and during a motion, maintain a prescribed safety distance to the surrounding components. If components fall below the safety distance, their shape or position must be changed. For this, it is important to know exactly which regions of the components violate the safety distance.

In this work we present a solution for the real-time computation of all regions between two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g., triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives falling below the safety distance, and we call these the tolerance-violating primitives. We present a complete solution, which can be divided into the following three major topics.

In the first part of this work, we investigate algorithms that check, for two triangles, whether they are tolerance-violating. We present several approaches to triangle-triangle tolerance tests and show that specialized tolerance tests perform considerably better than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure consisting of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is above all important to account for the required safety distance in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. Furthermore, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the previously used uniform grids, which we call Shrubs. Previous approaches to the memory optimization of uniform grids rely mainly on hashing methods; these, however, do not reduce the memory consumption of the cell contents. In our application, neighboring cells often have similar contents. Our approach is able to losslessly compress the memory footprint of the cell contents of a uniform grid, exploiting the redundant cell contents, to one fifth of the previous size, and to decompress it at runtime.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Beyond pure clearance analysis, we show applications to various path-planning problems.
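The grid-based pruning idea behind the second part can be illustrated with a minimal sketch. This is not the thesis's implementation: primitives are reduced to points, and the grid cell size is set to the safety distance `tol`, which guarantees that any pair closer than `tol` lies in the same or an adjacent cell, so only those candidate pairs need the exact distance test.

```python
from collections import defaultdict
from itertools import product

def build_grid(points, cell):
    """Hash one object's point primitives into uniform-grid cells."""
    grid = defaultdict(list)
    for i, p in enumerate(points):
        key = tuple(int(c // cell) for c in p)
        grid[key].append(i)
    return grid

def candidate_pairs(points_a, points_b, tol):
    """Index pairs whose cells are identical or adjacent. With cell
    size == tol, every pair closer than tol is guaranteed to be found
    among these candidates; all other pairs can be skipped."""
    cell = tol
    grid_b = build_grid(points_b, cell)
    pairs = []
    for i, p in enumerate(points_a):
        base = tuple(int(c // cell) for c in p)
        for off in product((-1, 0, 1), repeat=3):   # 27-cell neighborhood
            key = tuple(b + o for b, o in zip(base, off))
            for j in grid_b.get(key, ()):
                pairs.append((i, j))
    return pairs

def violating_pairs(points_a, points_b, tol):
    """Exact tolerance test (Euclidean distance < tol), restricted to
    the grid candidates -- the 'tolerance-violating primitives'."""
    out = []
    for i, j in candidate_pairs(points_a, points_b, tol):
        d2 = sum((a - b) ** 2 for a, b in zip(points_a[i], points_b[j]))
        if d2 < tol * tol:
            out.append((i, j))
    return out
```

For real triangle meshes the same pruning applies to the cells overlapped by each triangle's bounding box, and the exact test becomes a triangle-triangle tolerance test such as the dual-space test developed in the thesis.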
Resumo:
Background: Transcatheter aortic valve implantation (TAVI) is a treatment option for high-risk patients with severe aortic stenosis. Previous reports have focused on a single device or access site, whereas little is known about the combined use of different devices and access sites as selected by the heart team. The purpose of this study was to investigate clinical outcomes of TAVI using different devices and access sites. Methods: A consecutive cohort of 200 patients underwent TAVI with the Medtronic CoreValve Revalving system (Medtronic CoreValve LLC, Irvine, CA; n = 130) or the Edwards SAPIEN valve (Edwards Lifesciences LLC, Irvine, CA; n = 70), implanted by either the transfemoral or the transapical access route. Results: Device success and procedural success were 99% and 95%, respectively, with no differences between devices or access sites. All-cause mortality was 7.5% at 30 days, with no differences between valve types or access sites. In multivariable analysis, low body mass index (<20 kg/m²) (odds ratio [OR] 6.6, 95% CI 1.5-29.5) and previous stroke (OR 4.4, 95% CI 1.2-16.8) were independent risk factors for short-term mortality. The Valve Academic Research Consortium (VARC)-defined combined safety end point occurred in 18% of patients and was driven by major access-site complications (8.0%), life-threatening bleeding (8.5%), or severe renal failure (4.5%). Transapical access emerged as an independent predictor of adverse outcome for the VARC combined safety end point (OR 3.3, 95% CI 1.5-7.1). Conclusion: A heart team–based selection of devices and access site among patients undergoing TAVI resulted in high device and procedural success. Low body mass index and a history of previous stroke were independent predictors of mortality. Transapical access emerged as a risk factor for the VARC combined safety end point.
Resumo:
Whitefish, genus Coregonus, show exceptional levels of phenotypic diversity, with sympatric morphs occurring in numerous postglacial lakes in the northern hemisphere. Here, we studied the effects of human-induced eutrophication on sympatric whitefish morphs in Lake Thun, Switzerland. In particular, we addressed the questions of whether eutrophication (i) induced hybridization between two ecologically divergent summer-spawning morphs through a loss of environmental heterogeneity, and (ii) induced rapid adaptive morphological changes through changes in the food-web structure. Genetic analysis of 282 spawners based on 11 microsatellite loci revealed that the pelagic and the benthic morph represent highly distinct gene pools occurring at different relative proportions at all seven known spawning sites. Gill-raker counts, a highly heritable trait, showed nearly discrete distributions for the two morphs. Multilocus genotypes characteristic of the pelagic morph had more gill rakers than genotypes characteristic of the benthic morph. Using Bayesian methods, we found indications of recent but limited introgressive hybridization. Comparisons with historical gill-raker data yielded median evolutionary rates of 0.24 haldanes and median selection intensities of 0.27 for this trait in both morphs for 1948-2004, suggesting rapid evolution through directional selection on this trait. However, phenotypic plasticity as an alternative explanation for this phenotypic change cannot be discarded. We hypothesize that both the temporal shifts in mean gill-raker counts and the recent hybridization reflect responses to changes in the trophic state of the lake induced by pollution in the 1960s, which created novel selection pressures with respect to feeding niches and spawning-site preferences.
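The rate reported in haldanes follows the standard definition of an evolutionary rate: the change in the trait mean, expressed in units of the pooled phenotypic standard deviation, per generation. A minimal sketch of the computation; the sample counts, standard deviation and 5-year generation time below are hypothetical illustrations, not values from the study:

```python
def haldanes(mean_old, mean_new, sd_pooled, years, generation_time):
    """Evolutionary rate in haldanes: trait change in pooled
    standard-deviation units per generation."""
    generations = years / generation_time
    return abs(mean_new - mean_old) / sd_pooled / generations

# Hypothetical gill-raker means over the 56-year interval 1948-2004,
# assuming a 5-year generation time -- illustrative numbers only.
rate = haldanes(mean_old=30.0, mean_new=33.0, sd_pooled=2.5,
                years=56, generation_time=5)
```

A rate on the order of 0.1-0.2 haldanes sustained over roughly a dozen generations is fast by paleontological standards, which is why the observed 0.24 haldanes is interpreted as rapid directional evolution.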