968 results for Point-set surface
Abstract:
In the post-genomic era, with the massive production of biological data, understanding the factors that affect protein stability is one of the most important and challenging tasks for clarifying the role of mutations in human disease. The problem lies at the basis of what is referred to as molecular medicine, with the underlying idea that pathologies can be described at the molecular level. To this end, scientific efforts focus on characterising mutations that hamper protein function and thereby affect the biological processes underlying cell physiology. New techniques have been developed to detect single nucleotide polymorphisms (SNPs) at large across all human chromosomes, and the information stored in dedicated databases is consequently increasing exponentially. Mutations found at the DNA level may, when they occur in transcribed regions, lead to mutated proteins, and this can be a serious medical problem that largely affects the phenotype. Bioinformatics tools are urgently needed to cope with the flood of genomic data stored in databases and to analyse the role of SNPs at the protein level. Several experimental and theoretical observations suggest that protein stability in the solvent-protein space is responsible for correct protein functioning, so mutations found to be disease-related in DNA analyses are often assumed to perturb protein stability as well. However, no extensive analysis at the proteome level has so far investigated whether this is the case. Computational methods have also been developed to infer whether a mutation is disease-related and, independently, whether it affects protein stability. Whether the perturbation of protein stability is related to what is routinely referred to as disease therefore remains an open question. In this work we explore, for the first time, the relation between mutations at the protein level and their relevance to disease through a large-scale computational study of data from different databases. To this aim, in the first part of the thesis we derived two probabilistic indices for each mutation type (for 141 out of the 150 possible SNPs): the perturbing index (Pp), which indicates the probability that a given mutation affects protein stability, based on all the available in vitro thermodynamic data, and the disease index (Pd), which indicates the probability that a mutation is disease-related, given all the mutations that have been clinically associated so far. With robust statistics we find that the two indices correlate, with the exception of mutations related to somatic cancer. Each of the 150 mutation types can thus be coded by two values that allow a direct comparison with database information. Furthermore, we implemented a computational method that, starting from the protein structure, predicts the effect of a mutation on protein stability, and we find that it outperforms a set of other predictors performing the same task. The predictor is based on support vector machines and takes protein tertiary structures as input. We show that the predicted data correlate well with the data from the databases. All our efforts therefore add to the SNP annotation process and, more importantly, establish the relationship between protein stability perturbation and the human variome leading to the diseasome.
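The thesis summarises each mutation type by two probabilities, Pp and Pd, and tests whether they correlate. A minimal sketch of how such per-mutation indices could be computed from counts and compared is given below; the mutation labels and counts are hypothetical placeholders, not data from the thesis or its source databases.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical counts per mutation type: (observations supporting the property, total observations).
thermo_counts = {  # from in vitro stability data: (perturbing, total)
    "A->V": (34, 120), "G->R": (88, 150), "L->P": (95, 140), "R->H": (40, 100), "D->N": (22, 90),
}
clinical_counts = {  # from clinical annotation: (disease related, total)
    "A->V": (12, 200), "G->R": (150, 310), "L->P": (160, 280), "R->H": (70, 180), "D->N": (25, 160),
}

def probabilistic_index(counts):
    """Fraction of observations supporting the property for each mutation type."""
    return {mut: k / n for mut, (k, n) in counts.items()}

Pp = probabilistic_index(thermo_counts)    # probability that a mutation perturbs stability
Pd = probabilistic_index(clinical_counts)  # probability that a mutation is disease related

common = sorted(set(Pp) & set(Pd))
rho, pval = spearmanr([Pp[m] for m in common], [Pd[m] for m in common])
print(f"rank correlation between Pp and Pd: rho={rho:.2f}, p={pval:.3f}")
```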
Abstract:
Motor movements are monitored for accuracy via visual feedback and corrected where necessary. Through a technical intervention, such as prism glasses, a discrepancy between the visually perceived and the haptically experienced environment can be created in order to test the capabilities of the visuomotor system. In this work a computer-based method was developed to simulate such a visuomotor discrepancy. The test subjects perform a ballistic movement with arm and hand with the intention of hitting a given target. The hit points are recorded by a computer via a digitising tablet. The visual environment presented to the subjects is displayed on a monitor. The subjects view the monitor image, a cross on a white background, via a mirror. The mirror is mounted at an appropriate angle between monitor and digitising tablet so that the target image is projected onto the digitising tablet; the subjects therefore perceive the target cross as lying on the digitising tablet. When a subject performs a targeting movement, the recorded coordinates can be displayed as points on the monitor, so that the subject receives visual feedback of the movement through this point display. The working area of the digitising tablet can be configured via the computer, which allows motor displacements to be simulated. The various possibilities of this setup were partly tested in pilot experiments in order to align research questions, methodology and technical equipment with one another. The main experiments focused in particular on the temporal delay of the visual feedback and on intermanual transfer. The following results were obtained:
● The subjects adapt to a spatially shifted environment. The time course of adaptation can be described and represented mathematically by an exponential function.
● This course is independent of the type of visual feedback. Observing the hand movement during adaptation shows the same sequence of hits as a simple point projection indicating the hit location of the movement.
● The exponential course of the adaptation is independent of the tested temporal delays of the visual feedback.
● The results for the after-effect show that with increasing temporal delay of the visual feedback during the adaptation phase the magnitude of the after-effect decreases, i.e. the persistent adjustment to a visuomotor discrepancy declines.
● The after-effects show individual characteristics: subjects adapt to a simulated displacement to different degrees. A comparison with the visuomotor challenges in the subjects' prior experience suggested that the human visuomotor system is trainable and, depending on its state of training, adapts differently to perceived discrepancies.
● Intermanual transfer could be demonstrated under various conditions.
● A markedly stronger after-effect is observed when the perceived visuomotor discrepancy between target and hit point is projected into one brain hemisphere and the after-effect is executed with the hand controlled by that hemisphere.
Intermanual transfer is thus favoured when the visual projection of the observed error reaches the hemisphere that is motorically passive during the adaptation phase.
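The abstract states that the adaptation time course follows an exponential function. A minimal sketch of fitting such a curve to per-trial aiming errors is shown below, under the assumption of a decaying-exponential form; the trial data are synthetic placeholders, not measurements from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed functional form: residual aiming error decays exponentially over trials.
def adaptation(trial, A, k, c):
    return A * np.exp(-k * trial) + c

trials = np.arange(30)
rng = np.random.default_rng(0)
# Synthetic aiming errors (cm), for illustration only.
errors = adaptation(trials, A=5.0, k=0.2, c=0.3) + rng.normal(0, 0.2, trials.size)

(A_fit, k_fit, c_fit), _ = curve_fit(adaptation, trials, errors, p0=(4.0, 0.1, 0.0))
print(f"A={A_fit:.2f} cm, rate k={k_fit:.2f} per trial, asymptote c={c_fit:.2f} cm")
```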
Abstract:
An increase in traffic is forecast for the future, while at the same time there is a shortage of space and of financial resources for building additional roads. The existing capacity must therefore be used more sensibly through better traffic control, for example by means of traffic guidance systems. This requires spatially resolved data, i.e. data reflecting the areal distribution of traffic, which are currently lacking. Until now, traffic data could only be collected where fixed measuring installations are located, and the missing data cannot be obtained in this way. Remote sensing systems offer the possibility of acquiring these data area-wide with a view from above. After decades of experience with remote sensing methods for detecting and studying a wide range of phenomena on the Earth's surface, this methodology is now being applied to the topic of traffic within a pilot project. Since the end of the 1990s, traffic has been observed with airborne optical and infrared imaging systems. Under poor weather conditions, and in particular under cloud cover, however, no usable images can be obtained. With an imaging radar technique, data are acquired independently of weather, daylight, or cloud conditions. This thesis investigates to what extent traffic data can be acquired, processed, and usefully applied with airborne synthetic aperture radar (SAR). Not only are the new technique of along-track interferometry (ATI) and the processing of the acquired traffic data presented in detail; in addition, a data set produced with this methodology is compared with a traffic simulation and evaluated. Finally, an outlook on future developments of radar remote sensing for traffic data acquisition is given.
Abstract:
The last decade has witnessed an exponential growth of activities in the field of nanoscience and nanotechnology worldwide, driven both by the excitement of understanding new science and by the potential hope for applications and economic impacts. The largest activity in this field to date has been in the synthesis and characterization of new materials consisting of particles with dimensions on the order of a few nanometers, so-called nanocrystalline materials. [1-8] Semiconductor nanomaterials such as III/V or II/VI compound semiconductors exhibit strong quantum confinement behavior in the size range from 1 to 10 nm. Therefore, preparation of high quality semiconductor nanocrystals has been a challenge for synthetic chemists, leading to the recent rapid progress in delivering a wide variety of semiconducting nanomaterials. Semiconductor nanocrystals, also called quantum dots, possess physical properties distinctly different from those of the bulk material. Typically, in the size range from 1 to 10 nm, when the particle size is changed, the band gap between the valence and the conduction band will change, too. In a simple approximation, a particle-in-a-box model has been used to describe the phenomenon [9]: at nanoscale dimensions the degenerate energy states of a semiconductor separate into discrete states and the system behaves like one big molecule. The size-dependent transformation of the energy levels of the particles is called the “quantum size effect”. Quantum confinement of both the electron and hole in all three dimensions leads to an increase in the effective bandgap of the material with decreasing crystallite size. Consequently, both the optical absorption and emission of semiconductor nanocrystals shift to the blue (higher energies) as the size of the particles gets smaller. This color tuning is well documented for CdSe nanocrystals, whose absorption and emission cover almost the whole visible spectral range. As particle sizes become smaller, the ratio of surface atoms to those in the interior increases, which has a strong impact on particle properties, too. Prominent examples are the low melting point [8] and size/shape dependent pressure resistance [10] of semiconductor nanocrystals. Given the size dependence of particle properties, chemists and materials scientists now have the unique opportunity to change the electronic and chemical properties of a material by simply controlling the particle size. In particular, CdSe nanocrystals have been widely investigated. Mainly due to their size-dependent optoelectronic properties [11, 12] and flexible chemical processability [13], they have played a distinguished role in a number of seminal studies [11, 12, 14, 15]. Potential technical applications have been discussed, too. [8, 16-27] Improvement of the optoelectronic properties of semiconductor nanocrystals is still a prominent research topic. One of the most important approaches is fabricating composite type-I core-shell structures which exhibit improved properties, making them attractive from both a fundamental and a practical point of view. Overcoating of nanocrystallites with higher band gap inorganic materials has been shown to increase the photoluminescence quantum yields by eliminating surface nonradiative recombination sites. [28] Particles passivated with inorganic shells are more robust than nanocrystals covered by organic ligands only and have greater tolerance to processing conditions necessary for incorporation into solid state structures or for other applications.
Some examples of core-shell nanocrystals reported earlier include CdS on CdSe [29], CdSe on CdS [30], ZnS on CdS [31], ZnS on CdSe [28, 32], ZnSe on CdSe [33] and CdS/HgS/CdS [34]. The characterization and preparation of a new core-shell structure, CdSe nanocrystals overcoated by different shells (CdS, ZnS), is presented in chapter 4. Type-I core-shell structures as mentioned above greatly improve the photoluminescence quantum yield and the chemical and photochemical stability of nanocrystals. The emission wavelengths of type-I core/shell nanocrystals typically show only a small red-shift compared to the plain core nanocrystals. [30, 31, 35] In contrast to type-I core-shell nanocrystals, only a few studies have been conducted on colloidal type-II core/shell structures [36-38], which are characterized by a staggered alignment of conduction and valence bands giving rise to a broad tunability of absorption and emission wavelengths, as was shown for CdTe/CdSe core-shell nanocrystals. [36] The emission of type-II core/shell nanocrystals mainly originates from the radiative recombination of electron-hole pairs across the core-shell interface, leading to a long photoluminescence lifetime. Type-II core/shell nanocrystals are promising with respect to photoconduction or photovoltaic applications, as has been discussed in the literature. [39] Novel type-II core-shell structures with ZnTe cores are reported in chapter 5. The recent progress in the shape control of semiconductor nanocrystals opens new fields of applications. For instance, rod-shaped CdSe nanocrystals can enhance the photo-electro conversion efficiency of photovoltaic cells, [40, 41] and also allow for polarized emission in light emitting diodes. [42, 43] Shape control of anisotropic nanocrystals can be achieved by the use of surfactants, [44, 45] regular or inverse micelles as regulating agents, [46, 47] electrochemical processes, [48] template-assisted growth [49, 50] and the solution-liquid-solid (SLS) growth mechanism. [51-53] Recently, the formation of various CdSe nanocrystal shapes has been reported by the groups of Alivisatos [54] and Peng, [55] respectively. Furthermore, it has been reported by the group of Prasad [56] that noble metal nanoparticles can induce anisotropic growth of CdSe nanocrystals at lower temperatures than typically used in other methods for preparing anisotropic CdSe structures. Although several approaches for anisotropic crystal growth have been reported by now, developing new synthetic methods for the shape control of colloidal semiconductor nanocrystals remains an important goal. Accordingly, we have attempted to utilize a crystal phase control approach for the controllable synthesis of colloidal ZnE/CdSe (E = S, Se, Te) heterostructures in a variety of morphologies. The complex heterostructures obtained are presented in chapter 6. The unique optical properties of nanocrystals make them appealing as in vivo and in vitro fluorophores in a variety of biological and chemical investigations, in which traditional fluorescence labels based on organic molecules fall short of providing long-term stability and simultaneous detection of multiple emission colours [References]. The ability to prepare water-soluble nanocrystals with high stability and quantum yield has led to promising applications in cellular labeling, [57, 58] deep-tissue imaging, [59, 60] and assay labeling [61, 62]. Furthermore, appropriately solubilized nanocrystals have been used as donors in fluorescence resonance energy transfer (FRET) couples. [63-65] Despite recent progress, much work still needs to be done to achieve reproducible and robust surface functionalization and to develop flexible (bio-)conjugation techniques. Based on multi-shell CdSe nanocrystals, several new solubilization and ligand exchange protocols have been developed, which are presented in chapter 7. The organization of this thesis is as follows: A short overview describing the synthesis and properties of CdSe nanocrystals is given in chapter 2. Chapter 3 is the experimental part, providing some background information about the optical and analytical methods used in this thesis. The following chapters report the results of this work: the synthesis and characterization of type-I multi-shell and type-II core/shell nanocrystals are described in chapter 4 and chapter 5, respectively. In chapter 6, a high-yield synthesis of various CdSe architectures by crystal phase control is reported. Experiments on the surface modification of nanocrystals are described in chapter 7. Finally, a short summary of the results is given in chapter 8.
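The abstract explains the quantum size effect with a particle-in-a-box picture: confinement of electron and hole raises the effective band gap as the crystal shrinks. Below is a minimal sketch of that estimate. The effective masses and bulk band gap are rough, assumed CdSe-like values used purely for illustration; the simple model is known to overestimate the shift for very small particles, and none of these numbers come from the thesis.

```python
import numpy as np

HBAR = 1.054571817e-34      # J*s
EV = 1.602176634e-19        # J per eV
M_E = 9.1093837015e-31      # kg, free electron mass

def confinement_shift_eV(diameter_nm, m_e_eff=0.13, m_h_eff=0.45):
    """Particle-in-a-(spherical)-box estimate of the band-gap increase.

    Lowest spherical-well level for electron and hole:
    dE = (hbar^2 * pi^2 / (2 R^2)) * (1/m_e + 1/m_h).
    Effective masses (in units of the free electron mass) are assumed values.
    """
    R = diameter_nm * 1e-9 / 2.0
    dE = (HBAR**2 * np.pi**2) / (2.0 * R**2) * (1.0 / (m_e_eff * M_E) + 1.0 / (m_h_eff * M_E))
    return dE / EV

bulk_gap_eV = 1.74  # approximate bulk CdSe gap, assumed value
for d in (2.0, 4.0, 8.0):
    print(f"d = {d:.0f} nm -> estimated gap ~ {bulk_gap_eV + confinement_shift_eV(d):.2f} eV")
```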
Abstract:
In this thesis, we investigated the evaporation of sessile microdroplets on different solid substrates. Three major aspects were studied: the influence of surface hydrophilicity and heterogeneity on the evaporation dynamics for an insoluble solid substrate, the influence of external process parameters and intrinsic material properties on the microstructuring of soluble polymer substrates, and the influence of an increased area-to-volume ratio in a microfluidic capillary, where evaporation is hindered. In the first part, the evaporation dynamics of pure sessile water drops on smooth self-assembled monolayers (SAMs) of thiols or disulfides on gold on mica was studied. With increasing surface hydrophilicity the drop stayed pinned longer. Thus, the total evaporation time for a given initial drop volume was shorter, since the drop surface through which evaporation occurs stays large for longer. Usually, for a single drop the volume decreased linearly with t^1.5 (t being the evaporation time) for a diffusion-controlled evaporation process. However, when we measured the total evaporation time, t_tot, for multiple droplets with different initial volumes, V0, we found a scaling of the form V0 = a·t_tot^b. The more hydrophilic the substrate was, the more the scaling exponent b tended towards an increased value of up to 1.6. This can be attributed to an increasing evaporation rate through a thin water layer in the vicinity of the drop. Simulations performed by F. Schönfeld at the IMM, Mainz, assuming a constant temperature at the substrate surface, excluded cooling of the droplet, and thus a decreased evaporation rate, as a reason for the different scaling exponent. In contrast, for a hairy surface, made of dialkyldisulfide SAMs with different chain lengths and a 1:1 mixture of hydrophilic and hydrophobic end groups (hydroxy versus methyl group), the scaling exponent was found to be ~ 1.4. It increased to ~ 1.5 with increasing hydrophilicity. A reason for this observation can only be speculated upon: in the case of longer hydrophobic alkyl chains the formation of an air layer between the substrate and the drop might be favourable. Thus, the heat transport to the substrate might be reduced, leading to stronger cooling and thus a decreased evaporation rate. In the second part, the microstructuring of polystyrene surfaces by drops of toluene, a good solvent, was investigated. For this, a novel deposition technique was developed with which the drop can be deposited with a syringe. The polymer substrate lies on a motorized table, which picks up the pendant drop by an upward motion until a liquid bridge is formed. A subsequent downward motion of the table after a variable delay, i.e. the contact time between drop and polymer, leads to the deposition of the droplet, which can then evaporate. The resulting microstructure was investigated as a function of the process parameters, i.e. the approach and retraction speed of the substrate and the delay between them, and of the intrinsic material properties, i.e. the molar mass and the type of polymer/solvent system. The equivalence in principle with microstructuring by the ink-jet technique was demonstrated. For a high approach and retraction speed of 9 mm/s and no delay between them, a concave microtopology was observed. In agreement with the literature, this can be explained by a flow of solvent and dissolved polymer to the rim of the pinned droplet, where the polymer accumulates.
This effect is analogous to the well-known formation of ring-like stains after the evaporation of coffee drops (coffee-stain effect). With decreasing retraction speed, down to 10 µm/s, the resulting surface topology changes from concave to convex. This can be explained by the increasing dissolution of polymer into the solvent drop prior to evaporation. If the polymer concentration is high enough, gelation occurs instead of a flow to the rim, and the shape of the convex droplet is preserved. With increasing delay time, from below 0 ms to 1 s, the depth of the concave microwells decreases from 4.6 µm to 3.2 µm. However, a convex surface topology could not be obtained, since for longer delay times the polymer sticks to the tip of the syringe. Thus, by changing the delay time a fine-tuning of the concave structure is accomplished, while by changing the retraction speed a fundamental change of the microtopology can be achieved. We attribute this to an additional flow inside the liquid bridge, which enhances polymer dissolution. Even when the pendant drop evaporates about 30 µm above the polymer surface without any contact (non-contact mode), concave structures were observed. Rim heights as high as 33 µm could be generated for exposure times of 20 min. The concave structure lay entirely above the level of the flat polymer surface outside the structure, even after drying. This shows that toluene is taken up permanently. The increase of the rim height, rh, with increasing exposure time to the solvent vapor obeys a power law rh = rh0·t^n, with n in the range 0.46-0.65. This hints at a non-Fickian swelling process. A detailed analysis showed that the rim height of the concave structure is modulated, unlike for the drop deposition. This is due to local stress relaxation, initiated by the increasing toluene concentration in the extruded polymer surface. By altering the intrinsic material parameters, i.e. the polymer molar mass and the polymer/solvent combination, several types of microstructures could be formed. With increasing molar mass from 20.9 kDa to 1.44 MDa the resulting microstructure changed from convex, to a structure with a dimple in the center, to concave, and finally to an irregular structure. This observation can be explained if one assumes that the microstructuring is dominated by two opposing effects: a decreasing solubility with increasing polymer molar mass, and an increasing surface tension gradient leading to Marangoni-type instabilities. Thus, a polymer with a low molar mass close to or below the entanglement limit is subject to a high dissolution rate, which leads to fast gelation compared to the evaporation rate. In this way a coffee-rim-like effect is eliminated early and a convex structure results. For high molar masses the low dissolution rate and the low polymer diffusion might lead to increased surface tension gradients, and a typical local pile-up of polymer is found. For intermediate polymer masses around 200 kDa, the dissolution and evaporation rates are comparable and the typical concave microtopology is found. This interpretation was supported by a quantitative estimation of the diffusion coefficient and the evaporation rate. For a different polymer/solvent system, polyethylmethacrylate (PEMA)/ethyl acetate (EA), exclusively concave structures were found. Following the arguments above, this can be interpreted as the result of a lower dissolution rate. At low molar masses the concentration of PEMA in EA most likely never reaches the gelation point.
Thus, a concave instead of a convex structure occurs. At the end of this section, the optical properties of such microstructures are studied with laser scanning confocal microscopy with a view to a potential application as microlenses. In the third part, the droplet was confined in a glass microcapillary to avoid evaporation. Since here, due to the increased area-to-volume ratio, the surface properties of the liquid and the solid walls become important, the influence of the surface hydrophilicity of the wall on the interfacial tension between two immiscible liquid slugs was investigated. For this a novel method for measuring the interfacial tension between the two liquids within the capillary was developed. The technique was demonstrated by measuring the interfacial tensions between slugs of pure water and standard solvents. For toluene, n-hexane and chloroform, values of 36.2, 50.9 and 34.2 mN/m were measured at 20°C, in good agreement with data from the literature. For a slug of hexane in contact with a slug of pure water containing ethanol in a concentration range between 0 and 70 v/v %, a difference of up to 6 mN/m was found when compared to commercial ring tensiometry. This discrepancy is still under debate.
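The first part reports a scaling of the form V0 = a·t_tot^b between the initial drop volume and the total evaporation time, with the exponent b near 1.5 and rising to about 1.6 on more hydrophilic substrates. A minimal sketch of extracting a and b from measured (t_tot, V0) pairs by a log-log fit follows; the data points are synthetic placeholders, not measurements from this thesis.

```python
import numpy as np

# Synthetic (total evaporation time, initial volume) pairs, illustration only.
t_tot = np.array([120.0, 240.0, 480.0, 900.0])   # s
V0 = np.array([0.05, 0.14, 0.40, 1.05])          # µL

# Fit V0 = a * t_tot**b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(t_tot), np.log(V0), 1)
a = np.exp(log_a)
print(f"prefactor a = {a:.3g}, scaling exponent b = {b:.2f}")
```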
Abstract:
The premise of this research lies in the recognised historical, testimonial and identity value, and in the significant potential for planning and design guidance, carried by the signs of the traditional rural landscape. At present, although these values are widely affirmed both in the regulatory-administrative sphere and in the scientific one, there is still a lack of appropriate methods and techniques suited to creating suitable knowledge frameworks for the recognition, cataloguing and monitoring of traditional rural landscapes in support of policies, plans and projects concerning the extra-urban territory. The general objective of the research is the development of an articulated and original set of quantitative analytical and interpretative tools suited to studying the physical transformations of the signs of the traditional rural landscape and to assessing their degree of integrity and significance at the scale of the individual farm. This primary objective was translated into specific objectives whose achievement required a territorial case study. To this end, a sample of 11 farms was selected as study areas, covering a total surface of roughly 200 ha, located in the upper Imola plain (Emilia-Romagna). The quantitative analysis and interpretation of the physical transformations affecting the aforementioned signs were carried out from the pre-industrial period to the present day and for numerous points in time. The study presents itself both as a methodological contribution concerning the diachronic reading of the traditional spatial and compositional characters of the rural territory, and as a contribution to knowledge of the evolutionary dynamics of the traditional rural landscapes of the investigated area.
Abstract:
Laser shock peening (LSP) is a technique similar to shot peening that imparts compressive residual stresses in materials to improve fatigue resistance. The ability to use a high-energy laser pulse to generate shock waves, inducing a compressive residual stress field in metallic materials, has applications in multiple fields such as turbo-machinery, airframe structures, and medical devices. The transient nature of the LSP phenomenon and the very short time scale of the laser pulse make real-time in-situ measurement of the laser/material interaction very challenging. For this reason, and because of the high cost of experimental tests, reliable analytical methods for predicting the detailed effects of LSP are needed to understand the potential of the process. The aim of this work was the prediction of the residual stress field after the laser peening process by means of finite element modeling. The work was carried out in the Stress Methods department of Airbus Operations GmbH (Hamburg) and includes an investigation of the compressive residual stresses induced by laser shock peening, a mesh sensitivity study, optimization and tuning of the model using physical and numerical parameters, and validation of the model against experimental results. The model was built with the commercial software Abaqus/Explicit, starting from considerations drawn from previous work. FE analyses are mesh sensitive: by increasing the number of elements and decreasing their size, the software is able to resolve even the finer details of the real phenomenon; however, these details may also be mere amplifications of the real behaviour. For this reason it was necessary to optimize the size and number of the mesh elements. A new model was created with a finer mesh in the through-thickness direction, since this direction is the one most involved in the process deformations. The resulting increase in the global number of elements was compensated by adjusting the in-plane element size far from the peened area in order to avoid excessive computational cost. The efficiency and stability of the analyses were improved by using the bulk viscosity coefficients, a purely numerical parameter available in Abaqus/Explicit. A plastic rate sensitivity study was also carried out and a new set of Johnson-Cook model coefficients was chosen. These investigations led to a more controllable and reliable model, valid even for more complex geometries. Moreover, the study of the material properties highlighted a shortcoming of the model in the simulation of the surface conditions. Modeling of the ablative layer employed during the real process was used to fill this gap. In the real process the ablative layer is a very thin sheet of pure aluminum applied to the workpiece. In the simulation it was simply reproduced as a 100 µm layer of a material with a yield stress of 10 MPa. All these new settings were applied to a set of analyses with different geometry models to verify the robustness of the model. The calibration of the model against the experimental results was based on stress and displacement measurements carried out both at the surface and in depth. The good correlation between simulation and experimental results showed the model to be reliable.
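The abstract mentions choosing a new set of Johnson-Cook coefficients to capture plastic rate sensitivity. A minimal sketch of evaluating the (isothermal) Johnson-Cook flow stress at LSP-like strain rates is shown below; the coefficients are placeholders loosely typical of an aluminium alloy, not the values calibrated in this work.

```python
import numpy as np

def johnson_cook_stress(eps_p, eps_rate, A, B, n, C, eps_rate_0=1.0):
    """Johnson-Cook flow stress, thermal softening term omitted:
    sigma = (A + B*eps_p**n) * (1 + C*ln(eps_rate/eps_rate_0))."""
    strain_term = A + B * eps_p**n
    rate_term = 1.0 + C * np.log(np.maximum(eps_rate / eps_rate_0, 1e-12))
    return strain_term * rate_term

# Placeholder coefficients, illustrative only (Pa, Pa, -, -).
A, B, n, C = 350e6, 440e6, 0.42, 0.015
for rate in (1.0, 1e3, 1e6):   # LSP involves very high strain rates
    sigma = johnson_cook_stress(eps_p=0.05, eps_rate=rate, A=A, B=B, n=n, C=C)
    print(f"strain rate {rate:8.0e} 1/s -> flow stress {sigma/1e6:6.1f} MPa")
```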
Abstract:
This thesis presents a detailed and successful study of molecular self-assembly on the calcite CaCO3(10-14) surface. One reason for the particular suitability of this surface becomes apparent when considering the well-known growth modes: layer-by-layer growth, which is a necessity for the formation of templated two-dimensional (2D) molecular structures, is particularly favoured on substrates with a high surface energy. The CaCO3(10-14) surface is among those substrates and is thus most promising.
All experiments in this thesis were performed with a non-contact atomic force microscope (NC-AFM) under ultra-high vacuum conditions. The acquisition of drift-free data became possible in this thesis owing to the newly developed atom-tracking system, which features a lateral tip-positioning precision of at least 50 pm. Furthermore, a newly developed scan protocol was implemented in this system, which allows the acquisition of dense three-dimensional (3D) data under room-temperature conditions. An entire 3D data set from a CaCO3(10-14) surface consisting of 85x85x500 pixels is discussed.
The row-pairing and (2x1) reconstructions of the CaCO3(10-14) surface constitute highly interesting research subjects. For both reconstructions, the NC-AFM imaging was classified into a total of 12 contrast modes. Eight of these modes were observed within this thesis, some of them for the first time. Together with literature findings, a total of 10 modes has been observed experimentally to date. Some contrast modes proved to be highly distance-dependent, and for at least one contrast mode a strong influence of the tip termination was found.
Most interestingly, the row-pairing reconstruction was found to break a symmetry element of the CaCO3(10-14) surface: with this reconstruction present, the calcite (10-14) surface becomes chiral. From high-resolution NC-AFM data the enantiomers can be identified, which is demonstrated for one enantiomer in this thesis.
Five studies of self-assembled molecular structures on calcite (10-14) surfaces are presented. Only for one system, namely HBC/CaCO3(10-14), was the formation of a molecular bulk structure observed. This well-known consequence of weak molecule-insulator interaction hinders the investigation of two-dimensional molecular self-assembly. It was, however, possible to force the formation of an island phase for this system by following a variable-temperature preparation.
For the C60/CaCO3(10-14) system it is most notable that no branched island morphologies were found; instead, the first C60 layer appeared to wet the calcite surface.
In all studies, the molecules arranged themselves in ordered superstructures. A templating effect due to the underlying calcite substrate was evident for all systems. Strikingly, this templating either led to the formation of large commensurate superstructures, such as a (2x15) superstructure with a 14-molecule basis for the C60/CaCO3(10-14) system, or prevented the extended growth of incommensurate molecular motifs, such as the chicken-wire structure in the trimesic acid (TMA)/CaCO3(10-14) system.
The molecule-molecule and molecule-substrate interactions were increased by choosing molecules with carboxylic acid moieties in the third, fourth and fifth studies, using terephthalic acid, TMA and helicene molecules. In all these experiments, hydrogen-bonded assemblies were created.
Directed hydrogen bond formation combined with intermolecular pi-pi interaction was employed in the fifth study, in which uni-directional molecular "wires" were successfully formed from single helicene molecules. Each "wire" is composed of heterochiral helicene pairs, well aligned along the [01-10] substrate direction and stabilised by pi-pi interaction.
Abstract:
Nitrous acid (HONO) is one of the reactive nitrogen species of the atmosphere and pedosphere. The exact formation pathways of HONO, as well as the mutual exchange of HONO between atmosphere and pedosphere, have not yet been fully elucidated. HONO photolysis produces the hydroxyl radical (OH) and nitric oxide (NO), which reflects the importance of HONO for atmospheric photochemistry.
To investigate the formation of HONO in soil and its subsequent exchange with the atmosphere, measurements of soil samples were carried out with dynamic chambers. Emission fluxes of water, NO and HONO measured in the laboratory show that HONO is emitted to a comparable extent and over the same soil moisture range as NO (from 6.5 to 56.0 % WHC). The magnitude of the HONO emission fluxes at neutral to basic pH values and the activation energy of the HONO emission fluxes suggest that microbial nitrification is the main source of the HONO emission. Inhibition experiments with a soil sample and the measurement of a pure culture of Nitrosomonas europaea supported this theory. As a consequence, the conceptual model of soil emissions of different nitrogen compounds as a function of the soil water content was extended to HONO.
In a further experiment, air with an elevated HONO mixing ratio was used to flush the dynamic chamber. The measurement of an excellently characterised soil sample showed bidirectional fluxes of HONO. Soils can thus act not only as a HONO source but, depending on the conditions, also as an effective sink.
Furthermore, it could be shown that the ratio of HONO to NO emissions correlates with the soil pH. The reason could be the increased reactivity of HONO at low pH and the longer residence time of HONO caused by reduced gas diffusion in the soil pore space, since a low pH coincided with higher soil moisture at the emission maximum. It was shown that the effective diffusion of gases in the soil pore space and the effective diffusion of ions in the soil solution limit HONO production and the exchange of HONO with the atmosphere.
Complementing the laboratory measurements, HONO was measured during the HUMPPA-COPEC 2010 campaign in the boreal coniferous forest, simultaneously at a height of 1 m above the ground and 2 to 3 m above the canopy. The budget calculations for HONO show that, during the day, all known sources and sinks are negligible (< 20%) compared with the dominant HONO photolysis rate. Neither soil emissions of HONO nor the photolysis of nitric acid adsorbed on surfaces can explain the missing source. The light-induced reduction of nitrogen dioxide (NO2) on surfaces could not be excluded. It emerged, however, that the missing source correlates more strongly with the HONO photolysis rate than with the corresponding photolysis frequency, which is proportional to the photolysis frequency of NO2. It can therefore be concluded that either the photolysis rate of HONO is overestimated or that an as yet unknown HONO source exists that correlates very strongly with the photolysis rate.
Abstract:
The health effects of aerosol particles are strongly influenced by their chemical and physical properties and thus by the respective formation processes and source characteristics. While the main sources of anthropogenic particle emissions are well studied, the specific emission patterns of numerous small aerosol sources, which can locally and temporarily contribute to a significant deterioration of air quality, remain a research gap.
In the present work, the largely unknown physical and chemical properties of the emissions of particular anthropogenic aerosol sources are investigated in combined laboratory and field measurements using an integrative analytical approach with online (HR-ToF-AMS) and filter-based offline (ATR-FTIR spectroscopy) measurement techniques. In addition to a football stadium as a complex mixture of different aerosol sources such as deep-frying and grilling, cigarette smoking and pyrotechnics, the study includes the emissions from fireworks, intensive livestock farming (laying hens), civil and road construction works, and wastewater-derived aerosol particles. The primary particle emissions of the investigated sources are predominantly characterised by small particle sizes (dp < 1 µm) and thus by high respirability. In contrast, the aerosol particles in the barn of the intensive livestock operation and the emissions from the civil construction works show a high mass fraction of particles with dp > 1 µm. The focus of the investigation is the chemical characterisation of the organic particle constituents, which dominate the NR-PM1 emissions of many of the sources. Important source-specific differences in the composition of the organic aerosol fraction are found. The aerosol particles released during the burning of pyrotechnic articles and the wastewater-derived particles, by contrast, contain high relative amounts of inorganic substances. Metal compounds can also be detected in the AMS mass spectra of some specific emissions. Beyond the characterisation of the emission patterns and dynamics, emission factors are determined for several differently coloured smoke cartridges and for the emissions in the barn of the intensive livestock operation, which can be used for quantitative budgeting. In a further step, the analytical limitations of aerosol mass spectrometry, such as the interference of organic fragment ions with (hydrogen) carbonates, and possible evaluation strategies to overcome these limits are presented and discussed on the basis of the empirical data.
An extensive methodological development aimed at improving the analytical interpretability of organic AMS mass spectra shows that, for certain particle types, individual fragment ions in the AMS mass spectra correlate significantly with selected functional groups in the FTIR absorption spectra. Owing to their lack of specificity, a generally valid interpretation of AMS fragment ions as markers for different functional groups is not permissible and is often only possible with the results of the complementary FTIR spectroscopy. Furthermore, the evaporation and ionisation of selected metal compounds in the AMS was analysed. The work makes clear that a qualitative and quantitative evaluation of these substances is not readily possible. The reasons lie in a lack of reproducibility of the evaporation and ionisation process due to matrix effects, and in chemical reactions occurring in the ionisation chamber and on the vaporiser that depend on preceding analyses (vaporiser history).
The findings of this work allow a prioritisation of the investigated anthropogenic sources according to particular measurement parameters and provide, for their particle emissions, a starting point for a risk assessment of subsequent atmospheric processes and of potentially negative effects on human health.
Abstract:
The complete basis set methods CBS-4, CBS-QB3, and CBS-APNO, and the Gaussian methods G2 and G3 were used to calculate the gas phase energy differences between six different carboxylic acids and their respective anions. Two different continuum methods, SM5.42R and CPCM, were used to calculate the free energy differences of solvation for the acids and their anions. Relative pKa values were calculated for each acid using one of the acids as a reference point. The CBS-QB3 and CBS-APNO gas phase calculations, combined with the CPCM/HF/6-31+G(d)//HF/6-31G(d) or CPCM/HF/6-31+G(d)//HF/6-31+G(d) continuum solvation calculations on the lowest energy gas phase conformer, and with the conformationally averaged values, give results accurate to ½ pKa unit. © 2001 American Institute of Physics.
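The study obtains relative pKa values by combining gas-phase deprotonation free energies with continuum solvation free energies and referencing one acid against another. A minimal sketch of that thermodynamic-cycle arithmetic is given below; all energies are placeholder numbers chosen only to illustrate the bookkeeping, not results of the CBS, Gaussian-n, SM5.42R or CPCM calculations.

```python
import math

R = 8.314462618e-3   # kJ/(mol*K)
T = 298.15           # K
LN10 = math.log(10)

def aqueous_deprotonation_dG(dG_gas, dG_solv_anion, dG_solv_acid):
    """Delta G(aq) for HA -> A- + H+; the proton terms cancel when taking
    differences between two acids, so they are omitted here."""
    return dG_gas + dG_solv_anion - dG_solv_acid

def relative_pKa(acid, reference, pKa_reference):
    ddG = aqueous_deprotonation_dG(*acid) - aqueous_deprotonation_dG(*reference)
    return pKa_reference + ddG / (R * T * LN10)

# Placeholder energies in kJ/mol: (gas-phase dG, solvation dG of anion, solvation dG of acid).
acetic = (1429.0, -325.0, -28.0)   # illustrative numbers only
formic = (1415.0, -318.0, -26.0)   # illustrative numbers only
print(f"pKa(acetic) relative to formic (pKa 3.75): {relative_pKa(acetic, formic, 3.75):.2f}")
```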
Abstract:
The potential energy surface for the first step of the alkaline hydrolysis of methyl acetate was explored by a variety of methods. The conformational search routine within SPARTAN was used to determine the lowest energy AM1 and PM3 structures for the anionic tetrahedral intermediate. Ab initio single point and geometry optimization calculations were performed to determine the lowest energy conformer, and the linear synchronous transit (LST) method was used to provide an initial structure for transition state optimization. Transition states were obtained at the AM1, PM3, 3-21G, and 3-21+G levels of theory. These transition states were compared with the anionic tetrahedral intermediates to examine the assumption that the intermediate is a good model for the transition state. In addition, the Cramer/Truhlar SM3 solvation model was used at the semiempirical level to compare gas phase and aqueous alkaline hydrolysis of methyl acetate.
Abstract:
The aim of this study was to evaluate the influence of surface roughness on surface hardness (Vickers; VHN), elastic modulus (EM), and flexural strength (FLS) of two computer-aided design/computer-aided manufacturing (CAD/CAM) ceramic materials. One hundred sixty-two samples of VITABLOCS Mark II (VMII) and 162 samples of IPS Empress CAD (IPS) were ground according to six standardized protocols producing decreasing surface roughnesses (n=27/group): grinding with 1) silicon carbide (SiC) paper #80, 2) SiC paper #120, 3) SiC paper #220, 4) SiC paper #320, 5) SiC paper #500, and 6) SiC paper #1000. Surface roughness (Ra/Rz) was measured with a surface roughness meter, VHN and EM with a hardness indentation device, and FLS with a three-point bending test. To test for a correlation between surface roughness (Ra/Rz) and VHN, EM, or FLS, Spearman rank correlation coefficients were calculated. The decrease in surface roughness led to an increase in VHN from (VMII/IPS; medians) 263.7/256.5 VHN to 646.8/601.5 VHN, an increase in EM from 45.4/41.0 GPa to 66.8/58.4 GPa, and an increase in FLS from 49.5/44.3 MPa to 73.0/97.2 MPa. For both ceramic materials, Spearman rank correlation coefficients showed a strong negative correlation between surface roughness (Ra/Rz) and VHN or EM and a moderate negative correlation between Ra/Rz and FLS. In conclusion, a decrease in surface roughness generally improved the mechanical properties of the CAD/CAM ceramic materials tested. However, FLS was less influenced by surface roughness than expected.
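The analysis correlates surface roughness (Ra/Rz) with hardness, elastic modulus, and flexural strength via Spearman rank correlation coefficients. A minimal sketch of that test is shown below; the per-protocol values are synthetic placeholders shaped like the reported trend (rougher surface, lower hardness), not the study's measurements.

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic values, illustration only: one (roughness, hardness) pair per grinding protocol.
Ra  = np.array([1.9, 1.3, 0.9, 0.6, 0.35, 0.12])   # µm
VHN = np.array([265, 320, 410, 480, 560, 645])      # Vickers hardness

rho, p = spearmanr(Ra, VHN)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")    # strong negative correlation expected
```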
Abstract:
With recent advances in mass spectrometry techniques, it is now possible to investigate proteins over a wide range of molecular weights in small biological specimens. This advance has generated data-analytic challenges in proteomics, similar to those created by microarray technologies in genetics, namely, discovery of "signature" protein profiles specific to each pathologic state (e.g., normal vs. cancer) or differential profiles between experimental conditions (e.g., treated by a drug of interest vs. untreated) from high-dimensional data. We propose a data analytic strategy for discovering protein biomarkers based on such high-dimensional mass-spectrometry data. A real biomarker-discovery project on prostate cancer is taken as a concrete example throughout the paper: the project aims to identify proteins in serum that distinguish cancer, benign hyperplasia, and normal states of prostate using the Surface Enhanced Laser Desorption/Ionization (SELDI) technology, a recently developed mass spectrometry technique. Our data analytic strategy takes properties of the SELDI mass spectrometer into account: the SELDI output of a specimen contains about 48,000 (x, y) points, where x is the protein mass divided by the number of charges introduced by ionization and y is the protein intensity of the corresponding mass per charge value, x, in that specimen. Given high coefficients of variation and other characteristics of protein intensity measures (y values), we reduce the measures of protein intensities to a set of binary variables that indicate peaks in the y-axis direction in the nearest neighborhoods of each mass per charge point in the x-axis direction. We then account for a shifting (measurement error) problem of the x-axis in SELDI output. After these pre-analysis processing steps, we combine the binary predictors to generate classification rules for cancer, benign hyperplasia, and normal states of prostate. Our approach is to apply the boosting algorithm to select binary predictors and construct a summary classifier. We empirically evaluate the sensitivity and specificity of the resulting summary classifiers with a test dataset that is independent of the training dataset used to construct the summary classifiers. The proposed method performed nearly perfectly in distinguishing cancer and benign hyperplasia from normal. In the classification of cancer vs. benign hyperplasia, however, an appreciable proportion of the benign specimens were classified incorrectly as cancer. We discuss practical issues associated with our proposed approach to the analysis of SELDI output and its application in cancer biomarker discovery.
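The strategy reduces each SELDI trace to binary peak indicators and then boosts over those binary predictors to build a summary classifier. A minimal sketch of that general idea follows; the window size, synthetic spectra, and labels are hypothetical placeholders, and the peak detection and x-axis shift correction described in the paper are more elaborate than this.

```python
import numpy as np
from scipy.signal import argrelextrema
from sklearn.ensemble import AdaBoostClassifier

def binary_peak_features(intensities, order=5):
    """1 where the intensity is a local maximum within +/- `order` neighbours, else 0."""
    flags = np.zeros_like(intensities, dtype=int)
    flags[argrelextrema(intensities, np.greater, order=order)[0]] = 1
    return flags

# Synthetic spectra: rows are specimens, columns are mass-per-charge bins (illustration only).
rng = np.random.default_rng(1)
spectra = rng.gamma(shape=2.0, scale=1.0, size=(60, 500))
labels = rng.integers(0, 2, size=60)          # e.g. 0 = normal, 1 = cancer (placeholder)

X = np.vstack([binary_peak_features(s) for s in spectra])
clf = AdaBoostClassifier(n_estimators=100).fit(X, labels)   # boosting over binary predictors
print("training accuracy:", clf.score(X, labels))
```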
Abstract:
Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, iteratively estimates a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines the surface model. Outliers are handled by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis testing procedure to estimate it automatically. We present our validations using four experiments: (1) a leave-one-out experiment, (2) an experiment evaluating the present approach for handling pathology, (3) an experiment evaluating the present approach for handling outliers, and (4) an experiment reconstructing surface models of seven dry cadaver femurs using clinically relevant data without noise and with noise added. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95-percentile error of 1.7-2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with noise added.
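Outliers are handled throughout with a least trimmed squares (LTS) approach. Below is a minimal, generic sketch of the LTS idea applied to a scale-plus-rigid (similarity) fit between paired points, using repeated refits on the best-matching fraction of pairs; this illustrates the trimming principle only, not the authors' exact three-stage algorithm, and all data are synthetic.

```python
import numpy as np

def similarity_fit(src, dst):
    """Least-squares scale s, rotation R, translation t so that s*R@src + t ~ dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (A**2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def lts_similarity_fit(src, dst, outlier_rate=0.2, n_iter=20):
    """Least-trimmed-squares variant: repeatedly refit on the h best-matching pairs."""
    h = int(round((1.0 - outlier_rate) * len(src)))
    keep = np.arange(len(src))
    for _ in range(n_iter):
        s, R, t = similarity_fit(src[keep], dst[keep])
        residuals = np.linalg.norm((s * (R @ src.T).T + t) - dst, axis=1)
        keep = np.argsort(residuals)[:h]                      # trim the worst pairs
    return s, R, t

# Tiny synthetic example: known scale and translation, with 10% gross outliers in dst.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
dst = 1.1 * src + np.array([5.0, 0.0, -2.0])
dst[:10] += rng.normal(scale=20.0, size=(10, 3))
print("recovered scale:", round(lts_similarity_fit(src, dst)[0], 3))   # ~1.1
```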