923 results for Element free Galerkin method
Abstract:
Within this PhD thesis, several methods were developed and validated that are suitable for environmental samples and material science and should be applicable for the monitoring of particular radionuclides and for the analysis of the chemical composition of construction materials in the frame of the ESS project. The study demonstrated that ICP-MS is a powerful analytical technique for the ultrasensitive determination of 129I, 90Sr and lanthanides in both artificial and environmental samples such as water and soil. In particular, ICP-MS with a collision cell allows the measurement of extremely low iodine isotope ratios. It was demonstrated that 129I/127I isotope ratios as low as 10^-7 can be measured with an accuracy and precision suitable for distinguishing sample origins. ICP-MS with a collision cell, in particular in combination with cool plasma conditions, reduces the influence of isobaric interferences at m/z = 90 and is therefore well suited for 90Sr analysis in water samples. However, the ICP-CC-QMS applied in this work is limited for the measurement of 90Sr by the tailing of 88Sr+ and, in particular, by Daly detector noise. Hyphenation of capillary electrophoresis with ICP-MS was shown to resolve the atomic ions of all lanthanides as well as polyatomic interferences. The elimination of polyatomic and isobaric ICP-MS interferences was accomplished without compromising sensitivity by the use of the high-resolution mode available on ICP-SFMS. The combination of laser ablation with ICP-MS allowed direct micro-local uranium isotope ratio measurements at ultratrace concentrations on the surface of biological samples. In particular, the application of a cooled laser ablation chamber improves the precision and accuracy of uranium isotope ratio measurements by up to one order of magnitude in comparison to a non-cooled laser ablation chamber. To mitigate the quantification problem, an on-line solution-based calibration under a single gas flow was set up by inserting a DS-5 microflow nebulizer directly into the laser ablation chamber. A micro-local method to determine the lateral element distribution on a NiCrAlY-based alloy and coating after oxidation in air was tested and validated. Calibration procedures involving external calibration, quantification by relative sensitivity coefficients (RSCs) and solution-based calibration were investigated. The analytical method was validated by comparison of the LA-ICP-MS results with data acquired by EDX.
Abstract:
The primary objective of this thesis is to obtain a better understanding of the 3D velocity structure of the lithosphere in central Italy. To this end, I adopted the Spectral-Element Method to perform accurate numerical simulations of the complex wavefields generated by the 2009 Mw 6.3 L’Aquila event and by its foreshocks and aftershocks, together with some additional events within our target region. For the mainshock, the source was represented by a finite fault, and different models for central Italy, both 1D and 3D, were tested. Surface topography, attenuation and the Moho discontinuity were also accounted for. Three-component synthetic waveforms were compared to the corresponding recorded data. The results of these analyses show that 3D models, including all the known structural heterogeneities in the region, are essential to accurately reproduce wave propagation. They make it possible to capture features of the seismograms, mainly related to topography or to low-wavespeed areas, and, combined with a finite fault model, result in a favorable match between data and synthetics for frequencies up to ~0.5 Hz. We also obtained peak ground velocity maps, which provide valuable information for seismic hazard assessment. The remaining differences between data and synthetics led us to combine SEM with an adjoint method to iteratively improve the available 3D structure model for central Italy. A total of 63 events and 52 stations in the region were considered. We performed five iterations of the tomographic inversion, calculating the misfit function gradient needed for the model update from adjoint sensitivity kernels, constructed using only two simulations for each event. Our last updated model features a reduced traveltime misfit and improved agreement between data and synthetics, although further iterations, as well as refined source solutions, are necessary to obtain a new reference 3D model for central Italy tomography.
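For orientation, the traveltime misfit function and its gradient used in this kind of adjoint tomography are commonly written in the following generic form (textbook notation with per-measurement uncertainties, not taken from the thesis itself):

\[ \chi(\mathbf{m}) = \frac{1}{2} \sum_{s=1}^{N_{\mathrm{ev}}} \sum_{r=1}^{N_{\mathrm{rec}}} \left[ \frac{\Delta T_{sr}(\mathbf{m})}{\sigma_{sr}} \right]^{2}, \qquad \delta\chi = \int_{V} K(\mathbf{x})\, \delta \ln m(\mathbf{x})\, \mathrm{d}^{3}\mathbf{x}, \]

where \(\Delta T_{sr}\) is the traveltime difference between synthetics and data for event \(s\) at receiver \(r\), \(\sigma_{sr}\) its uncertainty, and \(K\) the adjoint sensitivity kernel, assembled from the interaction of one forward and one adjoint wavefield, which is why only two simulations per event are required.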
Abstract:
Tethered bilayer lipid membranes (tBLMs) are a promising model system for the natural cell membrane. They consist of a lipid bilayer that is covalently coupled to a solid support via a spacer group. In this study, we developed a suitable approach to increase the submembrane space in tBLMs. The challenge is to create a membrane with a lower lipid density in order to increase the membrane fluidity while avoiding defects that might appear due to an increase in the lateral space within the tethered monolayers. Therefore, various synthetic strategies and different monolayer preparation techniques were examined. Synthetic attempts to achieve a large ion reservoir were made in two directions: increasing the spacer length of the tether lipids and increasing the lateral distribution of the lipids in the monolayer. The first resulted in the synthesis of a small library of tether lipids (DPTT, DPHT and DPOT) characterized by 1H and 13C NMR, FD-MS, ATR, DSC and TGA. The synthetic strategy for their preparation includes the synthesis of a precursor with a double-bond anchor that can be easily modified for different substrates (e.g. metal and metal oxide). Here, the double bond was modified into a thiol group suitable for gold surfaces. Another approach towards the preparation of homogeneous monolayers with decreased two-dimensional packing density was the synthesis of two novel anchor lipids: DPHDL and DDPTT. DPHDL is a “self-diluted” tether lipid containing two lipoic anchor moieties. DDPTT has an extended lipophilic part that should lead to the preparation of diluted, leakage-free proximal layers that facilitate the completion of the bilayer. Our toolbox of tether lipids was completed with two fluorescently labeled lipid precursors with one and two phytanyl chains, respectively, in the hydrophobic region and a dansyl group as the fluorophore. The use of such fluorescently marked lipids is expected to give additional information on the lipid distribution at the air-water interface. The Langmuir film balance was used to investigate the monolayer properties of four of the synthesized thiolated anchor lipids. The packing density and mixing behaviour were examined. The results have shown that mixing anchor lipids with free lipids can homogeneously dilute the anchor lipid monolayers. Moreover, an increase in the hydrophilicity (PEG chain length) of the anchor lipids leads to a higher packing density; a decrease in temperature results in a similar trend. However, increasing the number of phytanyl chains per lipid molecule is shown to decrease the packing density. LB monolayers based on pure and mixed lipids at different ratios and transfer pressures were tested to form tBLMs with diluted inner layers. A combination of LB-monolayer transfer with the solvent exchange method successfully accomplished the formation of tBLMs based on pure DPOT. Some preliminary investigations of the electrical sealing properties and protein incorporation of self-assembled DPOT- and DDPTT-based tBLMs were conducted. The bilayer formation performed by solvent exchange resulted in membranes with high resistances and low capacitances. The appearance of space beneath the membrane is clearly visible in the impedance spectra, expressed by a second RC element. This leads to the conclusion that the longer spacer in DPOT and the larger lateral space between the DDPTT molecules substantially influence the electrical parameters of the membrane in the investigated systems.
Finally, we were able to show the functional incorporation of the small ion carrier valinomycin in both types of membranes.
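As an illustration of how a second RC element shows up in such spectra, the sketch below evaluates the impedance of the series equivalent circuit R_el + (R_m || C_m) + (R_sub || C_sub); the circuit topology and all parameter values are assumptions for illustration, not fitted values from this work.

```python
import numpy as np

def parallel_rc(omega, r, c):
    """Complex impedance of a resistor and a capacitor in parallel."""
    return r / (1.0 + 1j * omega * r * c)

def tblm_impedance(freq_hz, r_el=100.0, r_m=1e7, c_m=0.8e-6,
                   r_sub=1e5, c_sub=5e-6):
    """Z(f) of R_el in series with (R_m || C_m) and (R_sub || C_sub).

    The second RC pair stands for the submembrane (spacer) space; all
    parameter values are invented and only order-of-magnitude plausible.
    """
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    return r_el + parallel_rc(omega, r_m, c_m) + parallel_rc(omega, r_sub, c_sub)

freqs = np.logspace(-2, 5, 200)               # 10 mHz to 100 kHz
z = tblm_impedance(freqs)
print(abs(z[0]), np.degrees(np.angle(z[0])))  # |Z| and phase at 10 mHz
```

On a Bode or Nyquist plot of this model, the submembrane element appears as the additional low-frequency feature described above.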
Abstract:
This PhD thesis concerns geochemical constraints on recycling and partial melting of Archean continental crust. A natural example of such processes was found in the Iisalmi area of central Finland. The rocks from this area are Middle to Late Archean in age and experienced metamorphism and partial melting between 2.7 and 2.63 Ga. The work is based on extensive field work, complemented by bulk-rock geochemical data as well as in-situ analyses of minerals. All geochemical data were obtained at the Institute of Geosciences, University of Mainz, using X-ray fluorescence, solution ICP-MS and laser ablation ICP-MS for the bulk-rock geochemical analyses. Mineral analyses were accomplished by electron microprobe and laser ablation ICP-MS. Fluid inclusions were studied by microscope on a heating-freezing stage at the Geoscience Center, University of Göttingen. Part I focuses on the development of a new analytical method for bulk-rock trace element determination by laser ablation ICP-MS using homogeneous glasses fused from rock powder on an iridium strip heater. This method is applicable to mafic rock samples whose melts have low viscosities and homogenize quickly at temperatures of ~1200°C. Highly viscous melts of felsic samples prevent melting and homogenization at comparable temperatures. Fusion of felsic samples can be enabled by the addition of MgO to the rock powder and by adjusting the melting temperature and duration to the rock composition. Advantages of the fusion method are low detection limits compared to XRF analysis, the avoidance of the wet-chemical processing and strong acids used in solution ICP-MS, and smaller sample volumes compared to the other methods. Part II of the thesis uses bulk-rock geochemical data and results from fluid inclusion studies to discriminate the melting processes observed in different rock types. Fluid inclusion studies demonstrate a major change in fluid composition from CO2-dominated fluids in granulites to aqueous fluids in TTG gneisses and amphibolites. Partial melts were generated in the dry, CO2-rich environment by dehydration melting reactions of amphibole, which in addition to tonalitic melts produced the anhydrous mineral assemblages of granulites (grt + cpx + pl ± amph or opx + cpx + pl + amph). Trace element modeling showed that mafic granulites are residues of 10-30 % melt extraction from amphibolitic precursor rocks. The maximum degree of melting in intermediate granulites was ~10 %, as inferred from the modal abundances of amphibole, clinopyroxene and orthopyroxene. Carbonic inclusions are absent in upper-amphibolite-facies migmatites, whereas aqueous inclusions with up to 20 wt% NaCl are abundant. This suggests that melting within TTG gneisses and amphibolites took place in the presence of an aqueous fluid phase that enabled melting at the wet solidus at temperatures of 700-750°C. The strong disruption of pre-metamorphic structures in some outcrops suggests that the maximum amount of melt in TTG gneisses was ~25 vol%. The presence of leucosomes in all rock types is taken as the principal evidence for melt formation. However, the mineralogical appearance as well as the major and trace element composition of many leucosomes imply that leucosomes seldom represent frozen in-situ melts. They are better considered remnants of the melt channel network, i.e. pathways along which melts escaped from the system.
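For reference, residue modeling of this kind is commonly based on the batch-melting mass balance (the standard relation; the abstract does not spell out the exact formulation used in the thesis):

\[ \frac{C_{s}}{C_{0}} = \frac{D}{D + F\,(1 - D)}, \]

where C_s is the concentration of a trace element in the residue, C_0 that in the protolith, F the extracted melt fraction (here 0.1-0.3) and D the bulk solid/melt partition coefficient. For incompatible elements (D << 1) the residue is strongly depleted, which is the signature used to identify the granulites as melt residues.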
Part III of the thesis describes how analyses of minerals from a specific rock type (granulite) can be used to determine partition coefficients between different minerals, and between minerals and melt, suitable for lower-crustal conditions. The trace element analyses by laser ablation ICP-MS show a coherent distribution among the principal mineral phases independent of rock composition. REE contents in amphibole are about 3 times higher than REE contents in clinopyroxene from the same sample. This consistency has to be taken into consideration in models of lower-crustal melting in which amphibole is replaced by clinopyroxene in the course of melting. A lack of equilibrium is observed between matrix clinopyroxene/amphibole and garnet porphyroblasts, which suggests late-stage growth of garnet and slow diffusion and equilibration of the REE during metamorphism. The data provide a first set of distribution coefficients for the transition metals (Sc, V, Cr, Ni) in the lower crust. In addition, analyses of ilmenite and apatite demonstrate the strong influence of accessory phases on trace element distribution. Apatite contains high amounts of REE and Sr, while ilmenite incorporates about 20-30 times more Nb and Ta than amphibole. Furthermore, the trace element mineral analyses provide evidence for magmatic processes such as melt depletion, melt segregation, accumulation and fractionation, as well as metasomatism, having operated in this high-grade anatectic area.
Abstract:
In this thesis a mathematical model is derived that describes the charge and energy transport in semiconductor devices such as transistors, and numerical simulations of these physical processes are performed. To accomplish this, methods of theoretical physics, functional analysis, numerical mathematics and computer programming are applied. After an introduction to the state of the art in semiconductor device simulation and a brief historical review, attention shifts to the construction of the model that serves as the basis of the subsequent derivations in the thesis. The starting point is an important equation from the theory of dilute gases, from which the model equations are derived and specified by means of a series expansion method. This is done in a multi-stage derivation process that is mainly taken from a scientific paper and does not constitute the focus of this thesis. In the following part we specify the mathematical setting and make the model assumptions precise, using methods of functional analysis. Since the equations we deal with are coupled, we are concerned with a nonstandard problem; in contrast, the theory of scalar elliptic equations is by now well established. Subsequently, we address the numerical discretization of the equations. A special finite-element method is used for the discretization; this special approach is necessary to make the numerical results appropriate for practical application. By a series of transformations of the discrete model we derive a system of algebraic equations suitable for numerical evaluation. Using custom computer programs we solve the equations to obtain approximate solutions. These programs are based on new and specialized iteration procedures that were developed and thoroughly tested within the frame of this research work. Owing to their importance and novelty, they are explained and demonstrated in detail. We compare these new iterations with a standard method complemented by a feature that adapts it to the current context. A further innovation is the computation of solutions in three-dimensional domains, which are still rare. Special attention is paid to the applicability of the 3D simulation tools, and the programs are designed to have justifiable computational complexity. Simulation results for some models of contemporary semiconductor devices are shown and commented on in detail. Finally, we give an outlook on future development and enhancements of the models and of the algorithms used.
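To give a flavor of the kind of decoupled iteration procedure commonly used for such coupled systems, the sketch below shows a damped, Gummel-type fixed-point iteration on a toy 2x2 nonlinear system; the toy equations, damping factor and tolerance are invented for illustration and merely stand in for the coupled transport equations of the thesis.

```python
import numpy as np

def solve_block_a(b):
    # "Solve" the first equation for a, holding b fixed (toy closed form).
    return np.tanh(b)

def solve_block_b(a):
    # "Solve" the second equation for b, holding a fixed (toy closed form).
    return 0.5 * np.cos(a)

def gummel_iteration(damping=0.7, tol=1e-12, max_iter=100):
    """Alternate single-block solves with damping until the update stalls."""
    a, b = 0.0, 0.0
    for k in range(max_iter):
        a_new = solve_block_a(b)
        b_new = solve_block_b(a_new)
        # Damping stabilizes the iteration when the coupling is strong.
        a_next = a + damping * (a_new - a)
        b_next = b + damping * (b_new - b)
        if max(abs(a_next - a), abs(b_next - b)) < tol:
            return a_next, b_next, k + 1
        a, b = a_next, b_next
    raise RuntimeError("iteration did not converge")

print(gummel_iteration())   # converged (a, b) and iteration count
```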
Abstract:
A two-dimensional model to analyze the distribution of magnetic fields in the airgap of PM electrical machines is studied. A numerical algorithm for the non-linear magnetic analysis of multiphase surface-mounted PM machines with semi-closed slots is developed, based on the equivalent magnetic circuit method. By using a modular geometry whose basic element can be duplicated, the model allows any type of winding distribution to be designed. In comparison to FEA, it permits a reduction in computing time and allows parameter values to be changed directly in a user interface without redesigning the model. The output torque and the radial forces acting on the moving part of the machine can be calculated. In addition, an analytical model for the calculation of radial forces in multiphase bearingless surface-mounted permanent magnet synchronous motors (SPMSM) is presented. It predicts the amplitude and direction of the force as functions of the torque current, the levitation current and the rotor position. It is based on the space vector method, which also permits analysis of the machine during transients. The calculations are carried out by expanding the analytical functions in Fourier series, taking all possible interactions between stator and rotor mmf harmonic components into account; since the model is parametrized, the effects of the electrical and geometrical quantities of the machine can be analyzed. The model is implemented in the design of a control system for bearingless machines as an accurate electromagnetic model integrated in a three-dimensional mechanical model, in which one end of the motor shaft is constrained to simulate the presence of a mechanical bearing while the other end is free, supported only by the radial forces developed by the interacting magnetic fields, realizing a bearingless system with three degrees of freedom. The complete model represents the design of the experimental system to be realized in the laboratory.
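As a sketch of how a net radial force arises from interacting airgap field harmonics, the snippet below integrates the radial Maxwell stress numerically (tangential field neglected, all amplitudes and dimensions invented); the thesis instead derives the force analytically from the Fourier-series mmf model.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def radial_force(b_r_harmonics, radius=0.05, stack_length=0.1, n_points=4096):
    """Integrate the radial Maxwell stress B_r^2 / (2*mu0) around the airgap.

    b_r_harmonics: dict {harmonic order: (amplitude [T], phase [rad])}
    Returns the net force (F_x, F_y) on the rotor in newtons.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    b_r = sum(a * np.cos(n * theta + ph)
              for n, (a, ph) in b_r_harmonics.items())
    sigma = b_r**2 / (2.0 * MU0)       # radial stress, tangential field neglected
    d_theta = 2.0 * np.pi / n_points
    f_x = np.sum(sigma * np.cos(theta)) * radius * stack_length * d_theta
    f_y = np.sum(sigma * np.sin(theta)) * radius * stack_length * d_theta
    return f_x, f_y

# Field components whose pole-pair numbers differ by one (here 2 and 3)
# produce the net single-sided pull exploited in bearingless machines.
print(radial_force({2: (0.9, 0.0), 3: (0.1, 0.0)}))
```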
Abstract:
Antibody microarrays are of great research interest because of their potential application as biosensors for high-throughput protein and pathogen screening technologies. In this active area, there is still a need for novel structures and assemblies providing insight into binding interactions, such as spherical and annulus-shaped protein structures, e.g. for the utilization of curved surfaces for enhanced protein-protein interactions and the detection of antigens. Therefore, the goal of the presented work was to establish a new technique for the label-free detection of biomolecules and bacteria on topographically structured surfaces suitable for antibody binding.

In the first part of the presented thesis, the fabrication of monolayers of inverse opals with 10 μm diameter and the immobilization of antibodies on their interior surface is described. For this purpose, several established methods for linking antibodies to glass, including Schiff bases, EDC/S-NHS chemistry and the biotin-streptavidin affinity system, were tested. The methods employed included immunofluorescence and image analysis by phase contrast microscopy. It could be shown that these methods were not successful in terms of antibody immobilization and subsequent bacteria binding. Hence, a method based on the application of an active-ester silane was introduced. It showed promising results but also the need for further analysis. In particular, alternative antibodies addressing other antigens on the exterior of bacteria will be sought in the future.

Building on this ability to control antibody-functionalized surfaces, a new technique is presented that employs colloidal templating to yield large-scale (~cm²) 2D arrays of antibodies against E. coli K12, eGFP and human integrin αvβ3 on a versatile glass surface. The antibodies were swept to reside around the templating microspheres during solution drying and physisorbed on the glass. After removing the microspheres, the formation of annulus-shaped antibody structures was observed. The preserved antibody structure and functionality are shown by binding of the specific antigens and secondary antibodies. The improved detection of specific bacteria from a crude solution compared to conventional “flat” antibody surfaces is demonstrated, as is the setting up of an integrin-binding platform for targeted recognition and surface interactions of eukaryotic cells. The structures were investigated by atomic force, confocal and fluorescence microscopy. Operational parameters like drying time, temperature, humidity and surfactants were optimized to obtain stable antibody structures.
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1) The idea that underlies intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2) Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. Hence, one of the central elements of this work is the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype of such a system, whose acquisition components were developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora. To name four major challenges in constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha.
The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a summary of the findings and the motivation for further improvements, as well as proposals for future research on the automatic induction of lexical features.
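A minimal sketch of the acquisition loop behind formulas (1) and (2) might look as follows; every name and data structure here is an illustrative placeholder, not the prototype's actual API (the real system builds on an HPSG parser, which is not shown).

```python
# Schematic rendering of (1) G + L + C -> S and (2) G + L + S -> L'.
from dataclasses import dataclass, field

@dataclass
class Lexicon:
    entries: dict = field(default_factory=dict)  # lemma -> set of feature hypotheses

def parse(grammar, lexicon, utterance):
    """(1): map an utterance to linguistically motivated structures."""
    # Placeholder for a real deep parser.
    return [{"word": w, "feature": "valency-hypothesis"} for w in utterance.split()]

def learn(grammar, lexicon, structures):
    """(2): exploit the structures to produce an improved lexicon L'.
    A Learn-Alpha system would also retract falsely acquired knowledge here."""
    revised = Lexicon({w: set(fs) for w, fs in lexicon.entries.items()})
    for s in structures:
        revised.entries.setdefault(s["word"], set()).add(s["feature"])
    return revised

lexicon = Lexicon()
for utterance in ["the cat sleeps", "the dog barks"]:
    lexicon = learn("G", lexicon, parse("G", lexicon, utterance))
print(sorted(lexicon.entries))
```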
Abstract:
Nitrogen is an essential nutrient. For humans, animals and plants it is a constituent element of proteins and nucleic acids. Although the majority of the Earth’s atmosphere consists of elemental nitrogen (N2, 78 %), only a few microorganisms can use it directly. To be useful for higher plants and animals, elemental nitrogen must be converted to a reactive oxidized form. This conversion happens within the nitrogen cycle through free-living microorganisms, symbiotically living Rhizobium bacteria, or lightning. Since the beginning of the 20th century, humans have been able to synthesize reactive nitrogen through the Haber-Bosch process, which has noticeably improved the food security of the world population. On the other hand, the increased nitrogen input results in acidification and eutrophication of ecosystems and in loss of biodiversity, and negative effects on human health arose, such as those from fine particulate matter and summer smog. Furthermore, reactive nitrogen plays a decisive role in atmospheric chemistry and in the global cycles of pollutants and nutrients.

Nitrogen monoxide (NO) and nitrogen dioxide (NO2) belong to the reactive trace gases and are grouped under the generic term NOx. They are important components of atmospheric oxidative processes and influence the lifetime of various less reactive greenhouse gases. NO and NO2 are generated, among other sources, in combustion processes by oxidation of atmospheric nitrogen, as well as by biological processes within soil. In the atmosphere NO is converted very quickly into NO2. NO2 is then oxidized to nitrate (NO3-) and nitric acid (HNO3), which binds to aerosol particles. The bound nitrate is finally removed from the atmosphere by dry and wet deposition. Catalytic reactions of NOx are an important part of atmospheric chemistry, forming or decomposing tropospheric ozone (O3). In the atmosphere NO, NO2 and O3 are in photostationary equilibrium, which is why they are referred to as the NO-NO2-O3 triad. In regions with elevated NO concentrations, reactions with air pollutants can form NO2, altering the equilibrium of ozone formation.

The essential nutrient nitrogen is taken up by plants mainly as dissolved NO3- entering through the roots. Atmospheric nitrogen is oxidized to NO3- within the soil by bacteria, via nitrogen fixation or ammonium formation and nitrification. Additionally, atmospheric NO2 is taken up directly through the stomata. Inside the apoplast, NO2 disproportionates to nitrate and nitrite (NO2-), which can enter the plant's metabolic processes. The enzymes nitrate and nitrite reductase convert nitrate and nitrite to ammonium (NH4+). NO2 gas exchange is controlled by pressure gradients inside the leaves, the stomatal aperture and leaf resistances. Plant stomatal regulation is affected by climate factors like light intensity, temperature and water vapor pressure deficit.

This thesis aims to contribute to the understanding of the effects of vegetation on the atmospheric NO2 cycle and to discuss the NO2 compensation point concentration (mcomp,NO2). To this end, NO2 exchange between the atmosphere and spruce (Picea abies) was measured at the leaf level with a dynamic plant chamber system under laboratory and field conditions. Measurements took place during the EGER project (June-July 2008). Additionally, NO2 data collected on oak (Quercus robur) during the ECHO project (July 2003) were analyzed. The measuring system used allowed the simultaneous determination of NO, NO2, O3, CO2 and H2O exchange rates.
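The photostationary state of the triad mentioned above is commonly expressed through the Leighton relationship, in which NO2 photolysis balances NO2 formation from NO and O3:

\[ j_{\mathrm{NO_2}}\,[\mathrm{NO_2}] = k\,[\mathrm{NO}]\,[\mathrm{O_3}] \quad\Longrightarrow\quad \frac{[\mathrm{NO_2}]}{[\mathrm{NO}]} = \frac{k\,[\mathrm{O_3}]}{j_{\mathrm{NO_2}}}, \]

where j_NO2 is the NO2 photolysis frequency and k the rate constant of the NO + O3 reaction. Deviations from this ratio are what "altering the equilibrium of ozone formation" refers to.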
Calculations of the NO, NO2 and O3 fluxes were based on the generally small differences (∆mi) measured between the inlet and the outlet of the chamber. Consequently, high accuracy and specificity of the analyzer are necessary. To achieve these requirements, a highly specific NO/NO2 analyzer was used, and the whole measurement system was optimized for enduring measurement precision.

Data analysis yielded a significant mcomp,NO2 only if statistical significance of ∆mi was detected; consequently, the significance of ∆mi was used as a data quality criterion. Photochemical reactions of the NO-NO2-O3 triad in the volume of the dynamic plant chamber must be considered in the determination of the NO, NO2 and O3 exchange rates, otherwise the deposition velocity (vdep,NO2) and mcomp,NO2 will be overestimated. No significant mcomp,NO2 could be determined for spruce under laboratory conditions, but under field conditions mcomp,NO2 was identified between 0.17 and 0.65 ppb and vdep,NO2 between 0.07 and 0.42 mm s-1. For the field data on oak, no NO2 compensation point concentration could be determined; vdep,NO2 ranged between 0.6 and 2.71 mm s-1. There is increasing indication that forests are mainly a sink for NO2 and that potential NO2 emissions are low. Only if high soil NO emissions are assumed can more NO2 be formed by reaction with O3 than the plants are able to take up; under these circumstances forests can be a source of NO2.
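A minimal sketch of the flux and compensation point evaluation, under the common convention that the chamber flux is F = (Q/A)·∆m and that mcomp,NO2 is the ambient concentration at which the net flux crosses zero; all numbers below are invented, and the gas-phase triad correction applied in the thesis is omitted here.

```python
import numpy as np

def chamber_flux(q, area, m_in, m_out):
    """Net exchange flux per unit leaf area: F = (Q/A) * (m_out - m_in)."""
    return (q / area) * (m_out - m_in)

# Invented example: ambient NO2 mixing ratios (ppb) and the corresponding
# measured net fluxes (ppb mm s-1); negative flux means uptake by the plant.
ambient = np.array([0.2, 0.5, 1.0, 2.0, 4.0])
flux = np.array([0.05, -0.02, -0.15, -0.41, -0.93])

# Linear fit F = -v_dep * m + b: the compensation point is the zero crossing,
# the deposition velocity the negated slope.
slope, intercept = np.polyfit(ambient, flux, 1)
v_dep = -slope
m_comp = -intercept / slope
print(f"v_dep ~ {v_dep:.3f} mm s-1, m_comp ~ {m_comp:.2f} ppb")
```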
Abstract:
The aim of tissue engineering is to develop biological substitutes that restore lost morphological and functional features of diseased or damaged portions of organs. Recently, computer-aided technology has received considerable attention in the area of tissue engineering, and the advance of additive manufacturing (AM) techniques has significantly improved control over the pore network architecture of tissue engineering scaffolds. To regenerate tissues more efficiently, an ideal scaffold should have appropriate porosity and pore structure. More sophisticated porous configurations, with more ordered pore networks and scaffolding structures that mimic the intricate architecture and complexity of native organs and tissues, are therefore required. This study adopts a macro-structural shape design approach to the production of open porous materials (titanium foams) that utilizes spatial periodicity as a simple way to generate the models. Among the various pore architectures that have been studied, this work modeled the pore structure with triply periodic minimal surfaces (TPMS) for the construction of tissue engineering scaffolds. TPMS are shown to be a versatile source of biomorphic scaffold designs. A set of tissue scaffolds was designed using TPMS-based unit cell libraries. The TPMS-based titanium foams were intended to be 3D-printed with the predicted geometry, microstructure and, consequently, mechanical properties. Through finite element analysis (FEA), the mechanical properties of the designed scaffolds were determined in compression and analyzed in terms of their porosity and assemblies of unit cells. The purpose of this work was to investigate the mechanical performance of TPMS models in order to understand the best compromise between the mechanical and geometrical requirements of the scaffolds. The intention was to predict the structural modulus of open porous materials via the structural design of interconnected three-dimensional lattices, hence optimizing the geometrical properties. With the aid of the FEA results, it is expected that the effective mechanical properties of the TPMS-based scaffold units can be used to design optimized scaffolds for tissue engineering applications. Regardless of the influence of the fabrication method, it is desirable to calculate scaffold properties so that the effect of these properties on tissue regeneration may be better understood.
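As an illustration of TPMS-based pore generation, the sketch below voxelizes the gyroid, one standard member of the TPMS family, via its common level-set approximation sin x cos y + sin y cos z + sin z cos x = t; the resolution and offset t are illustrative choices, not values from this study.

```python
import numpy as np

def gyroid_scaffold(n=64, periods=2, t=0.3):
    """Boolean voxel model of a gyroid sheet scaffold.

    Voxels with |f| < t form the solid wall; the offset t controls the wall
    thickness and hence the porosity of the unit-cell assembly.
    """
    s = np.linspace(0.0, 2.0 * np.pi * periods, n)
    x, y, z = np.meshgrid(s, s, s, indexing="ij")
    f = (np.sin(x) * np.cos(y)
         + np.sin(y) * np.cos(z)
         + np.sin(z) * np.cos(x))
    return np.abs(f) < t

solid = gyroid_scaffold()
porosity = 1.0 - solid.mean()
print(f"porosity ~ {porosity:.2f}")   # fraction of void voxels
```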
Abstract:
Rodents are highly useful models for studying physiological and pathophysiological processes in early development because they are born in a relatively immature state. However, only a few techniques are available to monitor heart rate and respiratory rate non-invasively in neonatal rodents without restraining or hindering access to the animal. Here we describe experimental procedures that allow monitoring of heart rate by electrocardiography (ECG) and of breathing rate with a piezoelectric transducer (PZT) element without hindering access to the animal. These techniques can be easily installed and are used in the present study in unrestrained awake and anesthetized neonatal C57/Bl6 mice and Wistar rats between postnatal days 0 and 7. In line with previous reports on awake rodents, we demonstrate that heart rate in rats and mice increases during the first postnatal week. Respiratory frequency did not differ between the two species, but heart rate was significantly higher in mice than in rats. Furthermore, our data indicate that urethane, an agent that is widely used for anesthesia, induces hypoventilation in neonates while heart rate remains unaffected at a dose of 1 g per kg body weight. Of note, urethane-induced hypoventilation was not detected in rats at postnatal day 0/1. To verify the detected hypoventilation we performed blood gas analyses and detected a respiratory acidosis, reflected by a lower pH and an elevated CO2 tension (pCO2), in both species upon urethane treatment. Furthermore, we found that the metabolism of urethane differs between P0/1 mice and rats and between P0/1 and P6/7 animals in both species. Our findings underline the usefulness of monitoring basic cardio-respiratory parameters in neonates during anesthesia. In addition, our study provides information on developmental changes in heart and breathing frequency in newborn mice and rats and on the effects of urethane in both species during the first postnatal week.
Abstract:
BACKGROUND: The objective of this study was to link expression patterns of B-cell-specific Moloney murine leukemia virus integration site 1 (Bmi-1) and p16 to patient outcome (recurrence and survival) in a cohort of 252 patients with oral and oropharyngeal squamous cell cancer (OSCC). METHODS: Expression levels of Bmi-1 and p16 in samples from 252 patients with OSCC were evaluated immunohistochemically using the tissue microarray method. Staining intensity was determined by calculating an intensity reactivity score (IRS). Staining intensity and the localization of expression within tumor cells (nuclear or cytoplasmic) were correlated with overall, disease-specific, and recurrence-free survival. RESULTS: The majority of cancers were localized in the oropharynx (61.1%). In univariate analysis, patients who had OSCC and strong Bmi-1 expression (IRS >10) had worse outcomes compared with patients who had low and moderate Bmi-1 expression (P = .008; hazard ratio [HR], 1.82; 95% confidence interval [CI], 1.167-2.838); this correlation was also observed for atypical cytoplasmic Bmi-1 expression (P = .001; HR, 2.164; 95% CI, 1.389-3.371) and for negative p16 expression (P < .001; HR, 0.292; 95% CI, 0.178-0.477). The combination of both markers, as anticipated, had an even stronger correlation with overall survival (P < .001; HR, 8.485; 95% CI, 4.237-16.994). Multivariate analysis demonstrated significant results for patients with oropharyngeal cancers, but not for patients with oral cavity tumors: Tumor classification (P = .011; HR, 1.838; 95% CI, 1.146-2.947) and the combined marker expression patterns (P < .001; HR, 6.254; 95% CI, 2.869-13.635) were correlated with overall survival, disease-specific survival (tumor classification: P = .002; HR, 2.807; 95% CI, 1.477-5.334; combined markers: P = .002; HR, 5.386; 95% CI, 1.850-15.679), and the combined markers also were correlated with recurrence-free survival (P = .001; HR, 8.943; 95% CI, 2.562-31.220). CONCLUSIONS: Cytoplasmic Bmi-1 expression, an absence of p16 expression, and especially the combination of those 2 predictive markers were correlated negatively with disease-specific and recurrence-free survival in patients with oropharyngeal cancer. Therefore, the current results indicate that these may be applicable as predictive markers in combination with other factors to select patients for more aggressive treatment and follow-up. Cancer 2011. © 2011 American Cancer Society.
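For orientation, intensity scores of this kind are often computed following the Remmele immunoreactive-score convention (staining intensity 0-3 multiplied by a 0-4 category for the fraction of positive cells, giving a 0-12 scale consistent with the IRS > 10 cutoff); the exact scheme used in this study is not spelled out in the abstract, so the sketch below is an assumption.

```python
# Sketch of an immunoreactive-score computation under the assumed Remmele
# convention; the category cutoffs below are the usual Remmele bins.

def positive_cell_category(fraction):
    """Map the fraction of positively stained tumor cells to a 0-4 category."""
    if fraction <= 0.0:
        return 0
    if fraction < 0.10:
        return 1
    if fraction <= 0.50:
        return 2
    if fraction <= 0.80:
        return 3
    return 4

def irs(intensity, fraction_positive):
    """Immunoreactive score = intensity (0-3) x positive-cell category (0-4)."""
    if not 0 <= intensity <= 3:
        raise ValueError("intensity must be 0-3")
    return intensity * positive_cell_category(fraction_positive)

# Strong staining (3) in 90% of cells gives IRS 12, above the IRS > 10 cutoff.
print(irs(3, 0.9))
```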
Abstract:
A new approach for the determination of free and total valproic acid in small (140 μL) samples of human plasma, based on capillary electrophoresis with contactless conductivity detection, is proposed. A dispersive liquid-liquid microextraction technique was employed to remove biological matrices prior to instrumental analysis. Free valproic acid was determined by isolating it from the protein-bound fraction by ultrafiltration under centrifugation of a 100 μL sample. The filtrate was acidified to turn the valproic acid into its protonated, neutral form and then extracted. The determination of total valproic acid was carried out by acidifying 40 μL of untreated plasma to release the protein-bound valproic acid prior to extraction. A solution consisting of 10 mM histidine, 10 mM 3-(N-morpholino)propanesulfonic acid and 10 μM hexadecyltrimethylammonium bromide at pH 6.5 was used as the background electrolyte for the electrophoretic separation. The method showed good linearity in the range of 0.4-300 μg/mL with a correlation coefficient of 0.9996. The limit of detection was 0.08 μg/mL, and the reproducibility of the peak area was excellent (RSD = 0.7-3.5%, n = 3, for the concentration range from 1 to 150 μg/mL). The results for the free and total valproic acid concentrations in human plasma were found to be comparable to those obtained with a standard immunoassay; the corresponding correlation coefficients were 0.9847 for free and 0.9521 for total valproic acid.
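As a pointer to how such calibration figures of merit are typically derived, the sketch below fits an invented calibration data set and estimates the detection limit with the common 3.3·σ/slope convention; the abstract does not state which LOD convention was actually applied.

```python
import numpy as np

# Invented calibration data (concentration vs. peak area), for illustration only.
conc = np.array([0.4, 1.0, 10.0, 50.0, 150.0, 300.0])     # concentration, ug/mL
area = np.array([0.82, 2.05, 20.4, 101.0, 304.0, 608.0])  # peak area, a.u.

slope, intercept = np.polyfit(conc, area, 1)
residual_sd = np.std(area - (slope * conc + intercept), ddof=2)

r = np.corrcoef(conc, area)[0, 1]   # linearity (correlation coefficient)
lod = 3.3 * residual_sd / slope     # detection limit from residual noise

print(f"r = {r:.4f}, LOD ~ {lod:.2f} ug/mL")
```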
Abstract:
The final goal of mandibular reconstruction following ablative surgery for oral cancer is often considered to be dental implant-supported oral rehabilitation, for which bone grafts should ideally be placed in a position that takes the subsequent prosthetic restoration into account. The aim of this study was to evaluate the efficacy of a standardized treatment strategy for mandibular reconstruction according to the size of the bony defect and the planned subsequent dental prosthetic rehabilitation. Data from 56 patients who had undergone such a systematic mandibular fibula free flap reconstruction were retrospectively analyzed. Early complications were observed in 41.5% of the patients, but only in those who had been irradiated. Late complications were found in 38.2%. The dental implant survival rate was 92%, and dental prosthetic treatment was completed in all classes of bony defects with an overall success rate of 42.9%. The main reasons for failure to complete the dental reconstruction were patients' poor cooperation (30.4%) and tumour recurrence (14.3%), followed by surgery-related factors (10.8%) such as implant failure and an unfavourable intermaxillary relationship. A comparison of our results with the literature revealed no marked differences in complication rates and implant survival rates. However, a systematic concept for the reconstructive treatment, like the method presented here, plays an important role in the successful completion of dental reconstruction. The success rate could be improved further by technical progress in implant and bone graft positioning.
Abstract:
The goal of this study was to propose a general numerical analysis methodology to evaluate the magnetic resonance imaging (MRI) safety of active implants. Numerical models based on the finite element (FE) technique were used to estimate whether the normal operation of an active device is altered during MR imaging. An active implanted pump was chosen to illustrate the method. A set of controlled experiments was proposed and performed to validate the numerical model. The calculated induced voltages in the important electronic components of the device showed a dependence on the MRI field strength. For the MRI radiofrequency fields, significant induced voltages of up to 20 V were calculated for a 0.3 T MRI; for the 1.5 and 3.0 T MRIs, the calculated voltages were insignificant. On the other hand, induced voltages of up to 11 V were calculated in the critical electronic components for the 3.0 T MRI due to the gradient fields. The values obtained in this work reflect the worst-case situation, which is virtually impossible to reach in normal scanning situations. Since the calculated voltages may be removed by appropriate protection circuits, no critical problems affecting the normal operation of the pump were identified. This study showed that the proposed methodology helps to identify possible incompatibilities between active implants and MR imaging, and can be used to aid the design of critical electronic systems to ensure MRI safety.