939 results for Spectral analysis method


Relevance:

90.00%

Abstract:

This work is structured as follows. In Section 1 we discuss the clinical problem of heart failure. In particular, we present the phenomenon known as ventricular mechanical dyssynchrony: its impact on cardiac function, the therapy for its treatment and the methods for its quantification. Specifically, we describe the conductance catheter and its use for the measurement of dyssynchrony. At the end of Section 1, we propose a new set of indexes to quantify dyssynchrony, which are studied and validated thereafter. In Section 2 we describe the studies carried out in this work: we report the experimental protocols and we present and discuss the results obtained. Finally, we report the overall conclusions drawn from this work and outline future work and possible clinical applications of our results. Ancillary studies carried out during this work, mainly to investigate several aspects of cardiac resynchronization therapy (CRT), are mentioned in the Appendix. -------- Ventricular mechanical dyssynchrony plays a regulating role even in normal physiology, but is especially important in pathological conditions such as hypertrophy, ischemia, infarction and heart failure (Chapters 1, 2). Several prospective randomized controlled trials have supported the clinical efficacy and safety of cardiac resynchronization therapy (CRT) in patients with moderate or severe heart failure and ventricular dyssynchrony. CRT resynchronizes ventricular contraction by simultaneous pacing of both the left and right ventricle (biventricular pacing) (Chapter 1). The conductance catheter method has been used extensively to assess global systolic and diastolic ventricular function and, more recently, the ability of this instrument to pick up multiple segmental volume signals has been used to quantify mechanical ventricular dyssynchrony.
Specifically, novel indexes based on volume signals acquired with the conductance catheter were introduced to quantify dyssynchrony (Chapters 3, 4). The present work aimed to describe the characteristics of the conductance-volume signals, to investigate the performance of the indexes of ventricular dyssynchrony described in the literature, and to introduce and validate improved dyssynchrony indexes. Moreover, using the conductance catheter method and the new indexes, the clinical problem of ventricular pacing site optimization was addressed and the measurement protocol to adopt for hemodynamic tests on cardiac pacing was investigated. In accordance with the aims of the work, in addition to the classical time-domain parameters, a new set of indexes was extracted, based on a coherent averaging procedure and on spectral and cross-spectral analysis (Chapter 4). Our analyses were carried out on patients with indications for electrophysiologic study or device implantation (Chapter 5). For the first time, besides patients with heart failure, indexes of mechanical dyssynchrony based on the conductance catheter were extracted and studied in a population of patients with preserved ventricular function, providing information on the normal range of such values. By performing a frequency-domain analysis and by applying an optimized coherent averaging procedure (Chapter 6.a), we were able to describe some characteristics of the conductance-volume signals (Chapter 6.b). We unmasked the presence of considerable beat-to-beat variations in dyssynchrony, which seemed to be more frequent in patients with ventricular dysfunction and to play a role in discriminating between patients. These non-recurrent mechanical ventricular non-uniformities are probably the expression of the substantial beat-to-beat hemodynamic variations often associated with heart failure and due to cardiopulmonary interaction and conduction disturbances.
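The coherent averaging idea can be sketched in a few lines: cycles of a volume signal are aligned on beat onsets and averaged, so the recurrent (periodic) component emerges while the beat-to-beat, non-periodic part remains in the residuals. This is a minimal illustration under invented data, not the thesis implementation; the signal, onsets and beat length are made up for the example.

```python
import numpy as np

def coherent_average(signal, onsets, beat_len):
    """Average signal cycles aligned on beat onsets (illustrative sketch)."""
    beats = np.array([signal[i:i + beat_len] for i in onsets
                      if i + beat_len <= len(signal)])
    template = beats.mean(axis=0)   # recurrent (periodic) component
    residual = beats - template     # beat-to-beat (non-periodic) part
    return template, residual

# toy example: a "volume" signal of 8 identical beats plus noise
rng = np.random.default_rng(0)
beat = np.sin(np.linspace(0, 2 * np.pi, 100))
signal = np.tile(beat, 8) + 0.1 * rng.standard_normal(800)
onsets = range(0, 800, 100)
template, residual = coherent_average(signal, onsets, 100)
```

Averaging over the 8 cycles suppresses the noise by roughly the square root of the number of beats, so `template` approaches the underlying beat shape while `residual` carries the non-recurrent fluctuations.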
We investigated how the coherent averaging procedure may affect or refine the conductance-based indexes; in addition, we proposed and tested a new set of indexes quantifying the non-periodic components of the volume signals. Using the new set of indexes, we studied the acute effects of CRT and of right ventricular pacing in patients with heart failure and in patients with preserved ventricular function. In the overall population we observed a correlation between the hemodynamic changes induced by pacing and the indexes of dyssynchrony, which may have practical implications for hemodynamic-guided device implantation. The optimal ventricular pacing site for patients with conventional indications for pacing remains controversial, and the majority of these patients do not meet current clinical indications for CRT. Thus, we carried out an analysis to compare the impact of several ventricular pacing sites on global and regional ventricular function and dyssynchrony (Chapter 6.c). We observed that right ventricular pacing worsens cardiac function in patients with and without ventricular dysfunction unless the pacing site is optimized. CRT preserves left ventricular function in patients with normal ejection fraction and improves function in patients with poor ejection fraction, even in the absence of a clinical indication for CRT. Moreover, the analysis of the results obtained using the new indexes of regional dyssynchrony suggests that the pacing site may influence overall global ventricular function depending on its relative effects on regional function and synchrony. Another clinical problem investigated in this work is the optimal right ventricular lead location for CRT (Chapter 6.d).
As in the previous analysis, using novel parameters describing local synchrony and efficiency, we tested, and confirmed, the hypothesis that biventricular pacing with alternative right ventricular pacing sites produces acute improvement of ventricular systolic function and improves mechanical synchrony compared to standard right ventricular pacing. Although no specific right ventricular location was shown to be superior during CRT, the right ventricular pacing site producing the optimal acute hemodynamic response varied between patients. Acute hemodynamic effects of cardiac pacing are conventionally evaluated after stabilization periods, whose duration varies considerably across cardiac pacing studies. With an ad hoc protocol (Chapter 6.e) and indexes of mechanical dyssynchrony derived from the conductance catheter, we demonstrated that the use of stabilization periods during the evaluation of cardiac pacing may mask early changes in systolic and diastolic intra-ventricular dyssynchrony. In fact, at the onset of ventricular pacing, the main changes in dyssynchrony and ventricular performance occur within a 10 s time span, initiated by the changes in ventricular mechanical dyssynchrony induced by aberrant conduction and followed by a partial or even complete recovery. It had already been demonstrated in normal animals that ventricular mechanical dyssynchrony may act as a physiologic modulator of cardiac performance together with heart rate, contractile state, preload and afterload. The present observation, which shows the compensatory mechanism of mechanical dyssynchrony, suggests that ventricular dyssynchrony may be regarded as an intrinsic cardiac property, with baseline dyssynchrony at an increased level in heart failure patients.
To make available an independent system for cardiac output estimation, confirming the results obtained with the conductance volume method, we developed and validated a novel technique applying the Modelflow method (a method that derives an aortic flow waveform from arterial pressure by simulation of a non-linear three-element aortic input impedance model; Wesseling et al. 1993) to the left ventricular pressure signal instead of the arterial pressure used in the classical approach (Chapter 7). The results confirmed that in patients without valve abnormalities undergoing conductance catheter evaluations, continuous monitoring of cardiac output using the intra-ventricular pressure signal is reliable. Thus, cardiac output can be monitored quantitatively and continuously with a simple and low-cost method. During this work, additional studies were carried out to investigate several areas of uncertainty in CRT. The results of these studies are briefly presented in the Appendix: the long-term survival of patients treated with CRT in clinical practice, the effects of CRT in patients with mild symptoms of heart failure and in very old patients, limited thoracotomy as a second-choice alternative to transvenous implantation for CRT delivery, the evolution and prognostic significance of the diastolic filling pattern in CRT, the selection of candidates for CRT with echocardiographic criteria and the prediction of response to the therapy.
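The idea behind Modelflow can be illustrated with a *linear* three-element windkessel; the actual method simulates a non-linear, pressure-dependent aortic impedance, so the structure, parameter values and pressure waveform below are purely illustrative, not the published algorithm.

```python
import numpy as np

def windkessel_flow(pressure, dt, Zc=0.05, Cw=1.5, Rp=1.0):
    """Derive a flow waveform from a pressure signal with a linear
    three-element windkessel (illustrative stand-in for Modelflow).
    Zc: characteristic impedance, Cw: compliance, Rp: peripheral
    resistance, all in arbitrary units."""
    Pw = pressure[0]                   # windkessel (arterial) pressure state
    flow = np.empty_like(pressure)
    for i, P in enumerate(pressure):
        Q = max((P - Pw) / Zc, 0.0)    # flow enters only while P exceeds Pw
        flow[i] = Q
        Pw += dt * (Q - Pw / Rp) / Cw  # compliance charges, periphery drains
    return flow

# toy pressure waveform: one systolic pulse on an 80-unit baseline
t = np.arange(0.0, 1.0, 0.001)
pressure = 80 + 40 * np.maximum(np.sin(2 * np.pi * t), 0.0)
flow = windkessel_flow(pressure, dt=0.001)
stroke_volume = flow.sum() * 0.001     # integral of flow over the beat
```

Integrating the derived flow over one beat gives a stroke-volume estimate, which is how a beat-to-beat cardiac output monitor would use such a model.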

Relevance:

90.00%

Abstract:

Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements of these waves to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for the determination of the 3D structure of the Earth's deep interior. Tomographic models obtained at global and regional scales are an essential tool for determining the geodynamical state of the Earth, showing clear correlations with other geophysical and geological characteristics. Global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines. This sum is often referred to as a Fourier expansion. The main disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution to overcome this shortcoming of Fourier analysis. The fundamental idea behind this analysis is to study a signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes containing multi-scale features, discontinuities and sharp spikes.
Wavelets are essentially used in two ways in studies of geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel to extract information about the process. Both types of application are studied in this work. First, we use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic model; we then apply it to real data, obtaining surface wave phase velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the continuous wavelet transform in spectral analysis, starting again with synthetic tests to evaluate its sensitivity and capability and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
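The time resolution that a Fourier expansion lacks can be made concrete with a naive Morlet continuous wavelet transform, written from scratch below (any wavelet library would do; the signal and scale grid are invented for the example). A global Fourier spectrum of this signal shows both frequencies but not *when* each occurs; the CWT localizes the change.

```python
import numpy as np

def morlet_cwt(signal, scales, dt=1.0, w0=6.0):
    """Naive continuous wavelet transform with a complex Morlet wavelet."""
    n = len(signal)
    t = (np.arange(n) - n // 2) * dt               # wavelet support, centered
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        wav = np.exp(1j * w0 * t / s - 0.5 * (t / s) ** 2) / np.sqrt(s)
        out[i] = np.convolve(signal, np.conj(wav)[::-1], mode='same')
    return out

# signal whose frequency jumps from 2 Hz to 8 Hz halfway through
dt = 0.01
t = np.arange(0, 10, dt)
sig = np.where(t < 5, np.sin(2 * np.pi * 2 * t), np.sin(2 * np.pi * 8 * t))
scales = np.linspace(0.05, 1.0, 40)    # dominant scale ~ w0 / (2*pi*f)
power = np.abs(morlet_cwt(sig, scales, dt)) ** 2
scale_early = scales[np.argmax(power[:, 250])]   # dominant scale at t = 2.5 s
scale_late = scales[np.argmax(power[:, 750])]    # dominant scale at t = 7.5 s
```

The dominant scale in the first half of the record is larger (lower frequency) than in the second half, which is exactly the scale-by-scale, localized view of the signal that the text describes.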

Relevance:

90.00%

Abstract:

The goal of this thesis is to analyze the possibility of using early-type galaxies (ETGs) to place evolutionary and cosmological constraints, by both disentangling whether mass or environment is the main driver of ETG evolution and developing a technique to constrain H(z) and the cosmological parameters by studying the ETG age-redshift relation. The (U-V) rest-frame color distribution is studied as a function of mass and environment for two samples of ETGs up to z=1, extracted from the zCOSMOS survey with a new selection criterion. The color distributions and the slopes of the color-mass and color-environment relations are studied, finding a strong dependence on mass and a minor dependence on environment. The spectral analysis performed on the D4000 and Hδ features validates the previous analysis. The main driver of galaxy evolution is found to be galaxy mass, with environment playing a subdominant but non-negligible role. The age distribution of ETGs is also analyzed as a function of mass, providing strong evidence supporting a downsizing scenario. The possibility of setting cosmological constraints by studying the age-redshift relation is examined, discussing the relevant degeneracies and model dependencies. A new approach is developed, aiming to minimize the impact of systematics on the "cosmic chronometer" method. Analyzing theoretical models, it is demonstrated that the D4000 is a feature correlated almost linearly with age at fixed metallicity, depending only weakly on the models assumed or on the star formation history chosen. The analysis of an SDSS sample of ETGs shows that it is possible to use the differential D4000 evolution of the galaxies to set constraints on cosmological parameters in an almost model-independent way. Values of the Hubble constant and of the dark-energy equation-of-state parameter are obtained that are not only fully compatible with the latest results but also have a comparable error budget.
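The cosmic-chronometer method reduces to the relation H(z) = -1/(1+z) * dz/dt: the differential ages of passively evolving galaxies at neighbouring redshifts give dz/dt directly, with no integration over a cosmological model. A toy numerical sketch follows; the redshifts and ages are invented round numbers, not the thesis data.

```python
def hubble_from_ages(z1, t1, z2, t2):
    """Cosmic-chronometer estimate H(z) = -dz/dt / (1+z), with dz/dt
    approximated by a finite difference of galaxy ages (t in Gyr)."""
    z_mid = 0.5 * (z1 + z2)
    dz_dt = (z2 - z1) / (t2 - t1)
    H_per_gyr = -dz_dt / (1.0 + z_mid)
    return H_per_gyr * 977.8     # convert 1/Gyr to km/s/Mpc

# hypothetical passive galaxies: 11.0 Gyr old at z=0.1, 9.85 Gyr old at z=0.2
H = hubble_from_ages(0.1, 11.0, 0.2, 9.85)   # ~74 km/s/Mpc
```

Because only the age *difference* enters, absolute age calibration errors largely cancel; in the thesis the differential D4000 evolution plays the role of the age difference.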

Relevance:

90.00%

Abstract:

A study of maar-diatreme volcanoes has been performed by inversion of gravity and magnetic data. The geophysical inverse problem has been solved by means of the damped nonlinear least-squares method. To ensure stability and convergence of the solution of the inverse problem, a mathematical tool consisting of data weighting and model scaling has been worked out. Theoretical gravity and magnetic modeling of maar-diatreme volcanoes has been conducted in order to obtain information that can be used for a simple, rough qualitative and/or quantitative interpretation. This information also serves as a priori information to design models for the inversion and/or to assist the interpretation of inversion results. The results of theoretical modeling have been used to roughly estimate the heights and the dip angles of the walls of eight Eifel maar-diatremes, each taken as a whole. Inverse modeling has been conducted for the Schönfeld Maar (magnetics) and the Hausten-Morswiesen Maar (gravity and magnetics). The geometrical parameters of these maars, as well as the density and magnetic properties of the rocks filling them, have been estimated. For a reliable interpretation of the inversion results, besides the knowledge from theoretical modeling, other tools such as field transformations and spectral analysis were used for complementary information. Geologic models, based on the synthesis of the respective interpretation results, are presented for the two maars mentioned above. The results gave more insight into the genesis, physics and posteruptive development of maar-diatreme volcanoes. A classification of maar-diatreme volcanoes into three main types has been elaborated. Relatively high magnetic anomalies are indicative of scoria cones embedded within maar-diatremes, if they are not caused by a strong remanent component of the magnetization.
Smaller (weaker) secondary gravity and magnetic anomalies on the background of the main anomaly of a maar-diatreme, especially in the boundary areas, are indicative of subsidence processes, which probably occurred in the late sedimentation phase of the posteruptive development. Contrary to postulates referring to kimberlite pipes, no general systematic relation exists between diameter and height, nor between the geophysical anomaly and the dimensions of maar-diatreme volcanoes. Although both maar-diatreme volcanoes and kimberlite pipes are products of phreatomagmatism, they probably formed in different thermodynamic and hydrogeological environments. In the case of kimberlite pipes, large amounts of magma and groundwater, certainly supplied by deep and large reservoirs, interacted under high pressure and temperature conditions. This led to a long-lasting phreatomagmatic process and hence to the formation of large structures. In maar-diatreme and tuff-ring-diatreme volcanoes, the phreatomagmatic process takes place through an interaction between magma from small and shallow magma chambers (probably segregated magmas) and small amounts of near-surface groundwater under low pressure and temperature conditions. This leads to shorter eruptions and consequently to structures of smaller size in comparison with kimberlite pipes. Nevertheless, the results show that the diameter-to-height ratio for 50% of the studied maar-diatremes is around 1, while the dip angle of the diatreme walls is similar to that of kimberlite pipes and lies between 70 and 85°. Note that these numerical characteristics, especially the dip angle, hold for those maars whose diatremes (as estimated by modeling) have the shape of a truncated cone. This indicates that the diatreme cannot be completely resolved by inversion.
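The damped nonlinear least-squares scheme with data weighting and model scaling mentioned above can be sketched in a few lines. The forward model here (vertical gravity of a buried point mass, arbitrary units) and every parameter value are invented for illustration; the thesis implementation is of course more elaborate.

```python
import numpy as np

def damped_lsq(f, m0, x, d, W, scale, lam=1e-3, iters=50):
    """Damped nonlinear least squares: at each step solve
    (J^T W J + lam*I) dm = J^T W r in *scaled* model coordinates,
    where W is a diagonal data-weighting matrix."""
    m = np.array(m0, float)
    for _ in range(iters):
        r = d - f(x, m)                       # data residual
        J = np.empty((len(x), len(m)))
        for j in range(len(m)):               # finite-difference Jacobian,
            dm = np.zeros_like(m)             # column by column, scaled
            dm[j] = 1e-6 * scale[j]
            J[:, j] = (f(x, m + dm) - f(x, m)) / dm[j] * scale[j]
        A = J.T @ W @ J + lam * np.eye(len(m))
        step = np.linalg.solve(A, J.T @ W @ r)
        m += step * scale                     # undo the model scaling
    return m

# toy forward model: vertical gravity of a point mass M buried at depth z
def g_sphere(x, m):
    M, z = m
    return M * z / (x ** 2 + z ** 2) ** 1.5

x = np.linspace(-10, 10, 41)
d = g_sphere(x, (50.0, 3.0))                  # noise-free synthetic data
m_est = damped_lsq(g_sphere, (40.0, 4.0), x, d, np.eye(41), scale=(10.0, 1.0))
```

The damping term `lam` stabilizes the normal equations, while `scale` keeps parameters of very different magnitude (here mass versus depth) numerically comparable, which is the role data weighting and model scaling play in the scheme described above.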

Relevance:

90.00%

Abstract:

The primary objective of this thesis is to obtain a better understanding of the 3D velocity structure of the lithosphere in central Italy. To this end, I adopted the spectral-element method (SEM) to perform accurate numerical simulations of the complex wavefields generated by the 2009 Mw 6.3 L'Aquila event and by its foreshocks and aftershocks, together with some additional events within our target region. For the mainshock, the source was represented by a finite fault, and different models for central Italy, both 1D and 3D, were tested. Surface topography, attenuation and the Moho discontinuity were also accounted for. Three-component synthetic waveforms were compared to the corresponding recorded data. The results of these analyses show that 3D models, including all the known structural heterogeneities in the region, are essential to accurately reproduce waveform propagation. They make it possible to capture features of the seismograms mainly related to topography or to low-wavespeed areas and, combined with a finite fault model, result in a favorable match between data and synthetics for frequencies up to ~0.5 Hz. We also obtained peak ground velocity maps, which provide valuable information for seismic hazard assessment. The remaining differences between data and synthetics led us to combine SEM with an adjoint method to iteratively improve the available 3D structure model for central Italy. A total of 63 events and 52 stations in the region were considered. We performed five iterations of the tomographic inversion, calculating the misfit-function gradient (necessary for the model update) from adjoint sensitivity kernels, constructed using only two simulations per event. Our latest updated model features a reduced traveltime misfit function and improved agreement between data and synthetics, although further iterations, as well as refined source solutions, are necessary to obtain a new reference 3D model for central Italy tomography.
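The role the adjoint kernels play can be seen in a drastically simplified, linear stand-in: with straight rays, traveltimes are T = L s (L holding path lengths per cell, s the slowness), and the gradient of the traveltime misfit with respect to the model is L transposed applied to the data residual, the back-projection step that SEM adjoint simulations perform in the full nonlinear problem. Every quantity below (geometry, sizes, values) is invented.

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.uniform(0, 1, (52, 16))     # 52 ray paths, 16 model cells (toy sizes)
s_true = 0.25 + 0.05 * rng.standard_normal(16)
T_obs = L @ s_true                  # "observed" traveltimes, noise-free

s = np.full(16, 0.25)               # starting model
step = 1.0 / np.linalg.norm(L.T @ L, 2)   # stable step from the spectral norm
for _ in range(5000):               # iterative model update
    grad = L.T @ (L @ s - T_obs)    # misfit gradient via back-projection
    s -= step * grad                # steepest-descent step
```

In the full problem each gradient evaluation costs one forward and one adjoint wavefield simulation per event, which is why the abstract stresses that the kernels need only two simulations per event.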

Relevance:

90.00%

Abstract:

We investigated protein/solvent interactions and their relevance to protein function at the molecular level, through the use of amorphous matrices at room temperature. As a model protein, we used the bacterial photosynthetic reaction center (RC) of Rhodobacter sphaeroides, a pigment-protein complex which catalyzes the light-induced charge separation initiating the conversion of solar energy into chemical energy. The thermal fluctuations of the RC and its dielectric conformational relaxation following photoexcitation were probed by analyzing the recombination kinetics of the primary charge-separated (P+QA-) state, using time-resolved optical and EPR spectroscopies. We have shown that the RC dynamics coupled to this electron transfer process can be progressively inhibited at room temperature by decreasing the water content of RC films or of RC-trehalose glassy matrices. Extensive dehydration of the amorphous matrices inhibits RC relaxation and interconversion among conformational substates to an extent comparable to that attained at cryogenic temperatures in water-glycerol samples. An isopiestic method was developed to finely tune the hydration level of the system. We combined FTIR spectral analysis of the combination and association bands of residual water with differential light-minus-dark FTIR and high-field EPR spectroscopy to gain information on the thermodynamics of water sorption and on the structure/dynamics of the residual water molecules, of protein residues and of RC cofactors.
The following main conclusions were reached: (i) the RC dynamics is slaved to that of the hydration shell; (ii) in dehydrated trehalose glasses, inhibition of protein dynamics is most likely mediated by residual water molecules simultaneously bound to protein residues and sugar molecules at the protein-matrix interface; (iii) the local environment of the cofactors is not involved in the conformational dynamics which stabilizes the P+QA- state; (iv) this conformational relaxation appears to be rather delocalized over several amino acid residues as well as water molecules weakly hydrogen-bonded to the RC.

Relevance:

90.00%

Abstract:

This thesis work focuses on the use of selected core-level x-ray spectroscopies to study semiconductor materials of great technological interest and on the development of a new implementation of appearance potential spectroscopy. Core-level spectroscopies can be exploited to study these materials with a local approach, since they are sensitive to the electronic structure localized on a chemical species present in the examined sample. This approach provides important micro-structural information that is difficult to obtain with techniques sensitive to the average properties of materials. In this thesis work we present a novel approach to the study of semiconductors with core-level spectroscopies, based on an original analysis procedure that leads to an insightful understanding of the correlation between the local micro-structure and the observed spectral features. In particular, we studied the micro-structure of hydrogen-induced defects in nitride semiconductors, since the analysed materials show substantial variations of optical and electronic properties as a consequence of H incorporation. Finally, we present a novel implementation of soft x-ray appearance potential spectroscopy, a core-level spectroscopy that uses electrons as a source of excitation and has the great advantage of being an in-house technique. The set-up illustrated was designed to reach a high signal-to-noise ratio for the acquisition of good-quality spectra that can then be analyzed in the framework of real-space full multiple scattering theory. This technique had never been coupled with this analysis approach, and our work therefore unites a novel implementation with an original data analysis method, enlarging the field of application of the technique.

Relevance:

90.00%

Abstract:

Vegetation cycles are of general interest for many applications, be it for harvest prediction, global monitoring of climate change or as input to atmospheric models.

Common vegetation indices use the fact that the difference between red and near-infrared reflection is higher for vegetation than for any other material on the Earth's surface. This gives a very high degree of confidence for vegetation detection.

The spectrally resolving data from the GOME and SCIAMACHY satellite instruments provide the chance to analyse finer spectral features throughout the red and near-infrared spectrum using Differential Optical Absorption Spectroscopy (DOAS). Although originally developed to retrieve information on atmospheric trace gases, we use it to gain information on vegetation. Another advantage is that this method automatically corrects for changes in the atmosphere, which renders the vegetation information easily comparable over long time spans.

The first results using previously available reference spectra were encouraging, but also indicated substantial limitations of the available reflectance spectra of vegetation. This was the motivation to create new and more suitable vegetation reference spectra within this thesis. The set of reference spectra obtained is unique in its extent and also with respect to its spectral resolution and the quality of the spectral calibration. For the first time, this allowed a comprehensive investigation of the high-frequency spectral structures of vegetation reflectance and of their dependence on the viewing geometry.

The results indicate that high-frequency reflectance from vegetation is very complex and highly variable. While this is an interesting finding in itself, it also complicates the application of the obtained reference spectra to the spectral analysis of satellite observations.

The new set of vegetation reference spectra created in this thesis opens new perspectives for research.
Besides refined satellite analyses, these spectra might also be used for applications on other platforms such as aircraft. First promising studies have been presented in this thesis, but the full potential for the remote sensing of vegetation from satellite (or aircraft) could be further exploited in future studies.
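The red/NIR contrast that underlies the common vegetation indices can be made concrete with the Normalized Difference Vegetation Index (NDVI); the reflectance values below are typical textbook numbers, not data from this thesis.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: vegetation reflects far more
    near-infrared than red light, so NDVI approaches +1 over dense canopy."""
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red)

veg = ndvi(0.05, 0.50)    # dense vegetation: ~0.82
soil = ndvi(0.25, 0.30)   # bare soil: ~0.09
```

Because the index is a normalized ratio of two broad bands, it is robust but blind to the fine, high-frequency spectral structure that the DOAS approach in this thesis exploits.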

Relevance:

90.00%

Abstract:

The purpose of my PhD thesis has been to address the problem of retrieving a three-dimensional attenuation model in volcanic areas. To this purpose, I first elaborated a robust strategy for the analysis of seismic data, performing several synthetic tests to assess the applicability of the spectral ratio method to our purposes. The results of the tests allowed us to conclude that: 1) the spectral ratio method gives reliable differential attenuation (dt*) measurements in smooth velocity models; 2) a short signal time window has to be chosen to perform the spectral analysis; 3) the frequency range over which spectral ratios are computed greatly affects the dt* measurements. Furthermore, a refined approach for the application of the spectral ratio method has been developed and tested; through this procedure, the effects of heterogeneities of the propagation medium on the seismic signals may be removed. The tested data analysis technique was applied to the real active-seismic SERAPIS database, providing a dataset of dt* measurements which was used to obtain a three-dimensional attenuation model of the shallowest part of the Campi Flegrei caldera. Then, a linearized, iterative, damped attenuation tomography technique was tested and applied to the selected dataset. The tomography, with a resolution of 0.5 km in the horizontal directions and 0.25 km in the vertical direction, allowed us to image important features in the offshore part of the Campi Flegrei caldera. High-Qp bodies are immersed in a high-attenuation body (Qp=30). The latter is well correlated with low Vp and high Vp/Vs values and is interpreted as a layer of saturated marine and volcanic sediments. The high-Qp anomalies, instead, are interpreted as the effects either of cooled lava bodies or of a CO2 reservoir. A pseudo-circular high-Qp anomaly was detected and interpreted as the buried rim of the NYT caldera.
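The spectral ratio method itself fits in a few lines: for two recordings of the same source, ln(A_a/A_b)(f) = c + pi*f*dt*, so dt* = t*_b - t*_a is recovered from the slope of a straight-line fit to the log spectral ratio, and the frequency band chosen for the fit directly controls the estimate (point 3 above). The synthetic spectra below are invented for the example.

```python
import numpy as np

def dt_star(spec_a, spec_b, freqs, band):
    """Differential attenuation dt* = t*_b - t*_a from the spectral ratio:
    ln(A_a/A_b) = c + pi*f*dt*, fitted over the chosen frequency band."""
    sel = (freqs >= band[0]) & (freqs <= band[1])
    slope = np.polyfit(freqs[sel], np.log(spec_a[sel] / spec_b[sel]), 1)[0]
    return slope / np.pi

# synthetic amplitude spectra: path b is more attenuated by dt* = 0.02 s
f = np.linspace(1.0, 30.0, 100)
A = np.exp(-np.pi * f * 0.010)   # t*_a = 0.010 s
B = np.exp(-np.pi * f * 0.030)   # t*_b = 0.030 s
est = dt_star(A, B, f, band=(5.0, 25.0))   # recovers ~0.02 s
```

Taking the ratio of the two spectra cancels the common source spectrum and instrument response, which is why the method is attractive for active-seismic data; the band limits are the knob whose influence the synthetic tests quantified.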

Relevance:

90.00%

Abstract:

Three amphotericin B formulations for oral administration were developed, characterized and compared: nanocapsules (AmB-HST), a nanoemulsion and multilamellar vesicles (MLV). The novel homogeneous nanocapsule formulation of the hydrophobic polyene antimycotic amphotericin B was produced from the pure substance, lecithin and gelatin by the HST process, in analogy to a process established for simvastatin and other drugs. Photometric investigations showed that the end product is built up of monomers. Microscopy revealed the aggregates in the starting material, before coating with lecithin and gelatin, as individual spherical drug particles. Structural investigations with dynamic light scattering (DLS) showed a narrow size distribution of the encapsulated particles of about 1 µm. The structure of the shell of the HST particles was elucidated for the first time by neutron scattering using the deuterium-based solvent contrast method. By partially contrast-masking the particle core in the neutron scattering experiments, the lecithin-gelatin shell could be resolved as a thin layer, 5.64 ± 0.18 nm thick, similar to a biological lipid membrane but slightly larger in comparison. This result opens routes for optimizing the formulation of pharmaceutical nanoparticles, e.g. through surface modifications. Further investigations by small-angle neutron scattering with D-contrast variation indicate that the components of the nanocapsules do not share the same center of mass but are built asymmetrically, with the more strongly scattering domains lying further out. Compared with liposomes, the particles are denser.
In vitro release studies demonstrate the solubilizing capacity of the HST system: release of the drug from the formulation was higher than that of the pure substance at all measured time points. The nanoemulsion formulation of amphotericin B was successfully developed with one oil and surfactant system but with different co-solvents. According to solubility determinations in various excipients, the drug amphotericin B proved to be neither lipophilic nor hydrophilic. The ternary diagrams drawn up to identify the excipient concentrations required for emulsion formation showed that high oil and surfactant contents did not lead to emulsion formation; accordingly, the highest oil content was 10%. The droplet size grew with increasing surfactant concentration, while the co-solvent amount of the propylene-glycol-containing nanoemulsion was indirectly reduced. For the Transcutol®P-containing nanoemulsion, by contrast, the opposite was observed, namely a decrease in droplet size with increasing surfactant concentrations. Incorporation of the drug affected not the viscosity of the formulation but the droplet size: drug loading led to larger droplets. The drug content increased with increasing propylene glycol concentration and decreased with increasing Transcutol®P concentration. UV/VIS spectroscopic analyses indicate that amphotericin B is present as a monomer in both formulations. However, the formulations proved toxic to Caco-2 cells and human red blood cells. Since the control samples showed higher toxicity than the drug-loaded formulations, the toxicity is attributable not only to amphotericin but also to the excipients.
In both formulations, the amount of solubilized drug is insufficient with respect to the amount of excipient used, according to WHO criteria. Based on these investigations, the emulsion formulations do not appear suitable for oral administration. Nevertheless, animal studies are needed to determine the effect in animals and the systemically available amount of drug; this will permit sound conclusions regarding the formulation and statements about possible perspectives. The preconcentrates are nonetheless very stable and can be stored at room temperature. The multilamellar vesicular formulations of amphotericin B with unsaturated and saturated neutral phospholipids and cholesterol were successfully developed and contained not only vesicles but also additional structures at increasing cholesterol concentrations. Particle size analysis detected microparticles in the formulations with saturated lipids, depending on the alkyl chain length. With the unsaturated lipid (DOPC), by contrast, nanoparticles with sufficient encapsulation and particle size distribution could be formed. The results of the thermal and FTIR spectroscopic analyses, which allowed the influence of the drug to be excluded, provide evidence for the possible incorporation of the drug into lipid- and/or cholesterol-rich membranes, as already described in the literature. Using a linear sucrose density gradient, the formulation could be separated into vesicles and drug-lipid complexes following a bimodal distribution, with the drug more strongly associated with the complexes than with the vesicles. In the small-angle neutron scattering experiments, the contrast-variation method was applied successfully, showing that cholesterol forms a complex with amphotericin B in situ.
This is suggested, among other things, by the observed difference in the equivalent scattering length density between the drug-lipid and the drug-lipid-cholesterol small unilamellar vesicles. The occurrence of Bragg peaks in the scattering profile points to domains, and systematic investigations showed that the number of domains increases with rising cholesterol content but decreases again above a certain threshold. The domains occur mainly near the outer surface of the model membrane and confirm that the drug is inserted vertically into the cholesterol-rich membranes. The formulation was non-toxic to both Caco-2 cells and human red blood cells and, in view of its uptake into Caco-2 cells, proved promising for oral application. The formulation thus appears promising and could be further processed into tablets; a film coating would protect the drug against the acidic environment of the stomach. Animal studies are necessary to determine the systemic availability of the formulation. The multilamellar formulations developed here, including the drug-cholesterol complexes, therefore offer good prospects for possible medical application.

Relevância: 90.00%

Resumo:

The purpose of this clinical trial was to determine the active tactile sensibility of natural teeth and to develop a statistical analysis method for fitting a psychometric function through the observed data points. In 68 fully dentate subjects (34 males, 34 females, mean age 45.9 ± 16.1 years), one pair of healthy natural teeth per subject was tested: n = 24 anterior teeth and n = 44 posterior teeth. The computer-assisted, randomized measurement was done by having the subjects bite on thin copper foils of different thickness (5-200 µm) inserted between the teeth. The threshold of active tactile sensibility was defined as the 50% value of correct answers. Additionally, the gradient of the sensibility curve and the support area (90-10% value), which describe the shape of the sensibility curve, were calculated. Both symmetric and asymmetric functions were used to model the sensibility curve. The mean sensibility threshold was 14.2 ± 12.1 µm. The older the subject, the higher the tactile threshold (r = 0.42, p = 0.0006). The support area was 41.8 ± 43.3 µm. The higher the 50% threshold, the smaller the gradient of the curve and the larger the support area. The curves showing the active tactile sensibility of natural teeth demonstrate a tendency towards asymmetry, so that the active tactile sensibility of natural teeth can mathematically best be described by the asymmetric Weibull function.
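The fitting procedure described above can be sketched in a few lines: a Weibull psychometric function is fitted to proportion-correct data by least squares, and the 50% threshold and the 90-10% support area are read off the fitted curve. The data values and starting parameters below are illustrative assumptions, not the trial's measurements, and a guessing-rate floor is omitted to keep the sketch minimal.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_psychometric(x, scale, shape):
    """Asymmetric Weibull psychometric function: P(correct answer)
    when biting on a foil of thickness x (µm)."""
    return 1.0 - np.exp(-(x / scale) ** shape)

def thickness_at(p, scale, shape):
    """Invert the fitted curve: foil thickness at which the
    proportion of correct answers equals p."""
    return scale * (-np.log(1.0 - p)) ** (1.0 / shape)

# Hypothetical observed data for one tooth pair (NOT the trial's data):
# foil thickness in µm vs. proportion of correct answers.
thickness = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 120.0, 160.0])
p_correct = np.array([0.10, 0.30, 0.55, 0.80, 0.95, 0.98, 0.99])

(scale, shape), _ = curve_fit(weibull_psychometric, thickness, p_correct,
                              p0=[20.0, 1.5])

t50 = thickness_at(0.50, scale, shape)                       # 50% threshold
support = thickness_at(0.90, scale, shape) - thickness_at(0.10, scale, shape)
```

Because the Weibull shape parameter lets the curve rise faster on one side of the threshold than the other, it captures the asymmetry the trial observed, which a symmetric logistic cannot.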

Relevância: 90.00%

Resumo:

Craniosynostosis is a premature fusion of the sutures in an infant skull that restricts skull and brain growth. In recent decades, fundamentally diverse surgical treatment methods have multiplied rapidly. To date, surgical outcome has been assessed using global variables such as cephalic index, head circumference, and intracranial volume, but these variables fail to describe the local deformations and morphological changes that may play a role in the neurologic disorders observed in these patients. This report describes a rigid image registration-based method for evaluating outcomes of craniosynostosis surgery, quantifying head growth locally, and measuring intracranial volume change indirectly. The developed semiautomatic analysis method was applied to computed tomography data sets of a 5-month-old boy with sagittal craniosynostosis who underwent expansion of the posterior skull with cranioplasty. Local changes between pre- and postoperative images were quantified by mapping the minimum distance of individual points from the preoperative to the postoperative surface meshes, and indirect intracranial volume changes were estimated. The proposed methodology can provide the surgeon with a tool for the quantitative evaluation of surgical procedures and for the detection of abnormalities of the infant skull and its development.
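The minimum-distance mapping step can be approximated with a KD-tree nearest-neighbor query between the two point sets (a vertex-level stand-in for true point-to-mesh distance; the actual method also rigidly registers the CT volumes first). The toy spheres below are illustrative assumptions, not patient data: a uniformly expanded sphere stands in for the postoperative surface, so the expected local growth is 0.1 everywhere.

```python
import numpy as np
from scipy.spatial import cKDTree

def minimum_distances(pre_points, post_points):
    """For every point sampled on the preoperative skull surface, return the
    distance to the nearest postoperative surface point (a vertex-level
    approximation of the point-to-mesh minimum distance)."""
    tree = cKDTree(post_points)
    dist, _ = tree.query(pre_points)
    return dist

# Toy stand-in for registered surface meshes: a unit sphere as the
# preoperative skull and a sphere of radius 1.1 as the postoperative skull.
rng = np.random.default_rng(0)
pre = rng.normal(size=(1000, 3))
pre /= np.linalg.norm(pre, axis=1, keepdims=True)
post = rng.normal(size=(2000, 3))
post /= np.linalg.norm(post, axis=1, keepdims=True)
post *= 1.1

growth_map = minimum_distances(pre, post)   # one value per preoperative point
```

Color-coding each preoperative point by its `growth_map` value yields exactly the kind of local growth map the abstract describes, and summing signed distances over the surface gives the indirect volume-change estimate.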

Relevância: 90.00%

Resumo:

BACKGROUND: Assessment of lung volume (functional residual capacity, FRC) and ventilation inhomogeneities with an ultrasonic flowmeter and multiple breath washout (MBW) has provided important information about lung disease in infants. Sub-optimal adjustment of the mainstream molar mass (MM) signal for temperature and external deadspace may lead to analysis errors in infants with critically small tidal volume changes during breathing. METHODS: We measured expiratory temperature in human infants at 5 weeks of age and examined the influence of temperature and deadspace changes on FRC results with computer simulation modeling. A new analysis method with optimized temperature and deadspace settings was then derived, tested for robustness to analysis errors, and compared with the previously used analysis methods. RESULTS: Temperature in the facemask was higher, and variations of deadspace volumes larger, than previously assumed. Both had a considerable impact on FRC and lung clearance index (LCI) results, which showed high variability when obtained with the previously used analysis model. Using the measured temperature, we optimized the model parameters and tested a newly derived analysis method, which proved more robust to variations in deadspace. Comparison between the two analysis methods showed systematic differences and a wide scatter. CONCLUSION: Corrected deadspace and more realistic temperature assumptions improved the stability of the analysis of MM measurements obtained by ultrasonic flowmeter in infants. This new analysis method, using the only currently available commercial ultrasonic flowmeter for infants, may help to stabilize the analysis and further facilitate assessment of lung volume and ventilation inhomogeneities in infants.
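Why a deadspace error matters can be seen from the underlying tracer-dilution arithmetic. The sketch below is a back-of-the-envelope FRC calculation, not the device's actual MM-signal algorithm, and the infant numbers are hypothetical; it shows that an error in the assumed apparatus deadspace propagates one-for-one into the FRC estimate.

```python
def frc_from_washout(cleared_tracer_ml, c_start, c_end, deadspace_ml=0.0):
    """Tracer-dilution estimate of FRC: the tracer volume washed out of the
    lung divided by the drop in tracer concentration, minus the apparatus
    deadspace. A deadspace error shifts the FRC estimate one-for-one."""
    return cleared_tracer_ml / (c_start - c_end) - deadspace_ml

# Hypothetical infant values (not measured data): 76 ml of tracer gas
# cleared while the end-tidal tracer concentration fell from 0.79 to 0.02.
frc_naive = frc_from_washout(76.0, 0.79, 0.02)                      # ignores deadspace
frc_corrected = frc_from_washout(76.0, 0.79, 0.02, deadspace_ml=5.0)
```

In an infant, 5 ml of mis-assigned deadspace is a non-trivial fraction of an FRC near 100 ml, which is why the optimized deadspace and temperature settings stabilize the analysis.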

Relevância: 90.00%

Resumo:

Light-frame wood buildings are widely built in the United States (U.S.), and natural hazards cause huge losses to light-frame wood construction. This study proposes methodologies and a framework to evaluate the performance and risk of light-frame wood construction. Performance-based engineering (PBE) aims to ensure that a building achieves the desired performance objectives when subjected to hazard loads. In this study, the collapse risk of a typical one-story light-frame wood building is determined using the Incremental Dynamic Analysis method. The collapse risks of buildings at four sites in the Eastern, Western, and Central regions of the U.S. are evaluated. Various sources of uncertainty are considered in the collapse risk assessment so that their influence on the collapse risk of light-frame wood construction can be evaluated. The collapse risks of the same building subjected to maximum considered earthquakes in different seismic zones are found to be non-uniform. In certain areas of the U.S., snow accumulation is significant; it causes huge economic losses and threatens life safety, yet few studies have investigated the snow hazard in combination with the seismic hazard. A Filtered Poisson Process (FPP) model is developed in this study, overcoming the shortcomings of the typically used Bernoulli model. The FPP model is validated by comparing simulation results to weather records obtained from the National Climatic Data Center. The FPP model is applied in the proposed framework to assess the risk of a light-frame wood building subjected to combined snow and earthquake loads. Snow accumulation has a significant influence on the seismic losses of the building, and the Bernoulli snow model underestimates the seismic loss of buildings in areas with snow accumulation. An object-oriented framework is proposed in this study to perform risk assessment for light-frame wood construction.
For homeowners and stakeholders, risk expressed in terms of economic losses is much easier to understand than engineering parameters (e.g., inter-story drift). The proposed framework is used in two applications. One is to assess the loss of the building subjected to mainshock-aftershock sequences; aftershock and downtime costs are found to be important factors in the assessment of seismic losses. The framework is also applied to a wood building in the state of Washington to assess the loss of the building subjected to combined earthquake and snow loads. The proposed framework proves to be an appropriate tool for risk assessment of buildings subjected to multiple hazards. Limitations and future work are also identified.
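The advantage of a filtered Poisson process over a Bernoulli (snow present / absent) model is that it produces a continuous, time-varying snow load. The simulation below is a generic FPP sketch under stated assumptions: snowfall events arrive as a Poisson process, each deposits an exponentially distributed depth, and every pulse decays exponentially (melting/compaction). All rates, depths, and the decay kernel are illustrative, not the study's calibrated model.

```python
import numpy as np

def simulate_fpp_snow(rate_per_day, mean_depth, decay_per_day, days, rng):
    """Filtered Poisson process sketch of ground snow depth: a sum of
    randomly timed, randomly sized, exponentially decaying pulses."""
    n_events = rng.poisson(rate_per_day * days)          # number of snowfalls
    times = rng.uniform(0.0, days, size=n_events)        # event times (days)
    depths = rng.exponential(mean_depth, size=n_events)  # deposited depths
    t_grid = np.arange(days, dtype=float)
    # superpose decayed pulses: w(t - t_i) = exp(-decay * (t - t_i)), t >= t_i
    lag = t_grid[:, None] - times[None, :]
    snow = np.where(lag >= 0.0,
                    depths * np.exp(-decay_per_day * lag), 0.0).sum(axis=1)
    return snow

rng = np.random.default_rng(42)
depth_series = simulate_fpp_snow(rate_per_day=0.2, mean_depth=5.0,
                                 decay_per_day=0.05, days=180, rng=rng)
```

Sampling the simulated depth at earthquake occurrence times is what lets the framework combine the snow load with seismic demand, something the on/off Bernoulli model cannot represent.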

Relevância: 90.00%

Resumo:

Frequency-transformed EEG resting data have been widely used to describe normal and abnormal brain functional states as a function of the spectral power in different frequency bands. This has yielded a series of clinically relevant findings. However, transforming the EEG into the frequency domain sacrifices the initially excellent time resolution of time-domain EEG. The topographic time-frequency decomposition is a novel computerized EEG analysis method that combines previously available techniques from time-domain spatial EEG analysis and time-frequency decomposition of single-channel time series. It yields a new, physiologically and statistically plausible topographic time-frequency representation of human multichannel EEG. The original EEG is accounted for by the coefficients of a large set of user-defined, EEG-like time series, which are optimized for maximal spatial smoothness and minimal norm. These coefficients are then reduced to a small number of model scalp field configurations, which vary in intensity as a function of time and frequency. The result is thus a small number of EEG field configurations, each with a corresponding time-frequency (Wigner) plot. The method has several advantages: it does not assume that the data are composed of orthogonal elements, it does not assume stationarity, it produces topographic maps, and it allows the inclusion of user-defined, specific EEG elements, such as spike-and-wave patterns. After a formal introduction of the method, several examples are given, including artificial data and multichannel EEG recorded during different physiological and pathological conditions.
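The overall idea, a few scalp field configurations, each with its own time-frequency intensity map, can be illustrated with a much cruder stand-in: an STFT per channel followed by an SVD across channels. This is only an illustrative approximation under stated assumptions (the published method instead optimizes user-defined, EEG-like time series for spatial smoothness and minimal norm, and the simulated data below are hypothetical).

```python
import numpy as np
from scipy.signal import stft

def topographic_tf_sketch(eeg, fs, n_components=2):
    """Crude stand-in for a topographic time-frequency decomposition:
    STFT each channel, then use an SVD across channels to extract a few
    scalp field configurations, each with a time-frequency intensity map."""
    f, t, Z = stft(eeg, fs=fs, nperseg=fs)          # Z: (channels, freqs, times)
    X = np.abs(Z).reshape(eeg.shape[0], -1)         # channels x (freq * time)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    maps = U[:, :n_components]                      # scalp field configurations
    tf = (s[:n_components, None] * Vt[:n_components]).reshape(
        n_components, f.size, t.size)               # one TF plot per map
    return maps, f, t, tf

# Hypothetical data: 8 channels, 4 s at 128 Hz, with a 10 Hz burst on the
# first four channels during the second half of the epoch.
fs = 128
time = np.arange(4 * fs) / fs
rng = np.random.default_rng(1)
eeg = 0.1 * rng.normal(size=(8, time.size))
eeg[:4] += np.sin(2 * np.pi * 10.0 * time) * (time > 2.0)

maps, f, t, tf = topographic_tf_sketch(eeg, fs)
```

Each column of `maps` plays the role of one model scalp field, and the matching slice of `tf` shows when and at which frequency that field is active; the key difference from the real method is that SVD components are forced to be orthogonal, an assumption the published decomposition deliberately avoids.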