932 results for Modified Direct Analysis Method
Abstract:
This thesis describes the development of new models and toolkits for orbit determination codes to support and improve the precise radio tracking experiments of the Cassini-Huygens mission, an interplanetary mission to study the Saturn system. The core of the orbit determination process is the comparison between observed and computed observables. Disturbances in either the observed or the computed observables degrade the orbit determination process. Chapter 2 describes a detailed study of the numerical errors in the Doppler observables computed by NASA's ODP and MONTE, and ESA's AMFIN. A mathematical model of the numerical noise was developed and successfully validated against the Doppler observables computed by the ODP and MONTE, with typical relative errors smaller than 10%. The numerical noise proved to be, in general, an important source of noise in the orbit determination process and, under some conditions, it may become the dominant noise source. Three different approaches to reduce the numerical noise were proposed. Chapter 3 describes the development of the multiarc library, which makes it possible to perform a multi-arc orbit determination with MONTE. The library was developed during the analysis of the Cassini radio science gravity experiments at Saturn's satellite Rhea. Chapter 4 presents the estimation of Rhea's gravity field obtained from a joint multi-arc analysis of the Cassini R1 and R4 fly-bys, describing in detail the spacecraft dynamical model used, the data selection and calibration procedure, and the analysis method followed. In particular, the full unconstrained quadrupole gravity field was estimated, yielding a solution that is not statistically compatible with the condition of hydrostatic equilibrium. The solution proved to be stable and reliable. The normalized moment of inertia is in the range 0.37-0.4, indicating that Rhea may be almost homogeneous, or at least characterized by a small degree of differentiation.
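The abstract does not reproduce the noise model, but the order of magnitude of round-off noise on a differenced-range Doppler observable is easy to illustrate. A minimal Python sketch, assuming IEEE-754 double precision and a differenced-range formulation (the thesis's actual model is more detailed):

```python
import numpy as np

EPS = np.finfo(np.float64).eps   # relative round-off of a double, ~2.2e-16

def doppler_roundoff_noise(range_m: float, count_time_s: float) -> float:
    """Order-of-magnitude estimate of numerical noise on a differenced-range
    Doppler observable: each range value carries an absolute round-off error
    of about EPS * range, and differencing two ranges over the count time
    turns that into a range-rate error."""
    range_error = EPS * range_m
    return np.sqrt(2.0) * range_error / count_time_s  # two uncorrelated errors

# Example: a spacecraft near Saturn (~1.4e12 m) with a 60 s count time
print(doppler_roundoff_noise(1.4e12, 60.0))  # ~7e-6 m/s, micrometre/s level
```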
Abstract:
The interplay of hydrodynamic and electrostatic forces is of great importance for the understanding of colloidal dispersions. Theoretical descriptions are often based on the so-called standard electrokinetic model. This mean-field approach combines the Stokes equation for the hydrodynamic flow field, the Poisson equation for electrostatics, and a continuity equation describing the evolution of the ion concentration fields. In the first part of this thesis, a new lattice method is presented to efficiently solve this set of non-linear equations for a charge-stabilized colloidal dispersion in the presence of an external electric field. Within this framework, the research is mainly focused on the calculation of the electrophoretic mobility. Since this transport coefficient is independent of the electric field only for small driving, the algorithm is based upon a linearization of the governing equations. The zeroth order is the well-known Poisson-Boltzmann theory, and the first order is a coupled set of linear equations. This set of equations is further divided into several subproblems. A specialized solver for each subproblem is developed, and various tests and applications are discussed for each particular method. Finally, all solvers are combined in an iterative procedure and applied to several interesting questions, for example, the effect of the screening mechanism on the electrophoretic mobility, or the charge dependence of the field-induced dipole moment and the ion clouds surrounding a weakly charged sphere. In the second part, a quantitative data analysis method is developed for a new experimental approach known as Total Internal Reflection Fluorescence Cross-Correlation Spectroscopy (TIR-FCCS). The TIR-FCCS setup is an optical method that uses fluorescent colloidal particles to analyze the flow field close to a solid-fluid interface. The interpretation of the experimental results requires a theoretical model, which is usually the solution of a convection-diffusion equation. Since an analytic solution is not available due to the form of the flow field and the boundary conditions, an alternative numerical approach is presented. It is based on stochastic methods, i.e., a combination of a Brownian dynamics algorithm and Monte Carlo techniques. Finally, experimental measurements for a hydrophilic surface are analyzed using this new numerical approach.
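As a minimal illustration of the zeroth order of this linearization, the sketch below solves the nondimensional 1D Poisson-Boltzmann equation phi'' = sinh(phi) for a charged plate by Newton iteration on a finite-difference grid. The plate geometry and all values are illustrative; the thesis's lattice method for full dispersions is far more general:

```python
import numpy as np

def poisson_boltzmann_1d(phi0=2.0, L=10.0, n=400, tol=1e-10):
    """Solve phi'' = sinh(phi) (lengths in Debye lengths, potential in kT/e)
    between a charged plate at x=0 and bulk at x=L via Newton iteration."""
    x = np.linspace(0.0, L, n)
    h = x[1] - x[0]
    phi = phi0 * np.exp(-x)            # Debye-Hueckel initial guess
    phi[0], phi[-1] = phi0, 0.0
    for _ in range(100):
        F = np.zeros(n)
        F[1:-1] = (phi[2:] - 2*phi[1:-1] + phi[:-2]) / h**2 - np.sinh(phi[1:-1])
        # Tridiagonal Jacobian of F; boundary rows pin the Dirichlet values
        J = np.zeros((n, n))
        idx = np.arange(1, n - 1)
        J[idx, idx] = -2.0 / h**2 - np.cosh(phi[idx])
        J[idx, idx - 1] = 1.0 / h**2
        J[idx, idx + 1] = 1.0 / h**2
        J[0, 0] = J[-1, -1] = 1.0
        dphi = np.linalg.solve(J, -F)
        phi += dphi
        if np.max(np.abs(dphi)) < tol:
            break
    return x, phi

x, phi = poisson_boltzmann_1d()
```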
Abstract:
Nanoparticulate drug delivery systems hold great potential for therapeutic applications. In this work, several fundamental aspects required for a broader biological understanding and for the development of further targeted strategies for pharmacotherapy with nanoparticles and nanocapsules were investigated. Cellular uptake experiments (in vitro and ex vivo) were carried out with various nanoparticles and nanocapsules made from diverse monomers and biocompatible macromolecules in immortalized cell lines, human mesenchymal stem cells, and leukocytes, and were analyzed by flow cytometry and confocal laser scanning microscopy. The influence of the surface functionalization of the nanoparticulate systems, their toxicological effects, and the influence of bovine serum albumin adsorbed onto functionalized polystyrene nanoparticles were examined with respect to cellular uptake. To study the multiple interactions of the nanoparticles with components of human peripheral whole blood, a flow cytometric assay in anticoagulated peripheral whole blood (ex vivo) was successfully developed. It was demonstrated that calcium-complexing anticoagulants lead to a reduction, whereas Li-heparin (which does not complex calcium) leads to an enhancement, of the cellular uptake of functionalized polystyrene nanoparticles in various leukocytes. For folic acid-coupled hydroxyethyl starch nanocapsules (synthesized by Dr. Grit Baier), a size-dependent, selective, folate receptor α-mediated cellular uptake pathway in HeLa cells was demonstrated. Hydrolyzable, non-cytotoxic polyester nanoparticles made of poly(5,6-benzo-2-methylene-1,3-dioxepane) (synthesized by Dr. Jörg Max Siebert) with embedded paclitaxel showed a pharmacological effect in HeLa cells comparable to that of commercially available paclitaxel formulations. The nanoparticles and nanocapsules used in this work hold diverse potential as drug delivery systems. It became apparent that differences in the size, size distribution, polymer, and surface functionalization of the nanoparticles lead to significant differences in cellular uptake in diverse cell lines (in vitro) and in leukocytes in peripheral whole blood (ex vivo).
Abstract:
In recent years, several previously unknown phenomena have been observed experimentally, such as the existence of distinct pre-nucleation structures. These have contributed to a new understanding of the processes occurring at the molecular level during the nucleation and growth of crystals. The effects of such pre-nucleation structures on the process of biomineralization are not yet sufficiently understood. The mechanisms by which biomolecular modifiers, such as peptides, interact with pre-nucleation structures and could thereby influence the nucleation process of minerals are manifold. Molecular simulations are well suited to analyzing the formation of pre-nucleation structures in the presence of modifiers. This work describes an approach for analyzing the interaction of peptides with the dissolved constituents of the forming crystals by means of molecular dynamics simulations. To enable informative simulations, the quality of existing force fields was first examined with respect to their description of oligoglutamates interacting with calcium ions in aqueous solution. Large discrepancies between established force fields became apparent, and none of the examined force fields provided a realistic description of the ion pairing of these complex ions. A strategy for optimizing existing biomolecular force fields in this respect was therefore developed. Relatively small changes to the parameters governing the ion-peptide van der Waals interactions were sufficient to obtain a reliable model for the system under study. Comprehensive sampling of the phase space of these systems is particularly challenging because of the numerous degrees of freedom and the strong interactions between calcium ions and glutamate in solution. The biasing-potential replica exchange molecular dynamics method was therefore tuned for the sampling of oligoglutamates, and peptides of different chain lengths were simulated in the presence of calcium ions. Using sketch-map analysis, numerous stable ion-peptide complexes that could influence the formation of pre-nucleation structures were identified in the simulations. Depending on the chain length of the peptide, these complexes exhibit characteristic distances between the calcium ions. These resemble some of the calcium-calcium distances in those phases of calcium oxalate crystals grown in the presence of oligoglutamates. The analogy between the calcium-calcium distances in dissolved ion-peptide complexes and in calcium oxalate crystals could point to the importance of ion-peptide complexes in the nucleation and growth of biominerals, and offers a possible explanation for the experimentally observed ability of oligoglutamates to influence the phase of the forming crystal.
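The abstract does not state the exchange rule; for reference, the standard Metropolis acceptance criterion for Hamiltonian (e.g., biasing-potential) replica exchange between replicas $i$ and $j$, with biased potentials $U_i$, $U_j$, configurations $x_i$, $x_j$, and a common inverse temperature $\beta$, is

$$ p_{\mathrm{acc}} = \min\left\{ 1,\ \exp\left( \beta \left[ \left( U_i(x_i) + U_j(x_j) \right) - \left( U_i(x_j) + U_j(x_i) \right) \right] \right) \right\}, $$

so a swap that lowers the total biased energy is always accepted.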
Abstract:
In many cases, it is not possible to hold motorists accountable for considerably exceeding the speed limit, because they deny being the driver in the speed-check photograph. An anthropological comparison of facial features using a photo-to-photo comparison can be very difficult, depending on the quality of the photographs. One difficulty of that analysis method is that the comparison photographs of the presumed driver are taken with a different camera or camera lens and from a different angle than the speed-check photo. Taking a comparison photograph with exactly the same camera setup is almost impossible. Therefore, only an imprecise comparison of the individual facial features is possible. The geometry and position of each facial feature, for example the distance between the eyes or the positions of the ears, cannot be taken into consideration. We applied a new method using 3D laser scanning, optical surface digitalization, and photogrammetric calculation of the speed-check photo, which enables a geometric comparison. Thus, the influence of the focal length and the distortion of the objective lens are eliminated, and the precise position and viewing direction of the speed-check camera are calculated. Even in cases of low-quality images or when the face of the driver is partly hidden, this method delivers good results. This new method, Geometric Comparison, is evaluated and validated in a dedicated study, which is described in this article.
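The photogrammetric core of such a geometric comparison is the pinhole projection of 3D landmarks from the head scan into the calibrated speed-check camera, where feature distances can then be compared in pixels. A brief sketch; the intrinsics, pose and landmark coordinates below are hypothetical, not values from the study:

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project 3D landmark points (N x 3, world frame) into an image using
    a pinhole camera with intrinsics K, rotation R and translation t."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera frame
    uv = K @ cam
    return (uv[:2] / uv[2]).T                 # perspective division

# Hypothetical example: the two eye centres from a 3D head scan, projected
# with an assumed calibrated pose of the speed-check camera.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 480.0],
              [0.0, 0.0, 1.0]])               # assumed intrinsics
R, t = np.eye(3), np.array([0.0, 0.0, 3.0])   # assumed pose, camera 3 m away
eyes = np.array([[-0.032, 0.0, 0.0],
                 [0.032, 0.0, 0.0]])          # landmarks 64 mm apart
uv = project(eyes, K, R, t)
print(np.linalg.norm(uv[0] - uv[1]))          # pixel distance between the eyes
```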
Abstract:
As lightweight and slender structural elements are used more frequently in design, large-scale structures become more flexible and susceptible to excessive vibrations. To ensure the functionality of the structure, the dynamic properties of the occupied structure need to be estimated during the design phase. Traditional analysis methods model occupants simply as additional mass; however, research has shown that human occupants are better modeled as an additional degree of freedom. In the United Kingdom, active and passive crowd models have been proposed by the Joint Working Group (JWG) as a result of a series of analytical and experimental studies. The crowd models are expected to yield a more accurate estimate of the dynamic response of the occupied structure. However, experimental testing recently conducted through a graduate student project at Bucknell University indicated that the proposed passive crowd model might not accurately represent the occupants' impact on the structure. The objective of this study is to assess the validity of the crowd models proposed by the JWG by comparing the dynamic properties obtained from experimental testing data with analytical modeling results. The experimental data used in this study were collected by Firman in 2010. The analytical results were obtained by performing a time-history analysis on a finite element model of the occupied structure. The crowd models were created based on the recommendations of the JWG combined with the physical properties of the occupants during the experimental study. In this study, SAP2000 was used to create the finite element models and to run the analyses; Matlab and ME'scope were used to obtain the dynamic properties of the structure by processing the time-history analysis results from SAP2000. The results of this study indicate that the active crowd model can quite accurately represent the occupants' impact on the structure when they stand with bent knees, while the passive crowd model cannot properly simulate the dynamic response of the structure when occupants stand straight or sit on the structure. Future work related to this study involves improving the passive crowd model and evaluating the crowd models with full-scale structure models and operating data.
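The "occupant as an additional degree of freedom" idea can be sketched as a two-degree-of-freedom system: the empty structure as one mass-spring-damper, with the crowd as a second one attached to it. The numbers below are generic placeholders, not the JWG model parameters:

```python
import numpy as np

ms, ks, cs = 2.0e4, 8.0e6, 4.0e3   # structure: mass kg, stiffness N/m, damping N s/m
mh, kh, ch = 800.0, 2.0e6, 3.0e4   # crowd DOF: assumed lumped values

M = np.diag([ms, mh])
K = np.array([[ks + kh, -kh], [-kh, kh]])
C = np.array([[cs + ch, -ch], [-ch, ch]])

# State-space matrix; its eigenvalues give natural frequencies and damping ratios
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
lam = np.linalg.eigvals(A)
lam = lam[np.imag(lam) > 0]           # one eigenvalue per underdamped mode
freqs = np.abs(lam) / (2 * np.pi)     # natural frequencies, Hz
zetas = -np.real(lam) / np.abs(lam)   # damping ratios
print(freqs, zetas)
```

Comparing these modes with those of the bare structure (drop the second row and column) reproduces, qualitatively, the occupant-induced frequency and damping changes the study measures.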
Abstract:
There are numerous statistical methods for quantitative trait linkage analysis in human studies. An ideal such method would have high power to detect genetic loci contributing to the trait, would be robust to non-normality in the phenotype distribution, would be appropriate for general pedigrees, would allow the incorporation of environmental covariates, and would be appropriate in the presence of selective sampling. We recently described a general framework for quantitative trait linkage analysis, based on generalized estimating equations, of which many current methods are special cases. This procedure is appropriate for general pedigrees and easily accommodates environmental covariates. In this paper, we use computer simulations to investigate the power and robustness of a variety of linkage test statistics built upon our general framework. We also propose two novel test statistics that take account of higher moments of the phenotype distribution in order to accommodate non-normality. These new linkage tests are shown to have high power and to be robust to non-normality. While we have not yet examined the performance of our procedures in the context of selective sampling via computer simulations, the proposed tests satisfy all of the other qualities of an ideal quantitative trait linkage analysis method.
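A classical special case of such a framework is Haseman-Elston regression, which makes the basic idea concrete. The sketch below implements that textbook method on toy data; it is not the authors' proposed GEE-based test statistics:

```python
import numpy as np

def haseman_elston(trait_pairs, ibd_share):
    """Classic Haseman-Elston regression: regress the squared trait difference
    of each sib pair on the pair's estimated proportion of alleles shared IBD
    at the test locus. A significantly negative slope suggests linkage."""
    y = (trait_pairs[:, 0] - trait_pairs[:, 1]) ** 2
    X = np.column_stack([np.ones_like(ibd_share), ibd_share])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    n, p = X.shape
    sigma2 = res[0] / (n - p)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], beta[1] / se      # slope and its t statistic

# Toy data: 500 sib pairs; squared trait differences shrink with IBD sharing
rng = np.random.default_rng(0)
pi = rng.choice([0.0, 0.5, 1.0], size=500, p=[0.25, 0.5, 0.25])
pairs = rng.normal(size=(500, 2)) * (1.5 - pi)[:, None]
print(haseman_elston(pairs, pi))
```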
Abstract:
Liquid films, evaporating or non-evaporating, are ubiquitous in nature and technology. The dynamics of evaporating liquid films has applications in several industries, such as water recovery, heat exchangers, crystal growth, and drug design. The theory describing the dynamics of liquid films crosses several fields, including engineering, mathematics, materials science, biophysics, and volcanology. Interfacial instabilities typically manifest as the undulation of an interface from a presumed flat state, the onset of a secondary flow state from a primary quiescent state, or both. To study the instabilities affecting liquid films, an evaporating/non-evaporating Newtonian liquid film is subjected to a perturbation. Numerical analysis is conducted on configurations of such liquid films heated on solid surfaces in order to examine the various stabilizing and destabilizing mechanisms that can cause the formation of different convective structures. These convective structures have implications for the heat transfer that occurs via this process. Certain aspects of this research topic have not received attention, as will be evident from the literature review. Static, horizontal liquid films on solid surfaces are examined for their resistance to long-wave instabilities via linear stability analysis, the method of normal modes, and finite difference methods. The spatiotemporal evolution equation available in the literature, describing the time evolution of a liquid film heated on a solid surface, is utilized to analyze the various stabilizing and destabilizing mechanisms affecting evaporating and non-evaporating liquid films. The impact of these mechanisms on film stability and structure, for both buoyant and non-buoyant films, is examined by varying the mechanical and thermal boundary conditions. Films evaporating in zero gravity are studied using the evolution equation. It is found that films that are stable to long-wave instabilities in terrestrial gravity are prone to destabilization via long-wave instabilities in zero gravity.
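The abstract does not reproduce the evolution equation. A representative long-wave (lubrication) evolution equation of the general type analyzed in this literature, for the thickness $h(x,t)$ of a film with viscosity $\mu$, surface tension $\sigma$, density $\rho$, gravity $g$, and evaporative mass flux $J$, is

$$ \frac{\partial h}{\partial t} + \frac{\partial}{\partial x}\left[ \frac{h^{3}}{3\mu}\,\frac{\partial}{\partial x}\left( \sigma\,\frac{\partial^{2} h}{\partial x^{2}} - \rho g h \right) + \frac{h^{2}}{2\mu}\,\frac{\partial \sigma}{\partial x} \right] + \frac{J}{\rho} = 0, $$

with terms modeling capillarity, gravity, thermocapillary (Marangoni) stress through $\partial\sigma/\partial x$, and evaporative mass loss. Long-wave stability is assessed by perturbing a flat state, $h = h_0 + \varepsilon\, e^{ikx + \omega t}$, and examining the sign of the growth rate $\omega(k)$.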
Abstract:
Even though complete resection is regarded as the only curative treatment for non-small cell lung cancer (NSCLC), >50% of resected patients die from a recurrence or a second primary tumour of the lung within 5 yrs. It remains unclear whether follow-up in these patients is cost-effective and whether it can improve the outcome through early detection of recurrent tumour. The benefit of regular follow-up was analysed in a consecutive series of 563 patients who had undergone potentially curative resection for NSCLC at the University Hospital. The follow-up consisted of clinical visits and chest radiography according to a standard protocol for up to 10 yrs. Survival rates were estimated using the Kaplan-Meier method and the cost-effectiveness of the follow-up programme was assessed. A total of 23 patients (6.4% of the group with lobectomy) underwent further operation with curative intent for a second pulmonary malignancy. The regular follow-up over a 10-yr period provided the chance for a second curative treatment to 3.8% of all patients. The calculated cost per life-yr gained was 90,000 Swiss francs, far above that of comparable large-scale surveillance programmes. Based on these data, the intensity and duration of the follow-up were reduced.
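The survival estimation step is standard. As a minimal sketch, the Kaplan-Meier estimator can be computed from follow-up times and event indicators as follows (toy data, not the study's records):

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier estimator: S(t) is the product over event times t_i <= t
    of (1 - d_i / n_i), with d_i deaths and n_i patients still at risk."""
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    times, surv, s, n = [], [], 1.0, len(time)
    for t in np.unique(time):
        at_t = time == t
        d = int(event[at_t].sum())    # deaths at time t
        if d:
            s *= 1.0 - d / n
            times.append(t)
            surv.append(s)
        n -= int(at_t.sum())          # deaths and censored leave the risk set
    return np.array(times), np.array(surv)

# Toy data: follow-up years, 1 = death, 0 = censored (alive at last visit)
t, surv = kaplan_meier([1, 2, 2, 3, 5, 8, 10], [1, 1, 0, 1, 0, 1, 0])
print(t, surv)
```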
Abstract:
Frequency-transformed EEG resting data have been widely used to describe normal and abnormal brain functional states as a function of the spectral power in different frequency bands. This has yielded a series of clinically relevant findings. However, by transforming the EEG into the frequency domain, the initially excellent time resolution of time-domain EEG is lost. The topographic time-frequency decomposition is a novel computerized EEG analysis method that combines previously available techniques from time-domain spatial EEG analysis and time-frequency decomposition of single-channel time series. It yields a new, physiologically and statistically plausible topographic time-frequency representation of human multichannel EEG. The original EEG is accounted for by the coefficients of a large set of user-defined, EEG-like time series, which are optimized for maximal spatial smoothness and minimal norm. These coefficients are then reduced to a small number of model scalp field configurations, which vary in intensity as a function of time and frequency. The result is thus a small number of EEG field configurations, each with a corresponding time-frequency (Wigner) plot. The method has several advantages: it does not assume that the data are composed of orthogonal elements, it does not assume stationarity, it produces topographical maps, and it allows the inclusion of user-defined, specific EEG elements, such as spike-and-wave patterns. After a formal introduction of the method, several examples are given, including artificial data and multichannel EEG recorded during different physiological and pathological conditions.
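A drastically simplified illustration of the overall idea, and explicitly not the published algorithm (which optimizes spatial smoothness and minimal norm rather than using an SVD): expand each channel in time-frequency, then reduce the stacked coefficients to a few scalp field configurations, each paired with a time-frequency intensity plot:

```python
import numpy as np
from scipy.signal import stft

def topographic_tf(eeg, fs, n_components=3):
    """Toy version: STFT per channel, then an SVD of the stacked magnitude
    coefficients yields a few scalp maps, each with its own (Wigner-like)
    time-frequency intensity plot."""
    n_ch = eeg.shape[0]
    f, t, Z = stft(eeg, fs=fs, nperseg=128)    # Z: channels x freqs x times
    coeffs = np.abs(Z).reshape(n_ch, -1)       # channels x (freq * time)
    U, s, Vt = np.linalg.svd(coeffs, full_matrices=False)
    maps = U[:, :n_components]                 # scalp field configurations
    tf_plots = (s[:n_components, None] * Vt[:n_components]).reshape(
        n_components, len(f), len(t))          # one intensity plot per map
    return maps, f, t, tf_plots

rng = np.random.default_rng(1)
eeg = rng.normal(size=(19, 2048))              # 19 channels of toy data
maps, f, t, tf = topographic_tf(eeg, fs=256.0)
```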
Abstract:
Natural soil profiles may be interpreted as an arrangement of parts which are characterized by properties like hydraulic conductivity and the water retention function. These parts form a complicated structure. Characterizing the soil structure is fundamental in subsurface hydrology because it has a crucial influence on flow and transport and defines the patterns of many ecological processes. We applied an image analysis method for recognition and classification of visual soil attributes in order to model flow and transport through a man-made soil profile. Modeled and measured saturation-dependent effective parameters were compared. We found that characterizing and describing conductivity patterns in soils with sharp conductivity contrasts is feasible. In contrast, solving flow and transport on the basis of these conductivity maps is difficult and, in general, requires special care in the representation of small-scale processes.
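Once a classified conductivity map is available, simple bounds already show how strongly the structure can matter; the study itself solves the flow and transport equations on such maps. A sketch with hypothetical values:

```python
import numpy as np

# Hypothetical binary structure map with a sharp conductivity contrast,
# standing in for a map classified from soil-profile images.
rng = np.random.default_rng(2)
classes = rng.integers(0, 2, size=(100, 100))
K = np.where(classes == 1, 1e-4, 1e-8)        # hydraulic conductivity, m/s

# Classical Wiener bounds bracket the effective conductivity between the
# harmonic mean (flow in series) and the arithmetic mean (flow in parallel).
k_harmonic = 1.0 / np.mean(1.0 / K)           # lower (series) bound
k_arithmetic = np.mean(K)                     # upper (parallel) bound
k_geometric = np.exp(np.mean(np.log(K)))      # common estimate for 2D media
print(k_harmonic, k_geometric, k_arithmetic)
```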
Abstract:
The tropical region is an area of maximum humidity and serves as the major humidity source of the globe. Among other phenomena, it is governed by the so-called Inter-Tropical Convergence Zone (ITCZ), which is commonly defined by converging low-level winds or enhanced precipitation. Given its importance as a humidity source, we investigate the humidity fields in the tropics in different reanalysis data sets, deduce their climatology and variability, and assess the relationship to the ITCZ. To this end, a new analysis method of the specific humidity distribution is introduced which detects the location of the humidity maximum, its strength, and its meridional extent. The results show that the humidity maximum in boreal summer is strongly shifted northward over the warm pool/Asian monsoon area and the Gulf of Mexico. These shifts go along with a peak in strength in both areas; however, the extent shrinks over the warm pool/Asian monsoon area, whereas it widens over the Gulf of Mexico. In winter, such connections between location, strength and extent are not found. Still, a peak in strength is again identified over the Gulf of Mexico in boreal winter. The variability of the three characteristics is dominated by inter-annual signals in both seasons. The results using ERA-Interim data suggest a positive trend in the Gulf of Mexico/Atlantic region from 1979 to 2010, showing an increased northward shift in recent years. Although the trend is only weakly confirmed by the results using MERRA reanalysis data, it is in phase with a trend in hurricane activity, a possible hint of the relevance of the new method for hurricane research. Furthermore, the position of the humidity maximum coincides with that of the ITCZ in most areas. One exception is the western and central Pacific, where the area is dominated by the double ITCZ in boreal winter. Nevertheless, the new method enables us to gain more insight into the humidity distribution, its variability, and its relationship to ITCZ characteristics.
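The three characteristics can be illustrated with a minimal sketch: given a meridional profile of specific humidity, locate the maximum, read off its strength, and measure the meridional extent, here taken (as an assumption; the paper's exact definition may differ) as the width at half of the peak anomaly:

```python
import numpy as np

def humidity_max_characteristics(lat, q):
    """From a meridional profile of specific humidity q(lat), return the
    latitude of the maximum, its strength, and a meridional extent measured
    as the width at half of the peak anomaly above the profile minimum."""
    i = np.argmax(q)
    strength = q[i]
    half = 0.5 * (strength + q.min())
    above = np.where(q >= half)[0]            # contiguous for a single peak
    extent = lat[above[-1]] - lat[above[0]]
    return lat[i], strength, extent

lat = np.linspace(-30.0, 30.0, 121)
q = 4e-3 + 14e-3 * np.exp(-((lat - 8.0) / 6.0) ** 2)  # toy summer profile
print(humidity_max_characteristics(lat, q))   # peak near 8N, extent ~10 deg
```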
Abstract:
OBJECTIVE We sought to evaluate potential reasons given by board-certified doctors for the persistence of adverse events despite efforts to improve patient safety in Switzerland. SUMMARY BACKGROUND DATA In recent years, substantial efforts have been made to improve patient safety by introducing surgical safety checklists to standardise surgeries and team procedures. Still, a high number of adverse events remain. METHODS Clinic directors in operative medicine in Switzerland were asked to answer two questions concerning the reasons for persistence of adverse events, and the advantages and disadvantages of introducing and implementing surgical safety checklists. Of 799 clinic directors, the arguments of 237 (29.7%) were content-analysed using Mayring's content analysis method, resulting in 12 different categories. RESULTS Potential reasons for the persistence of adverse events were mainly seen as being related to the "individual" (126/237, 53.2%), but directors of high-volume clinics identified factors related to the "group and interactions" significantly more often as a reason (60.2% vs 40.2%; p = 0.003). Surgical safety checklists were thought to have positive effects on the "organisational level" (47/237, 19.8%), the "team level" (37/237, 15.6%) and the "patient level" (40/237, 16.9%), with a "lack of willingness to implement checklists" as the main disadvantage (34/237, 14.3%). CONCLUSION This qualitative study revealed the individual as the main player in the persistence of adverse events. Working conditions should be optimised to minimise interface problems in the case of cross-covering of patients, to assure support for students, residents and interns, and to reduce strain. Checklists are helpful on an "organisational level" (e.g., financial benefits, quality assurance) and to clarify responsibilities.
Abstract:
To initiate our clinical trial for chemotherapy protection, I established the retroviral vector system for human MDR1 cDNA gene transfer. The human MDR1 cDNA continued to be expressed in the transduced bone marrow cells after four cohorts of serial transplants, 17 months after the initial transduction and transplant. In addition, we used this retroviral vector, pVMDR1, to transduce human bone marrow and peripheral blood CD34+ cells on stromal monolayers in the presence of hematopoietic growth factors. These data suggest that the retroviral vector pVMDR1 can modify hematopoietic precursor cells with a capacity for long-term self-renewal. Thus, it may be possible to use the MDR1 retroviruses to confer chemotherapeutic protection on the normal hematopoietic precursor cells of ovarian and breast cancer patients in whom high doses of MDR drugs may be required to control the disease. Another promising vector system is the recombinant adeno-associated virus (rAAV) vector. An impediment to the use of rAAV vectors is that their production for clinical use is extremely cumbersome and labor intensive. I first set up the rAAV vector system in our laboratory and then focused on studies related to the production of rAAV vectors for clinical use. By using a self-inactivating retroviral vector carrying a selection marker under the control of the CMV immediate early promoter, and an AAV genome with both ITRs deleted, I developed both a transient and a stable method to produce rAAV vectors. These methods involve infection only and can generate high-titer rAAV vectors (up to 2 × 10^5 cfu/ml of CVL) with much less work. Although recombinant adenoviral vectors hardly infect early hematopoietic precursor cells, which lack αvβ5 or αvβ3 integrin on their surface, they efficiently infect other cells; these properties of adenoviral vectors can be exploited for bone marrow purging as well as for the development of new viral vectors such as pseudotyped retroviral vectors and rAAV vectors. Replacement of self-inactivating retroviral vectors by recombinant adenoviral vectors will facilitate the above strategies for the production of new viral vectors. To accomplish these goals, I developed a new method, much more efficient than current methods, to construct adenoviral vectors. It involves a cosmid vector system that is used to construct full-length recombinant adenoviral vectors in vitro. First, I developed an efficient and flexible method for the in vitro construction of full-length recombinant adenoviral vectors in the cosmid vector system using a three-DNA-fragment ligation. This system was then improved by using a two-DNA-fragment ligation. Development of recombinant adenoviral vectors by this method depends only on the efficiency of transfection; no homologous recombination is required to obtain infectious adenoviral vectors. Thus, the efficiency of generating recombinant adenoviral vectors by the cosmid method reported here was much higher than that of the in vitro direct ligation method or the in vivo homologous recombination method reported previously. This method of in vitro construction of recombinant adenoviral vectors in the cosmid vector system may facilitate the development of adenoviral vectors for human gene therapy. (Abstract shortened by UMI.)
Abstract:
Sound speed as a diagnostic marker for various diseases of human tissue has been of interest for some time. Up to now, mostly transmission ultrasound computed tomography (UCT) was able to detect spatially resolved sound speed, and its promise as a diagnostic tool has been demonstrated. However, UCT is limited to acoustically transparent samples such as the breast. We present a novel technique in which spatially resolved detection of sound speed is achieved using conventional pulse-echo equipment in reflection mode. For this purpose, pulse-echo images are acquired under various transmit beam directions, and a two-dimensional map of the sound speed is reconstructed from the changing phase of local echoes using a direct reconstruction method. Phantom results demonstrate that a high spatial resolution (1 mm) and contrast (0.5% of the average sound speed) can be achieved, suitable for diagnostic purposes. In comparison to previous reflection-mode-based methods, CUTE also works in situations with only diffuse echoes, and its direct reconstruction algorithm enables real-time application. This makes it suitable as an addition to conventional clinical ultrasound, where it has the potential to benefit diagnosis in a multimodal approach. In addition, knowledge of the spatial distribution of sound speed allows full aberration correction and thus improved spatial resolution and contrast of conventional B-mode ultrasound.
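A heavily simplified sketch of the reconstruction idea, with toy geometry and values rather than the published CUTE algorithm: model the echo phase shifts between transmit angles as line integrals of the slowness deviation, then invert the linear system by regularized least squares:

```python
import numpy as np

# Toy forward model: d = omega * A @ m, where m is the slowness deviation per
# pixel, A holds (hypothetical) path lengths per pixel and measurement, and
# d are the measured echo phase shifts between transmit angles, in radians.
rng = np.random.default_rng(3)
n_pix, n_meas = 64, 256
A = rng.random((n_meas, n_pix)) * 0.5e-3        # path lengths per pixel, m
m_true = np.zeros(n_pix)
m_true[20:30] = 1.0 / 1480.0 - 1.0 / 1540.0     # slowness anomaly, inclusion
omega = 2 * np.pi * 5e6                          # 5 MHz centre frequency
d = omega * (A @ m_true) + rng.normal(0, 1e-3, n_meas)

# Tikhonov-regularized least-squares inversion of the slowness map
lam = 1e-2
G = omega * A
m_hat = np.linalg.solve(G.T @ G + lam * np.eye(n_pix), G.T @ d)
c_hat = 1.0 / (m_hat + 1.0 / 1540.0)             # recovered sound speed, m/s
print(c_hat[20:30].mean())                       # close to 1480 m/s
```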