950 results for Univalent Functions with Negative Coefficients


Relevance: 100.00%

Abstract:

Diclofenac sodium (DS) is a non-steroidal anti-inflammatory drug that is widely prescribed for the treatment of rheumatoid arthritis and for post-surgery analgesia. The active pharmaceutical ingredient is the anhydrous form; however, it can also exist as a hydrate. Knowledge of solid-state properties is important and relevant in the pharmaceutical area because they have a significant impact on the solubility, bioavailability, and chemical stability of drugs. In the present study, data from XRPD, FTIR spectroscopy, and thermal analysis were used for the identification and characterization of the DS forms (anhydrous and hydrate). An HPLC method was optimized to evaluate the plasma concentration of DS in rabbits. The optimized method exhibited good linearity over the range 0.1-60 μg/mL, with correlation coefficients >0.9991. The mean recovery was 100%. Precision and accuracy were within acceptable limits. Finally, to compare the pharmacological properties of the anhydrous and hydrate DS forms, we investigated their effects on the febrile response induced by lipopolysaccharide from E. coli in rabbits. The results show that the antipyretic effects of the anhydrous and hydrate DS forms are similar.
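The linearity claim above rests on an ordinary least-squares calibration line. A minimal sketch, using hypothetical peak-area readings (the study's actual calibration data are not reproduced here):

```python
# Sketch: least-squares calibration line for an HPLC assay over the
# 0.1-60 ug/mL range mentioned in the abstract. The detector response
# below (area = 12.5*conc + 0.3) is invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (a, b, r)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    a = sxy / sxx
    b = my - a * mx
    r = sxy / (sxx * syy) ** 0.5  # Pearson correlation coefficient
    return a, b, r

concs = [0.1, 1, 5, 10, 30, 60]           # standards, ug/mL (hypothetical)
areas = [12.5 * c + 0.3 for c in concs]   # perfectly linear toy response
slope, intercept, r = fit_line(concs, areas)
```

With real measurements the correlation coefficient r would be compared against an acceptance limit such as the >0.9991 reported in the abstract.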

Relevance: 100.00%

Abstract:

Background: Surgical resection is the only curative treatment for hilar cholangiocarcinoma. Laparoscopic hepatectomy has been used to treat several types of liver neoplasms; however, technical issues have limited the adoption of laparoscopy for the treatment of hilar cholangiocarcinoma. To date there is only one report of a minimally invasive procedure for hilar cholangiocarcinoma in the literature. The present video-assisted procedure shows a laparoscopic resection of hilar cholangiocarcinoma. Patient and Methods: A 43-year-old woman with progressive jaundice due to left-sided hilar cholangiocarcinoma was referred for treatment. The decision was to perform a laparoscopic left hepatectomy with lymphadenectomy and resection of the extrahepatic bile ducts. Biliary reconstruction was performed using the hybrid method. Results: Operative time was 300 minutes, with minimal blood loss and no need for blood transfusion. Recovery was uneventful, and the patient was discharged on postoperative Day 7. Pathology revealed a well-differentiated cholangiocarcinoma with negative lymph nodes and clear surgical margins. The patient is well, with no signs of the disease, 18 months after the procedure. Conclusions: Laparoscopic left hepatectomy with lymphadenectomy is safe and feasible in selected patients when performed by surgeons with expertise in liver surgery and minimally invasive techniques. A hybrid method may be needed for biliary reconstruction, especially in cases where the position and size of the remnant bile ducts may jeopardize the anastomosis. Further studies are needed to confirm the benefit of this approach over conventional surgery for hilar cholangiocarcinoma.

Relevance: 100.00%

Abstract:

To address the prognostic value of minimal residual disease (MRD) before unrelated cord blood transplantation (UCBT) in children with acute lymphoblastic leukemia (ALL), we analyzed 170 children with ALL transplanted in complete remission (CR) after a myeloablative conditioning regimen. In all, 72 (43%) were in first CR (CR1), 77 (45%) in second CR (CR2) and 21 (12%) in third CR (CR3). The median interval from MRD quantification to UCBT was 18 days. All patients received single-unit UCBT. Median follow-up was 4 years. The cumulative incidence (CI) of day-60 neutrophil engraftment was 85%. The 4-year CI of relapse was 30%, with a lower incidence in patients with negative MRD before UCBT (hazard ratio (HR) = 0.4, P = 0.01) and in those transplanted in CR1 and CR2 (HR = 0.3, P = 0.002). The probability of 4-year leukemia-free survival (LFS) was 44% (56%, 44% and 14% for patients transplanted in CR1, CR2 and CR3, respectively; P = 0.0001). Patients with negative MRD before UCBT had better LFS after UCBT than those with positive MRD (54% vs 29%; HR = 2, P = 0.003). MRD assessment before UCBT in children with ALL in remission allows identification of patients at higher risk of relapse after transplantation. Approaches that may decrease relapse incidence in children given UCBT with positive MRD should be investigated to improve final outcomes. Leukemia (2012) 26, 2455-2461; doi:10.1038/leu.2012.123
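The LFS probabilities quoted above are typically obtained with the Kaplan-Meier estimator. A minimal sketch on invented (time, event) pairs — the study's patient-level data are not public:

```python
# Sketch: Kaplan-Meier estimate of leukemia-free survival. The follow-up
# times and event flags below are hypothetical, for illustration only.

def kaplan_meier(times, events):
    """times: follow-up in years; events: 1 = relapse/death, 0 = censored.
    Returns a list of (t, S(t)) points at each event time."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t, e in sorted(zip(times, events)):
        if e:  # an event: survival drops by the factor (1 - 1/at_risk)
            surv *= 1 - 1 / at_risk
            curve.append((t, surv))
        at_risk -= 1  # both events and censorings leave the risk set
    return curve

times  = [0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 4.0, 4.0]  # years (hypothetical)
events = [1,   1,   0,   1,   0,   0,   0,   0]
curve = kaplan_meier(times, events)  # step function: 0.875, 0.75, 0.6
```

Comparing two such curves (e.g. MRD-negative vs MRD-positive) is what underlies the reported hazard ratios, though HRs themselves come from a Cox model rather than this estimator.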

Relevance: 100.00%

Abstract:

Purpose: To characterize the profile of patients with cochlear implants according to the International Classification of Functioning, Disability and Health for Children and Youth (ICF-CY). Methods: This is a descriptive, cross-sectional, retrospective study that examined 30 medical records of cochlear implant users at the Centro de Pesquisas Audiológicas. The ICF-CY was used to characterize the patients' profiles. The assessment relied on procedures performed in the clinical routine, in addition to information registered in the medical records. After review, the information was related to ICF-CY codes, to which a qualifier was then added. Results: Overall, 55 ICF-CY codes were related to the instruments used to characterize this population. Regarding Body Functions, most participants had no disabilities related to the reception and expression of oral language or to auditory functions, with only written-language disabilities being found. The same findings were observed in the Activity and Participation field. Regarding environmental factors, noise and the unavailability of technological resources to assist auditory comprehension in noise were characterized as barriers, as was the absence of speech therapy. Conclusion: Most of the participating children showed no deficiency in body functions, with difficulties reported only in relation to school performance. Environmental factors (noise, unavailability of technological resources, absence of speech therapy) were characterized as barriers. The need to expand assessments in the clinical routine was also noted.

Relevance: 100.00%

Abstract:

Abstract Background HCV is prevalent throughout the world and is a major cause of chronic liver disease. There is no effective vaccine, and the most common therapy, based on Peginterferon, has a success rate of ~50%. The mechanisms underlying viral resistance have not been elucidated, but it has been suggested that both host and virus contribute to therapy outcome. The non-structural 5A (NS5A) protein, a critical virus component, is involved in cellular and viral processes. Methods The present study analyzed structural and functional features of 345 HCV NS5A sequences of genotypes 1 and 3, using in silico tools. Results There were differences in residue-type composition and secondary structure between the genotypes. In addition, secondary-structure variance differed significantly between response groups in genotype 3. A motif search indicated conserved glycosylation, phosphorylation and myristoylation sites that could be important for structural stabilization and function. Furthermore, a highly conserved integrin ligation site was identified that could be linked to nuclear forms of NS5A. ProtFun indicated NS5A to have diverse enzymatic and non-enzymatic activities, participating in a great range of cell functions, with statistically significant differences between genotypes. Conclusion This study presents new insights into HCV NS5A. It is the first study that, using bioinformatics tools, suggests differences between genotypes and therapy-response groups that can be related to NS5A protein features, and it thus emphasizes the importance of bioinformatics tools in viral studies. The data acquired herein will aid in clarifying the structure/function of this protein and in the development of antiviral agents.
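The motif search mentioned above can be illustrated with a regex scan for the canonical N-glycosylation sequon N-X-S/T (X ≠ P). This is a generic sketch on a made-up sequence, not the study's actual NS5A sequences or motif definitions:

```python
# Sketch: scan a protein sequence for N-glycosylation sequons N{X!=P}[ST].
# The toy sequence below is invented; real scans would use NS5A sequences.
import re

def find_sequons(seq):
    """Return 0-based start positions of N{X!=P}[ST] sequons.
    A zero-width lookahead makes the scan overlap-safe."""
    return [m.start() for m in re.finditer(r"(?=N[^P][ST])", seq)]

toy = "MANASGLNPTKNQTA"
hits = find_sequons(toy)  # N-A-S at 2 and N-Q-T at 11; N-P-T at 7 excluded
```

Phosphorylation and myristoylation motifs would be handled the same way, with the corresponding patterns.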

Relevance: 100.00%

Abstract:

The aim of this preliminary study was to verify the antibacterial potential of cetylpyridinium chloride (CPC) in root canals infected with Enterococcus faecalis. Forty human maxillary anterior teeth were prepared and inoculated with E. faecalis for 60 days. The teeth were randomly assigned to the following groups: 1: Root canal preparation (RCP) + 0.1% CPC with positive-pressure irrigation (PPI, conventional, NaviTip®); 2: RCP + 0.2% CPC PPI; 3: RCP + 2.5% NaOCl PPI; 4: RCP + 2.5% NaOCl with a negative-pressure irrigation system (NPI, EndoVac®); 5: positive control; and 6: negative control. Four teeth of each experimental group were evaluated by culture and 4 by scanning electron microscopy (SEM). In all teeth, the root canals were dried and filled with 17% EDTA (pH 7.2) for 3 min for smear layer removal. Samples from the infected root canals were collected and immersed in 7 mL of Letheen Broth (LB), followed by incubation at 37°C for 48 h. Bacterial growth was analyzed by turbidity of the culture medium and then measured with a UV spectrophotometer. The irrigating solutions were further evaluated for antimicrobial effect by an agar diffusion test. The statistical data were analyzed using means, standard deviations, the Kruskal-Wallis test and analysis of variance, with the significance level set at 5%. The results showed the presence of E. faecalis after root canal sanitization. The number of bacteria decreased after the use of CPC. In the agar diffusion test, CPC induced large microbial inhibition zones, similar to those of 2% chlorhexidine and larger than those of 2.5% NaOCl. In conclusion, cetylpyridinium chloride showed antibacterial potential against endodontic infection with E. faecalis.

Relevance: 100.00%

Abstract:

The main aim of this Ph.D. dissertation is the study of clustering dependent data by means of copula functions, with particular emphasis on microarray data. Copula functions are a popular multivariate modeling tool in every field where multivariate dependence is of great interest, but their use in clustering has not yet been investigated. The first part of this work reviews the literature on clustering methods, copula functions and microarray experiments. The attention focuses on the K-means (Hartigan, 1975; Hartigan and Wong, 1979), hierarchical (Everitt, 1974) and model-based (Fraley and Raftery, 1998, 1999, 2000, 2007) clustering techniques, because their performance is compared. Then the probabilistic interpretation of Sklar's theorem (Sklar, 1959), estimation methods for copulas such as Inference for Margins (Joe and Xu, 1996), and the Archimedean and Elliptical copula families are presented. Finally, applications of clustering methods and copulas to genetic and microarray experiments are highlighted. The second part contains the original contribution. A simulation study is performed to evaluate the performance of the K-means and hierarchical bottom-up clustering methods in identifying clusters according to the dependence structure of the data-generating process. Different simulations are performed by varying several conditions (e.g., the kind of margins (distinct, overlapping and nested) and the value of the dependence parameter), and the results are evaluated by means of different measures of performance. In light of the simulation results and of the limits of the two investigated clustering methods, a new clustering algorithm based on copula functions ('CoClust' in brief) is proposed. The basic idea, the iterative procedure of the CoClust and a description of the R functions written for it, with their output, are given.
The CoClust algorithm is tested on simulated data (varying the number of clusters, the copula models, the dependence parameter value and the degree of overlap of margins) and compared with model-based clustering using different measures of performance, such as the percentage of well-identified numbers of clusters and the percentage of non-rejection of H0 on the dependence parameter. It is shown that the CoClust algorithm overcomes all observed limits of the other investigated clustering techniques and is able to identify clusters according to the dependence structure of the data, independently of the degree of overlap of margins and the strength of the dependence. The CoClust uses a criterion based on the maximized log-likelihood function of the copula and can virtually account for any possible dependence relationship between observations. Several distinctive characteristics of the CoClust are shown, e.g. its capability of identifying the true number of clusters and the fact that it does not require a starting classification. Finally, the CoClust algorithm is applied to the real microarray data of Hedenfalk et al. (2001), both to the gene expressions observed in three different cancer samples and to the columns (tumor samples) of the whole data matrix.
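The kind of copula dependence the CoClust clusters on can be simulated directly. A minimal Python sketch (not the thesis's R implementation): sampling from a bivariate Clayton copula by conditional inversion and checking that the empirical Kendall's tau matches the theoretical value θ/(θ+2):

```python
# Sketch: Clayton copula sampling and Kendall's tau. Pure-Python toy with
# illustrative parameters; not the thesis's CoClust code.
import random

def clayton_sample(theta, n, rng):
    """Conditional-inversion sampler for the bivariate Clayton copula."""
    out = []
    for _ in range(n):
        u, w = rng.random(), rng.random()
        # invert the conditional distribution C(v|u) = w
        v = (u ** -theta * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
        out.append((u, v))
    return out

def kendall_tau(pairs):
    """Naive O(n^2) Kendall's tau (no tie handling; fine for continuous data)."""
    c, n = 0, len(pairs)
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = pairs[i], pairs[j]
            c += 1 if (x1 - x2) * (y1 - y2) > 0 else -1
    return 2 * c / (n * (n - 1))

theta = 2.0                                   # theory: tau = theta/(theta+2) = 0.5
tau = kendall_tau(clayton_sample(theta, 400, random.Random(42)))
```

Clusters generated from copulas with different θ differ in dependence strength even when their margins overlap, which is exactly the regime where the thesis reports K-means and hierarchical clustering failing.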

Relevance: 100.00%

Abstract:

We present a new strategy for constructing spline spaces over hierarchical T-meshes with quad- and octree subdivision schemes. The proposed technique includes some simple rules for inferring local knot vectors to define C²-continuous cubic tensor-product spline blending functions. Our conjecture is that these rules allow one to obtain, for a given T-mesh, a set of linearly independent spline functions with the property that spaces spanned by nested T-meshes are also nested, and that the functions can therefore reproduce cubic polynomials. In order to span spaces with these properties using the proposed rules, the T-mesh need only fulfill the requirement of being a 0-balanced mesh...
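The blending functions built from local knot vectors are ordinary B-spline basis functions, evaluable by the Cox-de Boor recursion. A minimal sketch with a uniform knot vector (for illustration only; the paper's point is the hierarchical T-mesh construction, not this evaluation):

```python
# Sketch: Cox-de Boor evaluation of cubic (degree-3) B-spline basis
# functions, plus a partition-of-unity check inside the fully supported
# region. Uniform knots are an assumption for this toy.

def bspline_basis(i, p, knots, t):
    """Value of the i-th degree-p B-spline basis function at parameter t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * bspline_basis(i, p - 1, knots, t)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + p + 1] - t) / d2 * bspline_basis(i + 1, p - 1, knots, t)
    return left + right

knots = list(range(12))   # uniform knot vector 0..11
t = 5.3                   # parameter inside [knots[3], knots[8]]
total = sum(bspline_basis(i, 3, knots, t) for i in range(len(knots) - 4))
```

For cubic polynomial reproduction (the property conjectured above) the basis must not only sum to one but also reproduce x, x², x³ from suitable coefficients; the sum-to-one check is just the degree-0 case.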

Relevance: 100.00%

Abstract:

Clusters have increasingly become an essential part of policy discourses at all levels (EU, national, regional) dealing with regional development, competitiveness, innovation, entrepreneurship and SMEs. These impressive efforts to promote the concept of clusters in the policy-making arena have been accompanied by much less academic and scientific research investigating the actual economic performance of firms in clusters and the design and execution of cluster policies, or going beyond singular case studies to a more methodologically integrated and comparative approach to the study of clusters and their real-world impact. The theoretical background is far from consolidated, and there is a variety of methodologies and approaches for studying and interpreting this phenomenon, while at the same time there is little comparability among studies of actual cluster performance. The conceptual framework of clustering suggests that clusters affect performance, but theory makes little prediction as to the ultimate distribution of the value created by clusters. This thesis takes the case of Eastern European countries for two reasons. One is that clusters, as coopetitive environments, are a new phenomenon there, as the previous centrally planned system did not allow for such types of firm organization. The other is that, as new EU member states, they have been subject to the increased popularization of the cluster policy approach by the European Commission, especially in the framework of the National Reform Programmes related to the Lisbon objectives. The originality of the work lies in the fact that, starting from an overview of theoretical contributions on clustering, it offers a comparative empirical study of clusters in transition countries. There have been very few attempts in the literature to examine cluster performance in a comparative cross-country perspective.
It adds to this an analysis of cluster policies and their implementation, or lack thereof, as a way to examine how the cluster concept has been introduced to transition economies. Our findings show that the implementation of cluster policies does vary across countries, with some countries having embraced it more than others. The specific modes of implementation, however, are very similar, based mostly on soft measures such as funding for cluster initiatives, usually directed towards the creation of cluster management structures or cluster facilitators. They are essentially founded on the common assumption that the added value of clusters lies in the creation of linkages among firms, human capital, skills and knowledge at the local level, most often perceived as the regional level. Oftentimes geographical proximity is not a necessary element in the application process, and cluster applications are very similar to network memberships. Cluster mapping is rarely a factor in the selection of cluster initiatives for funding, and the related question of critical mass and expected outcomes is not considered. In fact, monitoring and evaluation are not elements of the cluster policy cycle that have received much attention. Bulgaria and the Czech Republic are the countries that have implemented cluster policies most decisively; Hungary and Poland have made significant efforts, while Slovakia and Romania have used cluster initiatives only sporadically and unsystematically. When examining whether firms located within regional clusters in fact perform better and are more efficient than similar firms outside clusters, we do find positive results across countries and across sectors. The only country with a negative impact from being located in a cluster is the Czech Republic.

Relevance: 100.00%

Abstract:

Data from various studies carried out over recent years in Italy on the problem of school dropout in secondary school show that difficulty in studying mathematics is one of the most frequent sources of discomfort reported by students. Nevertheless, it is unrealistic to think we can do without such knowledge in today's society: mathematics is widely taught in secondary school and is not confined to technical-scientific courses only. It is reasonable to say that, although students may choose academic courses that are apparently far removed from mathematics, all students will have to come to terms with this subject sooner or later in their lives. Among the reasons given for discomfort with the study of mathematics, some concern the very nature of the subject, in particular the complex symbolic language through which it is expressed. In fact, mathematics is a multimodal system composed of oral and written verbal texts, symbolic expressions such as formulae and equations, figures and graphs. For this reason, the study of mathematics represents a real challenge for those who suffer from dyslexia: a constitutional condition limiting people's performance in reading and writing and, in particular, in the study of mathematical content. Here, difficulties in working with verbal and symbolic codes entail, in turn, difficulties in comprehending the texts from which to deduce the operations that, combined together, lead to a problem's final solution. Information technologies may effectively support learners with this disorder. However, these tools have some implementation limits that restrict their use in the study of scientific subjects. Speech-synthesis word processors are currently used to compensate for reading difficulties in the humanities, but they are not used in mathematics.
This is because speech synthesis (or rather the screen reader driving it) is not able to interpret anything that is not textual, such as symbols, images and graphs. The DISMATH software, the subject of this project, would allow dyslexic users to read technical-scientific documents with the help of speech synthesis, to understand the spatial structure of formulae and matrices, and to write documents with technical-scientific content in a format compatible with the main scientific editors. The system uses LaTeX, a text-based mathematical language, as its mediation system. It is set up as a LaTeX editor whose graphic interface, in line with the main commercial products, offers some additional specific functions to support the needs of users who cannot manage verbal and symbolic codes on their own. LaTeX is translated in real time into standard symbolic notation and is read by speech synthesis in natural language, in order to increase, through this bimodal representation, the ability to process information. Understanding a mathematical formula through listening is made possible by deconstructing the formula into a "tree" representation, which identifies the logical elements composing it. Users, even without knowing the LaTeX language, are able to write whatever scientific document they need: the symbolic elements are selected from dedicated menus and automatically translated by the software, which manages the correct syntax. The final aim of the project, therefore, is to implement an editor enabling dyslexic people (but not only them) to manage mathematical formulae effectively, through the integration of different software tools, thus also allowing better teacher/learner interaction.
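The "tree" deconstruction idea can be illustrated with a toy parser. DISMATH's internals are not public, so this is only a sketch: it handles a single LaTeX construct, \frac{...}{...}, plus plain symbol runs, and the spoken wording is invented:

```python
# Sketch: parse a tiny LaTeX subset into a tree and read it aloud.
# Only \frac{...}{...} and plain symbol runs are supported in this toy.

def parse(s, i=0):
    """Return (node, next_index); node is ('frac', num, den) or a string."""
    if s.startswith(r"\frac", i):
        num, i = parse_group(s, i + 5)
        den, i = parse_group(s, i)
        return ("frac", num, den), i
    j = i
    while j < len(s) and s[j] not in "{}\\":
        j += 1
    return s[i:j], j

def parse_group(s, i):
    """Parse one {...} group starting at index i."""
    assert s[i] == "{", "expected opening brace"
    node, j = parse(s, i + 1)
    assert s[j] == "}", "expected closing brace"
    return node, j + 1

def speak(node):
    """Turn the tree into a natural-language reading."""
    if isinstance(node, str):
        return node
    _, num, den = node
    return f"fraction: numerator {speak(num)}, denominator {speak(den)}"

tree, _ = parse(r"\frac{a+b}{c}")
spoken = speak(tree)  # "fraction: numerator a+b, denominator c"
```

A real editor would cover the full LaTeX math grammar and localize the spoken output; the point here is only how a spatial structure becomes a linear, listenable description.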

Relevance: 100.00%

Abstract:

This thesis investigates the orientational glass transition of disordered molecular crystals. The theoretical treatment is complicated by the anisotropy of the one-particle distribution function and of the pair functions. Assuming a rigid lattice, reciprocal space is in turn restricted to the first Brillouin zone. The orientational glass transition is studied within the framework of the mode-coupling equations, which are derived for this purpose. Hard ellipsoids of revolution on a rigid sc lattice serve as the model. To compute the static tensorial structure factors, the Ornstein-Zernike (OZ) equation for molecular crystals is derived and solved self-consistently together with the Percus-Yevick (PY) approximation carried over from molecular liquids. In parallel, the structure factors are determined by MC simulations. The OZ equation for molecular crystals resembles that of liquids, but because of the rigid lattice the direct and total correlation functions appear only without constant parts in the angular variables, in contrast to the PY approximation. The anisotropy also introduces a nontrivial additional factor. The OZ/PY structure factors and the MC results agree well. The matrix elements of the density-density correlation function show three main behaviors: oscillatory, monotonic and irregular decay. Oscillations correspond to alternating density fluctuations, lead to maxima of the structure factors at the zone boundary, and occur for oblate and sufficiently wide prolate ellipsoids, and more weakly for thin, not too long prolate ellipsoids. The exponential monotonic decay occurs for all ellipsoids and leads to maxima of the structure factors at the zone center, indicating a tendency toward nematic order. The OZ/PY theory is limited by diverging maxima of the structure factors.
The mode-coupling equations for molecular crystals turn out to be very similar to those for molecular liquids; however, on a rigid lattice only the matrix elements with l,l' > 0 play a role, and umklapp processes of reciprocal vectors occur. Here, too, the anisotropy introduces non-constant additional factors. Except for flat oblate ellipsoids, the mode-coupling glass line is determined by the divergence of the structure factors. For very long ellipsoids the structure factors must be extrapolated toward the divergence. Hence it is not the orientational cage effect that drives the glass transition, but fluctuations at a phase boundary. Near the spherical shape no reliable glass line can be established. The frozen-in critical density-density correlators show the oscillations of the static correlators only in a few cases, whereas the monotonic decay mostly persists to long times. Consequently, the critical mode-coupling nonergodicity parameters have weakened maxima at the zone center, while the maxima at the zone boundary have mostly disappeared. The normalized nonergodicity parameters show a wide variety of behaviors, especially deeper in the glass.

Relevance: 100.00%

Abstract:

The topic of my Ph.D. thesis is the finite element modeling of coseismic deformation imaged by DInSAR and GPS data. I developed a method to calculate synthetic Green functions with finite element models (FEMs) and then use linear inversion methods to determine the slip distribution on the fault plane. The method is applied to the 2009 L'Aquila earthquake (Italy) and to the 2008 Wenchuan earthquake (China). I focus on the influence of the rheological features of the Earth's crust by implementing seismic tomographic data, and on the influence of topography by implementing Digital Elevation Model (DEM) layers in the FEMs. Results for the L'Aquila earthquake highlight the non-negligible influence of the medium structure: homogeneous and heterogeneous models show discrepancies of up to 20% in the fault slip distribution values. Furthermore, in the heterogeneous models a new area of slip appears above the hypocenter. Regarding the 2008 Wenchuan earthquake, the very steep topographic relief of the Longmen Shan Range is implemented in my FE model. A large number of DEM layers covering East China are used to achieve complete coverage of the FE model. My objective was to explore the influence of topography on the retrieved coseismic slip distribution. The inversion results reveal significant differences between the flat and topographic models. Thus, the flat models frequently adopted cannot adequately represent the Earth's surface topography, especially in the case of the 2008 Wenchuan earthquake.
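The linear-inversion step described above can be sketched as follows: surface displacements d relate to fault-patch slips m through a matrix of Green functions G (d = G m), and m is recovered by least squares. The 4×2 matrix and "true" slips below are made up for illustration; in the thesis G comes from the FEM and d from DInSAR/GPS:

```python
# Sketch: recover fault-patch slip m from displacements d = G m by
# least squares (normal equations; fine for tiny, well-conditioned toys).
# G, true_m and d are invented, not FEM output.

def lstsq(G, d):
    """Solve min ||G m - d|| via normal equations and Gaussian elimination."""
    n = len(G[0])
    A = [[sum(G[k][i] * G[k][j] for k in range(len(G))) for j in range(n)]
         for i in range(n)]                                    # A = G^T G
    b = [sum(G[k][i] * d[k] for k in range(len(G))) for i in range(n)]  # G^T d
    for col in range(n):                    # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    m = [0.0] * n                           # back substitution
    for i in range(n - 1, -1, -1):
        m[i] = (b[i] - sum(A[i][j] * m[j] for j in range(i + 1, n))) / A[i][i]
    return m

G = [[1.0, 0.2], [0.3, 1.0], [0.5, 0.5], [0.1, 0.9]]   # 4 stations, 2 patches
true_m = [2.0, -1.0]                                   # "true" slips (toy)
d = [sum(g * s for g, s in zip(row, true_m)) for row in G]
slip = lstsq(G, d)   # noiseless data: recovers true_m
```

Real slip inversions add smoothing/regularization and positivity constraints; the structural point is only that topography and rheology change G, and hence the recovered m.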

Relevance: 100.00%

Abstract:

Gegenstand dieser Arbeit ist die Untersuchung von Photokathoden mit negativer Elektronenaffinität (NEA) mittels zeitlich hochauflösender Vermessung der emittierten Ladungs- und Spinpolarisationsverteilungen nach Anregung mit einem ultrakurzen Laserpuls. Untersucht wurden uniaxial deformierte GaAsP-Photokathoden mit dünnen emittierenden Schichten (≤150nm), sowie undeformierte GaAs-Photokathoden mit unterschiedlichen Schichtdicken. Die Untersuchungen wurden an einer 100keV-Elektronenquelle durchgeführt, wie sie am Mainzer Mikrotron (MAMI) zur Erzeugung eines Spinpolarisierten Elektronenstrahls verwendet wird. Mit der Apparatur konnte eine Zeitauflösung von 2,5ps erreicht werden. Es zeigte sich, dass die tatsächliche Antwortzeit der Photokathoden die erreichte Zeitauflösung noch unterschreitet. Eine Depolarisation in den kurzen, wegen der Zeitauflösung auf 2,5ps begrenzten, Elektronenpulsen konnte aber nachgewiesen werden. Weiterhin wurde gezeigt, dass der Polarisationsverlust der emittierten Elektronen bei dünnen Schichten im Wesentlichen auf eine energiekorrelierte Depolarisation beim Durchqueren der Bandbiegungszone zurückzuführen ist. Als weiteres Resultat wird, für die GaAsP-Photokathoden mit einer Schichtdicke von 150nm, eine Obergrenze für die mittlere Emissionszeit von ≤1,25ps angegeben. Daraus ergibt sich nach dem hier verwendeten Diffusionsmodell eine Untergrenze für die Oberflächenrekombinationsgeschwindigkeit an der Bandbiegungszone von S≥1,2·10^7 cm/s.
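The quoted limit is consistent with a simple order-of-magnitude check (not the diffusion-model derivation itself): a 150 nm layer emptied within 1.25 ps implies an effective velocity of at least thickness/time:

```python
# Order-of-magnitude check only, not the thesis's diffusion-model calculation:
# a 150 nm active layer emptied within <= 1.25 ps implies an effective
# velocity of at least thickness / time.
thickness_cm = 150e-7          # 150 nm expressed in cm
tau_s = 1.25e-12               # 1.25 ps in s
S_min = thickness_cm / tau_s   # cm/s; works out to 1.2e7 cm/s
```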

Relevance: 100.00%

Abstract:

Significant interest in nanotechnology is stimulated by the fact that materials exhibit qualitative changes of properties when their dimensions approach "finite sizes". Quantization of electronic, optical and acoustic energies at the nanoscale provides novel functions, with interest spanning from electronics and photonics to biology. The present dissertation involves the application of Brillouin light scattering (BLS) to quantify and utilize material displacements for probing the phononics and elastic properties of structured systems with dimensions comparable to the wavelength of visible light. The interplay of wave propagation with materials exhibiting spatial inhomogeneities at sub-micron length scales provides information not only about elastic properties but also about structural organization at those length scales. In addition, the vector nature of the scattering wavevector q allows addressing the directional dependence of thermomechanical properties. To meet this goal, one-dimensional confined nanostructures and a biological system possessing high hierarchical organization were investigated. These applications extend the capabilities of BLS from a characterization tool for thin films to a method for unraveling intriguing phononic properties in more complex systems.
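How BLS turns a spectral measurement into an elastic property can be sketched with the standard backscattering relation f = 2nv/λ linking the Brillouin frequency shift f to the sound velocity v. The numbers below are generic illustrative values, not results from the dissertation:

```python
# Sketch: backscattering BLS relation f = 2*n*v/lambda and its inverse.
# Refractive index, wavelength and sound velocity are assumed toy values.
n = 1.5                  # refractive index of the medium (assumed)
lam = 532e-9             # laser wavelength in m (assumed)
v = 3000.0               # hypothetical sound velocity, m/s

f = 2 * n * v / lam      # Brillouin frequency shift, Hz (order 10 GHz here)
v_back = f * lam / (2 * n)   # inverting the relation recovers v
```

In practice f is read off the Brillouin doublet in the spectrum, and v (hence an elastic modulus, given the density) follows; scanning the direction of q gives the directional dependence mentioned above.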

Relevance: 100.00%

Abstract:

This thesis covers the development and testing of two new dynamic light scattering methods: resonance enhanced dynamic light scattering (REDLS) and waveguide enhanced dynamic light scattering (WEDLS). Both methods combine evanescent waves with dynamic light scattering: the REDLS technique uses the evanescent field of a surface plasmon, while the WEDLS technique uses the evanescent field of metal-film enhanced leaky waveguide modes. The new methods provide information about the dynamics at interfaces over a broad time window (from a few nanoseconds up to several seconds) with a spatial resolution in the sub-micrometer range. They thus extend the field of dynamic light scattering in evanescent geometry, for which until now only the evanescent wave dynamic light scattering (EWDLS) technique was available. In the EWDLS technique, the evanescent field of total internal reflection serves as the coherent light source for dynamic light scattering. A comparison with the EWDLS technique shows a strongly increased signal-to-noise ratio for the newly developed techniques, owing to the resonant excitation. In addition, both the REDLS and the WEDLS technique make it possible to detect interface modifications and thus, for example, adsorption processes. The influence of an interface on the diffusion of PS latex particles was investigated; the interface consisted of gold in the case of the REDLS technique and of PMMA for the WEDLS technique. The operating principle and validity of the newly developed techniques were demonstrated using PS latex particles with hydrodynamic radii from R = 11 nm up to R = 204 nm.
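The hydrodynamic radii quoted above come from the standard DLS analysis that REDLS and WEDLS plug into: a measured relaxation rate Γ at scattering vector q gives the diffusion coefficient D = Γ/q², and the Stokes-Einstein relation then gives R. A round-trip sketch with illustrative values (water, 633 nm laser, 90° scattering), not data from the thesis:

```python
# Sketch: DLS relations D = Gamma/q^2 and Stokes-Einstein R = kB*T/(6*pi*eta*D).
# All numerical values are generic assumptions for illustration.
from math import pi, sin

kB = 1.380649e-23     # Boltzmann constant, J/K
T = 293.15            # temperature, K (20 C)
eta = 1.0e-3          # viscosity of water, Pa*s (approx. at 20 C)
lam = 633e-9          # laser wavelength, m
n = 1.33              # refractive index of water
theta = pi / 2        # 90-degree scattering angle

q = 4 * pi * n / lam * sin(theta / 2)      # scattering vector, 1/m

R = 100e-9                                  # assume a 100 nm particle
D = kB * T / (6 * pi * eta * R)             # its diffusion coefficient
Gamma = D * q ** 2                          # predicted relaxation rate, 1/s

# Inverting the chain (Gamma, q) -> D -> R recovers the assumed radius:
R_back = kB * T / (6 * pi * eta * (Gamma / q ** 2))
```

In the evanescent-geometry techniques the same analysis applies, except that q acquires components set by the penetration depth of the evanescent field, which is what makes the methods interface-sensitive.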