909 results for GLOBAL ANALYSIS
Abstract:
Background: Recent reviews have indicated that low-level laser therapy (LLLT) is ineffective in lateral elbow tendinopathy (LET), without assessing the validity of treatment procedures and doses or the influence of prior steroid injections. Methods: Systematic review with meta-analysis, with primary outcome measures of pain relief and/or global improvement and subgroup analyses of methodological quality, wavelengths and treatment procedures. Results: 18 randomised placebo-controlled trials (RCTs) were identified, with 13 RCTs (730 patients) meeting the criteria for meta-analysis. 12 RCTs satisfied half or more of the methodological criteria. Publication bias was detected by Egger's graphical test, which showed a negative direction of bias. Ten of the trials included patients with poor prognosis caused by failed steroid injections or other treatment failures, long symptom duration or severe baseline pain. The weighted mean difference (WMD) for pain relief was 10.2 mm [95% CI: 3.0 to 17.5] and the relative risk (RR) for global improvement was 1.36 [1.16 to 1.60]. Trials that targeted acupuncture points reported negative results, as did trials with wavelengths of 820, 830 and 1064 nm. In a subgroup of five trials with 904 nm lasers and one trial with a 632 nm wavelength, in which the lateral elbow tendon insertions were directly irradiated, the WMD for pain relief was 17.2 mm [95% CI: 8.5 to 25.9] and 14.0 mm [95% CI: 7.4 to 20.6] respectively, while the RR for global improvement was reported only for 904 nm, at 1.53 [95% CI: 1.28 to 1.83]. LLLT doses in this subgroup ranged between 0.5 and 7.2 joules. Secondary outcome measures of pain-free grip strength, pain pressure threshold, sick leave and follow-up data from 3 to 8 weeks after the end of treatment showed consistently significant results in favour of the same LLLT subgroup (p < 0.02). No serious side-effects were reported. Conclusion: LLLT administered with optimal doses of 904 nm and possibly 632 nm wavelengths directly to the lateral elbow tendon insertions seems to offer short-term pain relief and less disability in LET, both alone and in conjunction with an exercise regimen. This finding contradicts the conclusions of previous reviews, which failed to assess treatment procedures, wavelengths and optimal doses.
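As a minimal illustration of how such a pooled estimate is formed, the sketch below implements standard fixed-effect inverse-variance pooling of per-trial mean differences; the trial values are hypothetical, and the review itself may have used a different (e.g., random-effects) weighting.

```python
import numpy as np

def pooled_wmd(mean_diffs, std_errs):
    """Fixed-effect inverse-variance pooling of mean differences.

    mean_diffs: per-trial mean difference (e.g., pain relief in mm)
    std_errs:   per-trial standard error of the mean difference
    Returns (pooled WMD, 95% CI lower bound, 95% CI upper bound).
    """
    d = np.asarray(mean_diffs, dtype=float)
    se = np.asarray(std_errs, dtype=float)
    w = 1.0 / se**2                      # inverse-variance weights
    wmd = np.sum(w * d) / np.sum(w)      # weighted mean difference
    se_pooled = np.sqrt(1.0 / np.sum(w))
    return wmd, wmd - 1.96 * se_pooled, wmd + 1.96 * se_pooled

# Hypothetical trial data, for illustration only
wmd, lo, hi = pooled_wmd([12.0, 8.5, 15.3], [4.0, 3.2, 5.1])
print(f"WMD = {wmd:.1f} mm, 95% CI [{lo:.1f}, {hi:.1f}]")
```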
Abstract:
Background: Oral squamous cell carcinoma (OSCC) is a frequent neoplasm, which is usually aggressive and has unpredictable biological behavior and unfavorable prognosis. Understanding the molecular basis of this variability should lead to the development of targeted therapies as well as to improvements in the specificity and sensitivity of diagnosis. Results: Samples of primary OSCCs and their corresponding surgical margins were obtained from male patients during surgery, and their gene expression profiles were screened using whole-genome microarray technology. Hierarchical clustering and Principal Components Analysis were used for data visualization, and one-way Analysis of Variance was used to identify differentially expressed genes. Samples clustered mostly according to disease subsite, suggesting molecular heterogeneity within tumor stages. To corroborate our results, two publicly available microarray datasets were assessed. We found significant molecular differences between OSCC anatomic subsites in groups of genes that are presently or potentially important for drug development, including mRNA processing, cytoskeleton organization and biogenesis, metabolic process, cell cycle and apoptosis. Conclusion: Our results corroborate literature data on the molecular heterogeneity of OSCCs. Differences between disease subsites and among samples belonging to the same TNM class highlight the importance of gene expression-based classification and challenge the development of targeted therapies.
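A minimal sketch of the analysis pipeline described above (hierarchical clustering for visualization, one-way ANOVA per gene for differential expression), applied to a synthetic expression matrix; sample sizes, group labels and thresholds are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Toy expression matrix: 40 samples x 500 genes, two hypothetical subsites
expr = rng.normal(size=(40, 500))
subsite = np.array([0] * 20 + [1] * 20)
expr[subsite == 1, :25] += 1.5          # spike in a differential block

# Hierarchical clustering of samples (as used for data visualization)
Z = linkage(expr, method="average", metric="correlation")
clusters = fcluster(Z, t=2, criterion="maxclust")

# One-way ANOVA per gene to flag differential expression between subsites
pvals = np.array([f_oneway(expr[subsite == 0, g], expr[subsite == 1, g]).pvalue
                  for g in range(expr.shape[1])])
print("genes with p < 0.001:", int(np.sum(pvals < 0.001)))
```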
Abstract:
Too Big to Ignore (TBTI; www.toobigtoignore.net) is a research network and knowledge mobilization partnership established to elevate the profile of small-scale fisheries (SSF), to argue against their marginalization in national and international policies, and to develop research and governance capacity to address global fisheries challenges. Network participants and partners are conducting global and comparative analyses, as well as in-depth studies of SSF in the context of local complexity and dynamics, along with a thorough examination of governance challenges, to encourage careful consideration of this sector in local, regional and global policy arenas. Comprising 15 partners and 62 researchers from 27 countries, TBTI conducts activities in five regions of the world. In the Latin America and the Caribbean (LAC) region, we are taking a participative approach to investigate and promote stewardship and self-governance in SSF, seeking best practices and success stories that could be replicated elsewhere. The region will also focus on promoting sustainable livelihoods for coastal communities. Key activities include workshops and stakeholder meetings, facilitation of policy dialogue and networking, as well as assessment of local capacity needs and training. Currently, LAC members are putting together publications that examine key issues concerning SSF in the region and best practices, with a first focus on ecosystem stewardship. Other planned deliverables include a comparative analysis, a regional profile of the top research issues on SSF, and a synthesis of SSF knowledge in LAC.
Abstract:
Rickettsia rickettsii is an obligate intracellular tick-borne bacterium that causes Rocky Mountain Spotted Fever (RMSF), the most lethal spotted fever rickettsiosis. When an infected starving tick begins blood feeding from a vertebrate host, R. rickettsii is exposed to a temperature elevation and to components in the blood meal. These two environmental stimuli have been previously associated with the reactivation of rickettsial virulence in ticks, but the factors responsible for this phenotype conversion have not been completely elucidated. Using customized oligonucleotide microarrays and high-throughput microfluidic qRT-PCR, we analyzed the effects of a 10 degrees C temperature elevation and of a blood meal on the transcriptional profile of R. rickettsii infecting the tick Amblyomma aureolatum. This is the first study of the transcriptome of a bacterium in the genus Rickettsia infecting a natural tick vector. Although both stimuli significantly increased bacterial load, blood feeding had a greater effect, modulating five-fold more genes than the temperature upshift. Certain components of the Type IV Secretion System (T4SS) were up-regulated by blood feeding. This suggests that this important bacterial transport system may be utilized to secrete effectors during the tick vector's blood meal. Blood feeding also up-regulated the expression of antioxidant enzymes, which might correspond to an attempt by R. rickettsii to protect itself against the deleterious effects of free radicals produced by fed ticks. The modulated genes identified in this study, including those encoding hypothetical proteins, require further functional analysis and may have potential as future targets for vaccine development.
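The abstract does not state the quantification method used for the qRT-PCR data; as a hedged illustration, the sketch below implements the standard Livak 2^(-ΔΔCt) relative-expression formula commonly applied to such data. All Ct values are hypothetical.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Livak 2^(-ΔΔCt) relative expression.

    Ct values: qPCR cycle thresholds for the gene of interest (target)
    and a reference (housekeeping) gene, in treated vs control samples.
    """
    delta_treated = ct_target_treated - ct_ref_treated
    delta_control = ct_target_control - ct_ref_control
    ddct = delta_treated - delta_control
    return 2.0 ** (-ddct)

# Hypothetical Ct values: a T4SS gene after blood feeding vs unfed ticks
print(fold_change_ddct(24.1, 18.0, 26.8, 18.2))  # ≈ 5.7-fold up-regulation
```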
Abstract:
The reduction of friction and wear in systems presenting metal-to-metal contacts, as in several mechanical components, represents a traditional challenge in tribology. In this context, this work presents a computational study based on Archard's linear wear law and finite element modeling (FEM), in order to analyze the unlubricated sliding wear observed in typical pin-on-disc tests. The modeling was developed using the finite element software Abaqus® with 3-D deformable geometries and elastic–plastic material behavior for the contact surfaces. Archard's wear model was implemented in a FORTRAN user subroutine (UMESHMOTION) to describe sliding wear. Debris and oxide formation mechanisms were taken into account through a global wear coefficient obtained from experimental measurements. The implementation computes surface wear incrementally from the nodal displacements, using adaptive mesh tools that rearrange local nodal positions. In this way the worn track was obtained, and the new surface profile was integrated for mass-loss assessment. This work also presents experimental pin-on-disc tests with AISI 4140 pins on rotating AISI H13 discs at normal loads of 10, 35, 70 and 140 N, spanning the mild, transition and severe wear regimes, at a sliding speed of 0.1 m/s. Numerical and experimental results were compared in terms of wear rate and friction coefficient. Furthermore, the numerical simulation was used to analyze the stress field distribution and the changes in the surface profile across the worn track of the disc. The applied numerical formulation proved more appropriate for predicting the mild wear regime than the severe regime, especially owing to the shorter running-in period observed at lower loads, which characterizes the mild regime.
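A minimal sketch of the incremental Archard update that such a UMESHMOTION-style routine performs at each contact node (here in Python rather than FORTRAN, with hypothetical pressures, sliding increment and wear coefficient):

```python
import numpy as np

def archard_wear_increment(contact_pressure, slip_increment, k_wear):
    """Incremental Archard wear depth per contact node.

    dh = k * p * ds, with k the dimensional wear coefficient [mm^2/N]
    (Archard coefficient divided by hardness), p the nodal contact
    pressure [MPa] and ds the incremental sliding distance [mm].
    Returns the wear depth increment [mm] at each node.
    """
    return k_wear * contact_pressure * slip_increment

# Hypothetical values: pressure field over 5 contact nodes, fixed step
p = np.array([80.0, 95.0, 110.0, 95.0, 80.0])   # MPa
ds = 0.05                                        # mm per increment
k = 1.0e-7                                       # global wear coefficient
h = np.zeros_like(p)
for step in range(1000):                         # accumulate over increments
    h += archard_wear_increment(p, ds, k)        # nodes are then moved
print("worn depth per node [mm]:", h)
```

In the actual FEM implementation, the computed depth increment is applied as a nodal displacement through the adaptive meshing machinery, and the wear coefficient is the global value obtained from the experiments.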
Abstract:
Diapycnal diffusivity in the ocean is one of the least-known parameters in current climate models. Its importance lies in its role as one of the main drivers of heat transport towards the deeper layers of the ocean. Measurements of this diffusivity are sparse and insufficient for compiling a global map of its values. An extensive review of the literature through 2009 showed that the climate system is extremely sensitive to diapycnal diffusivity: the South Pacific scales with the 0.63 power of the diapycnal diffusion coefficient (kv), in contrast to the 0.44 power found for the North Atlantic, making the South Pacific the more sensitive of the two basins. This highlights the need to clarify mixing schemes, closure schemes and their parameterisations through Global Circulation Models (GCMs) and Earth system Models of Intermediate Complexity (EMICs), within the context of possible climate change and global warming due to increased greenhouse gas emissions. The main objective of this work is therefore to understand the sensitivity of the climate system to diapycnal diffusivity in the ocean through GCMs and EMICs. This requires analysing the possible diapycnal mixing schemes, with the ultimate goal of finding the optimal model to predict the evolution of the climate system, studying all the variables that influence it, and simulating it correctly over long periods of time.
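The cited power-law scalings make the sensitivity comparison easy to quantify; a minimal sketch follows (exponents taken from the abstract; the doubling scenario is an illustrative assumption):

```python
def relative_response(kv_ratio, exponent):
    """If a circulation metric scales as kv**n, changing kv by the
    factor kv_ratio changes the metric by kv_ratio**n."""
    return kv_ratio ** exponent

# Doubling the diapycnal diffusivity, using the exponents cited above
print(relative_response(2.0, 0.63))  # South Pacific: ~1.55x
print(relative_response(2.0, 0.44))  # North Atlantic: ~1.36x
```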
Abstract:
Research background and hypothesis. Several attempts have been made to understand some modalities of sport from the point of view of complexity. Most of these studies deal with the phenomenon through the mechanics of the game itself (in isolation), although some research has been conducted from the perspective of competition between teams. Our hypothesis was that, for studying competitiveness levels in a league competition system, our analysis model, based on Shannon entropy, is a useful and highly sensitive tool to determine the degree of global competitiveness of a league. Research aim. The aim of our study was to develop a model for analysing the level of competitiveness in team sport competitions, based on the level of uncertainty in each confrontation. Research methods. The degree of uncertainty, or randomness, of the competition was analysed as a factor of competitiveness and was calculated on the basis of Shannon entropy. Research results. We studied 17 NBA regular seasons, which showed a fairly steady entropic tendency. There were seasons less competitive (≤ 0.9800) than the overall average (0.9835), and periods where competitiveness remained at higher levels (range: 0.9851 to 0.9902). Discussion and conclusions. A league is more competitive when it is more random, since the final outcome is then harder to predict; when the competition is less random, the degree of competitiveness decreases significantly. The NBA is a very competitive league, with a high degree of uncertainty about the final result.
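A minimal sketch of a normalized Shannon entropy computed from season standings; the definition of the probabilities p_i (here, each team's share of total wins) is an illustrative assumption, as the paper's exact formulation is not given in the abstract.

```python
import numpy as np

def league_entropy(wins):
    """Normalized Shannon entropy of a season.

    wins: wins per team; p_i is each team's share of all wins.
    Returns H / H_max in [0, 1]; values near 1 indicate a balanced
    (more competitive, less predictable) league.
    """
    p = np.asarray(wins, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    h = -np.sum(p * np.log(p))
    return h / np.log(len(wins))

# Hypothetical 30-team standings: balanced vs dominated season
balanced = np.random.default_rng(1).integers(35, 48, size=30)
dominated = np.array([70, 65] + [30] * 28)
print(round(league_entropy(balanced), 4))   # close to 1 -> competitive
print(round(league_entropy(dominated), 4))  # lower -> less competitive
```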
Abstract:
Oceanography programme
Abstract:
Human movement analysis (HMA) aims to measure the ability of a subject to stand or to walk. In the field of HMA, tests are performed daily in research laboratories, hospitals and clinics to diagnose a disease, distinguish between disease entities, monitor the progress of a treatment and predict the outcome of an intervention [Brand and Crowninshield, 1981; Brand, 1987; Baker, 2006]. To achieve these purposes, clinicians and researchers use measurement devices such as force platforms, stereophotogrammetric systems, accelerometers and baropodometric insoles. This thesis focuses on the force platform (FP), and in particular on the quality assessment of FP data. The principal objective of our work was the design and experimental validation of a portable system for the in situ calibration of FPs. The thesis is structured as follows:
Chapter 1. Description of the physical principles underlying the functioning of an FP: how these principles are used to create force transducers such as strain gauges and piezoelectric transducers; the two categories of FPs, three- and six-component; signal acquisition (hardware structure) and signal calibration; and, finally, a brief description of the use of FPs in HMA for balance or gait analysis.
Chapter 2. Description of inverse dynamics, the most common method used in HMA. This method uses the signals measured by an FP to estimate kinetic quantities such as joint forces and moments. These variables cannot be measured directly except by very invasive techniques; consequently, they can only be estimated using indirect techniques such as inverse dynamics. Finally, a brief description of the sources of error present in gait analysis.
Chapter 3. State of the art in FP calibration. The selected literature is divided into sections describing: systems for the periodic control of FP accuracy; systems for error reduction in FP signals; and systems and procedures for the construction of an FP. A calibration system designed by our group, based on the theoretical method proposed by ?, is described in detail; this system was the starting point for the new system presented in this thesis.
Chapter 4. Description of the new system, divided into its parts: 1) the algorithm; 2) the device; and 3) the calibration procedure required to perform the calibration process correctly. The characteristics of the algorithm were optimized by a simulation approach, whose results are presented, and the different versions of the device are described.
Chapter 5. Experimental validation of the new system, achieved by testing it on 4 commercial FPs. The effectiveness of the calibration was verified by measuring, before and after calibration, the accuracy of the FPs in measuring the centre of pressure of an applied force. The new system can estimate local and global calibration matrices, through which the non-linearity of the FPs was quantified and locally compensated. Furthermore, a non-linear calibration is proposed, which compensates for the non-linear effect in FP functioning due to the bending of the upper plate. The experimental results are presented.
Chapter 6. Influence of FP calibration on the estimation of kinetic quantities with the inverse dynamics approach.
Chapter 7. The conclusions of this thesis: the need for FP calibration and the consequent improvement in the quality of kinetic data.
Appendix: Calibration of the load cell (LC) used in the presented system. Different calibration set-ups of a 3D force transducer are presented, and the optimal set-up is proposed, with particular attention to the compensation of non-linearities. The optimal set-up is verified experimentally.
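As a hedged illustration of what estimating a global calibration matrix involves, the sketch below recovers a 6x6 matrix C mapping raw FP outputs to reference loads by least squares on synthetic data; the actual system's model, loading protocol and local matrices are richer than this.

```python
import numpy as np

def estimate_calibration_matrix(S, F_ref):
    """Least-squares estimate of a 6x6 FP calibration matrix C
    such that F_ref ≈ C @ S.

    S:     6 x N matrix of raw FP output signals
    F_ref: 6 x N matrix of reference loads (e.g., from the portable
           calibration device's instrumented load cell)
    """
    # Solve min ||C S - F_ref||_F via lstsq on the transposed system
    C, *_ = np.linalg.lstsq(S.T, F_ref.T, rcond=None)
    return C.T

# Synthetic check: recover a known matrix from noisy data
rng = np.random.default_rng(0)
C_true = np.eye(6) + 0.02 * rng.normal(size=(6, 6))  # near-ideal FP
S = rng.normal(size=(6, 200))                         # 200 load applications
F_ref = C_true @ S + 0.001 * rng.normal(size=(6, 200))
C_est = estimate_calibration_matrix(S, F_ref)
print(np.max(np.abs(C_est - C_true)))                 # small residual error
```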
Abstract:
This doctoral work gains deeper insight into the dynamics of knowledge flows within and across clusters, unfolding their features, directions and strategic implications. Alliances, networks and personnel mobility are acknowledged as the three main channels of inter-firm knowledge flows, thus offering three heterogeneous measures for analyzing the phenomenon. The interplay between the three channels and the richness of available research methods allowed for the elaboration of three different papers and perspectives. The common empirical setting is the IT cluster in Bangalore, chosen for its distinctive features as a high-tech cluster and for its steady double-digit yearly growth around the service-based business model. The first paper deploys both a firm-level and a tie-level analysis, exploring the cases of 4 domestic companies and 2 MNCs active in the cluster from a cluster-based perspective. The distinction between business-domain knowledge and technical knowledge emerges from the qualitative evidence and is further confirmed by quantitative analyses at tie level. At firm level, the degree of specialization seems to influence the kind of knowledge shared, while at tie level both the frequency of interaction and the governance mode prove to determine differences in the distribution of knowledge flows. The second paper zooms out and considers inter-firm networks; focusing in particular on the role of the cluster boundary, internal and external networks are analyzed in their size, long-term orientation and exploration degree. The research method is purely qualitative and allows for the observation of the evolving strategic role of the internal network: from exploitation-based to exploration-based. Moreover, a causal pattern is emphasized, linking the evolution and features of the external network to those of the internal network. The final paper addresses the softer, more micro-level side of knowledge flows: personnel mobility. A social capital perspective is developed, which considers both the acquisition and the loss of employees as building inter-firm ties, thus enhancing a company's overall social capital. Negative binomial regression analyses at dyad level test the significant impact of cluster affiliation (cluster firms vs non-cluster firms), industry affiliation (IT firms vs non-IT firms) and foreign affiliation (MNCs vs domestic firms) in shaping the uneven distribution of personnel mobility, and thus of knowledge flows, among companies.
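A minimal sketch of a dyad-level negative binomial regression of mobility counts on affiliation dummies, using statsmodels on simulated data; the covariates, controls and data are illustrative assumptions, not the dissertation's actual specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500  # hypothetical firm dyads

# Dyad-level dummies: cluster, industry and foreign affiliation
X = rng.integers(0, 2, size=(n, 3)).astype(float)
X = sm.add_constant(X)

# Simulated mobility counts with an assumed effect of each affiliation
mu = np.exp(X @ np.array([0.2, 0.8, 0.5, 0.4]))
y = rng.poisson(mu)  # stand-in for overdispersed count data

model = sm.GLM(y, X, family=sm.families.NegativeBinomial())
result = model.fit()
print(result.summary())
```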
Abstract:
The intensity of regional specialization in specific activities, and conversely, the level of industrial concentration in specific locations, have been used as complementary evidence for the existence and significance of externalities. Economists have mainly focused the debate on disentangling the sources of specialization and concentration processes along three vectors: natural advantages, internal scale economies, and external scale economies. The arbitrariness of spatial partitions plays a key role in capturing these effects, since the selected partition should reflect the actual characteristics of the economy. The identification of spatial boundaries for measuring specialization therefore becomes critical, since the model will most likely have to adapt to different scales of distance and be influenced by different types of externalities or agglomeration economies, which rest on interaction mechanisms with particular requirements of spatial proximity. This work analyzes the spatial dimension of economic specialization using the case of the manufacturing industry. The main objective is to propose, for both discrete and continuous space: i) a measure of global specialization; ii) a local disaggregation of the global measure; and iii) a spatial clustering method for the identification of specialized agglomerations.
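As a hedged illustration of the type of statistic involved (the abstract proposes its own global and local measures, which are not reproduced here), the sketch below computes the classic Krugman specialization index for a region against a national benchmark, on hypothetical employment data:

```python
import numpy as np

def krugman_specialization(region_emp, national_emp):
    """Krugman specialization index for one region.

    region_emp, national_emp: employment by industry (same ordering).
    Returns a value in [0, 2]: 0 = industry mix identical to the
    benchmark economy, 2 = completely disjoint specialization.
    """
    s_r = np.asarray(region_emp, float) / np.sum(region_emp)
    s_n = np.asarray(national_emp, float) / np.sum(national_emp)
    return float(np.sum(np.abs(s_r - s_n)))

# Hypothetical employment over 4 manufacturing industries
print(krugman_specialization([120, 30, 10, 40], [1000, 800, 600, 600]))
```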
Abstract:
The velocity and mixing fields of two turbulent jet configurations have been experimentally characterized by means of cold- and hot-wire anemometry in order to investigate the effects of the initial conditions on the flow development. In particular, the experiments focused on the effect of the separating wall between the two streams on the flow field. The results show that the wake behind a thick separating wall has a strong influence on the evolution of the flow field. For instance, for nearly unitary velocity ratios, a clear vortex shedding from the wall is observable. This phenomenon enhances the mixing between the inner and outer shear layers. The enhanced fluctuating activity is a consequence of a local absolute instability of the flow which, for a small range of velocity ratios, behaves as a hydrodynamic oscillator insensitive to external perturbations. It is suggested that this absolute instability can be used as a passive method to control the flow evolution. Finally, acoustic excitation was applied to the near field in order to verify whether the observed vortex shedding behind the separating wall is due to a global oscillating mode, as predicted by theory. A new scaling relationship is also proposed to determine the preferred frequency for nearly unitary velocity ratios. The proposed law takes into account the dependence of this frequency on both the Reynolds number and the velocity ratio, and therefore improves on all previously proposed relationships.
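The abstract does not reproduce the new scaling law itself; as a rough illustration of the quantity it predicts, the sketch below evaluates the classic bluff-body shedding estimate f = St·U/t, with a Strouhal number, velocity and wall thickness that are purely hypothetical:

```python
def shedding_frequency(strouhal, velocity, wall_thickness):
    """Classic bluff-body estimate of the vortex-shedding frequency:
    f = St * U / t, with St the Strouhal number, U a characteristic
    velocity (e.g., the mean of the two stream velocities) and t the
    separating-wall thickness."""
    return strouhal * velocity / wall_thickness

# Hypothetical values: St ≈ 0.21 (cylinder-like), U = 10 m/s, t = 5 mm
print(shedding_frequency(0.21, 10.0, 0.005), "Hz")  # ≈ 420 Hz
```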
Abstract:
3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft tissue artefact that limits the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications, but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, slowed their translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In this thesis the fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated through preliminary in-silico studies as: (a) geometric distortion and calibration errors; (b) 2D image and 3D model resolutions; (c) incorrect contour extraction; (d) bone model symmetries; (e) optimization algorithm limitations; (f) user errors. The effect of each criticality was quantified and verified with a preliminary in-vivo study on the elbow joint. The dominant source of error was identified as the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process. To solve this problem, two different approaches were followed: to enlarge the convergence basin around the optimal pose, the local approach used sequential alignment of the 6 degrees of freedom in order of sensitivity, or a geometric feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated in a series of in-silico studies and validated in-vitro through a phantom-based comparison with a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translation and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, as in methodological research studies; the mono-planar analysis may be sufficient for clinical applications where analysis time and cost are an issue. A further reduction of user interaction was obtained for prosthetic joint kinematics: a mixed region-growing and level-set segmentation method was proposed, which halved the analysis time by delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semi-automatic method is comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study of foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
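As a hedged illustration of the core 2D/3D registration step that these methods automate (not the thesis's actual algorithm), the sketch below minimizes the reprojection error of a rigid point model under a pinhole camera with a local optimizer, on synthetic data; it also shows why local methods need a good starting pose:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def reprojection_error(pose, model_pts, image_pts, focal):
    """Mean squared 2D distance between projected model points and
    their measured image locations, for a 6-DOF pose
    (3 rotations [rad] + 3 translations)."""
    rot = Rotation.from_euler("xyz", pose[:3]).as_matrix()
    pts = model_pts @ rot.T + pose[3:]          # rigid transform
    proj = focal * pts[:, :2] / pts[:, 2:3]     # pinhole projection
    return np.mean(np.sum((proj - image_pts) ** 2, axis=1))

# Hypothetical bone model (point cloud) and a known ground-truth pose
rng = np.random.default_rng(3)
model = rng.normal(scale=30.0, size=(100, 3))            # mm
true_pose = np.array([0.1, -0.05, 0.2, 5.0, -3.0, 900.0])
rot = Rotation.from_euler("xyz", true_pose[:3]).as_matrix()
pts = model @ rot.T + true_pose[3:]
image = 1000.0 * pts[:, :2] / pts[:, 2:3]                # focal = 1000

# Local optimization from a user-supplied starting pose
start = true_pose + np.array([0.05, 0.05, -0.05, 2.0, 2.0, 20.0])
res = minimize(reprojection_error, start, args=(model, image, 1000.0),
               method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-12})
print(np.round(res.x - true_pose, 3))  # near zero if convergence succeeds
```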
Abstract:
The chosen topic concerns the adoption of private standards by agri-food companies and its consequences for overall company management. In particular, the aim of this work is to assess the implications of the adoption of the BRC Global Standard for Food Safety by Italian agri-food companies. The assessment of this impact is based on the perceptions of company managers regarding economic, managerial, commercial, quality-related and organizational aspects. The research followed two main steps. First, 7 in-depth interviews were conducted with the Quality Managers (QMs) of BRC Food-certified Italian agri-food companies. The variables extracted from the qualitative content analysis of the interviews were included, together with those identified in the literature, in the questionnaire created for the subsequent survey. The questionnaire was sent by e-mail, with telephone support, to a random sample of companies. After a predefined data-collection period, 192 questionnaires had been completed. Descriptive analysis of the data shows that the QMs largely agree with the statements concerning the impact elements. The most widely shared statements concern: efficiency of the HACCP system, efficiency of the traceability system, control procedures, staff training, better management of emergencies and non-conformities, and better implementation and understanding of other certified management systems. ANOVA between qualitative and quantitative variables, with the related F-test, shows that certain company characteristics, such as geographic area, company size, product category and status with respect to ISO 9001, can influence respondents' opinions in different ways. Subsequently, factor analysis extracted 8 factors from an initial set of 28 variables. Hierarchical cluster analysis based on these factors segmented the sample into 5 distinct groups, each interpreted through a profile determined by its positioning with respect to the factors. The results were validated through focus groups with researchers and industry practitioners, and further supported by a subsequent qualitative survey of 4 large British retailers. The purpose of this follow-up survey was to assess the existence of divergent opinions towards suppliers, supporting the hypothesis of an information-asymmetry problem that persists in the main contractual relationships despite the presence of private standards. Further research could estimate whether assessing the impact of the BRC standard can help processing companies implement other quality standards, and which variables influence perceptions of the costs of adopting the standard.
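A minimal sketch of the factor extraction and hierarchical clustering steps described above (8 factors from 28 items, 5 clusters), applied to synthetic Likert-scale responses; the linkage method and all data are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(7)
# Hypothetical Likert responses: 192 respondents x 28 impact items
responses = rng.integers(1, 6, size=(192, 28)).astype(float)

# Extract 8 factors from the 28 variables
fa = FactorAnalysis(n_components=8, random_state=0)
scores = fa.fit_transform(responses)

# Hierarchical (Ward) clustering on factor scores, cut at 5 groups
Z = linkage(scores, method="ward")
groups = fcluster(Z, t=5, criterion="maxclust")
print(np.bincount(groups)[1:])  # respondents per cluster profile
```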
Abstract:
Sequence-specific biomolecular assays are proving extremely useful, particularly in view of the Human Genome Project, for the detection of single nucleotide polymorphisms (SNPs) and for the identification of genes. Given the large number of base pairs to be analyzed, sensitive and efficient screening methods are needed that can process DNA samples in a suitable manner. Most detection formats rely on the interaction of a surface-tethered probe and the corresponding target with the surface. The analysis of the kinetic behavior of oligonucleotides at the sensor surface is therefore of paramount importance for improving existing detection schemes. Recently, surface plasmon field-enhanced fluorescence spectroscopy (SPFS) has been developed. It is a kinetic analysis and detection method that operates with a dual read-out of the interfacial phenomena, i.e., the change in reflectivity and the fluorescence signal. Using SPFS, kinetic measurements can be carried out at the sensor surface for the hybridization between DNA and peptide nucleic acid (PNA), a synthetic nucleic acid that mimics DNA and forms a more stable double helix. By means of single, global and titration experiments with both a fully complementary sequence and a mismatch sequence, the rate constants for the binding of oligomer DNA targets and PCR targets to PNA can be determined on the basis of the Langmuir model. Furthermore, the influence of ionic strength and temperature on PNA/DNA hybridization was investigated in a kinetic analysis.
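A minimal sketch of fitting Langmuir 1:1 association kinetics to synthetic SPFS traces; in practice k_on and k_off are resolved via a titration over several target concentrations, as the abstract describes, and all values below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir_association(t, amplitude, k_obs):
    """Langmuir 1:1 association: signal(t) = A*(1 - exp(-k_obs*t)),
    where the observed rate is k_obs = k_on*C + k_off for bulk target
    concentration C. A titration over several C values separates
    k_on (slope) from k_off (intercept) in a k_obs-vs-C plot."""
    return amplitude * (1.0 - np.exp(-k_obs * t))

rng = np.random.default_rng(5)
t = np.linspace(0, 600, 120)                          # time [s]
concs = np.array([2e-9, 5e-9, 1e-8, 2e-8])            # target conc. [M]
k_on, k_off = 1.0e5, 1.0e-3                           # "true" constants

k_obs_fit = []
for c in concs:
    trace = langmuir_association(t, 1.0, k_on * c + k_off)
    trace += 0.01 * rng.normal(size=t.size)           # measurement noise
    popt, _ = curve_fit(langmuir_association, t, trace, p0=[0.8, 1e-3])
    k_obs_fit.append(popt[1])

# Linear fit k_obs = k_on*C + k_off across the titration
slope, intercept = np.polyfit(concs, k_obs_fit, 1)
print(f"k_on ≈ {slope:.2e} 1/(M s), k_off ≈ {intercept:.2e} 1/s")
```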