Abstract:
Integrin transmembrane receptor functions are regulated by adaptor molecules binding to their alpha and beta subunit intracellular domains, or tails, thus affecting integrin traffic and adhesion during processes such as cell motility. Interestingly, many cellular proteins function in both cell motility and cell division, raising the possibility that integrins might be involved in regulating the cell cycle. A thorough understanding of cell division is essential in cell biology and in human malignancies. It is well established that failure to complete the cell cycle can give rise to genetically unstable cells with tumorigenic properties. Transformed cells promote the disruption of intercellular adhesions such as tight junctions, and this correlates with the onset of cell motility, invasion and unfavorable prognosis in cancer. In this study, we analyzed integrin regulation, mediated by adaptor binding to the subunit tail, during cell motility and cell division. We revealed a novel molecular mechanism by which Rab21, through association with the integrin alpha subunits, drives integrin endosomal traffic during the mitotic phases. In addition, we found support for this mechanism in vivo, as RAB21 gene deletions were mapped in ovarian and prostate cancer samples. Importantly, the multinucleated phenotype of cultured ovarian cancer cells could be reverted by Rab21 overexpression. In this thesis work, we also show how the tight junction protein ZO-1 unexpectedly interacts with the alpha5 integrin cytoplasmic domain in lamellipodia to promote cell motility and at the cleavage furrow to support separation of the daughter cells. Formation of the alpha5-ZO-1 complex was dependent on PKC, which regulates ZO-1 phosphorylation and its subcellular localization. In addition, using an in situ detection method, we showed that a subset of metastatic human lung cancers expressed the alpha5beta1-ZO-1 complex. Taken together, we were able to identify new molecular pathways that regulate integrin functions in an alpha tail-mediated fashion. These findings strongly suggest that genetic alterations affecting integrin traffic may promote tumorigenesis as a result of failed cell division. Moreover, the interplay of integrins and ZO-1 in forming spatially regulated adhesive structures broadens our view of the crosstalk between pathways and distinct adhesive structures that can be involved in cancer cell biology.
Abstract:
The drug discovery process is facing new challenges in the evaluation of lead compounds as the number of newly synthesized compounds increases. The potency of test compounds is most frequently assayed through binding of the test compound to the target molecule or receptor, or by measuring functional secondary effects caused by the test compound in target model cells, tissues or organisms. Modern homogeneous high-throughput screening (HTS) assays for purified estrogen receptors (ER) utilize various luminescence-based detection methods. Fluorescence polarization (FP) is a standard method for ER ligand binding assays. It was used here to compare the performance of two-photon excitation of fluorescence (TPFE) with the conventional one-photon excitation method. The TPFE method showed improved dynamics, was comparable with the conventional method, and held potential for efficient miniaturization. Other luminescence-based ER assays utilize energy transfer from a long-lifetime luminescent label, e.g. lanthanide chelates (Eu, Tb), to a prompt luminescent label, with the signal read in a time-resolved mode. As an alternative to this method, a new single-label (Eu) time-resolved detection method was developed, based on quenching of the label by a soluble quencher molecule when the labeled ligand is displaced from the receptor into the solution phase by an unlabeled competing ligand. The new method was run in parallel with the standard FP method; it yielded comparable results and a significantly higher signal-to-background ratio than FP. Cell-based functional assays for determining the extent of cell surface adhesion molecule (CAM) expression, combined with microscopy analysis of the target molecules, would provide improved information content compared to an expression-level assay alone. In this work, an immune response was simulated by exposing endothelial cells to cytokine stimulation, and the resulting increase in adhesion molecule expression was analyzed on fixed cells by immunocytochemistry utilizing specific long-lifetime luminophore-labeled antibodies against the chosen adhesion molecules. The results showed that the method is suitable for a multi-parametric assay of the protein expression levels of several CAMs simultaneously, combined with analysis of the cellular localization of the chosen adhesion molecules through time-resolved luminescence microscopy.
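As a brief illustration of the FP readout discussed above, the sketch below computes polarization in millipolarization units (mP) from parallel and perpendicular intensity channels, together with a simple signal-to-background ratio; the intensity values and G-factor are hypothetical and not taken from the study.

```python
# Minimal sketch: fluorescence polarization (FP) readout from two intensity
# channels, as used in ER ligand-binding assays. Values and the G-factor are
# illustrative, not from the study.

def polarization_mP(i_parallel: float, i_perpendicular: float, g_factor: float = 1.0) -> float:
    """Return polarization in millipolarization units (mP):
    P = (I_par - G * I_perp) / (I_par + G * I_perp), reported as P * 1000."""
    i_perp = g_factor * i_perpendicular
    return 1000.0 * (i_parallel - i_perp) / (i_parallel + i_perp)

def signal_to_background(signal: float, background: float) -> float:
    """Simple signal-to-background ratio used to compare detection schemes."""
    return signal / background

if __name__ == "__main__":
    # Hypothetical bound vs. displaced tracer readings (arbitrary counts).
    bound_mP = polarization_mP(i_parallel=12000, i_perpendicular=7000)
    free_mP = polarization_mP(i_parallel=9000, i_perpendicular=8200)
    print(f"bound tracer: {bound_mP:.0f} mP, displaced tracer: {free_mP:.0f} mP")
    print(f"S/B of a time-resolved read: {signal_to_background(50000, 400):.0f}")
```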
Abstract:
Campylobacter jejuni and C. coli have been associated with gastrointestinal disorders in human beings, due mainly to the consumption of chicken meat. Despite control measures for reducing contamination by these bacteria, the frequency of Campylobacter detection in carcasses after chilling remains high. A total of 105 carcasses were assessed by the horizontal detection method in five federally inspected slaughterhouses in southern Brazil in 2012 and in the first three months of 2013. Campylobacter was isolated from 37.1% of the carcasses, of which 97.5% contained C. jejuni and 2.5% were infected by C. coli. The rate of positive carcasses across the slaughterhouses ranged from 0 to 71.4%. Determining the occurrence of Campylobacter among flocks is crucial for estimating the microbial load at specific points along the slaughtering process and for minimizing the risk of contamination of end products by Campylobacter.
Abstract:
The papermaking industry has been continuously developing intelligent solutions to characterize the raw materials it uses, to control the manufacturing process in a robust way, and to guarantee the desired quality of the end product. Based on much improved imaging techniques and image-based analysis methods, it has become possible to look inside the manufacturing pipeline and propose more effective alternatives to human expertise. This study focuses on the development of image analysis methods for the pulping process of papermaking. Pulping starts with wood disintegration and formation of the fiber suspension, which is subsequently bleached, mixed with additives and chemicals, and finally dried and shipped to the papermaking mills. At each stage of the process it is important to analyze the properties of the raw material to guarantee the product quality. In order to evaluate the properties of fibers, the main component of the pulp suspension, a framework for fiber characterization based on microscopic images is proposed in this thesis as the first contribution. The framework allows computation of fiber length and curl index, correlating well with the ground-truth values. The bubble detection method, the second contribution, was developed in order to estimate the gas volume at the delignification stage of the pulping process based on high-resolution in-line imaging. The gas volume was estimated accurately and the solution enabled just-in-time process termination, whereas accurate estimation of bubble size categories remained challenging. As the third contribution of the study, optical flow computation was studied and the methods were successfully applied to pulp flow velocity estimation based on double-exposed images. Finally, a framework for classifying dirt particles in dried pulp sheets, including semisynthetic ground-truth generation, feature selection, and performance comparison of state-of-the-art classification techniques, was proposed as the fourth contribution. The framework was successfully tested on semisynthetic and real-world pulp sheet images. These four contributions assist in developing an integrated, factory-level, vision-based process control.
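To make the fiber measures above concrete, here is a minimal sketch (not the thesis implementation) that computes fiber length and a commonly used curl index, contour length over end-to-end distance minus one, from a skeletonized fiber centerline given as pixel coordinates.

```python
# Minimal sketch: fiber length and curl index from a skeletonized fiber
# centerline given as (x, y) pixel coordinates. Illustrative only; the thesis
# framework is based on microscopic images and its own processing chain.
import math
from typing import List, Tuple

def contour_length(points: List[Tuple[float, float]]) -> float:
    """Summed length of the polyline through the centerline points."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def curl_index(points: List[Tuple[float, float]]) -> float:
    """Common curl definition: contour length / end-to-end distance - 1."""
    end_to_end = math.dist(points[0], points[-1])
    return contour_length(points) / end_to_end - 1.0

if __name__ == "__main__":
    # A curved synthetic fiber (pixel units).
    fiber = [(x, 0.02 * x ** 2) for x in range(0, 101, 5)]
    print(f"length: {contour_length(fiber):.1f} px, curl: {curl_index(fiber):.3f}")
```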
Abstract:
Conventional diagnostic tests and technologies typically allow only a single analysis and result per test. The aim of this study was to propose robust and multiplex array-in-well test platforms based on oligonucleotide and protein arrays, combining the advantages of simple instrumentation and upconverting phosphor (UCP) reporter technology. UCPs are luminescent lanthanide-doped crystals that have a unique capability to convert infrared radiation into visible light. No autofluorescence is produced from the sample under infrared excitation, enabling the development of highly sensitive assays. In this study, an oligonucleotide array-in-well hybridization assay was developed for the detection and genotyping of human adenoviruses. The study provided verification of the advantages and potential of UCP-based reporter technology in multiplex assays, as well as of anti-Stokes photoluminescence detection with a new anti-Stokes photoluminescence imager. The developed assay was technically improved and used to detect and genotype adenovirus types from clinical specimens. Based on the results of the epidemiological study, an outbreak of adenovirus type B03 was observed in the autumn of 2010. A quantitative array-in-well immunoassay was developed for three target analytes (prostate specific antigen, thyroid stimulating hormone, and luteinizing hormone). Quantitative results were obtained for each analyte and the analytical sensitivities in buffer were in a clinically relevant range. Another protein-based array-in-well assay was developed for multiplex serodiagnostics. The developed assay was able to detect parvovirus B19 IgG and adenovirus IgG antibodies simultaneously from serum samples, in agreement with reference assays. The study demonstrated that UCP technology is a robust detection method for diverse multiplex imaging-based array-in-well assays.
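Quantitative immunoassays such as the PSA/TSH/LH panel described above are conventionally calibrated with a sigmoidal standard curve. The sketch below fits a four-parameter logistic (4PL) calibration and inverts it to read an unknown sample; the calibrator concentrations and signals are invented for illustration and are not data from the study.

```python
# Minimal sketch: four-parameter logistic (4PL) calibration, the standard
# curve shape for quantitative immunoassays. All numbers are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL curve: a = response at zero dose, d = response at infinite dose,
    c = inflection point (EC50-like), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

conc = np.array([0.1, 0.5, 2.0, 10.0, 50.0, 250.0])       # calibrators, hypothetical units
signal = np.array([120, 480, 1900, 8200, 30500, 61000])   # counts, hypothetical

params, _ = curve_fit(four_pl, conc, signal, p0=[100, 1.0, 10.0, 70000],
                      bounds=([0, 0.1, 0.01, 0], [1e4, 5, 1e3, 1e6]))

def concentration_from_signal(y, a, b, c, d):
    """Invert the fitted 4PL to read an unknown sample off the calibration curve."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print("fitted 4PL parameters:", params)
print("unknown at 5000 counts:", concentration_from_signal(5000, *params))
```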
Abstract:
The JAK2/STAT3 signaling pathway is an important component of the survivor activating factor enhancement (SAFE) pathway. The objective of the present study was to determine whether the JAK2/STAT3 signaling pathway participates in hydrogen sulfide (H2S) postconditioning, protecting isolated rat hearts from ischemia-reperfusion injury. Male Sprague-Dawley rats (230-270 g) were divided into 6 groups (N = 14 per group): time-matched perfusion (Sham) group, ischemia/reperfusion (I/R) group, NaHS postconditioning group, NaHS with AG-490 group, AG-490 (5 µM) group, and dimethyl sulfoxide (DMSO; <0.2%) group. Langendorff-perfused rat hearts, with the exception of the Sham group, were subjected to 30 min of ischemia followed by 90 min of reperfusion after 20 min of equilibration. Heart rate, left ventricular developed pressure (LVDP), left ventricular end-diastolic pressure (LVEDP), and the maximum rate of increase or decrease of left ventricular pressure (±dp/dt max) were recorded. Infarct size was determined using triphenyltetrazolium chloride (TTC) staining. Myocardial TUNEL staining was used as the in situ cell death detection method, and the percentage of TUNEL-positive nuclei among all nuclei counted was used as the apoptotic index. The expression of STAT3, bcl-2 and bax was determined by Western blotting. After reperfusion, compared to the I/R group, H2S significantly improved functional recovery and decreased infarct size (23.3 ± 3.8 vs 41.2 ± 4.7%, P < 0.05) and the apoptotic index (22.1 ± 3.6 vs 43.0 ± 4.8%, P < 0.05). However, the H2S-mediated protection was abolished by AG-490, a JAK2 inhibitor. In conclusion, H2S postconditioning effectively protects isolated rat hearts from I/R injury via activation of the JAK2/STAT3 signaling pathway.
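As a small worked example of the apoptosis readout reported above, the sketch below computes the apoptotic index (TUNEL-positive nuclei as a percentage of all counted nuclei) per heart and a simple two-group comparison; the nuclei counts are invented and the generic Welch's t statistic is not necessarily the test used in the study.

```python
# Minimal sketch: apoptotic index per heart and a two-group comparison.
# Nuclei counts below are invented for illustration.
from statistics import mean, stdev
from math import sqrt

def apoptotic_index(tunel_positive: int, total_nuclei: int) -> float:
    """Percentage of TUNEL-positive nuclei among all counted nuclei."""
    return 100.0 * tunel_positive / total_nuclei

# Invented per-heart counts (TUNEL-positive, total nuclei).
ir_counts = [(430, 1000), (405, 950), (455, 1020), (440, 1010)]
nahs_counts = [(225, 1000), (210, 960), (230, 1040), (205, 990)]

ir = [apoptotic_index(p, t) for p, t in ir_counts]
nahs = [apoptotic_index(p, t) for p, t in nahs_counts]

def welch_t(a, b):
    """Welch's t statistic for two independent groups."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

print(f"I/R: {mean(ir):.1f}%, NaHS: {mean(nahs):.1f}%, Welch t = {welch_t(ir, nahs):.2f}")
```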
Abstract:
Rare-earth based upconverting nanoparticles (UCNPs) have attracted much attention due to their unique luminescent properties. The ability to convert multiple photons of lower energy into one photon of higher energy through an upconversion (UC) process offers a wide range of applications for UCNPs. The emission intensities and wavelengths of UCNPs are important performance characteristics that determine the appropriate applications. However, insufficient intensities still limit the use of UCNPs; in particular, efficient emission of blue and ultraviolet (UV) light via upconversion remains challenging, as these events require three or more near-infrared (NIR) photons. The aim of this study was to enhance the blue and UV upconversion emission intensities of Tm3+ doped NaYF4 nanoparticles and to demonstrate their utility in in vitro diagnostics. As the distance between the sensitizer and the activator significantly affects the energy transfer efficiency, different strategies were explored to change the local symmetry around the doped lanthanides. One important strategy is the intentional co-doping of active (participating in energy transfer) or passive (not participating in energy transfer) impurities into the host matrix. The roles of doped passive impurities (K+ and Sc3+) in enhancing the blue and UV upconversion, as well as the effect of excess sensitization (an active impurity) on the intense UV upconversion emission, were studied. Additionally, the effects of both active and passive impurity doping on the morphological and optical performance of UCNPs were investigated. The applicability of UV-emitting UCNPs as an internal light source for glucose sensing in a dry chemistry test strip was demonstrated. The measurements were in agreement with the traditional method based on reflectance measurements using an external UV light source. The use of UCNPs in the glucose test strip offers an alternative detection method with advantages such as control signals for minimizing errors and high penetration of the NIR excitation through the blood sample, which gives more freedom for designing the optical setup. In bioimaging, excitation of the UCNPs in the transparent IR region of tissue permits measurements that are free of background fluorescence and have a high signal-to-background ratio. In addition, the narrow emission bandwidth of UCNPs enables multiplexed detection. An array-in-well immunoassay was developed using two different UC emission colours. Differentiation between viral infections and classification of antibody responses were achieved based on both the position and the colour of the signal. The study demonstrates the potential of spectral and spatial multiplexing in imaging-based array-in-well assays.
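The statement that blue and UV upconversion requires three or more NIR photons is commonly checked from the power dependence of the emission, I ∝ P^n, whose log-log slope estimates the photon number n. The sketch below illustrates this with synthetic data; it is not a measurement from the study.

```python
# Minimal sketch: estimating the number of NIR photons n behind an
# upconversion emission band from the power dependence I ~ P**n, using the
# slope of a log-log fit. Data below are synthetic, not from the study.
import numpy as np

power = np.array([10.0, 20.0, 40.0, 80.0, 160.0])   # excitation power, arbitrary units
noise = 1 + 0.05 * np.random.default_rng(0).standard_normal(5)
intensity = 3e-4 * power ** 3.1 * noise              # synthetic three-photon band

slope, intercept = np.polyfit(np.log(power), np.log(intensity), 1)
print(f"estimated photon number n ~ {slope:.2f}")    # ~3 for a blue/UV band
```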
Abstract:
Intelligence from a human source that is falsely thought to be true is potentially more harmful than a total lack of it. The veracity assessment of gathered intelligence is one of the most important phases of the intelligence process. Lie detection and veracity assessment methods have been studied widely, but a comprehensive analysis of these methods' applicability is lacking. There are some problems related to the efficacy of lie detection and veracity assessment. According to a common belief, an almighty lie detection method exists that is almost 100% accurate and suitable for any social encounter. However, scientific studies have shown that this is not the case, and popular approaches are often oversimplified. The main research question of this study was: What is the applicability of veracity assessment methods, which are reliable and based on scientific proof, in terms of the following criteria?
o Accuracy, i.e. the probability of detecting deception successfully
o Ease of use, i.e. how easily the method can be applied correctly
o Time required to apply the method reliably
o No need for special equipment
o Unobtrusiveness of the method
In order to answer the main research question, the following supporting research questions were answered first: What kinds of interviewing and interrogation techniques exist and how could they be used in the intelligence interview context? What kinds of lie detection and veracity assessment methods exist that are reliable and based on scientific proof, and what kinds of uncertainty and other limitations are included in these methods? Two major databases, Google Scholar and Science Direct, were used to search and collect existing topic-related studies and other papers. After the search phase, an understanding of the existing lie detection and veracity assessment methods was established through a meta-analysis. A Multi-Criteria Analysis utilizing the Analytic Hierarchy Process was conducted to compare scientifically valid lie detection and veracity assessment methods in terms of the assessment criteria. In addition, a field study was arranged to gain first-hand experience of the applicability of different lie detection and veracity assessment methods. The Studied Features of Discourse and the Studied Features of Nonverbal Communication gained the highest ranking in overall applicability. They were assessed to be the easiest and fastest to apply, and to have the required temporal and contextual sensitivity. The Plausibility and Inner Logic of the Statement, the Method for Assessing the Credibility of Evidence and the Criteria-Based Content Analysis were also found to be useful, but with some limitations. The Discourse Analysis and the Polygraph were assessed to be the least applicable. Results from the field study support these findings. However, it was also discovered that even the most applicable methods are not entirely trouble-free. In addition, this study highlighted that three channels of information, Content, Discourse and Nonverbal Communication, can be subjected to veracity assessment methods that are scientifically defensible. There is at least one reliable and applicable veracity assessment method for each of the three channels. All of the methods require disciplined application and a scientific working approach. There are no quick gains if high accuracy and reliability are desired.
Since most current lie detection studies are concentrated around a scenario in which roughly half of the assessed people are totally truthful and the other half are liars who present a well-prepared cover story, it is proposed that in future studies lie detection and veracity assessment methods be tested against partially truthful human sources. This kind of test setup would highlight new challenges and opportunities for the use of existing and widely studied lie detection methods, as well as for modern ones that are still under development.
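The Multi-Criteria Analysis mentioned above relies on the Analytic Hierarchy Process (AHP). The sketch below shows the standard AHP steps, deriving criterion weights from the principal eigenvector of a pairwise comparison matrix and checking Saaty's consistency ratio; the comparison values for the five criteria are invented for illustration and do not reproduce the thesis rankings.

```python
# Minimal sketch of the Analytic Hierarchy Process (AHP): weights from the
# principal eigenvector of a pairwise comparison matrix plus Saaty's
# consistency ratio. Comparison values are invented for illustration.
import numpy as np

criteria = ["Accuracy", "Ease of use", "Time required",
            "No special equipment", "Unobtrusiveness"]

# A[i, j] = how much more important criterion i is than criterion j (1-9 scale).
A = np.array([
    [1,   3,   5,   5,   7],
    [1/3, 1,   3,   3,   5],
    [1/5, 1/3, 1,   1,   3],
    [1/5, 1/3, 1,   1,   3],
    [1/7, 1/5, 1/3, 1/3, 1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)   # consistency index
cr = ci / 1.12                         # Saaty's random index for n = 5

for name, w in zip(criteria, weights):
    print(f"{name:22s} {w:.3f}")
print(f"consistency ratio = {cr:.3f} (values below ~0.10 are usually accepted)")
```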
Abstract:
Quantitative real-time polymerase chain reaction (PCR) has proven to be the most user-friendly method for quantifying nucleic acid sequences. The method can be made more sensitive to lower DNA concentrations by exploiting time-resolved fluorometry (TRF) and luminescent lanthanide labels: thanks to the long fluorescence lifetime of the labels, the emission can be measured a short time after the excitation light pulse, once the short-lived background has decayed, which yields a high signal-to-background ratio. The aim of this Master's thesis was to build a real-time PCR instrument capable of TRF, as no such instrument is commercially available. The instrument was built by developing a thermal cycler and combining it with an existing TRF-capable measurement head. A computer program was also developed to control the measurement head and the thermal cycler. Inexpensive components were preferred for producing and measuring light, so the optics of the existing measurement head were used, in which excitation is performed with a light-emitting diode (LED) and the lanthanide label emission is measured with a photodiode (PD) or a photomultiplier tube (PMT). The performance of the measurement head was also investigated. The thermal cycler developed for this work consisted of a PCR tube holder heated by a Peltier element and a heated lid. The performance of the instrument was evaluated using a PCR product detection method based on chelate complementation. Chelate complementation relies on two separate oligonucleotide molecules, one carrying a lanthanide ion and the other a light-absorbing ligand structure, which together form a fluorescent complex. The developed thermal cycler was found to be sufficiently accurate and powerful, with maximum heating and cooling rates of 2.6 °C/second. The PD used as a detector was not sensitive enough to detect the emission and was replaced with a PMT. With the PCR assay used, a threshold cycle (Ct) of 28.4 was obtained for both the developed instrument and the reference instrument using the same starting amount of 100,000 DNA copies. The work demonstrated that it is possible to build a TRF-capable real-time PCR instrument from inexpensive components that achieves a Ct value equivalent to that of a reference instrument, although the sensitivity of the PD was not sufficient. The results are promising, as LED and PD technologies continue to develop and other components have entered the market that may enable an even more sensitive instrument in the future.
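The threshold cycle (Ct) compared above is simply the fractional cycle at which the background-corrected amplification signal crosses a fixed threshold. The sketch below illustrates the calculation on a synthetic amplification curve; it is not the software developed for the instrument.

```python
# Minimal sketch: threshold cycle (Ct) as the fractional cycle at which a
# background-corrected amplification curve crosses a fixed threshold.
# The synthetic sigmoid below stands in for real fluorescence readings.
import numpy as np

cycles = np.arange(1, 41)
signal = 1.0 / (1.0 + np.exp(-(cycles - 30.0) / 1.5))   # synthetic amplification curve
baseline = signal[:15].mean()                           # early-cycle background
corrected = signal - baseline

def threshold_cycle(cycles, corrected, threshold):
    """Linear interpolation of the first threshold crossing."""
    i = np.where(corrected >= threshold)[0][0]
    x0, x1 = cycles[i - 1], cycles[i]
    y0, y1 = corrected[i - 1], corrected[i]
    return x0 + (threshold - y0) * (x1 - x0) / (y1 - y0)

print(f"Ct = {threshold_cycle(cycles, corrected, threshold=0.2):.1f}")
```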
Abstract:
This thesis investigates how macroeconomic news announcements affect jumps and cojumps in foreign exchange markets, especially under different business cycles. We use 5-minute high-frequency data on Euro/Dollar, Pound/Dollar and Yen/Dollar from Nov. 1, 2004 to Feb. 28, 2015. The jump detection method was proposed by Andersen et al. (2007c) and Lee & Mykland (2008) and then modified by Boudt et al. (2011a) for robustness. We then apply the two-regime smooth transition regression model of Teräsvirta (1994) to explore news effects under different business cycles. We find that scheduled news related to employment, real activity, forward expectations, monetary policy, the current account, prices and consumption influences forex jumps, but only FOMC Rate Decisions have consistent effects on cojumps. Speeches given by major central bank officials near a crisis also significantly affect jumps and cojumps. However, the impacts of some macroeconomic news are not the same under different economic states.
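For intuition, the sketch below flags intraday jumps in the spirit of Lee & Mykland (2008) by standardizing each return with a local bipower-variation volatility estimate; it deliberately omits the exact finite-sample constants and the robustness refinements of Boudt et al. (2011a) used in the thesis, and the returns are synthetic.

```python
# Minimal, simplified sketch of intraday jump flagging: standardize each
# 5-minute return by a local bipower-based volatility estimate and flag
# unusually large ratios. Returns below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
returns = 0.0005 * rng.standard_normal(2000)
returns[700] += 0.01                        # implant one artificial jump

def flag_jumps(r: np.ndarray, window: int = 270, k: float = 4.0) -> np.ndarray:
    """Boolean mask of observations whose |r| exceeds k times the local
    bipower volatility estimated over the preceding `window` returns."""
    flags = np.zeros(r.size, dtype=bool)
    for i in range(window, r.size):
        w = r[i - window:i]
        bipower = (np.pi / 2.0) * np.mean(np.abs(w[1:]) * np.abs(w[:-1]))
        flags[i] = abs(r[i]) > k * np.sqrt(bipower)
    return flags

print("flagged indices:", np.where(flag_jumps(returns))[0])
```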
Abstract:
Nowadays, land-use/land-cover (LULC) maps at a regional scale are usually generated from satellite images of moderate resolution (between 10 m and 30 m). The National Land Cover Database in the United States and the CORINE (Coordination of Information on the Environment) Land Cover programme in Europe, both based on LANDSAT images, are representative examples. However, these maps quickly become obsolete, especially in dynamic environments such as megacities and metropolitan territories. For many applications, these maps must be updated on an annual basis. Since 2007, the USGS has provided free access to ortho-rectified LANDSAT images, including both archived images (back to 1984) and recently acquired ones. Such image availability will undoubtedly stimulate research into fast and efficient methods and techniques for continuous monitoring of LULC changes from medium-resolution images. This research aimed to evaluate the potential of such medium-resolution satellite images for obtaining information on LULC changes at a regional scale, in the case of the Montreal Metropolitan Community (CMM), a typical North American metropolis. Previous studies have shown that the results of automatic change detection depend on several factors: 1) the characteristics of the images (spatial resolution, spectral bands, etc.); 2) the change detection method itself; and 3) the complexity of the studied environment. In the study area, with the exception of the downtown core and commercial arteries, land uses (industrial, commercial, residential, etc.) are well delimited. This study therefore focused on the other factors that can affect the results, namely the image characteristics and the change detection methods. We used LANDSAT TM/ETM+ images at 30 m spatial resolution with six spectral bands, as well as ASTER-VNIR images at 15 m spatial resolution with three spectral bands, to evaluate the impact of image characteristics on change detection results. With respect to the change detection method, we compared two types of automatic techniques: (1) techniques providing information mainly on the location of changes, and (2) techniques providing information on both the location of changes and the type of change ("from-to" classes). The main conclusions of this research are as follows. Change detection techniques such as image differencing or change vector analysis applied to multi-temporal LANDSAT images provide an accurate picture of where change has occurred in a fast and efficient way. They can therefore be integrated into a continuous monitoring system for quickly assessing the volume of change. The change maps can also serve as a guide for acquiring high spatial resolution images when detailed identification of the type of change is required.
Change detection techniques such as principal component analysis and post-classification comparison applied to multi-temporal LANDSAT images provide a relatively accurate picture of the "from-to" classes, but at a very general thematic level (for example, built-up to green space and vice versa, woodland to bare soil and vice versa, etc.). The ASTER-VNIR images, with better spatial resolution but fewer spectral bands than LANDSAT, do not offer a more detailed thematic level (for example, woodland to commercial or industrial space). The results indicate that future research on change detection in urban environments should focus on changes in vegetation cover, since medium-resolution images are highly sensitive to changes in this type of cover. Maps indicating the location and type of vegetation-cover changes are in themselves very useful for applications such as environmental monitoring or urban hydrology. They can also serve as indicators of land-use change. Techniques such as change vector analysis or vegetation indices are used for this purpose.
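As an illustration of the "where did it change" techniques discussed above, the sketch below performs a basic change vector analysis: the per-pixel magnitude of the spectral difference between two co-registered multi-band images is thresholded into a change mask. The arrays are synthetic stand-ins for LANDSAT bands, and the global threshold is a simplification.

```python
# Minimal sketch of change vector analysis (CVA): per-pixel magnitude of the
# spectral difference between two co-registered multi-band images, thresholded
# into a change mask. Synthetic arrays stand in for LANDSAT TM/ETM+ bands.
import numpy as np

rng = np.random.default_rng(0)
t1 = rng.random((6, 200, 200)).astype(np.float32)    # bands x rows x cols, date 1
t2 = t1 + 0.02 * rng.standard_normal(t1.shape).astype(np.float32)
t2[:, 50:80, 50:80] += 0.5                           # simulate a changed patch

magnitude = np.sqrt(((t2 - t1) ** 2).sum(axis=0))    # change vector magnitude per pixel
threshold = magnitude.mean() + 3 * magnitude.std()   # simple global threshold
change_mask = magnitude > threshold

print(f"changed pixels: {change_mask.sum()} of {change_mask.size}")
```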
Abstract:
The present investigation was designed to determine the prevalence of Salmonella in seafood from the Cochin area and to identify the different Salmonella serovars present. Although the distribution of Salmonella serovars in different seafood samples from Cochin has been well documented, the present attempt was made to identify the different Salmonella serovars and determine their prevalence in various seafoods. The first part of this investigation involved the isolation and identification of Salmonella strains with the help of different conventional culture methods. The identified isolates were used for further investigation, i.e. serotyping, which provides information about the prevalent serovars in seafood. The prevalent Salmonella strains were further characterized based on the utilization of different sugars and amino acids to identify the different biovars of a serovar. A major research gap was observed in the molecular characterization of Salmonella in seafood. Although previous investigations reported a large number of Salmonella serovars from food sources in India, very little work has been reported on the genetic characterization of Salmonella serovars associated with food. The second part of this thesis deals with different molecular fingerprint profiles of the Salmonella serovars from seafood. Various molecular typing methods such as plasmid profiling, characterization of virulence genes, PFGE, PCR-ribotyping, and ERIC-PCR were used for the genetic characterization of Salmonella serovars. Conventional culture methods are mainly used for the identification of Salmonella in seafood, and most investigations from India and abroad have relied on culture methods for the detection of Salmonella in seafood. Hence, the development of an indigenous, rapid molecular method is most desirable for screening large numbers of seafood samples for Salmonella within a shorter time period. The final part of this study attempted to develop an alternative, rapid molecular detection method for Salmonella in seafood. A rapid eight-hour PCR assay was developed for the detection of Salmonella in seafood. The performance of three different methods, viz. culture, ELISA and PCR assays, was evaluated for the detection of Salmonella in seafood and the results were statistically analyzed. The presence of Salmonella cells in food and environmental samples has been reported to be low in number; hence, a more sensitive method for the enumeration of Salmonella in food samples needs to be developed. A quantitative real-time PCR was developed for the detection of Salmonella in seafood. This method would be useful for the quantitative detection of Salmonella in seafood.
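One standard way to compare a rapid assay against the culture reference, as done in the evaluation described above, is a 2x2 table yielding sensitivity, specificity and agreement. The sketch below shows the arithmetic with invented counts; it does not reproduce the thesis results.

```python
# Minimal sketch: comparing a rapid detection assay (e.g. an eight-hour PCR)
# against the culture reference method via a 2x2 table. Counts are invented.
def evaluate(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Diagnostic performance relative to the reference method."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                              # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2   # chance agreement
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "agreement": po,
        "kappa": (po - pe) / (1 - pe),
    }

# Hypothetical counts: test result (rows) vs. culture result (columns).
print(evaluate(tp=18, fp=3, fn=1, tn=78))
```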
Abstract:
Among the large number of photothermal techniques available, photoacoustics assumes a very significant place because of its essential simplicity and the variety of applications it finds in science and technology. The photoacoustic (PA) effect is the generation of an acoustic signal when a sample, kept inside an enclosed volume, is irradiated by an intensity-modulated beam of radiation. The radiation absorbed by the sample is converted into thermal waves by nonradiative de-excitation processes. The propagating thermal waves cause a corresponding expansion and contraction of the gas medium surrounding the sample, which in turn can be detected as sound waves by a sensitive microphone. These sound waves have the same frequency as the initial modulation frequency of the light. A lock-in detection method enables a sufficiently high signal-to-noise ratio for the detected signal. The PA signal amplitude depends on the optical absorption coefficient of the sample and its thermal properties, while the PA signal phase is a function of the thermal diffusivity of the sample. Measurement of the PA amplitude and phase therefore provides valuable information about the thermal and optical properties of the sample. Since the PA signal depends on the optical and thermal properties of the sample, their variation will be reflected in the PA signal. Therefore, if the PA signal is collected from various points on a sample surface, it will give a profile of the variations in the optical and thermal properties across the sample surface. Since the optical and thermal properties are affected by the presence of defects, interfaces, changes of material etc., these will also be reflected in the PA signal. By varying the modulation frequency, information about subsurface features can be obtained as well. This is the basic principle of PA imaging or PA depth profiling. It is a quickly expanding field with potential applications in thin film technology, chemical engineering, biology, medical diagnosis etc. Since it is a non-destructive method, PA imaging has added advantages over some other imaging techniques. A major part of the work presented in this thesis is concerned with the development of a PA imaging setup that can be used to detect the presence of surface and subsurface defects in solid samples. Determination of thermal transport properties such as thermal diffusivity, effusivity, conductivity and heat capacity of materials is another application of the photothermal effect. There are various methods, depending on the nature of the sample, to determine these properties. However, only a few methods have been developed to determine all these properties simultaneously. Even though a few techniques to determine the above thermal properties individually for a coating can be found in the literature, no technique is available for the simultaneous measurement of these parameters for a coating. We have developed a scanning photoacoustic technique that can be used to determine all the above thermal transport properties simultaneously for opaque coatings such as paints. Another study presented in this thesis is the determination of the thermal effusivity of several bulk solids by a scanning photoacoustic technique; this is one of the very few methods developed to determine thermal effusivity directly.
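The depth-profiling principle mentioned above rests on the thermal diffusion length, mu = sqrt(alpha / (pi * f)), which shrinks as the modulation frequency f increases, so lower frequencies probe deeper. The sketch below evaluates this relation for a typical textbook diffusivity value; it is not a measurement from the thesis.

```python
# Minimal sketch: thermal diffusion length mu = sqrt(alpha / (pi * f)), which
# sets the probing depth in PA depth profiling. The diffusivity below is a
# typical literature figure, not a value measured in this thesis.
import math

def thermal_diffusion_length(alpha_m2_per_s: float, f_hz: float) -> float:
    """Thermal diffusion length in metres for diffusivity alpha and
    modulation frequency f."""
    return math.sqrt(alpha_m2_per_s / (math.pi * f_hz))

alpha_aluminium = 9.7e-5  # m^2/s, typical textbook value
for f in (10, 100, 1000):
    mu_um = 1e6 * thermal_diffusion_length(alpha_aluminium, f)
    print(f"f = {f:5d} Hz -> mu ~ {mu_um:7.1f} um")
```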
Abstract:
Productivity and forage quality of legume-grass swards are important factors for successful arable farming in both organic and conventional farming systems. For these objectives the botanical composition of the swards is of particular importance, especially the content of legumes, due to their ability to fix airborne nitrogen. As this content can vary considerably within a field, a non-destructive detection method that could be applied while performing other field tasks would facilitate a more targeted sward management and could predict the nitrogen supply of the soil for the subsequent crop. This study was undertaken to explore the potential of digital image analysis (DIA) for a non-destructive prediction of the legume dry matter (DM) contribution of legume-grass mixtures. For this purpose an experiment was conducted in a greenhouse, comprising 64 experimental swards, including pure swards of red clover (Trifolium pratense L.), white clover (Trifolium repens L.) and lucerne (Medicago sativa L.) as well as binary mixtures of each legume with perennial ryegrass (Lolium perenne L.). Growth stages ranged from tillering to heading and the proportion of legumes from 0 to 80 %. Based on digital sward images, three steps were considered in order to estimate the legume contribution (% of DM): i) the development of a digital image analysis (DIA) procedure to estimate legume coverage (% of area); ii) the description of the relationship between legume coverage (% of area), derived from digital analysis of legume coverage relative to the green area in a digital image, and legume contribution (% of DM); and iii) the estimation of the legume DM contribution from the findings of i) and ii). i) In order to evaluate the most suitable approach for the estimation of legume coverage by means of DIA, different tools were tested. Morphological operators such as erode and dilate support the differentiation of objects of different shape by shrinking and dilating objects (Soille, 1999). When applied to digital images of legume-grass mixtures, thin grass leaves were removed whereas rounder clover leaves were retained. After this process, legume leaves were identified by threshold segmentation. Segmentation of greyscale images turned out not to be applicable, since the separation of legumes from bare soil failed. The advanced procedure, comprising morphological operators and HSL colour information, could determine bare soil areas in young and open swards very accurately. Legume-specific HSL thresholds also allowed precise estimation of legume coverage across a wide range from 11.8 to 72.4 %. Legume coverage estimated with this legume-specific DIA procedure showed good correlations with the measured values across the whole range of sward ages (R2 0.96, SE 4.7 %). A wide range of form parameters (i.e. size, breadth, rectangularity, and circularity of areas) was tested across all sward types, but none improved the prediction accuracy of legume coverage significantly. ii) Using measured reference data of legume coverage and contribution, in a first approach a common relationship based on all three legumes and sward ages of 35, 49 and 63 days was found with R2 0.90. This relationship was improved by a legume-specific approach using only 49- and 63-day-old swards (R2 0.94, 0.96 and 0.97 for red clover, white clover, and lucerne, respectively), since differing structural attributes of the legume species influence the relationship between these two parameters.
In a second approach, biomass was included in the model in order to account for the different structures of swards of different ages. A model was thus developed that provides a closer look at the relationship between legume coverage in binary legume-ryegrass communities and legume contribution: at the same level of legume coverage, legume contribution decreased with increasing total biomass. This phenomenon may be caused by more non-leguminous biomass being covered by legume leaves at high levels of total biomass. Additionally, values of legume contribution and coverage were transformed to the logit scale in order to avoid problems with heteroscedasticity and negative predictions. The resulting relationships between the measured and the calculated legume contribution indicated a high model accuracy for all legume species (R2 0.93, 0.97, 0.98 with SE 4.81, 3.22, 3.07 % of DM for red clover, white clover, and lucerne swards, respectively). Validation of the model using digital images collected over field-grown swards, with biomass ranges within the scope of the model, shows that the model is able to predict legume contribution for most common legume-grass swards (Frame, 1992; Ledgard and Steele, 1992; Loges, 1998). iii) An advanced procedure for the determination of legume DM contribution by DIA is suggested, which includes morphological operators and HSL colour information in the analysis of images and applies an advanced function to predict legume DM contribution from legume coverage by considering total sward biomass. Low residuals between measured and calculated values of legume dry matter contribution were found for the separate legume species (R2 0.90, 0.94, 0.93 with SE 5.89, 4.31, 5.52 % of DM for red clover, white clover, and lucerne swards, respectively). The introduced DIA procedure provides a rapid and precise estimation of legume DM contribution for different legume species across a wide range of sward ages. Further research is needed to adapt the procedure to the field scale, dealing with varying light conditions and potentially taller swards. The integration of total biomass into the model for determining legume contribution does not necessarily reduce its applicability in practice, as a combined estimation of total biomass by field spectroscopy (Biewer et al. 2009) and of legume coverage by DIA may allow an accurate prediction of the legume contribution in legume-grass mixtures.
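To make the image-analysis idea concrete, the sketch below combines a simple HSL-style threshold for green vegetation with a morphological opening that removes thin grass leaves, and reports legume coverage relative to the green area. The thresholds, structuring element and synthetic image are illustrative assumptions, not the parameters used in the study.

```python
# Minimal sketch: HSL threshold for green pixels, morphological opening to
# suppress thin grass leaves, coverage = remaining area / green area.
# Thresholds and the synthetic image are illustrative only.
import numpy as np
from scipy import ndimage

def rgb_to_hsl(img: np.ndarray):
    """Vectorised RGB (floats in 0-1) to hue (degrees), saturation, lightness."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    l = (mx + mn) / 2.0
    c = mx - mn
    s = c / (1.0 - np.abs(2.0 * l - 1.0) + 1e-9)
    h = np.zeros_like(l)
    h = np.where(mx == r, ((g - b) / (c + 1e-9)) % 6.0, h)
    h = np.where(mx == g, (b - r) / (c + 1e-9) + 2.0, h)
    h = np.where(mx == b, (r - g) / (c + 1e-9) + 4.0, h)
    return 60.0 * h, s, l

def legume_coverage(img: np.ndarray) -> float:
    h, s, l = rgb_to_hsl(img)
    green = (h > 60) & (h < 180) & (s > 0.15) & (l > 0.1)   # vegetation vs. soil
    # Opening removes thin grass leaves but keeps rounder legume leaflets.
    legume = ndimage.binary_opening(green, structure=np.ones((7, 7)))
    return 100.0 * legume.sum() / max(green.sum(), 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((120, 120, 3)) * 0.2          # dark, soil-like background
    img[30:70, 30:70] = (0.2, 0.6, 0.2)            # a green "leaf" patch
    print(f"legume coverage of green area: {legume_coverage(img):.1f} %")
```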
Abstract:
Detecting changes between images of the same scene taken at different times is of great interest for monitoring and understanding the environment. It is widely used for on-land applications but suffers from several constraints. Unfortunately, change detection algorithms require highly accurate geometric and photometric registration, a requirement that has precluded their use in underwater imagery in the past. In this paper, the change detection techniques currently available for on-land applications are analyzed, and a method to automatically detect changes in sequences of underwater images is proposed. Target application scenarios are habitat restoration sites, or area monitoring after sudden impacts from hurricanes or ship groundings. The method is based on the creation of a 3D terrain model from one image sequence over an area of interest. This model allows synthesizing textured views that correspond to the same viewpoints as a second image sequence. The generated views are photometrically matched and corrected against the corresponding frames from the second sequence. Standard change detection techniques are then applied to find areas of difference. Additionally, the paper shows that it is possible to detect false positives resulting from non-rigid objects by applying the same change detection method to the first sequence exclusively. The developed method was able to correctly find the changes between two challenging sequences of images of a coral reef taken one year apart and acquired with two different cameras.
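As a rough illustration of the final stage described above, the sketch below photometrically aligns a rendered view with the corresponding real frame using a simple mean/standard-deviation grey-level correction (a stand-in for the paper's photometric matching) and then differences and thresholds the pair into a change mask; all arrays are synthetic.

```python
# Minimal sketch: photometric correction followed by image differencing and
# thresholding to obtain a change mask. Synthetic arrays stand in for the
# underwater frames; the correction is a simplification of the paper's method.
import numpy as np

def photometric_correct(rendered: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Linear grey-level correction so `rendered` matches the reference's
    mean and standard deviation."""
    out = (rendered - rendered.mean()) / (rendered.std() + 1e-9)
    return out * reference.std() + reference.mean()

def change_mask(rendered: np.ndarray, reference: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Absolute difference thresholded at k standard deviations."""
    diff = np.abs(photometric_correct(rendered, reference) - reference)
    return diff > (diff.mean() + k * diff.std())

rng = np.random.default_rng(2)
year1_view = rng.random((100, 100))
year2_frame = 0.8 * year1_view + 0.1 + 0.01 * rng.standard_normal((100, 100))
year2_frame[40:60, 40:60] += 0.5                   # a patch that changed

print("changed pixels:", change_mask(year1_view, year2_frame).sum())
```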