980 results for Mapping time
Abstract:
Cardiac allograft rejection can lead to loss of function. Histological reading of endomyocardial biopsy (EMB) remains the "gold standard" for guiding immunosuppression, despite its methodological limitations (sampling error and interobserver variability). Measurement of the T2 relaxation time has been suggested for the detection of allograft rejection, on the pathophysiological basis that the T2 relaxation time is prolonged by the local edema resulting from acute allograft rejection. Using breath-held cardiac magnetic resonance T2 mapping at 1.5 T, Usman et al. (Circ Cardiovasc Imaging 2012) detected moderate allograft rejection (grade 2R, ISHLT 2004). With modern immunosuppression, grade 2R rejection has become a rare event, but the need remains for a technique that can discriminate between absent (grade 0R) and mild (grade 1R) rejection. We therefore investigated whether an increase of the magnetic field strength to 3 T and the use of real-time navigator-gated respiration compensation allow for the increase in the sensitivity of T2 relaxation time detection that is necessary to achieve this discrimination. Methods: Eighteen patients received EMB (Tan et al., Arch Pathol Lab Med 2007) and cardiac T2 mapping on the same day. Reading of the T2 maps was blinded to the histological results. For the final analysis, 3 cases with known 2R rejection at the time of T2 mapping were added, yielding 21 T2 mapping sessions. A respiration-navigator-gated radial gradient-recalled-echo pulse sequence (resolution 1.17 mm², matrix 256², trigger time 3 heartbeats, T2 preparation durations TE(T2prep) = 60/30/0 ms) was applied to obtain 3 short-axis T2 maps (van Heeswijk et al., JACC Cardiovasc Imaging 2012), which were segmented according to AHA guidelines (Cerqueira et al., Circulation 2001). The highest segmental T2 values were grouped according to histological rejection grade and differences were analyzed by Student's t-test, except for the non-blinded cases with 2R rejection. The degree of discrimination was determined using Spearman's rank correlation test. Results: The high-quality T2 maps allowed for visual differentiation of the rejection degrees (Figure 1), and the correlation of T2 mapping with the histological grade of acute cellular rejection was significant (Spearman's r = 0.56, p = 0.007). The 0R (n = 15) and 1R (n = 3) grades demonstrated significantly different T2 values (46.9 ± 5.0 and 54.3 ± 3.0 ms, p = 0.02, Figure 2). Cases with 2R rejection showed a clear T2 elevation (T2 = 60.3 ± 16.2 ms). Conclusions: This pilot study demonstrates that non-invasive free-breathing cardiac T2 mapping at 3 T discriminates between no and mild cardiac allograft rejection. Confirmation of these encouraging results will require a larger cohort in a study designed to show equivalence or superiority of T2 mapping.
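As a worked sketch of the quantitative step behind such maps, the snippet below fits a pixelwise mono-exponential decay S(TE) = S0·exp(−TE/T2) to images acquired at the three T2-preparation durations quoted above. This assumes the textbook signal model for illustration, not the authors' reconstruction pipeline, and the image stack is simulated.

```python
import numpy as np

# Hedged sketch: pixelwise T2 estimation from three T2-prepared images,
# assuming the mono-exponential model S(TE) = S0 * exp(-TE / T2).
te = np.array([0.0, 30.0, 60.0])                    # T2-prep durations (ms)
true_t2 = 50.0                                      # simulated ground truth
signals = np.exp(-te[:, None, None] / true_t2) * np.ones((3, 256, 256))

# Log-linearise: ln S = ln S0 - TE / T2, then solve per pixel by least squares.
y = np.log(signals.reshape(3, -1))
A = np.stack([np.ones_like(te), -te], axis=1)       # columns: [1, -TE]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
t2_map = (1.0 / coef[1]).reshape(256, 256)          # recovers ~50 ms everywhere
```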
Abstract:
A preliminary understanding of the phenotypic effects of DNA segment copy number variation (CNV) is emerging. These rearrangements have been demonstrated to influence, in a somewhat dose-dependent manner, the expression of genes that map within them. They have also been shown to modify the expression of genes located on their flanks, and sometimes of genes at a great distance from their boundaries. Here we demonstrate, by monitoring these effects at multiple life stages, that these controls over expression are effective throughout mouse development. Similarly, we observe that the more specific spatial expression patterns of CNV genes are maintained throughout life. However, we find that some brain-expressed genes mapping within CNVs appear to be under compensatory loops only at specific time points, indicating that the effect of CNVs on these genes is modulated during development. Notably, we also observe that CNV genes are significantly enriched among transcripts that show variable time courses of expression between strains. Thus, modifying the copy number of a gene may potentially alter not only its expression level, but also the timing of its expression.
Abstract:
This paper presents a novel image classification scheme for benthic coral reef images that can be applied to both single-image and composite mosaic datasets. The proposed method can be configured to the characteristics (e.g., the size of the dataset, number of classes, resolution of the samples, color information availability, class types, etc.) of individual datasets. The proposed method uses completed local binary pattern (CLBP), grey level co-occurrence matrix (GLCM), Gabor filter response, and opponent angle and hue channel color histograms as feature descriptors. For classification, either k-nearest neighbor (KNN), neural network (NN), support vector machine (SVM) or probability density weighted mean distance (PDWMD) is used. The combination of features and classifiers that attains the best results is presented, together with guidelines for selection. The accuracy and efficiency of our proposed method are compared with other state-of-the-art techniques using three benthic and three texture datasets. The proposed method achieves the highest overall classification accuracy of any of the tested methods and has moderate execution time. Finally, the proposed classification scheme is applied to a large-scale image mosaic of the Red Sea to create a completely classified thematic map of the reef benthos.
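To illustrate one branch of such a pipeline, the sketch below pairs a local binary pattern texture histogram with a KNN classifier; CLBP, GLCM, Gabor and color descriptors would be concatenated analogously. The patches and labels are random placeholders, and this is a hedged reading of the scheme, not the authors' code.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

# One texture descriptor (plain LBP histogram) feeding a KNN classifier.
def lbp_histogram(gray_patch, p=8, r=1.0):
    codes = local_binary_pattern(gray_patch, P=p, R=r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

rng = np.random.default_rng(0)
patches = (rng.random((40, 64, 64)) * 255).astype(np.uint8)  # stand-in patches
labels = rng.integers(0, 3, size=40)                         # stand-in classes

X = np.array([lbp_histogram(p_) for p_ in patches])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(clf.predict(X[:5]))
```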
Abstract:
This thesis develops a comprehensive and flexible statistical framework for the analysis and detection of spatial, temporal and spatio-temporal clusters in environmental point data. The developed clustering methods were applied to both simulated datasets and real-world environmental phenomena; however, only the cases of forest fires in the Canton of Ticino (Switzerland) and in Portugal are expounded in this document. Typically, environmental phenomena can be modelled as stochastic point processes where each event, e.g. a forest fire ignition point, is characterised by its spatial location and its occurrence in time. Additionally, information such as burned area, ignition causes, land use, and topographic, climatic and meteorological features can also be used to characterise the studied phenomenon. Thereby, the characterisation of space-time patterns represents a powerful tool for understanding the distribution and behaviour of the events and their correlation with underlying processes, for instance socio-economic, environmental and meteorological factors. Consequently, we propose a methodology based on the adaptation and application of statistical and fractal point process measures for both global (e.g. the Morisita Index, the box-counting fractal method, the multifractal formalism and Ripley's K-function) and local (e.g. scan statistics) analysis. Many measures describing the space-time distribution of environmental phenomena have been proposed in a wide variety of disciplines; nevertheless, most of these measures are global in character and do not consider the complex spatial constraints, high variability and multivariate nature of the events. Therefore, we propose a statistical framework that takes into account the complexities of the geographical space in which the phenomena take place, by introducing the Validity Domain concept and carrying out clustering analyses on data with differently constrained geographical spaces, hence assessing the relative degree of clustering of the real distribution. Moreover, specifically for the forest fire case, this research proposes two new methodologies: one for defining and mapping the Wildland-Urban Interface (WUI), described as the interaction zone between burnable vegetation and anthropogenic infrastructures, and one for predicting fire ignition susceptibility. In this regard, the main objective of this thesis was to carry out basic statistical/geospatial research with a strong applied component, in order to analyse and describe complex phenomena and to overcome unsolved methodological problems in the characterisation of space-time patterns, in particular forest fire occurrences. Thus, this thesis responds to the increasing demand for environmental monitoring and management tools for the assessment of natural and anthropogenic hazards and risks, sustainable development, retrospective success analysis, etc. The major contributions of this work were presented at national and international conferences and published in 5 scientific journals. National and international collaborations were also established and successfully accomplished.
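To make one of the global measures listed above concrete, the sketch below computes a naive Ripley's K estimate on a unit-square study area, without the edge corrections or Validity Domain constraints that the thesis develops; the points are simulated complete spatial randomness (CSR).

```python
import numpy as np
from scipy.spatial.distance import pdist

# Naive Ripley's K: K(r) = (A / n^2) * sum over ordered pairs of 1(d_ij <= r).
# pdist returns each unordered pair once, hence the factor of 2.
def ripley_k(points, radii, area=1.0):
    n = len(points)
    d = pdist(points)                 # all pairwise distances
    lam = n / area                    # point intensity
    return np.array([2.0 * np.sum(d <= r) / (lam * n) for r in radii])

rng = np.random.default_rng(0)
pts = rng.random((500, 2))            # CSR reference pattern in the unit square
radii = np.linspace(0.01, 0.2, 10)
print(ripley_k(pts, radii))           # compare with pi * r**2 expected under CSR
```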
Abstract:
The search for area estimates with low subjectivity has increased the use of remote sensing for agricultural monitoring and crop yield prediction, offering more flexibility in data acquisition and lower costs compared with traditional methods such as censuses and surveys. Low-spatial-resolution satellite images with a high acquisition frequency have been shown to be adequate for cropland mapping and monitoring over large areas. The main goal of this study was to map summer crops in the State of Paraná, Brazil, using 10-day composites of NDVI SPOT Vegetation data for the 2005/2006, 2006/2007 and 2007/2008 cropping seasons. For this, a supervised digital classification method with the parallelepiped algorithm was applied to multitemporal RGB image composites in order to generate masks of summer crops for each 10-day composite. Accuracy assessment was performed using the Kappa index, overall accuracy and Willmott's concordance index, resulting in good levels of accuracy. This methodology allowed summer crops to be mapped at the state level using free, low-resolution data.
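For illustration, a minimal version of the parallelepiped rule named above can be written as follows: a pixel is assigned to a class when every band value falls within that class's min/max box learned from training pixels. The bands and labels below are synthetic placeholders, not the SPOT Vegetation data.

```python
import numpy as np

# Parallelepiped classifier sketch: per-class min/max boxes over the bands.
def fit_boxes(train_pixels, train_labels):
    classes = np.unique(train_labels)
    return {c: (train_pixels[train_labels == c].min(axis=0),
                train_pixels[train_labels == c].max(axis=0)) for c in classes}

def classify(pixels, boxes, unclassified=-1):
    out = np.full(len(pixels), unclassified)
    for c, (lo, hi) in boxes.items():
        inside = np.all((pixels >= lo) & (pixels <= hi), axis=1)
        out[inside & (out == unclassified)] = c   # first matching box wins
    return out

rng = np.random.default_rng(0)
train = rng.random((100, 3))              # e.g. multitemporal NDVI bands
labels = rng.integers(0, 2, size=100)     # crop / non-crop placeholder labels
boxes = fit_boxes(train, labels)
print(classify(rng.random((5, 3)), boxes))
```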
Abstract:
The ongoing global financial crisis has demonstrated the importance of a system-wide, or macroprudential, approach to safeguarding financial stability. An essential part of macroprudential oversight concerns the tasks of early identification and assessment of risks and vulnerabilities that may eventually lead to a systemic financial crisis. Effective tools are crucial as they allow early policy actions to decrease or prevent the further build-up of risks or to otherwise enhance the shock-absorption capacity of the financial system. In the literature, three types of systemic risk can be identified: (i) build-up of widespread imbalances, (ii) exogenous aggregate shocks, and (iii) contagion. Accordingly, these systemic risks are matched by three categories of analytical methods for decision support: (i) early-warning, (ii) macro stress-testing, and (iii) contagion models. Stimulated by the prolonged global financial crisis, today's toolbox of analytical methods includes a wide range of innovative solutions to the two tasks of risk identification and risk assessment. Yet the literature lacks a focus on the task of risk communication. This thesis discusses macroprudential oversight from the viewpoint of all three tasks: within analytical tools for risk identification and risk assessment, the focus is on a tight integration of means for risk communication. Data and dimension reduction methods, and their combinations, hold promise for representing multivariate data structures in easily understandable formats. The overall task of this thesis is to represent high-dimensional data concerning financial entities on low-dimensional displays. The low-dimensional representations have two subtasks: (i) to function as a display for individual data concerning entities and their time series, and (ii) to serve as a basis to which additional information can be linked. The final nuance of the task is, however, set by the needs of the domain, data and methods. The following five questions comprise the subsequent steps addressed in this thesis: 1. What are the needs for macroprudential oversight? 2. What form do macroprudential data take? 3. Which data and dimension reduction methods hold most promise for the task? 4. How should the methods be extended and enhanced for the task? 5. How should the methods and their extensions be applied to the task? Based upon the Self-Organizing Map (SOM), this thesis not only creates the Self-Organizing Financial Stability Map (SOFSM), but also lays out a general framework for mapping the state of financial stability. The thesis also introduces three extensions to the standard SOM for enhancing the visualization and extraction of information: (i) fuzzifications, (ii) transition probabilities, and (iii) network analysis. Thus, the SOFSM functions as a display for risk identification, on top of which risk assessments can be illustrated. In addition, this thesis puts forward the Self-Organizing Time Map (SOTM) to provide means for visual dynamic clustering, which in the context of macroprudential oversight concerns the identification of cross-sectional changes in risks and vulnerabilities over time. Rather than automated analysis, the aim of these visual means for identifying and assessing risks is to support disciplined and structured judgmental analysis based upon policymakers' experience and domain intelligence, as well as external risk communication.
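To fix ideas, the snippet below sketches the standard online SOM update on toy data; the SOFSM/SOTM extensions (fuzzifications, transition probabilities, network analysis) would be layered on top of such a trained map. All hyperparameters are illustrative placeholders.

```python
import numpy as np

# Minimal online Self-Organizing Map: a 6x6 grid trained on 4-D toy data.
rng = np.random.default_rng(0)
data = rng.random((1000, 4))                  # e.g. financial indicators
grid = np.array([(i, j) for i in range(6) for j in range(6)], dtype=float)
weights = rng.random((36, 4))                 # one prototype per map unit

for t in range(2000):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))  # best-matching unit
    lr = 0.5 * np.exp(-t / 1000)                         # decaying learning rate
    sigma = 3.0 * np.exp(-t / 1000)                      # shrinking neighbourhood
    h = np.exp(-np.sum((grid - grid[bmu]) ** 2, axis=1) / (2 * sigma**2))
    weights += lr * h[:, None] * (x - weights)           # pull units toward x
```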
Abstract:
Acid sulfate (a.s.) soils constitute a major environmental issue. Severe ecological damage results from the considerable amounts of acidity and metals leached by these soils into the recipient watercourses. As even small hot spots may affect large areas of coastal waters, mapping represents a fundamental step in the management and mitigation of a.s. soil environmental risks (i.e. to target strategic areas). Traditional mapping in the field is time-consuming and therefore expensive. More cost-effective complementary techniques thus have to be developed in order to narrow down and define in detail the areas of interest. The primary aim of this thesis was to assess different spatial modeling techniques for a.s. soil mapping and for the characterization of soil properties relevant to a.s. soil environmental risk management, using all available data: soil and water samples, as well as datalayers (e.g. geological and geophysical). Different spatial modeling techniques were applied at catchment or regional scale. Two artificial neural networks were assessed on the Sirppujoki River catchment (c. 440 km²) located in southwestern Finland, while fuzzy logic was assessed on several areas along the Finnish coast. Quaternary geology, aerogeophysics and slope data (derived from a digital elevation model) were utilized as evidential datalayers. The methods also required the use of point datasets (i.e. soil profiles corresponding to known a.s. or non-a.s. soil occurrences) for training and/or validation within the modeling processes. Applying these methods, various maps were generated: probability maps for a.s. soil occurrence, as well as predictive maps for different soil properties (sulfur content, organic matter content and critical sulfide depth). The two assessed artificial neural networks (ANNs) demonstrated good classification abilities for a.s. soil probability mapping at catchment scale. Slightly better results were achieved using a Radial Basis Function (RBF) -based ANN than with a Radial Basis Functional Link Net (RBFLN) method, narrowing down the most probable areas for a.s. soil occurrence more accurately and defining the least probable areas more reliably. The RBF-based ANN also demonstrated promising results for the characterization of different soil properties in the most probable a.s. soil areas at catchment scale. Since a.s. soil areas constitute highly productive lands for agricultural purposes, the combination of a probability map with more specific soil property predictive maps offers a valuable toolset to more precisely target strategic areas for subsequent environmental risk management. Notably, the use of laser scanning (i.e. Light Detection And Ranging, LiDAR) data enabled a more precise definition of a.s. soil probability areas, as well as of the soil property modeling classes for sulfur content and critical sulfide depth. Given suitable training/validation points, ANNs can be trained to yield a more precise modeling of the occurrence of a.s. soils and their properties. By contrast, fuzzy logic represents a simple, fast and objective alternative for carrying out preliminary surveys, at catchment or regional scale, in areas offering a limited amount of data. This method enables delimiting and prioritizing the most probable areas for a.s. soil occurrence, which can be particularly useful in the field. Being easily transferable from area to area, fuzzy logic modeling can be carried out at regional scale; mapping at this scale would be extremely time-consuming through manual assessment.
The use of spatial modeling techniques enables the creation of valid and comparable maps, which represents an important development within the a.s. soil mapping process. The a.s. soil mapping was also assessed using water chemistry data for 24 different catchments along the Finnish coast (in all, covering c. 21,300 km²), which were mapped with different methods (i.e. conventional mapping, fuzzy logic and an artificial neural network). Two a.s. soil related indicators measured in the river water (sulfate content and the sulfate/chloride ratio) were compared to the extent of the most probable areas for a.s. soils in the surveyed catchments. High sulfate contents and sulfate/chloride ratios measured in most of the rivers demonstrated the presence of a.s. soils in the corresponding catchments. The calculated extent of the most probable a.s. soil areas is supported by independent data on water chemistry, suggesting that the a.s. soil probability maps created with the different methods are reliable and comparable.
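As a hedged illustration of the RBF-type classification mentioned above (not the actual software or data used in the thesis), the sketch below places Gaussian units on k-means centres and fits a linear least-squares readout to produce probability-like scores.

```python
import numpy as np
from sklearn.cluster import KMeans

# RBF-network sketch: Gaussian units on k-means centres, linear readout.
rng = np.random.default_rng(0)
X = rng.random((300, 5))                        # evidential layers per cell
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)     # toy a.s./non-a.s. labels

centres = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_
dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
width = dists.mean()                            # one shared Gaussian width
Phi = np.exp(-(dists ** 2) / (2.0 * width ** 2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # linear least-squares readout

scores = Phi @ w                                # probability-like a.s. scores
print(scores[:5])
```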
Abstract:
This study explored one university's response to the internationalization of higher education. Case study methodology was employed through a review of current and archival documents and interviews with key actors in the international spheres of the university. The historical, current, and future contexts were considered to situate the case study on a time line. Data analysis revealed that there were several points of division among the university community related to the response to internationalization, but also a major point of coherence in the centrality of intercultural understanding in efforts to internationalize. Other key findings included strengths, areas for improvement, and future directions of the university's response to internationalization. All of these findings were contextualized in findings related to the history of the university. In addition to these major findings, three themes in relation to the vision for internationalization at the institution were revealed: (a) intercultural understanding, (b) the comprehensive status of the university, and (c) the financial benefits of internationalization. Recommendations are made for practice at the university in order to clarify this vision and to develop a clear foundation, solidly based on intercultural understanding, from which to further build a response to internationalization; recommendations for future research into the process of internationalization at the institutional level in Canada are also suggested.
Abstract:
Peptide mapping is a technique of great importance for protein identification and for the characterization of post-translational modifications of proteins. Two kinds of methods are used to cleave proteins into peptides for mapping: chemical methods and enzymatic methods. In this project, the enzyme chymotrypsin was used for the hydrolysis (digestion) of peptide bonds. However, autoproteolysis of the enzyme can increase sample complexity, making it difficult to obtain resolved peaks because unwanted peaks appear in the peptide map. We therefore cross-linked the proteolytic enzyme by reaction with glutaraldehyde (GA), yielding an insoluble enzyme, in order to reduce autoproteolysis. Immobilization of chymotrypsin with GA was carried out following a method previously reported by the Waldron group. Capillary electrophoresis (CE) coupled with UV-visible absorbance detection was used to separate and detect the peptides and thereby obtain a peptide map. Two different buffers were evaluated to establish the best conditions for digestion of protein substrates by free (soluble) chymotrypsin or GA-chymotrypsin and for CE analysis. Autoproteolytic peptide maps were compared between the two chymotrypsin formats. To improve peptide mapping, we evaluated three methods of conditioning the CE capillary and two methods of quenching the digestion. Ammonium bicarbonate proved to be the optimal buffer for in-solution digestion, and a dry-ice/acetone bath proved to be the optimal method for quenching the digestion. A 25 mM SDS solution used in the rinsing step after each CE run improved the resolution of the peptide maps. Autoproteolysis of free versus GA-immobilized chymotrypsin was compared in tests using six different combinations of conditions, evaluating digestion time (30 and 240 min) and temperature (4, 24 and 37°C). Under these conditions, our results confirmed that GA-chymotrypsin reduces autoproteolysis relative to the free enzyme. Digestion (at 37°C/240 min) of two model substrates by free and immobilized chymotrypsin was studied as a function of substrate denaturation temperature. Before digestion, the substrates (bovine serum albumin, BSA, and myoglobin) were denatured by heating for 45 min at three different temperatures (60, 75 and 90°C). The results showed that heat denaturation of BSA and myoglobin did not improve peptide mapping with GA-chymotrypsin, whereas digestion with free chymotrypsin improved measurably at the higher temperatures. Thus, heating the substrate to 90°C with the soluble enzyme promotes partial unfolding of the substrate and its limited digestion, an effect that was greater for myoglobin than for BSA.
Abstract:
Enzymatic digestion of proteins is a fundamental method for proteomic studies as well as for bottom-up sequencing. Enzymes are added either in solution (homogeneous phase) or directly on the polyacrylamide gel, depending on the method already used to isolate the protein. Immobilized (i.e., insoluble) proteolytic enzymes offer several advantages, such as reuse of the enzyme, a high enzyme-to-substrate ratio, and easy integration with fluidic systems. In this study, chymotrypsin (CT) was immobilized by cross-linking with glutaraldehyde (GA), which creates insoluble particles. The immobilization efficiency, determined by absorbance spectrophotometry, was 96% of the total mass of CT added. Several different immobilization (i.e., cross-linking) conditions, such as buffer composition/pH and the mass of CT present during cross-linking, as well as different storage conditions (temperature, duration and humidity) for the GA-CT particles, were evaluated by comparing capillary electrophoresis (CE) peptide maps of standard proteins digested by the particles. The GA-CT particles were used to digest BSA, as an example of a large folded protein requiring denaturation prior to digestion, and fluorescein isothiocyanate (FITC)-labeled casein, as an example of a derivatized substrate, in order to verify the enzymatic activity of GA-CT in the presence of fluorescent groups bound to the substrate. Peptide mapping of the digests produced by the GA-CT particles was carried out by CE with ultraviolet (UV) absorbance or laser-induced fluorescence detection. FITC-casein was indeed digested by GA-CT to the same extent as by free (i.e., soluble) CT. An immobilized enzyme microreactor (IMER) was fabricated by immobilizing CT in a fused-silica capillary (250 µm inner diameter) pretreated with 3-aminopropyltriethoxysilane to functionalize the inner wall with amine groups. GA was reacted with the amine groups, and CT was then immobilized by cross-linking with the GA. The GA-CT IMERs were prepared using an automated CE system and then used to digest BSA, myoglobin, a 9-residue peptide and a dipeptide, as examples of large, medium and small substrates, respectively. Comparison of peptide maps of the digests obtained by CE-UV or CE-mass spectrometry allowed us to study the immobilization conditions as a function of buffer composition and pH and of the cross-linking reaction time. A fluorescence microscopy study, used to examine the extent and location of GA-CT immobilization within the IMER, showed that immobilization occurred mainly on the wall and that the cross-linking did not extend as far into the centre of the capillary as anticipated.
Abstract:
For the discrete-time quadratic map x_{t+1} = 4x_t(1 - x_t), the evolution equation for a class of non-uniform initial densities is obtained. It is shown that in the t → ∞ limit all of them approach the invariant density for the map.
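The stated limit is easy to check numerically: assuming an ensemble sampled from a smooth non-uniform density is evolved by the map, its histogram should approach the known invariant density ρ(x) = 1/(π√(x(1−x))). A minimal sketch, with an arbitrary Beta initial density:

```python
import numpy as np

# Evolve an ensemble under x_{t+1} = 4 x_t (1 - x_t) and compare the long-time
# histogram with the invariant density rho(x) = 1 / (pi * sqrt(x * (1 - x))).
rng = np.random.default_rng(0)
x = rng.beta(2.0, 5.0, size=100_000)       # a non-uniform initial density

for _ in range(50):                        # iterate toward the t -> infinity limit
    x = 4.0 * x * (1.0 - x)

hist, edges = np.histogram(x, bins=100, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
rho = 1.0 / (np.pi * np.sqrt(centers * (1.0 - centers)))
print(np.max(np.abs(hist[5:-5] - rho[5:-5])))  # small away from the endpoints
```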
Abstract:
Large-scale image mosaicing methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots to gather optical data from the seafloor. Cost and weight constraints mean that low-cost remotely operated vehicles (ROVs) usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predetermined trajectory that provides several non-time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable for obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This thesis presents a set of consistent methods aimed at creating large-area image mosaics from optical data obtained during surveys with low-cost underwater vehicles. First, a global alignment method developed within a feature-based image mosaicing (FIM) framework, in which nonlinear minimisation is substituted by two linear steps, is discussed. Then, a simple four-point mosaic rectifying method is proposed to reduce distortions that might occur due to lens distortion, error accumulation and the difficulties of optical imaging in an underwater medium. The topology estimation problem is addressed by means of a combined augmented-state and extended Kalman filter framework, aimed at minimising the total number of matching attempts while simultaneously obtaining the best possible trajectory. Potential image pairs are predicted by taking into account the uncertainty in the trajectory. The contribution of matching an image pair is investigated using information theory principles. Lastly, a different solution to the topology estimation problem is proposed in a bundle adjustment framework. Innovative aspects include the use of a fast image similarity criterion combined with a minimum spanning tree (MST) solution to obtain a tentative topology. This topology is improved by attempting image matching with the pairs for which there is the most overlap evidence. Unlike previous approaches to large-area mosaicing, our framework is able to deal naturally with cases where time-consecutive images cannot be matched successfully, such as completely unordered sets. Finally, the efficiency of the proposed methods is discussed and a comparison is made with other state-of-the-art approaches, using a series of challenging datasets in underwater scenarios.
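A minimal sketch of the MST step described above, under the assumption that a symmetric pairwise similarity matrix is already available from a fast global descriptor: converting similarity to cost and extracting the minimum spanning tree yields the tentative topology whose edges are the first candidates for full image matching.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# `similarity` stands in for a fast-to-compute score for every image pair.
rng = np.random.default_rng(1)
n = 6
s = rng.random((n, n))
similarity = 0.5 * (s + s.T)             # symmetrize the placeholder scores

cost = 1.0 - similarity                  # low cost = high overlap evidence
np.fill_diagonal(cost, 0.0)              # zero means "no edge" for csgraph
mst = minimum_spanning_tree(cost).toarray()

# Edges of the tentative topology: try full feature matching on these first.
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if mst[i, j] or mst[j, i]]
print(edges)
```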
Abstract:
Asynchronous Optical Sampling (ASOPS) [1,2] and frequency comb spectrometry [3] based on dual Ti:sapphire resonators operated in a master/slave mode have the potential to improve the signal-to-noise ratio in THz transient and IR spectrometry. The multimode Brownian oscillator time-domain response function described by state-space models is a mathematically robust framework that can be used to describe the dispersive phenomena governed by Lorentzian, Debye and Drude responses. In addition, the optical properties of an arbitrary medium can be expressed as a linear combination of simple multimode Brownian oscillator functions. The suitability of a range of signal processing schemes adopted from the systems identification and control theory community for further processing the recorded THz transients in the time or frequency domain will be outlined [4,5]. Since a femtosecond-duration pulse is capable of persistent excitation of the medium within which it propagates, such an approach is perfectly justifiable. Several de-noising routines based on system identification will be shown. Furthermore, specifically developed apodization structures will be discussed. These are necessary because, due to dispersion issues, the time-domain background and sample interferograms are non-symmetrical [6-8]. These procedures can lead to a more precise estimation of the complex insertion loss function. The algorithms are applicable to femtosecond spectroscopies across the EM spectrum. Finally, a methodology for femtosecond pulse shaping using genetic algorithms, aiming to map and control molecular relaxation processes, will be mentioned.
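As a hedged illustration of the "linear combination of simple responses" idea, the snippet below assembles a dielectric function from standard Lorentzian, Debye and Drude terms; all parameter values are arbitrary placeholders, not fitted to any measurement.

```python
import numpy as np

# Standard frequency-domain forms (e^{-i w t} convention):
def lorentz(w, de, w0, gamma):
    return de * w0**2 / (w0**2 - w**2 - 1j * gamma * w)

def debye(w, de, tau):
    return de / (1.0 - 1j * w * tau)

def drude(w, wp, gamma):
    return -wp**2 / (w**2 + 1j * gamma * w)

w = 2 * np.pi * np.linspace(0.1e12, 5e12, 500)    # angular frequency (rad/s)
eps = (2.25                                        # eps_infinity placeholder
       + lorentz(w, de=0.5, w0=2*np.pi*1.5e12, gamma=2*np.pi*0.1e12)
       + debye(w, de=0.3, tau=1e-12)
       + drude(w, wp=2*np.pi*0.5e12, gamma=2*np.pi*0.05e12))
n_complex = np.sqrt(eps)                           # complex refractive index
print(n_complex[:3])
```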
Abstract:
There has been considerable interest recently in the teaching of skills to undergraduate students. However, existing methods for collating data on how much, where and when students are taught and assessed skills have often been shown to be time-consuming and ineffective. Here, we outline an electronic research skills audit tool that has been developed to map both transferable and discipline-specific skills teaching and assessment within individual modules, the results of which can be collated and analysed across entire degree programmes. The design and use of the audit tool is described in detail and a bioscience case study is presented to illustrate the types of data that can be collected. The audit tool has been designed as a time-effective way of collecting information on skills teaching and assessment, but also actively encourages staff to reflect on their teaching and learning practices. Conclusions are drawn about the practicalities of using the audit tool and its importance in both curriculum design and as a resource to encourage dialogue with graduate employers.
Abstract:
Seed set of rice (Oryza sativa L.) is highly sensitive to short episodes of high temperature at anthesis, events that are likely to be more frequent in future climates. Breeding for tolerance is therefore an essential component of adaptation to climate variability and change. Experiments were conducted in 2003 and 2004 at optimum (30°C daytime) and high (35 and 38°C) air temperatures using parents of some prominent mapping populations (i) to determine whether there were differences in the daily flowering pattern, and hence a potential heat-avoidance mechanism, and (ii) to identify rice genotypes having true heat tolerance during anthesis, that is, high seed set in spikelets exposed to high temperature. Rice cultivar CG14 (O. glaberrima) reached peak anthesis earlier in the morning (1.5 h after dawn) under both control (30°C) and high (38°C) temperature conditions than O. sativa genotypes (≥3 h after dawn). Exposure to high temperature (centered on the time of peak anthesis) for 6 h reduced spikelet fertility more than exposure for 2 h, and fertility was lower at 38°C than at 35°C. Genotypic ranking for spikelet fertility at 35 and 38°C was highly correlated in both 2003 and 2004. Fertility was also highly correlated across years, suggesting a consistent and reproducible response of spikelet fertility to temperature. The check cultivar N22 was the most heat-tolerant genotype (64-86% fertility at 38°C), and the cultivars Azucena and Moroberekan were the most susceptible (<8%).