883 results for non separable data
Abstract:
MRI is the first-line imaging modality for the non-traumatic brachial plexus. Knowledge of the anatomy and of the commonest variants is essential. Three-Tesla imaging offers the possibility of 3D isotropic sequences with excellent spatial and contrast resolution, which saves time and improves image quality. The most commonly seen conditions are benign tumour lesions and radiation damage. Gadolinium is required to assess inflammatory or tumour plexopathy. MRI data should be correlated with FDG-PET when tumour recurrence is suspected.
Abstract:
BACKGROUND: By analyzing human immunodeficiency virus type 1 (HIV-1) pol sequences from the Swiss HIV Cohort Study (SHCS), we explored whether the prevalence of non-B subtypes reflects domestic transmission or migration patterns. METHODS: Swiss non-B sequences and sequences collected abroad were pooled to construct maximum likelihood trees, which were analyzed for Swiss-specific subepidemics (subtrees including ≥80% Swiss sequences, bootstrap >70%; macroscale analysis) or for evidence of domestic transmission (sequence pairs with genetic distance <1.5%, bootstrap ≥98%; microscale analysis). RESULTS: Of 8287 SHCS participants, 1732 (21%) were infected with non-B subtypes, of which A (n = 328), C (n = 272), CRF01_AE (n = 258), and CRF02_AG (n = 285) were studied further. The macroscale analysis revealed that 21% (A), 16% (C), 24% (CRF01_AE), and 28% (CRF02_AG) belonged to Swiss-specific subepidemics. The microscale analysis identified 26 possible transmission pairs: 3 (12%) including only homosexual Swiss men of white ethnicity; 3 (12%) including homosexual white men from Switzerland and partners from foreign countries; and 10 (38%) involving heterosexual white Swiss men and females of different nationality and predominantly nonwhite ethnicity. CONCLUSIONS: Of all non-B infections diagnosed in Switzerland, <25% could be prevented by domestic interventions. Awareness should be raised among immigrants and among Swiss individuals with partners from high-prevalence countries to contain the spread of non-B subtypes.
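To make the microscale criterion above concrete, the following minimal Python sketch (not the study's actual pipeline) flags candidate transmission pairs from a pairwise genetic-distance matrix and clade bootstrap supports, using the quoted thresholds (distance <1.5%, bootstrap ≥98%); the labels and values are illustrative toy data.

```python
import numpy as np

# Toy data: pairwise genetic distances (fraction of differing sites) and
# bootstrap support (%) for the clade joining each pair. Illustrative only.
labels = ["CH-01", "CH-02", "CH-03", "ET-01"]
dist = np.array([
    [0.000, 0.012, 0.040, 0.055],
    [0.012, 0.000, 0.038, 0.060],
    [0.040, 0.038, 0.000, 0.009],
    [0.055, 0.060, 0.009, 0.000],
])
bootstrap = np.array([
    [100,  99,  80,  60],
    [ 99, 100,  75,  55],
    [ 80,  75, 100,  98],
    [ 60,  55,  98, 100],
])

DIST_MAX = 0.015   # genetic distance < 1.5%
BOOT_MIN = 98      # bootstrap support >= 98%

pairs = [
    (labels[i], labels[j], dist[i, j], bootstrap[i, j])
    for i in range(len(labels)) for j in range(i + 1, len(labels))
    if dist[i, j] < DIST_MAX and bootstrap[i, j] >= BOOT_MIN
]
for a, b, d, bs in pairs:
    print(f"possible transmission pair: {a} - {b} (distance {d:.3f}, bootstrap {bs}%)")
```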
Abstract:
Gene expression changes may underlie much of phenotypic evolution. The development of high-throughput RNA sequencing protocols has opened the door to unprecedented large-scale and cross-species transcriptome comparisons by allowing accurate and sensitive assessments of transcript sequences and expression levels. Here, we review the initial wave of the new generation of comparative transcriptomic studies in mammals and vertebrate outgroup species in the context of earlier work. Together with various large-scale genomic and epigenomic data, these studies have unveiled commonalities and differences in the dynamics of gene expression evolution for various types of coding and non-coding genes across mammalian lineages, organs, developmental stages, chromosomes and sexes. They have also provided intriguing new clues to the regulatory basis and phenotypic implications of evolutionary gene expression changes.
Abstract:
Glioblastomas are highly diffuse, malignant tumors that have so far evaded clinical treatment. The strongly invasive behavior of cells in these tumors makes them very resistant to treatment, and for this reason both experimental and theoretical efforts have been directed toward understanding the spatiotemporal pattern of tumor spreading. Although usual models assume standard diffusive behavior, recent experiments with cell cultures indicate that cells tend to move in directions close to that of glioblastoma invasion, suggesting that a biased random walk model may be much more appropriate. Here we show analytically that, for realistic parameter values, the speeds predicted by biased dispersal are consistent with experimentally measured data. We also find that models beyond reaction–diffusion–advection equations are necessary to capture this substantial effect of biased dispersal on glioblastoma spread.
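As a rough illustration of the difference between standard diffusion and biased dispersal, the following Python sketch (not the paper's analytical model) compares the leading edge of one-dimensional random walkers with and without a directional bias; the step length, bias probability, and number of walkers are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def walk_front(n_walkers=5000, n_steps=1000, step=1.0, p_right=0.5):
    """Simulate independent 1D random walkers starting at the origin and
    return the mean displacement of the leading 1% (a crude 'front')."""
    steps = rng.choice([step, -step], size=(n_walkers, n_steps),
                       p=[p_right, 1 - p_right])
    final = steps.sum(axis=1)
    leading = np.sort(final)[-n_walkers // 100:]
    return leading.mean()

unbiased = walk_front(p_right=0.5)   # pure diffusion
biased = walk_front(p_right=0.6)     # modest bias toward the invasion direction
print(f"front position, unbiased: {unbiased:.1f}")
print(f"front position, biased:   {biased:.1f}")
```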
Abstract:
The proposal to work on this final project came after several discussions with Dr. Elzbieta Malinowski Gadja, who in 2008 published the book entitled Advanced Data Warehouse Design: From Conventional to Spatial and Temporal Applications (Data-Centric Systems and Applications). The project was carried out under the technical supervision of Dr. Malinowski, and the direct beneficiary was the University of Costa Rica (UCR), where Dr. Malinowski is a professor at the Department of Computer Science and Informatics. The purpose of this project was twofold: first, to translate chapter III of said book with the intention of generating educational material for the use of the UCR and, second, to venture into the field of technical translation related to data warehousing. For the first component, the goal was to generate a final product that would eventually serve as an educational tool for the post-graduate courses of the UCR. For the second component, this project allowed me to acquire new skills and to put into practice techniques that have helped me not only to perform better in my current job as an Assistant Translator at the Inter-American Bank (IDB), but also to use them in similar projects. The process was lengthy and required thorough research and constant communication with the author. The investigation focused on the search for terms and definitions to prepare the glossary, which was the basis for starting the translation project. The translation process itself was carried out in phases, so that comments and corrections by the author could be taken into account in subsequent stages. Later, based on the glossary and the translated text, the illustrations that had been created in the Visio software were translated. In addition to the technical revision by the author, professor Carme Mangiron was in charge of revising the non-technical text. The result was a high-quality document that is currently used as reference and study material by the Department of Computer Science and Informatics of the University of Costa Rica.
Abstract:
A common problem in video surveys in very shallow waters is the presence of strong light fluctuations due to sunlight refraction. Refracted sunlight casts fast-moving patterns, which can significantly degrade the quality of the acquired data. Motivated by the growing need to improve the quality of shallow-water imagery, we propose a method to remove sunlight patterns in video sequences. The method exploits the fact that video sequences allow several observations of the same area of the sea floor over time. It is based on computing the image difference between a given reference frame and the temporal median of a registered set of neighboring images. A key observation is that this difference has two components with separable spectral content: one related to the illumination field (lower spatial frequencies) and the other to the registration error (higher frequencies). The illumination field, recovered by low-pass filtering, is used to correct the reference image. In addition to removing the sun-flickering patterns, an important advantage of the approach is its ability to preserve the sharpness of the corrected image, even in the presence of registration inaccuracies. The effectiveness of the method is illustrated on image sets acquired under strong camera motion and containing non-rigid benthic structures. The results attest to the good performance and generality of the approach.
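A minimal Python/NumPy sketch of the correction step described in the abstract is given below; it assumes the neighboring frames have already been registered to the reference, and the Gaussian low-pass cut-off and the additive illumination model are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def deflicker(reference, registered_neighbors, sigma=5.0):
    """Correct sunlight flicker in `reference` (H x W, float) using a set of
    neighboring frames already registered to it (N x H x W, float).

    The difference between the reference and the temporal median is assumed
    to contain a smooth illumination component (low spatial frequencies) and
    registration error (high frequencies); low-pass filtering isolates the
    former, which is then subtracted from the reference.
    """
    median = np.median(registered_neighbors, axis=0)
    difference = reference - median
    illumination = gaussian_filter(difference, sigma=sigma)  # low-pass component
    return reference - illumination

# Toy usage: a flat sea floor with a smooth flicker blob added to the reference.
rng = np.random.default_rng(1)
frames = 0.5 + 0.01 * rng.standard_normal((7, 64, 64))
y, x = np.mgrid[0:64, 0:64]
flicker = 0.3 * np.exp(-((x - 20) ** 2 + (y - 40) ** 2) / 400.0)
reference = frames[3] + flicker
corrected = deflicker(reference, frames)
print("residual flicker (max abs deviation):", np.abs(corrected - frames[3]).max())
```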
Abstract:
Contamination of weather radar echoes by anomalous propagation (anaprop) mechanisms remains a serious issue in the quality control of radar precipitation estimates. Although significant progress has been made in identifying clutter due to anaprop, there is no unique method that solves the question of data reliability without removing genuine data. The work described here relates to the development of a software application that uses a numerical weather prediction (NWP) model to obtain the temperature, humidity and pressure fields needed to calculate the three-dimensional structure of the atmospheric refractive index, from which a physically based prediction of the incidence of clutter can be made. This technique can be used in conjunction with existing methods for clutter removal by modifying the parameters of detectors or filters according to the physical evidence for anomalous propagation conditions. The parabolic equation method (PEM) is a well-established technique for solving the equations for beam propagation in a non-uniformly stratified atmosphere, but, although intrinsically very efficient, it is not sufficiently fast to be practicable for near-real-time modelling of clutter over the entire area observed by a typical weather radar. We demonstrate a fast hybrid PEM technique that is capable of providing acceptable results in conjunction with a high-resolution terrain elevation model, using a standard desktop personal computer. We discuss the performance of the method and approaches for improving the model profiles in the lowest levels of the troposphere.
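As background to the refractive-index computation mentioned above, the following Python sketch evaluates the standard two-term radio refractivity N from pressure, temperature, and humidity, and the modified refractivity M commonly used in propagation modelling; the constants are the usual textbook values and the sample profile is illustrative, not taken from the application described in the abstract.

```python
import numpy as np

def saturation_vapour_pressure_hpa(t_celsius):
    """Magnus/Bolton approximation of saturation vapour pressure (hPa)."""
    return 6.112 * np.exp(17.67 * t_celsius / (t_celsius + 243.5))

def refractivity(pressure_hpa, t_kelvin, rel_humidity_pct):
    """Radio refractivity N = 77.6*P/T + 3.73e5*e/T^2 (P, e in hPa; T in K)."""
    e = rel_humidity_pct / 100.0 * saturation_vapour_pressure_hpa(t_kelvin - 273.15)
    return 77.6 * pressure_hpa / t_kelvin + 3.73e5 * e / t_kelvin ** 2

def modified_refractivity(n, height_m):
    """Modified refractivity M = N + 0.157*h (h in m) accounts for Earth curvature."""
    return n + 0.157 * height_m

# Illustrative profile with a surface inversion: M decreasing with height in the
# lowest layer signals a shallow duct, i.e. likely anaprop conditions.
heights = np.array([0.0, 50.0, 100.0, 200.0])          # m above ground
pressures = np.array([1013.0, 1007.0, 1001.0, 989.0])  # hPa
temps = np.array([288.0, 290.5, 290.0, 289.0])         # K
rel_hum = np.array([95.0, 60.0, 55.0, 50.0])           # %

n_profile = refractivity(pressures, temps, rel_hum)
m_profile = modified_refractivity(n_profile, heights)
for h, m in zip(heights, m_profile):
    print(f"h = {h:5.0f} m   M = {m:7.1f}")
```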
Abstract:
Context. LS 5039 has been observed with several X-ray instruments so far, showing quite steady emission in the long term and no signatures of an accretion disk. The source also presents X-ray variability on orbital timescales in flux and photon index. The system harbors an O-type main-sequence star with moderate mass loss. At present, the link between the X-rays and the stellar wind is unclear. Aims. We study the X-ray fluxes, spectra, and absorption properties of LS 5039 at apastron and periastron passages during an epoch of enhanced stellar mass loss, and the long-term evolution of the latter in connection with the X-ray fluxes. Methods. New XMM-Newton observations were performed around the periastron and apastron passages in September 2005, when the stellar wind activity was apparently higher. April 2005 Chandra observations of LS 5039 were revisited. Moreover, a compilation of Hα EW data obtained since 1992, from which the stellar mass-loss evolution can be approximately inferred, was carried out. Results. The XMM-Newton observations show higher and harder emission around apastron than around periastron. No signatures of thermal emission or of a reflection iron line indicating the presence of an accretion disk are found in the spectrum, and the hydrogen column density (N_H) is compatible with being the same in both observations and consistent with the interstellar value. The 2005 Chandra observations show a hard X-ray spectrum, and possibly high fluxes, although pileup effects preclude conclusive results. The Hα EW shows yearly variations of 10% and does not seem to be correlated with the X-ray fluxes obtained at similar phases, unlike what is expected in the wind accretion scenario. Conclusions. The 2005 XMM-Newton and Chandra observations are consistent with the 2003 RXTE/PCA results, namely moderate flux and spectral variability at different orbital phases. The constancy of the N_H seems to imply that either the X-ray emitter is located at 10^12 cm from the compact object, or the density in the system is 3 to 27 times smaller than that predicted by a spherically symmetric wind model. We suggest that the multiwavelength non-thermal emission of LS 5039 is related to the observed extended radio jets and is unlikely to be produced inside the binary system.
Abstract:
This paper explores the possibility of using data from social bookmarking services to measure the use of information by academic researchers. Social bookmarking data can be used to augment participative methods (e.g. interviews and surveys) and other, non-participative methods (e.g. citation analysis and transaction logs) to measure the use of scholarly information. We use BibSonomy, a free resource-sharing system, as a case study. Results show that published journal articles are by far the most popular type of source bookmarked, followed by conference proceedings and books. Commercial journal publisher platforms are the most popular type of information resource bookmarked, followed by websites, records in databases and digital repositories. Usage of open access information resources is low in comparison with toll-access journals. In the case of open access repositories, there is a marked preference for the use of subject-based repositories over institutional repositories. The results are consistent with those observed in related studies based on surveys and citation analysis, confirming the possible use of bookmarking data in studies of information behaviour in academic settings. The main advantages of using social bookmarking data are that it is an unobtrusive approach, that it captures the reading habits of researchers who are not necessarily authors, and that the data are readily available. The main limitation is that a significant amount of human effort is required to clean and standardize the data.
Abstract:
BACKGROUND: Previously published studies have shown significant variations in colonoscopy performance, even when medical factors are taken into account. This study aimed to examine the role of nonmedical factors (i.e., factors embodied in health care system design) as possible contributors to variations in colonoscopy performance. METHODS: Patient data from a multicenter observational study conducted between 2000 and 2002 in 21 centers in 11 western countries were used. Variability was captured through 2 performance outcomes (diagnostic yield and colonoscopy withdrawal time), jointly studied as dependent variables, using a multilevel 2-equation system. RESULTS: Results showed that open-access systems and high-volume colonoscopy centers were independently associated with a higher likelihood of detecting significant lesions and with longer withdrawal durations. Fee-for-service (FFS) payment was associated with shorter withdrawal durations, and thus had an indirect negative impact on the diagnostic yield. Teaching centers exhibited lower detection rates and longer withdrawal times. CONCLUSIONS: Our results suggest that gatekeeping colonoscopy is likely to miss patients with significant lesions and that developing specialized colonoscopy units is important for improving performance. The results also suggest that FFS payment may result in a lower quality of care in colonoscopy practice, and they highlight the fact that longer withdrawal times do not necessarily indicate higher quality in teaching centers.
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy.
These methods have governing equations that are the same over a large range of frequencies, thus allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to several hundreds of kilometers in depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated (a minimal sketch of such a decomposition follows this abstract). For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized.
The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume thanks to the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on present-day understanding. This issue may be related to the quality of the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful for characterizing and monitoring the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and in terms of posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods and offer great flexibility to incorporate additional information when available.
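To make the model-reduction idea concrete, the following Python sketch performs a Legendre moment decomposition of a one-dimensional toy anomaly (the thesis parameterizes a three-dimensional plume; the profile, domain, and truncation order below are illustrative assumptions).

```python
import numpy as np
from numpy.polynomial import legendre

# Toy 1D "plume" profile on x in [-1, 1] (Legendre polynomials are orthogonal there).
x = np.linspace(-1.0, 1.0, 401)
dx = x[1] - x[0]
plume = np.exp(-((x - 0.2) / 0.15) ** 2)   # illustrative anomaly centred at x = 0.2

def legendre_coefficients(f, x, dx, order):
    """Project f(x) onto P_0..P_order using the orthogonality relation
    c_k = (2k+1)/2 * integral of f(x) P_k(x) dx over [-1, 1]."""
    coeffs = []
    for k in range(order + 1):
        p_k = legendre.Legendre.basis(k)(x)
        coeffs.append((2 * k + 1) / 2.0 * np.sum(f * p_k) * dx)
    return np.array(coeffs)

order = 8
c = legendre_coefficients(plume, x, dx, order)
reconstruction = legendre.legval(x, c)      # truncated expansion from 9 coefficients
rel_error = np.linalg.norm(reconstruction - plume) / np.linalg.norm(plume)
print(f"{order + 1} coefficients, relative reconstruction error = {rel_error:.2%}")
```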
Abstract:
BACKGROUND: The aim of our study was to assess the feasibility of minimally invasive digestive anastomosis using a modular flexible magnetic anastomotic device made up of a set of two flexible chains of magnetic elements. The assembly possesses a non-deployed linear configuration which allows it to be introduced through a dedicated small-sized applicator into the bowel where it takes the deployed form. A centering suture allows the mating between the two parts to be controlled in order to include the viscerotomy between the two magnetic rings and the connected viscera. METHODS AND PROCEDURES: Eight pigs were involved in a 2-week survival experimental study. In five colorectal anastomoses, the proximal device was inserted by a percutaneous endoscopic technique, and the colon was divided below the magnet. The distal magnet was delivered transanally to connect with the proximal magnet. In three jejunojejunostomies, the first magnetic chain was injected in its linear configuration through a small enterotomy. Once delivered, the device self-assembled into a ring shape. A second magnet was injected more distally through the same port. The centering sutures were tied together extracorporeally and, using a knot pusher, magnets were connected. Ex vivo strain testing to determine the compression force delivered by the magnetic device, burst pressure of the anastomosis, and histology were performed. RESULTS: Mean operative time including endoscopy was 69.2 ± 21.9 min, and average time to full patency was 5 days for colorectal anastomosis. Operative times for jejunojejunostomies were 125, 80, and 35 min, respectively. The postoperative period was uneventful. Burst pressure of all anastomoses was ≥ 110 mmHg. Mean strain force to detach the devices was 6.1 ± 0.98 and 12.88 ± 1.34 N in colorectal and jejunojejunal connections, respectively. Pathology showed a mild-to-moderate inflammation score. CONCLUSIONS: The modular magnetic system showed enormous potential to create minimally invasive digestive anastomoses, and may represent an alternative to stapled anastomoses, being easy to deliver, effective, and low cost.
Abstract:
Synthesis report: The presence of three water channels, the aquaporins AQP1, AQP4 and AQP9, has been observed in the healthy brain as well as in several rodent models of cerebral pathologies. Little is known about the distribution of AQPs in the primate brain. This knowledge will be useful for future drug trials that aim to prevent the formation of cerebral edema. We studied the expression and cellular distribution of AQP1, 4 and 9 in the non-human primate brain. AQP4 in the non-human primate brain was found in perivascular astrocytes, comparable to what has been observed in the rodent brain. In contrast to what has been described in rodents, primate AQP1 is expressed in the processes and perivascular extensions of a subtype of astrocytes located mainly in the white matter and in the glia limitans, which may be involved in water homeostasis. AQP1 was also observed in neurons innervating pial blood vessels, suggesting a possible role in the regulation of the cerebral vasculature. As described in rodents, AQP9 mRNA and protein were detected in astrocytes and in catecholaminergic neurons. In primates, additional localizations were observed in neuronal populations of certain cortical areas. This article describes a detailed study of the distribution of AQP1, 4 and 9 in the non-human primate brain. These observations add to the data already published on the rodent brain. These important differences between species must be considered when evaluating drugs that may act on the aquaporins of non-human primates before entering the phase of clinical trials in humans.
Abstract:
The maximal aerobic capacity while running and cycling was measured in 22 prepubertal children (mean age +/- SD 9.5 +/- 0.8 years): 14 obese (47.3 +/- 10 kg) and 8 non-obese (31.1 +/- 6.1 kg). Oxygen consumption (VO2) and carbon dioxide production were measured by an open-circuit method. Steady-state VO2 was determined at different levels of exercise up to the maximal power on the cycloergometer (92 W in obese and 77 W in non-obese subjects) and up to the maximal running speed on the treadmill at a 2% slope (8.3 km/h in obese and 9.0 km/h in lean children). Expressed in absolute values, VO2max in obese children was significantly higher than in controls (1.55 +/- 0.29 l/min versus 1.23 +/- 0.22 l/min, p < 0.05) for the treadmill test and comparable in the two groups (1.4 +/- 0.2 l/min versus 1.16 +/- 0.2 l/min, ns) for the cycloergometer test. When VO2max was expressed per kg of fat-free mass, the difference between the two groups disappeared for both tests. These data suggest that obese children had no limitation of maximal aerobic power. Therefore, when a physical activity program is prescribed as therapy for childhood obesity, the workload should be designed to increase caloric output rather than to improve cardiorespiratory fitness.
Abstract:
The coverage and volume of geo-referenced datasets are extensive and incessantly growing. The systematic capture of geo-referenced information generates large volumes of spatio-temporal data to be analyzed. Clustering and visualization play a key role in exploratory data analysis and in the extraction of knowledge embedded in these data. However, the special characteristics of these data pose new challenges for visualization and clustering: complex structures, large numbers of samples, variables involved in a temporal context, high dimensionality and large variability in cluster shapes. The central aim of my thesis is to propose new algorithms and methodologies for clustering and visualization in order to assist knowledge extraction from spatio-temporal geo-referenced data, thus improving decision-making processes. I present two original algorithms, one for clustering, the Fuzzy Growing Hierarchical Self-Organizing Networks (FGHSON), and one for exploratory visual data analysis, the Tree-structured Self-Organizing Maps Component Planes. In addition, I present methodologies that, combined with the FGHSON and the Tree-structured SOM Component Planes, allow space and time to be integrated seamlessly and simultaneously in order to extract knowledge embedded in a temporal context. The originality of the FGHSON lies in its capability to reflect the underlying structure of a dataset in a hierarchical, fuzzy way. A hierarchical fuzzy representation of clusters is crucial when data include complex structures with large variability of cluster shapes, variances, densities and numbers of clusters. The most important characteristics of the FGHSON are: (1) it does not require an a priori setup of the number of clusters; (2) the algorithm executes several self-organizing processes in parallel, so when dealing with large datasets the processes can be distributed, reducing the computational cost; and (3) only three parameters are necessary to set up the algorithm. In the case of the Tree-structured SOM Component Planes, the novelty lies in the ability to create a structure that allows visual exploratory analysis of large, high-dimensional datasets. This algorithm creates a hierarchical structure of Self-Organizing Map Component Planes, arranging similar variables' projections in the same branches of the tree, so that similarities in variables' behavior can be easily detected (e.g. local correlations, maximal and minimal values, and outliers). Both the FGHSON and the Tree-structured SOM Component Planes were applied to several agroecological problems, proving to be very efficient in the exploratory analysis and clustering of spatio-temporal datasets. In this thesis I also tested three soft competitive learning algorithms: two well-known unsupervised soft competitive algorithms, namely the Self-Organizing Maps (SOMs) and the Growing Hierarchical Self-Organizing Maps (GHSOMs), and our original contribution, the FGHSON (a minimal SOM sketch follows this abstract). Although the algorithms presented here have been used in several areas, to my knowledge there is no other work applying and comparing the performance of these techniques on spatio-temporal geospatial data, as is presented in this thesis. I propose original methodologies to explore spatio-temporal geo-referenced datasets through time. Our approach uses time windows to capture temporal similarities and variations by means of the FGHSON clustering algorithm. The developed methodologies are used in two case studies. In the first, the objective was to find similar agroecozones through time, and in the second, to find similar environmental patterns shifted in time. Several results presented in this thesis have led to new contributions to agroecological knowledge, for instance in sugar cane and blackberry production. Finally, in the framework of this thesis we developed several software tools: (1) a Matlab toolbox that implements the FGHSON algorithm, and (2) a program called BIS (Bio-inspired Identification of Similar agroecozones), an interactive graphical user interface tool that integrates the FGHSON algorithm with Google Earth in order to show zones with similar agroecological characteristics.
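For readers unfamiliar with the competitive-learning step underlying the SOM family mentioned above, the following Python sketch shows classical online SOM training; it is a generic illustration rather than the FGHSON or the thesis code, and the grid size, learning-rate schedule, and data are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: 500 samples with 3 variables.
data = rng.standard_normal((500, 3))

# A small 6 x 6 SOM grid; each unit has a weight vector in data space.
rows, cols, dim = 6, 6, data.shape[1]
weights = rng.standard_normal((rows, cols, dim))
grid_y, grid_x = np.mgrid[0:rows, 0:cols]

n_epochs = 20
sigma0, lr0 = 3.0, 0.5           # initial neighborhood radius and learning rate

for epoch in range(n_epochs):
    sigma = sigma0 * np.exp(-epoch / n_epochs)   # shrink neighborhood over time
    lr = lr0 * np.exp(-epoch / n_epochs)         # decay learning rate
    for x in rng.permutation(data):
        # 1. Find the best-matching unit (BMU) for the sample.
        dists = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # 2. Move the BMU and its grid neighbors toward the sample.
        grid_dist2 = (grid_y - bmu[0]) ** 2 + (grid_x - bmu[1]) ** 2
        h = np.exp(-grid_dist2 / (2 * sigma ** 2))          # neighborhood function
        weights += lr * h[..., None] * (x - weights)

print("trained SOM codebook shape:", weights.shape)
```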