913 results for morphological component analysis (MCA)
Abstract:
Technological progress has made a huge amount of data available at increasing spatial and spectral resolutions. Therefore, the compression of hyperspectral data is an area of active research. In some fields, the original quality of a hyperspectral image cannot be compromised, and in these cases lossless compression is mandatory. The main goal of this thesis is to provide improved methods for the lossless compression of hyperspectral images. Both prediction- and transform-based methods are studied. Two kinds of prediction-based methods are studied. In the first method, the spectra of a hyperspectral image are first clustered and an optimized linear predictor is calculated for each cluster. In the second prediction method, the linear prediction coefficients are not fixed but are recalculated for each pixel. A parallel implementation of the above-mentioned linear prediction method is also presented. Two transform-based methods are also presented. Vector Quantization (VQ) was used together with a new coding of the residual image. In addition, we have developed a new back end for a compression method utilizing Principal Component Analysis (PCA) and the Integer Wavelet Transform (IWT). The performance of the compression methods is compared to that of other compression methods. The results show that the proposed linear prediction methods outperform the previous methods. In addition, a novel fast exact nearest-neighbor search method is developed and used to speed up the Linde-Buzo-Gray (LBG) clustering method.
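As a rough sketch of the first prediction method (Python/NumPy; k-means is used here as a stand-in for LBG clustering, all names are illustrative, and a real lossless codec would also have to transmit the cluster labels and quantized predictor coefficients):

```python
import numpy as np
from sklearn.cluster import KMeans

def clusterwise_prediction_residuals(cube, n_clusters=16, order=2):
    """Cluster the spectra, fit one least-squares linear predictor per
    cluster, and return integer residuals for entropy coding.
    cube: (rows, cols, bands) integer hyperspectral image."""
    X = cube.reshape(-1, cube.shape[-1]).astype(np.float64)  # one spectrum per pixel
    labels = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit_predict(X)
    residuals = np.empty_like(X)
    residuals[:, :order] = X[:, :order]          # first bands stored verbatim
    for c in range(n_clusters):
        S = X[labels == c]
        for b in range(order, X.shape[1]):       # predict band b from the
            A = S[:, b - order:b]                # previous `order` bands
            coef, *_ = np.linalg.lstsq(A, S[:, b], rcond=None)
            residuals[labels == c, b] = S[:, b] - np.rint(A @ coef)
    return residuals.reshape(cube.shape), labels
```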
Abstract:
Aim: Our aim was to understand the interplay of heterogeneous climatic and spatial landscapes in shaping the distribution of nuclear microsatellite variation in burrowing parrots, Cyanoliseus patagonus. Given the marked phenotypic differences between populations of burrowing parrots, we hypothesized an important role of geographical as well as climatic heterogeneity in the population structure of this species. Location: Southern South America. Methods: We applied a landscape genetics approach to investigate the explicit patterns of genetic spatial autocorrelation based on both geography and climate using spatial principal component analysis (sPCA). This necessitated a novel statistical estimation of the species' climatic landscape, considering temperature- and precipitation-based variables separately to evaluate their weight in shaping the distribution of genetic variation in our model system. Results: Geographical and climatic heterogeneity successfully explained molecular variance in burrowing parrots. sPCA divided the species distribution into two main areas, Patagonia and the pre-Andes, which were connected by an area of geographical and climatic transition. Moreover, sPCA revealed cryptic and conservation-relevant genetic structure: the pre-Andean populations and the transition localities were each divided into two groups, each constituting management units for conservation. Main conclusions: sPCA, a method originally developed for spatial genetics, allowed us to unravel the genetic structure related to spatial and climatic landscapes and to visualize these patterns in landscape space. These novel climatic inferences underscore the importance of our modified sPCA approach in revealing how climatic variables can drive cryptic patterns of genetic structure, making the approach potentially useful in the study of any species distributed over a climatically heterogeneous landscape.
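A minimal sketch of the sPCA idea as we understand it (sPCA seeks scores with both high variance and high spatial autocorrelation; the k-nearest-neighbour connectivity graph and all parameter choices below are illustrative assumptions, not the authors' exact pipeline):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def spca_scores(X, coords, k=5):
    """Eigenanalysis of (1/n) X' H X with H = (W + W') / 2, where W is a
    row-normalised k-nearest-neighbour connectivity matrix. Large positive
    eigenvalues give global (positively autocorrelated) structures, large
    negative ones local structures. X: (individuals x allele frequencies)."""
    X = X - X.mean(axis=0)                        # centre the genetic data
    W = kneighbors_graph(coords, k, mode='connectivity').toarray()
    W /= W.sum(axis=1, keepdims=True)             # row-normalise the weights
    H = (W + W.T) / 2
    eigval, eigvec = np.linalg.eigh(X.T @ H @ X / len(X))
    order = np.argsort(eigval)[::-1]              # sort by decreasing eigenvalue
    return eigval[order], X @ eigvec[:, order]    # per-individual scores
```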
Abstract:
We performed a spatiotemporal analysis of a network of 21 Scots pine (Pinus sylvestris) ring-width chronologies in northern Fennoscandia by means of chronology statistics and multivariate analyses. The chronologies are located on both sides (western and eastern) of the Scandes Mountains (67°N-70°N, 15°E-29°E). Growth relationships with temperature, precipitation, and North Atlantic Oscillation (NAO) indices were calculated for the period 1880-1991, and their temporal stability was assessed. Current July temperature and, to a lesser degree, May precipitation are the main growth-limiting factors in the whole study area. However, Principal Component Analysis (PCA) and mean interseries correlations revealed differences in radial growth between the two sides of the Scandes Mountains, attributed to the oceanic-continental climatic gradient in the area. The gradient signal is temporally variable and has strengthened during the second half of the 20th century. Northern Fennoscandia Scots pine growth is positively related to early winter NAO indices prior to the growing season and to late spring NAO. NAO/growth relationships are unstable and have weakened in the second half of the 20th century. Moreover, they are not continuous through the range of NAO values: for early winter, only positive NAO indices enhance tree growth in the next growing season, whereas negative NAO does not; for spring, only negative NAO is correlated with radial growth.
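The stability assessment can be pictured as a moving-window correlation; the sketch below (pandas/NumPy, hypothetical series names) computes Pearson correlations between a chronology and a climate or NAO series over sliding windows:

```python
import numpy as np
import pandas as pd

def moving_window_corr(growth, climate, window=50, step=5):
    """Pearson correlation of a ring-width chronology with a climate series
    (e.g. July temperature or a winter NAO index) in sliding windows, to
    assess the temporal stability of the growth/climate relationship.
    growth, climate: annual pd.Series indexed by year."""
    years = growth.index.intersection(climate.index)
    g, c = growth.loc[years], climate.loc[years]
    rows = []
    for start in range(0, len(years) - window + 1, step):
        win = years[start:start + window]
        rows.append((win[0], win[-1], np.corrcoef(g.loc[win], c.loc[win])[0, 1]))
    return pd.DataFrame(rows, columns=['from', 'to', 'r'])
```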
Abstract:
The sensory, physical and chemical characteristics of 'Douradão' peaches cold-stored in different modified atmosphere packaging (LDPE bags of 30, 50, 60 and 75 µm thickness) were studied. After 14, 21 and 28 days of cold storage (1 ± 1 ºC and 90 ± 5% RH), samples were withdrawn from MAP and kept for 4 days in ambient air for ripening. Descriptive terminology and the sensory profile of the peaches were developed using methodology based on Quantitative Descriptive Analysis (QDA). The assessors consensually defined the sensory descriptors, their respective reference materials and the descriptive evaluation ballot. Fourteen individuals were selected as judges based on their discrimination capacity and reproducibility. Seven descriptors were generated, showing similarities and differences among the samples. The data were analysed by ANOVA, Tukey's test and Principal Component Analysis (PCA). The atmospheres that developed inside the different packaging materials during cold storage differed significantly. The PCA showed that the MA50 and MA60 treatments were characterized by fresh peach flavour, fresh appearance, juiciness and flesh firmness, and were effective in keeping the good quality of 'Douradão' peaches during 28 days of cold storage. The Control and MA30 treatments were characterized by mealiness, while the MA75 treatment showed lower intensity for all attributes evaluated; these treatments were ineffective in maintaining good fruit quality during cold storage. High positive correlation coefficients were found between fresh appearance and flesh firmness (0.95), fresh appearance and juiciness (0.97), and ratio and intensity of fresh peach smell (0.81), as well as high negative correlation coefficients between hue angle and intensity of yellow colour (-0.91), fresh appearance and mealiness (-0.92), juiciness and mealiness (-0.95), and firmness and mealiness (-0.94).
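To illustrate the mechanics of the PCA step on QDA output, a small sketch with entirely hypothetical descriptor means (scikit-learn; the numbers are invented for illustration and are not the study's data):

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical mean panel scores (treatments x descriptors) from a QDA panel.
means = pd.DataFrame(
    {'fresh_flavour': [6.1, 5.8, 3.0, 2.2, 2.5],
     'firmness':      [5.9, 5.5, 2.8, 2.0, 2.4],
     'juiciness':     [6.0, 5.7, 2.9, 2.3, 2.6],
     'mealiness':     [1.2, 1.4, 5.1, 5.6, 2.0]},
    index=['MA50', 'MA60', 'Control', 'MA30', 'MA75'])

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(means))
loadings = pca.components_.T                  # which descriptor drives which axis
print(pca.explained_variance_ratio_)
print(pd.DataFrame(scores, index=means.index, columns=['PC1', 'PC2']))
```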
Abstract:
In traffic accidents involving motorcycles, paint traces can be transferred from the rider's helmet or smeared onto its surface. These traces are usually in the form of chips or smears and are frequently collected for comparison purposes. This research investigates the physical and chemical characteristics of the coatings found on motorcycle helmets. An evaluation of the similarities between helmet and automotive coating systems was also performed. Twenty-seven helmet coatings from 15 different brands and 22 models were considered. One sample per helmet was collected and observed using optical microscopy. FTIR spectroscopy was then used, and seven replicate measurements per layer were carried out to study the variability of each coating system (intravariability). Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) were also performed on the infrared spectra of the clearcoats and basecoats of the data set. The most common systems were composed of two or three layers, consistently involving a clearcoat and a basecoat. The coating systems of helmets with composite shells systematically contained a minimum of three layers. FTIR spectroscopy results showed that acrylic urethane and alkyd urethane were the most frequent binders used for clearcoats and basecoats. A high proportion of the coatings (more than 95%) were differentiated based on microscopic examinations. The chemical and physical characteristics of the coatings allowed the differentiation of all but one pair of helmets of the same brand, model and color. Chemometrics (PCA and HCA) corroborated the classification based on visual comparisons of the spectra and allowed the whole data set to be studied at once (i.e., all spectra of the same layer). Thus, the intravariability of each helmet and its proximity to the others (intervariability) could be more readily assessed. It was also possible to determine the most discriminative chemical variables from the PCA loadings. Chemometrics could therefore be used as a complementary decision-making tool when many spectra and replicates have to be taken into account. Similarities between automotive and helmet coating systems were highlighted, in particular with regard to automotive coating systems on plastic substrates (microscopy and FTIR). However, the primer layer of helmet coatings was shown to differ from the automotive primer. If the paint trace contains this layer, the risk of misclassification (i.e., helmet versus vehicle) is reduced. Nevertheless, a paint examiner should pay close attention to these similarities when analyzing paint traces, especially regarding smears or paint chips presenting an incomplete layer system.
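A compact sketch of the PCA-plus-HCA workflow on replicate spectra (scikit-learn/SciPy; all parameter choices are illustrative, not those of the study):

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

def pca_hca(spectra, n_components=5, n_groups=10):
    """PCA then Ward hierarchical clustering of FTIR spectra (one row per
    replicate measurement). Distances between score vectors let the
    intravariability of one helmet be compared with the intervariability
    between helmets; the PCA loadings flag the most discriminative
    wavenumbers."""
    X = spectra - spectra.mean(axis=0)            # centre each wavenumber
    pca = PCA(n_components=n_components).fit(X)
    scores = pca.transform(X)
    Z = linkage(scores, method='ward')            # HCA on the PCA scores
    groups = fcluster(Z, n_groups, criterion='maxclust')
    return scores, pca.components_, groups
```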
Abstract:
The present study evaluated the sensory quality of chocolates obtained from two cocoa cultivars (PH16 and SR162) resistant to the mould Moniliophthora perniciosa, in comparison with a conventional cocoa cultivar that is not resistant to the disease. The acceptability of the chocolates was assessed so that promising cultivars with relevant sensory and commercial attributes could be indicated to cocoa producers and chocolate manufacturers. The descriptive terminology and the sensory profile of the chocolates were developed by Quantitative Descriptive Analysis (QDA). Ten panelists, selected on the basis of their discriminatory capacity and reproducibility, defined eleven sensory descriptors, their respective reference materials and the descriptive evaluation ballot. The data were analyzed using ANOVA, Principal Component Analysis (PCA) and Tukey's test for comparison of means. The results revealed significant differences among the sensory profiles of the chocolates. Chocolates from the PH16 cultivar were characterized by a darker brown color, more intense chocolate flavor and odor, bitterness and a firmer texture, which are important sensory and commercial attributes. Chocolates from the SR162 cultivar were characterized by greater sweetness and melting quality, and chocolates from the conventional treatment presented sensory characteristics intermediate between those of the other two chocolates. All samples showed high acceptance, but chocolates from the PH16 and conventional cultivars obtained higher purchase intention scores.
Abstract:
Combining theories on social trust and social capital with sociopsychological approaches and applying contextual analyses to Swiss and European survey data, this thesis examines under what circumstances generalised trust, often understood as a public good, may not benefit everyone but instead amplify inequality. The empirical investigation focuses on the Swiss context, but considers different scales of analysis. Two broader questions are addressed. First, might generalised trust imply more or less narrow visions of community and solidarity in different contexts? Applying nonlinear principal component analysis to aggregate indicators, Study 1 explores inclusive and exclusive types of social capital in Europe, measured as regional configurations of generalised trust, civic participation and attitudes towards diversity. Study 2 employs multilevel models to examine how generalised trust, as an individual predisposition and an aggregate climate at the level of Swiss cantons, is linked to equality-directed collective action intention versus radical right support. Second, might high-trust climates impact negatively on disadvantaged members of society, precisely because they reflect a normative discourse of social harmony that impedes recognition of inequality? Study 3 compares how climates of generalised trust at the level of Swiss micro-regions and subjective perceptions of neighbourhood cohesion moderate the negative relationship between socio-economic disadvantage and mental health. Overall, demonstrating beneficial as well as counterintuitive effects of social trust, this thesis proposes a critical and contextualised approach to the sources and dynamics of social cohesion in democratic societies.
Abstract:
In this paper, a new algorithm for blind inversion of Wiener systems is presented. The algorithm is based on minimization of mutual information of the output samples. This minimization is done through a Minimization-Projection (MP) approach, using a nonparametric “gradient” of mutual information.
Abstract:
This paper proposes a very simple method for increasing the algorithm speed when separating sources from PNL mixtures or inverting Wiener systems. The method is based on a pertinent initialization of the inverse system, whose computational cost is very low. The nonlinear part is roughly approximated by pushing the observations to be Gaussian; this provides a surprisingly good approximation even when the basic assumption is not fully satisfied. The linear part is initialized so that the outputs are decorrelated. Experiments show an impressive speed improvement.
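A minimal sketch of this initialization as we read it (rank-based Gaussianization of each channel for the nonlinear part, symmetric whitening for the linear part; the function name is ours):

```python
import numpy as np
from scipy.stats import norm, rankdata

def pnl_init(obs):
    """Cheap initialization of the inverse of a PNL mixture.
    (1) Invert each sensor's unknown monotonic nonlinearity by pushing its
        samples towards Gaussianity (empirical rank -> inverse normal CDF).
    (2) Initialise the linear stage so the outputs are decorrelated
        (symmetric whitening). obs: (n_sensors, n_samples)."""
    n, T = obs.shape
    g = norm.ppf(rankdata(obs, axis=1) / (T + 1))   # marginal Gaussianization
    eigval, E = np.linalg.eigh(np.cov(g))           # covariance eigendecomposition
    W = E @ np.diag(eigval ** -0.5) @ E.T           # whitening matrix
    return W @ g                                    # decorrelated initial outputs
```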
Abstract:
In the present work, we propose a feature reduction system for facial biometric identification, using transform domains such as the discrete cosine transform (DCT) and the discrete wavelet transform (DWT) for parameterization, and Support Vector Machines (SVM) and Neural Networks (NN) as classifiers. Dimensionality reduction was performed with Principal Component Analysis (PCA) and with Independent Component Analysis (ICA). The system presents similar success rates, about 98%, for both the DWT-SVM and the DWT-PCA-SVM configurations. The computational load in training mode is improved owing to the reduced input size and the lower complexity of the classifier.
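The reduction-plus-classification pipeline might look like the following sketch (scikit-learn/SciPy; the block-DCT feature extraction and every parameter are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def dct_features(images, keep=10):
    """2-D DCT of each face image, keeping the keep x keep block of
    low-frequency coefficients as the feature vector."""
    return np.array([dct(dct(img, axis=0, norm='ortho'),
                         axis=1, norm='ortho')[:keep, :keep].ravel()
                     for img in images])

# PCA shrinks the DCT feature vector before the SVM, cutting training cost.
clf = make_pipeline(PCA(n_components=50), SVC(kernel='rbf', C=10))
# clf.fit(dct_features(train_imgs), train_labels)
# clf.score(dct_features(test_imgs), test_labels)
```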
Abstract:
Does Independent Component Analysis (ICA) denature EEG signals? We applied ICA to two groups of subjects (mild Alzheimer patients and control subjects). The aim of this study was to examine whether or not the ICA method can reduce both group differences and within-subject variability. We found that ICA diminished the Leave-One-Out root mean square error (RMSE) of validation (from 0.32 to 0.28), indicative of a reduction of the group difference. More interestingly, ICA reduced the inter-subject variability within each group (σ = 2.54 in the δ range before ICA, σ = 1.56 after; Bartlett p = 0.046 after Bonferroni correction). Additionally, we present a method to limit the impact of human error (≈ 13.8%, with 75.6% inter-cleaner agreement) during ICA cleaning and reduce human bias. These findings suggest a novel usefulness of ICA in clinical EEG in Alzheimer's disease for the reduction of subject variability.
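A bare-bones sketch of ICA-based cleaning (scikit-learn FastICA; deciding which components are artifactual is precisely the manual step whose error and bias the paper tries to bound):

```python
from sklearn.decomposition import FastICA

def ica_clean(eeg, artifact_idx):
    """Decompose multichannel EEG (n_channels, n_samples) with FastICA,
    zero the components judged artifactual, and project back to the
    channel space."""
    ica = FastICA(n_components=eeg.shape[0], whiten='unit-variance',
                  random_state=0)
    sources = ica.fit_transform(eeg.T)        # (n_samples, n_components)
    sources[:, artifact_idx] = 0.0            # remove artifact components
    return ica.inverse_transform(sources).T   # cleaned EEG
```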
Abstract:
In this paper we propose the use of the independent component analysis (ICA) [1] technique for improving the classification rate of decision trees and multilayer perceptrons [2], [3]. Using ICA in the preprocessing stage makes the structure of both classifiers simpler and therefore improves their generalization properties. The hypothesis behind the proposed preprocessing is that ICA will transform the feature space into a space where the components are independent and aligned with the axes, and therefore better adapted to the way a decision tree is constructed. The inference of the weights of a multilayer perceptron will also be much easier because the gradient search in the weight space will follow independent trajectories. The result is that the classifiers are less complex and on some databases the error rate is lower. This idea is also applicable to regression.
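The hypothesis is easy to probe in a few lines; the sketch below uses a public dataset as a stand-in, so the outcome is merely illustrative of the comparison, not a reproduction of the paper's results:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import FastICA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

plain = DecisionTreeClassifier(random_state=0)
with_ica = make_pipeline(StandardScaler(),
                         FastICA(n_components=10, random_state=0, max_iter=1000),
                         DecisionTreeClassifier(random_state=0))

# Axis-aligned splits should suit independent, axis-aligned components.
print('raw features:', cross_val_score(plain, X, y, cv=5).mean())
print('ICA features:', cross_val_score(with_ica, X, y, cv=5).mean())
```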
Abstract:
Background: Differences in the distribution of genotypes between individuals of the same ethnicity are an important confounding factor commonly undervalued in typical association studies conducted in radiogenomics. Objective: To evaluate the genotypic distribution of SNPs in a wide set of Spanish prostate cancer patients in order to determine the homogeneity of the population and to disclose potential bias. Design, Setting, and Participants: A total of 601 prostate cancer patients from Andalusia, the Basque Country, the Canary Islands and Catalonia were genotyped for 10 SNPs located in 6 different genes associated with DNA repair: XRCC1 (rs25487, rs25489, rs1799782), ERCC2 (rs13181), ERCC1 (rs11615), LIG4 (rs1805388, rs1805386), ATM (rs17503908, rs1800057) and P53 (rs1042522). SNP genotyping was performed on a Biotrove OpenArray NT Cycler. Outcome Measurements and Statistical Analysis: Comparisons of genotypic and allelic frequencies among populations, as well as haplotype analyses, were performed using the web-based environment SNPator. Principal component analysis was performed using the SnpMatrix and XSnpMatrix classes and methods implemented as an R package. Unsupervised hierarchical clustering of SNPs was carried out using MultiExperiment Viewer. Results and Limitations: We observed that the genotype distribution of 4 out of 10 SNPs was statistically different among the studied populations, showing the greatest differences between Andalusia and Catalonia. These observations were confirmed by cluster analysis, principal component analysis and the differential distribution of haplotypes among the populations. Because tumor characteristics have not been taken into account, it is possible that some polymorphisms may influence tumor characteristics in the same way that they may pose a risk factor for other disease characteristics. Conclusion: Differences in the distribution of genotypes within different populations of the same ethnicity could be an important confounding factor responsible for the lack of validation of SNPs associated with radiation-induced toxicity, especially when extensive meta-analyses with subjects from different countries are carried out.
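A sketch of the principal component check for stratification (the 0/1/2 minor-allele coding and Patterson-style scaling are common practice assumed here, not necessarily the exact SnpMatrix-based pipeline of the study):

```python
import numpy as np
from sklearn.decomposition import PCA

def stratification_pca(genotypes, n_components=2):
    """PCA of a (patients x SNPs) genotype matrix coded 0/1/2 (copies of
    the minor allele; no missing values and polymorphic SNPs assumed).
    Clustering of the leading components by region would flag the kind of
    within-ethnicity stratification that confounds association studies."""
    G = np.asarray(genotypes, dtype=float)
    p = G.mean(axis=0) / 2                        # estimated allele frequencies
    G = (G - 2 * p) / np.sqrt(2 * p * (1 - p))    # centre and scale each SNP
    return PCA(n_components=n_components).fit_transform(G)
```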
Abstract:
Objective: To compare lower incisor dentoalveolar compensation and mandibular symphysis morphology among Class I and Class III malocclusion patients with different vertical facial skeletal patterns. Materials and Methods: Lower incisor extrusion and inclination, as well as buccal (LA) and lingual (LP) cortex depth and mandibular symphysis height (LH), were measured on 107 lateral cephalometric radiographs of adult patients without prior orthodontic treatment. In addition, malocclusion type (Class I or III) and vertical facial skeletal pattern were considered. Related variables were reduced through a principal component analysis (PCA). Simple regression equations and multivariate analyses of variance were also used. Results: Incisor mandibular plane angle (P < .001) and extrusion (P = .03) values showed significant differences between the sagittal malocclusion groups. Variations in the mandibular plane correlate negatively with LA (Class I P = .03 and Class III P = .01) and positively with LH (Class I P = .01 and Class III P = .02) in both groups. Within the Class III group, there was a negative correlation between the mandibular plane and LP (P = .02). PCA showed that the tendency toward a long face causes the symphysis to elongate and narrow; in Class III patients, alveolar narrowing is also found in normal faces. Conclusions: Vertical facial pattern is a significant factor in mandibular symphysis alveolar morphology and lower incisor positioning, both for Class I and Class III patients. Short-faced Class III patients have widened alveolar bone, whereas for long-faced and normal-faced Class III patients, natural compensation elongates the symphysis and influences lower incisor position.
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves.
In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
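The core of the functional error model can be sketched as follows (classical PCA on discretized curves as a stand-in for FPCA, with a linear regression between scores; names and the regression choice are our assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def functional_error_model(proxy_train, exact_train, proxy_new, n_pc=3):
    """Compress proxy and exact response curves with PCA, regress exact
    scores on proxy scores over the training subset, then predict the
    'expected' exact curves for realizations where only the cheap proxy
    was run. *_train: (n_train, n_times) curves; proxy_new: (n_new, n_times)."""
    pca_p = PCA(n_components=n_pc).fit(proxy_train)
    pca_e = PCA(n_components=n_pc).fit(exact_train)
    reg = LinearRegression().fit(pca_p.transform(proxy_train),
                                 pca_e.transform(exact_train))
    scores = reg.predict(pca_p.transform(proxy_new))
    return pca_e.inverse_transform(scores)        # corrected response curves
```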