900 results for ETL Conceptual and Logical Modeling


Relevance:

100.00% 100.00%

Publisher:

Abstract:

As a result of forensic investigations of pavement problems across Iowa, a research study was developed to provide solutions through better management and optimization of the available pavement geotechnical materials and through ground improvement, soil reinforcement, and other soil treatment techniques. The work was carried out through simple laboratory experiments, such as particle size analysis, plasticity tests, compaction tests, permeability tests, and strength tests. A review of the problems suggested three areas of study: pavement cracking due to improper management of pavement geotechnical materials, permeability of mixed subgrade soils, and settlement of soil above pipes due to improper compaction of the backfill. These led to the following three investigations: (1) optimization and management of earthwork materials through general soil mixing of various select and unsuitable soils, with a specific example of optimizing materials in earthwork construction by soil mixing; (2) an investigation of the saturated permeability of compacted glacial till for validation and prediction with the Enhanced Integrated Climatic Model (EICM); and (3) a field investigation and numerical modeling of culvert settlement. For each area of study, a literature review was conducted, research data were collected and analyzed, and findings and conclusions were drawn. It was found that optimum mixtures of select and unsuitable soils can be defined that allow the use of unsuitable materials in embankment and subgrade locations. An improved model of saturated hydraulic conductivity was proposed for glacial soils from Iowa. Proper trench backfill compaction or the use of flowable mortar will reduce the potential for a bump developing above culverts.

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it mainly concerns the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical coordinates and additional relevant spatially referenced features ("geo-features"). They are well suited to being implemented as predictive engines in decision support systems, for purposes of environmental data mining ranging from pattern recognition to modeling and prediction to automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is the initial and very important part of any data analysis. In this thesis, exploratory spatial data analysis (ESDA) is considered using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool for geostatistical analysis of anisotropic spatial correlations and helps to detect spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, automatic mapping of geospatial data, for which the general regression neural network is proposed as an efficient model. Performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four parts: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; classification of soil types and hydrogeological units; decision-oriented mapping with uncertainties; and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with care taken to make the software user friendly and easy to use.
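The k-NN method proposed above for ESDA can be sketched in a few lines. This is a generic illustration with invented toy data, not the thesis's Machine Learning Office software:

```python
import numpy as np

def knn_predict(coords, values, query, k=3):
    """Predict at query locations as the mean of the k nearest observed values.

    coords : (n, 2) training coordinates
    values : (n,) observed values
    query  : (m, 2) prediction locations
    """
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    query = np.asarray(query, dtype=float)
    # Pairwise Euclidean distances between query and training points
    d = np.linalg.norm(query[:, None, :] - coords[None, :, :], axis=2)
    # Indices of the k nearest neighbours for each query point
    idx = np.argsort(d, axis=1)[:, :k]
    return values[idx].mean(axis=1)

# Toy example: four observations at the corners of a unit square
coords = [(0, 0), (0, 1), (1, 0), (1, 1)]
values = [1.0, 2.0, 3.0, 4.0]
print(knn_predict(coords, values, [(0.5, 0.5)], k=4))  # mean of all four values
```

Varying k and cross-validating the prediction error is the usual way this method doubles as an exploratory tool: the k that minimizes the error says something about the smoothness of the spatial pattern.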

Abstract:

The Atlas Mountains in Morocco are considered type examples of intracontinental chains, with high topography that contrasts with moderate crustal shortening and thickening. Whereas recent geological studies and geodynamic modeling have suggested the existence of dynamic topography to explain this apparent contradiction, modern geophysical data at the crustal scale to corroborate this hypothesis have been lacking. Newly acquired magnetotelluric data image the electrical resistivity distribution of the crust from the Middle Atlas to the Anti-Atlas, crossing the tabular Moulouya Plain and the High Atlas. All the units show distinct electrical signatures throughout the crust, reflecting the tectonic history of each. In the upper crust, electrical resistivity values may be associated with sediment sequences in the Moulouya and Anti-Atlas and with crustal-scale fault systems in the High Atlas developed during Cenozoic times. In the lower crust, the low-resistivity anomaly found below the Moulouya Plain, together with other geophysical (low velocity anomaly, lack of earthquakes, minimum Bouguer anomaly) and geochemical (Neogene-Quaternary intraplate alkaline volcanic fields) evidence, suggests the existence of a small degree of partial melt at the base of the lower crust. The low-resistivity anomaly found below the Anti-Atlas may be associated with a relict subduction of Precambrian oceanic sediments, or with minerals precipitated during the release of fluids from the mantle during the accretion of the Anti-Atlas to the West African Supercontinent in the Pan-African orogeny (ca. 685 Ma).

Abstract:

TWEAK (TNF homologue with weak apoptosis-inducing activity) and Fn14 (fibroblast growth factor-inducible protein 14) are members of the tumor necrosis factor (TNF) ligand and receptor superfamilies. Having observed that Xenopus Fn14 cross-reacts with human TWEAK despite its relatively low sequence homology to human Fn14, we examined the conservation of the tertiary fold and binding interfaces between the two species. Our results, combining NMR solution structure determination, binding assays, extensive site-directed mutagenesis and molecular modeling, reveal that, in addition to the known and previously characterized β-hairpin motif, the helix-loop-helix motif makes an essential contribution to the receptor/ligand binding interface. We further discuss the insight provided by the structural analyses into how the cysteine-rich domains of the TNF receptor superfamily may have evolved over time. DATABASE: Structural data are available in the Protein Data Bank/BioMagResBank databases under accession codes 2KMZ, 2KN0 and 2KN1, and 17237, 17247 and 17252. STRUCTURED DIGITAL ABSTRACT: TWEAK binds to hFn14 by surface plasmon resonance; xeFn14 binds to TWEAK by enzyme-linked immunosorbent assay; TWEAK binds to xeFn14 by surface plasmon resonance; hFn14 binds to TWEAK by enzyme-linked immunosorbent assay.

Abstract:

The main objective of this study is to evaluate whether four software alternatives are adequate tools for production scheduling and which of the tools suits the commissioning company. Secondary objectives are to describe the current and target states of production scheduling through process modeling, to determine the user needs for the tool, and to define prioritized selection criteria for it. The theoretical part of the study examines the logic and challenges of production scheduling. The selection of scheduling software is considered in parallel with process modeling, and the scheduling software alternatives and the methods for determining user needs are reviewed. The empirical part establishes the relationship between the study and the commissioning company's strategy. User needs are determined through interviews and analyzed with a QFD matrix. The current- and target-state production scheduling processes of the commissioning company are modeled so that the suitability of the software as a tool supporting the scheduling process can be evaluated. The results of the study are prioritized selection criteria for the scheduling tool, i.e. the most important functional properties derived from the user needs, an assessment of the system vendors, and recommendations for further actions and further research.

Abstract:

Finland has large forest fuel resources, but their use for energy production has been low, apart from small-scale use in heating. According to national action plans and programs for promoting wood energy, the utilization of these resources will be multiplied over the next few years. The most significant part of this growth will be based on forest fuels produced from the logging residues of regeneration fellings and used in industrial and municipal power and heating plants. The availability of logging residues was analyzed by means of resource and demand approaches in order to identify the regions most suitable for increasing forest fuel usage. The analysis included availability and supply cost comparisons between power plant sites, with resources allocated in a least-cost manner, and under a predefined power plant structure with demand and supply constraints. Spatial analysis of worksite factors and regional geographies was carried out in a GIS model environment using geoprocessing and cartographic modeling tools. According to the results, the cost competitiveness of the forest fuel supply must be improved if the stated objectives are to be achieved in the near future. The availability and supply costs of forest fuels varied spatially and were very sensitive to worksite factors and transport distances. According to the site-specific analysis, the supply potential between different locations can vary severalfold. However, for technical and economic reasons of fuel supply, and because of the dense power plant infrastructure, the supply potential is limited at the plant level. The potential and supply cost calculations therefore depend on site-specific matters, where the regional characteristics of resources and infrastructure should be taken into account, for example by using the GIS modeling approach constructed in this study.


Abstract:

The partial least squares technique (PLS) has been touted as a viable alternative to latent variable structural equation modeling (SEM) for evaluating theoretical models in the differential psychology domain. We bring some balance to the discussion by reviewing the broader methodological literature to highlight: (1) the misleading characterization of PLS as an SEM method; (2) limitations of PLS for global model testing; (3) problems in testing the significance of path coefficients; (4) extremely high false positive rates when using empirical confidence intervals in conjunction with a new "sign change correction" for path coefficients; (5) misconceptions surrounding the supposedly superior ability of PLS to handle small sample sizes and non-normality; and (6) conceptual and statistical problems with formative measurement and the application of PLS to such models. We also reanalyze the dataset provided by Willaby et al. (2015; doi:10.1016/j.paid.2014.09.008) to highlight the limitations of PLS. Our broader review and analysis of the available evidence make it clear that PLS is not useful for statistical estimation and testing.

Abstract:

While the supply of water to dry or arid mountain regions has long been a major challenge, the ongoing processes of climatic and socio-economic change currently affecting the hydrosystems of the Alps raise the spectre of renewed pressure on water resources and possible local shortages. In such a context, questions relating to the fair distribution of water are all the more sensitive given the tendency to neglect the social dimension of sustainability. The present paper makes both a conceptual and an empirical contribution to this debate by analysing a system of distribution with long experience of water scarcity management: the community governance models traditionally linked to the irrigation channels, or bisses, typical of the Swiss Alpine canton of Valais. More specifically, we evaluate these models in terms of accessibility and equity, the characteristics we use to operationalize the notion of 'fair distribution'. We examine these dimensions in three case studies with a view to highlighting the limitations of the aforementioned models. Indeed, despite their cooperative and endogenous nature, they tend not only to exclude certain members of the population but also to reproduce rather than reduce social inequalities within the community. In general, these results challenge the rosy picture generally found in the literature on these community governance models.

Abstract:

In this book, I apply a philosophical approach to the study of the precautionary principle in environmental (and health) risk decision-making. The principle says that unacceptable environmental and health risks should be anticipated and forestalled before the damage comes to fruition, even if scientific understanding of the risks is inadequate. The study consists of introductory chapters, a summary, and seven original publications which aim at explicating the principle, critically analysing the debate on the principle, and constructing a basis for its well-founded use. Papers I-V present the main thesis of this research; the two last papers widen the discussion in new directions. The starting question is how well the currently embraced precautionary principle stands up to critical philosophical scrutiny. The approach employed is analytical: mainly conceptual, argumentative and ethical. The study draws upon Anglo-American style philosophy on the one hand, and upon sources of law as well as concrete cases and decision-making practices at the European Union level and in its member countries on the other. The framework is environmental (and health) risk governance, including the related law and policy. The main thesis of this study is that the debate on the precautionary principle needs to be shifted from the question of whether the principle (or its weak or strong interpretation) is well-grounded in general to questions about the theoretical plausibility and the ethical and socio-political justifiability of specific understandings of the principle. The real picture of the precautionary principle is more complex than that presumed in much of the current academic, political and public debate surrounding it. While certain presumptions and interpretations of the principle are found to be sound, others are theoretically flawed or involve serious practical problems.
The analysis discloses conceptual and ethical presumptions and elementary understandings of the precautionary principle, critically assesses current practices invoked in the name of the precautionary principle and public participation, and seeks to build bridges between precaution, engagement and philosophical ethics. Hence, it is intended to provide a sound basis upon which subsequent academic scrutiny can build.

Abstract:

The aim of this thesis was to study network structures and modularity among biofuel heating system manufacturers in the Finnish bioenergy sector, drawing on the perspectives of numerous Finnish bioenergy specialists. The study is qualitative: the research material was gathered through semi-structured theme interviews during May and June 2010. The research methodology combines conceptual and action-oriented approaches. Networks, value nets, and modularity were studied from different perspectives. Three network and platform strategies were identified and a general network structure was formed. Moreover, the benefits and disadvantages of networks and modularity among biofuel heating system manufacturers were illustrated. The analysis provides a comprehensive picture of the industry. The results were constructed by putting existing theories into practice, and recommendations for the biofuel heating system manufacturers were given. The results can be considered beneficial because the number of previous studies on the subject is relatively small, and the reliability of the study is high because the set of interviews was comprehensive.

Abstract:

The developing energy markets and rising energy system costs have sparked the need to find new forms of energy production and to increase its self-sufficiency. One alternative is gasification, whose principles have been known for decades, but only recently has the technology become a true alternative. To meet the requirements of modern energy production methods, the phenomenon must be studied thoroughly. To understand the gasification process better and to optimize it for ecology and energy efficiency, effective and reliable modeling tools for gasifiers are needed. The main aims of this work have been to understand gasification as a process and to extend an existing three-dimensional circulating fluidized bed modeling tool to the modeling of gasification. The model is applied to two gasification processes of 12 and 50 MWth, and the modeling results are compared with measurements and reviewed. The work was done in co-operation with Lappeenranta University of Technology and Foster Wheeler Energia Oy.

Abstract:

Programming and mathematics are core areas of computer science (CS) and consequently also important parts of CS education. Introductory instruction in these two topics is, however, not without problems. Studies show that CS students find programming difficult to learn and that teaching mathematical topics to CS novices is challenging. One reason for the latter is the disconnection between mathematics and programming found in many CS curricula, which results in students not seeing the relevance of the subject for their studies. In addition, reports indicate that students' mathematical capability and maturity levels are dropping. The challenges faced when teaching mathematics and programming at CS departments can also be traced back to gaps in students' prior education. In Finland the high school curriculum does not include CS as a subject; instead, focus is on learning to use the computer and its applications as tools. Similarly, many of the mathematics courses emphasize application of formulas, while logic, formalisms and proofs, which are important in CS, are avoided. Consequently, high school graduates are not well prepared for studies in CS. Motivated by these challenges, the goal of the present work is to describe new approaches to teaching mathematics and programming aimed at addressing these issues: Structured derivations is a logic-based approach to teaching mathematics, where formalisms and justifications are made explicit. The aim is to help students become better at communicating their reasoning using mathematical language and logical notation at the same time as they become more confident with formalisms. The Python programming language was originally designed with education in mind, and has a simple syntax compared to many other popular languages. 
The aim of using it in instruction is to address algorithms and their implementation in a way that allows the focus to be on learning algorithmic thinking and programming rather than a complex syntax. Invariant-based programming is a diagrammatic approach to developing programs that are correct by construction. The approach is based on elementary propositional and predicate logic, and makes explicit the underlying mathematical foundations of programming. The aim is also to show how mathematics in general, and logic in particular, can be used to create better programs.
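The spirit of invariant-based programming can be conveyed informally even in Python, with the invariant written as a runtime assertion. This is a toy example of our own, not from the thesis, and Python assertions are only a shadow of the diagrammatic, proof-based approach described above:

```python
def sum_first(n):
    """Sum 0 + 1 + ... + (n - 1), with the loop invariant made explicit."""
    total, i = 0, 0
    while i < n:
        # Invariant: total equals the sum of the first i natural numbers
        assert total == i * (i - 1) // 2
        total += i
        i += 1
    # Invariant at exit (i == n) gives the closed-form result
    assert total == n * (n - 1) // 2
    return total

print(sum_first(10))  # -> 45
```

In the actual approach the invariant is stated before the loop body is written and the correctness argument is carried out in predicate logic, not checked at runtime.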

Abstract:

Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
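The random-walk Metropolis algorithm at the core of MCMC parameter estimation can be sketched as follows. This is a generic one-parameter illustration with synthetic data, not the thesis's code, and the flat-prior Gaussian model is an assumption made for the example:

```python
import numpy as np

def metropolis(log_post, theta0, n_iter=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a one-dimensional posterior.

    log_post : function returning the unnormalised log posterior
    theta0   : starting point of the chain
    step     : standard deviation of the Gaussian proposal
    """
    rng = np.random.default_rng(seed)
    chain = np.empty(n_iter)
    theta, lp = theta0, log_post(theta0)
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Toy problem: estimate the mean of Gaussian data under a flat prior
rng = np.random.default_rng(1)
data = rng.normal(3.0, 1.0, size=50)
log_post = lambda m: -0.5 * np.sum((data - m) ** 2)  # log-likelihood, sigma known
chain = metropolis(log_post, theta0=0.0)
print(chain[1000:].mean())  # after burn-in, close to the sample mean
```

Discarding the first part of the chain as burn-in and inspecting the remaining samples gives the full posterior distribution of the parameter, rather than a single point estimate with a Gaussian approximation.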

Abstract:

Precision agriculture and soil-landscape techniques allow areas to be delimited for localized management, permitting localized application of agricultural inputs and thereby contributing to the preservation of natural resources. The objective of this work was therefore to characterize the spatial variability of chemical properties and clay content, in the context of the soil-landscape relationship, in a Latosol (Oxisol) under citrus cultivation. Soil samples were collected at a depth of 0.0-0.2 m in an 83.5-ha area planted with citrus, on a 50-m interval grid, with 129 points in concave terrain and 206 points in flat terrain, for a total of 335 points. The variables expressing the chemical characteristics and clay content of the soil were analyzed with descriptive statistics and with geostatistical modeling of semivariograms to produce kriging maps. The range values and kriging maps indicated higher variability in the concave landform (top segment) than in the flat landform (lower slope and hillside segments). Identifying the different landforms proved efficient for understanding the spatial variability of the chemical properties and clay content of soil under citrus cultivation.
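The experimental semivariogram underlying the geostatistical modeling described above can be sketched as follows. This is a generic Matheron estimator on invented toy data, not the study's actual analysis:

```python
import numpy as np

def experimental_semivariogram(coords, values, lags):
    """Classical (Matheron) semivariogram estimator.

    For each lag bin [lo, hi), average 0.5 * (z_i - z_j)^2 over all point
    pairs whose separation distance falls in that bin.
    """
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    n = len(values)
    i, j = np.triu_indices(n, k=1)                     # all unique point pairs
    h = np.linalg.norm(coords[i] - coords[j], axis=1)  # pair separation distances
    sq = 0.5 * (values[i] - values[j]) ** 2            # semivariance contributions
    gamma = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        mask = (h >= lo) & (h < hi)
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Toy transect: five samples on a line
coords = [(x, 0.0) for x in range(5)]
values = [0.0, 1.0, 0.0, 1.0, 2.0]
print(experimental_semivariogram(coords, values, lags=[0.5, 1.5, 2.5]))
```

In practice a theoretical model (e.g. spherical or exponential) is fitted to these binned estimates, and the fitted range, sill and nugget parameters drive the kriging interpolation used to produce the maps.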