100 results for Digital Elevation Models
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
The aim of this study was to select a digital elevation model and horizontal resolution for interpolating annual air temperature across the state of Alagoas by means of multiple linear regression. A multiple linear regression model was fitted to series (11 to 34 years) of annual air temperatures from 28 weather stations in the states of Alagoas, Bahia, Pernambuco and Sergipe, in Northeast Brazil, as a function of latitude, longitude and altitude. The SRTM and GTOPO30 elevation models were used in the analysis, with original resolutions of 90 and 900 m, respectively. The SRTM model was also resampled to horizontal resolutions of 125, 250, 500, 750 and 900 m. To spatialize the annual mean air temperature over the state of Alagoas, a multiple linear regression model was applied for each elevation model and spatial resolution on a latitude-longitude grid. In Alagoas, estimates based on SRTM data yielded a lower standard error of estimate (0.57 °C) and less dispersion (r² = 0.62) than those obtained from GTOPO30 (0.93 °C and r² = 0.20). Across SRTM resolutions, no significant differences were observed in either the standard error (0.55 °C at 750 m to 0.58 °C at 250 m) or the dispersion (r² = 0.60 at 500 m to 0.65 at 750 m). The spatialization of annual air temperature in Alagoas via multiple regression applied to SRTM data thus showed higher concordance than that obtained with GTOPO30, independent of the spatial resolution.
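As a concrete illustration of the spatialization approach this abstract describes, the following minimal Python sketch fits T = f(latitude, longitude, altitude) by least squares, reports the standard error of estimate and r², and evaluates the fitted model on a DEM grid. All station data and coefficients here are synthetic stand-ins, not values from the study.

import numpy as np

rng = np.random.default_rng(0)
n = 28                                       # number of weather stations
lat = rng.uniform(-11.0, -8.5, n)            # degrees
lon = rng.uniform(-38.5, -35.0, n)           # degrees
alt = rng.uniform(5.0, 800.0, n)             # metres, from a DEM
temp = 28.0 + 0.4 * lat - 0.1 * lon - 0.006 * alt + rng.normal(0, 0.3, n)

# Least-squares fit of T = b0 + b1*lat + b2*lon + b3*alt
X = np.column_stack([np.ones(n), lat, lon, alt])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)

# Goodness of fit: standard error of estimate and r^2
resid = temp - X @ beta
see = np.sqrt(np.sum(resid**2) / (n - X.shape[1]))
r2 = 1.0 - np.sum(resid**2) / np.sum((temp - temp.mean())**2)
print(f"SEE = {see:.2f} degC, r2 = {r2:.2f}")

# Spatialization: evaluate the model on every cell of a lat/lon DEM grid
glat, glon = np.meshgrid(np.linspace(-11, -8.5, 50),
                         np.linspace(-38.5, -35, 50), indexing="ij")
dem = rng.uniform(0, 800, glat.shape)        # stand-in for SRTM elevations
tmap = beta[0] + beta[1] * glat + beta[2] * glon + beta[3] * dem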
Abstract:
Base-level maps (or "isobase maps", as originally defined by Filosofov, 1960) express a relationship between valley order and topography. The base-level map can be seen as a "simplified" version of the original topographic surface, from which the "noise" of low-order stream erosion has been removed. This method is able to identify areas with possible tectonic influence even within lithologically uniform domains. Base-level maps have recently been applied in semi-detailed-scale (e.g., 1:50 000 or larger) morphotectonic analysis. In this paper, we present an evaluation of the method's applicability to regional-scale analysis (e.g., 1:250 000 or smaller). A test area was selected in northern Brazil, at the lower course of the Araguaia and Tocantins rivers. The drainage network extracted from SRTM30_PLUS DEMs with a spatial resolution of approximately 900 m was visually compared with available topographic maps and considered compatible with a 1:1 000 000 scale. Regarding the interpretation of regional-scale morphostructures, the map constructed with 2nd- and 3rd-order valleys was considered to present the best results. Some of the interpreted base-level anomalies correspond to important shear zones and geological contacts present in the 1:5 000 000 Geological Map of South America. Others have no correspondence with mapped Precambrian structures and are considered to represent younger, probably neotectonic, features. A strong E-W orientation of the base-level lines over the inflexion of the Araguaia and Tocantins rivers suggests a major drainage capture. A N-S topographic swath profile over the Tocantins and Araguaia rivers reveals a topographic pattern which, allied with seismic data showing a roughly N-S direction of extension in the area, leads us to interpret this lineament as an E-W, southward-dipping normal fault. There is also a good visual correspondence between the base-level lineaments and geophysical anomalies. A NW-SE lineament in the southeast of the study area partially corresponds to the northern border of the Mosquito lava field, of Jurassic age, and a NW-SE lineament traced in the northeastern sector of the study area can be interpreted as the Picos-Santa Ines lineament, identifiable in geophysical maps but with little expression in hypsometric or topographic maps.
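The central construction, a surface interpolated through the elevations of valleys of selected Strahler orders, can be sketched in Python as below. This is a minimal, assumption-laden illustration: the DEM and stream orders are random stand-ins, whereas the study derives them from SRTM30_PLUS and an extracted drainage network.

import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
dem = rng.uniform(0.0, 500.0, (150, 150))      # stand-in DEM
order = rng.integers(1, 6, dem.shape)          # stand-in Strahler valley orders

# Keep only cells on 2nd- and 3rd-order valleys (the best-performing choice
# reported above) and interpolate a surface through their elevations
rows, cols = np.where((order == 2) | (order == 3))
gr, gc = np.mgrid[0:dem.shape[0], 0:dem.shape[1]]
isobase = griddata((rows, cols), dem[rows, cols], (gr, gc), method="linear")

# Steps and alignments in the base-level surface are candidate structural
# lineaments; the residual shows the removed low-order erosional "noise"
residual = dem - isobase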
Abstract:
Surface roughness is an important geomorphological variable which has been used in the Earth and planetary sciences to infer material properties, current/past processes, and the time elapsed since formation. No single definition exists; however, within the context of geomorphometry, we use surface roughness as an expression of the variability of a topographic surface at a given scale, where the scale of analysis is determined by the size of the landforms or geomorphic features of interest. Six techniques for the calculation of surface roughness were selected for an assessment of the parameter's behavior at different spatial scales and data-set resolutions. Area ratio operated independently of scale, providing consistent results across spatial resolutions. Vector dispersion produced results with increasing roughness and homogenization of terrain at coarser resolutions and larger window sizes. Standard deviation of residual topography highlighted local features and did not detect regional relief. Standard deviation of elevation correctly identified breaks of slope and was good at detecting regional relief. Standard deviation of slope (SD(slope)) also correctly identified smooth sloping areas and breaks of slope, providing the best results for geomorphological analysis. Standard deviation of profile curvature identified the breaks of slope, although not as strongly as SD(slope), and it is sensitive to noise and spurious data. In general, SD(slope) offered good performance at a variety of scales, while the simplicity of calculation is perhaps its single greatest benefit.
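Of the six techniques compared, the best-performing one (SD(slope)) is simple to compute. A minimal Python sketch, assuming an illustrative 90 m grid spacing and a 5 x 5 moving window:

import numpy as np
from scipy.ndimage import generic_filter

rng = np.random.default_rng(2)
dem = rng.normal(0, 1, (100, 100)).cumsum(axis=0).cumsum(axis=1)  # toy DEM
cell = 90.0                                  # grid spacing in metres (assumed)

# Slope magnitude (degrees) from finite-difference gradients
gy, gx = np.gradient(dem, cell)
slope = np.degrees(np.arctan(np.hypot(gx, gy)))

# Surface roughness: local standard deviation of slope in a 5 x 5 window
sd_slope = generic_filter(slope, np.std, size=5)
print(sd_slope.mean())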
Abstract:
The evolution of paleo-incised valleys in the São Paulo State region of the southeastern Brazilian continental shelf is presented in this study in relation to post-Last Glacial Maximum (LGM) sea-level rise, based on the submarine topography modeled by a detailed digital elevation model and on evidence from high-resolution seismic profiles. The hypothesis guiding this study is that the set of paleo-valley characteristics (i.e. the fluvial parameters of modern coastal drainage systems, and the topographic shape and dimensions of the valleys and of the subsurface channels) may indicate aspects of the relative influence of the fluvial regime and eustatic variation on the geomorphological-stratigraphic record. Models described in the literature sustain the view that faster marine transgressions tend to increase erosion in estuaries, which may explain the lack of a paleo-drainage record, both in the topography and in the subsurface, in areas with wider shelves. On narrower shelves with a higher slope angle, by contrast, the transgression process can preserve, or even enhance, the incised-valley record during shoreface retreat. In the area studied, we observed that the dimensions and form of the continental shelf vary from the northern to the southern part of the area, affecting aspects of the geomorphological record of the submerged incised valleys.
Abstract:
Question: How can the coexistence of savanna and forest in Amazonian areas with relatively uniform climates be explained? Location: Eastern Marajo Island, northeast Amazonia, Brazil. Methods: The study integrated floristic analysis, terrain morphology, sedimentology and δ13C of soil organic matter. Floristic analysis involved rapid ecological assessment of 33 sites and determination of occurrence, specific richness, hierarchical distribution and a matrix of floristic similarity between paired vegetation types. Terrain characterization was based on analysis of Landsat images using a 4(R), 5(G), 7(B) band composition and a digital elevation model (DEM). Sedimentology involved field descriptions of surface and core sediments. Finally, radiocarbon dating and δ13C analysis of soil-profile organic matter along natural forest-savanna ecotones were undertaken. Results: Slight tectonic subsidence in eastern Marajo Island favours seasonal flooding, making the area unsuitable for forest growth. However, this area displays slightly convex-up, sinuous morphologies related to paleochannels, covered by forest. Terra-firme lowland forests are expanding from west to east, preferentially occupying paleochannels and replacing savanna. Slack running water during channel abandonment leads to the disappearance of varzea/gallery forest at channel margins. Long-abandoned channels sustain continuous terra-firme forests, because there has been more time for more species to establish. Recently abandoned channels have had less time to become sites for widespread tree development, and are either not vegetated or covered by savanna. Conclusion: Landforms in eastern Marajo Island reflect changes in the physical environment due to reactivation of tectonic faults during the latest Quaternary. This promoted a dynamic history of channel abandonment, which controlled a set of interrelated parameters (soil type, topography, hydrology) that determined species location. Including a geological perspective in paleoenvironmental reconstruction can increase understanding of plant distribution in Amazonia.
Abstract:
Our objective was to develop a methodology to predict soil fertility using visible-near-infrared (vis-NIR) diffuse reflectance spectra and terrain attributes derived from a digital elevation model (DEM). Specifically, our aims were to: (i) assemble a minimum data set to develop a soil fertility index for sugarcane (Saccharum officinarum L.) (SFI-SC) for biofuel production in tropical soils; (ii) construct a model to predict the SFI-SC using soil vis-NIR spectra and terrain attributes; and (iii) produce a soil fertility map for our study area and assess it by comparison with a green vegetation index (GVI). The study area was 185 ha located in São Paulo State, Brazil. In total, 184 soil samples were collected and analyzed for a range of soil chemical and physical properties. Their vis-NIR spectra were collected from 400 to 2500 nm. The Shuttle Radar Topography Mission 3-arcsec (90-m resolution) DEM of the area was used to derive 17 terrain attributes. A minimum data set of soil properties was selected to develop the SFI-SC. The SFI-SC consisted of three classes: Class 1, highly fertile soils; Class 2, fertile soils; and Class 3, the least fertile soils. It was derived heuristically, using conditional rules and expert knowledge. The index was modeled from the spectra and terrain data using cross-validated decision trees. The cross-validated model correctly predicted Class 1 in 75% of cases, Class 2 in 61%, and Class 3 in 65%. A fertility map was derived for the study area and compared with a map of the GVI. Our approach offers a methodology that incorporates expert knowledge to derive the SFI-SC and uses a versatile spectro-spatial methodology that may be implemented for rapid and accurate determination of soil fertility and better identification of areas suitable for production.
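A minimal sketch of the spectro-spatial modeling step described above: a cross-validated decision tree predicting the three SFI-SC classes from spectra plus terrain attributes. All inputs are synthetic stand-ins; the tree depth and 10-fold cross-validation are illustrative assumptions.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
n = 184                                 # number of soil samples in the study
spectra = rng.normal(size=(n, 210))     # stand-in vis-NIR spectra (400-2500 nm)
terrain = rng.normal(size=(n, 17))      # 17 DEM-derived terrain attributes
X = np.hstack([spectra, terrain])
y = rng.integers(1, 4, n)               # SFI-SC classes 1..3 (synthetic)

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
y_hat = cross_val_predict(clf, X, y, cv=10)

# Per-class accuracy, analogous to the 75/61/65% figures reported above
cm = confusion_matrix(y, y_hat)
print(cm.diagonal() / cm.sum(axis=1))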
Abstract:
The Shuttle Radar Topography Mission (SRTM) was flown on the Space Shuttle Endeavour in February 2000, with the objective of acquiring a digital elevation model of all land between 60 degrees north and 56 degrees south latitude using interferometric synthetic aperture radar (InSAR) techniques. The SRTM data are distributed at a horizontal resolution of 1 arc-second (~30 m) for areas within the USA and at 3 arc-second (~90 m) resolution for the rest of the world. A resolution of 90 m can be considered suitable for small- or medium-scale analysis, but it is too coarse for more detailed purposes. One alternative is to interpolate the SRTM data at a finer resolution; this will not increase the level of detail of the original digital elevation model (DEM), but it will lead to a surface with coherence of angular properties (i.e. slope, aspect) between neighbouring pixels, which is an important characteristic for terrain analysis. This work intends to show how variogram and kriging parameters, namely the nugget effect and the maximum distance within which values are used in the interpolation, can be set to achieve quality results when resampling SRTM data from 3" to 1". We present results for a test area in the western USA, including different adjustment schemes (changes in the nugget effect value and in the interpolation radius) and comparisons with the original 1" model of the area, with National Elevation Dataset (NED) DEMs, and with other interpolation methods (splines and inverse distance weighting (IDW)). The basic concepts for using kriging to resample terrain data are: (i) working only with the immediate neighbourhood of the predicted point, given the high spatial correlation of the topographic surface and the omnidirectional behaviour of the variogram at short distances; (ii) adding a very small random variation to the coordinates of the points prior to interpolation, to avoid punctual artifacts generated by predicted points sharing the same location as original data points; and (iii) using a small nugget effect value, to avoid smoothing that can obliterate terrain features. Drainage networks derived from the surfaces interpolated by kriging and by splines agree well with streams derived from the 1" NED, with correct identification of watersheds, even though a few differences occur in the positions of some rivers in flat areas. Although the 1" surfaces resampled by kriging and by splines are very similar, we consider the results produced by kriging superior, since the spline-interpolated surface still presented some noise and linear artifacts, which kriging removed.
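A minimal Python sketch of the three concepts listed above, implementing local ordinary kriging directly: a spherical variogram with a small nugget, jittered input coordinates, and prediction from only the nearest neighbours. The variogram parameters, neighbourhood size and synthetic points are illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree

def spherical(h, nugget=0.1, sill=100.0, rng_m=1000.0):
    """Spherical semivariogram; the small nugget avoids over-smoothing."""
    g = np.where(h < rng_m,
                 nugget + (sill - nugget) * (1.5 * h / rng_m
                                             - 0.5 * (h / rng_m) ** 3),
                 sill)
    return np.where(h == 0.0, 0.0, g)

def krige_point(px, py, xs, ys, zs):
    """Ordinary kriging of one point from a small local neighbourhood."""
    n = len(xs)
    d = np.hypot(xs[:, None] - xs[None, :], ys[:, None] - ys[None, :])
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical(np.hypot(xs - px, ys - py))
    w = np.linalg.solve(A, b)          # kriging weights + Lagrange multiplier
    return w[:n] @ zs

rng = np.random.default_rng(4)
# Stand-in coarse samples: x, y in metres, z elevations
x = rng.uniform(0, 900, 200)
y = rng.uniform(0, 900, 200)
z = 100.0 + 0.05 * x + rng.normal(0.0, 2.0, x.size)

# (ii) jitter coordinates slightly so predicted points never coincide
# exactly with data points (avoids punctual artifacts)
xj = x + rng.normal(0.0, 0.01, x.size)
yj = y + rng.normal(0.0, 0.01, y.size)

# (i) work only with the immediate neighbourhood of each predicted point
tree = cKDTree(np.column_stack([xj, yj]))
gx, gy = np.meshgrid(np.arange(0.0, 900.0, 30.0), np.arange(0.0, 900.0, 30.0))
fine = np.empty(gx.shape)
for i in range(gx.shape[0]):
    for j in range(gx.shape[1]):
        _, idx = tree.query([gx[i, j], gy[i, j]], k=16)
        fine[i, j] = krige_point(gx[i, j], gy[i, j], xj[idx], yj[idx], z[idx])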
Abstract:
The aim of this study was to determine the reproducibility, reliability and validity of measurements made on digital models compared with plaster models. Fifteen pairs of plaster models were obtained from orthodontic patients with permanent dentition before treatment. These were digitized for evaluation with the program Cécile3 v2.554.2 beta. Two examiners each measured, three times, the mesiodistal width of all teeth present; the intercanine, interpremolar and intermolar distances; and overjet and overbite. The plaster models were measured using digital vernier calipers. The Student's t-test for paired samples and the intraclass correlation coefficient (ICC) were used for statistical analysis. The ICCs for the digital models were 0.84 ± 0.15 (intra-examiner) and 0.80 ± 0.19 (inter-examiner). The mean differences for the digital models were 0.23 ± 0.14 and 0.24 ± 0.11 for the two examiners, respectively. When the two types of measurement were compared, the values obtained from the digital models were lower than those obtained from the plaster models (p < 0.05), although the differences were considered clinically insignificant (differences < 0.1 mm). The Cécile digital models are a clinically acceptable alternative for use in orthodontics.
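The statistics used here are straightforward to reproduce. A minimal Python sketch on synthetic measurements, using the paired t-test from SciPy and a hand-coded ICC(2,1) following Shrout and Fleiss (1979); whether the study used this exact ICC form is an assumption.

import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(5)
plaster = rng.normal(8.0, 1.0, 60)                  # e.g. mesiodistal widths, mm
digital = plaster - 0.05 + rng.normal(0, 0.08, 60)  # slightly smaller readings

t, p = ttest_rel(digital, plaster)
print(f"paired t-test: t = {t:.2f}, p = {p:.4f}")

def icc_2_1(ratings):
    """ICC(2,1) for an (n subjects x k raters) array, via two-way ANOVA."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_r = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_c = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    sse = ((ratings - ratings.mean(axis=1, keepdims=True)
            - ratings.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

examiner2 = digital + rng.normal(0, 0.1, 60)        # second examiner's readings
print(f"ICC = {icc_2_1(np.column_stack([digital, examiner2])):.2f}")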
Abstract:
Nowadays, digital computer systems and networks are the main engineering tools, used in the planning, design, operation and control of buildings, transportation, machinery, business and life-sustaining devices of all sizes. Consequently, computer viruses have become one of the most important sources of uncertainty, reducing the reliability of vital activities. Many antivirus programs have been developed, but they are limited to detecting and removing infections based on prior knowledge of the virus code. Despite their good adaptation capability, these programs work like vaccines against diseases and are not able to prevent new infections based on the network state. Here, a trial at modeling computer virus propagation dynamics relates it to other notable events occurring in the network, permitting the establishment of preventive policies in network management. Data from three different viruses were collected on the Internet, and two identification techniques, autoregressive and Fourier analysis, were applied, showing that it is possible to forecast the dynamics of a new virus propagation by using data collected from other viruses that formerly infected the network.
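A minimal sketch of the autoregressive half of the identification approach: fit an AR model to one (synthetic) virus propagation series and forecast ahead, as one would when predicting a new virus from data on earlier ones. The lag order and series shape are illustrative assumptions.

import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(6)
t = np.arange(200)
# Synthetic epidemic-like series: rise, peak and decay plus noise
series = 500 * np.exp(-((t - 80) / 40.0) ** 2) + rng.normal(0, 10, t.size)

model = AutoReg(series, lags=7).fit()                    # 7-lag AR model
forecast = model.predict(start=t.size, end=t.size + 29)  # 30-step forecast
print(forecast[:5])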
Abstract:
Distribution of timing signals is an essential factor in the development of digital systems for telecommunication networks, integrated circuits and manufacturing automation. Originally, this distribution was implemented using a master-slave architecture, with a precise master clock generator sending signals to phase-locked loops (PLLs) working as slave oscillators. Nowadays, wireless networks with dynamic connectivity and the increasing size and operating frequency of integrated circuits suggest that the distribution of clock signals could be more efficient if mutually connected architectures were used. Here, mutually connected PLL networks are studied, and conditions for the existence of synchronous states are derived analytically, depending on individual node parameters and network connectivity, considering that the nodes are nonlinear oscillators with nonlinear coupling. An expression for the network synchronisation frequency is obtained. The lock-in range and the transmission error bounds are analysed, providing guidance for the design of this kind of clock distribution system.
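A much-simplified numerical companion to the analysis described above: a network of mutually coupled first-order PLL phase models (the paper treats nonlinear oscillators with nonlinear coupling; the sinusoidal first-order model below is a stand-in). Integrating the phase equations shows the nodes locking to a common synchronisation frequency.

import numpy as np
from scipy.integrate import solve_ivp

omega = np.array([1.00, 1.02, 0.98, 1.01])       # free-running frequencies
A = np.array([[0, 1, 1, 0],                       # network adjacency matrix
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], float)
K = 0.5                                           # coupling gain

def rhs(t, theta):
    # Each node pulls its phase toward its neighbours (sinusoidal detector)
    return omega + K * (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)

sol = solve_ivp(rhs, (0, 200), np.zeros(4))
theta_end = sol.y[:, -1]

# Instantaneous frequencies at the end of the run: near-identical entries
# indicate lock at the network synchronisation frequency
print(rhs(0.0, theta_end))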
Abstract:
Recently, the development of industrial processes has brought about technologically complex systems. This development has generated the need for research into mathematical techniques capable of dealing with project complexity and validation. Fuzzy models have received particular attention in the area of nonlinear system identification and analysis due to their capacity to approximate nonlinear behavior and deal with uncertainty. A fuzzy rule-based model suitable for the approximation of many systems and functions is the Takagi-Sugeno (TS) fuzzy model. TS fuzzy models are nonlinear systems described by a set of if-then rules that give local linear representations of an underlying system. Such models can approximate a wide class of nonlinear systems. In this paper, a performance analysis of a system based on a TS fuzzy inference system for the calibration of electronic compass devices is considered. The contribution of the evaluated TS fuzzy inference system is to reduce the error in data acquired from a digital electronic compass. For reliable operation of the TS fuzzy inference system, adequate error measurements must be taken, and the error noise must be filtered before the TS fuzzy inference system is applied. The proposed method demonstrated an effectiveness of 57% in reducing the total error, based on the tests considered.
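A minimal sketch of a first-order Takagi-Sugeno inference step of the kind evaluated above: Gaussian membership functions fire a set of if-then rules whose linear consequents are combined by a normalised weighted average. All rule parameters are illustrative, not the paper's calibrated values.

import numpy as np

# Rules: IF heading is near centers[i] THEN correction = a[i]*heading + b[i]
centers = np.array([0.0, 90.0, 180.0, 270.0])   # membership centres (degrees)
sigma = 45.0                                    # membership spread
a = np.array([0.01, -0.02, 0.015, -0.01])       # local linear slopes
b = np.array([1.5, -2.0, 1.0, -0.5])            # local offsets (degrees)

def ts_correction(heading):
    """TS fuzzy output: weighted average of the local linear consequents."""
    w = np.exp(-0.5 * ((heading - centers) / sigma) ** 2)   # rule firing
    y = a * heading + b                                     # local models
    return (w * y).sum() / w.sum()

raw = 123.4                                     # noisy compass reading (deg)
print(raw + ts_correction(raw))                 # calibrated heading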
Abstract:
The rise in boiling point of blackberry juice was measured experimentally at soluble-solids concentrations in the range of 9.4 to 58.4 °Brix and pressures between 4.9 × 10³ and 9.0 × 10⁴ Pa (abs.). Different approaches to representing the experimental data were tested, including Dühring's rule, a model similar to the Antoine equation, and other empirical models proposed in the literature. In the range of 9.4 to 33.6 °Brix, the rise in boiling point was nearly independent of pressure, varying only with juice concentration. Considerable deviations from this behavior began to occur at concentrations higher than 39.1 °Brix. The experimental data were best predicted by fitting an empirical model consisting of a single equation that accounts for the dependence of the rise in boiling point on both pressure and concentration.
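A minimal sketch of the model-fitting step: nonlinear least squares of an Antoine-like equation in which the Antoine constants vary linearly with concentration. The functional form and all numbers are illustrative assumptions, not the paper's fitted model.

import numpy as np
from scipy.optimize import curve_fit

def antoine_like(X, a0, a1, b0, b1, c):
    """T = B/(A - log10(P)) - C, with A and B linear in concentration."""
    P, conc = X
    A = a0 + a1 * conc
    B = b0 + b1 * conc
    return B / (A - np.log10(P)) - c

rng = np.random.default_rng(7)
P = rng.uniform(4.9e3, 9.0e4, 80)          # pressure, Pa (abs.)
conc = rng.uniform(9.4, 58.4, 80)          # soluble solids, degrees Brix
T = (antoine_like((P, conc), 9.0, 0.01, 2000.0, 5.0, 230.0)
     + rng.normal(0.0, 0.3, 80))           # synthetic boiling temperatures

popt, _ = curve_fit(antoine_like, (P, conc), T,
                    p0=[9.0, 0.0, 2000.0, 0.0, 230.0])
print(popt)                                # recovered model coefficients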
Abstract:
The classification of galaxies as star forming or active is generally done in the ([O III]/Hβ, [N II]/Hα) plane. The Sloan Digital Sky Survey (SDSS) has revealed that, in this plane, the distribution of galaxies looks like the two wings of a seagull. Galaxies in the right wing are referred to as Seyfert/LINERs, leading to the idea that non-stellar activity in galaxies is a very common phenomenon. Here, we argue that a large fraction of the systems in the right wing could actually be galaxies that have stopped forming stars. The ionization in these 'retired' galaxies would be produced by hot post-asymptotic giant branch stars and white dwarfs. Our argument is based on a stellar population analysis of the galaxies via our STARLIGHT code and on photoionization models using the Lyman continuum radiation predicted for this population. The proportion of LINER galaxies that can be explained in this way is, however, uncertain. We further show how observational selection effects account for the shape of the right wing. Our study suggests that nuclear activity may not be as common as thought. If retired galaxies do explain a large part of the seagull's right wing, some of the work concerning nuclear activity in galaxies, as inferred from SDSS data, will have to be revised.
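For readers unfamiliar with the plane mentioned above, a minimal classification sketch. It uses the Kauffmann et al. (2003) demarcation curve, which is an assumption here; the abstract does not specify which dividing line separates the two wings.

import numpy as np

def right_wing(log_nii_ha, log_oiii_hb):
    """True where a galaxy lies right of the Kauffmann (2003) curve."""
    # Demarcation: y = 0.61 / (x - 0.05) + 1.3, with an asymptote at x = 0.05
    curve = 0.61 / (log_nii_ha - 0.05) + 1.3
    return (log_nii_ha >= 0.05) | (log_oiii_hb > curve)

rng = np.random.default_rng(8)
x = rng.uniform(-1.5, 0.5, 1000)       # synthetic log([N II]/Halpha)
y = rng.uniform(-1.0, 1.0, 1000)       # synthetic log([O III]/Hbeta)
print(f"right-wing fraction: {right_wing(x, y).mean():.2f}")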
Abstract:
Increasing effort is being devoted to integrating different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed, so as to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally heterogeneous representations (1D-3D coupling, among various possibilities). The strategy proposed in this article works for both of these scenarios, though the several applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects two (and not more) of the sub-networks. At each of the M connection points, two unknowns are defined: the flow rate and the pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M × 2M non-linear system with an arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system to convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with the one that would be obtained by modeling the whole network monolithically. The decomposition thus imposes no stability restriction on the choice of time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems ranging from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computational cost and reliability.
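A minimal sketch of the coupling strategy in miniature: the 2M interface unknowns (a flow rate and a pressure per coupling point) are driven to satisfy one residual equation per sub-network and coupling point, using SciPy's matrix-free Newton-Krylov (GMRES) solver. The toy residual functions below stand in for actual black-box 1D sub-network solves.

import numpy as np
from scipy.optimize import newton_krylov

M = 3                                   # number of coupling points

def residuals(u):
    """2M residuals from the sub-networks; here a toy nonlinear system."""
    q, p = u[:M], u[M:]
    r_flow = q - 0.1 * np.tanh(p)       # sub-network A: Q given P
    r_pres = p - (1.0 + 0.5 * q**2)     # sub-network B: P given Q
    return np.concatenate([r_flow, r_pres])

u0 = np.zeros(2 * M)                    # initial guess for (Q, P) pairs
u = newton_krylov(residuals, u0, method="gmres", f_tol=1e-10)
print(u[:M], u[M:])                     # converged interface flows/pressures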