144 results for Interpreting geophysical logs


Relevance: 10.00%

Abstract:

In this paper an attempt has been made to evaluate the spatial variability of the depth of weathered and engineering bedrock in Bangalore, south India, using the Multichannel Analysis of Surface Waves (MASW) survey. One-dimensional MASW surveys have been carried out at 58 locations and shear-wave velocities measured. Using the velocity profiles, the depths of the weathered rock and engineering rock surfaces have been determined. Based on the literature, shear-wave velocities of 330 ± 30 m/s for weathered (soft) rock and 760 ± 60 m/s for engineering (hard) rock have been considered. Depths corresponding to these velocity ranges are evaluated with respect to ground contour levels, and the top surface levels have been mapped using natural neighbor interpolation. The depth of weathered rock varies from 1 m to about 21 m. Of the 58 test locations, only 42 reached depths with a shear-wave velocity of more than 760 ± 60 m/s. The depth of engineering rock evaluated from these data varies from 1 m to about 50 m. Further, these rock depths have been compared with subsurface profiles obtained from two-dimensional (2-D) MASW surveys at 20 locations and a few selected bore logs from deep geotechnical boreholes.
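A minimal illustrative sketch (not the authors' code) of how such velocity thresholds can be read off a 1-D shear-wave velocity profile; the profile values, function name and threshold handling below are assumptions for illustration only.

```python
# Illustrative sketch: pick weathered-rock and engineering-rock depths from a
# 1-D MASW shear-wave velocity profile using the thresholds quoted in the abstract.

def first_depth_exceeding(profile, v_threshold):
    """Return the shallowest depth (m) whose layer Vs (m/s) meets the threshold,
    or None if the profile never reaches it."""
    for depth_m, vs in profile:          # profile ordered from shallow to deep
        if vs >= v_threshold:
            return depth_m
    return None

# Hypothetical velocity profile: (top-of-layer depth in m, Vs in m/s)
profile = [(0.0, 180), (3.0, 260), (7.0, 345), (14.0, 520), (26.0, 810)]

weathered_rock_depth   = first_depth_exceeding(profile, 330)   # ~330 +/- 30 m/s
engineering_rock_depth = first_depth_exceeding(profile, 760)   # ~760 +/- 60 m/s
print(weathered_rock_depth, engineering_rock_depth)            # 7.0 26.0
```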

Relevance: 10.00%

Abstract:

Two algorithms that improve upon the sequent-peak procedure for reservoir capacity calculation are presented. The first incorporates storage-dependent losses (like evaporation losses) exactly as the standard linear programming formulation does. The second extends the first so as to enable designing with less than maximum reliability even when allowable shortfall in any failure year is also specified. Together, the algorithms provide a more accurate, flexible and yet fast method of calculating the storage capacity requirement in preliminary screening and optimization models.
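For context, a minimal sketch of the classical sequent-peak recursion that these algorithms refine; the storage-dependent-loss and reliability extensions described in the abstract are not reproduced, and the inflow/demand numbers are hypothetical.

```python
# Classical sequent-peak calculation (baseline for the improved algorithms).

def sequent_peak_capacity(inflows, demands):
    """Required active storage for a repeating inflow/demand sequence."""
    n = len(inflows)
    deficit = 0.0          # cumulative (demand - inflow), floored at zero
    capacity = 0.0
    # Run through the record twice so a critical period spanning the
    # end/start of the record is still captured.
    for t in range(2 * n):
        deficit = max(0.0, deficit + demands[t % n] - inflows[t % n])
        capacity = max(capacity, deficit)
    return capacity

# Hypothetical monthly inflows and a constant demand (same volume units)
inflows = [12, 10, 8, 5, 3, 2, 2, 4, 9, 14, 16, 15]
demands = [8.0] * 12
print(sequent_peak_capacity(inflows, demands))
```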

Relevance: 10.00%

Abstract:

Increasing concentrations of atmospheric CO2 decrease stomatal conductance of plants and thus suppress canopy transpiration. The climate response to this CO2-physiological forcing is investigated using the Community Atmosphere Model version 3.1 coupled to the Community Land Model version 3.0. In response to the physiological effect of doubling CO2, simulations show a decrease in canopy transpiration of 8%, a mean warming of 0.1 K over the land surface, and negligible changes in the hydrological cycle. These climate responses are much smaller than those found in previous modeling studies. This is largely a result of unrealistic partitioning of evapotranspiration in our model control simulation, with a greatly underestimated contribution from canopy transpiration and overestimated contributions from canopy and soil evaporation. This study highlights the importance of a realistic simulation of the hydrological cycle, especially the individual components of evapotranspiration, in reducing the uncertainty in our estimation of the climatic response to CO2-physiological forcing. Citation: Cao, L., G. Bala, K. Caldeira, R. Nemani, and G. Ban-Weiss (2009), Climate response to physiological forcing of carbon dioxide simulated by the coupled Community Atmosphere Model (CAM3.1) and Community Land Model (CLM3.0).

Relevance: 10.00%

Abstract:

The Palghat–Cauvery suture zone in southern India separates Archaean crustal blocks to the north from the Proterozoic Madurai block to the south. Here we present the first detailed study of a partially retrogressed eclogite (from within the Sittampundi anorthositic complex in the suture zone) that occurs as a 20-cm-wide layer within a garnet gabbro layer in anorthosite. The eclogite largely consists of an assemblage of coexisting porphyroblasts of almandine–pyrope garnet and augitic clinopyroxene; however, a few garnets contain inclusions of omphacite. Rims and symplectites composed of Na–Ca amphibole and plagioclase form a retrograde assemblage. Petrographic analysis and calculated phase equilibria indicate that garnet–omphacite–rutile–melt was the peak metamorphic assemblage and that it formed at ca. 20 kbar and above 1000 °C. The eclogite was exhumed on a very tight hairpin-type, anticlockwise P–T path, which we relate to subduction and exhumation in the Palghat–Cauvery suture zone. The REE compositions of the minerals suggest a basaltic oceanic crustal protolith metamorphosed in a subduction regime. Geological–structural relations combined with geophysical data from the Palghat–Cauvery suture zone suggest that the eclogite facies metamorphism was related to formation of the suture zone. Closure of the Mozambique Ocean led to development of the suture zone and of its western extension in the Betsimisaraka suture of Madagascar.

Relevance: 10.00%

Abstract:

This paper presents the identification and mapping of vulnerable and safe zones for liquefaction hazard. About 850 bore logs collected from geotechnical investigation reports have been used to estimate the liquefaction factor of safety for the Bangalore Mahanagara Palike (BMP) area of about 220 km². The liquefaction factor of safety is arrived at based on the surface-level peak ground acceleration presented by Anbazhagan and Sitharam(5) and the liquefaction resistance estimated using corrected standard penetration test (SPT) N values. The estimated factor of safety against liquefaction is used to estimate the liquefaction potential index and the liquefaction severity index. These values are mapped using a geographical information system (GIS) to identify the vulnerable and safe zones in Bangalore. This study shows that more than 95% of the BMP area is safe against liquefaction. However, the western part of the BMP is not safe against liquefaction, as it may be subjected to liquefaction with a probability of 35 to 65%. The three approaches used in this study show that (1) mapping the least factor of safety irrespective of depth may be used to identify liquefiable areas in the worst case, (2) mapping the liquefaction potential index can be used to assess the liquefaction severity of the area by considering layer thickness and factor of safety, and (3) mapping the liquefaction severity index can be used to assess the probability of liquefaction in an area.
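The abstract does not state the index definitions. As a hedged illustration, the widely used Iwasaki-type liquefaction potential index (LPI summed over layers shallower than 20 m, weighting severity 1 − FS by w(z) = 10 − 0.5z and layer thickness) could be computed from a single bore log roughly as follows; the layer data below are hypothetical.

```python
# Illustrative LPI calculation (Iwasaki-type definition assumed, not quoted
# from the paper): LPI = sum over layers of (1 - FS) * w(z) * H,
# with w(z) = 10 - 0.5 z for z <= 20 m and FS capped at 1.

def liquefaction_potential_index(layers):
    """layers: iterable of (mid_depth_m, thickness_m, factor_of_safety)."""
    lpi = 0.0
    for z, h, fs in layers:
        if z > 20.0:
            continue                       # weighting function vanishes below 20 m
        severity = max(0.0, 1.0 - min(fs, 1.0))
        weight = 10.0 - 0.5 * z
        lpi += severity * weight * h
    return lpi

# Hypothetical borehole: (mid-depth m, thickness m, FS against liquefaction)
layers = [(1.5, 3.0, 1.30), (4.5, 3.0, 0.85), (7.5, 3.0, 0.60), (12.0, 6.0, 1.10)]
print(liquefaction_potential_index(layers))
```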

Relevance: 10.00%

Abstract:

The applicability of a formalism involving an exponential function of composition $x_1$ in interpreting the thermodynamic properties of alloys has been studied. The excess integral and partial molar free energies of mixing are expressed as

$$
\begin{aligned}
\Delta F^{xs} &= a_0\, x_1 (1 - x_1)\, e^{b x_1} \\
RT \ln \gamma_1 &= a_0\, (1 - x_1)^2 (1 + b x_1)\, e^{b x_1} \\
RT \ln \gamma_2 &= a_0\, x_1^2 (1 - b + b x_1)\, e^{b x_1}
\end{aligned}
$$

The equations are used in interpreting experimental data for several relatively weakly interacting binary systems. For the purpose of comparison, activity coefficients obtained by the subregular model and Krupkowski's formalism have also been computed. The present equations may be considered to be convenient in describing the thermodynamic behavior of metallic solutions.
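A small sketch evaluating the activity coefficients implied by these expressions; the parameter values a0 and b and the temperature below are purely illustrative, not fitted to any system from the paper.

```python
# Evaluate the exponential-formalism activity coefficients quoted above.
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def activity_coefficients(x1, a0, b, T):
    """Return (gamma1, gamma2) for mole fraction x1 of component 1."""
    e = math.exp(b * x1)
    ln_g1 = a0 * (1.0 - x1) ** 2 * (1.0 + b * x1) * e / (R * T)
    ln_g2 = a0 * x1 ** 2 * (1.0 - b + b * x1) * e / (R * T)
    return math.exp(ln_g1), math.exp(ln_g2)

# Hypothetical interaction parameters for a weakly interacting binary melt
print(activity_coefficients(x1=0.3, a0=-4000.0, b=0.5, T=1200.0))
```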

Relevance: 10.00%

Abstract:

Plywood manufacture includes two fundamental stages. The first is to peel or separate logs into veneer sheets of different thicknesses. The second is to assemble veneer sheets into finished plywood products. At the first stage a decision must be made as to the number of different veneer thicknesses to be peeled and what these thicknesses should be. At the second stage, choices must be made as to how these veneers will be assembled into final products to meet certain constraints while minimizing wood loss. These decisions present a fundamental management dilemma. Costs of peeling, drying, storage, handling, etc. can be reduced by decreasing the number of veneer thicknesses peeled. However, a reduced set of thickness options may make it infeasible to produce the variety of products demanded by the market or increase wood loss by requiring less efficient selection of thicknesses for assembly. In this paper the joint problem of veneer choice and plywood construction is formulated as a nonlinear integer programming problem. A relatively simple optimal solution procedure is developed that exploits special problem structure. This procedure is examined on data from a British Columbia plywood mill. Restricted to the existing set of veneer thicknesses and plywood designs used by that mill, the procedure generated a solution that reduced wood loss by 79 percent, thereby increasing net revenue by 6.86 percent. Additional experiments were performed that examined the consequences of changing the number of veneer thicknesses used. Extensions are discussed that permit the consideration of more than one wood species.
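As a toy illustration of the assembly-side trade-off (not the paper's nonlinear integer programming procedure), a brute-force search over lay-ups from a small hypothetical menu of veneer thicknesses might look like this:

```python
# Given a small set of peeled veneer thicknesses, find the lay-up that meets a
# target panel thickness with the least excess wood. Brute-force sketch only.
from itertools import combinations_with_replacement

def best_layup(veneer_thicknesses_mm, target_mm, max_plies=7):
    best = None
    for n in range(3, max_plies + 1):                     # plywood needs >= 3 plies
        for layup in combinations_with_replacement(veneer_thicknesses_mm, n):
            total = sum(layup)
            if total >= target_mm:                        # panel must meet spec
                waste = total - target_mm
                if best is None or waste < best[0]:
                    best = (waste, layup)
    return best

# Hypothetical veneer menu (mm) and an 18 mm target panel
print(best_layup([2.5, 3.2, 4.0], 18.0))
```

Shrinking the veneer menu lowers peeling, drying and handling costs but tends to increase the waste returned by such a search, which is the dilemma the paper formalizes.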

Relevance: 10.00%

Abstract:

It is shown that Southwood's instability criterion for the onset of the Kelvin-Helmholtz instability at the magnetopause can be directly obtained from the marginal instability condition for the pure Alfven surface waves propagating along the interface between two incompressible media in the limit when the wave propagation direction is nearly perpendicular to the direction of the largest magnetic field. The phase velocity of the surface waves first excited at the onset of the instability depends on the angle between the interplanetary magnetic field and flow velocity in the solar wind in front of the bow shock.
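For reference, the onset condition for the incompressible magnetohydrodynamic Kelvin-Helmholtz instability at such an interface is commonly written (textbook form, not quoted from the abstract; $\mathbf{k}$ is the wave vector and $\mathbf{v}_i$, $\mathbf{B}_i$, $\rho_i$ the flow velocity, magnetic field and density on either side) as

$$
\left[\mathbf{k}\cdot(\mathbf{v}_1-\mathbf{v}_2)\right]^2 \;>\; \frac{\rho_1+\rho_2}{\mu_0\,\rho_1\rho_2}\left[(\mathbf{k}\cdot\mathbf{B}_1)^2+(\mathbf{k}\cdot\mathbf{B}_2)^2\right].
$$

When $\mathbf{k}$ is nearly perpendicular to the larger magnetic field, the stabilizing field-aligned terms on the right-hand side become small, consistent with the limit discussed above.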

Relevance: 10.00%

Abstract:

A river basin that is extensively developed in the downstream reaches and that has a high potential for development in the upper reaches is considered for irrigation planning. A four-reservoir system is modeled on a monthly basis using a linear programming (LP) formulation to find optimum cropping patterns, subject to land, water, and downstream release constraints. The model is applied to a river basin in India. Two objectives considered in the model, maximizing net economic benefits and maximizing irrigated cropped area, are analyzed in the context of multiobjective planning, and the tradeoffs are discussed.
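A heavily simplified, single-reservoir, single-season sketch of this kind of cropping-pattern LP (the paper's model is monthly, four-reservoir and includes downstream-release constraints); the crops, coefficients and limits below are hypothetical.

```python
# Toy cropping-pattern LP: choose crop areas to maximize net benefit subject
# to land and water limits.
from scipy.optimize import linprog

benefit_per_ha = [450.0, 300.0]      # net benefit per hectare of crop 1, crop 2
water_per_ha   = [0.9, 0.5]          # water requirement per hectare (volume units)
land_limit_ha  = 1000.0
water_limit    = 600.0               # water available after downstream releases

# linprog minimizes, so negate the benefits to maximize them
res = linprog(
    c=[-b for b in benefit_per_ha],
    A_ub=[[1.0, 1.0], water_per_ha],
    b_ub=[land_limit_ha, water_limit],
    bounds=[(0, None), (0, None)],
)
print(res.x, -res.fun)               # optimal areas and the resulting net benefit
```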

Relevance: 10.00%

Abstract:

VHF nighttime scintillations, recorded during a high solar activity period at a meridian chain of stations covering a magnetic latitude belt of 3°–21°N (420 km subionospheric points) are analyzed to investigate the influence of equatorial spread F irregularities on the occurrence of scintillation at latitudes away from the equator. Observations show that saturated amplitude scintillations start abruptly about one and a half hours after ground sunset and their onset is almost simultaneous at stations whose subionospheric points are within 12°N latitude of the magnetic equator, but is delayed at a station whose subionospheric point is at 21°N magnetic latitude by 15 min to 4 hours. In addition, the occurrence of postsunset scintillations at all the stations is found to be conditional on their prior occurrence at the equatorial station. If no postsunset scintillation activity is seen at the equatorial station, no scintillations are seen at other stations also. The occurrence of scintillations is explained as caused by rising plasma bubbles and associated irregularities over the magnetic equator and the subsequent mapping of these irregularities down the magnetic field lines to the F region of higher latitudes through some instantaneous mechanism; and hence an equatorial control is established on the generation of postsunset scintillation-producing irregularities in the entire low-latitude belt.

Relevance: 10.00%

Abstract:

It has long been thought that tropical rainfall retrievals from satellites have large errors. Here we show, using a new daily 1-degree gridded rainfall data set based on about 1800 gauges from the India Meteorological Department (IMD), that modern satellite estimates are reasonably close to observed rainfall over the Indian monsoon region. Daily satellite rainfall from the Global Precipitation Climatology Project (GPCP 1DD) and the Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) is available since 1998. The high summer monsoon (June-September) rain over the Western Ghats and Himalayan foothills is captured in the TMPA data. Away from hilly regions, the seasonal mean and intraseasonal variability of rainfall (averaged over regions of a few hundred kilometers linear dimension) from both satellite products are within about 15% of observations. Satellite data generally underestimate both the mean and variability of rain, but the phase of intraseasonal variations is accurate. On synoptic timescales, TMPA gives a reasonable depiction of the pattern and intensity of torrential rain from individual monsoon low-pressure systems and depressions. A pronounced biennial oscillation of seasonal total central India rain is seen in all three data sets, with GPCP 1DD being closest to IMD observations. The new satellite data are a promising resource for the study of tropical rainfall variability.

Relevance: 10.00%

Abstract:

Hydrologic impacts of climate change are usually assessed by downscaling the General Circulation Model (GCM) output of large-scale climate variables to local-scale hydrologic variables. Such an assessment is characterized by uncertainty resulting from the ensembles of projections generated with multiple GCMs, which is known as intermodel or GCM uncertainty. Ensemble averaging with the assignment of weights to GCMs based on model evaluation is one of the methods to address such uncertainty and is used in the present study for regional-scale impact assessment. GCM outputs of large-scale climate variables are downscaled to subdivisional-scale monsoon rainfall. Weights are assigned to the GCMs on the basis of model performance and model convergence, which are evaluated with the cumulative distribution functions (CDFs) generated from the downscaled GCM output (for both the 20th Century [20C3M] and future scenarios) and observed data. The ensemble averaging approach, with the assignment of weights to GCMs, is characterized by the uncertainty caused by partial ignorance, which stems from the nonavailability of the outputs of some of the GCMs for a few scenarios (in the Intergovernmental Panel on Climate Change [IPCC] data distribution center for Assessment Report 4 [AR4]). This uncertainty is modeled with imprecise probability, i.e., the probability being represented as an interval gray number. Furthermore, the CDF generated with one GCM is entirely different from that with another, and therefore the use of multiple GCMs results in a band of CDFs. Representing this band of CDFs with a single-valued weighted mean CDF may be misleading. Such a band of CDFs can only be represented with an envelope that contains all the CDFs generated with a number of GCMs. The imprecise CDF represents such an envelope, which not only contains the CDFs generated with all the available GCMs but also to an extent accounts for the uncertainty resulting from the missing GCM output. This concept of imprecise probability is also validated in the present study. The imprecise CDFs of monsoon rainfall are derived for three 30-year time slices, the 2020s, 2050s and 2080s, with the A1B, A2 and B1 scenarios. The model is demonstrated with the prediction of monsoon rainfall in the Orissa meteorological subdivision, which shows a possible decreasing trend in the future.
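A small sketch of the envelope idea: pointwise lower and upper bounds over the empirical CDFs from several GCMs form an interval-valued (imprecise) CDF. The rainfall samples below are synthetic stand-ins, not downscaled GCM output.

```python
# Envelope of empirical CDFs from several (here synthetic) GCM rainfall samples.
import numpy as np

rng = np.random.default_rng(0)
gcm_rainfall = [rng.gamma(shape=2.0, scale=s, size=200) for s in (40, 55, 70)]

grid = np.linspace(0, 400, 101)                 # rainfall values to evaluate (mm)
cdfs = np.array([[(sample <= r).mean() for r in grid] for sample in gcm_rainfall])

lower_cdf = cdfs.min(axis=0)                    # envelope bounds at each rainfall value
upper_cdf = cdfs.max(axis=0)
print(lower_cdf[50], upper_cdf[50])             # interval probability that rain <= 200 mm
```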

Relevance: 10.00%

Abstract:

From the autocorrelation function of geomagnetic polarity intervals, it is shown that the field reversal intervals are not independent but form a process akin to the Markov process, where the random input to the model is itself a moving average process. The input to the moving average model is, however, an independent Gaussian random sequence. All the parameters in this model of the geomagnetic field reversal have been estimated. In physical terms this model implies that the mechanism of reversal possesses a memory.
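As a hedged illustration of fitting such a mixed autoregressive/moving-average description, an ARMA(1,1)-type fit to a sequence of interval lengths could look like the following; the data are synthetic stand-ins, not the actual polarity time scale, and the model orders are assumptions.

```python
# Fit an ARMA(1,1) model to a (synthetic) sequence of polarity-interval lengths.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
# Synthetic "interval lengths": an ARMA(1,1) process shifted to be positive
eps = rng.normal(size=300)
x = np.zeros(300)
for t in range(1, 300):
    x[t] = 0.5 * x[t - 1] + eps[t] + 0.4 * eps[t - 1]
intervals = x - x.min() + 0.1

model = ARIMA(intervals, order=(1, 0, 1)).fit()   # AR(1) + MA(1) with a constant
print(model.params)                               # estimated AR, MA and noise parameters
```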

Relevance: 10.00%

Abstract:

A new procedure for reducing trajectory sensitivity for the optimal linear regulator is described. The design is achieved without increase in the order of optimization and without the feedback of trajectory sensitivity. The procedure is also used in the input signal design problem for linear system identification by interpreting it as increasing trajectory sensitivity with respect to parameters to be estimated.

Relevance: 10.00%

Abstract:

Downscaling to station-scale hydrologic variables from large-scale atmospheric variables simulated by general circulation models (GCMs) is usually necessary to assess the hydrologic impact of climate change. This work presents CRF-downscaling, a new probabilistic downscaling method that represents the daily precipitation sequence as a conditional random field (CRF). The conditional distribution of the precipitation sequence at a site, given the daily atmospheric (large-scale) variable sequence, is modeled as a linear chain CRF. CRFs do not make assumptions on independence of observations, which gives them flexibility in using high-dimensional feature vectors. Maximum likelihood parameter estimation for the model is performed using limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization. Maximum a posteriori estimation is used to determine the most likely precipitation sequence for a given set of atmospheric input variables using the Viterbi algorithm. Direct classification of dry/wet days as well as precipitation amount is achieved within a single modeling framework. The model is used to project the future cumulative distribution function of precipitation. Uncertainty in precipitation prediction is addressed through a modified Viterbi algorithm that predicts the n most likely sequences. The model is applied for downscaling monsoon (June-September) daily precipitation at eight sites in the Mahanadi basin in Orissa, India, using the MIROC3.2 medium-resolution GCM. The predicted distributions at all sites show an increase in the number of wet days, and also an increase in wet day precipitation amounts. A comparison of current and future predicted probability density functions for daily precipitation shows a change in shape of the density function with decreasing probability of lower precipitation and increasing probability of higher precipitation.
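A minimal Viterbi decoder for a two-state (dry/wet) linear-chain model, illustrating the MAP decoding step mentioned above; the emission and transition scores below are made up, whereas in the paper they would come from the fitted CRF feature weights.

```python
# Viterbi decoding of the most likely dry(0)/wet(1) sequence for a linear chain.
import numpy as np

def viterbi(emission_scores, transition_scores):
    """emission_scores: (T, S) per-day state scores; transition_scores: (S, S)."""
    T, S = emission_scores.shape
    score = np.full((T, S), -np.inf)
    backptr = np.zeros((T, S), dtype=int)
    score[0] = emission_scores[0]
    for t in range(1, T):
        for s in range(S):
            cand = score[t - 1] + transition_scores[:, s] + emission_scores[t, s]
            backptr[t, s] = int(np.argmax(cand))
            score[t, s] = cand[backptr[t, s]]
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):        # trace back the best path
        path.append(backptr[t, path[-1]])
    return path[::-1]

# Hypothetical scores for four days and two states (dry, wet)
emissions = np.array([[1.2, -0.3], [0.1, 0.4], [-0.8, 1.1], [0.2, 0.3]])
transitions = np.array([[0.5, -0.2], [-0.4, 0.6]])
print(viterbi(emissions, transitions))
```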