785 results for GRIDS
Abstract:
This study uses the European Centre for Medium-Range Weather Forecasts (ECMWF) model-generated high-resolution 10-day predictions for the Year of Tropical Convection (YOTC) 2008. The precipitation forecast skill of the model over the tropics is evaluated against Tropical Rainfall Measuring Mission (TRMM) estimates. It is shown that the model captures the monthly to seasonal mean features of tropical convection reasonably well. Northward propagation of convective bands over the Bay of Bengal was also forecast realistically up to 5 days in advance, including the onset phase of the monsoon during the first half of June 2008. However, large errors exist in the daily datasets, especially at longer lead times over smaller domains. At shorter lead times (less than 4-5 days), forecast errors are much smaller over the oceans than over land. Moreover, the rate of increase of errors with lead time is rapid over the oceans and is confined to regions where observed precipitation shows large day-to-day variability. This rapid growth of errors over the oceans is shown to be related to the spatial pattern of near-surface air temperature, probably because of the one-way air-sea interaction in the atmosphere-only model used for forecasting. While the prescribed surface temperature over the oceans remains realistic at shorter lead times, the pattern, and hence the gradient, of the surface temperature is not altered by changes in atmospheric parameters at longer lead times. It has also been shown that the ECMWF model had considerable difficulty in forecasting very low and very heavy precipitation intensities over South Asia: the model has too few grid points with zero precipitation and too few with heavy (>40 mm/day) precipitation, while drizzle-like precipitation is too frequent compared with that in the TRMM datasets.
Further analysis shows that a major source of error in the ECMWF precipitation forecasts is the diurnal cycle over the South Asian monsoon region. The peak precipitation intensity in the model forecasts over land (ocean) appears about 6 (9) h earlier than in the observations. Moreover, the amplitude of the diurnal cycle is much higher in the model forecasts than in the TRMM estimates. The phase error of the diurnal cycle increases with forecast lead time, and the error in the monthly mean 3-hourly precipitation forecasts is about 2-4 times the error in the daily mean datasets. Thus, effort should be devoted to improving the forecast phase and amplitude of the diurnal cycle of precipitation in the model.
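The diurnal-cycle phase error described above can be illustrated with a minimal sketch: composite a 3-hourly precipitation series into a mean diurnal cycle and compare the peak hours of model and observations. The arrays below are synthetic cosine-shaped cycles with a built-in 6 h offset mimicking the reported land error; they are not TRMM or ECMWF data, and `diurnal_peak_hour` is a hypothetical helper, not part of either dataset's tooling.

```python
import numpy as np

def diurnal_peak_hour(precip_3hourly):
    """Composite a 3-hourly series (whole days, 8 steps/day) into a mean
    diurnal cycle and return the hour of day of peak intensity."""
    steps_per_day = 8                        # 24 h / 3 h
    series = np.asarray(precip_3hourly, dtype=float)
    composite = series.reshape(-1, steps_per_day).mean(axis=0)
    return int(np.argmax(composite)) * 3     # 0, 3, ..., 21

# Synthetic "observed" cycle peaking at 15 h and a "model" cycle whose
# peak is shifted 6 h earlier, as the abstract reports over land.
hours = np.tile(np.arange(0, 24, 3), 10)     # ten days of 3-hourly steps
obs   = np.cos((hours - 15) / 24.0 * 2 * np.pi) + 1
model = np.cos((hours - 9)  / 24.0 * 2 * np.pi) + 1

phase_error = diurnal_peak_hour(model) - diurnal_peak_hour(obs)   # -6 h
```

A negative phase error here means the model peak leads the observed peak, matching the sign of the bias the abstract describes.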
Abstract:
Copper strips of 2.5 mm thickness resting on stainless steel anvils were indented normally by wedges under nominal plane-strain conditions. Inflections in the hardness-penetration characteristics were identified; they separate stages, each with its own characteristic deformation mechanics. These mechanics were arrived at by studying the distortion of 0.125 mm spaced grids inscribed on the deformation plane of the strip. The sensitivity of hardness and deformation mechanics to wedge angle and to the interfacial friction between strip and anvil was investigated within the framework of existing slip-line field models of indentation of semi-infinite and finite blocks.
Abstract:
The performance of the Advanced Regional Prediction System (ARPS) in simulating an extreme rainfall event is evaluated, and the physical mechanisms leading to its initiation and sustenance are then explored. As a case study, the heavy precipitation event that produced 65 cm of rainfall accumulation in a span of about 6 h (1430-2030 LT) over Santacruz (Mumbai, India) on 26 July 2005 is selected. Three sets of numerical experiments were conducted. The first set (EXP1) consisted of a four-member ensemble and was carried out in an idealized mode with a model grid spacing of 1 km. In spite of the idealized framework, signatures of heavy rainfall were seen in two of the ensemble members. The second set (EXP2) consisted of a five-member ensemble with a four-level one-way nested integration and grid spacings of 54, 18, 6 and 1 km. The model was able to simulate a realistic spatial structure on the 54, 18, and 6 km grids; however, on the 1 km grid, the simulations were dominated by the prescribed boundary conditions. The third and final set (EXP3) consisted of a five-member ensemble with four-level one-way nesting and grid spacings of 54, 18, 6, and 2 km. The Scaled Lagged Average Forecasting (SLAF) methodology was employed to construct the ensemble members. The model simulations in this case were closer to the observations than those of EXP2. In particular, the timing of maximum rainfall, the abrupt increase in rainfall intensity that was a major feature of this event, and the rainfall intensities simulated in EXP3 (at 6 km resolution) were the closest to observations among all the experiments. Analysis of the physical mechanisms behind the initiation and sustenance of the event reveals some interesting aspects. Deep convection was found to be initiated by mid-tropospheric convergence that extended to lower levels during the later stage.
In addition, there was a strong negative vertical gradient of equivalent potential temperature, suggesting strong atmospheric instability prior to and during the event. Finally, the presence of conducive vertical wind shear in the lower and mid-troposphere is thought to be one of the major factors influencing the longevity of the event.
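The SLAF ensemble construction mentioned above can be sketched in a few lines: forecasts launched earlier but valid at the analysis time are differenced against the current analysis, scaled down in proportion to their age, and added and subtracted to form perturbed initial states. The scalar values below stand in for full model states, and the exact scaling used in the study may differ; this is only an illustration of the idea.

```python
def slaf_members(analysis, lagged_forecasts):
    """Build a SLAF-style ensemble from a control analysis and a list of
    (age_in_cycles, lagged_forecast_valid_now) pairs. Older forecasts get
    proportionally smaller perturbation weights."""
    members = [analysis]                              # control member
    for age, forecast in lagged_forecasts:
        perturbation = (forecast - analysis) / age    # scale down with age
        members.append(analysis + perturbation)       # positive pair member
        members.append(analysis - perturbation)       # negative pair member
    return members

# Control plus +/- pairs from two lagged forecasts gives five members,
# matching the five-member ensembles described above.
members = slaf_members(10.0, [(1, 11.0), (2, 13.0)])
```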
Abstract:
We assess the impact of projected climate change on forest ecosystems in India. The assessment is based on climate projections of the Regional Climate Model of the Hadley Centre (HadRM3) and the dynamic global vegetation model IBIS for the A2 and B2 scenarios. According to the model projections, 39% of forest grids are likely to undergo vegetation type change under the A2 scenario and 34% under the B2 scenario by the end of this century. In several forest-dominated states, however, the fractions are higher: up to 73% in Chhattisgarh, 67% in Karnataka and 62% in Andhra Pradesh. Net Primary Productivity (NPP) is projected to increase by 68.8% and 51.2% under the A2 and B2 scenarios, respectively, and soil organic carbon (SOC) by 37.5% for the A2 and 30.2% for the B2 scenario. Based on the dynamic global vegetation modelling, we present a forest vulnerability index for India built on observed datasets of forest density and forest biodiversity as well as model-predicted vegetation-type shift estimates for forested grids. The vulnerability index suggests that the upper Himalayas, the northern and central Western Ghats and parts of central India are most vulnerable to the projected impacts of climate change, while the Northeastern forests are more resilient. Thus, our study points to the need for developing and implementing adaptation strategies to reduce the vulnerability of forests to projected climate change.
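A grid-level vulnerability index of the kind described above can be sketched as a normalized weighted combination of indicators: low density and low biodiversity raise vulnerability, and a projected vegetation-type shift raises it further. The function name, equal weights and example values below are hypothetical; the paper's actual index construction may differ.

```python
def vulnerability_index(density, biodiversity, shift_projected,
                        weights=(1.0, 1.0, 1.0)):
    """Combine grid indicators (each normalized to [0, 1]) into a single
    vulnerability score in [0, 1]. Higher density and biodiversity lower
    the score; a projected vegetation-type shift raises it."""
    w_d, w_b, w_s = weights
    score = (w_d * (1.0 - density)
             + w_b * (1.0 - biodiversity)
             + w_s * (1.0 if shift_projected else 0.0))
    return score / (w_d + w_b + w_s)

# A dense, biodiverse grid with no projected shift vs a sparse,
# low-biodiversity grid projected to undergo a type shift.
resilient  = vulnerability_index(0.9, 0.8, False)   # low index
vulnerable = vulnerability_index(0.2, 0.3, True)    # high index
```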
Abstract:
Climate change is projected to cause shifts in forest types, leading to irreversible damage to forests by rendering several species extinct, and potentially affecting the livelihoods of local communities and the economy. Approximately 47% and 42% of tropical dry deciduous grids are projected to undergo shifts under the A2 and B2 SRES scenarios, respectively, as opposed to less than 16% of the grids comprising tropical wet evergreen forests. Similarly, tropical thorny scrub forest is projected to undergo shifts in the majority of forested grids under both the A2 (more than 80%) and B2 (50% of grids) scenarios. Forest managers and policymakers therefore need to adapt to the ecological as well as the socio-economic impacts of climate change. This requires the formulation of effective forest management policies and practices, incorporating climate concerns into long-term forest policy and management plans. India has formulated a large number of innovative and progressive forest policies, but a mechanism to ensure their effective implementation is needed, and additional policies and practices may be required to address the impacts of climate change. This paper discusses an approach and the steps involved in developing an adaptation framework, as well as the policies, strategies and practices needed to mainstream adaptation to projected climate change. Further, the existing barriers that may affect proactive adaptation planning, given the scale, accuracy and uncertainty of climate change impact assessments, are presented.
Abstract:
In this study, we model the long-term effect of climate change on the commercially important teak (Tectona grandis) and its productivity in India. This modelling assessment is based on climate projections of the regional climate model of the Hadley Centre (HadRM3) and the dynamic vegetation model IBIS. According to the model projections, 30% of teak grids in India are vulnerable to climate change under both the A2 and B2 SRES scenarios because the future climate may not be optimal for teak at these grids. However, net primary productivity and biomass are expected to increase because of elevated levels of CO2. Given these directions of likely impact, it is crucial to investigate the climate change impacts on teak further and to incorporate such findings into long-term teak plantation programs. This study also demonstrates the feasibility and limitations of assessing the impact of projected climate change at the species level in the tropics.
Abstract:
Shock wave reflection over a rotating circular cylinder is investigated numerically and experimentally. It is shown that the transition from regular reflection to Mach reflection is promoted on a cylinder surface rotating in the same direction as the incident shock motion, whereas it is retarded on a surface rotating in the opposite direction. Numerical calculations solving the Navier-Stokes equations on extremely fine grids also reveal that the reflected shock transition RR ⇒ MR is either advanced or retarded depending on whether or not the surface motion favors the incident shock wave. An interpretation of the viscous effects on the reflected shock transition is given through dimensional analysis and from the viewpoint of signal propagation.
Abstract:
An assessment of the impact of projected climate change on forest ecosystems in India, based on climate projections of the Regional Climate Model of the Hadley Centre (HadRM3) and the dynamic global vegetation model IBIS for the A1B scenario, is conducted for the short term (2021-2050) and the long term (2071-2100). Based on the dynamic global vegetation modelling, vulnerable forested regions of India have been identified to assist in planning adaptation interventions. The assessment showed that, at the national level, about 45% of the forested grids are projected to undergo change. Vulnerability assessment showed that such vulnerable forested grids are spread across India, but their concentration is higher in the upper Himalayan stretches, parts of Central India, the northern Western Ghats and the Eastern Ghats. In contrast, the northeastern forests, the southern Western Ghats and the forested regions of eastern India are estimated to be the least vulnerable. Low tree density and low biodiversity status, together with higher levels of fragmentation, contribute to the vulnerability of these forests in addition to climate change. The mountainous forests (sub-alpine and alpine forest, the Himalayan dry temperate forest and the Himalayan moist temperate forest) are particularly susceptible to adverse effects because climate change is predicted to be larger at higher elevations.
Abstract:
In this work, an attempt has been made to evaluate the spatial variation of peak horizontal acceleration (PHA) and spectral acceleration (SA) values at rock level for south India based on probabilistic seismic hazard analysis (PSHA). These values were estimated by considering the uncertainties involved in magnitude, hypocentral distance and attenuation of seismic waves. Different models were used for the hazard evaluation, and they were combined using a logic tree approach. For evaluating the seismic hazard, the study area was divided into small grid cells of size 0.1 degrees x 0.1 degrees, and the hazard parameters were calculated at the centre of each cell by considering all the seismic sources within a radius of 300 km. Rock level PHA values and SA at 1 s corresponding to a 10% probability of exceedance in 50 years were evaluated for all the grid points. Maps showing the spatial variation of rock level PHA values and SA at 1 s for the whole of south India are presented in this paper. To compare the seismic hazard for some of the important cities, the seismic hazard curves and the uniform hazard response spectrum (UHRS) at rock level with a 10% probability of exceedance in 50 years are also presented.
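The gridding step described above can be sketched as follows: cover the study region with 0.1-degree cells and, for each cell centre, keep only the seismic sources within a 300 km radius. The region bounds and the point-source list below are made-up examples, and the attenuation relations and logic-tree combination of the actual PSHA are deliberately not reproduced; this shows only the spatial bookkeeping.

```python
import math

def grid_centres(lat_min, lat_max, lon_min, lon_max, step=0.1):
    """Centres of step x step degree cells covering the region."""
    centres = []
    lat = lat_min + step / 2
    while lat < lat_max:
        lon = lon_min + step / 2
        while lon < lon_max:
            centres.append((round(lat, 3), round(lon, 3)))
            lon += step
        lat += step
    return centres

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points in km."""
    R = 6371.0
    phi1, phi2 = math.radians(p[0]), math.radians(q[0])
    dphi = math.radians(q[0] - p[0])
    dlmb = math.radians(q[1] - p[1])
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def sources_in_range(centre, sources, radius_km=300.0):
    """Seismic sources (as lat/lon points) within radius_km of a centre."""
    return [s for s in sources if haversine_km(centre, s) <= radius_km]

# Example: a 1-degree x 1-degree patch (100 cells) and two point sources,
# one nearby and one far outside the 300 km radius.
cells = grid_centres(12.0, 13.0, 77.0, 78.0)
near  = sources_in_range((12.55, 77.55), [(12.9, 77.9), (20.0, 85.0)])
```

In a full analysis, the hazard parameters would then be computed at each cell centre from its in-range sources via the attenuation models.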
Abstract:
With the emergence of voltage scaling as one of the most powerful power reduction techniques, it has become important to support voltage-scalable statistical static timing analysis (SSTA) at deep submicrometer process nodes. In this paper, we propose a single neural-network delay model for a logic gate that comprehensively captures process, voltage, and temperature variation along with input slew and output load. The number of SPICE (Simulation Program with Integrated Circuit Emphasis) simulations required to create this model over a large voltage and temperature range is modest, 4x fewer than required for a conventional table-based approach of comparable accuracy. We show how the model can be used to derive the sensitivities required for linear SSTA at an arbitrary voltage and temperature. Our experiments on the ISCAS 85 benchmarks across a voltage range of 0.9-1.1 V show that the average error in mean delay is less than 1.08% and the average error in standard deviation is less than 2.85%. The errors in predicting the 99% and 1% probability points are 1.31% and 1%, respectively, with respect to SPICE. Two potential applications of voltage-aware SSTA are presented: improving the accuracy of timing analysis by considering instance-specific voltage drops in power grids, and determining the optimum supply voltage for a target yield in dynamic voltage scaling applications.
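The sensitivity-extraction step mentioned above can be illustrated with central finite differences on a fitted delay model. The closed-form `delay_model` below is a made-up surrogate standing in for the paper's trained neural network, and its coefficients are purely illustrative; the point is only that any smooth fitted model yields the per-parameter delay sensitivities linear SSTA needs.

```python
def delay_model(vdd, temp, slew, load):
    """Toy gate-delay surrogate (ps): delay falls with supply voltage and
    rises with temperature, input slew and output load."""
    return 50.0 / vdd + 0.05 * temp + 0.3 * slew + 2.0 * load

def sensitivity(f, args, index, h=1e-4):
    """Central-difference partial derivative of f w.r.t. args[index]."""
    lo, hi = list(args), list(args)
    lo[index] -= h
    hi[index] += h
    return (f(*hi) - f(*lo)) / (2 * h)

# Nominal corner: Vdd [V], T [C], input slew [ps], output load [fF].
nominal = (1.0, 25.0, 10.0, 5.0)
dD_dV = sensitivity(delay_model, nominal, 0)   # ~ -50 ps/V at Vdd = 1.0 V
dD_dT = sensitivity(delay_model, nominal, 1)   # ~ +0.05 ps/C
```

These per-corner sensitivities are what a linear SSTA would use as the first-order coefficients of delay with respect to voltage and temperature variation.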
Abstract:
Many areas of scientific research now demand computing power of very high magnitude, driven by precision, sophistication and economic factors; advanced research in high performance computing has thus become inevitable. The basic principle of sharing and collaborative work by geographically separated computers has been known by several names, such as metacomputing, scalable computing, cluster computing and Internet computing, and has today metamorphosed into a new term: grid computing. This paper gives an overview of grid computing and compares various grid architectures. We show the role that patterns can play in architecting complex systems, and provide a pragmatic reference to a set of well-engineered patterns that the practicing developer can apply to crafting his or her own applications. We are not aware of a pattern-oriented approach having been applied to develop and deploy a grid. Many grid frameworks have been built or are in the process of becoming functional. These grids differ in some functionality or another, though the basic principle on which they are built is the same; despite this, there are no standard requirements listed for building a grid. The grid being a very complex system, a standard Software Architecture Specification (SAS) is essential, and we attempt to develop one for use by any grid user or developer. Specifically, we analyze the grid using an object-oriented approach and present the architecture using UML. The paper also proposes the usage of patterns at all levels (analysis, design and architectural) of grid development.
Abstract:
The solution of a bivariate population balance equation (PBE) for aggregation of particles requires a large 2-d domain to be covered. A correspondingly large number of discretized equations for particle populations on pivots (representative sizes for bins) are solved, although in the end only a relatively small number of pivots are found to participate in the evolution process. In the present work, we initiate the solution of the governing PBE on a small set of pivots that can represent the initial size distribution. New pivots are added to expand the computational domain in the directions in which the evolving size distribution advances. A self-sufficient set of rules is developed to automate the addition of pivots, taken from an underlying X-grid formed by the intersection of lines of constant composition and constant particle mass. To test the robustness of the rule-set, simulations carried out with pivotwise expansion of the X-grid are compared with those obtained using sufficiently large fixed X-grids for a number of composition-independent and composition-dependent aggregation kernels and initial conditions. The two techniques lead to identical predictions, with the former requiring only a fraction of the computational effort. The rule-set automatically reduces aggregation of particles of the same composition to a 1-d problem. A midway change in the direction of expansion of the domain, effected by the addition of particles of different mean composition, is captured correctly by the rule-set. The evolving shape of the computational domain carries with it the signature of the aggregation process, which can be insightful in complex and time-dependent aggregation conditions. (c) 2012 Elsevier Ltd. All rights reserved.
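The expanding-pivot idea can be sketched in one dimension: start with pivots covering the initial distribution, and whenever aggregation creates mass beyond the current largest pivot, append the next pivots from an underlying fixed grid. The geometric candidate grid and the expansion rule below are illustrative stand-ins; the paper's full 2-d X-grid rule-set is not reproduced here.

```python
# Fixed underlying grid of candidate pivots (geometric, masses 1..2048).
underlying_grid = [2.0 ** k for k in range(12)]

def expand_pivots(pivots, largest_new_mass):
    """Append candidate grid pivots (in increasing order) until the
    largest aggregate mass produced so far is covered by the pivot set."""
    pivots = list(pivots)
    for candidate in underlying_grid:
        if candidate > pivots[-1] and candidate <= largest_new_mass:
            pivots.append(candidate)
    return pivots

pivots = [1.0, 2.0, 4.0]            # covers the initial size distribution
# Two mass-4 particles aggregate into mass 8, so the domain must grow:
pivots = expand_pivots(pivots, largest_new_mass=8.0)   # now includes 8.0
```

Only pivots that the evolving distribution actually reaches are ever added, which is the source of the computational saving the abstract reports.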
Abstract:
The Morse-Smale complex is a topological structure that captures the behavior of the gradient of a scalar function on a manifold. This paper discusses scalable techniques to compute the Morse-Smale complex of scalar functions defined on large three-dimensional structured grids. Computing the Morse-Smale complex of three-dimensional domains is challenging compared to two-dimensional domains because of the non-trivial structure introduced by the two types of saddle criticalities. We present a parallel shared-memory algorithm to compute the Morse-Smale complex based on Forman's discrete Morse theory. The algorithm achieves scalability via synergistic use of the CPU and the GPU. We first prove that the discrete gradient on the domain can be computed independently for each cell and hence can be implemented on the GPU. Second, we describe a two-step graph traversal algorithm to compute the 1-saddle-2-saddle connections efficiently and in parallel on the CPU. Simultaneously, the extremum-saddle connections are computed using a tree traversal algorithm on the GPU.
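The parallelism claim (the gradient can be computed independently per cell) can be illustrated with a much-simplified toy: pair each vertex of a 2D structured grid with its steepest lower 4-neighbour, or mark it critical if it has none. This is a hypothetical stand-in, not Forman's discrete gradient or the paper's algorithm, but it shares the key property that each vertex's decision uses only its local neighbourhood, so the loop body could run per-vertex on a GPU.

```python
import numpy as np

def steepest_descent_pairing(field):
    """For each vertex of a 2D scalar grid, return the offset (di, dj) of
    its steepest lower 4-neighbour, or None if it is a local minimum."""
    rows, cols = field.shape
    pairing = {}
    for i in range(rows):
        for j in range(cols):                  # independent per vertex
            best, best_drop = None, 0.0
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    drop = field[i, j] - field[ni, nj]
                    if drop > best_drop:
                        best, best_drop = (di, dj), drop
            pairing[(i, j)] = best             # None => critical (minimum)
    return pairing

field = np.array([[3.0, 2.0, 3.0],
                  [2.0, 1.0, 2.0],
                  [3.0, 2.0, 3.0]])
pairs = steepest_descent_pairing(field)
# The centre vertex (1, 1) has no lower neighbour, so it is a minimum.
```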
Abstract:
Earthquakes are known to have occurred in the Indian subcontinent since ancient times. This paper presents the results of a seismic hazard analysis of India (6 degrees-38 degrees N and 68 degrees-98 degrees E) based on the deterministic approach using the latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well-recognized attenuation relations covering the varied tectonic provinces in the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed in the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments and shear zones associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grid cells of size 0.1 degrees x 0.1 degrees (approximately 10 x 10 km), and the hazard parameters were calculated at the center of each cell by considering all the seismic sources within a radius of 300 to 400 km. Rock level peak horizontal acceleration (PHA) and spectral accelerations for periods of 0.1 and 1 s were calculated for all the grid points with the deterministic approach using a code written in MATLAB. Epistemic uncertainty in the hazard definition has been tackled within a logic-tree framework considering two types of sources and three attenuation models for each grid point. The hazard evaluation without the logic tree approach has also been done for comparison. Contour maps showing the spatial variation of the hazard values are presented in the paper.
Abstract:
In view of the major advances made in understanding the seismicity and seismotectonics of the Indian region in recent times, an updated probabilistic seismic hazard map of India covering 6-38 degrees N and 68-98 degrees E is prepared. This paper presents the results of a probabilistic seismic hazard analysis of India carried out using regional seismic source zones and four well-recognized attenuation relations covering the varied tectonic provinces in the region. The study area was divided into small grid cells of size 0.1 degrees x 0.1 degrees. Peak Horizontal Acceleration (PHA) and spectral accelerations for periods of 0.1 s and 1 s have been estimated, and contour maps showing their spatial variation are presented in the paper. The present study shows that the seismic hazard is moderate in the peninsular shield, but the hazard in most parts of North and Northeast India is high. (C) 2012 Elsevier Ltd. All rights reserved.