952 results for national average


Relevance: 20.00%

Abstract:

Subsurface lithology and seismic site classification of Lucknow urban center, located in the central part of the Indo-Gangetic Basin (IGB), are presented based on detailed shallow subsurface investigations and borehole analysis. These were carried out through 47 seismic surface wave tests using multichannel analysis of surface waves (MASW) and 23 boreholes drilled up to 30 m with standard penetration test (SPT) N values. Subsurface lithology profiles drawn from the drilled boreholes show low- to medium-compressibility clay and silty to poorly graded sand down to a depth of 30 m. In addition, logs of deeper boreholes (depth >150 m) were collected from the Lucknow Jal Nigam (Water Corporation), Government of Uttar Pradesh, to understand the deeper subsoil stratification. These reports show the presence of clay mixed with sand and Kankar at some locations to a depth of 150 m, followed by layers of sand, clay, and Kankar up to 400 m. Based on the available details, shallow and deeper cross-sections through Lucknow are presented. Shear wave velocity (SWV) and N-SPT values were measured for the study area using MASW and SPT testing, and the measured SWV and N-SPT values for the same locations were found to be comparable. These values were used to estimate 30 m average values of N-SPT (N-30) and SWV (V-s(30)) for seismic site classification of the study area as per the National Earthquake Hazards Reduction Program (NEHRP) soil classification system. Based on the NEHRP classification, the study area falls into site classes C and D based on V-s(30) and site classes D and E based on N-30. The possibility of larger amplification during future seismic events is highlighted for the major part of the study area that falls under site classes D and E. Moreover, the mismatch between site classes based on N-30 and V-s(30) raises the question of the suitability of the NEHRP classification system for the study region. Further, 17 sets of SPT and SWV data are used to develop a correlation between N-SPT and SWV. This represents a first attempt at seismic site classification and correlation between N-SPT and SWV in the Indo-Gangetic Basin.
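
To make the averaging concrete: V-s(30) and N-30 are time-averaged (harmonic-mean) values over the top 30 m, V-s(30) = 30 / Σ(d_i/v_i). A minimal Python sketch, with a hypothetical layered profile and the standard NEHRP V-s(30) class boundaries, is given below.

```python
# Hypothetical layered profile: (thickness [m], shear wave velocity [m/s], SPT N value)
layers = [(5, 180, 9), (10, 240, 18), (15, 320, 30)]   # thicknesses sum to 30 m

# Time-averaged values over the top 30 m: Vs30 = 30 / sum(d_i / v_i), likewise N30
vs30 = 30.0 / sum(d / v for d, v, _ in layers)
n30 = 30.0 / sum(d / n for d, _, n in layers)

def nehrp_class(vs30):
    """Standard NEHRP site classes from Vs30 in m/s."""
    if vs30 > 1500: return "A"
    if vs30 > 760:  return "B"
    if vs30 > 360:  return "C"
    if vs30 > 180:  return "D"
    return "E"

print(f"Vs30 = {vs30:.0f} m/s, N30 = {n30:.0f}, site class {nehrp_class(vs30)}")
```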

Relevance: 20.00%

Abstract:

Seismic site classifications are used to represent site effects for estimating hazard parameters (response spectral ordinates) at the soil surface. Seismic site classifications have generally been carried out using average shear wave velocity and/or standard penetration test N-values of the top 30-m soil layers, according to the recommendations of the National Earthquake Hazards Reduction Program (NEHRP) or the International Building Code (IBC). The site classification system in the NEHRP and the IBC is based on studies carried out in the United States, where soil layers extend up to several hundred meters before reaching any distinct soil-bedrock interface, and may not be directly applicable to other regions, especially those with shallow geological deposits. This paper investigates the influence of rock depth on site classes based on the recommendations of the NEHRP and the IBC. For this study, soil sites having a wide range of average shear wave velocities (or standard penetration test N-values) have been collected from different parts of Australia, China, and India. Shear wave velocities of rock layers underneath soil layers have also been collected at depths from a few meters to 180 m. It is shown that a site classification system based on the top 30-m soil layers often assigns stiffer site classes to soil sites with shallow rock depths (less than 25 m from the soil surface). A new site classification system based on average soil thickness up to engineering bedrock has been proposed herein, which is considered more representative for soil sites in shallow bedrock regions. It has been observed that response spectral ordinates, amplification factors, and site periods estimated using one-dimensional shear wave analysis considering the depth of engineering bedrock differ from those obtained considering the top 30-m soil layers.
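
A small sketch of the effect described here, using hypothetical numbers: at a shallow-bedrock site, the 30-m average sweeps stiff rock into the computation, while an average taken over the soil column alone (up to engineering bedrock) reflects the soft deposit.

```python
# Hypothetical shallow-bedrock site: 10 m of soft soil over engineering bedrock
soil = [(4, 150), (6, 220)]        # (thickness [m], Vs [m/s]) of the soil layers
rock_vs, rock_top = 1500.0, 10.0   # bedrock Vs and depth to its top

t_soil = sum(d / v for d, v in soil)   # travel time through the soil column

# Conventional Vs30 averages 20 m of stiff rock into the result ...
vs30 = 30.0 / (t_soil + (30.0 - rock_top) / rock_vs)
# ... whereas an average over the soil column alone reflects the soft deposit
vs_soil = rock_top / t_soil

print(f"Vs30 = {vs30:.0f} m/s (appears stiff); soil-only average = {vs_soil:.0f} m/s")
```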

Relevance: 20.00%

Abstract:

We propose power allocation algorithms for increasing the sum rate of two- and three-user interference channels. The channels experience fast fading, and there is an average power constraint on each transmitter. Our achievable strategies for the two- and three-user interference channels are based on classifying the interference into very strong, strong, and weak interference. We present numerical results of the power allocation algorithm for the two-user Gaussian interference channel with Rician fading, with mean total power gain of the fade Ω = 3 and Rician factor κ = 0.5, and compare the sum rate with that obtained from ergodic interference alignment with water-filling. We show that our power allocation algorithm increases the sum rate with a gain of 1.66 dB at an average transmit SNR of 5 dB. For the three-user Gaussian interference channel with Rayleigh fading with distribution CN(0, 0.5), we show that our power allocation algorithm improves the sum rate with a gain of 1.5 dB at an average transmit SNR of 5 dB.
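
For context, the water-filling baseline mentioned above can be sketched as follows; this is standard single-user water-filling over fading states under an average power constraint, not the authors' proposed algorithm, and the fading distribution and power budget are illustrative.

```python
import numpy as np

def water_fill(gains, p_avg, noise=1.0):
    """Standard water-filling: p_i = max(0, mu - noise/g_i), with the water
    level mu found by bisection so the average power equals p_avg."""
    lo, hi = 0.0, p_avg + noise / gains.min() + 1.0   # bracket for mu
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - noise / gains)
        lo, hi = (mu, hi) if p.mean() < p_avg else (lo, mu)
    return p

rng = np.random.default_rng(0)
g = rng.exponential(0.5, size=10_000)    # Rayleigh fading power gains, per CN(0, 0.5)
p = water_fill(g, p_avg=10 ** 0.5)       # average transmit SNR of 5 dB (unit noise)
rate = np.mean(np.log2(1.0 + g * p))     # ergodic rate in bits per channel use
print(f"average power {p.mean():.2f}, ergodic rate {rate:.2f} bit/use")
```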

Relevance: 20.00%

Abstract:

We consider a power optimization problem with average delay constraints on the downlink of a green base station. A green base station is powered both by renewable sources, such as solar or wind energy, and by conventional sources, such as diesel generators or the power grid. We aim to minimize the energy drawn from conventional sources and to utilize the harvested energy to the maximum extent. Each user also has an average delay constraint on its data. The optimal action consists of scheduling the users and allocating the optimal transmission rate to the chosen user. In this paper, we formulate the problem as a Markov decision problem and show the existence of a stationary average-cost optimal policy. We also derive some structural results for the optimal policy.
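
A toy version of such an average-cost problem can be solved by relative value iteration. The sketch below is an illustrative simplification, not the formulation in the paper: a single user, unit packets, Bernoulli arrivals and energy harvesting, a quadratic transmit-energy cost, and a linear delay penalty standing in for the delay constraint.

```python
import itertools
import numpy as np

B, Q = 4, 6                            # battery and queue capacities (illustrative)
p_arr, p_harv, beta = 0.4, 0.5, 1.0    # arrival prob., harvest prob., delay weight

states = list(itertools.product(range(B + 1), range(Q + 1)))
idx = {s: i for i, s in enumerate(states)}
h = np.zeros(len(states))              # relative value function

def act(b, q, r):
    """Cost of transmitting r packets from state (b, q) and next-state distribution."""
    energy = r * r                     # convex transmit energy for r packets
    grid = max(0, energy - b)          # conventional energy actually purchased
    b_left = b - min(b, energy)
    cost = grid + beta * q             # grid energy plus a delay (queue) penalty
    nxt = [(pa * pe, (min(B, b_left + e), min(Q, q - r + a)))
           for a, pa in ((0, 1 - p_arr), (1, p_arr))
           for e, pe in ((0, 1 - p_harv), (1, p_harv))]
    return cost, nxt

for _ in range(2000):                  # relative value iteration
    h_new = np.empty_like(h)
    for i, (b, q) in enumerate(states):
        h_new[i] = min(c + sum(p * h[idx[s]] for p, s in nxt)
                       for r in range(q + 1)
                       for c, nxt in [act(b, q, r)])
    h, g = h_new - h_new[0], h_new[0]  # h(s0) pinned to 0; h_new[0] -> average cost

print(f"approximate optimal average cost per slot: {g:.3f}")
```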

Relevance: 20.00%

Abstract:

The timer-based selection scheme is a popular, simple, and distributed scheme for selecting the best node from a set of available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Δ seconds of the transmission by the best node. We derive the optimal timer mapping that maximizes the average success probability for the practical scenario in which the number of nodes in the system is unknown but its probability distribution is known. We show that it has a special discrete structure, and we present a recursive characterization to determine it. We benchmark its performance against ad hoc approaches proposed in the literature and show that it delivers significant gains. New insights about the optimality of some ad hoc approaches are also developed.
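
A quick Monte Carlo sketch of the basic scheme (with a simple linear timer mapping, not the optimal mapping derived in the paper) illustrates the failure mode: selection fails whenever the runner-up's timer expires within Δ of the best node's.

```python
import numpy as np

rng = np.random.default_rng(1)
T_max, delta = 1.0, 0.05          # timer window and vulnerability window (illustrative)

def trial():
    n_nodes = rng.poisson(5) + 1           # unknown node count with a known distribution
    metrics = rng.uniform(size=n_nodes)
    timers = T_max * (1.0 - metrics)       # linear mapping: best metric expires first
    order = np.sort(timers)
    # Success iff the runner-up expires at least delta after the best node
    return n_nodes == 1 or order[1] - order[0] >= delta

succ = np.mean([trial() for _ in range(100_000)])
print(f"estimated success probability: {succ:.3f}")
```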

Relevance: 20.00%

Abstract:

Sport hunting is often proposed as a tool to support the conservation of large carnivores. However, it is challenging to provide tangible economic benefits from this activity as an incentive for local people to conserve carnivores. We assessed economic gains from sport hunting and poaching of leopards (Panthera pardus), costs of leopard depredation of livestock, and attitudes of people toward leopards in Niassa National Reserve, Mozambique. We sent questionnaires to hunting concessionaires (n = 8) to investigate the economic value of leopards and their importance relative to other key trophy-hunted species. We asked villagers (n = 158) the number of and prices for leopards poached in the reserve and the number of goats depredated by leopards. Leopards were the mainstay of the hunting industry; a single animal was worth approximately U.S.$24,000. Most safari revenues are retained at national and international levels, whereas poached leopards are traded illegally at the local level for small amounts ($83). Leopards depredated 11 goats over 2 years in 2 of 4 surveyed villages, resulting in losses of $440 to 6 households. People in these households had negative attitudes toward leopards. Although leopard sport hunting generates larger gross revenues than poaching, illegal hunting provides higher economic benefits for the households involved in the activity. Sport-hunting revenues did not compensate for the economic losses of livestock at the household level. On the basis of our results, we propose that poaching be reduced by increasing the costs of apprehension and that the economic benefits from leopard sport hunting be used to improve community livelihoods and provide incentives not to poach.

Relevance: 20.00%

Abstract:

In the underlay mode of cognitive radio, secondary users can transmit while the primary is transmitting, but under tight interference constraints, which limit secondary system performance. Antenna selection (AS)-based multiple antenna techniques, which require less hardware yet exploit spatial diversity, help improve secondary system performance. In this paper, we develop the optimal transmit AS rule that minimizes the symbol error probability (SEP) of an average interference-constrained secondary system operating in the underlay mode. We show that the optimal rule is a non-linear function of the power gains of the channels from the secondary transmit antenna to the primary receiver and from the secondary transmit antenna to the secondary receive antenna. The optimal rule differs from the several ad hoc rules that have been proposed in the literature. We also propose a closed-form, tractable variant of the optimal rule and analyze its SEP. Several results are presented to compare the performance of the closed-form rule with the ad hoc rules, and interesting inter-relationships among them are brought out.
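
To make the setting concrete, the sketch below simulates one plausible ad hoc rule, selecting the antenna with the largest ratio of secondary-link gain to primary-interference gain, with transmit power scaled to meet an average interference constraint and BPSK SEP measured by Monte Carlo. All parameters are illustrative, and this is not the optimal rule derived in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n_tx, n_sym, I_avg = 4, 200_000, 0.5     # antennas, Monte Carlo trials, interference budget

g_ss = rng.exponential(1.0, (n_sym, n_tx))   # Rayleigh power gains to the secondary RX
g_sp = rng.exponential(1.0, (n_sym, n_tx))   # ... and to the primary RX

sel = np.argmax(g_ss / g_sp, axis=1)         # ad hoc ratio-based selection rule
rows = np.arange(n_sym)
p_tx = I_avg / g_sp[rows, sel].mean()        # fixed power meeting the AVERAGE
                                             # interference constraint E[p * g_sp] = I_avg
snr = p_tx * g_ss[rows, sel]                 # instantaneous receive SNR (unit noise)
sep = norm.sf(np.sqrt(2.0 * snr)).mean()     # BPSK SEP = Q(sqrt(2 * SNR)), averaged
print(f"transmit power {p_tx:.3f}, average SEP {sep:.4f}")
```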

Relevance: 20.00%

Abstract:

This paper presents a computationally efficient model for a dc-dc boost converter that is valid for continuous and discontinuous conduction modes; the model also incorporates significant non-idealities of the converter. Simulation of the dc-dc boost converter using an average model provides practically all the details available from simulation using the switching (instantaneous) model, except for the quantum of ripple in currents and voltages. A harmonic model of the converter can be used to evaluate the ripple quantities. This paper proposes a combined (average-cum-harmonic) model of the boost converter. The accuracy of the combined model is validated through extensive simulations and experiments. A quantitative comparison of the computation times of the average, combined, and switching models is presented. The combined model is shown to be more computationally efficient than the switching model for simulation of transient and steady-state responses of the converter under various conditions.
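
For reference, the core of an ideal CCM average model is just the duty-ratio-weighted state equations, integrated below with forward Euler for hypothetical component values; the non-idealities, DCM handling, and harmonic model of the paper are omitted.

```python
# Ideal CCM average model of a boost converter (for illustration only):
#   L di/dt = Vin - (1 - d) * vo,   C dvo/dt = (1 - d) * iL - vo / R
Vin, L, C, R, d = 12.0, 100e-6, 470e-6, 10.0, 0.5
iL, vo, dt = 0.0, 0.0, 1e-6

for _ in range(200_000):                 # 0.2 s of simulated time, forward Euler
    diL = (Vin - (1.0 - d) * vo) / L
    dvo = ((1.0 - d) * iL - vo / R) / C
    iL += diL * dt
    vo += dvo * dt

print(f"steady state: iL = {iL:.2f} A, vo = {vo:.2f} V (ideal: Vin/(1-d) = {Vin/(1-d):.1f} V)")
```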

Relevance: 20.00%

Abstract:

This paper presents a comparative evaluation of the average and switching models of a dc-dc boost converter from the point of view of real-time simulation. Both models are used to simulate the converter in real time on a field-programmable gate array (FPGA) platform. The converter is considered to function over a wide range of operating conditions and can transition between continuous conduction mode (CCM) and discontinuous conduction mode (DCM). While the average model is known to be computationally efficient from the perspective of off-line simulation, it is shown here to consume more logical resources than the switching model for real-time simulation of the dc-dc converter. Further, evaluation of the boundary condition between CCM and DCM is found to be the main reason for the increased resource consumption of the average model.
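
The boundary evaluation in question reduces, in the textbook treatment, to a single comparison; the snippet below shows it for a boost converter with illustrative values (K = 2L/(R·Ts) against K_crit = D(1−D)²), though the paper's FPGA implementation is of course more involved.

```python
# Textbook CCM/DCM boundary test for a boost converter (illustrative values)
L, R, fs, D = 100e-6, 50.0, 100e3, 0.3

K = 2.0 * L * fs / R              # dimensionless conduction parameter, 2L/(R*Ts)
K_crit = D * (1.0 - D) ** 2       # boundary value; CCM if and only if K > K_crit

mode = "CCM" if K > K_crit else "DCM"
print(f"K = {K:.3f}, K_crit = {K_crit:.3f} -> {mode}")
```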

Relevance: 20.00%

Abstract:

Na0.5Bi0.5TiO3 (NBT) and its derivatives have prompted a great surge of interest owing to their potential as lead-free piezoelectrics. Despite five decades since its discovery, there is still a lack of clarity on crucial issues such as the origin of the significant dielectric relaxation at room temperature, the structural factors influencing its depoling, and the status of the recently proposed monoclinic (Cc) structure vis-a-vis the nanosized structural heterogeneities. In this work, these issues are resolved by comparative analysis of local and global structures of poled and unpoled NBT specimens using electron, x-ray, and neutron diffraction in conjunction with first-principles calculations and dielectric, ferroelectric, and piezoelectric measurements. The reported global monoclinic (Cc) distortion is shown not to correspond to the thermodynamic equilibrium state at room temperature. The globally monoclinic-like appearance rather owes its origin to the presence of local structural and strain heterogeneities. Poling removes the structural inhomogeneities and establishes a long-range rhombohedral distortion. In the process, the system is irreversibly transformed from a nonergodic relaxor to a normal ferroelectric state. Thermal depoling is shown to be associated with the onset of incompatible in-phase tilted octahedral regions in the field-stabilized long-range rhombohedral distortion.

Relevance: 20.00%

Abstract:

We consider the problem of characterizing the minimum average delay, or equivalently the minimum average queue length, of message symbols randomly arriving at the transmitter queue of a point-to-point link which dynamically selects an (n, k) block code from a given collection. The system is modeled by a discrete time queue with an IID batch arrival process and batch service. We obtain a lower bound on the minimum average queue length, which is the optimal value of a linear program, using only the mean (λ) and variance (σ²) of the batch arrivals. For a finite collection of (n, k) codes the minimum achievable average queue length is shown to be Θ(1/ε) as ε ↓ 0, where ε is the difference between the maximum code rate and λ. We obtain a sufficient condition for code rate selection policies to achieve this optimal growth rate. A simple family of policies that use only one block code each, as well as two other heuristic policies, are shown to be weakly optimal in the sense of achieving the 1/ε growth rate. An appropriate selection from the family of policies that use only one block code each is also shown to achieve the optimal coefficient σ²/2 of the 1/ε growth rate. We numerically compare the performance of the heuristic policies with the minimum achievable average queue length and the lower bound. For a countable collection of (n, k) codes, the optimal average queue length is shown to be Ω(1/ε). We illustrate the selectivity among policies of the growth rate optimality criterion for both finite and countable collections of (n, k) block codes.
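
The 1/ε growth can be checked with a toy simulation of the queue under a single (n, k) code; the arrival process and code below are hypothetical, and serving k symbols every n slots is a simplification of the service model in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 4, 3                                   # a single (4, 3) block code: rate 3/4

def avg_queue(lam, slots=1_000_000):
    """Average queue length with Poisson batch arrivals and one codeword
    (k message symbols) served every n slots."""
    arrivals = rng.poisson(lam, size=slots)
    q, total = 0, 0
    for t in range(slots):
        q += arrivals[t]
        if t % n == n - 1:                    # a codeword completes every n slots
            q = max(0, q - k)
        total += q
    return total / slots

for eps in (0.1, 0.05, 0.025):
    lam = k / n - eps                         # eps = code rate minus arrival rate
    aq = avg_queue(lam)
    print(f"eps = {eps:<6} avg queue = {aq:8.1f}   eps * avg = {eps * aq:.2f}")
```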

Relevance: 20.00%

Abstract:

We consider a discrete time partially observable zero-sum stochastic game with an average payoff criterion. We study the game using an equivalent completely observable game. We show that the game has a value and present a pair of optimal strategies for both players.
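
As background, the value of a one-shot zero-sum matrix game, the building block solved at each state of such a game, can be computed by linear programming; the payoff matrix below is arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[3.0, -1.0],
              [-2.0, 4.0]])   # arbitrary payoff matrix (row player maximizes)

# Maximize v subject to x^T A[:, j] >= v for every column j, x a probability vector.
# Variables are (x_1, ..., x_m, v); linprog minimizes, so the objective is -v.
m, n = A.shape
c = np.r_[np.zeros(m), -1.0]
A_ub = np.c_[-A.T, np.ones(n)]           # rows encode v - x^T A[:, j] <= 0
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)
b_eq = [1.0]
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
x, v = res.x[:m], res.x[m]
print(f"optimal row strategy {x.round(3)}, game value {v:.3f}")
```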

Relevance: 20.00%

Abstract:

Background & objectives: Pre-clinical toxicology evaluation of biotechnology products is a challenge to the toxicologist. The present investigation is an attempt to evaluate the safety profile of the first indigenously developed recombinant DNA anti-rabies vaccine [DRV (100 µg)] and combination rabies vaccine [CRV (100 µg DRV and 1.25 IU of cell culture-derived inactivated rabies virus vaccine)], which are intended for clinical use by the intramuscular route, in rhesus monkeys. Methods: As per the regulatory requirements, the study was designed for acute (single dose, 14 days), sub-chronic (repeat dose, 28 days) and chronic (intended clinical dose, 120 days) toxicity tests using three dose levels, viz. the therapeutic dose, an average dose (2× the therapeutic dose) and the highest dose (10× the therapeutic dose), in monkeys. The selection of the model, i.e. the monkey, was based on affinity and the rapid, higher antibody response during the efficacy studies. An attempt was made to evaluate all parameters, including physical, physiological, clinical, haematological and histopathological profiles of all target organs, as well as Tier I, II and III immunotoxicity parameters. Results: In acute toxicity there was no mortality in spite of exposing the monkeys to 10× DRV. In the sub-chronic and chronic toxicity studies there were no abnormalities in physical, physiological, neurological or clinical parameters after administration of the test compound at the intended dose and at 10 times the clinical dosage schedule of DRV and CRV under the experimental conditions. Clinical chemistry, haematology, organ weights and histopathology studies were essentially unremarkable except for the presence of residual DNA at femtogram levels at the site of injection in the animals that received 10× DRV in the chronic toxicity study. The No Observed Adverse Effect Level (NOAEL) of DRV is 1000 µg/dose (10 times the therapeutic dose) if administered on days 0, 4, 7, 14 and 28. Interpretation & conclusions: The information generated by this study not only draws attention to the need for national and international regulatory agencies to formulate guidelines for pre-clinical safety evaluation of biotech products but also facilitates the development of biopharmaceuticals as safe potential therapeutic agents.

Relevance: 20.00%

Abstract:

Voltage source inverters are an integral part of renewable power sources and smart grid systems. Computationally efficient and fairly accurate models of the voltage source inverter are required to carry out extensive simulation studies on complex power networks. Accuracy requires that the effect of dead-time be incorporated in the inverter model. The dead-time is essentially a short delay introduced between the gating pulses to the complementary switches in an inverter leg for the safety of the power devices. As modern voltage source inverters switch at fairly high frequencies, the dead-time significantly influences the output fundamental voltage. Dead-time also causes low-frequency harmonic distortion and is hence important from a power quality perspective. This paper studies the dead-time effect in a synchronous dq reference frame, since dynamic studies and controller design are typically carried out in this frame of reference. For the sake of computational efficiency, average models incorporating the dead-time effect are derived in both the RYB and dq reference frames. The average models are shown to consume less computation time than their corresponding switching models, with comparable accuracy. The proposed average synchronous reference frame model, including the effect of dead-time, is validated through experimental results.
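
The first-order average effect of dead-time on an inverter leg is a polarity-dependent voltage error of magnitude (t_d·f_s)·V_dc. The snippet below implements this standard approximation with illustrative parameters; the refinements of the paper's dq-frame model are omitted.

```python
import numpy as np

Vdc, fs, td = 400.0, 10e3, 2e-6    # DC bus [V], switching frequency [Hz], dead-time [s]
d_loss = td * fs                   # per-cycle duty-ratio loss due to dead-time

t = np.linspace(0.0, 0.02, 2000)                # one 50 Hz fundamental cycle
d = 0.5 + 0.4 * np.sin(2 * np.pi * 50 * t)      # commanded duty ratio of one leg
i = np.sin(2 * np.pi * 50 * t - 0.5)            # phase current with an arbitrary lag

v_ideal = (d - 0.5) * Vdc                       # averaged pole voltage about the DC midpoint
v_dead = v_ideal - np.sign(i) * d_loss * Vdc    # error opposes the current polarity

# Fundamental component of the error, which is what reduces the output voltage
err_fund = 2.0 * np.mean((v_ideal - v_dead) * np.sin(2 * np.pi * 50 * t - 0.5))
print(f"duty loss {d_loss:.3f}, fundamental voltage error ~ {err_fund:.1f} V")
```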

Relevance: 20.00%

Abstract:

Northeast India is one of the most seismically active regions in the world, with more than seven earthquakes of magnitude 5.0 and above per year on average. Reliable seismic hazard assessment could provide the necessary design inputs for earthquake-resistant design of structures in this region. In this study, deterministic as well as probabilistic methods have been attempted for seismic hazard assessment of the Tripura and Mizoram states at bedrock level. An updated earthquake catalogue was collected from various national and international seismological agencies for the period from 1731 to 2011. Homogenization, declustering and data completeness analysis of events were carried out before hazard evaluation. Seismicity parameters were estimated using the Gutenberg-Richter (G-R) relationship for each source zone. Based on seismicity, tectonic features and fault rupture mechanism, the region was divided into six major subzones. Region-specific correlations were used for magnitude conversion to homogenize earthquake size. Ground motion equations (Atkinson and Boore 2003; Gupta 2010) were validated against observed PGA (peak ground acceleration) values before use in the hazard evaluation. In this study, the hazard is estimated using linear sources identified in and around the study area. Results are presented in the form of PGA using both DSHA (deterministic seismic hazard analysis) and PSHA (probabilistic seismic hazard analysis) with 2% and 10% probability of exceedance in 50 years, and spectral acceleration (T = 0.2 s, 1.0 s) for both states (2% probability of exceedance in 50 years). The results provide important inputs for planning risk-reduction strategies, developing risk-acceptance criteria and assessing potential financial losses in the study area through comprehensive analysis and higher-resolution hazard mapping.
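
The seismicity-parameter step can be illustrated with a least-squares Gutenberg-Richter fit, log10 N = a − b·M, to a hypothetical declustered catalogue; real studies typically use maximum-likelihood estimators and must account for catalogue completeness.

```python
import numpy as np

# Hypothetical declustered catalogue: 500 events above Mmin = 4.0; exponential
# magnitudes with scale 1/ln(10) correspond to a b-value of about 1
rng = np.random.default_rng(4)
mags = 4.0 + rng.exponential(1.0 / np.log(10), size=500)

# Cumulative counts N(>= M) on a grid of threshold magnitudes
m_grid = np.arange(4.0, 5.6, 0.25)
N = np.array([(mags >= m).sum() for m in m_grid])

# Least-squares fit of log10 N = a - b*M
slope, a = np.polyfit(m_grid, np.log10(N), 1)
print(f"a = {a:.2f}, b = {-slope:.2f}")
```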