936 results for vattenfri offset
Abstract:
In Estonia, illicit drug use hardly existed before the social changes of the 1990s when, as a result of economic and cultural transformations, the country became part of a world order centred in the West. On the one hand, this development is due to the spread of an international youth culture, which many young people have perceived as being associated with drugs; on the other hand, it results from the marginalisation of a part of the population. The empirical part of the study is based mostly on in-depth interviews with different drug users conducted between 1998 and 2002. Complementary material includes the results of participant observation, interviews with key experts, and the results of previous quantitative studies and statistics. The young people who started experimenting with illicit drugs from the 1990s onwards perceived them as part of an attractive lifestyle - a Western lifestyle, a point worth stressing in the case of Estonia. Although the reasons for initiation into drug use were similar for the majority of young people, their drug use habits and the impact of drug use on their lives began to differ. I argue that the potential pleasure and harm which might accompany drug use are offset by the meanings attached to drugs and by the sanctions and rituals regulating drug use. In this study both recreational and problem use have been analysed from different aspects in seven articles. I have investigated different types of drug users: new bohemians, cannabis users for whom partying and restrictive drug use are positively connected to their lives and goals within established society; stimulant-using party people for whom drugs are a means of having fun but who lack the former group's restrictive norms regulating drug use and who may get into trouble under certain conditions; and heroin users for whom the drug rapidly progressed from a means of having fun to an obligation due to addiction.
The research results point to the importance not only of the drug itself and the socio-economic situation of the user, but also of the cultural and social context within which the drug is used. The latter may on occasion be a crucial factor in whether or not initial drug use eventually leads to addiction.
Abstract:
A new performance metric, the Peak-Error Ratio (PER), is presented to benchmark the performance of a class of neuron circuits realizing the neuron activation function (NAF) and its derivative (DNAF). Neuron circuits biased in the subthreshold region, based on the asymmetric cross-coupled differential pair configuration and on the conventional configuration of applying a small external offset voltage at the input, are compared on the basis of PER. It is shown that the technique of using transistor asymmetry in a cross-coupled differential pair performs on par with that of applying an external offset voltage. The neuron circuits have been experimentally prototyped and characterized as a proof of concept in a 1.5 µm AMI technology.
Abstract:
Nowadays any analysis of the Russian economy is incomplete without taking into account the phenomenon of oligarchy. Russian oligarchs emerged after the fall of the Soviet Union: wealthy businessmen who control a large share of natural-resource enterprises and wield considerable political influence. Oligarchs’ shares in some natural-resource industries reach 70-80%. Their role in the Russian economy is undoubtedly large, yet very little economic analysis of it has been done. The aim of this work is to examine Russian oligarchy at the micro and macro levels, its role in Russia’s transition, and the possible positive and negative outcomes of this phenomenon. For this purpose the work presents two theoretical models. The first part of the thesis examines the role of oligarchs at the micro level, concentrating on whether oligarchs can be more productive owners than other types of owners. To answer this question, it presents a model based on the article “Are oligarchs productive? Theory and evidence” by Y. Gorodnichenko and Y. Grygorenko, followed by an empirical test based on the works of S. Guriev and A. Rachinsky. The model predicts that oligarchs invest more in the productivity of their enterprises and earn higher returns on capital, and are therefore more productive owners. In the empirical test, oligarchs were indeed found to outperform other types of owners, although it remains unclear whether the productivity gains offset losses in tax revenue. The second part of the work concentrates on the role of oligarchy at the macro level. More precisely, it examines the hypothesis that the depression after the 1998 crisis in Russia was caused by the oligarchs’ behavior. This part presents a theoretical model based on the article “A macroeconomic model of Russian transition: The role of oligarchic property rights” by S. Braguinsky and R. Myerson, which introduces a special type of property rights.
After the 1998 crisis, oligarchs started to invest all their resources abroad to protect themselves from political risk, which resulted in a long depression phase. The macroeconomic model shows that better protection of property rights (smaller political risk) and/or higher outside investment could shorten the depression. Taking this result into account, government policy can steer the oligarchs’ behavior to be more beneficial for the Russian economy and make the transition faster.
Abstract:
Separated local field (SLF) spectroscopy is a powerful technique for measuring heteronuclear dipolar couplings. The method provides site-specific dipolar couplings for oriented samples such as membrane proteins oriented in lipid bilayers and liquid crystals. A majority of SLF techniques utilize the well-known Polarization Inversion Spin Exchange at Magic Angle (PISEMA) pulse scheme, which employs spin exchange at the magic angle under the Hartmann-Hahn match. Though PISEMA provides a relatively large scaling factor for the heteronuclear dipolar coupling and better resolution along the dipolar dimension, it has a few shortcomings. One of the major problems with PISEMA is that the sequence is highly sensitive to the proton carrier offset, and the measured dipolar coupling changes dramatically with the carrier frequency. The study presented here focuses on modified PISEMA sequences which are relatively insensitive to proton offsets over a large range. In the proposed sequences, the proton magnetization is cycled through two quadrants while the effective field is cycled through either two or four quadrants. The modified sequences are named 2(n)-SEMA, where n represents the number of quadrants the effective field is cycled through. Experiments carried out on a liquid crystal and on a single crystal of a model peptide demonstrate the usefulness of the modified sequences. A systematic study under various offset and Hartmann-Hahn mismatch conditions has been carried out, and the performance is compared with PISEMA under similar conditions.
Abstract:
Aims: To develop and validate tools to estimate the residual noise covariance in Planck frequency maps, quantify signal error effects, and compare different techniques for producing low-resolution maps. Methods: We derive analytical estimates of the covariance of the residual noise contained in low-resolution maps produced using a number of map-making approaches. We test these analytical predictions using Monte Carlo simulations and assess their impact on angular power spectrum estimation. We use simulations to quantify the level of signal error incurred in the different resolution-downgrading schemes considered in this work. Results: We find excellent agreement between the optimal residual noise covariance matrices and the Monte Carlo noise maps. For destriping map-makers, the extent of agreement is dictated by the knee frequency of the correlated noise component and the chosen baseline offset length. Signal striping is shown to be insignificant when properly dealt with. In map resolution downgrading, we find that a carefully selected window function is required to reduce aliasing to the sub-percent level at multipoles ℓ > 2Nside, where Nside is the HEALPix resolution parameter. We show that sufficient characterization of the residual noise is unavoidable if one is to draw reliable constraints on large-scale anisotropy. Conclusions: We have described how to compute low-resolution maps with a controlled sky signal level and a reliable estimate of the residual noise covariance. We have also presented a method to smooth the residual noise covariance matrices to describe the noise correlations in smoothed, bandwidth-limited maps.
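The Monte Carlo validation step described above can be sketched in a deliberately minimal form (a toy illustration, not the Planck pipeline: uncorrelated white pixel noise is assumed, so the analytical covariance is simply sigma^2 times the identity, and the sample covariance of many simulated noise maps is checked against it):

```python
import numpy as np

rng = np.random.default_rng(42)
npix, nsim, sigma = 12, 20000, 1.0

# Analytical covariance for uncorrelated (white) pixel noise.
cov_analytic = sigma**2 * np.eye(npix)

# Monte Carlo: many independent noise realizations of the map.
maps = rng.normal(0.0, sigma, size=(nsim, npix))
cov_mc = maps.T @ maps / nsim

# The discrepancy shrinks as 1/sqrt(nsim).
print(np.max(np.abs(cov_mc - cov_analytic)))
```

With correlated (1/f) noise the analytical matrix would no longer be diagonal, which is where the destriping baseline length enters; the comparison logic, however, stays the same.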
Abstract:
Sixteen-electrode phantoms are developed and studied with simple instrumentation built for Electrical Impedance Tomography (EIT). The analog instrumentation comprises a sinusoidal current generator and a signal-conditioner circuit. The current generator is built from a modified Howland constant current source fed by a voltage-controlled oscillator, and the signal conditioner consists of an instrumentation amplifier and a narrow band-pass filter. The electronic hardware is connected to the electrodes through a DIP-switch-based multiplexer module. Phantoms with different electrode sizes and positions are developed, and the EIT forward problem is studied using the forward solver. A low-frequency, low-magnitude sinusoidal current is injected into the surface electrodes surrounding the phantom boundary, and the differential potential is measured with a digital multimeter. By comparing the measured potentials with simulated data, the measurement error is reduced and an optimum phantom geometry is suggested. Results show that the common-mode electrode reduces the common-mode error of the EIT electronics and the error potential in the measured data. The differential potential is reduced by up to 67 mV at the voltage electrode pair opposite the current electrodes. The offset potential is measured and subtracted from the measured data for further correction. The potential data pattern is observed to depend on the electrode width, and an optimum electrode width is suggested. It is also observed that the measured potential becomes acceptable with a 20 mm solution column above and below the electrode array level.
Abstract:
The Clean Development Mechanism (CDM), defined in Article 12 of the Kyoto Protocol, allows Afforestation and Reforestation (A/R) projects as mitigation activities to offset CO2 in the atmosphere while simultaneously seeking to ensure sustainable development for the host country. The Kyoto Protocol was ratified by the Government of India in August 2002, and one of India's objectives in acceding to the Protocol was to fulfil the prerequisites for implementing projects under the CDM in accordance with national sustainable development priorities. The objective of this paper is to assess the effectiveness of large-scale forestry projects under the CDM in achieving these twin goals, using Karnataka State as a case study. The Generalized Comprehensive Mitigation Assessment Process (GCOMAP) model is used to observe the effect of varying carbon prices on the land available for A/R projects. The model is coupled with outputs from the Lund-Potsdam-Jena (LPJ) Dynamic Global Vegetation Model to incorporate the impacts of temperature rise due to climate change under the Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios (SRES) A2, A1B and B1. With rising temperatures and CO2, vegetation productivity increases under the A2 and A1B scenarios and decreases under B1. Results indicate that higher carbon price paths produce higher gains in carbon credits and accelerate the rate at which available land reaches maximum capacity, thus acting as either an incentive or a disincentive for landowners to commit their land to forestry mitigation projects.
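The price-capacity dynamic reported above can be caricatured in a few lines (a hypothetical toy model, not GCOMAP: land is assumed to commit to A/R projects at a rate proportional to the carbon price until a fixed land ceiling is reached, and all parameter values are made up):

```python
def years_to_capacity(price, total_land=1000.0, rate_per_price=2.0):
    """Years until the assumed land ceiling is fully committed,
    with annual commitment proportional to the carbon price."""
    committed, years = 0.0, 0
    while committed < total_land:
        committed += rate_per_price * price
        years += 1
    return years

print(years_to_capacity(10.0))  # lower carbon price: ceiling hit later
print(years_to_capacity(25.0))  # higher carbon price: ceiling hit sooner
```

Even this caricature reproduces the qualitative result: a higher carbon price path accumulates credits faster but exhausts the available land earlier.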
Abstract:
The Government of India has announced the Greening India Mission (GIM) under the National Climate Change Action Plan. The Mission aims to restore and afforest about 10 Mha over the period 2010-2020 under different sub-missions covering moderately dense and open forests, scrub/grasslands, mangroves, wetlands, croplands and urban areas. Even though the main focus of the Mission is to address mitigation and adaptation in the context of climate change, the adaptation component is inadequately addressed. There is a need for increased scientific input in the preparation of the Mission. The mitigation potential is estimated by simply multiplying global default biomass growth rates by area; this is incomplete, as it does not account for all the carbon pools, phasing, differing growth rates, etc. The mitigation potential of the GIM for the year 2020, estimated using the Comprehensive Mitigation Analysis Process model, could offset 6.4% of the projected national greenhouse gas emissions, compared to the GIM's own estimate of only 1.5%, excluding any emissions due to harvesting or disturbances. The selection of locations for the different interventions and the choice of species under the GIM must be based on modelling, remote sensing and field studies. The forest sector provides an opportunity to promote mitigation-adaptation synergy, which is not adequately addressed in the GIM. Since many of the proposed interventions are innovative and limited scientific knowledge exists, an unprecedented level of collaboration is needed between research institutions and implementing agencies such as the Forest Departments, which is currently non-existent. The GIM could propel systematic research into forestry and climate change issues and thereby provide global leadership in this new and emerging science.
Abstract:
Purpose - This paper aims to validate a comprehensive aeroelastic analysis for a helicopter rotor against the higher harmonic control aeroacoustic rotor test (HART-II) wind tunnel test data. Design/methodology/approach - An aeroelastic analysis of a helicopter rotor with elastic blades, based on the finite element method in space and time and capable of considering higher harmonic control inputs, is carried out. Moderate-deflection and Coriolis nonlinearities are included in the analysis. The rotor aerodynamics are represented using free wake and unsteady aerodynamic models. Findings - Good correlation between the analysis and HART-II wind tunnel test data is obtained for blade natural frequencies across a range of rotating speeds. The basic physics of the blade mode shapes is also well captured. In particular, the fundamental flap, lag and torsion modes compare very well. The blade response compares well with the HART-II results and with other high-fidelity aeroelastic code predictions for the flap and torsion modes. For the lead-lag response, the present analysis gives somewhat better predictions than other aeroelastic analyses. Research limitations/implications - The predicted blade response trend with higher harmonic pitch control agreed well with the wind tunnel test data, but usually contained a constant offset in the mean values of the lead-lag and elastic torsion response. Improvements in the modeling of the aerodynamic environment around the rotor can help reduce this gap between the experimental and numerical results. Practical implications - Correlation of predicted aeroelastic response with wind tunnel test data is a vital step towards validating any helicopter aeroelastic analysis. Such efforts lend confidence in using numerical analysis to understand the actual physical behavior of the helicopter system. Also, validated numerical analyses can take the place of time-consuming and expensive wind tunnel tests during the initial stage of the design process.
Originality/value - While the basic physics appears to be well captured by the aeroelastic analysis, there is a need for improvement in the aerodynamic modeling, which appears to be the source of the gap between the numerical predictions and the HART-II wind tunnel experiments.
Abstract:
The ~2500 km long Himalayan arc has experienced three large to great earthquakes of Mw 7.8 to 8.4 during the past century, but none produced surface rupture. Paleoseismic studies have been conducted during the last decade to begin understanding the timing, size, rupture extent, return period, and mechanics of the faulting associated with the occurrence of large surface-rupturing earthquakes along the ~2500 km long Himalayan Frontal Thrust (HFT) system of India and Nepal. Previous studies have been limited to about nine sites along the western two-thirds of the HFT, extending through northwest India and along the southern border of Nepal. We present here the results of paleoseismic investigations at three additional sites further to the northeast along the HFT, within the Indian states of West Bengal and Assam. The three sites lie between the meizoseismal areas of the 1934 Bihar-Nepal and 1950 Assam earthquakes. The two westernmost sites, near the village of Chalsa and near the Nameri Tiger Preserve, show that offsets during the last surface rupture event were at minimum about 14 m and 12 m, respectively. Limits on the ages of surface rupture at Chalsa (site A) and Nameri (site B), though broad, allow the possibility that the two sites record the same great historical rupture reported in Nepal around A.D. 1100. The correlation between the two sites is supported by the observation that displacements as large as those recorded at Chalsa and Nameri would most likely be associated with rupture lengths of hundreds of kilometers or more, on the same order as reported for the surface rupture earthquake in Nepal around A.D. 1100. Assuming the offsets observed at Chalsa and Nameri occurred synchronously with the reported offsets in Nepal, the rupture length of the event would approach 700 to 800 km. The easternmost site is located within the Harmutty Tea Estate (site C), at the edge of the 1950 Assam earthquake meizoseismal area.
Here the most recent event offset is much smaller (<2.5 m), and radiocarbon dating shows it to have occurred after A.D. 1100 (after about A.D. 1270). The location of the site near the edge of the meizoseismal region of the 1950 Assam earthquake and the relatively smaller offset allow speculation that the displacement records the 1950 Mw 8.4 Assam earthquake. Scatter in radiocarbon ages on detrital charcoal has not yielded a firm bracket on the timing of the events observed in the trenches. Nonetheless, the observations collected here, taken together, suggest that the largest thrust earthquakes along the Himalayan arc have rupture lengths and displacements of similar scale to the largest that have occurred historically along the world's subduction zones.
Abstract:
This paper considers the problem of spectrum sensing in cognitive radio networks when the primary user employs Orthogonal Frequency Division Multiplexing (OFDM). We specifically consider the scenario in which the channel between the primary and a secondary user is frequency selective. We develop cooperative sequential detection algorithms based on energy detectors, and modify the detectors to mitigate the effects of common model uncertainties such as timing and frequency offset, IQ imbalance, and uncertainty in noise and transmit power. The performance of the proposed algorithms is studied via simulations. We show that the performance of the energy detector is not affected by the frequency-selective channel. We also provide a theoretical analysis for some of our algorithms.
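The energy-detector principle underlying these algorithms can be sketched as follows (a minimal single-sensor illustration under an assumed known noise power, not the authors' cooperative sequential scheme; the threshold factor and tone signal are arbitrary choices):

```python
import numpy as np

def energy_detect(samples, noise_power, threshold_factor=1.5):
    """Flag the band as occupied when the mean sample energy exceeds
    a multiple of the (assumed known) noise power."""
    test_statistic = np.mean(np.abs(samples) ** 2)
    return test_statistic > threshold_factor * noise_power

rng = np.random.default_rng(0)
noise_power, n = 1.0, 4096
# Complex white noise with total power noise_power.
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) * np.sqrt(noise_power / 2)
# Primary-user tone at 3 dB SNR (signal power 2.0).
signal = np.sqrt(2.0) * np.exp(1j * 2 * np.pi * 0.1 * np.arange(n))
print(energy_detect(noise, noise_power))           # noise only -> False
print(energy_detect(noise + signal, noise_power))  # signal present -> True
```

Because the statistic sums energy across the band, a frequency-selective channel reshapes where the signal energy sits but not its total, which is consistent with the paper's observation that the energy detector is insensitive to frequency selectivity.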
Abstract:
The eigenvalue and eigenstructure assignment procedure has found application in a wide variety of control problems. In this paper a method for assigning eigenstructure to a linear time-invariant multi-input system is proposed. The algorithm determines a matrix that has eigenvalues and eigenvectors at the desired locations, obtained from knowledge of the open-loop system and the desired eigenstructure. Solution of the matrix equation, involving the unknown controller gains, the open-loop system matrices, and the desired eigenvalues and eigenvectors, results in the state feedback controller. The proposed algorithm requires the closed-loop eigenvalues to be different from those of the open-loop case. This apparent constraint can easily be overcome by a negligible shift in the values. Application of the procedure is illustrated through the offset control of a satellite supported, from an orbiting platform, by a flexible tether.
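For the single-input case, the flavour of eigenvalue assignment by state feedback can be illustrated with Ackermann's formula (a standard textbook method shown here for context; it is not the multi-input algorithm proposed in the paper, and the double-integrator plant is an assumed example):

```python
import numpy as np

def ackermann(A, b, poles):
    """Single-input state-feedback gain K placing the eigenvalues of
    A - b K at the desired locations (Ackermann's formula)."""
    n = A.shape[0]
    # Controllability matrix [b, Ab, ..., A^(n-1) b].
    C = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    # Desired characteristic polynomial evaluated at A (Horner scheme).
    coeffs = np.poly(poles)  # leading coefficient first
    phi = np.zeros_like(A)
    for c in coeffs:
        phi = phi @ A + c * np.eye(n)
    # K = [0 ... 0 1] C^{-1} phi(A).
    e = np.zeros((1, n))
    e[0, -1] = 1.0
    return e @ np.linalg.inv(C) @ phi

# Double-integrator example: place closed-loop poles at -2 and -3.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])
K = ackermann(A, b, [-2.0, -3.0])
print(np.sort(np.linalg.eigvals(A - b @ K)))  # approx. [-3., -2.]
```

The multi-input eigenstructure problem treated in the paper additionally assigns eigenvectors, which is what distinguishes it from plain pole placement of this kind.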
Abstract:
Unsteady propagation of spherical flames, both inward and outward, is studied extensively through numerical simulation for a single-step reaction and for different Lewis numbers of fuel/oxidizer. The dependence of the flame speed ratio (s) and the flame temperature ratio on stretch (kappa) is obtained for a range of Lewis numbers. The s versus kappa results show that the asymptotic theory of Frankel and Sivashinsky is reasonable for outward propagation; other theories are unsatisfactory both quantitatively and qualitatively. The stretch effects are much stronger for negative stretch than for positive stretch, as also seen in the theory of Frankel and Sivashinsky. The linearity of the flame speed ratio versus stretch relationship is restricted to nondimensional stretch of +/-0.1. It is shown further that the results for cylindrical flames are identical to those for spherical flames on a flame-speed-ratio versus nondimensional-stretch plot, confirming the generality of the concept of stretch. Comparison of the variation of (ds/dkappa) at kappa=0 with beta(Le - 1) shows an offset between the computed and asymptotic results of Matalon and Matkowsky; the departure of the negative-stretch results from this variation is significant. Several earlier experimental results are analysed and set out in the form of s versus kappa plots. Comparison with experiments seems reasonable for negative stretch; the results for positive stretch are qualitatively satisfactory in a few cases. For rich propane-air flames, there are qualitative differences, pointing to the need for full-chemistry calculations in the extraction of stretch effects.
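The slope extraction implied above — fitting s versus kappa within the linear range |kappa| <= 0.1 to estimate (ds/dkappa) at kappa = 0 — can be sketched on synthetic data (an assumed linear law s = 1 - L*kappa with a made-up Markstein-type slope L, not the computed flame results):

```python
import numpy as np

# Synthetic flame-speed data obeying an assumed linear law s = 1 - L*kappa,
# valid only in the near-zero-stretch range; L is a made-up slope.
L = 2.0
kappa = np.linspace(-0.1, 0.1, 21)
s = 1.0 - L * kappa

# (ds/dkappa) at kappa = 0 from a first-order polynomial fit.
slope, intercept = np.polyfit(kappa, s, 1)
print(round(slope, 6), round(intercept, 6))  # -2.0 1.0
```

On real data the fit window matters: the text's finding that linearity holds only for |kappa| <= 0.1 means points outside that range would bias the extracted slope.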