970 results for Offset printing
Abstract:
Preferential cleavage of active genes by DNase I has been correlated with a structurally altered conformation of DNA at the hypersensitive site in chromatin. To better understand the structural requirements for gene activation as probed by DNase I action, the digestibility by DNase I of synthetic polynucleotides able to adopt B and non-B conformations (such as the Z-form) was studied; the B-form of DNA showed markedly higher digestibility. A left-handed Z-form present within a natural sequence in a supercoiled plasmid also showed marked resistance to DNase I digestion. We show that alternating purine-pyrimidine sequences adopting the Z-conformation exhibit DNase I footprinting even in a protein-free system. The logical deductions from the results indicate that 1) an altered structure such as Z-DNA is not a favourable substrate for DNase I, 2) both ends of the alternating purine-pyrimidine insert show hypersensitivity, 3) the B-form, with a minor groove of 12-13 Å, is a more favourable substrate for DNase I than an altered structure, and 4) any DNA structure deviating largely from the B-form but capable of flipping over to the B-form is a potential target for DNase I enzymic probes in naked DNA.
Abstract:
The Government of India has announced the Greening India Mission (GIM) under the National Climate Change Action Plan. The Mission aims to restore and afforest about 10 mha over the period 2010-2020 under different sub-missions covering moderately dense and open forests, scrub/grasslands, mangroves, wetlands, croplands and urban areas. Even though the main focus of the Mission is to address mitigation and adaptation in the context of climate change, the adaptation component is inadequately addressed, and there is a need for increased scientific input in the preparation of the Mission. The mitigation potential is estimated by simply multiplying global default biomass growth rate values by area; this is incomplete, as it does not account for all the carbon pools, phasing, differing growth rates, etc. The mitigation potential estimated for the GIM for the year 2020 using the Comprehensive Mitigation Analysis Process model could offset 6.4% of the projected national greenhouse gas emissions, compared with the GIM estimate of only 1.5%, excluding any emissions due to harvesting or disturbances. The selection of potential locations for different interventions and the choice of species under the GIM must be based on modelling, remote sensing and field studies. The forest sector provides an opportunity to promote mitigation and adaptation synergy, which is not adequately addressed in the GIM. Since many of the proposed interventions are innovative and scientific knowledge about them is limited, there is a need for an unprecedented level of collaboration between research institutions and implementing agencies such as the Forest Departments, which is currently non-existent. The GIM could propel systematic research into forestry and climate change issues and thereby provide global leadership in this new and emerging science.
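The "simple multiplication" estimate criticised above can be made concrete with a minimal sketch; every number below, including the 0.47 carbon fraction, is an illustrative placeholder, not a value from the GIM documents or the CMAP model.

```python
# Minimal sketch of an area-based mitigation estimate of the kind criticised
# in the abstract. All figures are hypothetical placeholders.

def simple_offset_estimate(area_mha, growth_t_per_ha_yr, years, carbon_fraction=0.47):
    """Biomass growth rate x area x time, converted to tonnes of CO2-equivalent."""
    biomass_t = area_mha * 1e6 * growth_t_per_ha_yr * years   # dry biomass, tonnes
    carbon_t = biomass_t * carbon_fraction                    # carbon, tonnes
    return carbon_t * 44.0 / 12.0                             # tCO2e

# Hypothetical example: 10 Mha afforested, 3 t/ha/yr default growth, 10 years.
co2e = simple_offset_estimate(10, 3.0, 10)
print(f"Simple area-based estimate: {co2e / 1e6:.0f} MtCO2e")
```

A pool-resolved estimate would additionally track soil carbon, dead wood and litter, stagger the planting by year, and use species- and region-specific growth rates, which is precisely the gap the abstract points out.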
Abstract:
Nowadays any analysis of the Russian economy is incomplete without taking into account the phenomenon of oligarchy. Russian oligarchs appeared after the fall of the Soviet Union and are wealthy businessmen who control a large part of the natural resources enterprises and wield considerable political influence. The oligarchs' shares in some natural resources industries reach 70-80%. Their role in the Russian economy is undoubtedly large, yet very little economic analysis of it has been done. The aim of this work is to examine Russian oligarchy at the micro and macro levels, its role in Russia's transition, and the possible positive and negative outcomes of this phenomenon. For this purpose the work presents two theoretical models. The first part of this thesis examines the role of oligarchs at the micro level, concentrating on the question of whether oligarchs can be more productive owners than other types of owners. To answer this question, it presents a model based on the article "Are oligarchs productive? Theory and evidence" by Y. Gorodnichenko and Y. Grygorenko, followed by an empirical test based on the works of S. Guriev and A. Rachinsky. The model predicts that oligarchs invest more in the productivity of their enterprises and earn higher returns on capital, and are therefore more productive owners. According to the empirical test, oligarchs were found to outperform other types of owners; however, it remains undetermined whether the productivity gains offset losses in tax revenue. The second part of the work concentrates on the role of oligarchy at the macro level. More precisely, it examines the assumption that the depression after the 1998 crisis in Russia was caused by the oligarchs' behavior. This part presents a theoretical model based on the article "A macroeconomic model of Russian transition: The role of oligarchic property rights" by S. Braguinsky and R. Myerson, in which a special type of property rights is introduced. After the 1998 crisis the oligarchs started to invest all their resources abroad to protect themselves from political risks, which resulted in a long depression phase. The macroeconomic model shows that better protection of property rights (smaller political risk) and/or higher outside investment could reduce the depression. Taking this result into account, government policy can change the oligarchs' behavior to be more beneficial for the Russian economy and make the transition faster.
Abstract:
Purpose - This paper aims to validate a comprehensive aeroelastic analysis for a helicopter rotor against the higher harmonic control aeroacoustic rotor test (HART-II) wind tunnel test data. Design/methodology/approach - An aeroelastic analysis of a helicopter rotor with elastic blades, based on the finite element method in space and time and capable of considering higher harmonic control inputs, is carried out. Moderate-deflection and Coriolis nonlinearities are included in the analysis. The rotor aerodynamics are represented using free-wake and unsteady aerodynamic models. Findings - Good correlation between the analysis and HART-II wind tunnel test data is obtained for blade natural frequencies across a range of rotating speeds. The basic physics of the blade mode shapes is also well captured; in particular, the fundamental flap, lag and torsion modes compare very well. The blade response compares well with the HART-II results and with other high-fidelity aeroelastic code predictions for the flap and torsion modes. For the lead-lag response, the present analysis prediction is somewhat better than those of other aeroelastic analyses. Research limitations/implications - The predicted blade response trend with higher harmonic pitch control agreed well with the wind tunnel test data, but usually contained a constant offset in the mean values of the lead-lag and elastic torsion response. Improvements in the modeling of the aerodynamic environment around the rotor can help reduce this gap between the experimental and numerical results. Practical implications - Correlation of predicted aeroelastic response with wind tunnel test data is a vital step towards validating any helicopter aeroelastic analysis. Such efforts lend confidence in using the numerical analysis to understand the actual physical behavior of the helicopter system. Also, validated numerical analyses can take the place of time-consuming and expensive wind tunnel tests during the initial stage of the design process. Originality/value - While the basic physics appears to be well captured by the aeroelastic analysis, there is a need for improvement in the aerodynamic modeling, which appears to be the source of the gap between the numerical predictions and the HART-II wind tunnel experiments.
Abstract:
This study is a pragmatic description of the evolution of the genre of English witchcraft pamphlets from the mid-sixteenth century to the end of the seventeenth century. Witchcraft pamphlets were produced for a new kind of readership, the semi-literate, uneducated masses, and the central hypothesis of this study is that publishing for the masses entailed rethinking the ways of writing and printing texts. Analysis of the use of typographical variation and illustrations indicates how printers and publishers catered to the tastes and expectations of this new audience. Analysis of the language of witchcraft pamphlets shows how pamphlet writers took the new readership into account by transforming formal written source materials, trial proceedings, into more immediate ways of writing. The material for this study comes from the Corpus of Early Modern English Witchcraft Pamphlets, compiled by the author. The multidisciplinary analysis incorporates both visual and linguistic aspects of the texts, with methodologies and theoretical insights adopted eclectically from historical pragmatics, genre studies, book history, corpus linguistics, systemic functional linguistics and cognitive psychology. The findings are anchored in the socio-historical context of early modern publishing, reading, literacy and witchcraft beliefs. The study shows not only how consideration of a new audience by both authors and printers influenced the development of a genre, but also the value of combining visual and linguistic features in pragmatic analyses of texts.
Abstract:
Thermonuclear fusion is a sustainable energy solution in which energy is produced using processes similar to those in the sun. In this technology hydrogen isotopes are fused to gain energy and consequently to produce electricity. In a fusion reactor the hydrogen isotopes are confined by magnetic fields as ionized gas, the plasma. Since the core plasma is millions of degrees hot, there are special requirements for the plasma-facing materials. Moreover, the fusion of hydrogen isotopes in the plasma produces highly energetic neutrons, which places demanding requirements on the structural materials of the reactor. This thesis investigates the irradiation response of materials to be used in future fusion reactors. Interaction of the plasma with the reactor wall leads to the removal of surface atoms, their migration, and the formation of co-deposited layers such as tungsten carbide. Sputtering of tungsten carbide and deuterium trapping in tungsten carbide were investigated in this thesis. As the second topic, the primary interaction of the neutrons with the structural material steel was examined; iron chromium and iron nickel were used as model materials for steel. The study was performed theoretically by means of atomic-level computer simulations. In contrast to previous studies in the field, in which simulations were limited to pure elements, more complex materials were used in this work, i.e. multi-elemental materials containing two or more atom species. The results of this thesis are on the microscale. One result is a catalogue of the atom species removed from tungsten carbide by the plasma; another is, for example, the atomic distribution of defects in iron chromium caused by the energetic neutrons. These microscopic results feed databases for multiscale modelling of fusion reactor materials, which aims to explain the macroscopic degradation of the materials. This thesis is therefore a relevant contribution to establishing the connection between microscopic and macroscopic radiation effects, which is one objective of fusion reactor materials research.
Abstract:
Atomic layer deposition (ALD) is a method for depositing thin films from gaseous precursors onto a substrate layer by layer, so that the film thickness can be tailored with atomic-layer accuracy. Film tailoring is emphasized even further with selective-area ALD, which enables the film growth to be controlled also across the substrate surface. Selective-area ALD allows the number of process steps in preparing thin-film devices to be reduced, which can be of great technological importance as ALD films come into wider use in different applications. Selective-area ALD can be achieved by passivation or activation of the surface. In this work ALD growth was prevented by octadecyltrimethoxysilane, octadecyltrichlorosilane and 1-dodecanethiol SAMs, and by PMMA (polymethyl methacrylate) and PVP (poly(vinyl pyrrolidone)) polymer films. SAMs were prepared from the vapor phase and by microcontact printing, and polymer films were spin coated. Microcontact printing created patterned SAMs directly, whereas the SAMs prepared from the vapor phase and the polymer mask layers were patterned by UV lithography or a lift-off process, in which selected areas of a continuous mask layer were removed after its preparation. On these areas the ALD film was deposited selectively. SAMs and polymer films prevented growth in several ALD processes, such as iridium, ruthenium, platinum, TiO2 and polyimide, so that the ALD films grew only on areas without a SAM or polymer mask layer. PMMA and PVP films also protected the surface against Al2O3 and ZrO2 growth. Activation of the surface for ALD of ruthenium was achieved by preparing a RuOx layer by microcontact printing. At low temperatures the RuCp2-O2 process nucleated only on this oxidative activation layer and not on bare silicon.
Abstract:
This study investigates the process of producing interactivity in a converged media environment and asks whether more media convergence equals more interactivity. The research object is approached through semi-structured interviews with prominent decision makers within the Finnish media. The main focus of the study is the three big ones of the traditional media, radio, television and the printing press, and their ability to adapt to the changing environment. The study develops theoretical models for the analysis of interactive features and convergence. Case studies are formed from the interview data and evaluated against the models; as a result, the cases are plotted and compared on a four-fold table. The cases are Radio Rock, NRJ, Big Brother, Television Chat, Olivia and Sanoma News. It is found that the theoretical models can accurately forecast the results of the case studies. The models are also able to distinguish different aspects of both interactivity and convergence, so that a case which at first glance does not seem very interactive in the end receives the second-highest scores in the analysis. The highest scores are received by Big Brother and Sanoma News. Through the theory and the analysis of the research data it is found that the concepts of interactivity and convergence are intimately intertwined and in many cases very hard to separate from each other. Hence the answer to the main question of this study is yes: convergence does promote interactivity and audience participation. The main theoretical background for the analysis of interactivity follows the work of Carrie Heeter, Spiro Kiousis and Sally McMillan. Heeter's six-dimensional definition of interactivity is used as the basis for operationalizing interactivity. Actor-network theory is used as the main theoretical framework to analyze convergence. The definition and operationalization of actor-network theory into a model of convergence follows the work of Michel Callon, Bruno Latour and especially John Law and Felix Stalder.
Abstract:
The ~2500 km long Himalayan arc has experienced three large to great earthquakes of Mw 7.8 to 8.4 during the past century, but none produced surface rupture. Paleoseismic studies have been conducted during the last decade to begin understanding the timing, size, rupture extent, return period and mechanics of the faulting associated with large surface-rupturing earthquakes along the Himalayan Frontal Thrust (HFT) system of India and Nepal. Previous studies have been limited to about nine sites along the western two-thirds of the HFT, extending through northwest India and along the southern border of Nepal. We present here the results of paleoseismic investigations at three additional sites further to the northeast along the HFT, within the Indian states of West Bengal and Assam. The three sites lie between the meizoseismal areas of the 1934 Bihar-Nepal and 1950 Assam earthquakes. The two westernmost sites, near the village of Chalsa and near the Nameri Tiger Preserve, show that offsets during the last surface-rupture event were at minimum about 14 m and 12 m, respectively. Limits on the ages of surface rupture at Chalsa (site A) and Nameri (site B), though broad, allow the possibility that the two sites record the same great historical rupture reported in Nepal around A.D. 1100. The correlation between the two sites is supported by the observation that the large displacements recorded at Chalsa and Nameri would most likely be associated with rupture lengths of hundreds of kilometers or more, on the same order as reported for the surface-rupture earthquake in Nepal around A.D. 1100. Assuming the offsets observed at Chalsa and Nameri occurred synchronously with the reported offsets in Nepal, the rupture length of the event would approach 700 to 800 km. The easternmost site is located within the Harmutty Tea Estate (site C), at the edge of the 1950 Assam earthquake meizoseismal area. Here the most recent event offset is much smaller (<2.5 m), and radiocarbon dating shows it to have occurred after A.D. 1100 (after about A.D. 1270). The location of the site near the edge of the meizoseismal region of the 1950 Assam earthquake and the relatively small offset allow speculation that the displacement records the 1950 Mw 8.4 Assam earthquake. Scatter in radiocarbon ages on detrital charcoal has not provided a firm bracket on the timing of the events observed in the trenches. Nonetheless, the observations collected here, taken together, suggest that the largest thrust earthquakes along the Himalayan arc have rupture lengths and displacements of similar scale to the largest that have occurred historically along the world's subduction zones.
Abstract:
This paper considers the problem of spectrum sensing in cognitive radio networks when the primary user employs Orthogonal Frequency Division Multiplexing (OFDM). We specifically consider the scenario in which the channel between the primary and a secondary user is frequency selective. We develop cooperative sequential detection algorithms based on energy detectors and modify the detectors to mitigate the effects of some common model uncertainties, such as timing and frequency offset, IQ imbalance, and uncertainty in noise and transmit power. The performance of the proposed algorithms is studied via simulations. We show that the performance of the energy detector is not affected by the frequency-selective channel. We also provide a theoretical analysis for some of our algorithms.
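To make the detector structure concrete, below is a minimal sketch of a cooperative sequential energy detector: each secondary user contributes the energy of a batch of received samples, and a fusion centre accumulates a centred statistic until it crosses one of two thresholds. The SNR, thresholds, batch size and Gaussian signal model are illustrative assumptions; the sketch ignores the OFDM structure, frequency selectivity and the model-uncertainty corrections treated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def cooperative_sequential_energy_detector(snr_db, n_users=4, batch=64,
                                            max_batches=50, noise_var=1.0,
                                            eta_low=-50.0, eta_high=50.0):
    """Toy cooperative sequential energy detector.

    Each secondary user reports the energy of one batch of received samples;
    the fusion centre accumulates a centred statistic and stops as soon as it
    crosses either threshold. Thresholds and batch size are illustrative.
    """
    gamma = 10 ** (snr_db / 10.0)                   # received SNR (linear)
    primary_on = rng.random() < 0.5                 # unknown true state
    stat = 0.0
    for k in range(1, max_batches + 1):
        for _ in range(n_users):
            var = noise_var * (1.0 + gamma) if primary_on else noise_var
            x = rng.normal(scale=np.sqrt(var), size=batch)   # received samples
            energy = np.sum(x ** 2) / noise_var              # normalised energy
            # centre at the midpoint of the H0/H1 means so the statistic drifts
            # negative under H0 and positive under H1
            stat += energy - batch * (1.0 + gamma / 2.0)
        if stat >= eta_high:
            return "decide H1 (primary present)", primary_on, k
        if stat <= eta_low:
            return "decide H0 (primary absent)", primary_on, k
    return "undecided", primary_on, max_batches

print(cooperative_sequential_energy_detector(snr_db=-5.0))
```

The sequential structure is what saves sensing time: on average the statistic crosses a threshold after only a few batches, instead of a fixed-length observation.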
Abstract:
The eigenvalue and eigenstructure assignment procedure has found application in a wide variety of control problems. In this paper a method for assigning eigenstructure to a linear time-invariant multi-input system is proposed. The algorithm determines a matrix that has eigenvalues and eigenvectors at the desired locations; it is obtained from knowledge of the open-loop system and the desired eigenstructure. Solution of the matrix equation, involving the unknown controller gains, the open-loop system matrices, and the desired eigenvalues and eigenvectors, results in the state feedback controller. The proposed algorithm requires the closed-loop eigenvalues to be different from those of the open-loop case; this apparent constraint can easily be overcome by a negligible shift in the values. Application of the procedure is illustrated through the offset control of a satellite supported from an orbiting platform by a flexible tether.
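For context, the following is a minimal sketch of state-feedback eigenvalue placement for a multi-input LTI system using SciPy's generic pole-placement routine; the system matrices are arbitrary toy values, and the routine stands in for the general idea only, not for the paper's algorithm or its tethered-satellite model.

```python
import numpy as np
from scipy.signal import place_poles

# Toy two-input, fourth-order LTI system x' = A x + B u (values are arbitrary).
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0, -4.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])

# Desired closed-loop eigenvalues (chosen distinct from the open-loop ones,
# mirroring the constraint noted in the abstract).
desired = np.array([-2.0, -3.0, -1.0 + 1.0j, -1.0 - 1.0j])

res = place_poles(A, B, desired)        # state feedback u = -K x
K = res.gain_matrix

closed_loop = A - B @ K
eigvals, eigvecs = np.linalg.eig(closed_loop)
print("closed-loop eigenvalues:", np.sort_complex(eigvals))
print("closed-loop eigenvectors (columns):\n", eigvecs)
```

With more inputs than one, the eigenvectors are not uniquely determined by the eigenvalues, which is the freedom an eigenstructure assignment method exploits.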
Abstract:
Unsteady propagation of spherical flames, both inward and outward, is studied numerically and extensively for a single-step reaction and for different Lewis numbers of fuel/oxidizer. The dependence of the flame speed ratio (s) and the flame temperature ratio on stretch (kappa) is obtained for a range of Lewis numbers. The results for s versus kappa show that the asymptotic theory of Frankel and Sivashinsky is reasonable for outward propagation, while other theories are unsatisfactory both quantitatively and qualitatively. The stretch effects are much stronger for negative stretch than for positive stretch, as also seen in the theory of Frankel and Sivashinsky. The linearity of the flame speed ratio versus stretch relationship is restricted to nondimensional stretch of +/-0.1. It is shown further that the results from cylindrical flames are identical to those of the spherical flame on the flame speed ratio versus nondimensional stretch plot, confirming the generality of the concept of stretch. A comparison of the variation of (ds/dkappa) at kappa=0 with beta(Le - 1) shows an offset between the computed results and the asymptotic results of Matalon and Matkowsky; the departure of the negative-stretch results from this variation is significant. Several earlier experimental results are analysed and set out in the form of s versus kappa plots. Comparison of the computed results with experiments seems reasonable for negative stretch; for positive stretch the results are qualitatively satisfactory in a few cases. For rich propane-air, there are qualitative differences pointing to the need for full-chemistry calculations in the extraction of stretch effects.
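Written out, the weak-stretch linear regime referred to above is simply a first-order expansion, restated here in generic notation rather than as an equation taken from the paper:

```latex
% Linear flame-speed/stretch relation valid only for weak stretch (|\kappa| \lesssim 0.1)
\[
  s(\kappa) \;\approx\; s(0) \;+\;
  \left.\frac{\mathrm{d}s}{\mathrm{d}\kappa}\right|_{\kappa=0}\,\kappa ,
  \qquad |\kappa| \lesssim 0.1 .
\]
```

The computed slope at zero stretch is the quantity the abstract compares against the asymptotic dependence on beta(Le - 1) from Matalon and Matkowsky.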
Abstract:
A specific protein exhibiting immunological cross-reactivity with chicken riboflavin carrier protein has been purified to homogeneity from human amniotic fluid by ion-exchange and affinity chromatography. The protein is similar to its avian counterpart in terms of molecular size, distribution of 125I-labelled tryptic peptides during fingerprinting, and preferential binding to riboflavin. Immunologically, the two proteins are homologous, since most of the monoclonal antibodies raised against the avian protein cross-react with the purified human vitamin carrier.
Abstract:
Prohibitive test time, nonuniformity of excitation, and signal nonlinearity are major concerns associated with employing dc, sine, and triangular/ramp signals, respectively, when determining the static nonlinearity of analog-to-digital converters (ADCs) with high resolution (i.e., ten or more bits). Attempts to overcome these issues have been examined with some degree of success. This paper describes a novel method of estimating the "true" static nonlinearity of an ADC using a low-frequency sine signal (for example, less than 10 Hz) by employing the histogram-based approach. It is based on the well-known fact that the variation of a sine signal is reasonably linear when the angle is small, for example in the range of +/-5 degrees to +/-7 degrees. In the proposed method, the ADC under test is fed with this nearly linear portion of the sine wave. The presence of any harmonics and offset in the input excitation makes this linear part of the sine signal marginally different from an ideal ramp signal of equal amplitude. However, since the excitation is a sinusoid, this difference can be accurately determined and later compensated for in the measured ADC output. The corrected ADC output then corresponds to the true ADC static nonlinearity. The implementation of the proposed method is discussed along with experimental results for two 8-bit ADCs and one 10-bit ADC, which are then compared with the static characteristics estimated by the conventional dc method.
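The fact the method rests on, that a sine is nearly linear over a small angular window and that its residual deviation from a ramp is deterministic, can be checked with a short sketch. The +/-5 degree window, unit amplitude and 8-bit resolution below are illustrative choices, and the sketch quantifies only the inherent sine-versus-ramp deviation, not harmonics or offset in the excitation.

```python
import numpy as np

# Over a small phase window a sine is nearly linear; its deviation from an
# ideal ramp of equal amplitude is deterministic and can therefore be
# computed and subtracted from the measured ADC characteristic.
bits = 8                                   # illustrative resolution
theta_max = np.radians(5.0)                # small-angle window, +/-5 degrees
theta = np.linspace(-theta_max, theta_max, 4096)

sine_seg = np.sin(theta) / np.sin(theta_max)   # sine segment scaled to +/-1
ramp = theta / theta_max                       # ideal ramp of equal amplitude

deviation = sine_seg - ramp                    # deterministic, correctable
lsb = 2.0 / (2 ** bits)                        # full scale of +/-1 -> LSB size
print(f"max |sine - ramp| = {np.max(np.abs(deviation)):.2e} "
      f"({np.max(np.abs(deviation)) / lsb:.3f} LSB at {bits} bits)")
```

At higher resolutions the same absolute deviation spans a larger fraction of an LSB, which is why computing and compensating it matters for 10-bit and finer converters.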