77 results for HISTORY - SOURCES
Abstract:
We present multifrequency Very Large Array (VLA) observations of two giant quasars, 0437-244 and 1025-229, from the Molonglo Complete Sample. These sources have well-defined FR II radio structure, possible one-sided jets, no significant depolarization between 1365 and 4935 MHz, and low rotation measure (|RM| < 20 rad m^-2). Giant sources are defined as those with an overall projected size of at least 1 Mpc. We have compiled a sample of about 50 known giant radio sources from the literature and compared some of their properties with a complete sample of 3CR radio sources of smaller sizes, to investigate the evolution of giant sources and to test their consistency with the unified scheme for radio galaxies and quasars. We find an inverse correlation between the degree of core prominence and total radio luminosity, and show that the giant radio sources have core strengths similar to those of smaller sources of similar total luminosity. Hence their large sizes are unlikely to be caused by stronger nuclear activity. The degree of collinearity of the giant sources is also similar to that of the sample of smaller sources. The luminosity-size diagram shows that the giant sources are less luminous than our sample of smaller 3CR sources, consistent with evolutionary scenarios in which the giants have evolved from the smaller sources, losing energy as they expand to these large dimensions. For the smaller sources, radiative losses from synchrotron radiation are more significant, while for the giant sources the equipartition magnetic fields are smaller and inverse Compton losses against the microwave background radiation are the dominant process. The radio properties of the giant radio galaxies and quasars are consistent with the unified scheme.
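The synchrotron-versus-inverse-Compton argument can be illustrated with a short calculation: the microwave background acts like an equivalent magnetic field of roughly 3.25(1+z)^2 microgauss, and whichever of this field and the source's equipartition field is larger sets the dominant loss channel. The Python sketch below performs that comparison; the example equipartition fields and redshift are illustrative assumptions, not values taken from the paper.

    import math

    # Physical constants (SI)
    A_RAD = 7.5657e-16       # radiation constant, J m^-3 K^-4
    MU0   = 4 * math.pi * 1e-7
    T_CMB0 = 2.725           # CMB temperature at z = 0, K

    def cmb_equivalent_field_uG(z):
        """Magnetic field whose energy density equals that of the CMB at redshift z."""
        u_cmb = A_RAD * (T_CMB0 * (1 + z)) ** 4      # J m^-3
        b_tesla = math.sqrt(2 * MU0 * u_cmb)         # from u_B = B^2 / (2 mu0)
        return b_tesla * 1e10                        # 1 T = 1e10 microgauss

    def dominant_loss(b_eq_uG, z):
        b_cmb = cmb_equivalent_field_uG(z)
        ratio = (b_eq_uG / b_cmb) ** 2               # synchrotron / inverse-Compton loss-rate ratio
        return b_cmb, ratio

    # Illustrative (assumed) equipartition fields: ~1 uG for a giant lobe, ~10 uG for a compact 3CR lobe
    for label, b_eq in [("giant lobe", 1.0), ("smaller 3CR lobe", 10.0)]:
        b_cmb, ratio = dominant_loss(b_eq, z=0.2)
        regime = "synchrotron" if ratio > 1 else "inverse Compton"
        print(f"{label}: B_eq = {b_eq} uG, B_CMB = {b_cmb:.2f} uG -> {regime} losses dominate (ratio {ratio:.2f})")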
Abstract:
We consider the problem of compression via homomorphic encoding of a source having a group alphabet. This is motivated by the problem of distributed function computation, where it is known that if one is only interested in computing a function of several sources, then one can at times improve upon the compression rate required by the Slepian-Wolf bound. The functions of interest are those that can be represented by the binary operation of the group. We first consider the case when the source alphabet is the cyclic Abelian group Z_{p^r}. In this scenario, we show that the set of achievable rates provided by Krithivasan and Pradhan [1] is indeed the best possible. In addition, we provide a simpler proof of their achievability result. In the case of a general Abelian group, we present an achievable rate region that improves upon the one obtained by Krithivasan and Pradhan. We then consider the case when the source alphabet is a non-Abelian group. We show that if all the source symbols have non-zero probability and the center of the group is trivial, then it is impossible to compress such a source if one employs a homomorphic encoder. Finally, we present certain non-homomorphic encoders that are also suitable in the context of function computation over non-Abelian group sources, and provide the rate regions achieved by these encoders.
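Why a homomorphic encoder is useful for function computation can be seen in a small numerical sketch: if the encoder is a linear map over Z_{p^r}, the encodings of two source blocks add to the encoding of their group sum, so a decoder interested only in the sum never needs the individual blocks. The block length and encoder dimensions below are arbitrary illustrative choices.

    import numpy as np

    P_R = 4          # the cyclic group Z_{p^r} with p = 2, r = 2
    n, m = 8, 3      # source block length and compressed length (illustrative)

    rng = np.random.default_rng(0)
    A = rng.integers(0, P_R, size=(m, n))        # encoder matrix over Z_4

    def encode(x):
        """Homomorphic encoder: a Z_4-linear map x -> A x (mod 4)."""
        return (A @ x) % P_R

    x = rng.integers(0, P_R, size=n)
    y = rng.integers(0, P_R, size=n)

    # Homomorphism: enc(x + y) = enc(x) + enc(y) in Z_4, so the decoder can work
    # with the compressed sum directly when the desired function is the group operation.
    lhs = encode((x + y) % P_R)
    rhs = (encode(x) + encode(y)) % P_R
    assert np.array_equal(lhs, rhs)
    print("enc(x [+] y) =", lhs, "= enc(x) [+] enc(y)")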
Abstract:
Road transportation, an essential requirement of modern society, is presently constrained by emission legislation as well as by the availability, and consequently the cost, of petroleum fuels. For nearly 270 years we have been burning our fossil cache, and we have come to within a generation of exhausting its liquid part. Moreover, to reduce greenhouse gases and to comply with the environmental laws of most countries, it will be necessary to replace a significant number of petroleum-fueled internal-combustion-engine vehicles (ICEVs) with electric cars in the near future. In this article, we briefly describe the merits and demerits of various proposed electrochemical systems for electric cars, namely storage batteries, fuel cells, and electrochemical supercapacitors, and determine the power and energy requirements of a modern car. We conclude that a viable electric car could be operated with a 50 kW polymer-electrolyte fuel cell stack to provide power for cruising and climbing, coupled in parallel with a 30 kW supercapacitor and/or battery bank to deliver additional short-term burst power during acceleration.
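A rough back-of-the-envelope estimate of the power requirement, with every vehicle parameter assumed for illustration rather than taken from the article, shows why a stack of roughly this rating covers steady cruising while short bursts of extra power are needed for acceleration.

    # Rough cruising-power estimate for a mid-size car; all figures below are illustrative assumptions.
    RHO_AIR = 1.2      # air density, kg/m^3
    G = 9.81           # m/s^2

    m_car    = 1500.0  # kg
    cd, area = 0.32, 2.2   # drag coefficient, frontal area (m^2)
    c_rr     = 0.010   # rolling-resistance coefficient
    eta      = 0.85    # drivetrain efficiency
    v        = 30.0    # cruising speed, m/s (108 km/h)

    p_drag    = 0.5 * RHO_AIR * cd * area * v**3       # aerodynamic drag power, W
    p_rolling = c_rr * m_car * G * v                   # rolling-resistance power, W
    p_cruise  = (p_drag + p_rolling) / eta

    # Extra power for a modest 1.5 m/s^2 acceleration at the same speed
    p_accel = m_car * 1.5 * v / eta

    print(f"cruise ~ {p_cruise/1e3:.1f} kW, acceleration burst adds ~ {p_accel/1e3:.1f} kW")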
Abstract:
The specified range of free chlorine residual (between a minimum and a maximum) in water distribution systems needs to be maintained to avoid deterioration of the microbial quality of water, control taste and/or odor problems, and hinder the formation of carcinogenic disinfection by-products. Multiple water quality sources providing chlorine input are needed to maintain the chlorine residuals within the specified range throughout the distribution system. Determining the source dosage (i.e., chlorine concentrations/chlorine mass rates) at the water quality sources that satisfies this objective under dynamic conditions is a complex process. A nonlinear optimization problem is formulated to determine the chlorine dosage at the water quality sources subject to minimum and maximum constraints on chlorine concentrations at all monitoring nodes. A genetic algorithm (GA) approach in which the decision variables (chlorine dosages) are coded as binary strings is used to solve this highly nonlinear optimization problem, with nonlinearities arising from set-point sources and non-first-order reactions. Application of the model is illustrated using three sample water distribution systems and indicates that the GA is a useful tool for evaluating optimal water quality source chlorine schedules.
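A minimal sketch of the binary-coded GA idea follows; the linear response matrix is a deliberately simplified stand-in for the dynamic hydraulic/water-quality simulation a real application would use, and all numbers are illustrative assumptions, not data from the paper.

    import random

    random.seed(1)

    N_SOURCES    = 3            # chlorine source/booster locations (assumed)
    BITS         = 8            # bits per decision variable
    DOSE_MAX     = 4.0          # mg/L upper bound on source dosage (assumed)
    C_MIN, C_MAX = 0.2, 1.0     # required residual range at monitoring nodes (assumed)

    # Placeholder response model: residual at node j = sum_i RESPONSE[j][i] * dose_i.
    # In practice these responses come from a network water-quality simulation.
    RESPONSE = [[0.30, 0.05, 0.02],
                [0.10, 0.25, 0.05],
                [0.02, 0.10, 0.28],
                [0.05, 0.05, 0.15]]

    def decode(chromosome):
        """Map a binary string to N_SOURCES dosages in [0, DOSE_MAX]."""
        doses = []
        for i in range(N_SOURCES):
            bits = chromosome[i * BITS:(i + 1) * BITS]
            doses.append(DOSE_MAX * int("".join(map(str, bits)), 2) / (2 ** BITS - 1))
        return doses

    def fitness(chromosome):
        doses = decode(chromosome)
        residuals = [sum(r * d for r, d in zip(row, doses)) for row in RESPONSE]
        penalty = sum(max(0.0, C_MIN - c) + max(0.0, c - C_MAX) for c in residuals)
        return sum(doses) + 100.0 * penalty      # minimize total dose plus constraint violations

    def ga(pop_size=40, generations=200, p_mut=0.02):
        length = N_SOURCES * BITS
        pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            parents = pop[:pop_size // 2]                             # elitist selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, length)                     # single-point crossover
                child = a[:cut] + b[cut:]
                child = [bit ^ (random.random() < p_mut) for bit in child]   # bit-flip mutation
                children.append(child)
            pop = parents + children
        return decode(min(pop, key=fitness))

    print("near-optimal dosages (mg/L):", [round(d, 2) for d in ga()])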
Abstract:
We consider the problem of compression of a non-Abelian source. This is motivated by the problem of distributed function computation, where it is known that if one is only interested in computing a function of several sources, then one can often improve upon the compression rate required by the Slepian-Wolf bound. Let G be a non-Abelian group with center Z(G). We show that it is impossible to compress a source with symbols drawn from G when Z(G) is trivial if one employs a homomorphic encoder and a typical-set decoder. We provide achievable upper bounds on the minimum rate required to compress a source over a non-Abelian group with non-trivial center. Also, in a two-source setting, we provide achievable upper bounds for the compression of any non-Abelian group source using a non-homomorphic encoder.
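The triviality condition in this result is easy to check computationally for small groups. The sketch below computes the center of the symmetric group S3 (elements represented as permutation tuples) and confirms it is trivial, which by the statement above rules out homomorphic compression of a full-support source over S3.

    from itertools import permutations

    def compose(p, q):
        """Composition of permutations given as tuples: (p o q)(i) = p[q[i]]."""
        return tuple(p[q[i]] for i in range(len(q)))

    S3 = list(permutations(range(3)))
    center = [g for g in S3 if all(compose(g, h) == compose(h, g) for h in S3)]

    print("Z(S3) =", center)                       # only the identity (0, 1, 2)
    print("trivial center:", center == [tuple(range(3))])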
Abstract:
During a lightning strike to a tall grounded object (TGO), reflections of current waves are known to occur at both ends of the TGO. These reflections modify the channel current and hence the lightning electromagnetic fields. This study aims to identify the possible factors contributing to reflection at the TGO-channel junction for the current waves ascending the TGO. The possible sources of reflection identified are the corona sheath and the discontinuities in resistance and radius. For analyzing the contribution of the corona sheath and the resistance discontinuity at the junction, a macroscopic physical model for the return stroke developed in our earlier work is employed. NEC-2D is used to assess the contribution of the abrupt change in radii at the TGO-channel junction. The wire-cage model adopted for this purpose is validated using laboratory experiments. The detailed investigation revealed the following: the main contributor to reflection at the TGO-channel junction is the difference between the TGO and channel core radii; the discontinuity of resistance at the junction is of some relevance only in the first-microsecond regime; and the corona sheath does not play any significant role in the reflection.
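Why a radius difference alone can drive a reflection can be sketched with a crude transmission-line estimate in which the surge impedance of a thin vertical conductor grows logarithmically with its height-to-radius ratio. The specific expression Z ~ 60 ln(2h/r) and all dimensions below are assumptions chosen only for illustration; they are not the NEC-2D model used in the paper.

    import math

    def surge_impedance(height_m, radius_m):
        """Rough logarithmic surge-impedance estimate for a thin vertical conductor (illustrative form only)."""
        return 60.0 * math.log(2.0 * height_m / radius_m)

    h = 300.0                                   # assumed TGO height, m
    z_tgo     = surge_impedance(h, 2.0)         # tower-like structure, radius ~ 2 m (assumed)
    z_channel = surge_impedance(h, 0.003)       # return-stroke core, radius ~ 3 mm (assumed)

    # Mismatch reflection coefficient (voltage convention; the current wave reflects with the opposite sign)
    gamma = (z_channel - z_tgo) / (z_channel + z_tgo)
    print(f"Z_TGO ~ {z_tgo:.0f} ohm, Z_channel ~ {z_channel:.0f} ohm, reflection coefficient ~ {gamma:.2f}")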
Abstract:
In this paper we address the problem of transmission of correlated sources over a fading multiple access channel (MAC). We provide sufficient conditions for transmission with given distortions. Next, these conditions are specialized to a Gaussian MAC (GMAC). Transmission schemes for discrete and Gaussian sources over a fading GMAC are considered, and various power allocation strategies are compared. Keywords: Fading MAC, Power allocation, Random TDMA, Amplify and Forward, Correlated sources.
Abstract:
We consider the problem of distributed joint source-channel coding of correlated Gaussian sources over a Gaussian multiple access channel (MAC). There may be side information at the encoders and/or at the decoder. First, we specialize a general result in [16] to obtain sufficient conditions for reliable transmission over a Gaussian MAC. Source-channel separation does not hold for this system. We then study and compare three joint source-channel coding schemes available in the literature.
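One of the simplest joint schemes of this general kind, uncoded amplify-and-forward with a linear MMSE estimator at the receiver, is easy to simulate; it is offered here only as an illustration of joint source-channel transmission over a GMAC, not as a reproduction of the schemes compared in the paper, and the correlation, power, and noise values are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, rho, P, N0 = 200_000, 0.7, 1.0, 0.5     # samples, source correlation, per-user power, noise variance (assumed)

    # Correlated, zero-mean, unit-variance Gaussian sources
    cov = np.array([[1.0, rho], [rho, 1.0]])
    U = rng.multivariate_normal([0.0, 0.0], cov, size=n)     # shape (n, 2)

    # Amplify-and-forward: each encoder simply scales its source to meet its power constraint
    X = np.sqrt(P) * U
    Y = X[:, 0] + X[:, 1] + rng.normal(0.0, np.sqrt(N0), size=n)   # Gaussian MAC output

    # Linear MMSE estimate of U1 from Y (the same holds for U2 by symmetry)
    a = np.cov(U[:, 0], Y)[0, 1] / np.var(Y)
    U1_hat = a * Y
    mse = np.mean((U[:, 0] - U1_hat) ** 2)
    print(f"amplify-and-forward distortion per source ~ {mse:.3f}")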
Abstract:
The Indian Ocean earthquake of 26 December 2004 led to significant ground deformation in the Andaman and Nicobar region, which accounts for ~800 km of the rupture. Part of this article deals with coseismic changes along these islands, observable from coastal morphology, biological indicators, and Global Positioning System (GPS) data. Our studies indicate that the islands south of 10° N latitude coseismically subsided by 1–1.5 m on both their eastern and western margins, whereas those to the north showed a mixed response. The western margin of the Middle Andaman emerged by >1 m, while the eastern margin submerged by the same amount. In the North Andaman, both the western and eastern margins emerged by >1 m. We also assess the pattern of long-term deformation (uplift/subsidence) and attempt to reconstruct the earthquake/tsunami history with the available data. Geological evidence for past submergence includes dead mangrove vegetation near Port Blair dating to 740 ± 100 yr B.P., and peat layers at depths of 2–4 m and 10–15 m observed in core samples from nearby locations. Preliminary paleoseismological/tsunami evidence from the Andaman and Nicobar region and from the east coast of India suggests at least one predecessor of the 2004 earthquake 900–1000 years ago. The history of earthquakes, although incomplete at this stage, seems to imply that 2004-type earthquakes are infrequent and recur at variable intervals.
Abstract:
This paper analyses the efficiency and productivity growth of the Electronic Sector of India in the liberalization era since 1991. The study gives an insight into the growth process of one of the most rapidly emerging sectors of this decade. This sector has experienced a vast structural change along with the changing economic structures in India after liberalization. With the opening up of this sector to foreign markets and the entry of multinational companies, the environment has become highly competitive. The law that operates is that of Darwin's 'survival of the fittest'. Existing industries face a continuous threat of exit owing to the entry of new competitors. It thus becomes inevitable for the existing industries in this sector to improve productivity growth for their survival. It is therefore important to analyze how the industries in this sector have performed over the years and what factors have contributed to the overall output growth.
Abstract:
The use of electroacoustic analogies suggests that a source of acoustical energy (such as an engine, compressor, blower, turbine, loudspeaker, etc.) can be characterized by an acoustic source pressure p_s and internal source impedance Z_s, analogous to the open-circuit voltage and internal impedance of an electrical source. The present paper shows analytically that the source characteristics evaluated by means of the indirect methods are independent of the loads selected; that is, the evaluated values of p_s and Z_s are unique, and the results of the different methods (including the direct method) are identical. In addition, general relations have been derived here for the transfer of source characteristics from one station to another station across one or more acoustical elements, and also for combining several sources into a single equivalent source. Finally, all the conclusions are extended to the case of a uniformly moving medium, incorporating the convective as well as dissipative effects of the mean flow.
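The load-independence claim is easy to exercise numerically in the electrical analogue: with the analogous "pressure-divider" relation p_i = p_s Z_i / (Z_s + Z_i), two load measurements give a linear system whose solution (p_s, Z_s) does not depend on which pair of loads is chosen. The complex values below are arbitrary and serve only to check the algebra of this two-load (indirect) evaluation.

    import numpy as np

    # "True" source characteristics (arbitrary complex values for illustration)
    ps_true, zs_true = 2.0 + 1.0j, 5.0 + 3.0j

    def measured_pressure(z_load):
        """Analogous pressure-divider relation: p = p_s * Zl / (Z_s + Zl)."""
        return ps_true * z_load / (zs_true + z_load)

    def two_load_method(z1, z2):
        """Indirect (two-load) evaluation: solve p_i (Z_s + Z_i) = p_s Z_i for (p_s, Z_s)."""
        p1, p2 = measured_pressure(z1), measured_pressure(z2)
        A = np.array([[z1, -p1],
                      [z2, -p2]])
        b = np.array([p1 * z1, p2 * z2])
        ps, zs = np.linalg.solve(A, b)
        return ps, zs

    # Two different load pairs recover the same (p_s, Z_s), illustrating load independence
    print(two_load_method(3.0 + 0.5j, 8.0 - 2.0j))
    print(two_load_method(1.0 + 4.0j, 6.0 + 6.0j))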
Abstract:
In this paper, we explore the use of LDPC codes for nonuniform sources under the distributed source coding paradigm. Our analysis reveals that several capacity-approaching LDPC codes do indeed approach the Slepian-Wolf bound for nonuniform sources as well. Monte Carlo simulation results show that highly biased sources can be compressed to within 0.049 bits/sample of the Slepian-Wolf bound for moderate block lengths.
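What "0.049 bits/sample away from the Slepian-Wolf bound" means can be made concrete with a small entropy calculation; the source bias and correlation parameters below are assumptions chosen only for illustration, not the setup of the paper.

    import math

    def h2(p):
        """Binary entropy in bits."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    # Nonuniform (biased) source X ~ Bernoulli(p); decoder side information Y = X xor E, E ~ Bernoulli(q)
    p, q = 0.1, 0.05                       # illustrative assumptions
    p_y = p * (1 - q) + (1 - p) * q        # Y is Bernoulli(p + q - 2pq)
    sw_bound = h2(p) + h2(q) - h2(p_y)     # H(X|Y) = H(X) + H(E) - H(Y), since (X, E) are independent
    gap = 0.049                            # distance from the bound reported in the abstract (bits/sample)

    print(f"H(X) = {h2(p):.3f} bits (already < 1 because the source is biased)")
    print(f"Slepian-Wolf bound H(X|Y) = {sw_bound:.3f} bits/sample")
    print(f"rate achieved ~ {sw_bound + gap:.3f} bits/sample")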
Abstract:
The setting considered in this paper is one of distributed function computation. More specifically, there is a collection of N sources possessing correlated information and a destination that would like to acquire a specific linear combination of the N sources. We address both the case when the common alphabet of the sources is a finite field and the case when it is a finite, commutative principal ideal ring with identity. The goal is to minimize the total amount of information that the N sources need to transmit while enabling reliable recovery at the destination of the linear combination sought. One means of achieving this goal is for each of the sources to compress all the information it possesses and transmit it to the receiver. The Slepian-Wolf theorem of information theory governs the minimum rate at which each source must transmit while enabling all data to be reliably recovered at the receiver. However, recovering all the data at the destination is often wasteful of resources, since the destination is interested only in computing a specific linear combination. An alternative explored here is one in which each source is compressed using a common linear mapping and then transmitted to the destination, which then uses linearity to directly recover the needed linear combination. The article is in part a review and in part presents new results: the portion dealing with finite fields is previously known material, while that dealing with rings is mostly new. Attempting to find the best linear map that enables function computation forces us to consider the linear compression of a single source. While in the finite-field case it is known that a source can be linearly compressed down to its entropy, it turns out that the same does not hold for rings. An explanation for this curious interplay between algebra and information theory is also provided in this paper.
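For the finite-field case reviewed here, the classical Körner-Marton construction over GF(2) makes the common-linear-map idea concrete: both encoders apply the same parity-check matrix, and the receiver decodes the desired XOR directly from the sum of the two syndromes, without ever reconstructing the individual sources. The sketch below uses a tiny Hamming (7,4) code and assumes, for illustration, that the two sources differ in at most one position.

    import numpy as np

    # Parity-check matrix of the Hamming (7,4) code: column j is the binary expansion of j+1
    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]])

    def syndrome(v):
        return H @ v % 2

    def decode_weight_le1(s):
        """Recover the unique error pattern of weight <= 1 having syndrome s."""
        z = np.zeros(7, dtype=int)
        pos = 4 * s[0] + 2 * s[1] + s[2]       # the syndrome, read as a binary number, gives the error position
        if pos:
            z[pos - 1] = 1
        return z

    rng = np.random.default_rng(3)
    x = rng.integers(0, 2, size=7)
    z_true = np.zeros(7, dtype=int)
    z_true[rng.integers(0, 7)] = 1             # correlation model: x and y differ in at most one place
    y = (x + z_true) % 2

    # Each encoder sends only its 3-bit syndrome (instead of 7 bits); both use the same linear map H
    sx, sy = syndrome(x), syndrome(y)
    z_hat = decode_weight_le1((sx + sy) % 2)   # H x + H y = H (x + y) over GF(2)

    print("recovered x xor y:", z_hat, "correct:", np.array_equal(z_hat, z_true))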
Abstract:
The spectral index-luminosity relationship for steep-spectrum cores in galaxies and quasars has been investigated, and it is found that the sample of galaxies supports earlier suggestions of a strong correlation, while there is weak evidence for a similar relationship for the quasars. It is shown that a strong spectral index-luminosity correlation can be used to set an upper limit to the velocities of the radio-emitting material which is expelled from the nucleus in the form of collimated beams or jets having relativistic bulk velocities. The data on cores in galaxies indicate that the Lorentz factors of the radiating material are less than about 2.
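The logic behind such a velocity limit can be sketched numerically: relativistic bulk motion makes the observed core luminosity depend strongly on viewing angle through the Doppler factor, so a large Lorentz factor would smear any intrinsic spectral index-luminosity correlation. The boosting exponent below is left as a parameter because it depends on the assumed jet geometry and spectral index convention; the calculation is illustrative only.

    import math

    def doppler_factor(gamma, theta_deg):
        beta = math.sqrt(1.0 - 1.0 / gamma**2)
        return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

    def boosting_spread(gamma, exponent=2.5):
        """Ratio of maximum to minimum flux boosting over viewing angles 0..90 degrees."""
        d_max = doppler_factor(gamma, 0.0)
        d_min = doppler_factor(gamma, 90.0)
        return (d_max / d_min) ** exponent

    for gamma in (1.05, 1.5, 2.0, 5.0):
        print(f"Lorentz factor {gamma}: luminosity spread factor ~ {boosting_spread(gamma):.1f}")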