182 results for Combinations.
Abstract:
It is now clearly understood that atmospheric aerosols have a significant impact on climate due to their important role in modifying the incoming solar and outgoing infrared radiation. The question of whether aerosol cools (negative forcing) or warms (positive forcing) the planet depends on the relative dominance of absorbing aerosols. Recent investigations over the tropical Indian Ocean have shown that, irrespective of its comparatively small percentage contribution to optical depth (~11%), soot has an important role in the overall radiative forcing. However, when the amount of absorbing aerosols such as soot is significant, aerosol optical depth and chemical composition are not the only determinants of aerosol climate effects; the altitude of the aerosol layer and the altitude and type of clouds are also important. In this paper, the aerosol forcing in the presence of clouds and the effect of different surface types (ocean, soil, vegetation, and different combinations of soil and vegetation) are examined based on model simulations, demonstrating that aerosol forcing changes sign from negative (cooling) to positive (warming) when reflection from below (due either to land or to clouds) is high.
Abstract:
Representatives of several Internet access providers have expressed their wish to see a substantial change in the pricing policies of the Internet. In particular, they would like to see content providers pay for use of the network, given the large amount of resources they use. This would be in clear violation of the "network neutrality" principle that has characterized the development of the wireline Internet. Our first goal in this paper is to propose and study possible ways of implementing such payments and of regulating their amount. We introduce a model that includes the internaut's behavior, the utilities of the ISP and of the content providers, and the monetary flow among the internauts, the ISP, and the content provider, in particular the content provider's revenues from advertisements. We consider various game models and study the resulting equilibria; they are all combinations of a noncooperative game (in which the service and content providers determine how much they will charge the internauts) with a cooperative one, in which the content provider and the service provider bargain over payments to one another. We include in our model a possible asymmetric bargaining power, represented by a parameter that varies between zero and one. We then extend our model to the case of several content providers. We also provide a very brief study of the equilibria that arise when one of the content providers enters into an exclusive contract with the ISP.
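The bargaining component described in this abstract, with an asymmetric power parameter between zero and one, can be sketched for the transferable-utility case via the generalized Nash bargaining solution; the function name, arguments, and scenario below are illustrative assumptions, not the paper's actual model:

```python
def nash_bargaining_split(surplus, d_isp, d_cp, alpha):
    """Generalized Nash bargaining over a transferable surplus.

    alpha in [0, 1] is the ISP's bargaining power; alpha = 0.5
    recovers the symmetric Nash solution.  All names here are
    illustrative, not taken from the paper's notation.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    gain = surplus - d_isp - d_cp          # joint gain over the disagreement point
    u_isp = d_isp + alpha * gain           # ISP share grows with its bargaining power
    u_cp = d_cp + (1.0 - alpha) * gain     # content provider receives the remainder
    return u_isp, u_cp
```

With `alpha = 0.5` the joint gain is split evenly on top of each side's disagreement payoff; pushing `alpha` toward 1 shifts the entire gain to the ISP, which is the asymmetry the parameter captures.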
Abstract:
A single-source network is said to be memory-free if all of the internal nodes (those except the source and the sinks) do not employ memory but merely send linear combinations of the symbols received at their incoming edges on their outgoing edges. In this work, we introduce network-error correction for single-source, acyclic, unit-delay, memory-free networks with coherent network coding for multicast. A convolutional code is designed at the source based on the network code in order to correct network-errors that correspond to any of a given set of error patterns, as long as consecutive errors are separated by a certain interval which depends on the convolutional code selected. Bounds on this interval and on the field size required for constructing the convolutional code with the required free distance are also obtained. We illustrate the performance of convolutional network-error correcting codes (CNECCs) designed for unit-delay networks using simulations of CNECCs on an example network under a probabilistic error model.
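The memory-free node operation described above (forwarding linear combinations of incoming symbols) can be sketched over the binary field GF(2), where a linear combination reduces to an XOR of the selected inputs; the function is an illustrative toy, not the paper's construction:

```python
def node_output(incoming, coeffs):
    """One memory-free internal node: emit the GF(2) linear combination
    (i.e. XOR) of the incoming symbols selected by the 0/1 coefficients.
    incoming and coeffs are equal-length sequences of bits."""
    out = 0
    for sym, coef in zip(incoming, coeffs):
        if coef:            # coefficient 1 means this edge participates
            out ^= sym      # addition over GF(2) is XOR
    return out
```

For example, a node on the classic butterfly network with coefficients `[1, 1]` simply XORs its two incoming bits before forwarding.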
Abstract:
A single-source network is said to be memory-free if all of the internal nodes (those except the source and the sinks) do not employ memory but merely send linear combinations of the incoming symbols (received at their incoming edges) on their outgoing edges. Memory-free networks with delay that use network coding are forced to perform inter-generation network coding, as a result of which some or all sinks require a large amount of memory for decoding. In this work, we address this problem by also utilizing memory elements at the internal nodes of the network, which reduces the number of memory elements used at the sinks. We give an algorithm that employs memory at all the nodes of the network to achieve single-generation network coding. For fixed latency, our algorithm reduces the total number of memory elements used in the network to achieve single-generation network coding. We also discuss the advantages of employing single-generation network coding together with convolutional network-error correction codes (CNECCs) for networks with unit delay, and illustrate the performance gain of CNECCs obtained by using memory at the intermediate nodes through simulations on an example network under a probabilistic network error model.
Abstract:
The basic characteristic of a chaotic system is its sensitivity to infinitesimal changes in its initial conditions. A limit to predictability in a chaotic system arises mainly from this sensitivity and also from the inability of the model to reveal the underlying dynamics of the system. In the present study, an attempt is made to quantify these uncertainties and thereby improve predictability by adopting a multivariate nonlinear ensemble prediction. Daily rainfall data of the Malaprabha basin, India, for the period 1955-2000 are used for the study. The series is found to exhibit a low-dimensional chaotic nature, with the dimension varying from 5 to 7. A multivariate phase space is generated from a climate data set of 16 variables. The chaotic nature of each of these variables is confirmed using the false nearest neighbor method. Any redundancy in this atmospheric data set is then removed by principal component analysis (PCA), reducing it to eight principal components (PCs). This multivariate series (rainfall along with the eight PCs) is found to exhibit a low-dimensional chaotic nature with dimension 10. Nonlinear prediction employing the local approximation method is carried out using the univariate series (rainfall alone) and the multivariate series for different combinations of embedding dimensions and delay times. The uncertainty in initial conditions is thus addressed by reconstructing the phase space using different combinations of parameters. The ensembles generated from multivariate predictions are found to be better than those from univariate predictions. The uncertainty in predictions is decreased, or in other words predictability is increased, by adopting multivariate nonlinear ensemble prediction. The restriction on the predictability of a chaotic series can thus be relaxed by quantifying the uncertainty in the initial conditions and by including other variables that may influence the system.
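The phase-space reconstruction step underlying this kind of analysis, choosing an embedding dimension and a delay time, is the standard time-delay embedding; a minimal sketch, in which the function name and interface are assumptions for illustration:

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Reconstruct a phase space from a scalar time series by
    time-delay embedding: row t is
    [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)].
    dim is the embedding dimension, tau the delay (in samples)."""
    n = len(series) - (dim - 1) * tau    # number of reconstructable states
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])
```

Varying `dim` and `tau` over a grid of values is one way to realize the "different combinations of embedding dimensions and delay times" mentioned above, with each reconstruction feeding a separate ensemble member.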
Abstract:
Recently, composite reinforcements, in which materials and material forms such as strips, grids, and strips with anchors are combined depending on requirements, have proven effective in various ground improvement applications. The composite geogrids studied in this paper belong to this category of composite reinforcements and are useful for bearing capacity improvement. The paper presents an evaluation of results of bearing capacity tests conducted on a composite geogrid made of composite reinforcement consisting of steel and cement mortar. The study shows that the behavior of composite reinforcements follows the general trends observed for conventional geogrids, with reference to the depth of the first layer below the footing, the number of layers of reinforcement, and the vertical spacing of the reinforcement. Results show that the performance is comparable to that of a conventional polymer geogrid.
Abstract:
We propose a new method for evaluating the adsorbed phase volume during physisorption of several gases on activated carbon specimens. We treat the adsorbed phase as another equilibrium phase that satisfies the Gibbs equation and hence assume that the law of rectilinear diameters is applicable. Since the bulk gas phase densities are invariably known along measured isotherms, the constants of the adsorbed phase volume can be regressed from the experimental data. We take the Dubinin-Astakhov isotherm as the model for verifying our hypothesis, since it is one of the few equations that accounts for adsorbed phase volume changes. In addition, the pseudo-saturation pressure in the supercritical region is calculated by letting the index of the temperature term in Dubinin's equation be temperature dependent. Based on over 50 combinations of activated carbons and adsorbates (nitrogen, oxygen, argon, carbon dioxide, hydrocarbons, and halocarbon refrigerants), it is observed that the proposed changes fit experimental data quite well.
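The Dubinin-Astakhov isotherm referred to above has the standard form W = W0 exp(-(A/E)^n), with adsorption potential A = RT ln(p_sat/p). A minimal sketch, assuming fixed parameters rather than the temperature-dependent index the paper introduces:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def da_uptake(p, T, p_sat, w0, E, n):
    """Dubinin-Astakhov adsorbed volume W = W0 * exp(-(A/E)^n),
    with adsorption potential A = R * T * ln(p_sat / p).
    p, p_sat in Pa; T in K; w0 is the limiting adsorbed volume;
    E is the characteristic energy in J/mol; n the heterogeneity index."""
    if not 0 < p <= p_sat:
        raise ValueError("require 0 < p <= p_sat")
    A = R * T * math.log(p_sat / p)       # adsorption potential
    return w0 * math.exp(-((A / E) ** n))
```

At saturation (p = p_sat) the potential A vanishes and the uptake equals the limiting volume w0; uptake decreases monotonically as pressure falls below saturation.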
Abstract:
The problem of finding the horizontal pullout capacity of vertical anchors embedded in sands, with the inclusion of pseudostatic horizontal earthquake body forces, is tackled in this note. The analysis is carried out using an upper bound limit analysis, considering two different collapse mechanisms: bilinear and composite logarithmic spiral rupture surfaces. The results are presented in nondimensional form to give the pullout resistance as the earthquake acceleration changes, for different combinations of the embedment ratio of the anchor (λ), the friction angle of the soil (φ), and the anchor-soil interface wall friction angle (δ). The pullout resistance decreases quite substantially with increases in the magnitude of the earthquake acceleration. For values of δ up to about 0.25φ-0.5φ, the bilinear and composite logarithmic spiral rupture surfaces give almost identical answers, whereas for higher values of δ the logarithmic spiral yields significantly smaller pullout resistance. The results compare favorably with existing theoretical data.
Abstract:
The applicability of the confusion principle and the size factor in glass formation has been explored using different combinations of the isoelectronic metals Ti, Zr and Hf. Four alloys of nominal composition Zr41.5Ti41.5Ni17, Zr41.5Hf41.5Ni17, Zr25Ti25Cu50 and Zr34Ti16Cu50 have been rapidly solidified to obtain an amorphous phase, and their crystallisation behaviour has been studied. The Ti-Zr-Ni alloy crystallises in three steps. Initially this alloy precipitates an icosahedral quasicrystalline phase, which on further heat treatment precipitates the cF96 Zr2Ni phase. The Zr-Hf-Ni alloy cannot be amorphised under the same experimental conditions. The amorphous Zr-Ti-Cu alloys at the initial stages of crystallisation phase-separate into two amorphous phases, and on further heat treatment the cF24 Cu5Zr and oC68 Cu10Zr7 phases are precipitated. The lower glass-forming ability of the Zr-Hf-Ni alloy and the crystallisation behaviour of the above alloys have been studied. The rationale behind nanoquasicrystallisation and the formation of the other intermetallic phases is explained.
Abstract:
We study the possible effects of CP violation in the Higgs sector on t t̄ production at a γγ collider. These studies are performed in a model-independent way in terms of six form factors {R(S_γ), J(S_γ), R(P_γ), J(P_γ), S_t, P_t} which parametrize the CP mixing in the Higgs sector, and a strategy for their determination is developed. We observe that the angular distribution of the decay lepton from t/t̄ produced in this process is independent of any CP violation in the tbW vertex and hence best suited for studying CP mixing in the Higgs sector. Analytical expressions are obtained for the angular distribution of leptons in the c.m. frame of the two colliding photons for a general polarization state of the incoming photons. We construct combined asymmetries in the initial-state photon polarization and the final-state lepton charge. They involve CP-even (x's) and CP-odd (y's) combinations of the mixing parameters. We study the limits up to which the values of x and y, with only two of them allowed to vary at a time, can be probed by measurements of these asymmetries using circularly polarized photons. We use the numerical values of the asymmetries predicted by various models to discriminate among them. We show that this method can be sensitive to the loop-induced CP violation in the Higgs sector of the minimal supersymmetric standard model.
Abstract:
Infection of the skin or throat by Streptococcus dysgalactiae subspecies equisimilis (SDSE) may result in a number of human diseases. To understand the mechanisms that give rise to new genetic variants in this species, we used multi-locus sequence typing (MLST) to characterise relationships in the SDSE population from India, a country where streptococcal disease is endemic. The study revealed that Indian SDSE isolates have sequence types (STs) predominantly different from those reported from other regions of the world. The emm-ST combinations in India are also largely unique. Split decomposition analysis, the presence of emm-types in unrelated clonal complexes, and analysis of phylogenetic trees based on concatenated sequences all reveal an extensive history of recombination within the population. The ratio of recombination to mutation (r/m) events (11:1) and the per-site r/m ratio (41:1) in this population are twice as high as those reported for SDSE from non-endemic regions. Recombination involving the emm-gene is also more frequent than recombination involving housekeeping genes, consistent with diversification of M proteins offering selective advantages to the pathogen. Our data demonstrate that genetic recombination in endemic regions is more frequent than in non-endemic regions and gives rise to novel local SDSE variants, some of which may have increased fitness or pathogenic potential.
Abstract:
With the extensive use of dynamic voltage scaling (DVS) there is an increasing need for voltage-scalable models. Similarly, leakage being very sensitive to temperature motivates the need for a temperature-scalable model as well. We characterize standard cell libraries for statistical leakage analysis based on models for transistor stacks. Modeling stacks has the advantage of using a single model across many gates, thereby reducing the number of models that need to be characterized. Our experiments on 15 different gates show that only 23 models were needed to predict the leakage across 126 input vector combinations. We investigate the use of neural networks for the combined PVT model for the stacks, which can capture the effect of inter-die and intra-gate variations, supply voltage (0.6-1.2 V), and temperature (0-100 °C) on leakage. Results show that neural-network-based stack models can predict the PDF of the leakage current across supply voltage and temperature accurately, with the average error in the mean being less than 2% and that in the standard deviation being less than 5% across the range of voltage and temperature.
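As a rough illustration of a voltage- and temperature-scalable leakage model, one can fit log-leakage linearly in V and T by least squares; this is a deliberately crude stand-in for the neural-network stack models described above (leakage is roughly exponential in voltage and temperature, so the fit is done in log space), and all names are illustrative:

```python
import numpy as np

def fit_leakage_surrogate(V, T, I_leak):
    """Least-squares fit of log(I) = a + b*V + c*T over characterization
    samples; a linear surrogate, not the paper's neural-network model.
    V, T, I_leak are 1-D arrays of equal length; returns (a, b, c)."""
    X = np.column_stack([np.ones_like(V), V, T])   # design matrix [1, V, T]
    coef, *_ = np.linalg.lstsq(X, np.log(I_leak), rcond=None)
    return coef

def predict_leakage(coef, V, T):
    """Evaluate the surrogate at supply voltage V and temperature T."""
    a, b, c = coef
    return np.exp(a + b * V + c * T)
```

A neural network replaces the fixed [1, V, T] basis with learned nonlinear features, which is what lets it also capture inter-die and intra-gate variation terms.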
Abstract:
Validation of the flux partitioning of species model is illustrated. Various combinations of inequality expressions for the fluxes of species A and B in two successively grown hypothetical intermetallic phases in the interdiffusion zone have been considered within the constraints of this concept. Furthermore, the ratio of the intrinsic diffusivities of species A and B in those two phases has been correlated in four different cases. Moreover, complete and/or partial validation or invalidation of this model with respect to both species has been proven theoretically and is also discussed with the Co-Si system as an example.
Abstract:
Of the ~4000 ORFs identified through the genome sequence of Mycobacterium tuberculosis (TB) H37Rv, experimentally determined structures are available for 312. Since knowledge of protein structures is essential to obtain a high-resolution understanding of the underlying biology, we seek to obtain a structural annotation for the genome using computational methods. Structural models were obtained and validated for ~2877 ORFs, covering ~70% of the genome. Functional annotation of each protein was based on fold-based functional assignments and a novel binding-site-based ligand association. New algorithms for binding site detection and genome-scale binding site comparison at the structural level, recently reported from the laboratory, were utilized. Besides these, the annotation covers detection of various sequence and sub-structural motifs and quaternary structure predictions based on the corresponding templates. The study provides an opportunity to obtain a global perspective of the fold distribution in the genome. The annotation indicates that cellular metabolism can be achieved with only 219 folds. New insights about the folds that predominate in the genome, as well as the fold combinations that make up multi-domain proteins, are also obtained. 1728 binding pockets have been associated with ligands through binding site identification and sub-structure similarity analyses. The resource (http://proline.physics.iisc.ernet.in/Tbstructuralannotation), being one of the first to be based on structure-derived functional annotations at a genome scale, is expected to be useful for better understanding of TB and for application in drug discovery. The reported annotation pipeline is fairly generic and can be applied to other genomes as well.
Abstract:
Parallel sub-word recognition (PSWR) is a new model that has been proposed for language identification (LID) which does not need elaborate phonetic labeling of the speech data in a foreign language. The new approach performs a front-end tokenization in terms of sub-word units which are designed by automatic segmentation, segment clustering and segment HMM modeling. We develop PSWR based LID in a framework similar to the parallel phone recognition (PPR) approach in the literature. This includes a front-end tokenizer and a back-end language model, for each language to be identified. Considering various combinations of the statistical evaluation scores, it is found that PSWR can perform as well as PPR, even with broad acoustic sub-word tokenization, thus making it an efficient alternative to the PPR system.
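The back-end decision rule common to PPR-style systems, scoring the tokenized utterance with each language's model and taking the arg-max, can be sketched with a toy unigram model; the interface, the unigram simplification, and the floor probability for unseen tokens are illustrative assumptions, not the paper's back-end language model:

```python
import math

def identify_language(tokens, language_models):
    """Score a sub-word token sequence against each language's model
    (here a simple unigram probability table) and return the language
    with the highest total log-likelihood."""
    best_lang, best_score = None, float("-inf")
    for lang, lm in language_models.items():
        # unseen tokens get a small floor probability instead of zero
        score = sum(math.log(lm.get(tok, 1e-9)) for tok in tokens)
        if score > best_score:
            best_lang, best_score = lang, score
    return best_lang
```

In a full PSWR system the front-end tokenizer for each language would first produce the token sequence from speech, and the back end would use n-gram rather than unigram statistics.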