Abstract:
Parabolized stability equation (PSE) models are being developed to predict the evolution of low-frequency, large-scale wavepacket structures and their radiated sound in high-speed turbulent round jets. Linear PSE wavepacket models were previously shown to be in reasonably good agreement with the amplitude envelope and phase measured using a microphone array placed just outside the jet shear layer. 1,2 Here we show that they are also in very good agreement with hot-wire measurements at the jet centerline in the potential core, for a different set of experiments. 3 When used as the source model for an acoustic analogy, the predicted far-field noise radiation is in reasonably good agreement with microphone measurements at aft angles, where contributions from large-scale structures dominate the acoustic field. Nonlinear PSE is then employed to determine the relative importance of mode interactions on the wavepackets. A series of nonlinear computations with randomized initial conditions is used to obtain bounds for the evolution of the modes in the natural turbulent jet flow. Nonlinearity is found to have a very limited impact on the evolution of the wavepackets for St >= 0.3. Finally, the nonlinear mechanism for the generation of a low-frequency mode as the difference-frequency mode 4,5 of two forced frequencies is investigated in the context of the high Reynolds number jets considered in this paper.
Abstract:
We present a simple model that can be used to account for the rheological behaviour observed in recent experiments on micellar gels. The model combines attachment-detachment kinetics with stretching due to shear, and shows well-defined jammed and flowing states. The large-deviation function (LDF) for the coarse-grained velocity becomes increasingly non-quadratic as the applied force F is increased, in a range near the yield threshold. The power fluctuations are found to obey a steady-state fluctuation relation (FR) at small F. However, the FR is violated when F is near the transition from the flowing to the jammed state, although the LDF still exists; the antisymmetric part of the LDF is found to be nonlinear in its argument. Our approach suggests that large fluctuations, and motion in a direction opposite to an imposed force, are likely to occur in a wider class of systems near yielding.
Abstract:
We report a special, hitherto-unexplored property of (-)-epigallocatechin gallate (EGCG) as a chiral solvating agent for the enantiodiscrimination of alpha-amino acids in the polar solvent DMSO. The phenomenon has been investigated by H-1 NMR spectroscopy. The interaction of EGCG with alpha-amino acids is understood to arise from hydrogen-bonded noncovalent interactions, in which the -OH groups of two phenyl rings of EGCG play the dominant role. Conversion of the enantiomeric mixture into diastereomers yielded well-resolved peaks for D and L amino acids, permitting precise measurement of the enantiomeric composition. One often encounters complex situations in which the spectra are severely overlapped or only partially resolved, hampering the testing of enantiopurity and the precise measurement of enantiomeric excess (ee). Although a higher concentration of EGCG yields better discrimination, a lower concentration is more economical, and we have therefore exploited an appropriate 2D NMR experiment to overcome such problems. Thus, in the present study we have successfully demonstrated the utility of the bioflavonoid (-)-EGCG, a natural product, as a chiral solvating agent for the discrimination of a large number of alpha-amino acids in the polar solvent DMSO. Another significant advantage of this new chiral sensing agent is that, being a natural product, it does not require tedious multistep synthesis, unlike many other chiral auxiliaries.
Abstract:
In this paper we present a hardware-software hybrid technique for modular multiplication over large binary fields. The technique applies the Karatsuba-Ofman algorithm for polynomial multiplication together with a novel reduction technique. The proposed reduction technique is based on the popular repeated-multiplication technique and Barrett reduction. We propose a new design for a parallel polynomial multiplier that serves as a hardware accelerator for large field multiplications. We show that the proposed reduction technique, accelerated using the modified polynomial multiplier, achieves significantly higher performance than a purely software technique and other hybrid techniques. We also show that the hybrid accelerated approach to modular field multiplication is significantly faster than the Montgomery-algorithm-based integrated multiplication approach.
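The Karatsuba-Ofman split underlying such multipliers can be illustrated with a short software sketch (illustration only: the paper's design is a hardware accelerator, and the function names here are invented; binary-field polynomials are represented as Python integer bitmasks):

```python
def clmul(a: int, b: int) -> int:
    """Schoolbook carry-less (GF(2)) polynomial multiplication."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba_gf2(a: int, b: int, bits: int = 128) -> int:
    """Karatsuba-Ofman carry-less multiply: three half-size products
    instead of four, recursing until operands are small."""
    if bits <= 32:
        return clmul(a, b)
    half = bits // 2
    mask = (1 << half) - 1
    a_lo, a_hi = a & mask, a >> half
    b_lo, b_hi = b & mask, b >> half
    lo = karatsuba_gf2(a_lo, b_lo, half)
    hi = karatsuba_gf2(a_hi, b_hi, half)
    # over GF(2), subtraction is XOR
    mid = karatsuba_gf2(a_lo ^ a_hi, b_lo ^ b_hi, half) ^ lo ^ hi
    return (hi << (2 * half)) ^ (mid << half) ^ lo

def reduce_mod(x: int, poly: int) -> int:
    """Naive reduction of x modulo an irreducible polynomial over GF(2)
    (the paper's Barrett-style reduction is a faster alternative)."""
    d = poly.bit_length() - 1
    while x.bit_length() - 1 >= d:
        x ^= poly << (x.bit_length() - 1 - d)
    return x
```

For example, (x + 1)^2 = x^2 + 1 over GF(2), so `karatsuba_gf2(0b11, 0b11)` returns `0b101`.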
Abstract:
The paper reports the effect of a small ternary addition of In on the microstructure, mechanical properties and oxidation behaviour of a near-eutectic suction-cast Nb-19.1 at-%Si-1.5 at-%In alloy. The observed microstructure consists of a combination of two kinds of lamellar structure: metal-intermetallic combinations of Nb-ss/beta-Nb5Si3 and Nb-ss/alpha-Nb5Si3, respectively, with lamellar spacings of 40-60 nm. The alloy exhibits a compressive strength of 3 GPa and an engineering strain of ~3% at room temperature. The composite structure also exhibits a large improvement in oxidation resistance at high temperature (1000 degrees C).
Abstract:
The role of crystallite size and clustering in influencing the stability of the structures of the large-tetragonality ferroelectric system 0.6BiFeO3-0.4PbTiO3 was investigated. The system exhibits a cubic phase for a crystallite size of ~25 nm, three times larger than the critical size reported for one of its end members, PbTiO3. With an increased degree of clustering at the same average crystallite size, partial stabilization of the ferroelectric tetragonal phase takes place. The results suggest that clustering helps to reduce the depolarization energy without the need to increase the crystallite size of the free particles.
Abstract:
Here, we present a comprehensive investigation of dc magnetization and magnetotransport in La0.85Sr0.15CoO3 single crystals grown by the optical float-zone method. The spin-freezing temperature in the ac susceptibility study shifts to lower values at higher dc fields, and this is well described by the de Almeida-Thouless line, which is characteristic of spin-glass (SG) behavior. The magnetotransport study shows that the sample exhibits a huge negative magnetoresistance (MR) of ~70% at 10 K, which decreases monotonically with increasing temperature. Furthermore, the magnetization and resistivity relaxation give a strong indication that the MR scales with the sample's magnetization. In essence, all the present experimental findings evidence the SG behavior of La0.85Sr0.15CoO3 single crystals.
Abstract:
Chebyshev-inequality-based convex relaxations of Chance-Constrained Programs (CCPs) are shown to be useful for learning classifiers on massive datasets. In particular, an algorithm that integrates efficient clustering procedures with CCP approaches for computing classifiers on large datasets is proposed. The key idea is to identify high-density regions, or clusters, from the individual class-conditional densities and then use a CCP formulation to learn a classifier on the clusters. The CCP formulation ensures that most of the data points in a cluster are correctly classified by employing a Chebyshev-inequality-based convex relaxation. This relaxation depends heavily on the second-order statistics. However, this formulation, and in general any relaxation that depends on the second-order moments, is susceptible to moment estimation errors. One contribution of the paper is to propose several formulations that are robust to such errors. In particular, a generic way of making such formulations robust to moment estimation errors is illustrated using two novel confidence sets. An important contribution is to show that when either of the confidence sets is employed, for the special case of clusters with spherical normal distributions, the robust variant of the formulation can be posed as a second-order cone program. Empirical results show that the robust formulations achieve accuracies comparable to those obtained with the true moments, even when the moment estimates are erroneous. The results also illustrate the benefits of employing the proposed methodology for robust classification of large-scale datasets.
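The Chebyshev-based relaxation referred to above can be sketched as follows: a cluster with mean mu and covariance Sigma lies on the correct side of the hyperplane with probability at least 1 - eta whenever the signed margin exceeds kappa = sqrt((1 - eta)/eta) times the standard deviation of the projected data. A minimal numpy check of this second-order-cone constraint (illustrative only; the function name is invented and the paper's robust formulations with confidence sets are more involved):

```python
import numpy as np

def chebyshev_margin_ok(w, b, mu, Sigma, y, eta=0.05):
    """Check the Chebyshev-inequality-based convex relaxation of the
    chance constraint P(y (w.x + b) <= 0) <= eta for a cluster with
    moments (mu, Sigma):
        y (w.mu + b) >= kappa * sqrt(w' Sigma w),
        kappa = sqrt((1 - eta) / eta).
    """
    kappa = np.sqrt((1.0 - eta) / eta)
    margin = y * (w @ mu + b)          # signed distance of the cluster mean
    spread = np.sqrt(w @ Sigma @ w)    # std. dev. of the projection
    return bool(margin >= kappa * spread)
```

For eta = 0.2, kappa = 2: a spherical cluster with mean (3, 0) and identity covariance satisfies the constraint for the hyperplane w = (1, 0), b = 0, while a cluster centred at (1, 0) does not.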
Abstract:
The 2004 Sumatra-Andaman earthquake was unprecedented in terms of its magnitude (M-w 9.2), rupture length along the plate boundary (1300 km) and the size of the resultant tsunami. Since 2004, efforts have been made to improve the understanding of the seismic hazard in the Sumatra-Andaman subduction zone in terms of the recurrence patterns of major earthquakes and tsunamis. It is reasonable to assume that previous earthquake events in the Myanmar-Andaman segment must be preserved in the geological record in the form of seismo-turbidite sequences. Here we present the prospects of conducting deep-ocean palaeoseismicity investigations in order to refine the quantification of the recurrence pattern of large subduction-zone earthquakes along the Andaman-Myanmar arc. Our participation in the Sagar Kanya cruise SK-273 (June 2010) was to test the efficacy of such a survey. The primary mission of the cruise, along a short length (300 km) of the Sumatra-Andaman subduction front, was to collect bathymetric data of the ocean floor trenchward of the Andaman Islands. The agenda of our piggyback survey was to fix potential coring sites that might preserve seismo-turbidite deposits. In this article we present the possibilities and challenges of such an exercise and our first-hand experience of this preliminary survey. This account will help future researchers with similar scientific objectives who wish to survey the deep-ocean archives of this region for evidence of extreme events such as major earthquakes.
Abstract:
Daily rainfall datasets for 10 years (1998-2007) of the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) version 6 and the India Meteorological Department (IMD) gridded rain-gauge product have been compared over the Indian landmass, on both large and small spatial scales. On the larger spatial scale, the pattern correlation between the two datasets on daily scales during individual years of the study period ranges from 0.4 to 0.7. The correlation improves significantly (to ~0.9) when the study is confined to specific wet and dry spells, each of about 5-8 days. Wavelet analysis of the intraseasonal oscillations (ISO) of the southwest monsoon rainfall shows the percentage contributions of the two major modes (30-50 days and 10-20 days) to range between ~30-40% and ~5-10%, respectively, for the various years. Analysis of the interannual variability shows the satellite data to underestimate seasonal rainfall by ~110 mm during the southwest monsoon and to overestimate it by ~150 mm during the northeast monsoon season. At high spatio-temporal scales, viz. the 1 degree x 1 degree grid, TMPA data do not correspond to the ground truth. We propose here a new analysis procedure to assess the minimum spatial scale at which the two datasets are compatible with each other. This has been done by studying the contribution to the total seasonal rainfall from different rainfall-rate windows (at 1 mm intervals) on different spatial scales (at the daily time scale). The compatibility spatial scale is seen to lie beyond a 5 degree x 5 degree average spatial scale over the Indian landmass. This will help decide the usability of TMPA products, if averaged at appropriate spatial scales, for specific process studies, e.g., at cloud, meso or synoptic scales.
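The rate-window decomposition described above can be sketched in a few lines: the seasonal total is split into contributions from 1 mm/day rainfall-rate bins, giving a profile that can be compared between the two datasets at successively coarser spatial averages. A minimal numpy version (function name, bin range and synthetic data are illustrative choices, not from the paper):

```python
import numpy as np

def rate_window_contributions(daily_rain, max_rate=100):
    """Fraction of the seasonal rainfall total contributed by each
    1 mm/day rainfall-rate window [i, i+1), for i = 0 .. max_rate-1."""
    edges = np.arange(0, max_rate + 1, 1.0)
    contrib = np.zeros(len(edges) - 1)
    for i in range(len(edges) - 1):
        in_bin = (daily_rain >= edges[i]) & (daily_rain < edges[i + 1])
        contrib[i] = daily_rain[in_bin].sum()
    return contrib / daily_rain.sum()
```

Two datasets whose contribution profiles agree (e.g., by pattern correlation) at a given spatial average would be judged compatible at that scale.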
Abstract:
In this paper, we explore fundamental limits on the number of tests required to identify a given number of ``healthy'' items from a large population containing a small number of ``defective'' items, in a nonadaptive group testing framework. Specifically, we derive mutual-information-based upper bounds on the number of tests required to identify the required number of healthy items. Our results show that an impressive reduction in the number of tests is achievable compared to the conventional approach of using classical group testing to first identify the defective items and then pick the required number of healthy items from the complement set. For example, to identify L healthy items out of a population of N items containing K defective items, when the tests are reliable, our results show that O(K(L - 1)/(N - K)) measurements are sufficient. In contrast, the conventional approach requires O(K log(N/K)) measurements. We derive our results in a general sparse-signal setup, and hence they are also applicable to other sparse-signal-based applications such as compressive sensing.
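The intuition behind the saving can be sketched with a toy nonadaptive scheme: any pooled test that comes back negative certifies every item in its pool as healthy, so healthy items are found without first locating all the defectives. A minimal simulation (pool size, test count and function name are arbitrary illustrative choices, not the paper's optimal design):

```python
import random

def find_healthy(n_items, defective, n_tests, pool_size, seed=0):
    """Nonadaptive group testing for healthy items: draw random pools;
    a negative pool (no defective member) certifies all its items healthy."""
    rng = random.Random(seed)
    defective = set(defective)
    healthy = set()
    for _ in range(n_tests):
        pool = set(rng.sample(range(n_items), pool_size))
        if not pool & defective:   # test is negative
            healthy |= pool        # every pooled item is certified healthy
    return healthy
```

With N = 100 and K = 5 defectives, a majority of random pools of size 8 test negative, so a modest number of tests already certifies many healthy items, in line with the O(K(L - 1)/(N - K)) scaling when only L healthy items are needed.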
Abstract:
Mycobacterium tuberculosis owes its high pathogenic potential to its ability to evade host immune responses and thrive inside the macrophage. The outcome of infection is largely determined by the cellular response, which comprises a multitude of molecular events. The complexity and inter-relatedness of these processes make it essential to adopt systems approaches to study them. In this work, we construct a comprehensive network of infection-related processes in a human macrophage comprising 1888 proteins and 14,016 interactions. We then compute response networks based on available gene expression profiles corresponding to states of health, disease and drug treatment. We use a novel formulation for mining response networks that identifies the highest-activity paths in the cell. These highest-activity paths provide mechanistic insights into pathogenesis and the response to treatment. The approach used here serves as a generic framework for mining dynamic changes in genome-scale protein interaction networks.
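One simple way to mine high-activity paths of this kind is to convert node activities (e.g., derived from expression data) into non-negative costs and run a shortest-path search, so that low-cost paths correspond to high-activity routes. This is an illustrative proxy under that assumption, not the paper's exact formulation:

```python
import heapq

def highest_activity_path(graph, activity, source, target):
    """Dijkstra on costs (A_max - activity[node]) >= 0: minimising the
    summed cost favours paths through high-activity nodes."""
    a_max = max(activity.values())
    cost = {v: a_max - a for v, a in activity.items()}
    dist = {source: cost[source]}
    prev = {}
    heap = [(dist[source], source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v in graph.get(u, []):
            nd = d + cost[v]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # reconstruct the path by walking predecessors back from the target
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return path[::-1]
```

On a small directed network where node B is far more active than node C, the search routes A -> D through B rather than C.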
Abstract:
The moments of the hadronic spectral functions are of interest for the extraction of the strong coupling alpha(s) and other QCD parameters from the hadronic decays of the tau lepton. Motivated by the recent analyses of a large class of moments in the standard fixed-order and contour-improved perturbation theories, we consider the perturbative behavior of these moments in the framework of a QCD nonpower perturbation theory, defined by the technique of series acceleration by conformal mappings, which simultaneously implements renormalization-group summation and has a tame large-order behavior. Two recently proposed models of the Adler function are employed to generate the higher-order coefficients of the perturbation series and to predict the exact values of the moments, required for testing the properties of the perturbative expansions. We show that the contour-improved nonpower perturbation theories and the renormalization-group-summed nonpower perturbation theories have very good convergence properties for a large class of moments of the so-called ``reference model,'' including moments that are poorly described by the standard expansions. The results provide additional support for the plausibility of the description of the Adler function in terms of a small number of dominant renormalons.
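As background (the standard construction on which such nonpower expansions are based, not a formula quoted from the paper itself): for the Adler function, whose Borel transform has branch points at u = -1 (first ultraviolet renormalon) and u = 2 (first infrared renormalon), the optimal conformal mapping of the Borel plane takes the form

```latex
% Conformal mapping of the Borel u-plane (Caprini-Fischer type):
% the branch points u = -1 and u = 2 are sent to the unit circle,
% and an expansion in powers of w(u) converges on the whole cut plane,
% which accounts for the tame large-order behaviour mentioned above.
w(u) = \frac{\sqrt{1+u}-\sqrt{1-u/2}}{\sqrt{1+u}+\sqrt{1-u/2}}
```

The nonpower perturbation theories referred to in the abstract replace the power series in the coupling by an expansion in powers of such a mapped variable.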
Abstract:
Glasses in the x(BaO-TiO2)-B2O3 (x = 0.25, 0.5, 0.75, and 1 mol) system were fabricated via the conventional melt-quenching technique. Thermal stability and glass-forming ability, as determined by differential thermal analysis (DTA), were found to increase with increasing BaO-TiO2 (BT) content. However, there was no noticeable change in the glass transition temperature (Tg). This was attributed to the active participation of TiO2 in network formation, especially at higher BT contents, via the conversion of TiO6 structural units into TiO4 units, which increased the connectivity and resulted in an increase in the crystallization temperature. The dielectric and optical properties of all the glasses under investigation were studied at room temperature. Interestingly, these glasses were found to be hydrophobic. The results obtained were correlated with the different structural units and their connectivity in the glasses.