101 results for Empirical orthogonal function
Abstract:
A careful comparison of the distribution in the (R, θ)-plane of all NH ... O hydrogen bonds with that for bonds between neutral NH and neutral C=O groups indicated that the latter has a larger mean R and a wider range of θ, and that the distribution was also broader than for the average case. Therefore, the potential function developed earlier for an average NH ... O hydrogen bond was modified to suit the peptide case. A three-parameter expression of the form Vhb = Vmin + p1Δ² + p3Δ³ + q1θ², with Δ = R - Rmin, was found to be satisfactory. By comparing the theoretically expected distribution in R and θ with the (admittedly limited) observed data, the best values were found to be p1 = 25, p3 = -2 and q1 = 1 × 10⁻³, with Rmin = 2·95 Å and Vmin = -4·5 kcal/mole. The procedure for obtaining a smooth transition from Vhb to the non-bonded potential Vnb for large R and θ is described, along with a flow chart useful for programming the formulae. Calculated values of ΔH, the enthalpy of formation of the hydrogen bond, using this function are in reasonable agreement with observation. When the atoms involved in the hydrogen bond occur in a five-membered ring, as in the sequence [Figure not available: see fulltext.], a different formula for the potential function is needed, of the form Vhb = Vmin + p1Δ² + q1x², where x = θ - 50° for θ ≥ 50°, with p1 = 15, q1 = 0·002, Rmin = 2· Å and Vmin = -2·5 kcal/mole. © 1971 Indian Academy of Sciences.
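The potential functions above are simple polynomials and can be evaluated directly. The sketch below assumes the three-parameter form Vhb = Vmin + p1Δ² + p3Δ³ + q1θ² suggested by the parameter names (the abstract's rendered formula is lost), with R in Å, θ in degrees, and energies in kcal/mole; Rmin for the five-membered-ring case is left as a required argument because its value is garbled in the abstract:

```python
def v_hb(R, theta, Rmin=2.95, Vmin=-4.5, p1=25.0, p3=-2.0, q1=1e-3):
    """Peptide NH...O hydrogen-bond potential (kcal/mole).

    Assumed three-parameter form: Vmin + p1*d**2 + p3*d**3 + q1*theta**2,
    with d = R - Rmin (angstrom) and theta in degrees.
    """
    d = R - Rmin
    return Vmin + p1 * d**2 + p3 * d**3 + q1 * theta**2


def v_hb_five_ring(R, theta, Rmin, Vmin=-2.5, p1=15.0, q1=0.002):
    """Five-membered-ring variant: Vmin + p1*d**2 + q1*x**2,
    where x = theta - 50 for theta >= 50 degrees, else 0.

    Rmin has no default: its value is garbled in the abstract.
    """
    d = R - Rmin
    x = max(theta - 50.0, 0.0)
    return Vmin + p1 * d**2 + q1 * x**2
```

At R = Rmin and θ = 0 the first function returns Vmin = -4·5 kcal/mole, and the energy rises as the bond stretches or bends, as the fitted parameters intend.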
Abstract:
A technique based on empirical orthogonal functions is used to estimate hydrologic time-series variables at ungaged locations. The technique is applied to estimate daily and monthly rainfall, temperature and runoff values. The accuracy of the method is tested by application to locations where data are available. The second-order characteristics of the estimated data are compared with those of the observed data. The results indicate that the method is quick and accurate.
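As a concrete illustration of the decomposition underlying such a technique (the interpolation to ungaged sites is not shown), the sketch below extracts EOFs from a synthetic multi-station record via the SVD and reconstructs the field from the leading mode; the data, station count, and record length are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(365)                                   # one year of daily values
seasonal = np.sin(2 * np.pi * t / 365.0)             # shared seasonal signal
loadings = rng.uniform(0.5, 1.5, 10)                 # 10 synthetic stations
data = np.outer(seasonal, loadings) + 0.1 * rng.standard_normal((365, 10))

mean = data.mean(axis=0)
anom = data - mean                                   # station-wise anomalies
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
eofs = Vt                                            # rows: spatial patterns
pcs = U * s                                          # columns: PC time series

k = 1                                                # keep the leading EOF only
recon = pcs[:, :k] @ eofs[:k, :] + mean
explained = s[0]**2 / np.sum(s**2)                   # variance fraction of EOF 1
```

Because most of the variance lives in a few modes, a truncated EOF basis of this kind carries nearly all the information needed to estimate values at a new location.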
Abstract:
The authors present the simulation of the tropical Pacific surface wind variability by a low-resolution (R15 horizontal resolution and 18 vertical levels) version of the Center for Ocean-Land-Atmosphere Interactions, Maryland, general circulation model (GCM) when forced by observed global sea surface temperature. The authors have examined the monthly mean surface winds and precipitation simulated by the model, which was integrated from January 1979 to March 1992. Analyses of the climatological annual cycle and interannual variability over the Pacific are presented. The annual means of the simulated zonal and meridional winds agree well with observations. The only appreciable difference is in the region of strong trade winds, where the simulated zonal winds are about 15%-20% weaker than observed. The amplitudes of the annual harmonics are weaker than observed over the intertropical convergence zone and the South Pacific convergence zone regions. The amplitudes of the interannual variation of the simulated zonal and meridional winds are close to those of the observed variation. The first few dominant empirical orthogonal functions (EOFs) of the simulated, as well as the observed, monthly mean winds are found to contain a large amount of high-frequency intraseasonal variations. While the statistical properties of the high-frequency modes, such as their amplitude and geographical locations, agree with observations, their detailed time evolution does not. When the data are subjected to a 5-month running-mean filter, the first two dominant EOFs of the simulated winds, representing the low-frequency El Niño-Southern Oscillation fluctuations, compare quite well with observations. However, the center of the westerly anomalies associated with the warm episodes is simulated about 15 degrees west of the observed location. The model simulates well the progress of the westerly anomalies toward the eastern Pacific during the evolution of a warm event.
The simulated equatorial wind anomalies are comparable in magnitude to the observed anomalies. An intercomparison of the simulation of the interannual variability by a few other GCMs with comparable resolution is also presented. The success in simulation of the large-scale low-frequency part of the tropical surface winds by the atmospheric GCM seems to be related to the model's ability to simulate the large-scale low-frequency part of the precipitation. Good correspondence between the simulated precipitation and the highly reflective cloud anomalies is seen in the first two EOFs of the 5-month running means. Moreover, the strong correlation found between the simulated precipitation and the simulated winds in the first two principal components indicates the primary role of model precipitation in driving the surface winds. The surface winds simulated by a linear model forced by the GCM-simulated precipitation show good resemblance to the GCM-simulated winds in the equatorial region. This result supports the recent findings that the large-scale part of the tropical surface winds is primarily linear.
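The role of the 5-month running mean in isolating the low-frequency modes can be illustrated on synthetic data: smoothing suppresses the intraseasonal noise, so the leading EOF of the filtered field explains a larger variance fraction than that of the raw field. The grid size and signals below are invented for the example; only the record length matches the text:

```python
import numpy as np

def running_mean(x, window=5):
    """Centered running mean along axis 0, keeping only the 'valid' part."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="valid"), 0, x)

def leading_eof_fraction(field):
    """Variance fraction explained by the leading EOF of the anomalies."""
    anom = field - field.mean(axis=0)
    s = np.linalg.svd(anom, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

rng = np.random.default_rng(1)
months, npts = 159, 20                   # Jan 1979 - Mar 1992; toy grid
t = np.arange(months)
enso = np.sin(2 * np.pi * t / 48.0)      # slow ENSO-like oscillation
field = (np.outer(enso, rng.uniform(-1.0, 1.0, npts))
         + rng.standard_normal((months, npts)))   # intraseasonal "noise"

frac_raw = leading_eof_fraction(field)
frac_smooth = leading_eof_fraction(running_mean(field, 5))
```

The filtered field concentrates far more of its variance in the leading mode, which is why the low-frequency ENSO comparison in the text is done on 5-month running means.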
Abstract:
A half-duplex constrained non-orthogonal cooperative multiple access (NCMA) protocol suitable for transmission of information from N users to a single destination in a wireless fading channel is proposed. Transmission in this protocol comprises a broadcast phase and a cooperation phase. In the broadcast phase, each user takes a turn broadcasting its data to all other users and the destination in an orthogonal fashion in time. In the cooperation phase, each user transmits a linear function of what it received from all other users as well as its own data. In contrast to the orthogonal extension of cooperative relay protocols to the cooperative multiple access channel, wherein at any point in time only one user is considered a source and all the other users behave as relays and do not transmit their own data, the NCMA protocol relaxes the orthogonality built into those protocols and hence allows for a more spectrally efficient usage of resources. Code design criteria for achieving the full diversity of N in the NCMA protocol are derived using pairwise error probability (PEP) analysis, and it is shown that full diversity can be achieved with a minimum total duration of 2N - 1 channel uses. Explicit construction of full-diversity codes is then provided for an arbitrary number of users. Since maximum-likelihood decoding complexity grows exponentially with the number of users, the notion of g-group decodable codes is introduced for this setup and a set of necessary and sufficient conditions is also obtained.
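The 2N - 1 channel-use accounting can be made concrete with a toy noiseless linear-algebra sketch (an illustration of the slot count and the linear combining, not the paper's full-diversity code construction): N broadcast slots deliver each user's symbol once, N - 1 cooperation slots each carry a linear combination of all users' symbols, and the destination inverts the resulting (2N-1) x N system. The random real combining weights are an invented stand-in for a designed code:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4                                   # users
data = rng.standard_normal(N)           # one toy real symbol per user

# Broadcast phase: N orthogonal time slots, one per user.
broadcast = np.eye(N)

# Cooperation phase: N - 1 slots, each a linear combination of all
# users' symbols (random weights, purely illustrative).
cooperation = rng.standard_normal((N - 1, N))

A = np.vstack([broadcast, cooperation])     # (2N-1) x N effective system
received = A @ data                         # noiseless toy channel

# The destination recovers all N symbols: A has full column rank.
est, *_ = np.linalg.lstsq(A, received, rcond=None)
```

Because the identity block alone already has rank N, any choice of cooperation rows keeps the system solvable; the hard part the paper addresses is choosing them so that diversity N survives fading and noise.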
Abstract:
Electrical conduction in insulating materials is a complex process, and several theories have been suggested in the literature. Many phenomenological empirical models are in use in the DC cable literature. However, the impact of using different models for cable insulation has not been investigated until now, beyond claims of relative accuracy. The steady-state electric field in DC cable insulation is known to be a strong function of the DC conductivity. The DC conductivity, in turn, is a complex function of electric field and temperature. As a result, under certain conditions, the stress at the cable screen is higher than that at the conductor boundary. The paper presents detailed investigations of different empirical conductivity models suggested in the literature for HVDC cable applications. It is expressly shown that certain models give rise to erroneous results in electric field and temperature computations. It is pointed out that the use of these models in the design or evaluation of cables will lead to errors.
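One empirical conductivity form commonly used in the HVDC cable literature takes sigma to grow exponentially with both temperature and field, sigma(T, E) = sigma0*exp(aT)*exp(bE); the coefficients below are invented placeholders, not measured values. The sketch uses that form to reproduce the stress-inversion effect the abstract mentions: in steady state E(r) is proportional to 1/(r*sigma), so under a radial temperature gradient the field becomes highest at the screen rather than at the conductor:

```python
import numpy as np

def sigma(E, T, sigma0=1e-16, a=0.1, b=0.03):
    """One widely used empirical form: sigma0*exp(a*T)*exp(b*E).
    T in deg C, E in kV/mm; all coefficients here are illustrative."""
    return sigma0 * np.exp(a * T) * np.exp(b * E)

r = np.linspace(10.0, 20.0, 200)                  # mm: conductor -> screen
T = np.interp(r, [10.0, 20.0], [70.0, 40.0])      # hotter at the conductor
V = 100.0                                         # applied voltage, kV

def trap(y):
    """Trapezoidal integral of y over the radial grid r."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(r)) / 2.0)

# Fixed-point iteration: E(r) proportional to 1/(r*sigma(E, T)),
# rescaled each pass so the integral of E over r equals V.
E = np.full_like(r, V / (r[-1] - r[0]))           # uniform initial guess
for _ in range(200):
    w = 1.0 / (r * sigma(E, T))
    E = V * w / trap(w)
```

With these (made-up) coefficients the converged field at the screen, E[-1], exceeds the field at the conductor, E[0]; how strong that inversion is depends directly on which empirical model is chosen, which is the point of the paper.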
Abstract:
Supercritical processes have gained importance in recent years in food, environmental and pharmaceutical product processing. The design of any supercritical process needs accurate experimental data on the solubilities of solids in supercritical fluids (SCFs). Empirical equations are quite successful in correlating the solubilities of solid compounds in SCFs both in the presence and absence of cosolvents. In this work, existing solvate complex models are discussed and a new set of empirical equations is proposed. These equations correlate the solubilities of solids in supercritical carbon dioxide (both in the presence and absence of cosolvents) as a function of temperature, density of supercritical carbon dioxide and the mole fraction of cosolvent. The accuracy of the proposed models was evaluated by correlating 15 binary and 18 ternary systems. The proposed models provided the best overall correlations. (C) 2009 Elsevier B.V. All rights reserved.
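A classic example of such a density-based empirical correlation is the Chrastil equation, ln c = k*ln(rho) + a/T + b, whose constants are fitted by least squares to measured solubilities. The sketch below runs that workflow on synthetic data (all numbers invented); the paper's proposed equations additionally carry a cosolvent mole-fraction term, which is omitted here:

```python
import numpy as np

def chrastil_lnc(rho, T, k, a, b):
    """Chrastil correlation: ln c = k*ln(rho) + a/T + b,
    with rho the solvent density and T the temperature (K)."""
    return k * np.log(rho) + a / T + b

# Synthetic "measurements" around typical SC-CO2 conditions.
rng = np.random.default_rng(3)
rho = rng.uniform(300.0, 900.0, 30)      # kg/m^3
T = rng.uniform(308.0, 328.0, 30)        # K
true = (4.2, -4500.0, -10.0)             # invented (k, a, b)
lnc = chrastil_lnc(rho, T, *true) + 0.01 * rng.standard_normal(30)

# The model is linear in (k, a, b), so ordinary least squares fits it.
X = np.column_stack([np.log(rho), 1.0 / T, np.ones_like(rho)])
coef, *_ = np.linalg.lstsq(X, lnc, rcond=None)
```

Correlation quality is then judged, as in the paper, by the deviation between fitted and measured solubilities across each binary or ternary system.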
Abstract:
This paper proposes the use of empirical modeling techniques for building microarchitecture-sensitive models for compiler optimizations. The models we build relate program performance to settings of compiler optimization flags, associated heuristics and key microarchitectural parameters. Unlike traditional analytical modeling methods, this relationship is learned entirely from data obtained by measuring performance at a small number of carefully selected compiler/microarchitecture configurations. We evaluate three different learning techniques in this context, viz. linear regression, adaptive regression splines and radial basis function networks. We use the generated models to a) predict program performance at arbitrary compiler/microarchitecture configurations, b) quantify the significance of complex interactions between optimizations and the microarchitecture, and c) efficiently search for 'optimal' settings of optimization flags and heuristics for any given microarchitectural configuration. Our evaluation using benchmarks from the SPEC CPU2000 suite suggests that accurate models (< 5% average prediction error) can be generated using a reasonable number of simulations. We also find that using compiler settings prescribed by a model-based search can improve program performance by as much as 19% (with an average of 9.5%) over highly optimized binaries.
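The flavor of such empirical models can be seen in a small sketch: measure runtime at a handful of flag configurations, fit a regression that includes a flag-interaction term, and then search the full configuration space through the model instead of through simulation. Everything below (the flags, their effects, the interaction) is synthetic; the paper evaluates richer learners (regression splines, RBF networks) on real SPEC measurements:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
NFLAGS = 6

def measure(cfg):
    """Synthetic 'simulator': runtime with per-flag effects, one pairwise
    interaction (flags 0 and 1 hurt when enabled together), and noise."""
    effects = np.array([-2.0, 1.5, -0.5, 0.0, -1.0, 0.5])
    return (20.0 + effects @ cfg + 1.2 * cfg[0] * cfg[1]
            + 0.05 * rng.standard_normal())

def feats(cfg):
    """Linear features plus the flag-0/flag-1 interaction and an intercept."""
    return np.concatenate([cfg, [cfg[0] * cfg[1], 1.0]])

# Train on a small sample of configurations, as the paper advocates.
train = rng.integers(0, 2, size=(24, NFLAGS)).astype(float)
y = np.array([measure(c) for c in train])
X = np.array([feats(c) for c in train])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Model-based search over all 2^6 configurations (no new measurements).
best = min((np.array(c, dtype=float) for c in product([0, 1], repeat=NFLAGS)),
           key=lambda c: float(feats(c) @ w))
```

The learned model correctly keeps flag 0 on and flag 1 off despite their interaction, which is exactly the kind of optimization/microarchitecture coupling the paper quantifies.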
Abstract:
The Lovasz θ function of a graph is a fundamental tool in combinatorial optimization and approximation algorithms. Computing θ involves solving an SDP and is extremely expensive even for moderately sized graphs. In this paper we establish that the Lovasz θ function is equivalent to a kernel learning problem related to one-class SVM. This interesting connection opens up many opportunities for bridging graph-theoretic algorithms and machine learning. We show that there exist graphs, which we call SVM-θ graphs, on which the Lovasz θ function can be approximated well by a one-class SVM. This leads to a novel use of SVM techniques to solve algorithmic problems in large graphs, e.g. identifying a planted clique of size Θ(√n) in a random graph G(n, 1/2). A classic approach for this problem involves computing the θ function; however, it is not scalable due to the SDP computation. We show that the random graph with a planted clique is an example of an SVM-θ graph, and as a consequence an SVM-based approach easily identifies the clique in large graphs and is competitive with the state of the art. Further, we introduce the notion of a 'common orthogonal labelling', which extends the notion of an 'orthogonal labelling' of a single graph (used in defining the θ function) to multiple graphs. The problem of finding the optimal common orthogonal labelling is cast as a Multiple Kernel Learning problem and is used to identify a large common dense region in multiple graphs. The proposed algorithm achieves an order of magnitude better scalability than the state of the art.
Abstract:
In this paper we establish that the Lovasz theta function on a graph can be restated as a kernel learning problem. We introduce the notion of SVM-theta graphs, on which the Lovasz theta function can be approximated well by a support vector machine (SVM). We show that Erdős-Rényi random G(n, p) graphs are SVM-theta graphs for log⁴n/n <= p < 1. Even if we embed a large clique of size Θ(√(np/(1-p))) in a G(n, p) graph, the resultant graph still remains an SVM-theta graph. This immediately suggests an SVM-based algorithm for recovering a large planted clique in random graphs. Associated with the theta function is the notion of orthogonal labellings. We introduce common orthogonal labellings, which extend the idea of orthogonal labellings to multiple graphs. This allows us to propose a Multiple Kernel Learning (MKL) based solution which is capable of identifying a large common dense subgraph in multiple graphs. Both in the planted clique case and in the common subgraph detection problem, the proposed solutions beat the state of the art by an order of magnitude.
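The planted-clique setting these two abstracts target is easy to reproduce. The SVM-theta machinery itself is beyond a short sketch, so the code below instead runs the classical spectral baseline such methods are compared against: plant a clique in G(n, 1/2) and read the clique off the top eigenvector of the centered adjacency matrix. The sizes (n = 400, k = 100) are chosen so that spectral recovery succeeds:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 400, 100
clique = rng.choice(n, size=k, replace=False)

# G(n, 1/2) with a planted k-clique.
upper = np.triu(rng.integers(0, 2, size=(n, n)).astype(float), 1)
A = upper + upper.T
A[np.ix_(clique, clique)] = 1.0
np.fill_diagonal(A, 0.0)

# Spectral baseline: when k is large enough the planted clique dominates
# the top eigenvector of the centered adjacency matrix.
M = A - 0.5
np.fill_diagonal(M, 0.0)
_, vecs = np.linalg.eigh(M)          # eigenvalues in ascending order
v = vecs[:, -1]                      # eigenvector of the largest eigenvalue
candidates = np.argsort(-np.abs(v))[:k]   # k largest-magnitude entries
```

The candidate set overlaps the planted clique almost completely. The abstracts' contribution is that an SVM on the graph kernel achieves comparable recovery while scaling far beyond what the SDP-based theta computation allows.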
Abstract:
Restricted Boltzmann Machines (RBMs) can be used either as classifiers or as generative models. The quality of a generative RBM is measured through the average log-likelihood on test data. Due to the high computational complexity of evaluating the partition function, exact calculation of the test log-likelihood is very difficult. In recent years, several estimation methods have been suggested for approximate computation of the test log-likelihood. In this paper we present an empirical comparison of the main estimation methods, namely, the AIS algorithm for estimating the partition function, the CSL method for directly estimating the log-likelihood, and the RAISE algorithm that combines these two ideas.
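The reason estimators are needed at all is the partition function: for an RBM with nv visible units, exact evaluation requires summing over 2^nv states, feasible only for toy models. The sketch below computes the exact test log-likelihood of a tiny RBM by enumeration, the standard ground truth against which estimators like AIS, CSL and RAISE are checked; the weights are random, not trained:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(6)
nv, nh = 6, 4                         # tiny: only 2**6 visible states
W = 0.1 * rng.standard_normal((nv, nh))
b = 0.1 * rng.standard_normal(nv)     # visible biases
c = 0.1 * rng.standard_normal(nh)     # hidden biases

def free_energy(v):
    """F(v) with hidden units summed out analytically:
    F(v) = -b.v - sum_j log(1 + exp(c_j + (v W)_j)); p(v) ~ exp(-F(v))."""
    return -b @ v - np.sum(np.logaddexp(0.0, c + v @ W))

states = np.array(list(product([0.0, 1.0], repeat=nv)))
log_z = np.logaddexp.reduce([-free_energy(v) for v in states])

def log_likelihood(v):
    """Exact log p(v): what AIS/CSL/RAISE approximate for real-sized RBMs."""
    return -free_energy(v) - log_z
```

At realistic sizes (hundreds of visible units) this enumeration is impossible, which is precisely the gap the three estimators in the paper fill.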
Abstract:
We find in complementary experiments and event-driven simulations of sheared inelastic hard spheres that the velocity autocorrelation function ψ(t) decays much faster than the t^(-3/2) obtained for a fluid of elastic spheres at equilibrium. Particle displacements are measured in experiments inside a gravity-driven flow sheared by a rough wall. The average packing fraction obtained in the experiments is 0.59, and the packing fraction in the simulations is varied between 0.5 and 0.59. The motion is observed to be diffusive over long times except in experiments where there is layering of particles parallel to boundaries, and diffusion is inhibited between layers. Regardless, a rapid decay of ψ(t) is observed, indicating that this is a feature of the sheared dissipative fluid, and is independent of the details of the relative particle arrangements. An important implication of our study is that the non-analytic contribution to the shear stress may not be present in a sheared inelastic fluid, leading to a wider range of applicability of kinetic theory approaches to dense granular matter.
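The quantity at the center of the abstract, ψ(t), is computed from particle velocity records by averaging over time origins and particles. The sketch below implements that estimator and checks it on an Ornstein-Uhlenbeck velocity process, an invented stand-in with a known exponential decay, not the hard-sphere dynamics of the paper:

```python
import numpy as np

def vacf(vel):
    """Normalized velocity autocorrelation psi(t), averaged over time
    origins and particles. vel has shape (steps, particles, dims)."""
    steps = vel.shape[0]
    norm = np.mean(np.sum(vel * vel, axis=-1))
    psi = np.empty(steps)
    for lag in range(steps):
        dots = np.sum(vel[: steps - lag] * vel[lag:], axis=-1)
        psi[lag] = np.mean(dots) / norm
    return psi

# Ornstein-Uhlenbeck test signal: psi(t) should decay like exp(-gamma*t).
rng = np.random.default_rng(7)
steps, nparticles, dt, gamma = 1000, 40, 0.01, 2.0
v = np.zeros((steps, nparticles, 3))
for i in range(1, steps):
    v[i] = (v[i - 1] - gamma * v[i - 1] * dt
            + np.sqrt(dt) * rng.standard_normal((nparticles, 3)))
psi = vacf(v)
```

Distinguishing an exponential-like decay from the equilibrium t^(-3/2) tail in such estimates is exactly the comparison the paper draws from its experimental and simulated trajectories.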
Abstract:
Although LH is essential for survival and function of the corpus luteum (CL) in higher primates, luteolysis occurs during nonfertile cycles without a discernible decrease in circulating LH levels. Using genome-wide expression analysis, several experiments were performed to examine the processes of luteolysis and rescue of luteal function in monkeys. Induced luteolysis with a GnRH receptor antagonist (Cetrorelix) resulted in differential regulation of 3949 genes, whereas replacement with exogenous LH (Cetrorelix plus LH) led to regulation of 4434 genes (1563 down-regulated and 2871 up-regulated). A model system for prostaglandin (PG) F2α-induced luteolysis in the monkey was standardized and demonstrated that PGF2α regulated expression of 2290 genes in the CL. Analysis of the LH-regulated luteal transcriptome revealed that 120 genes were regulated in an antagonistic fashion by PGF2α. Based on the microarray data, 25 genes were selected for validation by real-time RT-PCR analysis, and expression of these genes was also examined in the CL throughout the luteal phase and from monkeys treated with human chorionic gonadotropin (hCG) to mimic early pregnancy. The results indicated changes in expression of genes favorable to PGF2α action during the late to very late luteal phase, and expression of many of these genes was regulated in an opposite manner by exogenous hCG treatment. Collectively, the findings suggest that curtailment of expression of downstream LH-target genes, possibly through PGF2α action on the CL, is among the mechanisms underlying the cross talk between luteotropic and luteolytic signaling pathways that results in the cessation of luteal function, but hCG is likely to abrogate the PGF2α-responsive gene expression changes, resulting in the luteal rescue crucial for the maintenance of early pregnancy. (Endocrinology 150: 1473-1484, 2009)
Abstract:
Immunization of proven fertile adult male monkeys (n = 3) with a recombinant FSH receptor protein preparation (oFSHR-P), representing amino acids 1-134 of the extracellular domain of the receptor (Mr ~15 kDa), resulted in production of receptor-blocking antibodies. The ability of the antibody to bind a particulate FSH receptor preparation and receptors in intact granulosa cells was markedly (by 30-80%) inhibited by FSH. Serum T levels and LH receptor function following immunization remained unchanged. The immunized monkeys showed a 50% reduction (p < 0.001) in transformation of spermatogonia (2C) to primary spermatocytes (4C) as determined by flow cytometry, and the 4C:2C ratio showed a correlative change (R = 0.81, p < 0.0007) with the reduction in fertility index (sperm count × motility score). Breeding studies indicated that the monkeys became infertile between 242 and 368 days of immunization, when the fertility index was in the range of 123 ± 76 to 354 ± 42 (compared to a value of 1602 ± 384 on day 0). As the effects observed are nearly identical to those seen following immunization with FSH, it is suggested that oFSHR-P can substitute for FSH in the development of a contraceptive vaccine.
Abstract:
We apply the method of multiple scales (MMS) to a well-known model of regenerative cutting vibrations in the large delay regime. By "large" we mean the delay is much larger than the time scale of typical cutting tool oscillations. The MMS up to second order for such systems has been developed recently, and is applied here to study tool dynamics in the large delay regime. The second-order analysis is found to be much more accurate than the first-order analysis. Numerical integration of the MMS slow flow is much faster than for the original equation, yet shows excellent accuracy. The main advantage of the present analysis is that infinite-dimensional dynamics is retained in the slow flow, while the more usual center manifold reduction gives a planar phase space. Lower-dimensional dynamical features, such as Hopf bifurcations and families of periodic solutions, are also captured by the MMS. Finally, the strong sensitivity of the dynamics to small changes in parameter values is seen clearly.
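The regime in question is easy to explore numerically. The sketch below integrates a generic linear regenerative-cutting-type oscillator with delayed position feedback, x'' + 2ζx' + x = k(x(t-τ) - x(t)), using semi-implicit Euler with a history buffer; the parameter values are illustrative (chosen inside the stable region, so the response decays) and are not taken from the paper, which retains nonlinear terms and uses the MMS rather than direct integration:

```python
import numpy as np

# Illustrative parameters: light damping, a small cutting gain, and a delay
# much larger than the oscillator's ~2*pi natural period ("large delay").
zeta, k, tau, dt = 0.05, 0.02, 100.0, 0.01
nd = int(round(tau / dt))            # delay expressed in time steps
steps = 40000

x = np.zeros(steps)
v = np.zeros(steps)
x[:nd + 1] = 0.01                    # constant initial history on [-tau, 0]

for i in range(nd, steps - 1):
    acc = -2.0 * zeta * v[i] - x[i] + k * (x[i - nd] - x[i])
    v[i + 1] = v[i] + dt * acc       # semi-implicit (symplectic) Euler
    x[i + 1] = x[i] + dt * v[i + 1]
```

For these values the delayed feedback is too weak to overcome the damping, so the amplitude decays; the direct simulation must step through every point of the long history buffer, which is why integrating the MMS slow flow instead is so much faster.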
Abstract:
Texture evolution in a low-cost beta titanium alloy was studied for different modes of rolling and heat treatment. The alloy was cold rolled by unidirectional rolling (UDR) and multi-step cross rolling (MSCR). The cold rolled material was either aged directly or recrystallized and then aged. The evolution of texture in the alpha and beta phases was studied. The rolling texture of the beta phase, characterized by the gamma fiber, is stronger for MSCR than for UDR, while the trend is reversed on recrystallization. The mode of rolling affects the alpha transformation texture on aging, with a smaller alpha lath size and stronger alpha texture in UDR than in MSCR. The defect structure in the beta phase influences the evolution of alpha texture on aging. A stronger defect structure in the beta phase leads to variant selection, with the rolled samples showing fewer variants than the recrystallized samples.