959 results for Combined lower upper bound estimation (LUBE)


Relevance: 100.00%

Abstract:

In the present paper, based on the principles of gauge/gravity duality, we analytically compute the shear viscosity to entropy density ratio (eta/s) corresponding to the superfluid phase in Einstein-Gauss-Bonnet gravity. From our analysis we note that the ratio indeed receives a finite-temperature correction below a certain critical temperature (T < T_c). This demonstrates the non-universality of the eta/s ratio in higher-derivative theories of gravity. We also compute the upper bound for the Gauss-Bonnet coupling (lambda) corresponding to the symmetry-broken phase and note that the upper bound on the coupling does not seem to change as long as we are close to the critical point of the phase diagram. However, the corresponding lower bound of the eta/s ratio seems to get modified by the finite-temperature effects.
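For context (this is not a result of the paper, just the widely quoted five-dimensional normal-phase value that the finite-temperature corrections below T_c modify): in Einstein-Gauss-Bonnet gravity the leading-order ratio reads

```latex
\frac{\eta}{s} \;=\; \frac{1}{4\pi}\bigl(1 - 4\lambda\bigr),
\qquad\text{reducing to the KSS value}\qquad
\frac{\eta}{s} \;=\; \frac{1}{4\pi}
\quad\text{as } \lambda \to 0 .
```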

Relevance: 100.00%

Abstract:

Given a Boolean function f : F_2^n -> {0, 1}, we say a triple (x, y, x + y) is a triangle in f if f(x) = f(y) = f(x + y) = 1. A triangle-free function contains no triangle. If f differs from every triangle-free function on at least an epsilon fraction of points, then f is said to be epsilon-far from triangle-free. In this work, we analyze the query complexity of testers that, with constant probability, distinguish triangle-free functions from those epsilon-far from triangle-free. The canonical tester for triangle-freeness is the algorithm that repeatedly picks x and y uniformly and independently at random from F_2^n, queries f(x), f(y) and f(x + y), and checks whether f(x) = f(y) = f(x + y) = 1. Green showed that the canonical tester rejects functions epsilon-far from triangle-free with constant probability if its query complexity is a tower of 2's whose height is polynomial in 1/epsilon. Fox later improved the height of the tower in Green's upper bound. A trivial lower bound of Omega(1/epsilon) on the query complexity is immediate. In this paper, we give the first non-trivial lower bound for the number of queries needed. We show that, for every small enough epsilon, there exists an integer n_0(epsilon) such that for all n >= n_0 there exists a function f depending on all n variables which is epsilon-far from being triangle-free and on which the canonical tester requires a number of queries exceeding the trivial lower bound. We also show that the query complexity of any general (possibly adaptive) one-sided tester for triangle-freeness is at least the square root of the query complexity of the corresponding canonical tester. Consequently, any one-sided tester for triangle-freeness must make at least the square root of that many queries.
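A minimal sketch of the canonical tester described above, with f given as a Python callable on n-bit inputs encoded as integers (so addition over F_2^n is bitwise XOR); the number of repetitions q is left as a free parameter rather than any particular bound from the paper.

```python
import random

def canonical_triangle_test(f, n, q):
    """Canonical tester for triangle-freeness of f : F_2^n -> {0, 1}.

    Repeats q times: draw x, y uniformly at random, query f at x, y
    and x + y (bitwise XOR), and reject if all three values equal 1,
    i.e. if a triangle (x, y, x + y) has been found.
    Returns True (accept) if no triangle was seen.
    """
    for _ in range(q):
        x = random.randrange(2 ** n)
        y = random.randrange(2 ** n)
        if f(x) == 1 and f(y) == 1 and f(x ^ y) == 1:
            return False  # triangle found: reject
    return True  # no triangle found in q trials: accept

# Example: the all-ones function on 4 bits contains many triangles
# and is rejected with high probability.
print(canonical_triangle_test(lambda x: 1, n=4, q=100))
```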

Relevance: 100.00%

Abstract:

Generalized spatial modulation (GSM) uses n_t transmit antenna elements but fewer transmit radio frequency (RF) chains, n_rf. Spatial modulation (SM) and spatial multiplexing are special cases of GSM with n_rf = 1 and n_rf = n_t, respectively. In GSM, in addition to conveying information bits through n_rf conventional modulation symbols (for example, QAM), the indices of the n_rf active transmit antennas also convey information bits. In this paper, we investigate GSM for large-scale multiuser MIMO communications on the uplink. Our contributions in this paper include: 1) an average bit error probability (ABEP) analysis for maximum-likelihood detection in multiuser GSM-MIMO on the uplink, where we derive an upper bound on the ABEP, and 2) low-complexity algorithms for GSM-MIMO signal detection and channel estimation at the base station receiver based on message passing. The analytical upper bounds on the ABEP are found to be tight at moderate to high signal-to-noise ratios (SNR). The proposed receiver algorithms are found to scale very well in complexity while achieving near-optimal performance in large dimensions. Simulation results show that, for the same spectral efficiency, multiuser GSM-MIMO can outperform multiuser SM-MIMO as well as conventional multiuser MIMO, by about 2 to 9 dB at a bit error rate of 10^-3. Such SNR gains in GSM-MIMO compared to SM-MIMO and conventional MIMO can be attributed to the fact that, because of a larger number of spatial index bits, GSM-MIMO can use a lower-order QAM alphabet which is more power efficient.
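A small sketch of the GSM rate bookkeeping described above: the bits per channel use are the antenna-index bits, floor(log2 C(n_t, n_rf)), plus the modulation bits, n_rf * log2(M). The antenna counts and QAM sizes below are illustrative choices, not the paper's simulation settings.

```python
from math import comb, floor, log2

def gsm_bits_per_channel_use(n_t: int, n_rf: int, M: int) -> int:
    """Bits conveyed per channel use by GSM with n_t transmit antennas,
    n_rf active RF chains, and an M-ary modulation alphabet."""
    index_bits = floor(log2(comb(n_t, n_rf)))  # bits carried by the antenna-activation pattern
    symbol_bits = n_rf * int(log2(M))          # bits carried by the conventional modulation symbols
    return index_bits + symbol_bits

# With the extra index bits, GSM can match the spectral efficiency of
# spatial multiplexing while using a smaller (more power-efficient) alphabet:
print(gsm_bits_per_channel_use(n_t=8, n_rf=2, M=4))   # GSM with 4-QAM        -> 8 bits
print(gsm_bits_per_channel_use(n_t=2, n_rf=2, M=16))  # multiplexing, 16-QAM  -> 8 bits
```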

Relevance: 100.00%

Abstract:

Foundations for subsea infrastructure in deep water are subjected to asymmetric environmental loads, which underscores the importance of combined torsional and horizontal loading effects on the bearing capacity of rectangular shallow foundations. The purpose of this study is to investigate the undrained sliding and torsional bearing capacity of rectangular and square shallow foundations, together with the interaction response under combined loading, using three-dimensional finite element (3D-FE) analysis. Upper bound plastic limit analysis is employed to establish reference values for the horizontal and torsional bearing capacities and an interaction relationship for the combined loading condition. The satisfactory agreement between the plastic limit analysis (PLA) and 3D-FE results for ultimate capacity and interaction curves shows that the simple PLA solutions can be used to evaluate the bearing capacity of foundations under combined sliding and torsion.

Relevance: 100.00%

Abstract:

This paper investigates the computation of lower/upper expectations that must cohere with a collection of probabilistic assessments and a collection of judgements of epistemic independence. New algorithms, based on multilinear programming, are presented, both for independence among events and among random variables. Separation properties of graphical models are also investigated.
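Without the independence judgements, coherence with finitely many linear probabilistic assessments reduces to a linear program over the probability simplex; the hypothetical example below (not taken from the paper) computes a lower expectation that way. The epistemic-independence constraints studied in the paper make the feasible set non-convex, which is what leads to the multilinear programs the authors propose.

```python
import numpy as np
from scipy.optimize import linprog

# Possibility space with 4 atoms; f is the random variable whose lower
# expectation we want. The assessments below are hypothetical examples.
f = np.array([1.0, 2.0, 0.0, 3.0])

# Assessments: P(A) >= 0.3 and P(B) <= 0.6 for events A = {0, 1}, B = {1, 3},
# encoded as A_ub @ p <= b_ub.
A_ub = np.array([
    [-1.0, -1.0, 0.0, 0.0],   # -P(A) <= -0.3
    [ 0.0,  1.0, 0.0, 1.0],   #  P(B) <=  0.6
])
b_ub = np.array([-0.3, 0.6])

# Probabilities sum to one.
A_eq = np.ones((1, 4))
b_eq = np.array([1.0])

# Lower expectation: minimise E_p[f] over all coherent mass functions p.
res = linprog(c=f, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, 1.0)] * 4)
print("lower expectation:", res.fun)
# The upper expectation is obtained by minimising -f and negating the result.
```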

Relevance: 100.00%

Abstract:

In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For a power system with these introduced uncertainties, a filter is designed within the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.
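A minimal sketch of the logarithmic quantizer mentioned above and of the sector bound that lets its error be treated as a norm-bounded uncertainty; the density parameter rho = 0.8 and the reference level u0 = 1 are arbitrary illustrative choices, and the filter recursion itself is not reproduced here.

```python
import math

def log_quantizer(v: float, rho: float = 0.8, u0: float = 1.0) -> float:
    """Infinite-level logarithmic quantizer with levels {+/- rho^i * u0} and 0.

    It satisfies the standard sector bound q(v) = (1 + d(v)) * v with
    |d(v)| <= delta = (1 - rho) / (1 + rho), which is how the quantization
    error can be handled as a norm-bounded uncertainty in filter design.
    """
    if v == 0.0:
        return 0.0
    delta = (1.0 - rho) / (1.0 + rho)
    mag = abs(v)
    # Index of the unique level u_i with u_i/(1+delta) < |v| <= u_i/(1-delta).
    i = math.floor(math.log(mag * (1.0 - delta) / u0) / math.log(rho))
    return math.copysign(u0 * rho ** i, v)

# Quick numerical check of the sector bound on a few sample values.
delta = (1 - 0.8) / (1 + 0.8)
for v in (0.03, 0.7, 5.0, -2.4):
    q = log_quantizer(v)
    assert abs(q - v) <= delta * abs(v) + 1e-12
    print(v, "->", q)
```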

Relevance: 100.00%

Abstract:

The multiprocessor task graph scheduling problem has been extensively studied as an academic optimization problem that arises when optimizing the execution time of a parallel algorithm on a parallel computer. The problem is known to be NP-hard. Many good approaches, using a variety of optimization algorithms, have been proposed to find the optimum solution for this problem with less computational time; one of them is the branch and bound algorithm. In this paper, we propose a branch and bound algorithm for the multiprocessor scheduling problem. We investigate the algorithm by comparing two different lower bounds in terms of their computational costs and the size of the pruned tree. Several experiments are made with a small set of problems, and the results are compared in different sections.
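A minimal sketch of the branch-and-bound idea, pared down to independent tasks on identical processors (so the precedence and communication constraints of the task-graph problem are dropped); the pruning bound used here, max(remaining work / m, longest remaining task), is a standard one and is not necessarily either of the two lower bounds compared in the paper.

```python
def branch_and_bound_makespan(durations, m):
    """Minimal branch-and-bound for scheduling independent tasks on m
    identical processors to minimise the makespan."""
    tasks = sorted(durations, reverse=True)   # longest-first improves pruning
    best = [sum(tasks)]                       # trivial upper bound: everything on one processor

    def lower_bound(i, loads):
        # Bound on the best achievable makespan from this partial schedule.
        remaining = sum(tasks[i:])
        return max(max(loads),
                   (sum(loads) + remaining) / m,
                   max(tasks[i:], default=0))

    def branch(i, loads):
        if i == len(tasks):
            best[0] = min(best[0], max(loads))
            return
        if lower_bound(i, loads) >= best[0]:
            return                            # prune this subtree
        for p in range(m):                    # try placing task i on each processor
            loads[p] += tasks[i]
            branch(i + 1, loads)
            loads[p] -= tasks[i]

    branch(0, [0.0] * m)
    return best[0]

print(branch_and_bound_makespan([4, 3, 3, 2, 2, 2], m=2))  # optimal makespan: 8
```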

Relevance: 100.00%

Abstract:

In the existing literature, the existence conditions and design procedures for scalar functional observers are available for the cases where the observer order p is either p = 1 or p = (v - 1), where v is the observability index of the matrix pair (C, A). Therefore, if an observer of order p = 1 does not exist, the only other available option has been to use a higher-order observer with p = (v - 1). This paper shows that there is another option: scalar linear functional observers can be designed with an order lower than the well-known upper bound (v - 1). The paper provides the existence conditions and a design procedure for scalar functional observers of order 0 ≤ p ≤ 2, and demonstrates the presented results with a numerical example.
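For orientation, the standard structure of a linear functional observer used throughout this literature (the paper's specific existence conditions and design procedure are not reproduced here) is:

```latex
\begin{aligned}
&\text{Plant and scalar functional to estimate:} && \dot{x} = A x + B u, \quad y = C x, \quad z = L x,\\[2pt]
&\text{Observer of order } p \ (w \in \mathbb{R}^{p}): && \dot{w} = N w + J y + H u, \quad \hat{z} = D w + E y .
\end{aligned}
```

The design problem is to choose N (Hurwitz), J, H, D and E so that the error between the estimate and z decays to zero for all initial conditions and inputs; the observability index v of (C, A) is what gives the classical upper bound (v - 1) on the required order p.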

Relevance: 100.00%

Abstract:

We report on a search for second generation leptoquarks (LQ_2) which decay into a muon plus a quark in pbar-p collisions at a center-of-mass energy of sqrt(s) = 1.96 TeV in the D0 detector, using an integrated luminosity of about 300 pb^-1. No evidence for a leptoquark signal is observed, and an upper bound on the product of the cross section for single leptoquark production times the branching fraction into a quark and a muon was determined for second generation scalar leptoquarks as a function of the leptoquark mass. This result has been combined with a previously published D0 search for leptoquark pair production to obtain leptoquark mass limits as a function of the leptoquark-muon-quark coupling, lambda. Assuming lambda = 1, the lower limits on the mass of a second generation scalar leptoquark coupling to a u quark and a muon are m_LQ2 > 274 GeV and m_LQ2 > 226 GeV for beta = 1 and beta = 1/2, respectively.

Relevance: 100.00%

Abstract:

Several statistical models can be used for assessing genotype × environment interaction (GEI) and studying genotypic stability. The objectives of this research were to show how (i) to use Bayesian methodology for computing Shukla's phenotypic stability variance and (ii) to incorporate prior information on the parameters for better estimation. Potato [Solanum tuberosum subsp. andigenum (Juz. & Bukasov) Hawkes], wheat (Triticum aestivum L.), and maize (Zea mays L.) multi-environment trials (MET) were used to illustrate the application of the Bayes paradigm. The potato trial included 15 genotypes, but prior information for just three genotypes was used. The wheat trial used prior information on all 10 genotypes included in the trial, whereas for the maize trial noninformative priors for the nine genotypes were used. Concerning the posterior distribution of the genotypic means, the maize MET with 20 sites gave less dispersed posterior distributions of the genotypic means than did the other METs, which included fewer environments. The Bayesian approach also allows the use of other statistical strategies such as the truncated normal distribution (used in this study): when analyzing grain yield, a lower bound of zero and an upper bound set by the researcher's experience can be used. The Bayesian paradigm offers plant breeders the possibility of computing the probability of a genotype being the best performer. The results of this study show that although some genotypes may have a very low probability of being the best in all sites, they have a relatively good chance of being among the five highest yielding genotypes.
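A minimal sketch of how the "probability of being the best performer" and "probability of being among the top yielders" can be read off posterior draws of the genotype means; the draws below are simulated placeholders, not the potato, wheat or maize MET posteriors.

```python
import numpy as np

# Hypothetical posterior draws of genotype means, shape (n_draws, n_genotypes),
# e.g. obtained by MCMC from a Bayesian stability model.
rng = np.random.default_rng(0)
draws = rng.normal(loc=[5.0, 5.2, 4.8, 5.1, 4.9], scale=0.3, size=(4000, 5))

# Probability that each genotype is the single best performer...
best = np.argmax(draws, axis=1)
p_best = np.bincount(best, minlength=draws.shape[1]) / draws.shape[0]

# ...and the probability of being among the top k performers per draw
# (here k = 3 of 5 genotypes, mirroring the paper's "top five" summary).
k = 3
ranks = np.argsort(-draws, axis=1)[:, :k]   # indices of the k largest means in each draw
p_top_k = np.array([(ranks == g).any(axis=1).mean() for g in range(draws.shape[1])])

print("P(best):  ", np.round(p_best, 3))
print("P(top-3): ", np.round(p_top_k, 3))
```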

Relevance: 100.00%

Abstract:

Estimation of the lower flammability limits of C-H compounds at 25 °C and 1 atm, at moderate temperatures, and in the presence of diluents was the objective of this study. A set of 120 C-H compounds was divided into a correlation set and a prediction set of 60 compounds each. The absolute average relative error for the total set was 7.89%; for the correlation set it was 6.09%; and for the prediction set it was 9.68%. However, it was shown that by considering different sources of experimental data these values were reduced to 6.5% for the prediction set and to 6.29% for the total set. The method showed consistency with Le Chatelier's law for binary mixtures of C-H compounds. When tested over a temperature range from 5 °C to 100 °C, the absolute average relative errors were 2.41% for methane, 4.78% for propane, 0.29% for iso-butane, and 3.86% for propylene. When nitrogen was added, the absolute average relative errors were 2.48% for methane, 5.13% for propane, 0.11% for iso-butane, and 0.15% for propylene. When carbon dioxide was added, the absolute relative errors were 1.80% for methane, 5.38% for propane, 0.86% for iso-butane, and 1.06% for propylene.
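Le Chatelier's law for fuel mixtures, against which the method was checked, can be evaluated as below; the methane and propane limits in the example are approximate handbook values, not data from the paper.

```python
def le_chatelier_lfl(mole_fractions, lfls):
    """Lower flammability limit (vol %) of a fuel mixture by Le Chatelier's law:
    LFL_mix = 1 / sum(x_i / LFL_i), with x_i the fuel mole fractions
    (summing to 1) and LFL_i the pure-component limits in vol %."""
    if abs(sum(mole_fractions) - 1.0) > 1e-9:
        raise ValueError("fuel mole fractions must sum to 1")
    return 1.0 / sum(x / l for x, l in zip(mole_fractions, lfls))

# Illustrative pure-component LFLs in air (approximate literature values, vol %):
# methane ~5.0, propane ~2.1. A 50/50 molar blend:
print(round(le_chatelier_lfl([0.5, 0.5], [5.0, 2.1]), 2))  # about 2.96 vol %
```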

Relevance: 100.00%

Abstract:

We introduce a version of operational set theory, OST−, without a choice operation, which has machinery for Δ0 separation based on truth functions and the separation operator, and a new kind of applicative set theory, the so-called weak explicit set theory WEST, based on Gödel operations. We show that both theories and Kripke-Platek set theory KP with infinity are pairwise Π1-equivalent. We also show analogous assertions for subtheories with ∈-induction restricted in various ways and for supertheories extended by powerset, beta, limit and Mahlo operations. Whereas the upper bound is given by a refinement of inductive definitions in KP, the lower bound is obtained by combining, in a specific way, realisability, (intuitionistic) forcing and negative interpretations. Thus, despite the interpretability between classical theories, we make "a detour via intuitionistic theories". The combined interpretation, seen as a model construction in the sense of Visser's miniature model theory, is a new way of constructing models of classical theories and could be said to be the third kind of model construction ever used that is non-trivial at the level of logical connectives, after generic extension à la Cohen and Krivine's classical realisability models.

Relevance: 100.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance: 100.00%

Abstract:

This work describes a programme of activities relating to a mechanical study of the Conform extrusion process. The main objective was to provide a basic understanding of the mechanics of the Conform process, with particular emphasis placed on modelling using experimental and theoretical considerations. The experimental equipment used includes a state-of-the-art computer-aided data-logging system and high-temperature load cells (up to 260 °C) manufactured from tungsten carbide. Full details of the experimental equipment are presented in Sections 3 and 4. A theoretical model is given in Section 5. The model presented is based on the upper bound theorem, using a variation of the existing extrusion theories combined with temperature changes in the feed metal across the deformation zone. In addition, the constitutive equations used in the model have been generated from existing experimental data. Theoretical and experimental data are presented in tabular form in Section 6. The discussion of results includes a comprehensive graphical presentation of the experimental and theoretical data. The main findings are: (i) the establishment of stress/strain relationships and an energy balance in order to study the factors affecting redundant work, and hence a model suitable for design purposes; (ii) optimisation of the process, by determination of the extrusion pressure for the range of reductions and of changes in the extrusion chamber geometry at lower wheel speeds; and (iii) an understanding of the control of the peak temperature reached during extrusion.
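For reference, the general statement of the upper bound theorem on which such extrusion models rest is that the external forming power is bounded above by the internal dissipation computed for any kinematically admissible velocity field (the paper's particular velocity field and temperature-dependent flow stress are not reproduced here):

```latex
p\,A\,v_{0} \;\le\; J^{*} \;=\;
\int_{V} \bar{\sigma}\,\dot{\bar{\varepsilon}}\;\mathrm{d}V
\;+\; \int_{S_{\Gamma}} k\,\lvert\Delta v\rvert\;\mathrm{d}S
\;+\; \int_{S_{f}} m\,k\,\lvert\Delta v\rvert\;\mathrm{d}S ,
```

where the three terms are the homogeneous deformation power, the shear power on internal surfaces of velocity discontinuity, and the friction power on tool interfaces (k is the shear yield stress and m the friction factor), and p is the extrusion pressure acting on the feed cross-section A moving at speed v_0.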

Relevance: 100.00%

Abstract:

The Laurentide Ice Sheet (LIS) was a large, dynamic ice sheet in the early Holocene. The glacial events through Hudson Strait leading to its eventual demise are recorded in the well-dated Labrador shelf core MD99-2236 from the Cartwright Saddle. We develop a detailed history of the timing of ice-sheet discharge events from the Hudson Strait outlet of the LIS during the Holocene using high-resolution detrital carbonate, ice-rafted detritus (IRD), δ18O, and sediment color data. Eight detrital carbonate peaks (DCPs) associated with IRD peaks and light oxygen isotope events punctuate the MD99-2236 record between 11.5 and 8.0 ka. We use the stratigraphy of the DCPs developed from MD99-2236 to select the appropriate ΔR to calibrate the ages of recorded glacial events in Hudson Bay and Hudson Strait such that they match the DCPs in MD99-2236. We associate the eight DCPs with H0, the Gold Cove advance, the Noble Inlet advance, the initial retreat of the Hudson Strait ice stream (HSIS) from Hudson Strait, the opening of the Tyrrell Sea, and the drainage of glacial lakes Agassiz and Ojibway. The opening of Foxe Channel and the retreat of glacial ice from Foxe Basin are represented by a shoulder in the carbonate data. A ΔR of 350 years applied to the radiocarbon ages constraining glacial events H0 through the opening of the Tyrrell Sea provided the best match with the MD99-2236 DCPs; ΔR values and ages from the literature are used for the younger events. A very close age match was achieved between the 8.2 ka cold event in the Greenland ice cores, DCP7 (8.15 ka BP), and the drainage of glacial lakes Agassiz and Ojibway. Our stratigraphic comparison between the DCPs in MD99-2236 and the calibrated ages of Hudson Strait/Bay deglacial events shows that the retreat of the HSIS, the opening of the Tyrrell Sea, and the catastrophic drainage of glacial lakes Agassiz and Ojibway at 8.2 ka are separate events that have been combined in previous estimates of the timing of the 8.2 ka event from marine records. SW Iceland shelf core MD99-2256 documents freshwater entrainment into the subpolar gyre from the Hudson Strait outlet via the Labrador, North Atlantic, and Irminger currents. The timing of freshwater release from the LIS Hudson Strait outlet in MD99-2236 matches evidence for freshwater forcing and for LIS icebergs carrying foreign minerals to the SW Iceland shelf between 11.5 and 8.2 ka. The congruency of these records supports the conclusion that freshwater from the retreat of the LIS through Hudson Strait was entrained into the subpolar gyre, and it provides specific time periods when pulses of LIS freshwater were present to influence climate.