943 results for Branch and bounds
Abstract:
Vialaea minutella was consistently isolated from infected mango trees showing branch dieback symptoms in northern Queensland. The fungus was identified by morphology and confirmed with molecular sequence data. This is the first report of V. minutella in Australia. The systematic position of Vialaea was confirmed to be in the Xylariales based on reconstructed LSU sequence data.
Abstract:
The max-coloring problem is to compute a legal coloring of the vertices of a graph G = (V, E) with a non-negative weight function w on V such that Σ_{i=1}^{k} max_{v ∈ C_i} w(v) is minimized, where C_1, …, C_k are the color classes. Max-coloring general graphs is as hard as the classical vertex coloring problem, the special case in which all vertices have unit weight. In fact, in some cases it can be even harder: for example, no polynomial-time algorithm is known for max-coloring trees. In this paper we consider the problem of max-coloring paths and its generalization, max-coloring a broad class of trees, and show it can be solved in time O(|V| + time for sorting the vertex weights). When vertex weights belong to R, we show a matching lower bound of Ω(|V| log |V|) in the algebraic computation tree model.
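As a concrete illustration of the objective above, the sketch below brute-forces max-coloring on a small path. The exhaustive search and the function names are our own illustration, not the paper's near-linear-time algorithm:

```python
from itertools import product

def max_coloring_cost(weights, coloring):
    """Cost of a coloring: sum over color classes of the max weight in the class."""
    classes = {}
    for v, c in enumerate(coloring):
        classes.setdefault(c, []).append(weights[v])
    return sum(max(ws) for ws in classes.values())

def is_proper_path_coloring(coloring):
    """On a path v0-v1-...-v(n-1), adjacent vertices must get different colors."""
    return all(a != b for a, b in zip(coloring, coloring[1:]))

def brute_force_max_coloring(weights, max_colors):
    """Exhaustive minimum max-coloring cost of a path (exponential time;
    for illustration only, unlike the paper's near-linear algorithm)."""
    best = None
    for coloring in product(range(max_colors), repeat=len(weights)):
        if is_proper_path_coloring(coloring):
            cost = max_coloring_cost(weights, coloring)
            if best is None or cost < best:
                best = cost
    return best

# Alternating heavy/light weights: one class holds all the 5s, one holds the 1s.
print(brute_force_max_coloring([5, 1, 5, 1, 5], 3))  # 6
```

Note that the optimum here uses only two colors even though three are available: adding a color class can only add another max term to the sum.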
Abstract:
We present a general formalism for deriving bounds on the shape parameters of the weak and electromagnetic form factors, using as input correlators calculated from perturbative QCD and exploiting analyticity and unitarity. The values resulting from the symmetries of QCD at low energies or from lattice calculations at special points inside the analyticity domain can be included in an exact way. We write down the general solution of the corresponding Meiman problem for an arbitrary number of interior constraints and the integral equations that allow one to include the phase of the form factor along a part of the unitarity cut. A formalism that includes the phase and some information on the modulus along a part of the cut is also given. For illustration we present constraints on the slope and curvature of the K_{l3} scalar form factor and discuss our findings in some detail. The techniques are useful for checking the consistency of various inputs and for controlling the parameterizations of the form factors entering precision predictions in flavor physics.
Abstract:
Transfer function coefficients (TFC) are widely used to test linear analog circuits for parametric and catastrophic faults. This paper presents closed-form expressions for an upper bound on the defect level (DL) and a lower bound on the fault coverage (FC) achievable in the TFC-based test method. The computed bounds have been tested and validated on several benchmark circuits. Further, application of these bounds to scalable RC ladder networks reveals a number of interesting characteristics. The approach adopted here is general and can be extended to find bounds on the DL and FC of other parametric test methods for linear and non-linear circuits.
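A minimal sketch of the idea behind TFC-based testing (our own illustration; the circuit, tolerance band, and function names are assumptions, not from the paper): a parametric fault in a component shifts a transfer-function coefficient outside its nominal tolerance band, which is how such faults are detected.

```python
def rc_lowpass_tfc(R, C):
    """Denominator coefficients of the RC low-pass transfer function
    H(s) = 1 / (R*C*s + 1), i.e. [R*C, 1]."""
    return [R * C, 1.0]

nominal = rc_lowpass_tfc(1e3, 1e-6)    # RC = 1e-3
faulty = rc_lowpass_tfc(1.5e3, 1e-6)   # 50% parametric fault in R

# Flag a fault if any coefficient deviates more than 10% from nominal.
tol = 0.10
detected = any(abs(f - n) > tol * abs(n) for f, n in zip(faulty, nominal))
print(detected)  # True
```

The paper's contribution concerns bounding how often such a threshold test misses faults (fault coverage) or passes defective circuits (defect level), not the threshold test itself.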
Abstract:
We generalize the Nozieres-Schmitt-Rink method to study the repulsive Fermi gas in the absence of molecule formation, i.e., in the so-called ``upper branch.'' We find that the system remains stable except close to resonance at sufficiently low temperatures. With increasing scattering length, the energy density of the system attains a maximum at a positive scattering length before resonance. This is shown to arise from Pauli blocking which causes the bound states of fermion pairs of different momenta to disappear at different scattering lengths. At the point of maximum energy, the compressibility of the system is substantially reduced, leading to a sizable uniform density core in a trapped gas. The change in spin susceptibility with increasing scattering length is moderate and does not indicate any magnetic instability. These features should also manifest in Fermi gases with unequal masses and/or spin populations.
Abstract:
Estimating program worst-case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested the use of phases in such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that has no probabilistic guarantees. We propose to use Chebyshev's inequality, which can be applied to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling, and also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built on these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open-source static WCET analyzer, the average improvement in estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, 1%} of its original value, the average accuracy of the WCET estimate improves further to {9%, 11%, 12%, 13%}, respectively, on Arch1. On Arch2, average accuracy of WCET improves to 159% when CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
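The core bounding step can be sketched as follows (the sample data and function name are our own illustration; the paper's analyzer adds PC-signature refinement on top of this). Chebyshev's inequality P(|X − μ| ≥ kσ) ≤ 1/k² holds for any distribution, so choosing k = 1/√(1 − p) yields a bound μ + kσ that a phase's CPI exceeds with probability at most 1 − p:

```python
import math
import statistics

def chebyshev_cpi_upper_bound(cpi_samples, p):
    """Upper bound on CPI holding with probability >= p for ANY sample
    distribution, via Chebyshev: P(|X - mu| >= k*sigma) <= 1/k**2.
    With k = 1/sqrt(1 - p), P(X >= mu + k*sigma) <= 1 - p."""
    mu = statistics.mean(cpi_samples)
    sigma = statistics.pstdev(cpi_samples)
    k = 1.0 / math.sqrt(1.0 - p)
    return mu + k * sigma

samples = [1.2, 1.3, 1.1, 1.4, 1.2, 1.3]  # hypothetical CPI samples of a phase
for p in (0.9, 0.95, 0.99):
    print(p, round(chebyshev_cpi_upper_bound(samples, p), 3))
```

Note how the bound grows as 1/√(1 − p): demanding p = 0.99 inflates k to 10, which is why high-variance phases yield pessimistic bounds and motivate the paper's sub-phase refinement.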
Abstract:
We use the recently measured accurate BaBar data on the modulus of the pion electromagnetic form factor, Fπ(t), up to an energy of 3 GeV, the I = 1 P-wave phase of the ππ scattering amplitude up to the ω−π threshold, the pion charge radius known from Chiral Perturbation Theory, and the recently measured JLab value of Fπ in the spacelike region at t = −2.45 GeV² as inputs in a formalism that leads to bounds on Fπ in the intermediate spacelike region. We compare our constraints with experimental data and with perturbative QCD, along with the results of several theoretical models for the non-perturbative contributions proposed in the literature.
Abstract:
We consider Ricci flow invariant cones C in the space of curvature operators lying between the cones of "nonnegative Ricci curvature" and "nonnegative curvature operator". Assuming some mild control on the scalar curvature of the Ricci flow, we show that if a solution to the Ricci flow has a curvature operator satisfying R + εI ∈ C at the initial time, then it satisfies R + εI ∈ C on some time interval depending only on the scalar curvature control. This allows us to link Gromov-Hausdorff convergence and Ricci flow convergence when the limit is smooth and R + I ∈ C along the sequence of initial conditions. Another application is a stability result for manifolds whose curvature operator is almost in C. Finally, we study the case where C is contained in the cone of operators whose sectional curvature is nonnegative. This allows us to weaken the assumptions of the previously mentioned applications. In particular, we construct a Ricci flow for a class of (not too) singular Alexandrov spaces.
Abstract:
In this paper, we search for the regions of the phenomenological minimal supersymmetric standard model (pMSSM) parameter space where one can expect a moderate Higgs mixing angle (α) with relatively light (up to 600 GeV) additional Higgses after satisfying the current LHC data. We perform a global fit analysis using the most updated data (till December 2014) from the LHC and Tevatron experiments. The constraints coming from the precision measurements of the rare b-decays B_s → μ⁺μ⁻ and b → sγ are also considered. We find that low M_A (≲ 350) and high tan β (≳ 25) regions are disfavored by the combined effect of the global analysis and flavor data. However, regions with Higgs mixing angle α ≈ 0.1–0.8 are still allowed by the current data. We then study the existing direct search bounds on the heavy scalar/pseudoscalar (H/A) and charged Higgs boson (H±) masses and branchings at the LHC. It has been found that regions with low to moderate values of tan β with light additional Higgses (mass ≤ 600 GeV) are unconstrained by the data, while the regions with tan β > 20 are excluded considering the direct search bounds from the LHC-8 data. The possibility of probing the region with tan β ≤ 20 at the high-luminosity run of the LHC is also discussed, giving special attention to the H → hh, H/A → tt̄, and H/A → τ⁺τ⁻ decay modes.
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. 
The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
Abstract:
Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation, we study these two areas, each comprising one part. For the first area we study the so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes, but not satisfied by all entropy vectors. Based on the analysis of this group we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. Regarding that aspect, we study the network codes constructed with finite groups, and in particular show that linear network codes are embedded in the group network codes constructed with these Ingleton-violating families. Furthermore, such codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot. For the second area, we study the impact of memory on the channel capacity through a novel communication system: the energy harvesting channel. Unlike in traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each time the system can transmit only a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, which is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, but no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity. 
In this work we use techniques from channels with side information and finite state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel to compute and optimize the achievable rates for the original channel. In addition, for practical code design of the system we study the pairwise error probabilities of the input sequences.
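The Ingleton inequality mentioned above can be checked numerically for any four jointly distributed random variables. The sketch below is our own illustration with an arbitrary toy distribution (not the dissertation's group-derived entropy vectors); it evaluates the entropy form I(A;B) ≤ I(A;B|C) + I(A;B|D) + I(C;D):

```python
from itertools import product
from math import log2

def entropy(joint, subset):
    """Shannon entropy of the marginal on the given coordinate subset,
    where joint maps outcome tuples to probabilities."""
    marg = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in subset)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marg.values() if p > 0)

def ingleton_gap(joint):
    """RHS - LHS of Ingleton's inequality in entropy form,
    I(A;B) <= I(A;B|C) + I(A;B|D) + I(C;D),
    with coordinates 0=A, 1=B, 2=C, 3=D; a negative gap means a violation."""
    H = lambda *s: entropy(joint, s)
    i_ab = H(0) + H(1) - H(0, 1)
    i_ab_c = H(0, 2) + H(1, 2) - H(0, 1, 2) - H(2)
    i_ab_d = H(0, 3) + H(1, 3) - H(0, 1, 3) - H(3)
    i_cd = H(2) + H(3) - H(2, 3)
    return i_ab_c + i_ab_d + i_cd - i_ab

# Toy example: A, B independent uniform bits; C = A xor B, D = A and B.
joint = {(a, b, a ^ b, a & b): 0.25 for a, b in product((0, 1), repeat=2)}
print(ingleton_gap(joint) >= -1e-9)  # True: no violation for this example
```

Entropy vectors derived from linear codes always give a nonnegative gap; the dissertation's point is that certain non-abelian finite groups yield entropy vectors with a negative gap, which linear network codes can never achieve.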
Abstract:
A new 2-D quality-guided phase-unwrapping algorithm, based on the placement of the branch cuts, is presented. Its framework consists of branch cut placing guided by an original quality map and reliability ordering performed on a final quality map. To improve the noise immunity of the new algorithm, a new quality map, which is used as the original quality map to guide the placement of the branch cuts, is proposed. After a complete description of the algorithm and the quality map, several wrapped images are used to examine the effectiveness of the algorithm. Computer simulation and experimental results make it clear that the proposed algorithm works effectively even when a wrapped phase map contains error sources, such as phase discontinuities, noise, and undersampling. (c) 2005 Society of Photo-Optical Instrumentation Engineers.
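For intuition, a stdlib-only sketch of 1-D phase unwrapping (Itoh's method) is given below; this is our own illustration, not the authors' algorithm. Quality-guided 2-D algorithms like the one above generalize this idea by choosing the integration path, steered around branch cuts by a quality map, instead of sweeping left to right:

```python
import math

def wrap(phi):
    """Wrap a phase value into (-pi, pi]."""
    return math.atan2(math.sin(phi), math.cos(phi))

def unwrap_1d(wrapped):
    """Itoh's 1-D unwrapping: cumulatively sum the wrapped phase differences.
    Exact whenever the true phase changes by less than pi per sample."""
    out = [wrapped[0]]
    for prev, cur in zip(wrapped, wrapped[1:]):
        out.append(out[-1] + wrap(cur - prev))
    return out

true_phase = [0.1 * i for i in range(100)]   # a smooth phase ramp
wrapped = [wrap(p) for p in true_phase]      # what an interferometer measures
recovered = unwrap_1d(wrapped)
print(max(abs(r - t) for r, t in zip(recovered, true_phase)) < 1e-6)  # True
```

In 2-D, noise and undersampling make the result path-dependent, which is exactly why branch cut placement and quality-map ordering matter in the algorithm described above.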