956 results for Log penalty
Abstract:
In this paper we consider the problem of learning an n × n kernel matrix from m (m ≥ 1) similarity matrices under a general convex loss. Past research has extensively studied the m = 1 case and derived several algorithms that require sophisticated techniques like ACCP, SOCP, etc. The existing algorithms do not apply if one uses arbitrary losses and often cannot handle the m > 1 case. We present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for the m > 1 case. One of the major contributions of the paper is to extend the well-known Mirror Descent (MD) framework to handle the Cartesian product of psd matrices. This novel extension leads to an algorithm, called EMKL, which solves the problem in O(m² log n²) iterations; in each iteration one solves an MKL problem involving m kernels and performs m eigen-decompositions of n × n matrices. By suitably defining a restriction on the objective function, a faster version of EMKL is proposed, called REKL, which avoids the eigen-decompositions. An alternative to both EMKL and REKL is also suggested, requiring only an SVM solver. Experimental results on a real-world protein data set involving several similarity matrices illustrate the efficacy of the proposed algorithms.
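The matrix extension itself is involved, but the classical MD update it generalizes — exponentiated gradient on the probability simplex — fits in a few lines. A scalar illustrative sketch with a made-up linear objective (not EMKL itself):

```python
# Exponentiated-gradient / mirror-descent step on the probability simplex,
# the scalar analogue of the psd-matrix extension described above.
# The linear objective c and step size eta are illustrative assumptions.
import math

def eg_step(p, grad, eta):
    """One entropy-regularized mirror-descent step: p_i <- p_i*exp(-eta*g_i), renormalized."""
    w = [pi * math.exp(-eta * gi) for pi, gi in zip(p, grad)]
    s = sum(w)
    return [wi / s for wi in w]

# Minimize f(p) = sum_i c_i*p_i over the simplex; the optimum concentrates
# all mass on the coordinate with the smallest cost c_i.
c = [3.0, 1.0, 2.0]
p = [1.0 / 3] * 3
for _ in range(200):
    p = eg_step(p, c, eta=0.1)   # gradient of a linear objective is just c
# p now places almost all its mass on index 1, the minimizer of c
```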
Abstract:
The crystal structure, thermal expansion and electrical conductivity of the solid solution Nd0.7Sr0.3Fe1−xCoxO3 for 0 ≤ x ≤ 0.8 were investigated. All compositions had the GdFeO3-type orthorhombic perovskite structure. The lattice parameters were determined at room temperature by X-ray powder diffraction (XRPD). The pseudo-cubic lattice constant decreased continuously with x. The average linear thermal expansion coefficient (TEC) in the temperature range from 573 to 973 K was found to increase with x. The thermal expansion curves for all values of x displayed a rapid increase in slope at high temperatures. The electrical conductivity increased with x over the entire temperature range of measurement. The calculated activation energy values indicate that electrical conduction takes place primarily by the small-polaron hopping mechanism. The charge compensation for the divalent ion on the A-site is provided by the formation of Fe4+ ions on the B-site (in preference to Co4+ ions) and vacancies on the oxygen sublattice for low values of x. The large increase in the conductivity with x in the range from 0.6 to 0.8 is attributed to the substitution of Fe4+ ions by Co4+ ions. The Fe site has a lower small-polaron site energy than Co and hence behaves like a carrier trap, thereby drastically reducing the conductivity. The non-linear behaviour in the dependence of log σT on reciprocal temperature can be attributed to the generation of additional charge carriers with increasing temperature by the charge disproportionation of Co3+ ions. (C) 2002 Elsevier Science B.V. All rights reserved.
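Small-polaron hopping gives σT = A·exp(−Ea/kT), so the activation energy is (minus k times) the slope of ln σT versus 1/T. A minimal sketch with synthetic numbers, not the paper's data:

```python
# Recover a small-polaron activation energy from the slope of ln(sigma*T)
# vs 1/T. The prefactor A and Ea below are made-up illustrative values.
import math

K_B = 8.617e-5          # Boltzmann constant, eV/K
EA_TRUE = 0.20          # assumed activation energy, eV
A = 2.2e4               # assumed prefactor

def sigma_T(T):
    """Synthetic sigma*T obeying the small-polaron form A*exp(-Ea/kT)."""
    return A * math.exp(-EA_TRUE / (K_B * T))

T1, T2 = 600.0, 900.0   # two temperatures inside a typical measurement range
slope = (math.log(sigma_T(T2)) - math.log(sigma_T(T1))) / (1.0/T2 - 1.0/T1)
ea_est = -K_B * slope   # eV; recovers EA_TRUE exactly for ideal Arrhenius data
```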
Abstract:
In computational molecular biology, the aim of restriction mapping is to locate the restriction sites of a given enzyme on a DNA molecule. Double digest and partial digest are two well-studied techniques for restriction mapping. While double digest is NP-complete, there is no known polynomial-time algorithm for partial digest. Another disadvantage of the above techniques is that there can be multiple solutions for reconstruction. In this paper, we study a simple technique called labeled partial digest for restriction mapping. We give a fast polynomial-time (O(n² log n) worst-case) algorithm for finding all the n sites of a DNA molecule using this technique. An important advantage of the algorithm is the unique reconstruction of the DNA molecule from the digest. The technique is also robust in handling errors in fragment lengths, which arise in the laboratory. We give a robust O(n⁴) worst-case algorithm that can provably tolerate an absolute error of O(Δ/n) (where Δ is the minimum inter-site distance), while giving a unique reconstruction. We test our theoretical results by simulating the performance of the algorithm on a real DNA molecule. Motivated by the similarity to the labeled partial digest problem, we address a related problem of interest, the de novo peptide sequencing problem (ACM-SIAM Symposium on Discrete Algorithms (SODA), 2000, pp. 389-398), which arises in the reconstruction of the peptide sequence of a protein molecule. We give a simple and efficient algorithm for the problem without using dynamic programming. The algorithm runs in time O(k log k), where k is the number of ions, and is an improvement over the algorithm in Chen et al. (C) 2002 Elsevier Science (USA). All rights reserved.
Abstract:
We prove a lower bound of Ω((1/ε)(m + log(d − a))), where a = ⌊log_m(1/(4ε))⌋, on the hitting-set size for combinatorial rectangles of volume at least ε in [m]^d space, for ε ∈ [m^−(d−2), 2/9] and d > 2. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Using a hot wire in a turbulent boundary layer in air, an experimental study has been made of the frequent periods of activity (to be called ‘bursts’) noticed in a turbulent signal that has been passed through a narrow band-pass filter. Although definitive identification of bursts presents difficulties, it is found that a reasonable characteristic value for the mean interval between such bursts is consistent, at the same Reynolds number, with the mean burst periods measured by Kline et al. (1967), using hydrogen-bubble techniques in water. However, data over the wider Reynolds number range covered here show that, even in the wall or inner layer, the mean burst period scales with outer rather than inner variables; and that the intervals are distributed according to the log-normal law. It is suggested that these ‘bursts’ are to be identified with the ‘spottiness’ of Landau & Kolmogorov, and the high-frequency intermittency observed by Batchelor & Townsend. It is also concluded that the dynamics of the energy balance in a turbulent boundary layer can be understood only on the basis of a coupling between the inner and outer layers.
Abstract:
A constant-pressure axisymmetric turbulent boundary layer along a circular cylinder of radius a is studied at large values of the frictional Reynolds number a+ (based upon a) with the boundary-layer thickness δ of order a. Using the equations of mean motion and the method of matched asymptotic expansions, it is shown that the flow can be described by the same two limit processes (inner and outer) as are used in two-dimensional flow. The condition that the two expansions match requires the existence, at the lowest order, of a log region in the usual two-dimensional co-ordinates (u+, y+). Examination of available experimental data shows that substantial log regions do in fact exist but that the intercept is possibly not a universal constant. Similarly, the solution in the outer layer leads to a defect law of the same form as in two-dimensional flow; experiment shows that the intercept in the defect law depends on δ/a. It is concluded that, except in those extreme situations where a+ is small (in which case the boundary layer may not anyway be in a fully developed turbulent state), the simplest analysis of axisymmetric flow will be to use the two-dimensional laws with parameters that now depend on a+ or δ/a as appropriate.
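For reference, the two-dimensional log law that the matching argument produces has the familiar form u+ = (1/κ) ln y+ + B; a one-line evaluation with conventional textbook constants (typical values, not this paper's — whose point is precisely that the intercept may not be universal):

```python
# Log-law of the wall in inner coordinates (u+, y+). The constants
# kappa ~ 0.41 and B ~ 5.0 are conventional assumed values; the paper
# argues the intercept may depend on a+ or delta/a in axisymmetric flow.
import math

def u_plus(y_plus, kappa=0.41, B=5.0):
    """Mean velocity in wall units at wall-normal distance y+ (log region)."""
    return math.log(y_plus) / kappa + B

u100 = u_plus(100.0)   # mean velocity in wall units at y+ = 100
```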
Abstract:
In the present work, we study the transverse vortex-induced vibrations of an elastically mounted rigid cylinder in a fluid flow. We employ a technique to accurately control the structural damping, enabling the system to take on both negative and positive damping. This permits a systematic study of the effects of system mass and damping on the peak vibration response. Previous experiments over the last 30 years indicate a large scatter in peak-amplitude data ($A^*$) versus the product of mass–damping ($\alpha$), in the so-called ‘Griffin plot’. A principal result in the present work is the discovery that the data collapse very well if one takes into account the effect of Reynolds number ($\mbox{\textit{Re}}$), as an extra parameter in a modified Griffin plot. Peak amplitudes corresponding to zero damping ($A^*_{{\alpha}{=}0}$), for a compilation of experiments over a wide range of $\mbox{\textit{Re}}\,{=}\,500-33000$, are very well represented by the functional form $A^*_{\alpha{=}0} \,{=}\, f(\mbox{\textit{Re}}) \,{=}\, \log(0.41\,\mbox{\textit{Re}}^{0.36})$. For a given $\mbox{\textit{Re}}$, the amplitude $A^*$ appears to be proportional to a function of mass–damping, $A^*\propto g(\alpha)$, which is a similar function over all $\mbox{\textit{Re}}$. A good best-fit for a wide range of mass–damping and Reynolds number is thus given by the following simple expression, where $A^*\,{=}\, g(\alpha)\,f(\mbox{\textit{Re}})$: \[ A^* \,{=}\,(1 - 1.12\,\alpha + 0.30\,\alpha^2)\,\log (0.41\,\mbox{\textit{Re}}^{0.36}). \] In essence, by using a renormalized parameter, which we define as the ‘modified amplitude’, $A^*_M\,{=}\,A^*/A^*_{\alpha{=}0}$, the previously scattered data collapse very well onto a single curve, $g(\alpha)$, on what we refer to as the ‘modified Griffin plot’. There has also been much debate over the last three decades concerning the validity of using the product of mass and damping (such as $\alpha$) in these problems.
Our results indicate that the combined mass–damping parameter ($\alpha$) does indeed collapse peak-amplitude data well, at a given $\mbox{\textit{Re}}$, independent of the precise mass and damping values, for mass ratios down to $m^*\,{=}\,1$.
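Taking the quoted best-fit at face value, it is straightforward to evaluate. A small sketch (assuming the log is base 10, which yields the O(1) peak amplitudes typical over the quoted Re range; base 10 is our assumption, not stated in the abstract):

```python
# Evaluate the modified-Griffin-plot fit A* = g(alpha) * f(Re) quoted above.
# Base-10 log is an assumption consistent with O(1) amplitudes.
import math

def peak_amplitude(alpha, re):
    g = 1.0 - 1.12 * alpha + 0.30 * alpha**2   # mass-damping factor g(alpha)
    f = math.log10(0.41 * re**0.36)            # Reynolds-number factor f(Re)
    return g * f

a0 = peak_amplitude(0.0, 1.0e4)   # zero-damping peak amplitude at Re = 10^4
```

Note g(0) = 1, so at zero damping the fit reduces to f(Re) alone, matching the definition of the modified amplitude A*_M = A*/A*_{α=0}.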
Abstract:
In this study, we investigated nonlinear measures of chaos of QT interval time series in 28 normal control subjects, 36 patients with panic disorder and 18 patients with major depression in supine and standing postures. We obtained the minimum embedding dimension (MED) and the largest Lyapunov exponent (LLE) of instantaneous heart rate (HR) and QT interval series. MED quantifies the system's complexity and LLE its predictability. There was a significantly lower MED and a significantly increased LLE of QT interval time series in patients. Most importantly, nonlinear indices of QT/HR time series, MEDqthr (MED of QT/HR) and LLEqthr (LLE of QT/HR), were highly significantly different between controls and both patient groups in either posture. Results remained the same even after adjusting for age. The increased LLE of QT interval time series in patients with anxiety and depression is in line with our previous findings of higher QTvi (QT variability index, a log ratio of QT variability corrected for mean QT squared divided by heart rate variability corrected for mean heart rate squared) in these patients, using linear techniques. Increased LLEqthr (LLE of QT/HR) may be a more sensitive tool to study cardiac repolarization and a valuable addition to time domain measures such as QTvi. This is especially important in light of the finding that LLEqthr correlated poorly and nonsignificantly with QTvi. These findings suggest an increase in relative cardiac sympathetic activity and a decrease in certain aspects of cardiac vagal function in patients with anxiety as well as depression. The lack of correlation between QTvi and LLEqthr suggests that this nonlinear index is a valuable addition to the linear measures. These findings may also help to explain the higher incidence of cardiovascular mortality in patients with anxiety and depressive disorders. (C) 2002 Elsevier Science Ireland Ltd. All rights reserved.
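The QTvi quoted above can be computed directly from its parenthetical definition. A minimal sketch with made-up series (base-10 log assumed):

```python
# QT variability index: log ratio of variance-normalized QT variability to
# variance-normalized HR variability, per the definition quoted above.
# The QT and HR series below are illustrative, not study data.
import math
from statistics import mean, pvariance

def qtvi(qt, hr):
    qt_term = pvariance(qt) / mean(qt) ** 2   # QT variability / mean QT squared
    hr_term = pvariance(hr) / mean(hr) ** 2   # HR variability / mean HR squared
    return math.log10(qt_term / hr_term)

qt = [0.40, 0.42, 0.38, 0.41, 0.39]   # QT intervals, seconds (hypothetical)
hr = [60.0, 62.0, 58.0, 61.0, 59.0]   # heart rates, beats/min (hypothetical)
value = qtvi(qt, hr)
```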
Abstract:
We consider a dense ad hoc wireless network comprising n nodes confined to a given two-dimensional region of fixed area. For the Gupta-Kumar random traffic model and a realistic interference and path loss model (i.e., the channel power gains are bounded above, and are bounded below by a strictly positive number), we study the scaling of the aggregate end-to-end throughput with respect to the network average power constraint, P̄, and the number of nodes, n. The network power constraint P̄ is related to the per-node power constraint, p̄, as P̄ = n p̄. For large P̄, we show that the throughput saturates as Θ(log P̄), irrespective of the number of nodes in the network. For moderate P̄, which can accommodate spatial reuse to improve end-to-end throughput, we observe that the amount of spatial reuse feasible in the network is limited by the diameter of the network. In fact, we observe that the end-to-end path loss in the network and the amount of spatial reuse feasible in the network are inversely proportional. This puts a restriction on the gains achievable using the cooperative communication techniques studied in the literature, as these rely on direct long-distance communication over the network.
Abstract:
We propose a randomized algorithm for large-scale SVM learning which solves the problem by iterating over random subsets of the data. Crucial to the algorithm for scalability is the size of the subsets chosen. In the context of text classification we show that, by using ideas from random projections, a sample size of O(log n) can be used to obtain a solution which is close to the optimal with high probability. Experiments done on synthetic and real-life data sets demonstrate that the algorithm scales up SVM learners without loss in accuracy.
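As a rough sketch of the subset idea (this is not the paper's algorithm: it is a Pegasos-style subgradient solver run on random O(log n) batches, with made-up 2-D data and constants):

```python
# Toy linear SVM trained on random O(log n) subsets per iteration
# (Pegasos-style hinge-loss subgradient descent; the data, step sizes and
# constants are illustrative assumptions, not taken from the abstract).
import math, random

random.seed(0)
n = 2000
pts = []
while len(pts) < n:                       # separable 2-D data with a margin
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    if abs(x[0] + x[1]) > 0.2:            # keep a gap around the boundary
        pts.append((x, 1 if x[0] + x[1] > 0 else -1))

w, b = [0.0, 0.0], 0.0
lam = 0.01                                # regularization strength (assumed)
k = int(4 * math.log(n))                  # subset size ~ O(log n)

for t in range(1, 3001):
    eta = 1.0 / (lam * t)                 # standard Pegasos step size
    batch = random.sample(pts, k)
    gw, gb = [lam * w[0], lam * w[1]], 0.0
    for (x, y) in batch:                  # hinge-loss subgradient on subset
        if y * (w[0] * x[0] + w[1] * x[1] + b) < 1:
            gw[0] -= y * x[0] / k
            gw[1] -= y * x[1] / k
            gb -= y / k
    w = [w[0] - eta * gw[0], w[1] - eta * gw[1]]
    b -= eta * gb

# Training accuracy of the learned separator on the full data set
acc = sum(1 for (x, y) in pts if y * (w[0]*x[0] + w[1]*x[1] + b) > 0) / n
```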
Abstract:
The boxicity of a graph H, denoted by box(H), is the minimum integer k such that H is an intersection graph of axis-parallel k-dimensional boxes in R^k. In this paper we show that for a line graph G of a multigraph, box(G) ≤ 2Δ(G)(⌈log₂ log₂ Δ(G)⌉ + 3) + 1, where Δ(G) denotes the maximum degree of G. Since G is a line graph, Δ(G) ≤ 2(χ(G) − 1), where χ(G) denotes the chromatic number of G, and therefore box(G) = O(χ(G) log₂ log₂ χ(G)). For the d-dimensional hypercube Q_d, we prove that box(Q_d) ≥ (1/2)(⌈log₂ log₂ d⌉ + 1). The question of finding a nontrivial lower bound for box(Q_d) was left open by Chandran and Sivadasan in [L. Sunil Chandran, Naveen Sivadasan, The cubicity of hypercube graphs, Discrete Mathematics 308 (23) (2008) 5795-5800]. The above results are consequences of bounds that we obtain for the boxicity of a fully subdivided graph (a graph that can be obtained by subdividing every edge of a graph exactly once). (C) 2011 Elsevier B.V. All rights reserved.
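As a quick illustration, the stated upper bound is easy to tabulate (a direct transcription of the inequality; the example degrees are arbitrary):

```python
# Evaluate the line-graph boxicity upper bound from the abstract:
# box(G) <= 2*Delta(G)*(ceil(log2 log2 Delta(G)) + 3) + 1.
import math

def box_upper_bound(delta):
    """Upper bound on box(G) for a line graph with maximum degree delta >= 2."""
    return 2 * delta * (math.ceil(math.log2(math.log2(delta))) + 3) + 1

bound16 = box_upper_bound(16)   # ceil(log2 log2 16) = ceil(log2 4) = 2
bound2 = box_upper_bound(2)     # ceil(log2 log2 2)  = ceil(0)      = 0
```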
Abstract:
The literature on pricing implicitly assumes an "infinite data" model, in which sources can sustain any data rate indefinitely. We assume a more realistic "finite data" model, in which sources occasionally run out of data; this leads to variable user data rates. Further, we assume that users have contracts with the service provider, specifying the rates at which they can inject traffic into the network. Our objective is to study how prices can be set such that a single link can be shared efficiently and fairly among users in a dynamically changing scenario where a subset of users occasionally has little data to send. User preferences are modelled by concave increasing utility functions. Further, we introduce two additional elements: a convex increasing disutility function and a convex increasing multiplicative congestion-penalty function. The disutility function takes the shortfall (contracted rate minus present rate) as its argument, and essentially encourages users to send traffic at their contracted rates, while the congestion-penalty function discourages heavy users from sending excess data when the link is congested. We obtain simple necessary and sufficient conditions on prices for fair and efficient link sharing; moreover, we show that a single price for all users achieves this. We illustrate the ideas using a simple experiment.
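A toy numeric instance of the user's trade-off (all functional forms and numbers are assumptions chosen for illustration: logarithmic utility, quadratic shortfall disutility, a fixed price, and no congestion penalty):

```python
# A single user with contracted rate c picks a rate r maximizing
# U(r) - D(shortfall) - price*r, with the illustrative choices
# U(r) = log(1+r) and D(s) = s^2, where s = max(c - r, 0).
import math

def payoff(r, c=1.0, price=0.5):
    shortfall = max(c - r, 0.0)
    return math.log(1.0 + r) - shortfall**2 - price * r

grid = [i / 1000 for i in range(0, 2001)]   # search r over [0, 2]
r_star = max(grid, key=payoff)
```

With c = 1 and price = 0.5 the first-order condition 1/(1+r) = price is met exactly at r = 1, i.e. at the contracted rate with zero shortfall, which is the behaviour the disutility term is designed to encourage.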
Abstract:
In this paper, we consider the problem of association of wireless stations (STAs) with an access network served by a wireless local area network (WLAN) and a 3G cellular network. There is a set of WLAN Access Points (APs) and a set of 3G Base Stations (BSs) and a number of STAs each of which needs to be associated with one of the APs or one of the BSs. We concentrate on downlink bulk elastic transfers. Each association provides each STA with a certain transfer rate. We evaluate an association on the basis of the sum log utility of the transfer rates and seek the utility maximizing association. We also obtain the optimal time scheduling of service from a 3G BS to the associated STAs. We propose a fast iterative heuristic algorithm to compute an association. Numerical results show that our algorithm converges in a few steps yielding an association that is within 1% (in objective value) of the optimal (obtained through exhaustive search); in most cases the algorithm yields an optimal solution.
Abstract:
The encapsulation of the probiotic Lactobacillus acidophilus through layer-by-layer self-assembly of the polyelectrolytes (PE) chitosan (CHI) and carboxymethyl cellulose (CMC) has been investigated to enhance its survival in the adverse conditions encountered in the GI tract. The survival of encapsulated cells in simulated gastric (SGF) and intestinal fluids (SIF) is significant when compared to non-encapsulated cells. On sequential exposure to SGF and SIF for 120 min, almost complete death of free cells is observed. However, for cells coated with three nanolayers of PEs (CHI/CMC/CHI), about 33 log % of the cells (6 log cfu/500 mg) survived under the same conditions. The enhanced survival rate of encapsulated L. acidophilus can be attributed to the impermeability of the polyelectrolyte nanolayers to large enzyme molecules like pepsin and pancreatin that cause proteolysis, and to the stability of the polyelectrolyte nanolayers at gastric and intestinal pH. The PE coating also serves to reduce viability losses during freezing and freeze-drying. About 73 and 92 log % of uncoated and coated cells, respectively, survived after freeze-drying, and the losses occurring between freezing and freeze-drying were found to be lower for coated cells.
Abstract:
The solubility of oxygen in liquid gallium in the temperature range 775–1125 °C and in liquid gallium-copper alloys at 1100 °C, in equilibrium with β-Ga2O3, has been measured by an isopiestic equilibrium technique. The solubility of oxygen in pure gallium is given by the equation log (at.% O) = −7380/T + 4.264 (±0.03). Using recently measured values for the standard free energy of formation of β-Ga2O3 and assuming that oxygen obeys Sievert's law up to the saturation limit, the standard free energy of solution of oxygen in liquid gallium may be calculated: ΔG° = −52 680 + 6.53T (±200) cal, where the standard state for dissolved oxygen is an infinitely dilute solution in which the activity is equal to the atomic per cent. The effect of copper on the activity of oxygen dissolved in liquid gallium is found to be in good agreement with that predicted by a recent quasichemical model in which it was assumed that each oxygen is interstitially coordinated to four metal atoms and that the nearest-neighbour metal atoms lose approximately half their metallic cohesive energies.
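The quoted solubility equation can be evaluated directly; a short sketch (the example temperature is our choice, inside the measured range):

```python
# Oxygen solubility in liquid gallium from the fitted equation above:
# log10(at.% O) = -7380/T + 4.264, with T in kelvin (valid ~775-1125 C).
def oxygen_solubility_at_pct(T_kelvin):
    """Saturation solubility of oxygen in pure liquid Ga, in at.%."""
    return 10 ** (-7380.0 / T_kelvin + 4.264)

s_1000C = oxygen_solubility_at_pct(1273.15)   # 1000 C, roughly 0.03 at.%
```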