950 results for Orthogonal polynomial
Abstract:
Reorganizing a dataset so that its hidden structure can be observed is useful in any data analysis task. For example, detecting a regularity in a dataset helps us to interpret the data, compress the data, and explain the processes behind the data. We study datasets that come in the form of binary matrices (tables with 0s and 1s). Our goal is to develop automatic methods that bring out certain patterns by permuting the rows and columns. We concentrate on the following patterns in binary matrices: consecutive-ones (C1P), simultaneous consecutive-ones (SC1P), nestedness, k-nestedness, and bandedness. These patterns reflect specific types of interplay and variation between the rows and columns, such as continuity and hierarchies. Furthermore, their combinatorial properties are interlinked, which helps us to develop the theory of binary matrices and efficient algorithms. Indeed, we can detect all these patterns in a binary matrix efficiently, that is, in time polynomial in the size of the matrix. Since real-world datasets often contain noise and errors, we rarely witness perfect patterns. Therefore, we also need to assess how far an input matrix is from a pattern: we count the number of flips (from 0s to 1s or vice versa) needed to bring out the perfect pattern in the matrix. Unfortunately, for most patterns it is an NP-complete problem to find the minimum distance to a matrix that has the perfect pattern, which means that the existence of a polynomial-time algorithm is unlikely. To find patterns in datasets with noise, we need methods that are noise-tolerant and work in practical time on large datasets. The theory of binary matrices gives rise to robust heuristics that perform well on synthetic data and discover easily interpretable structures in real-world datasets: dialectal variation in spoken Finnish, a division of European locations by the hierarchies found in mammal occurrences, and co-occurring groups in network data.
In addition to determining the distance from a dataset to a pattern, we need to determine whether the pattern is significant or a mere occurrence of random chance. To this end, we use significance testing: we deem a dataset significant if it appears exceptional when compared to datasets generated from a certain null hypothesis. After detecting a significant pattern in a dataset, it is up to domain experts to interpret the results in terms of the application.
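The consecutive-ones property (C1P) mentioned above can be stated concretely: a binary matrix has the C1P if its columns can be permuted so that the 1s in every row are consecutive. A minimal brute-force sketch (our own illustration, usable only for tiny matrices; the thesis notes that C1P can in fact be decided in polynomial time, e.g. with PQ-trees):

```python
from itertools import permutations

def is_contiguous(row):
    """True if the 1s in the row form one consecutive block."""
    ones = [i for i, v in enumerate(row) if v == 1]
    return not ones or ones[-1] - ones[0] + 1 == len(ones)

def has_c1p(matrix):
    """Check the consecutive-ones property by brute force: does some
    column permutation make the 1s of every row contiguous?
    Exponential in the number of columns -- for illustration only."""
    n_cols = len(matrix[0])
    for perm in permutations(range(n_cols)):
        if all(is_contiguous([row[j] for j in perm]) for row in matrix):
            return True
    return False
```

For instance, swapping the last two columns of [[1,0,1],[0,1,1]] makes both rows contiguous, while the classic [[1,1,0],[0,1,1],[1,0,1]] admits no such column order.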
Abstract:
Using normal mode analysis, Rayleigh-Taylor instability is investigated for a three-layer viscous stratified incompressible steady flow, where the top (3rd) and bottom (1st) layers extend to infinity and the middle layer has a small thickness δ. The wave Reynolds number in the middle layer is assumed to be sufficiently small. A dispersion relation (a seventh-degree polynomial in the wave frequency ω), valid up to the order of the maximal value of all possible K^j (j ⩽ 0, where K is the wave number) in each coefficient of the polynomial, is obtained. A sufficient condition for instability is derived for the first time by pursuing a medium-wavelength analysis. It depends on the ratios (α and β) of the coefficients of viscosity, the thickness of the middle layer δ, the surface tension ratio T, and the wave number K. This is a new analytical criterion for Rayleigh-Taylor instability of three-layer fluids, and it recovers the results of the corresponding problem for two-layer fluids. Among the results obtained, it is observed that taking the coefficients of viscosity of the 2nd and 3rd layers to be the same can inhibit the effect of surface tension completely. For a large wave number K, the thickness of the middle layer should be correspondingly small to keep the domain of dependence of the threshold wave number K_c constant for fixed α, β and T.
Abstract:
Boundary-layer transition at different free-stream turbulence levels has been investigated using the particle-image velocimetry technique. The measurements show organized positive and negative fluctuations of the streamwise fluctuating velocity component, which resemble the forward and backward jet-like structures reported in direct numerical simulations of bypass transition. These fluctuations are associated with unsteady streaky structures. Large inclined high-shear-layer regions are also observed, and the organized negative fluctuations are found to appear consistently with these inclined shear layers, along with highly inflectional instantaneous streamwise velocity profiles. These inflectional velocity profiles are similar to those in ribbon-induced boundary-layer transition. An oscillating inclined shear layer appears to be the precursor of the turbulent spot. The measurements also enabled a comparison of the actual turbulent spot in bypass transition with the simulated one. A proper orthogonal decomposition analysis of the fluctuating velocity field is carried out. The dominant flow structures of the organized positive and negative fluctuations are captured by the first few eigenfunction modes, which carry most of the fluctuating energy. The similarity of the dominant eigenfunctions at different Reynolds numbers suggests that the flow preserves its structural identity even in intermittent flows. This analysis also indicates the possible existence of a spatio-temporal symmetry associated with a travelling wave in the flow.
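Proper orthogonal decomposition of the kind applied above is commonly computed from a singular value decomposition of the snapshot matrix. A generic SVD-based sketch (an assumed setup with hypothetical variable names, not the authors' implementation):

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Proper orthogonal decomposition of fluctuating velocity snapshots.
    snapshots: (n_points, n_snapshots) array with the mean already
    subtracted. Returns the leading spatial modes and the fraction of
    fluctuating energy each mode carries."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = s**2 / np.sum(s**2)          # each mode's share of the energy
    return U[:, :n_modes], energy[:n_modes]
```

Keeping only the first few modes then reconstructs the dominant organized structures, since they carry most of the fluctuating energy.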
Abstract:
A method of source localization in shallow water, based on the subspace concept, is described. It is shown that a vector representing the source in the image space spanned by the direction vectors of the source images is orthogonal to the noise eigenspace of the covariance matrix. Computer simulation has shown that a horizontal array of eight sensors can accurately localize one or more uncorrelated sources in shallow water dominated by multipath propagation.
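The orthogonality between source vectors and the noise eigenspace is the same principle exploited by MUSIC-type estimators. A simplified sketch (plane-wave steering vectors on a uniform line array are our illustrative assumption; the paper works with image-space direction vectors for multipath):

```python
import numpy as np

def music_spectrum(R, steering, n_sources):
    """Project candidate direction vectors onto the noise eigenspace of
    the covariance matrix R. Vectors lying in the signal (image) subspace
    give a near-zero projection, hence sharp peaks in the returned
    spectrum. steering: (n_sensors, n_candidates) matrix of candidates."""
    eigvals, eigvecs = np.linalg.eigh(R)            # eigenvalues ascending
    noise = eigvecs[:, : R.shape[0] - n_sources]    # noise-subspace eigenvectors
    power = np.sum(np.abs(noise.conj().T @ steering) ** 2, axis=0)
    return 1.0 / power                               # peaks at source directions
```

With an eight-sensor array one would scan candidate source positions, build their direction vectors, and read source locations off the spectrum peaks.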
Abstract:
In Salmonella typhimurium, propionate is oxidized to pyruvate via the 2-methylcitric acid cycle. The last step of this cycle, the cleavage of 2-methylisocitrate to succinate and pyruvate, is catalysed by 2-methylisocitrate lyase (EC 4.1.3.30). Methylisocitrate lyase (molecular weight 32 kDa) with a C-terminal polyhistidine affinity tag has been cloned and overexpressed in Escherichia coli, then purified and crystallized under different conditions using the hanging-drop vapour-diffusion technique. The crystals belong to the orthorhombic space group P2(1)2(1)2(1), with unit-cell parameters a = 63.600, b = 100.670, c = 204.745 Angstrom. A complete data set to 2.5 Angstrom resolution has been collected using an image-plate detector system mounted on a rotating-anode X-ray generator.
Abstract:
This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data, and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are not only interested in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating–dominating codes are more appropriate.
This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating–dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs – these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating–dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
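For intuition on task (iii), the classical minimum dominating set admits a simple greedy approximation. To hedge: this is a textbook centralised sketch of our own, not one of the thesis's local algorithms, and identifying or locating–dominating codes impose extra distinguishability constraints on top of plain domination:

```python
def greedy_dominating_set(adj):
    """Greedy approximation for minimum dominating set: repeatedly pick
    the node whose closed neighbourhood covers the most not-yet-dominated
    nodes. adj maps each node to the set of its neighbours."""
    undominated = set(adj)
    chosen = []
    while undominated:
        best = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        chosen.append(best)
        undominated -= {best} | adj[best]
    return chosen
```

On a path 0-1-2-3 this picks the two interior nodes, which dominate the whole path; the well-known guarantee for this greedy rule is a logarithmic approximation factor in general graphs.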
Abstract:
The element-based piecewise smooth functional approximation in the conventional finite element method (FEM) results in discontinuous first and higher order derivatives across element boundaries. Despite the significant advantages of the FEM in modelling complicated geometries, a motivation in developing mesh-free methods has been the ease with which higher order globally smooth shape functions can be derived via the reproduction of polynomials. There is thus a case for combining these advantages in a so-called hybrid scheme or a 'smooth FEM' that, whilst retaining the popular mesh-based discretization, obtains shape functions with uniform C^p (p >= 1) continuity. One such recent attempt, a NURBS-based parametric bridging method (Shaw et al. 2008b), uses polynomial-reproducing, tensor-product non-uniform rational B-splines (NURBS) over a typical FE mesh and relies upon a (possibly piecewise) bijective geometric map between the physical domain and a rectangular (cuboidal) parametric domain. The present work aims at a significant extension and improvement of this concept by replacing NURBS with DMS-splines (say, of degree n > 0) that are defined over triangles and provide C^(n-1) continuity across the triangle edges. This relieves the need for a geometric map that could precipitate ill-conditioning of the discretized equations. Delaunay triangulation is used to discretize the physical domain, and shape functions are constructed via the polynomial reproduction condition, which quite remarkably relieves the solution of its sensitive dependence on the selected knotsets. Derivatives of shape functions are also constructed based on the principle of reproduction of derivatives of polynomials (Shaw and Roy 2008a). Within the present scheme, the triangles also serve as background integration cells in weak formulations, thereby overcoming non-conformability issues. Numerical examples involving the evaluation of derivatives of targeted functions up to the fourth order, and applications of the method to a few boundary value problems of general interest in solid mechanics over (non-simply connected) bounded domains in 2D, are presented towards the end of the paper.
Abstract:
A linear state feedback gain vector used in the control of a single input dynamical system may be constrained because of the way feedback is realized. Some examples of feedback realizations which impose constraints on the gain vector are: static output feedback, constant gain feedback for several operating points of a system, and two-controller feedback. We consider a general class of problems of stabilization of single input dynamical systems with such structural constraints and give a numerical method to solve them. Each of these problems is cast into a problem of solving a system of equalities and inequalities. In this formulation, the coefficients of the quadratic and linear factors of the closed-loop characteristic polynomial are the variables. To solve the system of equalities and inequalities, a continuous realization of the gradient projection method and a barrier method are used under the homotopy framework. Our method is illustrated with an example for each class of control structure constraint.
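As a small illustration of the objects involved, the closed-loop characteristic polynomial under a given state feedback gain can be computed directly; here `np.poly` returns its coefficients. The matrices and gain below are toy values of our choosing, not the paper's formulation in which the quadratic and linear factor coefficients are the variables:

```python
import numpy as np

def closed_loop_charpoly(A, b, k):
    """Coefficients (highest degree first) of det(sI - (A - b k^T)),
    the characteristic polynomial of the closed loop obtained from a
    single-input system x' = A x + b u with feedback u = -k^T x."""
    return np.poly(A - np.outer(b, k))

# Double integrator with gain k = (2, 3): the closed-loop polynomial is
# s^2 + 3s + 2, i.e. poles at -1 and -2, so this gain stabilizes the loop.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([0.0, 1.0])
k = np.array([2.0, 3.0])
coeffs = closed_loop_charpoly(A, b, k)
```

A constrained gain (e.g. static output feedback forcing some entries of k to zero) restricts which coefficient vectors are reachable, which is what turns pole placement into the system of equalities and inequalities described above.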
Abstract:
In this paper we study the representation of KL-divergence minimization, in the cases where integer sufficient statistics exist, using tools from polynomial algebra. We show that the estimation of parametric statistical models in this case can be transformed into solving a system of polynomial equations. In particular, we also study the case of the Kullback-Csiszár iteration scheme. We present implicit descriptions of these models and show that implicitization preserves the specialization of the prior distribution. This result leads us to a Gröbner bases method for computing an implicit representation of minimum KL-divergence models.
Abstract:
We present a simple proof of Toda's result (Toda (1989), in "Proceedings, 30th Annual IEEE Symposium on Foundations of Computer Science," pp. 514-519), which states that ⊕P is hard for the Polynomial Hierarchy under randomized reductions. Our approach is circuit-based in the sense that we start with uniform circuit definitions of the Polynomial Hierarchy and apply the Valiant-Vazirani lemma to these circuits (Valiant and Vazirani (1986), Theoret. Comput. Sci. 47, 85-93).
Abstract:
We study the problem of finding a set of constraints of minimum cardinality which, when relaxed, makes an infeasible linear program feasible. We show the problem is NP-hard even when the constraint matrix is totally unimodular, and prove polynomial-time solvability when the constraint matrix and the right-hand side together form a totally unimodular matrix.
Abstract:
Many wormlike micellar systems exhibit appreciable shear thinning due to shear-induced alignment. As the micelles get aligned, introducing directionality into the system, the viscoelastic properties are no longer expected to be isotropic. An optical-tweezers-based active microrheology technique enables us to probe the out-of-equilibrium rheological properties of a wormlike micellar system simultaneously along two orthogonal directions: parallel to the applied shear as well as perpendicular to it. While the displacements of a trapped bead in response to an active drag force carry the signature of conventional shear thinning, its spontaneous position fluctuations along the perpendicular direction manifest an orthogonal shear thickening, an effect hitherto unobserved. Copyright (C) EPLA, 2010
Abstract:
The long-wavelength hydrodynamics of the Renn-Lubensky twist grain boundary phase with grain boundary angle 2πα, α irrational, is studied. We find three propagating sound modes, with two of the three sound speeds vanishing for propagation orthogonal to the grains, and one vanishing for propagation parallel to the grains as well. In addition, we find that the viscosities η1, η2, η4, and η5 diverge like 1/|ω| as the frequency ω → 0, with the divergent parts Δη_i satisfying Δη1 Δη4 = (Δη5)^2 exactly. Our results should also apply to the predicted decoupled lamellar phase.
Abstract:
We consider the problem of minimizing the total completion time on a single batch processing machine. The set of jobs to be scheduled can be partitioned into a number of families, where all jobs in the same family have the same processing time. The machine can process at most B jobs simultaneously as a batch, and the processing time of a batch is equal to the processing time of the longest job in the batch. We analyze the properties of an optimal schedule and develop a dynamic programming algorithm of polynomial time complexity when the number of job families is fixed. The research is motivated by the problem of scheduling burn-in ovens in the semiconductor industry.
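The objective can be made concrete: when batches run back to back, a batch takes as long as its longest job, and every job in a batch completes when the whole batch does. A toy evaluator of this objective (our own illustration, not the paper's dynamic program):

```python
def total_completion_time(batches):
    """Total completion time on a single batch machine (burn-in model):
    each batch's processing time is the longest job in it, all jobs in a
    batch complete together, and batches run back to back.
    batches: a list of batches, each a list of job processing times."""
    t, total = 0, 0
    for batch in batches:
        t += max(batch)            # machine finishes this batch at time t
        total += t * len(batch)    # every job in the batch completes at t
    return total
```

For jobs with processing times (2, 2, 5) and B = 2, batching the two short jobs first gives 2·2 + 7·1 = 11, whereas running the long job first gives 5·1 + 7·2 = 19, illustrating why batch order matters for total completion time.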
Abstract:
Reinforced concrete corbels have been analysed using the nonlinear finite element method. An elasto-plastic-cracking constitutive formulation using the Huber-Hencky-Mises yield surface augmented with a tension cut-off is employed. Smeared-fixed cracking with mesh-dependent strain softening is employed to obtain objective results. Multiple non-orthogonal cracking and the opening and closing of cracks are permitted. The model and the formulation are verified against an available numerical solution for an RC corbel. Results of analyses of nine reinforced concrete corbels are presented and compared with experimental results. Nonlinear finite element analysis of reinforced concrete structures is shown to be a complement, and also a feasible alternative, to laboratory testing.