895 results for Non-commutative Landau problem


Relevance: 30.00%

Abstract:

The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship, as the temporal constraints provide valuable neighborhood information for dimensionality reduction and, conversely, the low-dimensional space allows the dynamics to be learnt efficiently. Solving the two tasks simultaneously allows them to share important information. If nonlinear models are required to capture the rich complexity of time series, the learning problem becomes harder because the nonlinearities in the two tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models, and the interactions among the linear models are captured in a graphical model. Model structure setup and parameter learning are done using a variational Bayesian approach, which enables automatic Bayesian model structure selection and thereby addresses over-fitting. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. The proposed framework is evaluated against competing approaches in three sets of experiments: dimensionality reduction and reconstruction on synthetic time series, video synthesis on a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.
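
To make the modelling idea concrete, the following is a minimal sketch (our illustration, not the paper's model or notation) of a piecewise linear latent dynamical system: a discrete switching variable selects one of several linear dynamics and observation models for a low-dimensional latent state, which then generates the high-dimensional series. All dimensions, noise levels and the Markov switching prior are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes (assumptions, not taken from the paper).
    latent_dim, obs_dim, n_models, T = 2, 10, 3, 200

    # One linear dynamics model per regime: x_t = A_k x_{t-1} + noise.
    A = [np.eye(latent_dim) + 0.05 * rng.standard_normal((latent_dim, latent_dim))
         for _ in range(n_models)]
    # One linear observation map per regime: y_t = C_k x_t + noise.
    C = [rng.standard_normal((obs_dim, latent_dim)) for _ in range(n_models)]

    # A simple Markov chain governs which linear model is active at each step.
    P = np.full((n_models, n_models), 0.05)
    np.fill_diagonal(P, 1.0 - 0.05 * (n_models - 1))

    x, z = np.zeros(latent_dim), 0
    Y = np.empty((T, obs_dim))
    for t in range(T):
        z = rng.choice(n_models, p=P[z])                        # active regime
        x = A[z] @ x + 0.1 * rng.standard_normal(latent_dim)    # latent dynamics
        Y[t] = C[z] @ x + 0.01 * rng.standard_normal(obs_dim)   # observation

    print(Y.shape)  # (200, 10): a high-dimensional series driven by 2-D dynamics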

Relevance: 30.00%

Abstract:

In this paper, two methods for constructing systems of ordinary differential equations realizing any fixed finite set of equilibria in any fixed finite dimension are introduced; no spurious equilibria are possible for either method. The first method constructs a system with the fewest equilibria possible for a given fixed set of attractors. Using a strict Lyapunov function for each of these differential equations, a large class of systems with the same set of equilibria is constructed. A method of fitting these nonlinear systems to trajectories is proposed. In addition, a general method is discussed that produces an arbitrary number of periodic orbits of arbitrarily complex shape. A second, more general method is given to construct a differential equation which converges to a fixed given finite set of equilibria. This technique is more general in that it allows the set of equilibria to have any of a large class of indices consistent with the Morse inequalities. This class is not universal, because there is a large class of additional vector fields with convergent dynamics that cannot be constructed by the above method. The easiest way to see this is to enumerate the set of Morse indices obtainable by the above method and compare it with the class of Morse indices of arbitrary differential equations with convergent dynamics; the former is a proper subclass of the latter, so the construction cannot be universal. In general, it is a difficult open problem to construct a specific example of a differential equation with a given fixed set of equilibria, permissible Morse indices, and permissible connections between stable and unstable manifolds. A strict Lyapunov function is given for this second case as well; as above, it enables construction of a large class of examples consistent with these more complicated dynamics and indices. The determination of all the basins of attraction in the general case for these systems is also difficult and open.
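
As a toy illustration of the general idea (a one-dimensional construction with a strict Lyapunov function; it is not claimed to be either of the paper's two methods), one can realise any prescribed finite set of equilibria, with no spurious ones, by taking the vector field to be minus the product of the distances to the prescribed points; the numerical values below are invented for the example.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Prescribed equilibria (illustrative values).
    equilibria = np.array([-2.0, 0.5, 3.0])

    def f(t, x):
        # dx/dt = -prod_i (x - p_i): the only zeros are the prescribed points,
        # so no spurious equilibria arise in this 1-D construction.
        return [-np.prod(x[0] - equilibria)]

    def V(x):
        # An antiderivative of prod_i (x - p_i) is a strict Lyapunov function:
        # along trajectories dV/dt = -f(x)**2 <= 0, vanishing only at equilibria.
        coeffs = np.poly(equilibria)          # polynomial with the given roots
        return np.polyval(np.polyint(coeffs), x)

    sol = solve_ivp(f, (0.0, 20.0), [1.0])
    x_T = sol.y[0, -1]
    print(x_T, V(1.0) >= V(x_T))   # trajectory settles at a prescribed equilibrium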

Relevance: 30.00%

Abstract:

BACKGROUND: Dropouts and missing data are nearly ubiquitous in obesity randomized controlled trials, threatening the validity and generalizability of conclusions. Herein, we meta-analytically evaluate the extent of missing data, the frequency with which various analytic methods are employed to accommodate dropouts, and the performance of multiple statistical methods. METHODOLOGY/PRINCIPAL FINDINGS: We searched the PubMed and Cochrane databases (2000-2006) for articles published in English and manually searched bibliographic references. Articles on pharmaceutical randomized controlled trials with weight loss or weight gain prevention as major endpoints were included. Two authors independently reviewed each publication for inclusion; 121 articles met the inclusion criteria. Two authors independently extracted treatment, sample size, drop-out rates, study duration, and the statistical method used to handle missing data from all articles and resolved disagreements by consensus. In the meta-analysis, drop-out rates were substantial, with the survival (non-dropout) rates approximated by an exponential decay curve e^(-λt), where λ was estimated to be 0.0088 (95% bootstrap confidence interval: 0.0076 to 0.0100) and t represents time in weeks. The estimated drop-out rate at 1 year was 37%. Most studies used last observation carried forward as the primary analytic method to handle missing data. We also obtained 12 raw obesity randomized controlled trial datasets for empirical analyses. Analyses of the raw randomized controlled trial data suggested that both mixed models and multiple imputation performed well, but that multiple imputation may be more robust when missing data are extensive. CONCLUSION/SIGNIFICANCE: Our analysis offers an equation for predicting dropout rates that is useful for future study planning. Our raw data analyses suggest that multiple imputation is better than other methods for handling missing data in obesity randomized controlled trials, followed closely by mixed models. We suggest these methods supplant last observation carried forward as the primary method of analysis.
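
As a quick arithmetic check of the dropout equation reported above, the fitted survival curve e^(-λt) with λ = 0.0088 per week indeed gives roughly 37% dropout at one year; the snippet below reproduces that calculation (taking one year as 52 weeks is our assumption).

    import math

    # Fitted survival (non-dropout) curve from the meta-analysis: exp(-lambda*t),
    # with t in weeks and lambda = 0.0088 (95% bootstrap CI: 0.0076 to 0.0100).
    lam = 0.0088
    t_weeks = 52                    # one year taken as 52 weeks (our assumption)

    dropout = 1.0 - math.exp(-lam * t_weeks)
    print(f"Predicted dropout at 1 year: {dropout:.0%}")     # ~37%, as reported

    # The confidence interval on lambda gives a range for the 1-year prediction.
    for lam_ci in (0.0076, 0.0100):
        print(f"lambda = {lam_ci}: dropout {1.0 - math.exp(-lam_ci * t_weeks):.0%}")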

Relevance: 30.00%

Abstract:

The paper deals with the determination of an optimal schedule for the so-called mixed shop problem when the makespan has to be minimized. In such a problem, some jobs have fixed machine orders (as in the job shop), while the operations of the other jobs may be processed in arbitrary order (as in the open shop). We prove binary NP-hardness of the preemptive problem with three machines and three jobs (two jobs have fixed machine orders and one may have an arbitrary machine order). We answer all remaining open questions on the complexity status of mixed shop problems with the makespan criterion by presenting various polynomial and pseudopolynomial algorithms.

Relevance: 30.00%

Abstract:

In the past decade, finite volume (FV) methods have increasingly been used for the solution of solid mechanics problems. This contribution describes a cell vertex finite volume discretisation approach to the solution of geometrically nonlinear (GNL) problems. These problems, which may well have linear material properties, are subject to large deformation. This requires a distinct formulation, which is described in this paper together with the solution strategy for GNL problems. The competitive performance of this procedure against the conventional finite element (FE) formulation is illustrated for a three-dimensional axially loaded column.

Relevance: 30.00%

Abstract:

The scheduling problem of minimizing the makespan for m parallel dedicated machines under single resource constraints is considered. For different variants of the problem, the complexity status is established. Heuristic algorithms employing the so-called group technology approach are presented and their worst-case behavior is examined. Finally, a polynomial time approximation scheme is presented for the problem with a fixed number of machines.

Relevance: 30.00%

Abstract:

We study a two-machine open shop scheduling problem in which the machines are not continuously available for processing. No preemption is allowed in the processing of any operation. The objective is to minimize the makespan. We consider approximability issues of the problem with more than one non-availability interval and present an approximation algorithm with a worst-case ratio of 4/3 for the problem with a single non-availability interval.

Relevance: 30.00%

Abstract:

A vertex-based finite volume (FV) method is presented for the computational solution of quasi-static solid mechanics problems involving material non-linearity and infinitesimal strains. The problems are analysed numerically with fully unstructured meshes that consist of a variety of two- and three-dimensional element types. A detailed comparison between the vertex-based FV and the standard Galerkin FE methods is provided with regard to discretization, solution accuracy and computational efficiency. For some problem classes a direct equivalence of the two methods is demonstrated, both theoretically and numerically. However, for other problems some interesting advantages and disadvantages of the FV formulation over the Galerkin FE method are highlighted.
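
A much simpler toy case than the unstructured 2-D and 3-D problems studied in the paper, but one that illustrates the FV/FE equivalence mentioned: for a uniform one-dimensional axially loaded bar with constant stiffness, the vertex-centred control-volume balance assembles exactly the linear Galerkin FE stiffness matrix. The material data, mesh size and load below are arbitrary illustrative values.

    import numpy as np

    # 1-D axially loaded elastic bar, fixed at x = 0, point load P at the free end.
    # Vertex-centred finite volume: balance the axial force flux E*A*du/dx over
    # the control volume around each vertex.  On a uniform mesh with constant E*A
    # this assembles exactly the linear Galerkin FE stiffness matrix.
    E, A_sec, L, P = 200e9, 1e-4, 2.0, 1e4   # illustrative material and load data
    n = 11                                   # number of vertices
    h = L / (n - 1)

    K = np.zeros((n, n))
    k = E * A_sec / h
    for i in range(n - 1):                   # cell between vertices i and i+1
        K[i:i+2, i:i+2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

    f = np.zeros(n)
    f[-1] = P                                # end load enters the last control volume

    u = np.zeros(n)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])   # impose u(0) = 0, solve the rest

    print(u[-1], P * L / (E * A_sec))        # FV tip displacement vs. exact PL/(EA)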

Relevance: 30.00%

Abstract:

We consider a knapsack problem to minimize a symmetric quadratic function. We demonstrate that this symmetric quadratic knapsack problem is relevant to two single machine scheduling problems: minimizing the weighted sum of the completion times with a single machine non-availability interval under the non-resumable scenario, and minimizing the total weighted earliness and tardiness with respect to a common small due date. We develop a polynomial-time approximation algorithm that delivers a constant worst-case performance ratio for a special form of the symmetric quadratic knapsack problem. We adapt that algorithm to our scheduling problems and achieve a better performance ratio. For the problems under consideration, no fixed-ratio approximation algorithms were previously known.
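
To sketch why such scheduling problems lead to a symmetric quadratic knapsack (this follows the standard reduction idea and need not match the paper's exact normal form): index the jobs in weighted-shortest-processing-time order, let the machine be unavailable during $[s,t]$, and let $x_j\in\{0,1\}$ indicate that job $j$ is scheduled before the interval. Minimizing the total weighted completion time under the non-resumable scenario then amounts to

$$\min_{x\in\{0,1\}^n}\ \sum_{j=1}^{n} w_j\Bigl[x_j\sum_{i\le j}p_i x_i+(1-x_j)\Bigl(t+\sum_{i\le j}p_i(1-x_i)\Bigr)\Bigr]\quad\text{subject to}\quad \sum_{j=1}^{n}p_j x_j\le s,$$

a quadratic 0-1 objective in which products $x_i x_j$ and $(1-x_i)(1-x_j)$ appear with coefficients $p_i w_j$, under a knapsack constraint.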

Relevance: 30.00%

Abstract:

This paper presents a simple approach to the so-called frame problem, based on ordinary set operations, which does not require non-monotonic reasoning. Following the notion of the situation calculus, we represent a state of the world as a set of fluents, where a fluent is simply a Boolean-valued property whose truth value depends on time. High-level causal laws are characterised in terms of relationships between actions and the world states involved. An effect completion axiom is imposed on each causal law, which guarantees that all the fluents that can be affected by the performance of the corresponding action are always totally governed. It is shown that, compared with other techniques, such a set-operation-based approach provides a simpler and more effective treatment of the frame problem.
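
A minimal sketch in the spirit of the set-operation treatment described (closer to STRIPS-style add/delete effects than to the paper's exact formalism, and with made-up fluent and action names): states are sets of fluents, each causal law removes the fluents its action negates and adds the ones it asserts, and every other fluent persists by plain set algebra, with no non-monotonic machinery.

    # States are plain sets of fluents (Boolean-valued, time-dependent properties).
    initial_state = {"door_closed", "light_off", "robot_in_hall"}

    # Each causal law lists the fluents an action adds and removes; the effect
    # completion assumption is that these sets cover everything the action affects.
    causal_laws = {
        "open_door":  {"add": {"door_open"},     "remove": {"door_closed"}},
        "enter_room": {"add": {"robot_in_room"}, "remove": {"robot_in_hall"}},
        "switch_on":  {"add": {"light_on"},      "remove": {"light_off"}},
    }

    def apply_action(state, action):
        """Successor state by ordinary set operations: unaffected fluents persist."""
        law = causal_laws[action]
        return (state - law["remove"]) | law["add"]

    state = initial_state
    for action in ["open_door", "enter_room", "switch_on"]:
        state = apply_action(state, action)

    print(sorted(state))
    # ['door_open', 'light_on', 'robot_in_room'] -- fluents not mentioned by an
    # action simply carry over, so no extra frame axioms are needed.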

Relevance: 30.00%

Abstract:

In this note, we consider the scheduling problem of minimizing the sum of the weighted completion times on a single machine with one non-availability interval on the machine under the non-resumable scenario. Together with a recent 2-approximation algorithm designed by Kacem [I. Kacem, Approximation algorithm for the weighted flow-time minimization on a single machine with a fixed non-availability interval, Computers & Industrial Engineering 54 (2008) 401–410], this paper is among the first successful attempts to develop a constant-ratio approximation algorithm for this problem. We present two approaches to designing such an algorithm. Our best algorithm guarantees a worst-case performance ratio of 2+ε.
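
To make the problem statement concrete (this is a brute-force baseline on a toy instance, not Kacem's algorithm nor either of the approaches in the note; all numbers are invented): under the non-resumable scenario each job runs entirely before the interval or entirely after it, and within each block the weighted-shortest-processing-time order is optimal, so small instances can be solved by enumerating the set of jobs placed before the interval.

    from itertools import combinations

    # Toy instance (illustrative data): processing times, weights, and a machine
    # non-availability interval [s, t).
    p = [3, 2, 4, 1]
    w = [2, 1, 4, 3]
    s, t = 5, 8          # machine is unavailable from time 5 to time 8
    jobs = range(len(p))

    def weighted_completion(order, start):
        """Sum of w_j * C_j when jobs in 'order' run back to back from 'start'."""
        total, clock = 0, start
        for j in order:
            clock += p[j]
            total += w[j] * clock
        return total

    def wspt(job_set):
        """Weighted shortest processing time order (optimal within each block)."""
        return sorted(job_set, key=lambda j: p[j] / w[j])

    # Non-resumable scenario: each job runs entirely before s or entirely after t.
    best = None
    for k in range(len(p) + 1):
        for before in combinations(jobs, k):
            if sum(p[j] for j in before) > s:
                continue                      # this subset does not fit before s
            after = [j for j in jobs if j not in before]
            cost = (weighted_completion(wspt(before), 0)
                    + weighted_completion(wspt(after), t))
            best = cost if best is None else min(best, cost)

    print(best)   # optimal total weighted completion time for the toy instance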

Relevance: 30.00%

Abstract:

Aminolevulinic acid (ALA) stability within topical formulations intended for photodynamic therapy (PDT) is poor owing to dimerisation to pyrazine-2,5-dipropionic acid (PY). Most strategies to improve stability use low-pH vehicles, which can cause cutaneous irritancy. To overcome this problem, a novel approach is investigated that uses a non-aqueous vehicle to retard proton-induced charge separation across the 4-carbonyl group on ALA and lessen the nucleophilic attack that leads to condensation dimerisation. Bioadhesive anhydrous vehicles based on methylvinylether-maleic anhydride copolymer patches and on poly(ethyleneglycol)- or glycerol-thickened poly(acrylic acid) gels were formulated. ALA stability fell below pharmaceutically acceptable levels after 6 months, with bioadhesive patches stored at 5°C demonstrating the best stability by maintaining 86.2% of their original loading; glycerol-based gels maintained 40.2% under similar conditions. However, ALA loss did not correspond to the expected increases in PY, indicating the presence of another degradative process that prevented dimerisation. Nuclear magnetic resonance (NMR) analysis was inconclusive with respect to the mechanism observed in the patch system, but showed clearly that an esterification reaction involving ALA and both glycerol and poly(ethyleneglycol) was occurring. This was especially marked in the glycerol gels, where only 2.21% of the total expected PY was detected after 204 days at 5°C. Non-specific esterase hydrolysis demonstrated that ALA was recoverable from the gel systems, further supporting esterified binding within the gel matrices. It is conceivable that skin esterases could duplicate this finding upon topical application of the gel and convert these derivatives back to ALA in situ, provided skin penetration is not adversely affected.

Relevance: 30.00%

Abstract:

A computationally affordable method for introducing correlations between electrons and ions is described. The central assumption is that the ionic wavefunctions are narrow, which makes possible a moment expansion for the full density matrix. To make the problem tractable, we reduce the remaining many-electron problem to a single-electron problem by performing a trace over all electronic degrees of freedom except one. This introduces both one- and two-electron quantities into the equations of motion. Quantities depending on more than one electron are removed by making a Hartree-Fock approximation. Using the first-moment approximation, we perform a number of tight-binding simulations of the effect of an electric current on a mobile atom. The classical contribution to the ionic kinetic energy exhibits cooling and is independent of the bias. The quantum contribution exhibits strong heating, with the heating rate proportional to the bias. However, increased scattering of electrons with increasing ionic kinetic energy is not observed; capturing this effect requires the introduction of the second moment.

Relevance: 30.00%

Abstract:

According to Michael's selection theorem, any surjective continuous linear operator from one Fr\'echet space onto another has a continuous (not necessarily linear) right inverse. Using this theorem, Herzog and Lemmert proved that if $E$ is a Fr\'echet space and $T:E\to E$ is a continuous linear operator such that the Cauchy problem $\dot x=Tx$, $x(0)=x_0$ is solvable on $[0,1]$ for any $x_0\in E$, then for any $f\in C([0,1],E)$ there exists a continuous map $S:[0,1]\times E\to E$, $(t,x)\mapsto S_tx$, such that for any $x_0\in E$ the function $x(t)=S_tx_0$ is a solution of the Cauchy problem $\dot x(t)=Tx(t)+f(t)$, $x(0)=x_0$ (they call $S$ a fundamental system of solutions of the equation $\dot x=Tx+f$). We prove the same theorem, replacing "continuous" by "sequentially continuous", for locally convex spaces from a class which contains strict inductive limits of Fr\'echet spaces and strong duals of Fr\'echet--Schwartz spaces and is closed with respect to finite products and sequentially closed subspaces. The key point of the proof is an extension of the theorem on the existence of a sequentially continuous right inverse of any surjective sequentially continuous linear operator to a class of non-metrizable locally convex spaces.

Relevance: 30.00%

Abstract:

A problem with the use of the geostatistical kriging error for optimal sampling design is that the design does not adapt locally to the character of spatial variation, because a stationary variogram or covariance function is a parameter of the geostatistical model. The objective of this paper was to investigate the utility of non-stationary geostatistics for optimal sampling design. First, a contour data set of Wiltshire was split into 25 equal sub-regions and a local variogram was predicted for each. These variograms were fitted with models and the coefficients were used in kriging to select optimal sample spacings for each sub-region. Large differences existed between the designs for the whole region (based on the global variogram) and for the sub-regions (based on the local variograms). Second, a segmentation approach was used to divide a digital terrain model into separate segments. Segment-based variograms were predicted and fitted with models, and optimal sample spacings were then determined for the whole region and for the sub-regions. It was demonstrated that the global design was inadequate, grossly over-sampling some segments while under-sampling others.
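
For readers unfamiliar with the mechanics behind variogram-based sample spacing selection, the sketch below computes the ordinary kriging variance at the centre of a small square grid for several candidate spacings, using an illustrative exponential variogram (none of the parameter values come from the paper); a design procedure of the kind described would pick, per region or segment, the widest spacing whose kriging variance still meets the required tolerance.

    import numpy as np

    def exp_variogram(h, nugget=0.0, sill=1.0, range_a=20.0):
        """Illustrative exponential variogram model gamma(h)."""
        return nugget + (sill - nugget) * (1.0 - np.exp(-h / range_a))

    def ok_variance(sample_xy, x0, gamma):
        """Ordinary kriging variance at x0 given sample locations sample_xy."""
        n = len(sample_xy)
        d = np.linalg.norm(sample_xy[:, None, :] - sample_xy[None, :, :], axis=-1)
        g0 = gamma(np.linalg.norm(sample_xy - x0, axis=-1))
        # Ordinary kriging system in semivariance form, with a Lagrange multiplier.
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = gamma(d)
        A[n, n] = 0.0
        sol = np.linalg.solve(A, np.append(g0, 1.0))
        lam, mu = sol[:n], sol[n]
        return lam @ g0 + mu

    # Kriging variance at the centre of a 4 x 4 square grid for several spacings:
    # wider spacings give larger kriging variances, so a spacing can be chosen to
    # just meet a required variance tolerance in each region or segment.
    for spacing in (5.0, 10.0, 20.0):
        xs = np.arange(4) * spacing
        grid = np.array([(x, y) for x in xs for y in xs], dtype=float)
        centre = np.array([1.5 * spacing, 1.5 * spacing])
        print(spacing, ok_variance(grid, centre, exp_variogram))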