989 results for Term Splitting Algorithm
Abstract:
Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem in which the regularization term plays a central role in the reconstruction quality. The literature has considered several regularization terms, such as Dirichlet/Laplacian energy [1], Total Variation (TV)-based energies [2,3] and, more recently, non-local means [4]. Although TV energies are quite attractive because of their ability to preserve edges, only standard explicit steepest-gradient techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal with respect to the asymptotic and iterative convergence speeds O(1/n²) and O(1/√ε), whereas existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
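The abstract cites the accelerated rates O(1/n²) and O(1/√ε) without giving the iteration itself; these are the rates characteristic of Nesterov/FISTA-type schemes. As a minimal, hedged sketch (a 1-D TV denoising toy solved on the dual, not the authors' fetal reconstruction pipeline; the function name and the Lipschitz bound L = 4 are standard illustrative choices):

```python
import numpy as np

def tv_denoise_fista(b, lam, n_iter=300):
    """FISTA on the dual of min_x 0.5*||x - b||^2 + lam*||Dx||_1,
    where (Dx)_j = x[j+1] - x[j]; O(1/n^2) rate (Beck & Teboulle)."""
    n = len(b)
    u = np.zeros(n - 1)          # dual variable, one per difference
    v = u.copy()                 # extrapolated point
    t = 1.0
    L = 4.0                      # Lipschitz constant of D D^T

    def Dt(w):                   # adjoint: (D^T w)_i = w_{i-1} - w_i
        out = np.zeros(n)
        out[:-1] -= w
        out[1:] += w
        return out

    for _ in range(n_iter):
        x = b - Dt(v)                                   # primal point
        u_new = np.clip(v + np.diff(x) / L, -lam, lam)  # projected gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        v = u_new + ((t - 1.0) / t_new) * (u_new - u)   # momentum
        u, t = u_new, t_new
    return b - Dt(u)             # recover the primal solution

# usage: denoise a noisy piecewise-constant signal
sig = np.repeat([0.0, 1.0, 0.3], 50) + 0.1 * np.random.randn(150)
clean = tv_denoise_fista(sig, lam=2.0)
```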
Abstract:
In the world of transport management, the term 'anticipation' is gradually replacing 'reaction'. Indeed, the ability to forecast traffic evolution in a network should ideally form the basis for many traffic management strategies and multiple ITS applications. Real-time prediction capabilities are therefore becoming a concrete need for the management of networks, in both urban and interurban environments, and today's road operator has increasingly complex and exacting requirements. Recognising temporal patterns in traffic, or the manner in which sequential traffic events evolve over time, has been an important consideration in short-term traffic forecasting. However, little work has been conducted in the area of identifying traffic patterns or associating their occurrence with prevailing traffic conditions. This paper presents a framework for traffic pattern identification based on finite mixture models, using the EM algorithm for parameter estimation. The computations have been carried out using the traffic data available from an urban network.
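The abstract names finite mixture models fitted by EM without giving the iteration. As a hedged, generic illustration (a 1-D Gaussian mixture, not the paper's traffic-specific model; all names are invented for this sketch), EM alternates a responsibility (E) step and a weighted re-estimation (M) step:

```python
import numpy as np

def em_gmm_1d(x, k, n_iter=100):
    """EM for a 1-D Gaussian mixture model: a generic sketch of the
    parameter-estimation step described in the abstract."""
    rng = np.random.default_rng(0)
    mu = rng.choice(x, k)                # initial means
    var = np.full(k, np.var(x))          # initial variances
    w = np.full(k, 1.0 / k)              # mixing weights
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        d = (x[:, None] - mu) ** 2
        r = w * np.exp(-0.5 * d / var) / np.sqrt(2 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood re-estimates
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        w = nk / len(x)
    return w, mu, var

# usage: two synthetic traffic regimes (free-flow vs congested speeds)
speeds = np.concatenate([np.random.normal(90, 8, 400),
                         np.random.normal(35, 6, 200)])
weights, means, variances = em_gmm_1d(speeds, k=2)
```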
Abstract:
Decision trees are very powerful tools for classification in data mining tasks that involve different types of attributes. When handling numeric data sets, the attributes are usually converted first to categorical types and then classified using information-gain concepts. Information gain is a very popular and useful concept which tells you whether, as far as information content is concerned, any benefit is gained by splitting on a given attribute. But this process is computationally intensive for large data sets. Moreover, popular decision tree algorithms such as ID3 cannot handle numeric data sets. This paper proposes statistical variance as an alternative to information gain, together with the statistical mean as the split point for attributes, in completely numerical data sets. The new algorithm has been shown to be competitive with its information-gain counterpart C4.5, and with many existing decision tree algorithms, on the standard UCI benchmark datasets using the ANOVA test. The specific advantages of this new algorithm are that it avoids the computational overhead of information-gain computation for large data sets with many attributes, and it avoids the time-consuming conversion of huge numeric data sets to categorical data. In summary, huge numeric datasets can be submitted directly to this algorithm without any attribute mappings or information-gain computations. It also blends the two closely related fields of statistics and data mining.
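A hedged sketch of the splitting rule the abstract describes: split a numeric attribute at its mean and score the split by the reduction in statistical variance (with class labels treated as numeric, as the fully-numerical setting permits). The function names are invented for illustration, and the paper's exact scoring may differ in detail:

```python
import numpy as np

def variance_reduction_at_mean(values, targets):
    """Score a binary split of a numeric attribute at its mean by the
    drop in target variance, used in place of information gain."""
    split = values.mean()                        # mean as the split point
    left, right = targets[values <= split], targets[values > split]
    if len(left) == 0 or len(right) == 0:
        return split, 0.0
    n = len(targets)
    weighted = (len(left) * left.var() + len(right) * right.var()) / n
    return split, targets.var() - weighted       # variance reduction

def best_attribute(X, y):
    """Pick the attribute whose mean-split reduces variance the most."""
    scores = [variance_reduction_at_mean(X[:, j], y)[1]
              for j in range(X.shape[1])]
    return int(np.argmax(scores))
```

Note that no sorting over candidate thresholds and no entropy computation is needed, which is the computational saving the abstract claims.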
Abstract:
This paper presents a parallel genetic algorithm for the Steiner Problem in Networks (SPN). Several previous papers have proposed the adoption of GAs and other metaheuristics to solve the SPN, demonstrating the validity of their approaches. This work differs from them in two main respects: the dimension and characteristics of the networks adopted in the experiments, and the aim from which it originated. The motivation of this work was to build a term of comparison for validating deterministic and computationally inexpensive algorithms that can be used in practical engineering applications, such as multicast transmission in the Internet. On the other hand, the large dimensions of our sample networks require the adoption of a parallel implementation of the Steiner GA, which is able to deal with such large problem instances.
Abstract:
Planning is a vital element of project management, but it is still not recognized as a process variable. Its objective should be to outperform the initially defined processes, and to foresee and overcome possible undesirable events. Detailed task-level master planning is unrealistic, since one cannot accurately predict all the requirements and obstacles before work has even started. The process planning methodology (PPM) has thus been developed to overcome common problems arising from overwhelming project complexity. The essential elements of the PPM are the process planning group (PPG), including a control team that dynamically links the production/site and management, and the planning algorithm embodied within two continuous-improvement loops. The methodology was tested on a factory project in Slovenia and in four successive projects of a similar nature. In addition to a number of improvement ideas and enhanced communication, the applied PPM resulted in 32% higher total productivity and 6% total savings, and created a synergistic project environment.
Abstract:
We have developed a novel hill-climbing genetic algorithm (GA) for the simulation of protein folding. The program (written in C) builds a set of Cartesian points to represent an unfolded polypeptide's backbone. The dihedral angles determining the chain's configuration are stored in an array of chromosome structures that is copied and then mutated. The fitness of the mutated chain's configuration is determined by its radius of gyration. A four-helix bundle was used to optimise simulation conditions, and the program was compared with other, larger genetic algorithms on a variety of structures. The program ran 50% faster than other GA programs. Overall, tests on 100 non-redundant structures gave results comparable to other genetic algorithms, with the hill-climbing program running between 20% and 50% faster. Examples including crambin, cytochrome c, cytochrome B and hemerythrin gave good secondary-structure fits, with overall alpha-carbon rms deviations of between 5 and 5.6 Angstrom using an optimised hydrophobic term in the fitness function. (C) 2003 Elsevier Ltd. All rights reserved.
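A toy, hedged sketch of the loop the abstract describes: a chromosome of backbone angles is copied, mutated, and kept only when the radius-of-gyration fitness improves. A 2-D unit-step chain stands in for the Cartesian backbone, and all names are illustrative (the original program is in C and uses a richer fitness):

```python
import numpy as np

def radius_of_gyration(angles):
    """Build a 2-D unit-step chain from turning angles and return Rg."""
    heading = np.cumsum(angles)
    steps = np.stack([np.cos(heading), np.sin(heading)], axis=1)
    coords = np.cumsum(steps, axis=0)
    return np.sqrt(((coords - coords.mean(axis=0)) ** 2).sum(axis=1).mean())

def hill_climb_fold(n_residues=60, n_steps=20000, seed=1):
    """Hill-climbing GA: mutate one angle in a copied chromosome and
    accept only improvements (here, a more compact chain)."""
    rng = np.random.default_rng(seed)
    chrom = rng.uniform(-np.pi, np.pi, n_residues)   # chromosome of angles
    fit = radius_of_gyration(chrom)
    for _ in range(n_steps):
        child = chrom.copy()                         # copy, then mutate
        child[rng.integers(n_residues)] += rng.normal(0, 0.3)
        f = radius_of_gyration(child)
        if f < fit:                                  # greedy acceptance
            chrom, fit = child, f
    return chrom, fit
```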
Abstract:
An efficient algorithm is presented for the solution of the equations of isentropic gas dynamics with a general convex gas law. The scheme is based on solving linearized Riemann problems approximately and, in more than one dimension, incorporates operator splitting. In particular, only two function evaluations in each computational cell are required. The scheme is applied to a standard test problem in gas dynamics for a polytropic gas.
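A hedged, first-order sketch of the kind of scheme the abstract describes: a linearised-Riemann (Roe-type) interface flux for the 1-D isentropic equations, assuming a polytropic law p = κρ^γ. The averaging choices and names below are standard textbook ones, not necessarily the paper's, and an entropy fix is omitted for brevity:

```python
import numpy as np

GAMMA, KAPPA = 1.4, 1.0            # polytropic gas law p = KAPPA * rho**GAMMA

def roe_flux(rhoL, mL, rhoR, mR):
    """Linearised-Riemann interface flux for U = (rho, m), m = rho*u."""
    uL, uR = mL / rhoL, mR / rhoR
    pL, pR = KAPPA * rhoL**GAMMA, KAPPA * rhoR**GAMMA
    sL, sR = np.sqrt(rhoL), np.sqrt(rhoR)
    u = (sL * uL + sR * uR) / (sL + sR)          # Roe-averaged velocity
    # averaged sound speed: secant of the pressure law where possible
    c2 = np.where(np.abs(rhoR - rhoL) > 1e-12,
                  (pR - pL) / (rhoR - rhoL),
                  GAMMA * KAPPA * rhoL**(GAMMA - 1.0))
    c = np.sqrt(c2)
    dr, dm = rhoR - rhoL, mR - mL
    a2 = (dm - (u - c) * dr) / (2.0 * c)         # wave strengths
    a1 = dr - a2
    FL = np.stack([mL, mL * uL + pL])
    FR = np.stack([mR, mR * uR + pR])
    diss = (np.abs(u - c) * a1 * np.stack([np.ones_like(u), u - c])
            + np.abs(u + c) * a2 * np.stack([np.ones_like(u), u + c]))
    return 0.5 * (FL + FR) - 0.5 * diss

def step(rho, m, dt, dx):
    """First-order Godunov update with the Roe-type flux."""
    F = roe_flux(rho[:-1], m[:-1], rho[1:], m[1:])
    rho[1:-1] -= dt / dx * (F[0, 1:] - F[0, :-1])
    m[1:-1] -= dt / dx * (F[1, 1:] - F[1, :-1])
    return rho, m
```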
Abstract:
An efficient algorithm based on flux difference splitting is presented for the solution of the two-dimensional shallow water equations in a generalised coordinate system. The scheme is based on solving linearised Riemann problems approximately and, in more than one dimension, incorporates operator splitting. The scheme has good jump-capturing properties and the advantage of using body-fitted meshes. Numerical results are shown for flow past a circular obstruction.
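The "operator splitting" that this and the neighbouring abstracts invoke reduces a multi-dimensional update to a sequence of 1-D sweeps. A hedged, generic sketch (first-order Godunov splitting on 2-D linear advection, with a simple upwind sweep standing in for the shallow-water Riemann solver; names are illustrative):

```python
import numpy as np

def upwind_sweep(q, vel, dt, dx, axis):
    """One first-order upwind sweep along the given axis
    (positive velocity assumed, periodic boundaries)."""
    return q - vel * dt / dx * (q - np.roll(q, 1, axis=axis))

def split_step(q, u, v, dt, dx, dy):
    """Godunov operator splitting: solve the x-problem, then the
    y-problem, each with a 1-D method."""
    q = upwind_sweep(q, u, dt, dx, axis=0)   # x sweep
    q = upwind_sweep(q, v, dt, dy, axis=1)   # y sweep
    return q
```

A Strang variant (half an x sweep, a full y sweep, half an x sweep) would recover second-order accuracy in time at the cost of one extra sweep.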
Abstract:
An efficient algorithm based on flux difference splitting is presented for the solution of the three-dimensional equations of isentropic flow in a generalised coordinate system, and with a general convex gas law. The scheme is based on solving linearised Riemann problems approximately and, in more than one dimension, incorporates operator splitting. The algorithm requires only one function evaluation of the gas law in each computational cell. The scheme has good shock-capturing properties and the advantage of using body-fitted meshes. Numerical results are shown for Mach 3 flow of air past a circular cylinder. Furthermore, the algorithm also applies to shallow water flows by employing the familiar gas dynamics analogy.
Abstract:
We consider problems of splitting and connectivity augmentation in hypergraphs. In a hypergraph G = (V + s, E), to split two edges su, sv is to replace them with a single edge uv. We are interested in doing this in such a way as to preserve a defined level of connectivity in V. The splitting technique is often used as a way of adding new edges into a graph or hypergraph, so as to augment the connectivity to some prescribed level. We begin by providing a short history of work done in this area. Then several preliminary results are given in a general form so that they may be used to tackle several problems. We then analyse the hypergraphs G = (V + s, E) for which there is no split preserving the local edge-connectivity present in V. We provide two structural theorems, one of which implies a slight extension to Mader's classical splitting theorem. We also provide a characterisation of the hypergraphs for which there is no such "good" split, and a splitting result concerned with a specialisation of the local-connectivity function. We then use our splitting results to provide an upper bound on the smallest number of size-two edges we must add to any given hypergraph to ensure that in the resulting hypergraph we have λ(x, y) ≥ r(x, y) for all x, y in V, where r is an integer-valued, symmetric requirement function on V × V. This is the so-called "local edge-connectivity augmentation problem" for hypergraphs. We also provide an extension to a theorem of Szigeti, about augmenting to satisfy a requirement r, but using hyperedges. Next, in a result born of collaborative work with Zoltán Király from Budapest, we show that the local-connectivity augmentation problem is NP-complete for hypergraphs. Lastly, we concern ourselves with an augmentation problem that includes a locational constraint. The premise is that we are given a hypergraph H = (V, E) with a bipartition P = {P1, P2} of V and asked to augment it with size-two edges, so that the result is k-edge-connected and has no new edge contained in some Pi. We consider the splitting technique and describe the obstacles that prevent us from forming "good" splits. From this we deduce results about which hypergraphs have a complete Pk-split. This leads to a minimax result on the optimal number of edges required, and a polynomial algorithm to provide an optimal augmentation.
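A hedged toy of the basic splitting-off operation for ordinary multigraphs (not hypergraphs, which need the thesis's extra machinery): split off su, sv into uv and check by brute force, via max-flow, that the local edge-connectivities within V survive. All names are invented for this sketch:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp on an integer capacity matrix; with symmetric edge
    multiplicities this computes the local edge-connectivity λ(s, t)."""
    n, flow = len(cap), 0
    res = [row[:] for row in cap]          # residual capacities
    while True:
        par = [-1] * n
        par[s] = s
        dq = deque([s])
        while dq and par[t] == -1:
            x = dq.popleft()
            for y in range(n):
                if res[x][y] > 0 and par[y] == -1:
                    par[y] = x
                    dq.append(y)
        if par[t] == -1:
            return flow
        aug, y = float("inf"), t           # bottleneck on the path
        while y != s:
            aug = min(aug, res[par[y]][y])
            y = par[y]
        y = t
        while y != s:
            res[par[y]][y] -= aug
            res[y][par[y]] += aug
            y = par[y]
        flow += aug

def split_preserves(cap, s, u, v, pairs):
    """Split off su, sv into uv and report whether λ(x, y) is preserved
    for every pair (x, y) in `pairs`; the split is undone before returning."""
    before = {p: max_flow(cap, *p) for p in pairs}
    for a, b, d in ((s, u, -1), (s, v, -1), (u, v, +1)):
        cap[a][b] += d
        cap[b][a] += d
    ok = all(max_flow(cap, *p) >= before[p] for p in pairs)
    for a, b, d in ((s, u, +1), (s, v, +1), (u, v, -1)):
        cap[a][b] += d
        cap[b][a] += d
    return ok
```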
Abstract:
We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast R 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119–40; Potthast R 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731–42). The task is to separate the sound fields uℓ, ℓ = 1, ..., n of sound sources supported in different bounded domains G1, ..., Gn from measurements of the field on some microphone array; mathematically speaking, from the knowledge of the sum of the fields u = u1 + ⋯ + un on some open subset Λ of a plane. The main idea of the scheme is to calculate filter functions gℓ and to construct uℓ, for ℓ = 1, ..., n, from u|Λ in the form uℓ(x) = ∫Λ gℓ(x, y) u(y) ds(y). We provide the complete mathematical theory for the field splitting via the point source method. In particular, we describe uniqueness, solvability of the problem, and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real-data measurements carried out at the Institute of Sound and Vibration Research in Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online.
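A hedged numerical sketch of the point source method's core step, following the cited Potthast papers in outline rather than the authors' exact implementation: a filter g is fitted, with Tikhonov regularization, so that a superposition of point sources placed at the microphones reproduces a point source at the reconstruction point x; the field at x is then a weighted sum of the measurements. All names are illustrative:

```python
import numpy as np

def phi(x, y, k):
    """Free-space Helmholtz fundamental solution in 3-D."""
    r = np.linalg.norm(np.asarray(x, float) - np.asarray(y, float))
    return np.exp(1j * k * r) / (4.0 * np.pi * r)

def psm_filter(x, mics, test_pts, k, alpha=1e-6):
    """Tikhonov-regularised filter g solving, in the least-squares sense,
    sum_j g[j] * phi(mics[j], z) ~ phi(x, z) over the test points z.
    The field at x is then approximated by g @ u_measured."""
    A = np.array([[phi(m, z, k) for m in mics] for z in test_pts])
    b = np.array([phi(x, z, k) for z in test_pts])
    AH = A.conj().T
    return np.linalg.solve(AH @ A + alpha * np.eye(len(mics)), AH @ b)
```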
Abstract:
We propose a new sparse model construction method aimed at maximizing a model's generalisation capability for a large class of linear-in-the-parameters models. The coordinate descent optimization algorithm is employed with a modified l1-penalized least squares cost function in order to estimate a single parameter and its regularization parameter simultaneously, based on the leave-one-out mean square error (LOOMSE). Our original contribution is to derive a closed form of the optimal LOOMSE regularization parameter for a single-term model, for which we show that the LOOMSE can be computed analytically without actually splitting the data set, leading to a very simple parameter estimation method. We then integrate the new results within the coordinate descent optimization algorithm to update the model parameters one at a time for linear-in-the-parameters models. Consequently a fully automated procedure is achieved, without resorting to any other validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
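The closed-form LOOMSE rule is the paper's own contribution and is not reproduced here. As a hedged sketch of the surrounding machinery, below is the standard cyclic coordinate-descent update for an l1-penalized linear-in-the-parameters model, updating one parameter at a time via soft-thresholding (names invented; the penalty `lam` is fixed, whereas the paper tunes it per term by the LOOMSE):

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def coordinate_descent_l1(P, y, lam, n_sweeps=50):
    """Cyclic coordinate descent for
    min_theta 0.5*||y - P theta||^2 + lam*||theta||_1
    over a linear-in-the-parameters model y ~ P theta."""
    n, m = P.shape
    theta = np.zeros(m)
    r = y - P @ theta                        # running residual
    col_sq = (P ** 2).sum(axis=0)            # per-column squared norms
    for _ in range(n_sweeps):
        for j in range(m):
            r += P[:, j] * theta[j]          # remove term j from residual
            rho = P[:, j] @ r
            theta[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= P[:, j] * theta[j]          # put the updated term back
    return theta
```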
Abstract:
In this paper, we develop a novel constrained recursive least squares algorithm for adaptively combining a set of given multiple models. With data available in an online fashion, the linear combination coefficients of the submodels are adapted via the proposed algorithm. We propose to minimize the mean square error with a forgetting factor, and to apply a sum-to-one constraint to the combination parameters. Moreover, an l1-norm constraint on the combination parameters is also applied, with the aim of achieving sparsity over the multiple models so that only a subset of models is selected into the final model. A weighted l2-norm is then applied as an approximation to the l1-norm term, so that at each time step a closed-form solution for the model combination parameters is available. The contribution of this paper is to derive the proposed constrained recursive least squares algorithm, which is computationally efficient by exploiting matrix theory. The effectiveness of the approach has been demonstrated using both simulated and real time-series examples.
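A hedged sketch of the recursive combination step the abstract outlines: an exponentially-forgetting RLS update of the combination weights, with the sum-to-one constraint re-imposed by a simple affine projection. The paper's closed-form constrained solution and its weighted l2 sparsity term are not reproduced, and all names are illustrative:

```python
import numpy as np

class CombinerRLS:
    """Adaptive combination of submodel outputs by RLS with forgetting."""
    def __init__(self, n_models, forgetting=0.98, delta=100.0):
        self.w = np.full(n_models, 1.0 / n_models)   # combination weights
        self.P = delta * np.eye(n_models)            # inverse-covariance estimate
        self.lam = forgetting

    def update(self, preds, y):
        """preds: vector of submodel predictions; y: observed target."""
        Pp = self.P @ preds
        k = Pp / (self.lam + preds @ Pp)             # gain vector
        self.w += k * (y - preds @ self.w)           # RLS correction
        self.P = (self.P - np.outer(k, Pp)) / self.lam
        self.w += (1.0 - self.w.sum()) / len(self.w) # re-impose sum-to-one
        return preds @ self.w                        # combined prediction
```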
Abstract:
Numerous protocols using mouse embryonic stem (ES) cells as a model for the in vitro study of the functional properties and features of neurons have been developed. Most of these protocols are short-lasting, and therefore do not allow a careful analysis of the maturation, aging, and death processes of neurons. We describe here a novel and efficient long-lasting protocol for the in vitro differentiation of ES cells into neuronal cells. It consists of obtaining embryoid bodies, followed by induction of neuronal differentiation of the nonadherent embryoid bodies with retinoic acid (three-dimensional model), which further allows their adherence and the formation of adherent neurospheres (AN, bi-dimensional model). The AN can be maintained for at least 12 weeks in culture under repetitive mechanical splitting, providing a constant microenvironment (in vitro niche) for the neuronal progenitor cells while avoiding mechanical dissociation of the AN. The expression of neuron-specific proteins, such as nestin, sox1, beta III-tubulin, microtubule-associated protein 2, neurofilament medium protein, Tau, neuronal nuclei marker, gamma-aminobutyric acid, and 5-hydroxytryptamine, was confirmed in these cells maintained for 3 months under several splittings. Additionally, the expression pattern of microtubule-associated proteins such as lissencephaly (Lis1) and nuclear distribution element-like (Ndel1), which have been shown to be essential for the differentiation and migration of neurons during embryogenesis, was also studied. As expected, both proteins were expressed in undifferentiated ES cells, AN, and nonrosette neurons, although with different spatial distributions in the AN. In contrast to previous studies using cultured neuronal cells derived from embryonic and adult tissues, only Ndel1 expression was observed in the centrosome region of early neuroblasts from the AN. Mature neurons obtained from ES cells in this work display ionic channels and oscillations of membrane electrical potential typical of electrically excitable cells, a characteristic feature of functional central nervous system (CNS) neurons. Taken together, our study demonstrates that AN are a long-term culture of neuronal cells that can be used to analyze the dynamics of neuronal differentiation. The protocol described here thus provides a new experimental model for studying neurological diseases associated with neuronal differentiation during early development, and represents a novel source of functional cells that can be used as tools for testing the effects of toxins and/or drugs on neuronal cells.
Abstract:
Using vector autoregressive (VAR) models and Monte-Carlo simulation methods, we investigate the potential gains, in forecasting accuracy and estimation uncertainty, of two commonly used restrictions arising from economic relationships. The first reduces the parameter space by imposing long-term restrictions on the behavior of economic variables, as discussed by the literature on cointegration; the second reduces the parameter space by imposing short-term restrictions, as discussed by the literature on serial-correlation common features (SCCF). Our simulations cover three important issues in model building, estimation, and forecasting. First, we examine the performance of standard and modified information criteria in choosing the lag length for cointegrated VARs with SCCF restrictions. Second, we compare the forecasting accuracy of fitted VARs when only cointegration restrictions are imposed and when cointegration and SCCF restrictions are jointly imposed. Third, we propose a new estimation algorithm in which short- and long-term restrictions interact to estimate the cointegrating and cofeature spaces, respectively. We have three basic results. First, ignoring SCCF restrictions has a high cost in terms of model selection, because standard information criteria too frequently choose inconsistent models, with too small a lag length. Criteria selecting lag and rank simultaneously have a superior performance in this case. Second, this translates into a superior forecasting performance of the restricted VECM over the VECM, with important improvements in forecasting accuracy, reaching more than 100% in extreme cases. Third, the new algorithm proposed here fares very well in terms of parameter estimation, even when we consider the estimation of long-term parameters, opening up the discussion of the joint estimation of short- and long-term parameters in VAR models.
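As a hedged backdrop to the comparisons the abstract discusses, here is plain OLS estimation of an unrestricted VAR(p) and a one-step forecast; the paper's restricted VECM/SCCF estimator is its own contribution and is not reproduced. Names and layout are illustrative:

```python
import numpy as np

def fit_var(Y, p):
    """OLS for an unrestricted VAR(p):
    Y_t = c + A_1 Y_{t-1} + ... + A_p Y_{t-p} + e_t, with Y of shape (T, k).
    Returns the stacked (1 + k*p, k) coefficient matrix."""
    T, k = Y.shape
    X = np.hstack([np.ones((T - p, 1))] +
                  [Y[p - i - 1:T - i - 1] for i in range(p)])  # lagged regressors
    B, *_ = np.linalg.lstsq(X, Y[p:], rcond=None)
    return B

def forecast_one_step(Y, B, p):
    """One-step-ahead forecast from the last p observations."""
    x = np.concatenate([[1.0]] + [Y[-i - 1] for i in range(p)])
    return x @ B
```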