854 results for Gradient descent algorithms
Abstract:
Cremona developed a reduction theory for binary forms of degree 3 and 4 with integer coefficients, the motivation in the case of quartics being to improve 2-descent algorithms for elliptic curves over Q. In this paper we extend some of these results to forms of higher degree. One application of this is to the study of hyperelliptic curves.
Abstract:
Recognizing road surface conditions solely from data collected by the smartphone of a cyclist riding their bicycle is a research area that has so far received little attention. For this thesis, a dedicated application was developed which, combined with Python scripts, makes it possible to recognize different types of asphalt. The application collects data from the smartphone's built-in motion sensors, recording movements while the cyclist rides. The smartphone is mounted in a holder fixed to the handlebars and records data from the gyroscope, accelerometer, and magnetometer. The data are stored in CSV files, which are processed into a single dataset containing all the collected data together with features extracted by dedicated Python scripts. Each record is assigned a cluster determined by the results of K-means; these results are then used to train supervised algorithms whose goal is to recognize the road surface type from the data. For training, the dataset was split into two parts: a training set, from which the algorithms learn to classify the data, and a test set, on which the algorithms apply what they have learned and output the classification they deem appropriate. Comparing the algorithms' predictions with what the data actually represent yields a measure of each algorithm's accuracy.
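As a minimal sketch of the pipeline this abstract describes, with synthetic data standing in for the extracted sensor features, K-means cluster labels can serve as targets for a supervised classifier whose accuracy is then measured on a held-out test set (scikit-learn assumed; all names here are illustrative, not the thesis code):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Stand-in for features extracted from gyroscope/accelerometer/magnetometer CSVs.
X = rng.normal(size=(600, 6))

# Unsupervised step: K-means assigns each record a cluster (a surface type).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Supervised step: learn the cluster labels on a training split,
# then measure accuracy on the test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"test accuracy: {acc:.2f}")
```

The accuracy computed on the test split is exactly the comparison the abstract describes: predictions versus the labels the records actually carry.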
Abstract:
A method for linearly constrained optimization which modifies and generalizes recent box-constraint optimization algorithms is introduced. The new algorithm is based on a relaxed form of Spectral Projected Gradient iterations. Intercalated with these projected steps, internal iterations restricted to faces of the polytope are performed, which enhance the efficiency of the algorithm. Convergence proofs are given, and numerical experiments are presented and discussed. Software supporting this paper is available through the Tango Project web page: http://www.ime.usp.br/~egbirgin/tango/.
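A minimal sketch of a spectral projected gradient iteration, for the simple box-constrained case rather than a general polytope (this is not the Tango implementation; a Barzilai-Borwein spectral step and a toy quadratic objective are assumed):

```python
import numpy as np

def spg_box(grad, x0, lo, hi, iters=200):
    """Bare spectral projected gradient sketch for box constraints:
    project each gradient step onto the box and update the step length
    with the Barzilai-Borwein (spectral) formula."""
    proj = lambda x: np.clip(x, lo, hi)
    x = proj(x0)
    lam = 1.0
    g = grad(x)
    for _ in range(iters):
        x_new = proj(x - lam * g)
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        lam = (s @ s) / sy if sy > 1e-12 else 1.0  # BB1 spectral step
        x, g = x_new, g_new
    return x

# Minimize ||x - c||^2 over the box [0, 1]^3 with c partly outside the box.
c = np.array([2.0, -1.0, 0.5])
x = spg_box(lambda x: 2 * (x - c), np.zeros(3), 0.0, 1.0)
print(np.round(x, 3))  # converges to the projection of c onto the box
```

For this strictly convex quadratic the iterates reach the box projection of c, here (1, 0, 0.5); the paper's contribution lies in relaxing these projected steps and intercalating face-restricted inner iterations, which the sketch omits.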
Abstract:
This paper presents vectorized methods of construction and descent of quadtrees that can be easily adapted to message passing parallel computing. A time complexity analysis for the present approach is also discussed. The proposed method of tree construction requires a hash table to index nodes of a linear quadtree in the breadth-first order. The hash is performed in two steps: an internal hash to index child nodes and an external hash to index nodes in the same level (depth). The quadtree descent is performed by considering each level as a vector segment of a linear quadtree, so that nodes of the same level can be processed concurrently. © 2012 Springer-Verlag.
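For a complete quadtree, breadth-first (level-order) indexing of a linear quadtree can be written in closed form, which may help make the abstract's indexing goal concrete; this is a simplified stand-in for the paper's two-step hash, using a Morton code within each level:

```python
def morton(x, y, level):
    """Interleave the bits of (x, y) to get a node's position within its level."""
    code = 0
    for i in range(level):
        code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return code

def bfs_index(x, y, level):
    """Level-order index in a complete linear quadtree: the levels above
    contain (4**level - 1) / 3 nodes in total, and the node's offset
    within its own level is its Morton code."""
    return (4 ** level - 1) // 3 + morton(x, y, level)

print(bfs_index(0, 0, 0))  # 0: the root
print(bfs_index(1, 1, 1))  # 4: last of the four level-1 children
```

Because all nodes of a level occupy a contiguous index range, each level can be treated as a vector segment and processed concurrently, which is the property the descent method above exploits.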
Abstract:
The UHPLC strategy, which combines sub-2 μm porous particles and ultra-high pressure (>1000 bar), was investigated against very high resolution criteria in both isocratic and gradient modes, with mobile phase temperatures between 30 and 90 °C. In isocratic mode, the experimental conditions that reach the maximal efficiency were determined using the kinetic plot representation for ΔP_max = 1000 bar. It was first confirmed that the molecular weight (MW) of the compounds is a critical parameter which should be considered in the construction of such curves. With a MW around 1000 g mol⁻¹, efficiencies as high as 300,000 plates could theoretically be attained using UHPLC at 30 °C. By limiting the column length to 450 mm, the maximal plate count was around 100,000. In gradient mode, the longest column does not provide the maximal peak capacity for a given analysis time in UHPLC. This was attributed to the fact that peak capacity is related not only to the plate number but also to the column dead time. Therefore, a compromise should be found: a 150 mm column should preferentially be selected for gradient lengths up to 60 min at 30 °C, while columns coupled in series (3 × 150 mm) were attractive only for t_grad > 250 min. Compared to 30 °C, peak capacities increased by about 20-30% for a constant gradient length at 90 °C, and gradient time decreased 2-fold for an identical peak capacity.
Abstract:
This paper proposes a field application of a high-level reinforcement learning (RL) control system for solving the action selection problem of an autonomous robot in a cable tracking task. The learning system is characterized by a direct policy search method for learning the internal state/action mapping. Policy-only algorithms may suffer from long convergence times when dealing with real robots. To speed up the process, the learning phase was carried out in a simulated environment and, in a second step, the policy was transferred to and tested successfully on a real robot. Future work will continue the learning process online on the real robot while it performs the task. We demonstrate the feasibility of the approach with real experiments on the underwater robot ICTINEU AUV.
Abstract:
The TGF-β homolog Decapentaplegic (Dpp) acts as a secreted morphogen in the Drosophila wing disc, and spreads through the target tissue in order to form a long range concentration gradient. Despite extensive studies, the mechanism by which the Dpp gradient is formed remains controversial. Two opposing mechanisms have been proposed: receptor-mediated transcytosis (RMT) and restricted extracellular diffusion (RED). In these scenarios the receptor for Dpp plays different roles. In the RMT model it is essential for endocytosis, re-secretion, and thus transport of Dpp, whereas in the RED model it merely modulates Dpp distribution by binding it at the cell surface for internalization and subsequent degradation. Here we analyzed the effect of receptor mutant clones on the Dpp profile in quantitative mathematical models representing transport by either RMT or RED. We then, using novel genetic tools, experimentally monitored the actual Dpp gradient in wing discs containing receptor gain-of-function and loss-of-function clones. Gain-of-function clones reveal that Dpp binds in vivo strongly to the type I receptor Thick veins, but not to the type II receptor Punt. Importantly, results with the loss-of-function clones then refute the RMT model for Dpp gradient formation, while supporting the RED model in which the majority of Dpp is not bound to Thick veins. Together our results show that receptor-mediated transcytosis cannot account for Dpp gradient formation, and support restricted extracellular diffusion as the main mechanism for Dpp dispersal. The properties of this mechanism, in which only a minority of Dpp is receptor-bound, may facilitate long-range distribution.
Abstract:
When dealing with nonlinear blind processing algorithms (deconvolution or post-nonlinear source separation), complex mathematical estimations must be performed, resulting in very slow algorithms. This is the case, for example, in speech processing, spike-signal deconvolution, and microarray data analysis. In this paper, we propose a simple method to reduce the computational time of the inversion of Wiener systems or the separation of post-nonlinear mixtures, by using a linear approximation in a minimum mutual information algorithm. Simulation results demonstrate that linear spline interpolation is fast and accurate, obtaining very good results (similar to those obtained without approximation) while computational time is dramatically decreased. Cubic spline interpolation also obtains similarly good results but, due to its intrinsic complexity, makes the overall algorithm much slower and hence not useful for our purpose.
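The accuracy trade-off the abstract reports can be reproduced on a toy scale: approximate a smooth nonlinearity from a few knots with a linear and with a cubic spline, and compare the errors on a dense grid (tanh here is only a stand-in for whatever compensating function the algorithm estimates; NumPy and SciPy assumed):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Tabulate a smooth nonlinearity at a modest number of knots, then
# evaluate both interpolants on a dense grid. Per evaluation, the
# linear interpolant is much cheaper, and with enough knots it is
# nearly as accurate as the cubic spline.
knots = np.linspace(-3, 3, 41)
values = np.tanh(knots)
x = np.linspace(-3, 3, 10_000)

lin = np.interp(x, knots, values)        # linear spline interpolation
cub = CubicSpline(knots, values)(x)      # cubic spline interpolation

err_lin = np.max(np.abs(lin - np.tanh(x)))
err_cub = np.max(np.abs(cub - np.tanh(x)))
print(err_lin, err_cub)  # both small; the cubic error is tighter
```

Both maximal errors are far below typical estimation noise in a blind setting, which is why the cheaper linear interpolation wins overall in the paper's timing comparison.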
Abstract:
This work focuses on the prediction of the two main nitrogenous variables that describe the water quality at the effluent of a wastewater treatment plant. We have developed two kinds of neural network architectures, based on considering either a single output or the usual five effluent variables that define water quality: suspended solids, biochemical organic matter, chemical organic matter, total nitrogen, and total Kjeldahl nitrogen. Two learning techniques, based on a classical adaptive gradient and on a Kalman filter, have been implemented. To improve generalization and performance, we have selected variables by means of genetic algorithms and fuzzy systems. The training, testing, and validation sets show that the final networks learn the simulated available data well, especially for total nitrogen.
Abstract:
PURPOSE: To improve the traditional Nyquist ghost correction approach in echo planar imaging (EPI) at high fields, via schemes based on the reversal of the EPI readout gradient polarity for every other volume throughout a functional magnetic resonance imaging (fMRI) acquisition train. MATERIALS AND METHODS: An EPI sequence in which the readout gradient was inverted every other volume was implemented on two ultrahigh-field systems. Phantom images and fMRI data were acquired to evaluate ghost intensities and the presence of false-positive blood oxygenation level-dependent (BOLD) signal with and without ghost correction. Three different algorithms for ghost correction of alternating readout EPI were compared. RESULTS: Irrespective of the chosen processing approach, ghosting was significantly reduced (up to 70% lower intensity) in both rat brain images acquired on a 9.4T animal scanner and human brain images acquired at 7T, resulting in a reduction of sources of false-positive activation in fMRI data. CONCLUSION: It is concluded that at high B0 fields, substantial gains in Nyquist ghost correction of echo planar time series are possible by alternating the readout gradient every other volume.
Abstract:
Contemporary coronary magnetic resonance angiography techniques suffer from signal-to-noise ratio (SNR) constraints. We propose a method to enhance SNR in gradient echo coronary magnetic resonance angiography by using sensitivity encoding (SENSE). While the use of sensitivity encoding to improve SNR seems counterintuitive, it can be exploited by reducing the number of radiofrequency excitations during the acquisition window while lowering the signal readout bandwidth, thereby improving the radiofrequency receive to radiofrequency transmit duty cycle. Under certain conditions, this leads to improved SNR. The use of sensitivity encoding for improved SNR in three-dimensional coronary magnetic resonance angiography is investigated using numerical simulations together with an in vitro and an in vivo study. A maximum 55% SNR enhancement for coronary magnetic resonance angiography was found both in vitro and in vivo, in good agreement with the numerical simulations. This method is most suitable for spoiled gradient echo coronary magnetic resonance angiography in which a high temporal and spatial resolution is required.
Abstract:
Concentration gradients provide spatial information for tissue patterning and cell organization, and their robustness under natural fluctuations is an evolutionary advantage. In rod-shaped Schizosaccharomyces pombe cells, the DYRK-family kinase Pom1 gradients control cell division timing and placement. Upon dephosphorylation by a Tea4-phosphatase complex, Pom1 associates with the plasma membrane at cell poles, where it diffuses and detaches upon auto-phosphorylation. Here, we demonstrate that Pom1 auto-phosphorylates intermolecularly, both in vitro and in vivo, which confers robustness to the gradient. Quantitative imaging reveals this robustness through two properties of the system: the Pom1 gradient amplitude is inversely correlated with its decay length and is buffered against fluctuations in Tea4 levels. A theoretical model of Pom1 gradient formation through intermolecular auto-phosphorylation predicts both properties qualitatively and quantitatively. This provides a telling example where gradient robustness through super-linear decay, a principle hypothesized a decade ago, is achieved through autocatalysis. Concentration-dependent autocatalysis may be a widely used simple feedback to buffer biological activities.
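The super-linear decay principle invoked here can be illustrated with a minimal one-dimensional reaction-diffusion model (a textbook simplification, not the paper's full model): if removal is second order in the concentration, as for a bimolecular auto-phosphorylation, the steady-state profile is a power law rather than an exponential,

```latex
\frac{\partial C}{\partial t} = D\,\frac{\partial^2 C}{\partial x^2} - k\,C^2,
\qquad
C_{\mathrm{ss}}(x) = \frac{6D}{k\,(x + x_0)^2},
```

where the single constant $x_0$ sets both the amplitude $6D/(k x_0^2)$ and the decay length scale: a higher-amplitude profile (smaller $x_0$) is necessarily a shorter-ranged one, which is the inverse amplitude/decay-length coupling the imaging data show.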
Abstract:
A Wiener system is a linear time-invariant filter, followed by an invertible nonlinear distortion. Assuming that the input signal is an independent and identically distributed (iid) sequence, we propose an algorithm for estimating the input signal only by observing the output of the Wiener system. The algorithm is based on minimizing the mutual information of the output samples, by means of a steepest descent gradient approach.
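The structure of a Wiener system, and why it is invertible at all, can be shown in a few lines; note this toy uses known components (filter and nonlinearity), whereas the paper's algorithm estimates the inverse blindly by minimizing the mutual information of the output samples:

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.uniform(-1, 1, 2000)              # iid input sequence
h = np.array([1.0, 0.5, 0.25])            # linear time-invariant (FIR) filter

# Wiener system: LTI filter followed by an invertible nonlinearity.
y = np.tanh(np.convolve(s, h, mode="full")[: len(s)])

# With the components known (unlike the blind setting), inversion is
# nonlinearity first, then recursive deconvolution of the stable filter.
u = np.arctanh(np.clip(y, -0.999999, 0.999999))
s_hat = np.zeros_like(u)
for n in range(len(u)):
    acc = u[n]
    for k in range(1, len(h)):
        if n - k >= 0:
            acc -= h[k] * s_hat[n - k]
    s_hat[n] = acc / h[0]
print(np.max(np.abs(s_hat - s)))          # ≈ 0 up to floating-point error
```

The blind setting replaces the two known inverses with parameterized ones and adjusts their parameters by steepest descent on the estimated mutual information, which is minimized exactly when the recovered samples are again iid.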
Abstract:
Location decisions are often subject to dynamic aspects such as changes in customer demand. Answering these requires greater flexibility in the location and capacity of facilities. Even when demand can be forecast, finding the optimal schedule for deploying and dynamically adjusting capacities remains a challenge. In this thesis, we focus on multi-period facility location problems that allow dynamic capacity adjustment, in particular those with complex cost structures. We study these problems from different operations research perspectives, presenting and comparing several mixed-integer linear programming (MILP) models, evaluating their use in practice, and developing efficient solution algorithms. The thesis is divided into four parts. First, we present the industrial context that motivated this work: a forestry company that needs to locate camps to house forest workers. We present a MILP model that allows the construction of new camps as well as the expansion, relocation, and temporary partial closure of existing ones. The model uses particular capacity constraints together with a multi-level economy-of-scale cost structure. Its usefulness is assessed through two case studies. The second part introduces the dynamic facility location problem with generalized modular capacities. The model generalizes several dynamic location problems and provides tighter linear-relaxation bounds than their specialized formulations.
The model can solve location problems in which capacity-change costs are defined for every pair of capacity levels, as is the case in the industrial problem mentioned above. It is applied to three special cases: capacity expansion and reduction, temporary facility closure, and the combination of both. We prove dominance relations between our formulation and the existing models for these special cases. Computational experiments on a large set of randomly generated instances with up to 100 facilities and 1000 customers show that our model obtains optimal solutions faster than the existing specialized formulations. Given the difficulty of the previous models on large instances, the third part of the thesis proposes Lagrangian heuristics. Based on subgradient and bundle methods, they find good-quality solutions even for large instances with up to 250 facilities and 1000 customers. We then improve the solution quality by solving a restricted MILP model that exploits information gathered while solving the Lagrangian dual. Computational results show that the heuristics quickly provide good-quality solutions, even for instances on which generic solvers fail to find feasible solutions. Finally, we adapt the heuristics to solve the industrial problem. Two different relaxations are proposed and compared, and extensions of the previous concepts are presented to ensure reliable solution within a reasonable time.
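The subgradient method at the core of such Lagrangian heuristics can be sketched on a toy problem (this is a generic textbook sketch, not the thesis's facility-location relaxation): relax a coupling constraint with a multiplier and maximize the resulting dual by projected subgradient steps with diminishing step sizes.

```python
import numpy as np

# Toy problem: min  x1 + 2*x2   s.t.  x1 + x2 >= 1,  x in [0,1]^2.
# Relax the coupling constraint with a multiplier lam >= 0; the inner
# minimization then separates over the variables.
c = np.array([1.0, 2.0])

def inner_min(lam):
    """Minimize c.x + lam*(1 - x1 - x2) over the box: pick x_i = 1
    only when its reduced cost c_i - lam is negative."""
    x = (c - lam < 0).astype(float)
    return x, c @ x + lam * (1 - x.sum())

lam, best = 0.0, -np.inf
for t in range(1, 200):
    x, theta = inner_min(lam)
    best = max(best, theta)               # each dual value is a lower bound
    g = 1 - x.sum()                       # subgradient of the dual at lam
    lam = max(0.0, lam + g / t)           # projected step, diminishing size
print(best)  # 1.0, the optimal value of the toy problem
```

The dual values are lower bounds on the optimum, and the primal solutions of the inner problems are the raw material that a Lagrangian heuristic repairs into feasible solutions; bundle methods replace the simple step above with a stabilized model of the dual.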