935 results for super-dense computation
Abstract:
We announce the discovery of the transiting planet CoRoT-13b. Ground-based follow-up observations at the CFHT and IAC80 telescopes confirmed CoRoT's detection. The mass of the planet was measured with the HARPS spectrograph, and the properties of the host star were obtained by analyzing HIRES spectra from the Keck telescope. It is a hot Jupiter-like planet with an orbital period of 4.04 days, 1.3 Jupiter masses, 0.9 Jupiter radii, and a density of 2.34 g cm⁻³. It orbits a G0V star with T_eff = 5945 K, M_* = 1.09 M_⊙, R_* = 1.01 R_⊙, solar metallicity, a lithium content of +1.45 dex, and an estimated age of between 0.12 and 3.15 Gyr. The lithium abundance of the star is consistent with its effective temperature, activity level, and age range derived from the stellar analysis. The density of the planet is extreme for its mass and implies that heavy elements are present with a mass of between about 140 and 300 M_⊕.
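As a quick consistency check on the quoted bulk density, the mass and radius above can be combined directly; the Jupiter mass and radius constants below are standard reference values (an illustrative sketch, with small rounding differences expected):

```python
import math

M_JUP = 1.898e27   # kg, Jupiter mass (reference value)
R_JUP = 6.9911e7   # m, Jupiter mean radius (reference value)

mass = 1.3 * M_JUP                       # CoRoT-13b mass quoted above
radius = 0.9 * R_JUP                     # CoRoT-13b radius quoted above
volume = 4.0 / 3.0 * math.pi * radius**3
density = mass / volume                  # kg m^-3

# ~2.36 g cm^-3, close to the quoted 2.34 (differences come from rounding
# in the quoted mass and radius)
print(f"{density / 1000:.2f} g cm^-3")
```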
Abstract:
Very large spatially-referenced datasets, for example those derived from satellite-based sensors which sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over small time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real-time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful when analysing data in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly in the case where maximum likelihood methods are used. Although the storage requirements only scale linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively. Most modern commodity hardware has at least two processor cores, if not more. Other mechanisms for allowing parallel computation, such as Grid-based systems, are also becoming increasingly commonly available. However, currently there seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms we show that computational time can be significantly reduced. We demonstrate this with both sparsely sampled and densely sampled data on a variety of architectures, ranging from the common dual-core processor found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
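To make the likelihood-approximation idea concrete, the following minimal sketch sums independent Gaussian block log-likelihoods in parallel, in the spirit of Tresp-style composite approximations; it is not the authors' implementation, and the exponential covariance, the crude one-dimensional blocking, and all function names are illustrative assumptions:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def exp_cov(X, Y, sill=1.0, length=0.3, nugget=1e-6):
    """Exponential covariance between two sets of 2-D locations."""
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    C = sill * np.exp(-d / length)
    if X is Y:
        C = C + nugget * np.eye(len(X))
    return C

def block_loglik(args):
    """Exact Gaussian log-likelihood of a single block of observations."""
    Xb, zb, theta = args
    L = np.linalg.cholesky(exp_cov(Xb, Xb, *theta))
    alpha = np.linalg.solve(L, zb)
    return (-0.5 * alpha @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(zb) * np.log(2.0 * np.pi))

def approx_loglik(X, z, theta, n_blocks=8, workers=4):
    """Sum of independent block log-likelihoods, evaluated in parallel.
    Cross-block dependence is simply ignored, as in composite-likelihood schemes."""
    order = np.argsort(X[:, 0])                  # crude 1-D spatial blocking
    jobs = [(X[i], z[i], theta) for i in np.array_split(order, n_blocks)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(block_loglik, jobs))

# illustrative usage: approx_loglik(X, z, theta=(1.0, 0.3, 1e-6))
```

Each block costs O(b³) for block size b, so splitting n observations into n/b blocks reduces the cubic cost and the blocks can be evaluated on separate cores or nodes.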
Abstract:
Numerical methods related to Krylov subspaces are widely used in large sparse numerical linear algebra. Vectors in these subspaces are manipulated via their representation in orthonormal bases. Nowadays, on serial computers, the method of Arnoldi is considered a reliable technique for constructing such bases. However, although easily parallelizable, this technique does not scale as well as expected because of its communication requirements. In this work we examine alternative methods aimed at overcoming this drawback. Since they retrieve upon completion the same information as Arnoldi's algorithm does, they enable us to design a wide family of stable and scalable Krylov approximation methods for various parallel environments. We present timing results obtained from their implementation on two distributed-memory multiprocessor supercomputers: the Intel Paragon and the IBM Scalable POWERparallel SP2. (C) 1997 by John Wiley & Sons, Ltd.
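For reference, a minimal serial Arnoldi iteration is sketched below (an illustrative NumPy version, not the paper's alternative methods); the repeated inner products in the orthogonalization loop are what generate the communication that limits scalability in distributed-memory settings:

```python
import numpy as np

def arnoldi(A, v0, m):
    """Build an orthonormal basis V of the Krylov subspace K_m(A, v0) and the
    (m+1) x m upper Hessenberg matrix H such that A @ V[:, :m] = V @ H."""
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w           # inner products: the communication hot spot
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown: invariant subspace found
            return V[:, : j + 1], H[: j + 2, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```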
Abstract:
We evaluate the performance of different optimization techniques developed in the context of optical flow computation with different variational models. In particular, based on truncated Newton methods (TN), which have been an effective approach for large-scale unconstrained optimization, we develop the use of efficient multilevel schemes for computing the optical flow. More precisely, we compare the performance of a standard unidirectional multilevel algorithm, called multiresolution optimization (MR/OPT), to a bidirectional multilevel algorithm, called full multigrid optimization (FMG/OPT). The FMG/OPT algorithm treats the coarse grid correction as an optimization search direction and eventually scales it using a line search. Experimental results on different image sequences using four models of optical flow computation show that the FMG/OPT algorithm outperforms both the TN and MR/OPT algorithms in terms of computational work and the quality of the optical flow estimation.
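As a rough illustration of the unidirectional (MR/OPT-style) strategy only, a generic coarse-to-fine driver might look as follows; the full multigrid FMG/OPT cycle and the truncated Newton solver are not reproduced here, and `optimize` and `upsample` are hypothetical callables standing in for the inner solver and the prolongation operator:

```python
import numpy as np

def coarse_to_fine(pyramid, optimize, upsample):
    """Generic MR/OPT-style driver: solve on the coarsest image pair first,
    then upsample the flow estimate to warm-start each finer level.
    pyramid[0] is the finest (I1, I2) pair, pyramid[-1] the coarsest."""
    flow = None
    for level in reversed(range(len(pyramid))):          # coarse -> fine
        I1, I2 = pyramid[level]
        if flow is None:
            flow = np.zeros(I1.shape + (2,))             # start from zero flow
        else:
            flow = 2.0 * upsample(flow, I1.shape)        # rescale displacements
        flow = optimize(I1, I2, flow)                    # e.g. a truncated Newton solve
    return flow
```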
Abstract:
This master's thesis investigates speeding up the computation of disparity maps by means of interpolation. Using triangulation, a sparse disparity map is first constructed from a stereo image pair, after which a disparity map covering the whole image is obtained by interpolation. Triangulation requires knowing the image points in both cameras that correspond to the same real-world point. Even though the search region for corresponding points can be reduced from two dimensions to one, for example by using epipolar geometry, it is computationally more efficient to determine part of the disparity map by interpolation than to search for corresponding points in the stereo images. Moreover, because of the distance between the cameras of a stereo vision system, not all points of one image can be found in the other. It is therefore impossible to determine a disparity map covering the whole image from corresponding points alone. In this work, dynamic programming and a correlation method are used to find the corresponding points. Real-world surfaces are generally continuous, so in a geometric sense it is justified to approximate the surfaces depicted in the images by interpolation. There is also scientific evidence that human stereo vision interpolates object surfaces.
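As an illustrative sketch of the interpolation step only (the thesis pipeline itself uses triangulation, dynamic programming and correlation to obtain the sparse matches; the SciPy-based filling below and its function name are assumptions), a sparse disparity map could be densified like this:

```python
import numpy as np
from scipy.interpolate import griddata

def densify_disparity(sparse_disp):
    """Interpolate a sparse disparity map (NaN where no correspondence was
    found) into a dense map covering the whole image."""
    h, w = sparse_disp.shape
    ys, xs = np.nonzero(~np.isnan(sparse_disp))
    known = sparse_disp[ys, xs]
    gy, gx = np.mgrid[0:h, 0:w]
    dense = griddata((ys, xs), known, (gy, gx), method="linear")
    # fall back to nearest-neighbour where linear interpolation is undefined
    nearest = griddata((ys, xs), known, (gy, gx), method="nearest")
    return np.where(np.isnan(dense), nearest, dense)
```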
Abstract:
The objective of this thesis is to present different applications of the distributed conditional computation research program. We hope that these applications, together with the theory presented here, will lead to a general solution of the artificial intelligence problem, in particular with regard to the need for efficiency. The vision of distributed conditional computation is to speed up the evaluation and training of deep models, which is very different from the usual goal of improving their generalization and optimization abilities. The work presented here is closely related to mixture-of-experts models. In Chapter 2, we present a new deep learning algorithm that uses a simple form of reinforcement learning on a neural-network-based decision tree model. We demonstrate the need for a balancing constraint to keep the distribution of examples across experts uniform and to prevent monopolies. To make computation efficient, training and evaluation are constrained to be sparse by using a router that samples experts from a multinomial distribution given an example. In Chapter 3, we present a new deep model consisting of a sparse representation divided into expert segments. A neural-network-based language model is built from sparse transformations between these segments. The block-sparse operation is implemented for use on graphics cards. Its speed is compared with two dense operations of the same caliber to demonstrate the real computational gain that can be obtained. A deep model using sparse operations controlled by a router separate from the experts is trained on a one-billion-word dataset. A new data partitioning algorithm is applied to a set of words to build a hierarchy over the output layer of a language model, making it much more efficient. The work presented in this thesis is central to the vision of distributed conditional computation put forward by Yoshua Bengio. It attempts to apply research in the field of mixtures of experts to deep models in order to improve their speed as well as their optimization ability. We believe that the theory and experiments of this thesis are an important step on the road to distributed conditional computation, because they frame the problem well, especially with regard to the competitiveness of expert systems.
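To illustrate the kind of sparse routing described above, here is a minimal NumPy sketch, not the thesis implementation; the router parameterization, the single-expert-per-example choice and the quadratic balance penalty are all assumptions made for the example:

```python
import numpy as np

def route_and_mix(x, router_W, experts, rng):
    """Sparse mixture-of-experts step: sample one expert per example from a
    softmax (multinomial) router, plus a simple load-balancing penalty."""
    logits = x @ router_W                                   # (batch, n_experts)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    chosen = np.array([rng.choice(len(experts), p=p) for p in probs])
    out = np.stack([experts[k](xi) for k, xi in zip(chosen, x)])
    # penalize deviation from a uniform expert load to discourage monopolies
    load = np.bincount(chosen, minlength=len(experts)) / len(x)
    balance_penalty = ((load - 1.0 / len(experts)) ** 2).sum()
    return out, balance_penalty

# illustrative usage with two random linear "experts" on toy data
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 3))
router_W = rng.standard_normal((3, 2))
experts = [lambda v, W=rng.standard_normal((3, 3)): v @ W for _ in range(2)]
y, penalty = route_and_mix(x, router_W, experts, rng)
```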
Abstract:
Oceans play a vital role in the global climate system. They absorb the incoming solar energy and redistribute it through horizontal and vertical transports. In this context it is important to investigate the variation of heat budget components during the formation of a low-pressure system. In 2007, the monsoon onset was on 28th May. A well-marked low-pressure area formed in the eastern Arabian Sea after the onset and further developed into a cyclone. We have analysed the heat budget components during different stages of the cyclone. The data used for the computation of the heat budget components are the Objectively Analyzed air-sea fluxes obtained from the WHOI (Woods Hole Oceanographic Institution) project, with a horizontal resolution of 1° × 1°. Over the low-pressure area, the latent heat flux was 180 W m⁻². It increased to a maximum value of 210 W m⁻² on 1st June 2007, the day on which the system intensified into a cyclone (Gonu), with latent heat flux values ranging from 200 to 250 W m⁻², and it decreased sharply after the passage of the cyclone. The high latent heat flux is attributed to the latent heat released by the cyclone through the formation of clouds. The long wave radiation flux decreased sharply from 100 W m⁻² to 30 W m⁻² when the low-pressure system intensified into a cyclone; this decrease is due to the presence of clouds. The net heat flux also decreased sharply, to −200 W m⁻², on 1st June 2007. After the passage, the flux returned to its normal value (150 W m⁻²) within one day. A sharp increase in the sensible heat flux (to 20 W m⁻²) was observed on 1st June 2007, and it decreased thereafter. The short wave radiation flux decreased from 300 W m⁻² to 90 W m⁻² during the intensification on 1st June 2007 and increased sharply to higher values over this region soon after the passage of the cyclone.
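For orientation, the components discussed above combine into the net surface heat flux, assuming the usual air-sea flux sign convention in which positive Q_net warms the ocean; this relation is consistent with the sharp drop of the net flux to −200 W m⁻² when the shortwave input fell and the latent heat loss rose:

Q_net = Q_SW − Q_LW − Q_LH − Q_SH

where Q_SW is the net shortwave flux into the ocean and Q_LW, Q_LH and Q_SH are the longwave, latent and sensible heat losses, respectively.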
Abstract:
The super resolution problem is an inverse problem and refers to the process of producing a high resolution (HR) image from one or more low resolution (LR) observations. It includes upsampling the image, thereby increasing the maximum spatial frequency, and removing degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR image from an image captured with a LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform, DCT, etc. Single frame image super-resolution can be used in applications where a database of HR images is available. The advantage of this method is that by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super resolution method based on the wavelet transform is developed, and it performs better than conventional wavelet transform based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform, called the directionlet transform, are developed to convert a low resolution image of small size into a high resolution image of large size. The super-resolution algorithm not only increases the size, but also reduces the degradations that occur during image capture. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values. Artifacts like aliasing and ringing effects are also eliminated in this method. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. The conventional directionlet transform is computationally complex, hence a lifting scheme is used for the implementation of directionlets. The new single image super-resolution method based on the lifting scheme reduces computational complexity and thereby reduces computation time. The quality of the super-resolved image depends on the type of wavelet basis used; a study is conducted to find the effect of different wavelets on the single image super-resolution method. Finally, this new method, implemented on grey images, is extended to colour images and noisy images.
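As a point of reference for the wavelet-domain formulation, a naive 2x wavelet zero-padding upscaler can be sketched with PyWavelets; this is a simple baseline, not the learning-based or directionlet methods of the thesis, and the Haar choice and intensity scaling are assumptions:

```python
import numpy as np
import pywt

def wavelet_zero_padding_sr(lr_image, wavelet="haar"):
    """Naive 2x wavelet-domain upscaling: treat the LR image as the
    approximation band and set the unknown detail bands to zero.
    Learning-based methods instead estimate these detail bands from a
    database of HR examples."""
    lr = np.asarray(lr_image, dtype=float)
    zeros = np.zeros_like(lr)
    # For the orthonormal Haar transform the approximation band is roughly
    # twice the local mean, hence the factor of 2 to preserve intensities.
    return pywt.idwt2((2.0 * lr, (zeros, zeros, zeros)), wavelet)
```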
Abstract:
Voluminous rhyolitic eruptions from Toba, Indonesia, and the Taupo Volcanic Zone (TVZ), New Zealand, have dispersed volcanic ash over vast areas in the late Quaternary. The ~74 ka Youngest Toba Tuff (YTT) eruption deposited ash over the Bay of Bengal and the Indian subcontinent to the west. The ~340 ka Whakamaru eruption (TVZ) deposited the widespread Rangitawa Tephra, dominantly to the southeast (in addition to occurrences northwest of the vent), extending across the landmass of New Zealand, the South Pacific Ocean and the Tasman Sea, with distal terrestrial exposures on the Chatham Islands. These super-eruptions involved ~2500 km³ and ~1500 km³ of magma (dense-rock equivalent; DRE), respectively. Ultra-distal terrestrial exposures of YTT at two localities in India, the Middle Son Valley, Madhya Pradesh, and the Jurreru River Valley, Andhra Pradesh, at distances of >2000 km from the source caldera, show a basal ‘primary’ ashfall unit ~4 cm thick, although deposits containing reworked ash are up to ~3 m in total thickness. Exposures of Rangitawa Tephra on the Chatham Islands, >900 km from the source caldera, are ~15-30 cm thick. At more proximal localities (~200 km from source), Rangitawa Tephra is ~55-70 cm thick and characterized by a crystal-rich basal layer and normal grading. Both distal tephra deposits are characterized by very fine ash (with high PM10 fractions) and are crystal-poor. Glass chemistry, stratigraphy and grain-size data for these distal tephra deposits are presented with comparisons of their correlation, dispersal and preservation. Using field observations, ash transport and deposition were modeled for both eruptions using a semi-analytical model (HAZMAP), with assumptions concerning average wind direction and strength during the eruption, column shape and vent size. Model outputs provide new insights into eruption dynamics and better estimates of eruption volumes associated with tephra fallout. Modeling based on observed YTT distal tephra thicknesses indicates a relatively low (<40 km high), very turbulent eruption column, consistent with deposition from a co-ignimbrite cloud extending over a broad region. Similarly, the Whakamaru eruption was modeled as producing a predominantly Plinian column (~45 km high), with dispersal to the southeast by strong prevailing winds. Significant ash fallout away from the main dispersal direction, to the northwest of the source, cannot be replicated in this modeling. The widespread dispersal of large volumes of fine ash from both eruptions may have had global environmental consequences, acutely affecting areas up to thousands of kilometers from the vent.
Abstract:
Pre-combined SLR-GNSS solutions are studied and the impact of different types of datum definition on the estimated parameters is assessed. It is found that the origin is realized best by using only the SLR core network to define the geodetic datum; the inclusion of the GNSS core sites degrades the origin. The orientation, however, requires a dense and continuous network, so the inclusion of the GNSS core network is absolutely needed.
Abstract:
We will present calculations of opacities for matter under LTE conditions. Opacities are needed in radiation transport codes to study processes such as Inertial Confinement Fusion and plasma amplifiers for secondary X-ray sources. For the calculations we use the code BiGBART, with either a hydrogenic approximation with j-splitting or self-consistent data generated with the atomic physics code FAC. We calculate the atomic structure, oscillator strengths, radiative transition energies (including UTA computations), and photoionization cross-sections. A DCA model determines the configurations considered in the computation of the opacities. The opacities obtained with these two models are compared with experimental measurements.
Abstract:
A number of recent studies have investigated the introduction of decoherence in quantum walks and the resulting transition to classical random walks. Interestingly, it has been shown that algorithmic properties of quantum walks with decoherence, such as the spreading rate, are sometimes better than those of their purely quantum counterparts. Not only do quantum walks with decoherence provide a generalization of quantum walks that naturally encompasses both the quantum and the classical case, but they also give rise to new and different probability distributions. The application of quantum walks with decoherence to large graphs is limited by the necessity of evolving a state vector whose size is quadratic in the number of nodes of the graph, as opposed to the linear state vector of the purely quantum (or classical) case. In this technical report, we show how to use perturbation theory to reduce the computational complexity of evolving a continuous-time quantum walk subject to decoherence. More specifically, given a graph over n nodes, we show how to approximate the eigendecomposition of the n²×n² Lindblad super-operator from the eigendecomposition of the n×n graph Hamiltonian.
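To make the dimensions concrete, here is an illustrative sketch of the coherent part of the super-operator only, under a row-stacking vectorization convention; the perturbative treatment of the decoherence term described in the report is not reproduced. The n²×n² matrix is assembled from the n×n Hamiltonian, and its spectrum is given by differences of the Hamiltonian's eigenvalues:

```python
import numpy as np

def coherent_superoperator(H):
    """n^2 x n^2 matrix representing rho -> -i (H rho - rho H) under
    row-major vectorization of rho (Kronecker-product construction)."""
    n = H.shape[0]
    I = np.eye(n)
    return -1j * (np.kron(H, I) - np.kron(I, H.T))

# Illustration: the super-operator's eigenvalues are -i(lambda_j - lambda_k),
# so the n x n eigendecomposition already determines the n^2 x n^2 spectrum.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
H = (A + A.T) / 2                                   # toy Hermitian "graph Hamiltonian"
lam = np.linalg.eigvalsh(H)
L = coherent_superoperator(H)
ev_from_H = np.sort([lk - lj for lj in lam for lk in lam])   # imag parts of -i(lj - lk)
ev_direct = np.sort(np.linalg.eigvals(L).imag)
print(np.allclose(ev_from_H, ev_direct))            # True
```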
Abstract:
In this paper, examples are given of tori T² embedded in S³ with all their asymptotic lines dense.
Abstract:
Several numerical methods for boundary value problems use integral and differential operational matrices, expressed in polynomial bases in a Hilbert space of functions. This work presents a sequence of matrix operations that allows the direct computation of operational matrices for polynomial bases, orthogonal or not, starting from any previously known reference matrix. Furthermore, it shows how to obtain the reference matrix for a chosen polynomial basis. The results presented here can be applied not only to integration and differentiation, but also to any linear operation.
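As a minimal illustration of the idea (an assumed construction, not the paper's algorithm): the derivative matrix in the monomial basis serves as a known reference matrix, and a similarity transform carries it to another, possibly non-orthogonal, polynomial basis.

```python
import numpy as np

def derivative_matrix_monomial(n):
    """Operational matrix D of d/dx in the monomial basis {1, x, ..., x^(n-1)}:
    if c holds the coefficients of p, then D @ c holds the coefficients of p'."""
    D = np.zeros((n, n))
    for k in range(1, n):
        D[k - 1, k] = k                      # d/dx x^k = k x^(k-1)
    return D

def change_of_basis(D_ref, P):
    """Given the reference operational matrix D_ref and the matrix P whose
    columns express the new basis polynomials in the reference basis,
    return the operational matrix in the new basis: P^(-1) D_ref P."""
    return np.linalg.solve(P, D_ref @ P)

# Example: derivative matrix in the non-orthogonal basis {1, 1+x, (1+x)^2}
P = np.array([[1.0, 1.0, 1.0],               # columns: coefficients in {1, x, x^2}
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])
D_new = change_of_basis(derivative_matrix_monomial(3), P)
# D_new == [[0, 1, 0], [0, 0, 2], [0, 0, 0]]:
# d/dx (1+x) = 1, and d/dx (1+x)^2 = 2(1+x), as expected.
```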