981 results for Computation time


Relevance:

60.00%

Abstract:

Thermodynamic parameters of the atmosphere form part of the input to numerical forecasting models. Usually these parameters are evaluated from a thermodynamic diagram. Here, a technique is developed to evaluate these parameters quickly and accurately using a Fortran program. The technique is tested with four sets of randomly selected data, and the results agree with those from the conventional method. The technique is superior to the conventional method in three respects: greater accuracy, less computation time, and the evaluation of additional parameters. The computation time for all the parameters on a PC AT 286 machine is 11 sec. This software, with appropriate modifications, can be used for verifying the various lines on a thermodynamic diagram.
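The abstract does not reproduce the formulas the Fortran program evaluates, but the kind of computation that replaces reading values off a thermodynamic diagram can be sketched briefly. A minimal Python illustration (not the paper's code) of two standard parameters, using Bolton's Magnus-type saturation vapour pressure and Poisson's equation for potential temperature:

```python
import math

def saturation_vapour_pressure(t_celsius):
    """Magnus-type formula (Bolton, 1980), result in hPa; assumed here
    as a typical choice, since the paper's exact formulas are not
    given in the abstract."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

def potential_temperature(t_kelvin, p_hpa, p0_hpa=1000.0):
    """Poisson's equation with R/cp ~ 0.2854 for dry air."""
    return t_kelvin * (p0_hpa / p_hpa) ** 0.2854

# Example: a parcel at 850 hPa and 15 degrees C
print(saturation_vapour_pressure(15.0))    # ~17.0 hPa
print(potential_temperature(288.15, 850))  # ~301.8 K
```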

Relevance:

60.00%

Abstract:

The super-resolution problem is an inverse problem: it refers to the process of producing a high-resolution (HR) image from one or more low-resolution (LR) observations. It involves upsampling the image, thereby increasing the maximum spatial frequency, and removing the degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single-image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR image of a scene captured with an LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform or the DCT. Single-frame image super-resolution can be used in applications where a database of HR images is available. The advantage of this method is that by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super-resolution method based on the wavelet transform is developed; it outperforms conventional wavelet-transform-based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform, the directionlet transform, are then developed to convert a small low-resolution image into a large high-resolution image. The super-resolution algorithm not only increases the size but also reduces the degradations that occur while capturing the image. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values; artifacts such as aliasing and ringing are also eliminated. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. The conventional directionlet transform is computationally complex, so the lifting scheme is used to implement the directionlets. The new single-image super-resolution method based on the lifting scheme reduces the computational complexity and thereby the computation time. The quality of the super-resolved image depends on the type of wavelet basis used, and a study is conducted to find the effect of different wavelets on the single-image super-resolution method. Finally, the new method, implemented on grey-scale images, is extended to colour images and noisy images.
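As a point of reference for the wavelet-based methods described above, the simplest non-learning baseline treats the LR image as the approximation subband of the unknown HR image and inverts the transform with zeroed detail subbands; learning-based methods instead predict those detail coefficients from the training database. A minimal sketch, assuming the PyWavelets package and a Haar basis:

```python
import numpy as np
import pywt

def wavelet_upsample_2x(lr):
    """Wavelet zero-padding: take the LR image as the approximation
    subband of an unknown HR image, assume zero detail subbands, and
    invert the 2-D Haar DWT. A learning-based method (as in the
    thesis) would instead predict the detail subbands from a database
    of HR training images."""
    z = np.zeros_like(lr)
    # The factor 2 compensates the gain of the orthonormal Haar
    # analysis so that intensities are preserved.
    return pywt.idwt2((2.0 * lr, (z, z, z)), 'haar')

lr = np.random.rand(64, 64)
hr = wavelet_upsample_2x(lr)  # shape (128, 128)
```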

Relevance:

60.00%

Abstract:

Over the last 20 years the periodic table has been extended up to elements 114 and 116. These have been established by nuclear physics, so that their chemical investigation now takes priority. Since the periodic table behaves as expected up to element 108, this work investigates the chemistry of element 112. The quantity of interest is the adsorption energy on a gold surface, since adsorption on gold is the physico-chemical process exploited in the experimental analysis. The method applied in this work is the relativistic density functional method. The first part treats the many-body problem in general form, the second the basic properties and formulations of density functional theory. The work describes two fundamentally different approaches to calculating the adsorption energy. The first is the so-called cluster method, in which an atom is placed on a relatively small cluster and its adsorption energy is calculated; if convergence with respect to cluster size can be reached, this should yield a value for the adsorption energy. Unfortunately, the calculations show that, owing to the computational expense, convergence of the cluster calculations is not achieved. The three different adsorption sites (the top, bridge and hollow positions) are treated in great detail. Much more success is achieved with the embedding method, in which a small cluster is surrounded by many further atoms at the positions they occupy in the solid; in this way the influence of the environment on the adsorption energy is captured well enough that physically and chemically sound results are obtained. All the calculations mentioned here, with the cluster method as well as with the embedding method, demand very long computation times, which, as noted above, were not sufficient to reach convergence for the cluster calculations. For all calculations, the dependence on the possible basis sets, which also contribute decisively to the length and quality of the calculations, is discussed in detail. The converged calculations are analysed in terms of potential curves, densities of states (DOS), overlap populations and partial crystal overlap populations. The result is that the adsorption energy of element 112 on a gold surface is about 0.2 eV lower than that of mercury on the same surface. With this result, experimental nuclear chemists have a value in hand that indicates where, in their measurements, the few expected events are to be found.
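For reference, the adsorption energy computed in both the cluster and the embedding approach is the standard total-energy difference (this definition is implicit in the abstract rather than stated):

```latex
E_{\mathrm{ads}} = E(\text{adatom on surface}) - E(\text{surface}) - E(\text{free atom})
```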

Relevance:

60.00%

Abstract:

One of the most effective techniques for offering QoS routing is minimum interference routing. However, it is complex in terms of computation time and is not oriented toward improving the network protection level. In order to offer better levels of protection, new minimum interference routing algorithms are necessary. Minimizing the failure recovery time is also a complex process involving several failure recovery phases, some of which depend entirely on correct route selection, such as minimizing the failure notification time. The level of protection also involves other aspects, such as the amount of resources used; in this case, shared backup techniques should be considered. Therefore, minimum interference techniques should also be modified to include the sharing of protection resources among their objectives. These aspects are reviewed and analyzed in this article, and a new proposal combining minimum interference with fast protection using shared segment backups is introduced. Results show that the proposed method reduces both the request rejection ratio and the percentage of bandwidth allocated to backup paths in networks with low and medium protection requirements.
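The idea behind minimum interference routing, steering new requests away from links that other ingress-egress pairs critically depend on, can be sketched with a weighted shortest-path search. The weight function below is a hypothetical illustration, not the algorithm proposed in the article (which additionally accounts for shared segment backups):

```python
import heapq

def interference_weight(residual_bw, criticality):
    """Hypothetical link weight: links with little residual bandwidth,
    or on which many other ingress-egress pairs depend (high
    'criticality'), cost more, so new paths avoid them."""
    return (1.0 + criticality) / max(residual_bw, 1e-9)

def dijkstra(adj, src, dst):
    """Min-weight path; adj[u] = [(v, weight), ...]. Assumes dst is
    reachable from src."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

# Two parallel routes from A to D; the congested, critical link via B loses.
adj = {'A': [('B', interference_weight(10.0, 5.0)),
             ('C', interference_weight(80.0, 0.5))],
       'B': [('D', 1.0)], 'C': [('D', 1.0)]}
print(dijkstra(adj, 'A', 'D'))  # ['A', 'C', 'D']
```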

Relevance:

60.00%

Abstract:

This paper proposes a parallel architecture for estimating the motion of an underwater robot. It is well known that image processing requires a huge amount of computation, mainly at the low level, where the algorithms deal with large amounts of data. In a motion estimation algorithm, correspondences between two images have to be solved at this low level. In underwater imaging, normalised correlation can be a solution in the presence of non-uniform illumination. Owing to its regular processing scheme, a parallel implementation of the correspondence problem is an adequate approach to reducing the computation time. Taking into consideration the complexity of the normalised correlation criterion, a new approach based on a parallel organisation of every processor in the architecture is proposed.
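Normalised correlation, the matching criterion named above, compares mean-removed patches so that scores are insensitive to the non-uniform illumination typical of underwater scenes. A minimal NumPy sketch of the correspondence step (sequential; the paper's contribution is its parallel organisation):

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equal-size patches;
    mean removal gives robustness to illumination changes."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(patch, image):
    """Exhaustive search for the best-matching window. Every window
    score is independent of the others, which is what makes the
    low-level correspondence problem so regular to parallelise."""
    ph, pw = patch.shape
    best_score, best_pos = -2.0, (0, 0)
    for y in range(image.shape[0] - ph + 1):
        for x in range(image.shape[1] - pw + 1):
            s = ncc(patch, image[y:y + ph, x:x + pw])
            if s > best_score:
                best_score, best_pos = s, (y, x)
    return best_pos, best_score
```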

Relevance:

60.00%

Abstract:

Two-dimensional flood inundation modelling is a widely used tool to aid flood risk management. In urban areas, where asset value and population density are greatest, the model spatial resolution required to represent flows through a typical street network (i.e. < 10 m) often results in impractical computational cost at the whole-city scale. Explicit diffusive storage-cell models become very inefficient at such high resolutions relative to shallow-water models, because the stable time step in such schemes scales with the square of the resolution. This paper presents the calibration and evaluation of a recently developed formulation of the LISFLOOD-FP model, in which stability is controlled by the Courant–Friedrichs–Lewy condition for the shallow water equations, so that the stable time step instead scales linearly with resolution. The case study is based on observations during the summer 2007 floods in Tewkesbury, UK. Aerial photography is available for model evaluation on three separate days from 24 to 31 July. The model covered a 3.6 km by 2 km domain and was calibrated using gauge data from high flows during the previous month. The new formulation was benchmarked against the original version of the model at 20 m and 40 m resolutions, demonstrating equally accurate performance given the available validation data while running 67 times faster. The July event was then simulated at the 2 m resolution of the available airborne LiDAR DEM. This produced a significantly more accurate simulation of the drying dynamics than the coarse-resolution models, although estimates of peak inundation depth were similar.
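The source of the speed-up is the time-step scaling. For an explicit diffusive storage-cell scheme the stable time step shrinks with the square of the grid spacing, while for the shallow-water-based formulation it is set by a CFL-type condition and shrinks only linearly. A sketch of the scaling, with α ≤ 1 a safety coefficient and h_max the deepest water in the domain (the exact form used by LISFLOOD-FP may differ):

```latex
\Delta t_{\mathrm{diffusive}} \propto \Delta x^{2},
\qquad
\Delta t_{\mathrm{CFL}} \le \alpha \, \frac{\Delta x}{\sqrt{g\, h_{\max}}}
```

so halving the grid spacing doubles, rather than quadruples, the number of time steps required.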

Relevance:

60.00%

Abstract:

Stephens and Donnelly introduced a simple yet powerful importance sampling (IS) scheme for computing the likelihood in population genetic models. Fundamental to the method is an approximation to the conditional probability of the allelic type of an additional gene, given those currently in the sample. As noted by Li and Stephens, the product of these conditional probabilities for a sequence of draws that gives the frequency of allelic types in a sample is an approximation to the likelihood and can be used directly in inference. The aim of this note is to demonstrate the high level of accuracy of the "product of approximate conditionals" (PAC) likelihood when used with microsatellite data. Results obtained on simulated microsatellite data show that this strategy leads to a negligible bias over a wide range of the scaled mutation parameter theta. Furthermore, both the sampling variance of the likelihood estimates and the computation time are lower than those obtained with importance sampling over the whole range of theta. It follows that this approach is an efficient substitute for IS algorithms in computer-intensive (e.g. MCMC) inference methods in population genetics.
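The PAC construction itself is easy to state: add the sampled genes one at a time and multiply the (approximate) conditional probabilities of each new allelic type given the types already seen. A minimal sketch using the infinite-alleles conditional, for which this conditional is exact; microsatellite applications would replace it with a Stephens-and-Donnelly-style conditional over repeat lengths:

```python
from collections import Counter

def pac_likelihood(sample, theta):
    """Product of conditionals: the (k+1)-th gene copies an existing
    type j with probability n_j / (k + theta) and is a new mutant
    type with probability theta / (k + theta)."""
    counts = Counter()
    like = 1.0
    for k, allele in enumerate(sample):
        if counts[allele] > 0:
            like *= counts[allele] / (k + theta)  # copy existing type
        else:
            like *= theta / (k + theta)           # mutation: new type
        counts[allele] += 1
    return like

print(pac_likelihood(['a', 'a', 'b', 'a'], theta=1.0))  # 1/12
```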

Relevance:

60.00%

Abstract:

Two-dimensional flood inundation modelling is a widely used tool to aid flood risk management. In urban areas, the model spatial resolution required to represent flows through a typical street network often results in an impractical computational cost at the city scale. This paper presents the calibration and evaluation of a recently developed formulation of the LISFLOOD-FP model, which is more computationally efficient at these resolutions. Aerial photography was available for model evaluation on three days between 24 and 31 July. The new formulation was benchmarked against the original version of the model at 20 and 40 m resolutions, demonstrating equally accurate simulation given the evaluation data while running 67 times faster. The July event was then simulated at the 2 m resolution of the available airborne LiDAR DEM. This produced a more accurate simulation of the floodplain drying dynamics than the coarse-resolution models, although maximum inundation levels were simulated equally well at all resolutions tested.

Relevance:

60.00%

Abstract:

In this article we describe recent progress on the design, analysis and implementation of hybrid numerical-asymptotic boundary integral methods for boundary value problems for the Helmholtz equation that model time-harmonic acoustic wave scattering in domains exterior to impenetrable obstacles. These hybrid methods combine conventional piecewise polynomial approximations with high-frequency asymptotics to build basis functions suitable for representing the oscillatory solutions. They have the potential to solve scattering problems accurately in a computation time that is (almost) independent of frequency, and this has been realized for many model problems. The design and analysis of this class of methods requires new results on the analysis and numerical analysis of highly oscillatory boundary integral operators and on the high-frequency asymptotics of scattering problems. The implementation requires the development of appropriate quadrature rules for highly oscillatory integrals. This article contains a historical account of the development of this currently very active field, a detailed account of recent progress and, in addition, a number of original research results on the design, analysis and implementation of these methods.
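The design idea can be stated loosely as follows (a generic sketch of the hybrid numerical-asymptotic ansatz, not a result specific to this article). On the boundary, the unknown, for instance the normal derivative of the total field, is represented as a sum of known oscillatory phases carrying slowly varying amplitudes:

```latex
\frac{\partial u}{\partial n}(x) \;\approx\; \sum_{m} V_m(x)\, e^{\mathrm{i}k\psi_m(x)}
```

The phases ψ_m are supplied by high-frequency asymptotics (e.g. ψ(x) = x · d for incident direction d), and only the non-oscillatory amplitudes V_m are approximated by piecewise polynomials, which is why the number of degrees of freedom can remain (almost) independent of the wavenumber k.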

Relevance:

60.00%

Abstract:

High-spatial-resolution environmental data give us a better understanding of the environmental factors affecting plant distributions at fine spatial scales. However, large environmental datasets dramatically increase computation times and the size of the output species models, motivating the need for an alternative computing solution. Cluster computing offers such a solution by allowing both multiple plant-species Environmental Niche Models (ENMs) and individual tiles of high-spatial-resolution models to be computed concurrently on the same compute cluster. We apply our methodology to a case study of 4,209 species of Mediterranean flora (around 17% of the species believed to be present in the biome). We demonstrate a 16-fold speed-up of ENM computation time when 16 CPUs are used on the compute cluster. Our custom Java 'Merge' and 'Downsize' programs reduce ENM output file sizes by 94%. The median test AUC score of 0.98 across the species ENMs is aided by various filtering techniques applied to the species occurrence data. Finally, by calculating the percentage change of individual grid-cell values, we map the projected percentages of plant species vulnerable to climate change in the Mediterranean region between the 1950–2000 baseline period and 2020.
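Since each species' ENM is independent of the others, the workload is embarrassingly parallel, which is what the compute cluster exploits. A minimal sketch using Python's standard library (hypothetical function and species names; the study uses its own ENM software and the Java 'Merge'/'Downsize' tools):

```python
from multiprocessing import Pool

def fit_enm(species_name):
    """Stand-in for fitting one species' Environmental Niche Model
    and writing its output tiles (hypothetical)."""
    return species_name, 0.98  # pretend: (species, test AUC)

species_list = ['Quercus ilex', 'Pinus halepensis', 'Olea europaea']

if __name__ == '__main__':
    # One worker per CPU; the independence of species models is what
    # yields the near-linear (16x on 16 CPUs) speed-up reported.
    with Pool(processes=16) as pool:
        print(pool.map(fit_enm, species_list))
```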

Relevance:

60.00%

Abstract:

Modelling of disorder in organic crystals is highly desirable, since it would allow thermodynamic stabilities and other disorder-sensitive properties to be estimated for such systems. Two disordered organic molecular systems are modelled using a symmetry-adapted ensemble approach, in which the disordered system is treated as an ensemble of the configurations of a supercell with respect to substitution of one disorder component for another. The computation time is kept manageable by performing calculations only on the symmetrically inequivalent configurations. Calculations are presented for a substitutionally disordered system, the dichloro/dibromobenzene solid solution, and for an orientationally disordered system, eniluracil, and the resulting free energies, disorder patterns and system properties are discussed. The results agree with experiment after manual removal of physically implausible configurations from the ensemble averages, highlighting the dangers of a completely automated approach to organic crystal thermodynamics that ignores the barriers to equilibration once the crystal has formed.
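The ensemble bookkeeping is simple once the inequivalent configurations are known: each enters the partition function with its symmetry multiplicity, so only the inequivalent energies need computing. A minimal sketch with hypothetical inputs (in practice the energies come from lattice-energy calculations):

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def ensemble_free_energy(configs, temperature):
    """configs: (degeneracy g_i, energy E_i in eV) for each
    symmetrically inequivalent supercell configuration.
    Returns F = -kT ln sum_i g_i exp(-E_i / kT)."""
    beta = 1.0 / (K_B * temperature)
    z = sum(g * math.exp(-beta * e) for g, e in configs)
    return -math.log(z) / beta

# Three inequivalent configurations of a hypothetical supercell
configs = [(1, 0.00), (4, 0.02), (2, 0.05)]
print(ensemble_free_energy(configs, temperature=300.0))
```

Manual removal of implausible configurations, as the abstract recommends, amounts to dropping their (g_i, E_i) entries before the sum.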

Relevance:

60.00%

Abstract:

Performance modelling is a useful tool in the lifecycle of high-performance scientific software, such as weather and climate models, especially as a means of ensuring efficient use of available computing resources. In particular, sufficiently accurate performance prediction could reduce the effort and experimental computer time required when porting and optimising a climate model on a new machine. In this paper, traditional techniques are used to predict the computation time of a simple shallow-water model which is illustrative of the computation (and communication) involved in climate models. These predictions are compared with real execution data gathered on AMD Opteron-based systems, including several phases of the U.K. academic community HPC resource, HECToR. Some success is achieved in relating source code to achieved performance for the K10 series of Opterons, but the method is found to be inadequate for the next-generation Interlagos processor. This experience leads to the investigation of a data-driven application-benchmarking approach to performance modelling. Results for an early version of the approach are presented, using the shallow-water model as an example.
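A representative "traditional technique" of the kind applied here (a generic sketch, not the paper's actual model) bounds the time for one model time step by its arithmetic and its memory traffic:

```python
def predict_step_time(flops, bytes_moved, peak_flops, mem_bw):
    """Roofline-style bound: a step can go no faster than its
    floating-point work at the peak rate, nor than its memory
    traffic at the sustainable bandwidth. As the paper finds for
    Interlagos, real codes can deviate substantially from such
    simple bounds."""
    return max(flops / peak_flops, bytes_moved / mem_bw)

# Hypothetical per-step counts for a shallow-water model on one core
t = predict_step_time(flops=2.0e8, bytes_moved=4.8e8,
                      peak_flops=8.8e9, mem_bw=5.3e9)
print(f"lower bound: {t * 1e3:.1f} ms per step")  # memory-bound here
```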

Relevance:

60.00%

Abstract:

Theoretical estimates for the cutoff errors in the Ewald summation method for dipolar systems are derived. Absolute errors in the total energy, forces and torques, for both the real- and reciprocal-space parts, are considered. The applicability of the estimates is tested and confirmed in several numerical examples. We demonstrate that these estimates can easily be used to determine the optimal parameters of the dipolar Ewald summation, in the sense that they minimize the computation time for a predefined, user-set accuracy.
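The optimisation such estimates enable can be sketched generically: scan the splitting parameter α, take the smallest cutoffs whose estimated errors meet the target accuracy, and keep the cheapest combination. The error and cost functions below are schematic placeholders (the leading exponential behaviour of Ewald cutoff errors), not the paper's dipolar estimates:

```python
import math

def real_space_error(alpha, r_cut):
    """Schematic: real-space truncation error ~ exp(-(alpha*r_cut)^2)."""
    return math.exp(-(alpha * r_cut) ** 2)

def recip_space_error(alpha, k_cut):
    """Schematic: reciprocal-space error ~ exp(-(k_cut/(2*alpha))^2)."""
    return math.exp(-(k_cut / (2.0 * alpha)) ** 2)

def cost(r_cut, k_cut):
    """Schematic cost: neighbour work ~ r_cut^3, k-space work ~ k_cut^3."""
    return r_cut ** 3 + k_cut ** 3

def tune(target, grid):
    best = None
    for alpha in grid:
        r = next((r for r in grid if real_space_error(alpha, r) <= target), None)
        k = next((k for k in grid if recip_space_error(alpha, k) <= target), None)
        if r is not None and k is not None:
            c = cost(r, k)
            if best is None or c < best[0]:
                best = (c, alpha, r, k)
    return best  # (cost, alpha, r_cut, k_cut)

print(tune(1e-6, [0.1 * i for i in range(1, 120)]))
```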

Relevance:

60.00%

Abstract:

We present parallel algorithms on the BSP/CGM model, with p processors, to count and generate all the maximal cliques of a circle graph with n vertices and m edges. To count the number of maximal cliques, without actually generating them, our algorithm requires O(log p) communication rounds with O(nm/p) local computation time. We also present an algorithm that generates the first maximal clique in O(log p) communication rounds with O(nm/p) local computation, and each subsequent maximal clique in O(log p) communication rounds with O(m/p) local computation. The maximal clique generation algorithm is based on generating all maximal paths in a directed acyclic graph; for this problem we present an algorithm that uses O(log p) communication rounds with O(m/p) local computation per maximal path. We also show that the presented algorithms can be extended to the CREW PRAM model.
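The core subproblem, generating all maximal paths of a DAG (each runs from a source to a sink), is easy to state sequentially; the contribution of the paper is performing it in O(log p) BSP/CGM communication rounds, which this plain recursive sketch does not attempt:

```python
def maximal_paths(adj):
    """Yield all maximal paths of a DAG given as adjacency lists.
    A path is maximal iff it starts at a source (no incoming edge)
    and ends at a sink (no outgoing edge)."""
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    has_in = {v for vs in adj.values() for v in vs}

    def extend(path):
        succs = adj.get(path[-1], [])
        if not succs:              # sink reached: path is maximal
            yield list(path)
        for v in succs:
            path.append(v)
            yield from extend(path)
            path.pop()

    for s in (v for v in nodes if v not in has_in):
        yield from extend([s])

dag = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d']}
print(list(maximal_paths(dag)))  # [['a','b','d'], ['a','c','d']]
```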

Relevance:

60.00%

Abstract:

This paper analyses some numerical integration methods that can be used in electromagnetic transient simulations. Among the existing methods, we analysed the trapezoidal integration method (or Heun's formula), Simpson's rule and the Runge-Kutta method. These methods were used in simulations of electromagnetic transients in power systems resulting from switching operations and manoeuvres on transmission lines. Characteristics such as accuracy, computation time and robustness of the integration methods were analysed.
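Of the methods compared, the trapezoidal rule is the classic choice in electromagnetic-transient (EMTP-style) programs because of its stability. A minimal sketch on an RL switching transient with a known closed-form solution (illustrative circuit values; the implicit trapezoidal update is solvable directly here because the circuit equation is linear, and the paper may instead use the explicit Heun variant):

```python
import math

# Series RL circuit switched onto a DC source: L di/dt = V - R i,
# exact solution i(t) = (V/R) * (1 - exp(-R t / L)).
R, L, V = 10.0, 0.1, 100.0
f = lambda i: (V - R * i) / L

def trapezoidal(i, h):
    """Implicit trapezoidal step, solved in closed form (f linear)."""
    return (i + 0.5 * h * (f(i) + V / L)) / (1.0 + 0.5 * h * R / L)

def rk4(i, h):
    """Classical fourth-order Runge-Kutta step."""
    k1 = f(i)
    k2 = f(i + 0.5 * h * k1)
    k3 = f(i + 0.5 * h * k2)
    k4 = f(i + h * k3)
    return i + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

h, t_end = 1e-4, 0.05
exact = (V / R) * (1.0 - math.exp(-R * t_end / L))
for step in (trapezoidal, rk4):
    i = 0.0
    for _ in range(int(round(t_end / h))):
        i = step(i, h)
    print(step.__name__, 'abs error:', abs(i - exact))
```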