945 results for Source code visualization


Relevance:

80.00%

Publisher:

Abstract:

A brief description is given of a software environment, written in FORTRAN77, for the modelling of multi-physics phenomena. The numerical approach is based on finite volume methods extended to unstructured meshes (i.e., FV-UM). A range of interacting solution procedures for turbulent fluid flow, heat transfer with solidification/melting, and elasto-visco-plastic solid mechanics is implemented in the first version of PHYSICA, which will be released in source code form to the academic community in late 1995.
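As an illustration of the finite-volume-on-unstructured-meshes (FV-UM) idea mentioned above, here is a minimal sketch of a face-based update loop; the data layout, names, and the first-order upwind scheme are assumptions for illustration, not PHYSICA's actual FORTRAN77 implementation.

```python
# A minimal sketch (not PHYSICA's actual code) of a face-based finite-volume
# update on an unstructured mesh: fluxes are accumulated face by face, so
# the same loop handles cells of any shape.
import numpy as np

def fv_um_step(phi, faces, face_area, cell_volume, velocity, dt):
    """Advance scalar field phi by one explicit upwind convection step.

    faces     : (n_faces, 2) array of (owner, neighbour) cell indices
    face_area : (n_faces,) face areas, normals pointing owner -> neighbour
    velocity  : (n_faces,) face-normal velocity
    """
    flux_sum = np.zeros_like(phi)
    for f, (own, nb) in enumerate(faces):
        upwind = phi[own] if velocity[f] >= 0.0 else phi[nb]  # upstream value
        flux = velocity[f] * face_area[f] * upwind
        flux_sum[own] -= flux  # flux leaves the owner cell
        flux_sum[nb] += flux   # and enters the neighbour
    return phi + dt * flux_sum / cell_volume

# Toy 1D chain of three unit cells with uniform rightward velocity.
phi = np.array([1.0, 0.0, 0.0])
faces = np.array([[0, 1], [1, 2]])
print(fv_um_step(phi, faces, np.ones(2), np.ones(3), np.ones(2), dt=0.1))
```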

Relevance:

80.00%

Publisher:

Abstract:

Use of structuring mechanisms (such as modularisation) is widely believed to be one of the key ways to improve software quality. Structuring is considered to be at least as important for specification documents as for source code, since it is assumed to improve comprehensibility. Yet, as with most widely held assumptions in software engineering, there is little empirical evidence to support this hypothesis. Also, even if structuring can be shown to be a good thing, we do not know what degree of structuring is optimal. One of the more popular formal specification languages, Z, encourages structuring through its schema calculus. A controlled experiment is described in which two hypotheses about the effects of structure on the comprehensibility of Z specifications are tested. Evidence was found that structuring a specification into schemas about 20 lines long significantly improved comprehensibility over a monolithic specification. However, there seems to be no perceived advantage in breaking down the schemas into much smaller components. The experiment can be fully replicated.
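For readers unfamiliar with Z's schema calculus, the following is a minimal, hypothetical schema (far shorter than the roughly 20-line schemas the experiment found effective); it assumes a Z LaTeX package such as fuzz for the schema environment:

```latex
\begin{schema}{Counter}
  value, limit : \nat
\where
  value \leq limit
\end{schema}
```

A schema bundles declarations with the predicates constraining them, and larger specifications are built by composing such named units.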

Relevance:

80.00%

Publisher:

Abstract:

Three paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation -- a nonlinear, structured-grid partial differential equation boundary value problem -- using the same algorithm on the same hardware. All of the paradigms -- parallel languages represented by the Portland Group's HPF, (semi-)automated serial-to-parallel source-to-source translation represented by CAPTools from the University of Greenwich, and parallel libraries represented by Argonne's PETSc -- are found to be easy to use for this problem class, and all are reasonably effective in exploiting concurrency after a short learning curve. The level of involvement required of the application programmer under any paradigm includes specification of the data partitioning, corresponding to a geometrically simple decomposition of the domain of the PDE. Programming in SPMD style for the PETSc library requires writing only the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm as a starting point, introduction of concurrency through subdomain blocking (a task similar to the index mapping), and modest experimentation with rewriting loops to elucidate to the compiler the latent concurrency. Programming with CAPTools involves feeding the same sequential implementation to the CAPTools interactive parallelization system and guiding the source-to-source code transformation by responding to various queries about quantities knowable only at runtime. Results representative of "the state of the practice" for a scaled sequence of structured-grid problems are given on three of the most important contemporary high-performance platforms: the IBM SP, the SGI Origin 2000, and the Cray T3E.
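As a sketch of the "affine global-to-local index mappings" that the SPMD style asks the programmer to manage, the following shows a hypothetical even block partition of a 1D grid; the function names and decomposition are invented for illustration, and PETSc itself provides equivalent mappings.

```python
# A hypothetical sketch of an affine global-to-local index mapping for an
# even block partition of a 1D grid across processes.

def block_range(n_global, n_procs, rank):
    """Return the [start, end) range of global indices owned by `rank`."""
    base, extra = divmod(n_global, n_procs)
    start = rank * base + min(rank, extra)
    end = start + base + (1 if rank < extra else 0)
    return start, end

def global_to_local(i_global, start, end):
    """Affine map: local index = global index - owner's starting offset."""
    if not start <= i_global < end:
        raise ValueError(f"index {i_global} not owned by this process")
    return i_global - start

# Example: 10 grid points over 3 processes own [0,4), [4,7) and [7,10).
start, end = block_range(10, 3, rank=1)
assert global_to_local(5, start, end) == 1
```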

Relevance:

80.00%

Publisher:

Abstract:

This paper discusses Computational Fluid Dynamics (CFD) results from an investigation into the accuracy of several turbulence models for predicting air cooling in electronic packages and systems. New transitional turbulence models are also proposed, with emphasis on hybrid techniques that use the k-ε model at an appropriate distance from the wall and suitable models, with wall functions, in near-wall regions. A major proportion of the heat emitted from electronic packages can be extracted by air cooling. The flow of air throughout an electronic system, and the heat it extracts, is highly dependent on the nature of the turbulence present in the flow. The use of CFD for such investigations is fast becoming a powerful and almost essential tool for the design, development and optimization of engineering applications. However, turbulence models remain a key issue when tackling such flow phenomena. The reliability of CFD analysis depends heavily on the turbulence model employed together with the wall functions implemented. In order to resolve the abrupt fluctuations experienced by the turbulent energy and other parameters in near-wall regions and shear layers, a particularly fine computational mesh is necessary, which inevitably increases the computer storage and run-time requirements. The PHYSICA Finite Volume code was used for this investigation. With the exception of the k-ε and k-ω models, which are available as standard within PHYSICA, all other turbulence models mentioned were implemented in the source code by the authors. The LVEL, LVEL CAP, Wolfshtein, k-ε, k-ω, SST and k-ε/kl models are described and compared with experimental data.
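As background for the wall-function approach described above, the standard logarithmic law of the wall (a textbook form, not one specific to PHYSICA or this paper) relates the near-wall velocity to the friction velocity:

```latex
u^{+} = \frac{u}{u_{\tau}} = \frac{1}{\kappa}\,\ln\!\left(E\,y^{+}\right),
\qquad
y^{+} = \frac{\rho\, u_{\tau}\, y}{\mu}
```

with the von Kármán constant κ ≈ 0.41 and E ≈ 9.0 for smooth walls. Hybrid models of the kind proposed switch between such wall functions and a near-wall model based on the local y⁺, avoiding the very fine mesh otherwise needed to resolve the near-wall layer.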

Relevance:

80.00%

Publisher:

Abstract:

The Sahara desert is a significant source of particulate pollution not only to the Mediterranean region, but also to the Atlantic and beyond. In this paper, PM10 exceedances recorded in the UK and on the island of Crete are studied and their source investigated using Lagrangian Particle Dispersion (LPD) methods. Forward and inverse simulations identify Saharan dust storms as the primary source of these episodes. The methodology used allows comparison between this primary source and other possible candidates, for example large forest fires or volcanic eruptions. Two LPD models are used in the simulations, namely the open source code FLEXPART and the proprietary code HYSPLIT. Driven by the same meteorological fields (the ECMWF MARS archive and the PSU/NCAR Mesoscale model, known as MM5), the codes produce similar, but not identical, predictions. This inter-model comparison enables a critical assessment of the physical modelling assumptions employed in each code, plus the influence of boundary conditions and solution grid density. The outputs, in the form of particle concentrations evolving in time, are compared against satellite images and receptor data from multiple ground-based sites. Quantitative comparisons are good, especially in predicting the time of arrival of the dust plume at a particular location.
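As background on the Lagrangian Particle Dispersion approach, here is a minimal sketch of a particle update step, assuming a zeroth-order random walk; this is illustrative only and is not the actual integration scheme of FLEXPART or HYSPLIT.

```python
# A minimal sketch of the particle update inside a Lagrangian Particle
# Dispersion model: resolved wind plus a random turbulent increment.
import numpy as np

rng = np.random.default_rng(0)

def lpd_step(positions, wind, sigma_turb, dt):
    """Advance particle positions (n, 3) by one time step of length dt [s].

    wind       : (n, 3) or (3,) interpolated wind at each particle [m/s]
    sigma_turb : std. dev. of turbulent velocity fluctuations [m/s]
    """
    turbulent = rng.normal(0.0, sigma_turb, size=positions.shape)
    return positions + (wind + turbulent) * dt

# 1000 particles released at the origin in a uniform 10 m/s wind.
p = np.zeros((1000, 3))
for _ in range(100):
    p = lpd_step(p, np.array([10.0, 0.0, 0.0]), sigma_turb=1.0, dt=60.0)
print(p.mean(axis=0))   # plume centre after 100 minutes
```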

Relevance:

80.00%

Publisher:

Abstract:

The European Skynet Radiometers network (EuroSkyRad or ESR) has recently been established as a research network of European PREDE sun-sky radiometers. Moreover, ESR is federated with SKYNET, an international network of PREDE sun-sky radiometers mostly present in East Asia. In contrast to SKYNET, the European network also integrates users of the CIMEL CE318 sky–sun photometer. With this instrumental duality in mind, a set of open source algorithms has been developed, consisting of two modules for (1) the retrieval of direct sun products (aerosol optical depth, wavelength exponent and water vapor) from the sun extinction measurements; and (2) the inversion of the sky radiance to derive other aerosol optical properties such as size distribution, single scattering albedo or refractive index. In this study we evaluate the ESR direct sun products in comparison with the AERosol RObotic NETwork (AERONET) products. Specifically, we have applied the ESR algorithm to a CIMEL CE318 and a PREDE POM simultaneously for a 4-yr database measured at the Burjassot site (Valencia, Spain), and compared the resultant products with the AERONET direct sun measurements obtained with the same CIMEL CE318 sky–sun photometer. The comparison shows that aerosol optical depth differences are mostly within the nominal uncertainty of 0.003 for a standard calibration instrument, and fall within the nominal AERONET uncertainty of 0.01–0.02 for a field instrument in the spectral range 340 to 1020 nm. In the cases of the Ångström exponent and the columnar water vapor, the differences are lower than 0.02 and 0.15 cm, respectively. Therefore, we present an open source code program that can be used with both CIMEL and PREDE sky radiometers and whose results are equivalent to AERONET and SKYNET retrievals.
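As a sketch of what a direct-sun retrieval of this kind computes, the Beer-Bouguer-Lambert law can be inverted for the total optical depth, from which the aerosol part is isolated; the variable names and fixed Rayleigh value below are illustrative, and operational codes additionally treat gases, pressure corrections and per-component air masses.

```python
# A minimal sketch of a direct-sun aerosol optical depth (AOD) retrieval via
# the Beer-Bouguer-Lambert law: V = V0 * exp(-m * tau_total).
import numpy as np

def aerosol_optical_depth(V, V0, airmass, tau_rayleigh, tau_gas=0.0):
    """AOD from the measured signal V and calibration constant V0."""
    tau_total = np.log(V0 / V) / airmass
    return tau_total - tau_rayleigh - tau_gas

# Example: a 500 nm channel reading at air mass 2.
print(aerosol_optical_depth(V=0.52, V0=1.0, airmass=2.0, tau_rayleigh=0.14))
```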

Relevance:

80.00%

Publisher:

Abstract:

We describe the Density Matrix Renormalization Group algorithms for time-dependent and time-independent Hamiltonians. This paper is a brief but comprehensive introduction to the subject for anyone wishing to enter the field or write the program source code from scratch. An open source version of the code can be found at: http://www.dmrg.it.
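As a flavour of what such a program computes, here is a minimal sketch of the truncation step at the heart of DMRG; shapes and names are illustrative assumptions, and real implementations exploit symmetries and sparsity.

```python
# A minimal sketch of the DMRG truncation step: diagonalise the system
# block's reduced density matrix and keep only the m eigenvectors with the
# largest eigenvalues as the new (truncated) basis.
import numpy as np

def dmrg_truncate(psi, dim_sys, dim_env, m):
    """psi: normalised state on system x environment, flattened to 1D."""
    psi = psi.reshape(dim_sys, dim_env)
    rho_sys = psi @ psi.conj().T             # reduced density matrix
    evals, evecs = np.linalg.eigh(rho_sys)   # eigenvalues in ascending order
    basis = evecs[:, -m:]                    # keep the m most probable states
    truncation_error = 1.0 - evals[-m:].sum()
    return basis, truncation_error

rng = np.random.default_rng(0)
psi = rng.normal(size=16)
psi /= np.linalg.norm(psi)                   # normalise the toy state
basis, err = dmrg_truncate(psi, dim_sys=4, dim_env=4, m=2)
print(basis.shape, err)                      # (4, 2) and the discarded weight
```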

Relevance:

80.00%

Publisher:

Abstract:

The relationship between changes in retinal vessel morphology and the onset and progression of diseases such as diabetes, hypertension and retinopathy of prematurity (ROP) has been the subject of several large-scale clinical studies. However, the difficulty of quantifying changes in retinal vessels in a sufficiently fast, accurate and repeatable manner has restricted the application of the insights gleaned from these studies to clinical practice. This paper presents a novel algorithm for the efficient detection and measurement of retinal vessels, which is general enough that it can be applied to both low- and high-resolution fundus photographs and fluorescein angiograms upon the adjustment of only a few intuitive parameters. Firstly, we describe the simple vessel segmentation strategy, formulated in the language of wavelets, that is used for fast vessel detection. When validated using a publicly available database of retinal images, this segmentation achieves a true positive rate of 70.27%, a false positive rate of 2.83%, and an accuracy score of 0.9371. Vessel edges are then more precisely localised using image profiles computed perpendicularly across a spline fit of each detected vessel centreline, so that both local and global changes in vessel diameter can be readily quantified. Using a second image database, we show that the diameters output by our algorithm display good agreement with the manual measurements made by three independent observers. We conclude that the improved speed and generality offered by our algorithm are achieved without sacrificing accuracy. The algorithm is implemented in MATLAB along with a graphical user interface, and we have made the source code freely available.
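As a sketch of the edge-localisation idea described above, the following samples the image along a short profile perpendicular to the vessel centreline and takes the steepest intensity change on each side as the edges; the wavelet segmentation and spline fit are assumed already done, and the names and parameters are illustrative rather than the paper's MATLAB code.

```python
# A minimal sketch of perpendicular-profile edge localisation.
import numpy as np
from scipy.ndimage import map_coordinates

def perpendicular_profile(image, point, tangent, half_len=5.0, n=21):
    """Intensity profile across the vessel at `point` (row, col array)."""
    normal = np.array([-tangent[1], tangent[0]], dtype=float)
    normal /= np.linalg.norm(normal)
    offsets = np.linspace(-half_len, half_len, n)
    coords = point[:, None] + normal[:, None] * offsets  # (2, n) sample points
    return map_coordinates(image, coords, order=1)       # bilinear sampling

def edge_positions(profile, half_len=5.0):
    """Offsets of the steepest intensity change on each side of the centre."""
    grad = np.abs(np.gradient(profile))
    n = len(profile)
    left = np.argmax(grad[: n // 2])
    right = n // 2 + np.argmax(grad[n // 2:])
    offsets = np.linspace(-half_len, half_len, n)
    return offsets[left], offsets[right]

img = np.zeros((64, 64)); img[:, 30:34] = 1.0            # toy vertical vessel
prof = perpendicular_profile(img, np.array([32.0, 32.0]), np.array([1.0, 0.0]))
print(edge_positions(prof))                               # approximate edges
```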

Relevance:

80.00%

Publisher:

Abstract:

Speeding up sequential programs on multicores is a challenging problem that is in urgent need of a solution. Automatic parallelization of irregular pointer-intensive codes, exemplified by the SPECint codes, is a very hard problem. This paper shows that, with a helping hand, such auto-parallelization is possible and fruitful. This paper makes the following contributions: (i) A compiler framework for extracting pipeline-like parallelism from outer program loops is presented. (ii) Using a light-weight programming model based on annotations, the programmer helps the compiler to find thread-level parallelism. Each of the annotations specifies only a small piece of semantic information that compiler analysis misses, e.g. stating that a variable is dead at a certain program point. The annotations are designed such that correctness is easily verified. Furthermore, we present a tool for suggesting annotations to the programmer. (iii) The methodology is applied to auto-parallelize several SPECint benchmarks. For the benchmark with the most parallelism (hmmer), we obtain a scalable 7-fold speedup on an AMD quad-core dual processor. The annotations constitute a parallel programming model that relies extensively on a sequential program representation. As a result, the complexity of debugging is not increased, and the source code is not obscured. These properties could prove valuable in increasing the efficiency of parallel programming.
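As a hypothetical illustration of the annotation idea (the paper's model targets C code, and the "dead(...)" annotation below is invented for this sketch): the programmer asserts a fact that static analysis misses, here that a temporary carries no value across iterations, which makes the loop safe to parallelise.

```python
# A hypothetical sketch: a dead-variable annotation unlocking parallelism.
from concurrent.futures import ThreadPoolExecutor

def transform(item):
    return item * item          # stand-in for heavy per-iteration work

def summarise(scratch):
    return scratch % 97         # stand-in for a cheap reduction

def process(item):
    scratch = transform(item)
    # annotation: dead(scratch) -- the value does not survive the iteration,
    # which is the kind of fact the paper's annotations convey to a compiler.
    return summarise(scratch)

def run_parallel(items):
    # With scratch known to be iteration-local, the loop becomes a safe map.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(process, items))

print(run_parallel(range(8)))
```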

Relevance:

80.00%

Publisher:

Abstract:

Recent work suggests that the human ear varies significantly between different subjects and can be used for identification. In principle, therefore, using ears in addition to the face within a recognition system could improve accuracy and robustness, particularly for non-frontal views. The paper describes work that investigates this hypothesis using an approach based on the construction of a 3D morphable model of the head and ear. One issue with creating a model that includes the ear is that existing training datasets contain noise and partial occlusion. Rather than exclude these regions manually, a classifier has been developed which automates this process. When combined with a robust registration algorithm the resulting system enables full head morphable models to be constructed efficiently using less constrained datasets. The algorithm has been evaluated using registration consistency, model coverage and minimalism metrics, which together demonstrate the accuracy of the approach. To make it easier to build on this work, the source code has been made available online.
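As a sketch of what a morphable model is mathematically: a mean shape plus a linear combination of principal-component basis shapes. The toy dimensions below are invented; a real head-and-ear model stores tens of thousands of vertices.

```python
# A minimal sketch of a 3D morphable model as mean shape + PCA modes.
import numpy as np

def morphable_shape(mean, basis, coeffs):
    """mean: (3n,) stacked xyz coords; basis: (3n, k); coeffs: (k,)."""
    return mean + basis @ coeffs

rng = np.random.default_rng(1)
mean = rng.normal(size=12)            # 4 vertices -> 12 coordinates
basis = rng.normal(size=(12, 2))      # 2 modes of shape variation
head = morphable_shape(mean, basis, np.array([0.5, -1.0]))
print(head.reshape(4, 3))             # one synthesised head shape
```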

Relevance:

80.00%

Publisher:

Abstract:

In this paper we propose a novel recurrent neural network architecture for video-based person re-identification. Given the video sequence of a person, features are extracted from each frame using a convolutional neural network that incorporates a recurrent final layer, which allows information to flow between time-steps. The features from all time steps are then combined using temporal pooling to give an overall appearance feature for the complete sequence. The convolutional network, recurrent layer and temporal pooling layer are jointly trained to act as a feature extractor for video-based re-identification using a Siamese network architecture. Our approach makes use of colour and optical flow information in order to capture appearance and motion information, which is useful for video re-identification. Experiments are conducted on the iLIDS-VID and PRID-2011 datasets to show that this approach outperforms existing methods of video-based re-identification.

Project source code: https://github.com/niallmcl/Recurrent-Convolutional-Video-ReID
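A minimal sketch of the architecture described above: per-frame CNN features feed a recurrent layer, and temporal pooling averages the outputs into one sequence-level descriptor. The layer sizes are invented; see the project source code linked above for the authors' actual model.

```python
# A toy stand-in for the recurrent-convolutional re-identification network.
import torch
import torch.nn as nn

class RecurrentReID(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(            # tiny stand-in for the real CNN
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim),
        )
        self.rnn = nn.RNN(feat_dim, feat_dim, batch_first=True)

    def forward(self, clip):                 # clip: (batch, time, 3, H, W)
        b, t = clip.shape[:2]
        frames = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(frames)            # information flows between steps
        return out.mean(dim=1)               # temporal (mean) pooling

# Siamese use: embed two clips and compare their distance.
model = RecurrentReID()
a = model(torch.randn(2, 8, 3, 64, 32))
b = model(torch.randn(2, 8, 3, 64, 32))
print(torch.norm(a - b, dim=1))
```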

Relevance:

80.00%

Publisher:

Abstract:

Dependence clusters are (maximal) collections of mutually dependent source code entities according to some dependence relation. Their presence in software complicates many maintenance activities, including testing, refactoring, and feature extraction. Despite several studies finding them common in production code, their formation, identification, and overall structure are not well understood, partly because of challenges in approximating true dependences between program entities. Previous research has considered two approximate dependence relations: a fine-grained statement-level relation using control and data dependences from a program's System Dependence Graph, and a coarser relation based on function-level control-flow reachability. In principle, the first is more expensive and more precise than the second. Using a collection of twenty programs, we present an empirical investigation of the clusters identified by these two approaches. In support of the analysis, we consider a hybrid cluster type that works at the coarser function level but is based on the higher-precision statement-level dependences. The three types of clusters are compared based on their slice sets using two clustering metrics. We also perform extensive analysis of the programs to identify linchpin functions – functions primarily responsible for holding a cluster together. Results include evidence that the less expensive, coarser approaches can often be used as effective proxies for the more expensive, finer-grained approaches. Finally, the linchpin analysis shows that linchpin functions can be effectively and automatically identified.
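As a sketch of the defining idea: a dependence cluster is a maximal set of mutually dependent entities, i.e. a set whose members all reach each other in the dependence graph. The toy graph and naive mutual-reachability search below are illustrative assumptions; real tools derive dependences from a System Dependence Graph or control-flow reachability.

```python
# A minimal sketch: dependence clusters as mutual-reachability classes.
from itertools import product

def reachable(graph, start):
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return seen

def dependence_clusters(graph):
    """Group nodes that depend on each other (mutual reachability)."""
    reach = {n: reachable(graph, n) for n in graph}
    clusters = {}
    for a, b in product(graph, graph):
        if b in reach[a] and a in reach[b]:
            clusters.setdefault(a, set()).add(b)
    return {frozenset(c) for c in clusters.values()}

deps = {"f": {"g"}, "g": {"h"}, "h": {"f"}, "main": {"f"}}
print(dependence_clusters(deps))  # {f, g, h} form one cluster; main is alone
```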

Relevance:

80.00%

Publisher:

Abstract:

Observation-based slicing is a recently introduced, language-independent slicing technique based on the dependences observable from program behaviour. Due to the well-known limits of dynamic analysis, we may only compute an under-approximation of the true observation-based slice. However, because the observation-based slice captures all dependences that can be observed, even such approximations can yield insight into the limitations of static slicing. For example, a static slice S that is strictly smaller than the corresponding observation-based slice is guaranteed to be unsafe. We present the results of three sets of experiments on 12 different programs, including benchmarks and larger programs, which investigate the relationship between static and observation-based slicing. We show that, in extreme cases, observation-based slices can find the true minimal static slice where static techniques cannot. For more typical cases, our results illustrate the potential for observation-based slicing to highlight unsafe static slices. Finally, we report on the sensitivity of observation-based slicing to test quality.
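As a sketch of the core loop behind observation-based slicing: repeatedly try deleting a line, re-run the program, and keep the deletion only if the value observed at the slicing criterion is unchanged. This is illustrative only; the real technique uses deletion windows, several test inputs, and actual compilation.

```python
# A minimal sketch of a delete-and-observe slicing loop.

def observation_based_slice(lines, run):
    """lines: program as a list of strings; run: lines -> observed value."""
    lines = list(lines)
    expected = run(lines)
    changed = True
    while changed:                        # iterate to a fixed point
        changed = False
        for i, line in enumerate(lines):
            if line is None:
                continue
            lines[i] = None               # tentatively delete line i
            candidate = [l for l in lines if l is not None]
            if run(candidate) != expected:
                lines[i] = line           # behaviour changed: restore it
            else:
                changed = True
    return [l for l in lines if l is not None]

def run(program):                         # toy "execution"; x is the criterion
    env = {}
    try:
        for stmt in program:
            exec(stmt, {}, env)
    except Exception:
        return None
    return env.get("x")

print(observation_based_slice(["a = 1", "b = 2", "x = a + 1"], run))
# -> ['a = 1', 'x = a + 1']; 'b = 2' is sliced away
```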

Relevance:

80.00%

Publisher:

Abstract:

Real-time systems demand guaranteed and predictable run-time behaviour in order to ensure that no task misses its deadline. Over the years, we have witnessed an ever-increasing demand for functionality enhancements in embedded real-time systems. Along with the functionalities, the design itself grows more complex. Imposed constraints, such as energy consumption, time, and space bounds, also require attention and proper handling. Additionally, efficient scheduling algorithms, as proven through analyses and simulations, often impose requirements that have significant run-time cost, especially in the context of multi-core systems. In order to further investigate the behaviour of such systems and to quantify and compare the overheads involved, we have developed SPARTS, a simulator of a generic embedded real-time device. The tasks in the simulator are described by externally visible parameters (e.g. minimum inter-arrival time, sporadicity, WCET, BCET, etc.), rather than by the code of the tasks. While our current implementation is primarily focused on our immediate needs in the area of power-aware scheduling, it is designed to be extensible to accommodate different task properties, scheduling algorithms and/or hardware models for application in a wide variety of simulations. The source code of SPARTS is available for download at [1].
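As a sketch of driving a simulated task purely from externally visible parameters rather than task code, the following generates sporadic job releases with execution demands in [BCET, WCET]; the parameter names and uniform distributions are invented for illustration and are not SPARTS's actual model.

```python
# A hypothetical sketch of parameter-driven job generation for a simulator.
import random

def generate_jobs(min_inter_arrival, sporadic_slack, bcet, wcet,
                  horizon, seed=0):
    """Yield (release_time, execution_time) pairs up to `horizon`."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        # Sporadic release: at least min_inter_arrival apart, plus jitter.
        t += min_inter_arrival + rng.uniform(0.0, sporadic_slack)
        if t >= horizon:
            return
        yield t, rng.uniform(bcet, wcet)   # actual demand in [BCET, WCET]

for release, cost in generate_jobs(10.0, 5.0, 1.0, 3.0, horizon=60.0):
    print(f"release={release:6.2f}  cost={cost:4.2f}")
```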