182 results for Paciaudi, Paolo, 1710-1785


Relevance: 10.00%

Abstract:

A BSP (Bulk Synchronous Parallelism) computation is characterized by the generation of asynchronous messages in packages during independent execution of a number of processes and their subsequent delivery at synchronization points. Bundling messages together represents a significant departure from the traditional ‘one communication at a time’ approach. In this paper the semantic consequences of communication packaging are explored. In particular, the BSP communication structure is identified with a general form of substitution—predicate substitution. Predicate substitution provides a means of reasoning about the synchronized delivery of asynchronous communications when the immediate programming context does not explicitly refer to the variables that are to be updated (unlike traditional operations, such as the assignment $x := e$, where the names of the updated variables can be extracted from the context). Proofs of implementations of Newton's root finding method and prefix sum are used to illustrate the practical application of the proposed approach.
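
As an illustration of the superstep pattern the paper reasons about, the following minimal Python sketch computes a prefix sum in BSP style: each simulated process queues asynchronous messages during its local computation, and the whole message package is delivered only at the synchronization barrier. The simulation structure and names are illustrative, not the paper's notation.

```python
# Minimal BSP-style simulation: p "processes" compute an inclusive prefix sum.
# Messages sent during a superstep are buffered and delivered only at the
# barrier, mirroring BSP's packaged, synchronized communication.

def bsp_prefix_sum(values):
    p = len(values)
    acc = list(values)                    # acc[i]: process i's partial sum
    step = 1
    while step < p:                       # one BSP superstep per iteration
        outbox = [[] for _ in range(p)]   # messages queued, not yet delivered
        for i in range(p - step):         # local phase: queue sends only
            outbox[i + step].append(acc[i])
        # --- barrier: the whole message package is delivered at once ---
        for i in range(p):
            acc[i] += sum(outbox[i])
        step *= 2
    return acc

print(bsp_prefix_sum([1, 2, 3, 4, 5]))    # -> [1, 3, 6, 10, 15]
```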

Relevance: 10.00%

Abstract:

This paper, chosen as a best paper from the 2004 SAMOS Workshop on Computer Systems, describes a novel, efficient methodology for automatically creating embedded DSP computer systems. The methodology is novel in that embedded electronic signal processing systems, such as radar or sonar, can now be designed entirely from the algorithm level, i.e. no low-level system design experience is required, while still achieving low, controllable implementation overheads and high real-time performance. In the chosen design example, a bank of Normalised Lattice Filter (NLF) components is created which achieves a four-fold reduction in the required processing resources with no decrease in performance.
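
The paper's hardware design flow cannot be reproduced in a few lines, but the resource saving comes from sharing one filter core across a bank of channels. The sketch below is a rough software analogue, using a plain FIR lattice stage as a simplified stand-in for the normalised lattice structure; the coefficients and data are invented.

```python
def fir_lattice(x, ks):
    """Run one sample stream through an FIR lattice with reflection
    coefficients ks (a simplified stand-in for a normalised lattice)."""
    f = list(x)                      # forward prediction errors
    g = list(x)                      # backward prediction errors
    for k in ks:                     # one lattice stage per coefficient
        g_prev = [0.0] + g[:-1]      # one-sample delay on the backward path
        f_new = [fi + k * gi for fi, gi in zip(f, g_prev)]
        g_new = [k * fi + gi for fi, gi in zip(f, g_prev)]
        f, g = f_new, g_new
    return f

# A "bank" of channels pushed through one shared core in sequence:
# the software analogue of time-multiplexing a single hardware filter.
channels = [[1.0, 0.5, -0.3, 0.2], [0.0, 1.0, 0.0, -1.0]]
ks = [0.5, -0.25]
outputs = [fir_lattice(ch, ks) for ch in channels]
```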

Relevance: 10.00%

Abstract:

In previous papers, we have presented a logic-based framework based on fusion rules for merging structured news reports. Structured news reports are XML documents in which the text entries are restricted to individual words or simple phrases, such as names and domain-specific terminology, and to numbers and units. We assume structured news reports do not require natural language processing. Fusion rules are a form of scripting language that defines how structured news reports should be merged. The antecedent of a fusion rule is a call to investigate the information in the structured news reports and the background knowledge, and the consequent of a fusion rule is a formula specifying an action to be undertaken to form a merged report. It is expected that a set of fusion rules is defined for any given application. In this paper we extend the approach to handle probability values, degrees of belief, or necessity measures associated with text entries in the news reports. We present the formal definition for each of these types of uncertainty and explain how they can be handled using fusion rules. We also discuss methods for detecting inconsistencies among sources.
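
To make the antecedent/consequent shape of a fusion rule concrete, the following minimal Python sketch merges two toy XML reports: the antecedent inspects the reports' text entries, and the consequent builds an element of the merged report. The tags, condition, and merge action are invented for illustration and do not reproduce the paper's fusion-rule language.

```python
import xml.etree.ElementTree as ET

report1 = ET.fromstring("<report><city>London</city><temp>18</temp></report>")
report2 = ET.fromstring("<report><city>London</city><temp>20</temp></report>")

def antecedent(reports):
    # Investigate the sources: fire only if all reports agree on the city.
    cities = {r.findtext("city") for r in reports}
    return len(cities) == 1

def consequent(reports, merged):
    # Action: copy the agreed city and record the temperature interval,
    # one simple way of retaining conflicting numeric entries.
    ET.SubElement(merged, "city").text = reports[0].findtext("city")
    temps = sorted(float(r.findtext("temp")) for r in reports)
    ET.SubElement(merged, "temp").text = f"{temps[0]}..{temps[-1]}"

merged = ET.Element("mergedreport")
if antecedent([report1, report2]):
    consequent([report1, report2], merged)
print(ET.tostring(merged, encoding="unicode"))
# <mergedreport><city>London</city><temp>18.0..20.0</temp></mergedreport>
```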

Relevance: 10.00%

Abstract:

To achieve higher flexibility and to better satisfy actual customer requirements, there is an increasing tendency to develop and deliver software in an incremental fashion. In adopting this process, requirements are delivered in releases, and so a decision has to be made on which requirements should be delivered in which release. Three main considerations need to be taken into account: the technical precedences inherent in the requirements, the typically conflicting priorities as determined by the representative stakeholders, and the balance between required and available effort. The technical precedence constraints relate to situations where one requirement cannot be implemented until another is completed, or where one requirement must be implemented in the same increment as another. Stakeholder preferences may be based on the perceived value or urgency of delivered requirements to the different stakeholders involved. The technical priorities and individual stakeholder priorities may be in conflict and difficult to reconcile. This paper provides (i) a method for optimally allocating requirements to increments; (ii) a means of assessing and optimizing the degree to which the ordering conflicts with stakeholder priorities within technical precedence constraints; (iii) a means of balancing required and available resources for all increments; and (iv) an overall method called EVOLVE, aimed at the continuous planning of incremental software development. The optimization method used is iterative and essentially based on a genetic algorithm. A set of the most promising candidate solutions is generated to support the final decision. The paper evaluates the proposed approach using a sample project.
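
As a rough illustration of the optimization idea (not the EVOLVE method itself), the sketch below encodes an assignment of requirements to increments, scores it by stakeholder priority with penalties for precedence and effort violations, and evolves it with a mutation-only toy genetic algorithm. All requirements, weights, and parameters are hypothetical.

```python
import random

reqs       = ["r1", "r2", "r3", "r4"]
priority   = {"r1": 9, "r2": 7, "r3": 4, "r4": 2}  # stakeholder value
effort     = {"r1": 3, "r2": 2, "r3": 2, "r4": 1}
precedes   = [("r1", "r3")]         # r1 must come no later than r3
capacity   = {1: 4, 2: 4}           # available effort per increment
increments = [1, 2]

def fitness(assign):
    # Higher-priority requirements in earlier increments score more.
    score = sum(priority[r] * (len(increments) + 1 - assign[r]) for r in reqs)
    for a, b in precedes:                     # precedence penalty
        if assign[a] > assign[b]:
            score -= 100
    for inc in increments:                    # effort-balance penalty
        used = sum(effort[r] for r in reqs if assign[r] == inc)
        if used > capacity[inc]:
            score -= 50 * (used - capacity[inc])
    return score

def mutate(assign):
    child = dict(assign)
    child[random.choice(reqs)] = random.choice(increments)
    return child

pop = [{r: random.choice(increments) for r in reqs} for _ in range(20)]
for _ in range(100):                          # evolve: keep the fittest half
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]

best = max(pop, key=fitness)
print(best, fitness(best))
```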

Relevance: 10.00%

Abstract:

This research, published in the foremost international journal in information theory, shows the interplay between complex random matrix theory and multi-antenna information theory. Dr T. Ratnarajah is a leader in this area of research, and his work has contributed to the development of graduate curricula (a course reader) at the Massachusetts Institute of Technology (MIT), USA, by Professor Alan Edelman. The course is named "The Mathematics and Applications of Random Matrices"; see http://web.mit.edu/18.338/www/projects.html

Relevance: 10.00%

Abstract:

We present Ca II K ($\lambda_{\rm air} = 3933.661$ Å) interstellar observations towards 20 early-type stars, to place lower distance limits on intermediate- and high-velocity clouds (IHVCs) in their lines of sight. The spectra are also employed to estimate the Ca abundance in the low-velocity gas towards these objects, when combined with Leiden-Dwingeloo 21-cm HI survey data of 0.5° spatial resolution. Nine of the stars, which lie towards IHVC complexes H, K and gp, were observed with the intermediate dispersion spectrograph on the Isaac Newton Telescope at a resolution $R = \lambda/\Delta\lambda$ of 9000 (about 33 km s$^{-1}$) and a signal-to-noise ratio (S/N) per pixel of 75-140. A further nine objects were observed with the Utrecht Echelle Spectrograph on the William Herschel Telescope at $R$ = 40 000 (about 7.5 km s$^{-1}$) and S/N per pixel of 10-25. Finally, two objects were observed in both the Ca II K and Na I D lines using the 2D COUDE on the McDonald 2.7-m telescope at $R$ = 35 000 (about 8.5 km s$^{-1}$). The Ca II K abundance, $\log_{10}(A) = \log_{10}[N(\mathrm{Ca\,II\,K})] - \log_{10}[N(\mathrm{HI})]$, plotted against HI column density for the objects in the current sample with heights above the Galactic plane ($z$) exceeding 1000 pc, is found to obey the Wakker & Mathis (2000) relation. Also, the reduced column density of Ca II K as a function of $z$ is consistent with the larger sample taken from Smoker et al. (2003). Higher-S/N observations than those previously taken towards the HVC complex H stars HD 13256 and HILT 190 reinforce the assertion that this complex lies at a distance exceeding 4000 pc. No obvious absorption is detected in observations of ALS 10407 and HD 357657 towards IVC complex gp. The latter star has a spectroscopically estimated distance of about 2040 pc, although this was derived assuming the star lies on the main sequence and without any reddening correction being applied. Finally, no Ca II K absorption is detected towards two stars along the line of sight to complex K, namely PG 1610+529 and PG 1710+490. The latter lies at a distance of about 700 pc, hence placing a lower distance limit on this complex, where previously only an upper distance limit of 6800 pc was available.
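
Since the abundance relation above is simply a difference of logarithmic column densities, a short sketch with invented column densities makes the arithmetic explicit:

```python
import math

# Hypothetical column densities in cm^-2, chosen only to illustrate the
# relation log10(A) = log10[N(Ca II K)] - log10[N(HI)] from the abstract.
N_caii = 1.0e12
N_hi   = 2.0e20

log_A = math.log10(N_caii) - math.log10(N_hi)
print(f"log10(A) = {log_A:.2f}")   # -> log10(A) = -8.30
```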

Relevance: 10.00%

Abstract:

Historical GIS has the potential to re-invigorate our use of statistics from historical censuses and related sources. In particular, areal interpolation can be used to create long-run time-series of spatially detailed data that will enable us to enhance significantly our understanding of geographical change over periods of a century or more. The difficulty with areal interpolation, however, is that the data that it generates are estimates which will inevitably contain some error. This paper describes a technique that allows the automated identification of possible errors at the level of the individual data values.
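
As a minimal illustration of the kind of estimate involved, the sketch below performs simple area-weighted areal interpolation, redistributing source-zone counts to target zones in proportion to overlapping area. The zones, overlap fractions, and counts are invented, and the paper's error-detection technique is not reproduced here.

```python
# Area-weighted areal interpolation: redistribute a historical source-zone
# count across modern target zones in proportion to overlapping area.
source_counts = {"parish_A": 1000, "parish_B": 600}

# Fraction of each source zone's area falling inside each target zone.
overlap = {
    ("parish_A", "district_1"): 0.7,
    ("parish_A", "district_2"): 0.3,
    ("parish_B", "district_2"): 1.0,
}

estimates = {}
for (src, tgt), frac in overlap.items():
    estimates[tgt] = estimates.get(tgt, 0.0) + source_counts[src] * frac

print(estimates)   # {'district_1': 700.0, 'district_2': 900.0}
```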