631 results for "algorithmic skeletons"


Relevance: 10.00%

Abstract:

Traditionally, the Internet provides only a “best-effort” service, treating all packets going to the same destination equally. However, providing differentiated services to users based on their quality requirements is an increasingly pressing need. To support this, routers must be able to distinguish and isolate traffic belonging to different flows; the ability to determine which flow each packet belongs to is called packet classification. Technology vendors are reluctant to support algorithmic solutions for classification because of their nondeterministic performance. Content addressable memories (CAMs) are favoured by vendors for their deterministic, high lookup rates, but they suffer from high power consumption and high silicon cost. This paper presents a new algorithmic-architectural solution for packet classification that combines CAMs with algorithms based on multilevel cutting of the classification space into smaller subspaces. The solution exploits the geometrical distribution of rules in the classification space, providing the deterministic performance of CAMs, support for dynamic updates, and added flexibility for system designers.
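The multilevel-cutting idea can be sketched in a few lines: rules are axis-aligned regions of a (here 2-D) source × destination address space, the space is cut recursively until each leaf holds a CAM-sized rule list, and classification walks the cut tree. All rule names, address ranges and cut parameters below are invented for illustration; the paper's actual scheme, and its CAM integration, is more elaborate.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    src: range   # matching source-address range
    dst: range   # matching destination-address range

    def matches(self, pkt):
        return pkt[0] in self.src and pkt[1] in self.dst

def build(rules, lo, hi, depth=0, leaf_size=2, max_depth=4):
    """Recursively halve the space along alternating dimensions (a cut tree)."""
    if len(rules) <= leaf_size or depth == max_depth:
        return ('leaf', rules)            # small rule list: the CAM-sized stage
    d = depth % 2                         # alternate the cut dimension
    mid = (lo[d] + hi[d]) // 2
    rng = lambda r: (r.src, r.dst)[d]
    left  = [r for r in rules if rng(r).start < mid]
    right = [r for r in rules if rng(r).stop > mid]
    hi_l = tuple(mid if i == d else hi[i] for i in range(2))
    lo_r = tuple(mid if i == d else lo[i] for i in range(2))
    return ('node', d, mid,
            build(left, lo, hi_l, depth + 1, leaf_size, max_depth),
            build(right, lo_r, hi, depth + 1, leaf_size, max_depth))

def classify(tree, pkt):
    """Walk the cut tree, then linearly search the small leaf rule list."""
    while tree[0] == 'node':
        _, d, mid, left, right = tree
        tree = left if pkt[d] < mid else right
    return next((r.name for r in tree[1] if r.matches(pkt)), 'default')

rules = [Rule('web',  range(0, 128),   range(0, 256)),
         Rule('dns',  range(128, 256), range(0, 128)),
         Rule('mail', range(128, 256), range(128, 256))]
tree = build(rules, (0, 0), (256, 256))
```

Each leaf's rule list is small enough for a deterministic parallel lookup, which is where the hybrid design substitutes a CAM for the linear search.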

Relevance: 10.00%

Abstract:

Functional and non-functional concerns require different programming effort, techniques, and methodologies when developing efficient parallel/distributed applications. In this work we present a "programmer-oriented" methodology, based on formal tools, that permits reasoning about parallel/distributed program development and refinement. The methodology is semi-formal in that it does not require the use of heavyweight formal tools and techniques, while still providing palatable and effective support to programmers developing parallel/distributed applications, particularly when handling non-functional concerns.

Relevance: 10.00%

Abstract:

This article, and the research out of which it springs, has a number of points of origin; it may also have more than one point of conclusion even as it argues that the current manifestation of Las Vegas could well be read as its last. This is not to say that Las Vegas will cease to create new versions of itself; after all, this is one of the main sustaining factors of Las Vegas’ success in the last two decades, as a new Strip on Las Vegas Boulevard has arisen from the demolitions and redesigns of the original Las Vegas Strip of the 1950s and 1960s. What is argued for here is a reading of Vegas as a terminal point within American culture and particularly within its visual realms. Las Vegas’ place within the dynamics of American visual and exhibition culture comes as the latest in a sequence which, since the nineteenth century, has included among its manifestations World’s Fairs, side shows, freak shows and travelling carnivals. America’s experiments in the visual domain have been updated in both the twentieth and twenty-first centuries in a variety of spectacular forms and entertainment zones (Disneyland, EPCOT, the new Las Vegas). Vegas is the ultimate incarnation of a carnivalised display culture, the city’s casino Strip reclothed more as a theme park for digital camera-toting tourists than as a resort for dedicated gamblers. The possibility that the current incarnation of Las Vegas of late 2009 and 2010 will be the last Vegas hovers as a spectral remnant of the economic downturn and financial collapse of 2008, marked by the unfinished skeletons of projected new casino hotels on Las Vegas Boulevard and by a sudden reversal of fortune for the nation’s favourite gaming location.

Relevance: 10.00%

Abstract:

Exploiting the underutilisation of variable-length DSP algorithms during normal operation is vital when seeking to maximise the achievable functionality of an application within a peak power budget. A system-level, low-power design methodology for FPGA-based, variable-length DSP IP cores is presented. Algorithmic commonality is identified and resources are mapped onto a configurable datapath to increase achievable functionality. The methodology is applied to a digital receiver application, where a 100% increase in operational capacity is achieved in certain modes without significant increases in the power or area budget. Measured results show that the resulting architectures require 19% less peak power, 33% fewer multipliers and 12% fewer slices than existing architectures.
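As a toy illustration of the resource-sharing idea, the sketch below models one shared multiply-accumulate "datapath" serving both a short and a long filter mode, mimicking how algorithmic commonality lets a single set of multipliers cover several operating modes. The mode names and tap values are invented, and real datapath mapping is of course done in hardware, not software.

```python
def mac_datapath(samples, coeffs):
    """The shared multiply-accumulate resource used by every mode."""
    return sum(s * c for s, c in zip(samples, coeffs))

# Each operating mode is just a different coefficient set routed through
# the same MAC resource (coefficients invented for illustration).
MODES = {
    'short': [1, 2, 2, 1],              # 4-tap mode
    'long':  [1, 1, 2, 4, 4, 2, 1, 1],  # 8-tap mode reuses the same MAC
}

def filter_output(mode, window):
    coeffs = MODES[mode]
    return mac_datapath(window[:len(coeffs)], coeffs)
```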

Relevance: 10.00%

Abstract:

Background: Identification of the structural domains of proteins is important for our understanding of the organizational principles and mechanisms of protein folding, and for insights into protein function and evolution. Algorithmic methods developed so far for dissecting proteins of known structure into domains are based on an examination of multiple geometrical, physical and topological features. Successful as many of these approaches are, they rely heavily on heuristics, and it is not clear whether they illuminate any deep underlying principles of protein domain organization. Other well-performing domain dissection methods rely on comparative sequence analysis. These methods are applicable to sequences of known and unknown structure alike, and their success highlights a fundamental principle of protein modularity, but it does not directly improve our understanding of protein spatial structure.

Relevance: 10.00%

Abstract:

Measuring the structural similarity of graphs is a challenging and outstanding problem. Most classical approaches, the so-called exact graph matching methods, are based on graph or subgraph isomorphism relations between the underlying graphs. In contrast to these methods, in this paper we introduce a novel approach to measuring the structural similarity of directed and undirected graphs that is based mainly on margins of feature vectors representing the graphs. We introduce novel graph similarity and dissimilarity measures, establish some of their properties, and analyze their algorithmic complexity. We find that the computational complexity of our measures is polynomial in the graph size and hence significantly better than that of classical methods such as exact graph matching, which is NP-complete. Numerically, we provide examples of our measure and compare the results with the well-known graph edit distance. © 2006 Elsevier Inc. All rights reserved.
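The general strategy, comparing feature vectors instead of testing (sub)graph isomorphism, can be sketched in polynomial time as below. The degree-histogram features are an assumption for illustration only; they are not the margin-based measure the paper defines.

```python
from collections import Counter

def degree_histogram(edges, n_bins=8):
    """Feature vector for a graph given as an undirected edge list:
    a histogram of vertex degrees (last bin collects large degrees)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    hist = [0] * n_bins
    for d in deg.values():
        hist[min(d, n_bins - 1)] += 1
    return hist

def dissimilarity(edges_a, edges_b):
    """L1 distance between feature vectors; 0 means identical features.
    Runs in time linear in the number of edges, unlike exact matching."""
    ha, hb = degree_histogram(edges_a), degree_histogram(edges_b)
    return sum(abs(a - b) for a, b in zip(ha, hb))
```

Note the trade-off such measures accept: two non-isomorphic graphs can share a feature vector, which is the price of polynomial-time comparison.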

Relevance: 10.00%

Abstract:

Scurvy has increasingly been recognized in archaeological populations since the 1980s but this study represents the first examination of the paleopathological findings of scurvy in a known famine population. The Great Famine (1845–1852) was a watershed in Irish history and resulted in the death of one million people and the mass emigration of just as many. It was initiated by a blight which completely wiped out the potato—virtually the only source of food for the poor of Ireland. This led to mass starvation and a widespread occurrence of infectious and metabolic diseases. A recent discovery of 970 human skeletons from mass burials dating to the height of the famine in Kilkenny City (1847–1851) provided an opportunity to study the skeletal manifestations of scurvy—a disease that became widespread at this time due to the sudden lack of Vitamin C which had previously almost exclusively been provided by the potato. A three-scale diagnostic reliance approach has been employed as a statistical aid for diagnosing the disease in the population. A biocultural approach was adopted to enable the findings to be contextualized and the etiology and impact of the disease explored. The results indicate that scurvy indirectly influenced famine-induced mortality. A sex and stature bias is evident among adults in which males and taller individuals displayed statistically significantly higher levels of scorbutic lesions. The findings have also suggested that new bone formation at the foramen rotundum is a diagnostic criterion for the paleopathological identification of scurvy, particularly among juveniles. Am J Phys Anthropol, 2012. © 2012 Wiley Periodicals, Inc.

Relevance: 10.00%

Abstract:

2-Phosphanylethylcyclopentadienyl lithium compounds, Li[C5R'4(CH2)2PR2] (R = Et, R' = H or Me; R = Ph, R' = Me), have been prepared from the reaction of the spirohydrocarbons C5R'4(C2H4) with LiPR2. C5Et4HSiMe2CH2PMe2 was prepared by reaction of Li[C5Et4] with Me2SiCl2 followed by Me2PCH2Li. The lithium salts were reacted with [RhCl(CO)2]2, [IrCl(CO)3] or [Co2(CO)8] to give [M(C5R'4(CH2)2PR2)(CO)] (M = Rh, R = Et, R' = H or Me; R = Ph, R' = Me; M = Ir or Co, R = Et, R' = Me), which have been fully characterised, in many cases crystallographically, as monomers with coordination of both the phosphorus atom and the cyclopentadienyl ring. The values of ν(CO) for these complexes are usually lower than those for the analogous complexes without the bridge between the cyclopentadienyl ring and the phosphine, the exception being [Rh(Cp'(CH2)2PEt2)(CO)] (Cp' = C5Me4), the most electron-rich of the complexes. [Rh(C5Et4SiMe2CH2PMe2)(CO)] may be a dimer. [Co2(CO)8] reacts with C5H5(CH2)2PEt2 or C5Et4HSiMe2CH2PMe2 (L) to give binuclear complexes of the form [Co2(CO)6L2] with almost linear PCoCoP skeletons. [Rh(Cp'(CH2)2PEt2)(CO)] and [Rh(Cp'(CH2)2PPh2)(CO)] are active for methanol carbonylation at 150 °C and 27 bar CO, with the rate using [Rh(Cp'(CH2)2PPh2)(CO)] (0.81 mol dm⁻³ h⁻¹) being higher than that for [RhI2(CO)2]⁻ (0.64 mol dm⁻³ h⁻¹). The most electron-rich complex, [Rh(Cp'(CH2)2PEt2)(CO)] (0.38 mol dm⁻³ h⁻¹), gave a rate comparable to that of [Cp*Rh(PEt3)(CO)] (0.30 mol dm⁻³ h⁻¹), which was unstable towards oxidation of the phosphine. [Rh(Cp'(CH2)2PEt2)I2], which is inactive for methanol carbonylation, was isolated after the methanol carbonylation reaction using [Rh(Cp'(CH2)2PEt2)(CO)].

Relevance: 10.00%

Abstract:

Apparent reversals in rotating trapezia have been regarded as evidence that human vision favours methods which are heuristic or form-dependent. However, the argument rests on the assumption that general algorithmic methods would avoid the illusion, and that has never been clear. A general algorithm for interpreting moving parallels has been developed to address the issue. It handles a considerable range of stimuli successfully, but finds multiple interpretations in situations which correspond closely to those where apparent reversals occur. This strengthens the hypothesis that apparent reversals may occur when general algorithmic methods fail and heuristics are invoked as a stopgap.

Relevance: 10.00%

Abstract:

Biodiversity is not a commodity, nor a service (ecosystem or otherwise); it is a scientific measure of the complexity of a biological system. Rather than valuing biodiversity directly, economists have tended to value its services, often just the services of 'key' species. This is understandable given the confusion of definitions and measures of biodiversity, but weakly justified if biodiversity is not substitutable. We provide a quantitative and comprehensive definition of biodiversity and propose a framework for examining its substitutability as the first step towards valuation. We define biodiversity as a measure of semiotic information; it is equated with biocomplexity and measured by Algorithmic Information Content (AIC). We argue that the potentially valuable component of this is the functional information content (FIC), which determines biological fitness and supports ecosystem services. Inspired by recent extensions to the Noah's Ark problem, we show how FIC/AIC can be calculated to measure the degree of substitutability within an ecological community. From this, we derive a way to rank whole communities by Indirect Use Value, by quantifying the relation between system complexity and the production rate of ecosystem services. Understanding biodiversity as information thus serves as a practical interface between economics and ecological science.
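Algorithmic Information Content (Kolmogorov complexity) is uncomputable in general; a common practical proxy, assumed here for illustration rather than taken from the paper, is compressed length: a community description dominated by one species compresses far better than one with many distinct species.

```python
import zlib

def approx_aic(description: str) -> int:
    """Approximate algorithmic information content by the size in bytes
    of the description after maximum-level DEFLATE compression."""
    return len(zlib.compress(description.encode('utf-8'), 9))

# Toy community descriptions (species labels invented for illustration):
uniform = 'sp1 ' * 100                              # one species repeated
diverse = ' '.join(f'sp{i}' for i in range(100))    # 100 distinct species
```

On these toy strings the repetitive, low-complexity community yields a much smaller compressed size than the diverse one, matching the intuition that AIC tracks biocomplexity.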

Relevance: 10.00%

Abstract:

We address the problem of non-linearity in 2D shape modelling of a particular articulated object: the human body. The issue is partially resolved by applying a different Point Distribution Model (PDM) depending on the viewpoint. The remaining non-linearity is handled using Gaussian Mixture Models (GMMs). A dynamics-based clustering is proposed and carried out in the Pose Eigenspace. A fundamental question when clustering is how to determine the optimal number of clusters; in our view, the main aspect to be evaluated is the mean Gaussianity. This partitioning is then used to fit a GMM to each of the view-based PDMs, derived from a database of silhouettes and skeletons. Dynamic correspondences are then obtained between the Gaussian components of the four mixtures. Finally, we compare this approach with two methods we previously developed to cope with non-linearity: a Nearest Neighbor (NN) classifier and Independent Component Analysis (ICA).
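The cluster-count question can be illustrated with a standard model-selection sketch: fit GMMs with increasing numbers of components and keep the one with the lowest Bayesian Information Criterion. The synthetic 2-D data standing in for Pose Eigenspace coordinates, and the use of BIC rather than the paper's mean-Gaussianity criterion, are assumptions for illustration (requires numpy and scikit-learn).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for pose-eigenspace coordinates: two separated clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal((0, 0), 1.0, (200, 2)),     # cluster 1
               rng.normal((10, 10), 1.0, (200, 2))])  # cluster 2

def best_gmm(X, k_max=4):
    """Fit GMMs with 1..k_max components; return the one minimising BIC."""
    fits = [GaussianMixture(n_components=k, random_state=0).fit(X)
            for k in range(1, k_max + 1)]
    return min(fits, key=lambda g: g.bic(X))

model = best_gmm(X)
```

On this toy data the BIC penalty on extra parameters should favour the true two-component structure over both under- and over-fitted mixtures.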

Relevance: 10.00%

Abstract:

The emergence of programmable logic devices as processing platforms for digital signal processing applications poses challenges for the rapid implementation and high-level optimization of algorithms on these platforms. This paper describes Abhainn, a rapid implementation methodology and toolsuite for translating an algorithmic expression of a system into a working implementation on a heterogeneous multiprocessor/field programmable gate array platform, or a standalone system-on-programmable-chip solution. Two particular focuses of Abhainn are the automated but configurable realisation of inter-processor communication fabrics, and the establishment of novel dedicated hardware component design methodologies allowing algorithm-level transformation for system optimization. This paper outlines the approaches employed in both cases.

Relevance: 10.00%

Abstract:

A simple logic of conditional preferences is defined, with a language that allows the compact representation of certain kinds of conditional preference statements, a semantics and a proof theory. CP-nets and TCP-nets can be mapped into this logic, and the semantics and proof theory generalise those of CP-nets and TCP-nets. The system can also express preferences of a lexicographic kind. The paper derives various sufficient conditions for a set of conditional preferences to be consistent, along with algorithmic techniques for checking such conditions and hence confirming consistency. These techniques can also be used for totally ordering outcomes in a way that is consistent with the set of preferences, and they are further developed to give an approach to the problem of constrained optimisation for conditional preferences.
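The final idea, totally ordering outcomes consistently with a set of preferences, can be sketched with a topological sort: treat each derived statement "a is better than b" as a directed edge and read consistency as acyclicity. The toy outcome names below are invented, and deriving such pairs from conditional preference statements is the part the paper's proof theory supplies.

```python
from graphlib import TopologicalSorter, CycleError

def consistent_order(preferences):
    """preferences: iterable of (better, worse) pairs.
    Returns a best-to-worst total order consistent with all pairs,
    or None if the preferences are inconsistent (contain a cycle)."""
    graph = {}
    for better, worse in preferences:
        # 'worse' may only be emitted after 'better' in the ordering.
        graph.setdefault(worse, set()).add(better)
        graph.setdefault(better, set())
    try:
        return list(TopologicalSorter(graph).static_order())
    except CycleError:
        return None
```

A cyclic pair such as "a over b" plus "b over a" is detected as inconsistency, mirroring the role of the paper's sufficient conditions for consistency.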

Relevance: 10.00%

Abstract:

Biodiversity may be seen as a scientific measure of the complexity of a biological system, implying an information basis. Complexity cannot be directly valued, so economists have tried to define the services it provides, though often just valuing the services of 'key' species. Here we provide a new definition of biodiversity as a measure of functional information, arguing that complexity embodies meaningful information as Gregory Bateson defined it. We argue that functional information content (FIC) is the potentially valuable component of total (algorithmic) information content (AIC), as it alone determines biological fitness and supports ecosystem services. Inspired by recent extensions to the Noah's Ark problem, we show how FIC/AIC can be calculated to measure the degree of substitutability within an ecological community. Establishing substitutability is an essential foundation for valuation. From it, we derive a way to rank whole communities by Indirect Use Value, through quantifying the relation between system complexity and the production rate of ecosystem services. Understanding biodiversity as information evidently serves as a practical interface between economics and ecological science. © 2012 Elsevier B.V.

Relevance: 10.00%

Abstract:

The initial part of this paper reviews the early challenges (c. 1980) in achieving real-time silicon implementations of DSP computations. In particular, it discusses research on application-specific architectures, including bit-level systolic circuits, that led to important advances in achieving the DSP performance levels then required. These were many orders of magnitude greater than those achievable using programmable (including early DSP) processors, and were demonstrated through the design of commercial digital correlator and digital filter chips. As is discussed, an important challenge was the application of these concepts to recursive computations such as occur, for example, in Infinite Impulse Response (IIR) filters. An important breakthrough was to show how fine-grained pipelining can be used if arithmetic is performed most significant bit (msb) first. This can be achieved using redundant number systems, including carry-save arithmetic. This research and its practical benefits were again demonstrated through a number of novel IIR filter chip designs which, at the time, exhibited performance much greater than previous solutions. The architectural insights gained, coupled with the regular nature of many DSP and video processing computations, also provided the foundation for new methods for the rapid design and synthesis of complex DSP System-on-Chip (SoC) Intellectual Property (IP) cores. This included the creation of a wide portfolio of commercial SoC video compression cores (MPEG2, MPEG4, H.264) for very high performance applications ranging from cell phones to High Definition TV (HDTV). The work provided the foundation for systematic methodologies, tools and design flows, including high-level design optimizations based on "algorithmic engineering", and also led to the creation of the Abhainn tool environment for the design of complex heterogeneous DSP platforms comprising processors and multiple FPGAs.
The paper concludes with a discussion of the problems faced by designers in developing complex DSP systems using current SoC technology. © 2007 Springer Science+Business Media, LLC.
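The carry-save idea mentioned above can be shown in miniature: a 3:2 compressor reduces three operands to a redundant (sum, carry) pair using only bitwise operations, so no carry ripples across bit positions within a pipeline stage. This is an illustrative software sketch of the arithmetic identity, not the msb-first redundant filter arithmetic of the paper.

```python
def carry_save_add(a: int, b: int, c: int):
    """One 3:2 compressor step over nonnegative integers: the bitwise sum
    and the carries are kept separate, so nothing propagates between bits."""
    s = a ^ b ^ c                                 # per-bit sum, no carries
    carry = ((a & b) | (a & c) | (b & c)) << 1    # per-bit carries, shifted up
    return s, carry

def cs_sum(a, b, c):
    """Resolve the redundant (sum, carry) form with one final addition,
    which in hardware is the only place a carry chain is needed."""
    s, carry = carry_save_add(a, b, c)
    return s + carry
```

In a hardware pipeline the final carry-propagating addition is deferred (or restructured), which is what keeps each stage short and the clock rate high.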