947 results for Algorithmic skeleton


Relevance: 20.00%

Abstract:

Traditionally, the Internet provides only a “best-effort” service, treating all packets going to the same destination equally. However, providing differentiated services to different users based on their quality requirements is becoming an increasingly pressing need. To do so, routers need the capability to distinguish and isolate traffic belonging to different flows; the ability to determine the flow each packet belongs to is called packet classification. Technology vendors have been reluctant to support algorithmic solutions for classification because of their non-deterministic performance. Although CAMs are favoured by vendors for their deterministic, high lookup rates, they suffer from high power dissipation and high silicon cost. This paper provides a new algorithmic-architectural solution for packet classification that combines CAMs with algorithms that cut the classification space into smaller subspaces over multiple levels. The proposed solution exploits the geometrical distribution of rules in the classification space. It offers the deterministic performance of CAMs, support for dynamic updates, and added flexibility for system designers.
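As a rough sketch of the multi-level cutting idea (not the paper's actual scheme, which the abstract does not reproduce; all rule data, address bounds, and names below are invented for illustration), the following Python fragment builds a decision tree over a two-dimensional rule space, halving one dimension per level. The small rule lists left at the leaves are the kind of fragments a CAM/algorithm hybrid could place in a CAM.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        src: tuple   # (lo, hi) inclusive range on the source-address axis
        dst: tuple   # (lo, hi) inclusive range on the destination-address axis
        action: str

    def overlaps(rng, lo, hi):
        return rng[0] <= hi and rng[1] >= lo

    def build(rules, box, leaf_size=2, depth=0):
        # box = [(src_lo, src_hi), (dst_lo, dst_hi)]; stop once the list is small
        if len(rules) <= leaf_size or depth > 8:
            return rules                          # leaf: small, CAM-sized rule list
        dim = depth % 2                           # alternate the cut dimension
        lo, hi = box[dim]
        mid = (lo + hi) // 2
        children = []
        for sub in ((lo, mid), (mid + 1, hi)):
            child_box = list(box)
            child_box[dim] = sub
            kept = [r for r in rules if overlaps((r.src, r.dst)[dim], *sub)]
            children.append(build(kept, child_box, leaf_size, depth + 1))
        return (dim, mid, children)

    def classify(node, pkt):
        # walk the cut tree, then linearly scan the small leaf rule list
        while isinstance(node, tuple):
            dim, mid, children = node
            node = children[0] if pkt[dim] <= mid else children[1]
        for r in node:
            if r.src[0] <= pkt[0] <= r.src[1] and r.dst[0] <= pkt[1] <= r.dst[1]:
                return r.action
        return "default"

    rules = [Rule((0, 63), (0, 255), "drop"),
             Rule((64, 255), (0, 127), "fwd"),
             Rule((64, 255), (128, 255), "drop")]
    tree = build(rules, [(0, 255), (0, 255)])
    print(classify(tree, (70, 10)))               # -> "fwd"

Lookup cost is then bounded by the tree depth plus a short leaf scan, which is what keeps such hybrid schemes close to deterministic in practice.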

Relevance: 20.00%

Abstract:

Reductive cyclisation of an E-vinyl bromide with an allylic acetate proceeds under palladium catalysis to give the 8-dehydropumiliotoxin skeleton, a potential advanced precursor to 8-deoxypumiliotoxin alkaloids. Control of the stereochemistry of the E-vinyl bromide precursor is achieved readily using the Kogen or Bruckner bromophosphonate reagents, and the reductive cyclisation proceeds with retention of the vinyl bromide stereochemistry. The mechanism for the cyclisation involves an in situ conversion of the allylic acetate to an allyl stannane followed by an intramolecular Stille-type coupling.

Relevance: 20.00%

Abstract:

The classification of protein structures is an important and still outstanding problem. The purpose of this paper is threefold. First, we utilize a relation between the Tutte and HOMFLY polynomials to show that the Alexander-Conway polynomial can be computed algorithmically for a given planar graph. Second, as special cases of planar graphs, we use polymer graphs of protein structures. More precisely, we use three building blocks of three-dimensional protein structure (alpha-helix, antiparallel beta-sheet, and parallel beta-sheet) and calculate the Tutte polynomials of their corresponding polymer graphs analytically, by providing recurrence equations for all three secondary structure elements. Third, we present numerical results comparing our analytical calculations with the output of our algorithm, not only to test consistency but also to demonstrate that the assigned polynomials are unique labels of the secondary structure elements. This paves the way for an automatic classification of protein structures.
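For readers unfamiliar with the Tutte polynomial, here is a minimal sketch of the generic textbook deletion-contraction recurrence (not the paper's specialised recurrences for secondary-structure polymer graphs; the use of sympy and all function names are assumptions made for illustration).

    import sympy as sp

    x, y = sp.symbols("x y")

    def _connected(edges, a, b):
        # BFS: are a and b joined by the remaining edges?
        adj = {}
        for u, v in edges:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        seen, stack = {a}, [a]
        while stack:
            n = stack.pop()
            if n == b:
                return True
            for m in adj.get(n, ()):
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        return False

    def _contract(edges, u, v):
        # Merge vertex v into u (the edge (u, v) has already been removed).
        return [(u if a == v else a, u if b == v else b) for a, b in edges]

    def tutte(edges):
        # T(empty) = 1; a loop gives a factor y, a bridge a factor x,
        # and an ordinary edge recurses as T(G) = T(G - e) + T(G / e).
        if not edges:
            return sp.Integer(1)
        (u, v), rest = edges[0], edges[1:]
        if u == v:
            return sp.expand(y * tutte(rest))
        if not _connected(rest, u, v):
            return sp.expand(x * tutte(_contract(rest, u, v)))
        return sp.expand(tutte(rest) + tutte(_contract(rest, u, v)))

    print(tutte([(0, 1), (1, 2), (0, 2)]))   # triangle K3 -> x**2 + x + y

The paper's analytical recurrences play the same role as this generic recursion but exploit the regular, repetitive structure of the polymer graphs to avoid the exponential branching.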

Relevance: 20.00%

Abstract:

Scrapers have established an important position in the earthmoving field, as they are independently capable of accomplishing an earthmoving operation. Given that loading a scraper to its full capacity does not yield its maximum production, optimizing the scraper’s loading time is an essential prerequisite for successful operations management. The relevant literature addresses loading time optimization through a graphical method founded on the invalid assumption that the hauling time is independent of the load time. To correct this, a new algorithmic optimization method that incorporates the golden section search and the bisection algorithm is proposed. Comparison of the results of the proposed and the existing method demonstrates that the latter systematically and needlessly prolongs the loading stage, reducing hourly production and increasing cost. The proposed method therefore achieves improved modeling of scraper earthmoving operations and contributes toward more efficient cost management.
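To make the optimisation concrete, here is a minimal golden-section search sketch applied to a hypothetical load-growth curve; the exponential payload model and every constant below are invented for illustration and are not taken from the paper.

    import math

    INV_PHI = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618

    def golden_section_max(f, a, b, tol=1e-6):
        # Shrink [a, b] around the maximiser of a unimodal function f.
        while b - a > tol:
            c = b - INV_PHI * (b - a)         # interior probe points
            d = a + INV_PHI * (b - a)
            if f(c) > f(d):
                b = d                         # maximiser lies in [a, d]
            else:
                a = c                         # maximiser lies in [c, b]
        return (a + b) / 2

    # Hypothetical diminishing-returns load curve: payload grows with load
    # time t but at a decaying rate, so hourly production (payload over total
    # cycle time) peaks before the bowl is actually full.
    def payload(t):                            # m^3 loaded after t minutes
        return 30.0 * (1.0 - math.exp(-t / 0.8))

    def production(t):                         # m^3 per minute of full cycle
        return payload(t) / (t + 4.5)          # 4.5 min haul/dump/return assumed

    t_opt = golden_section_max(production, 0.1, 3.0)
    print(round(t_opt, 3), round(payload(t_opt), 1))

Note that the cycle time in the denominator depends on t, which is exactly the coupling between load time and the rest of the cycle that the criticised graphical method ignores.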


Relevance: 20.00%

Abstract:

The reverse engineering of a skeleton-based programming environment, and its redesign to distribute the system’s management activities and thereby remove a potential single point of failure, is considered. The Orc notation is used to facilitate abstraction of the design and analysis of its properties. It is argued that Orc is particularly suited to this role, as this type of management is essentially an orchestration activity. The Orc specification of the original version of the system is modified via a series of semi-formally justified derivation steps to obtain a specification of the decentralized-management version, which is then used as a basis for its implementation. Analysis of the two specifications allows qualitative prediction of the expected performance of the derived version with respect to the original, and this prediction is borne out in practice.
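The Orc specifications themselves are not reproduced in the abstract. As a loose illustration of the kind of skeleton whose management is at stake (in Python rather than Orc, with all names invented), the sketch below is a minimal task farm in which every worker pulls from a shared queue, so that task dispatch is not owned by any single manager thread.

    import queue
    import threading

    def farm(worker_fn, tasks, n_workers=4):
        # Minimal task-farm skeleton: workers pull from a shared queue rather
        # than being fed by one dispatcher, a toy analogue of decentralising
        # management to avoid a single point of failure.
        task_q, result_q = queue.Queue(), queue.Queue()
        for t in tasks:
            task_q.put(t)

        def worker():
            while True:
                try:
                    t = task_q.get_nowait()
                except queue.Empty:
                    return                     # queue drained: worker retires
                result_q.put(worker_fn(t))

        threads = [threading.Thread(target=worker) for _ in range(n_workers)]
        for th in threads:
            th.start()
        for th in threads:
            th.join()
        return [result_q.get() for _ in range(result_q.qsize())]

    print(sorted(farm(lambda x: x * x, range(10))))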

Relevance: 20.00%

Abstract:

Urocortin (Ucn 1), a 40-amino-acid peptide related to corticotropin-releasing factor (CRF), was discovered 19 years ago based on its sequence homology to the parent molecule. Its existence in the CNS had been inferred from anatomical and pharmacological discrepancies between CRF and its two receptor subtypes. Although originally found in the brain, where it opposes the actions of CRF and thereby supports stress-coping mechanisms, Ucn 1 has subsequently been found throughout the periphery, including the heart, lung, skin, and immune cells. It is now well established that this small peptide is involved in a multitude of physiological and pathophysiological processes, owing to its receptor-subtype distribution and its promiscuity in second-messenger signalling pathways. As a result of extensive studies in this field, there are now well over one thousand peer-reviewed publications involving Ucn 1. In this review, we highlight some of the less well known actions of Ucn 1, in particular its role in neuronal cell protection and in maintenance of the skeletal system, both by conventional review of the literature and by using bioinformatics to surface further associations between Ucn 1 and disease conditions. Understanding how Ucn 1 works in these tissues will help to unravel its role in normal and pathophysiological processes. This would ultimately allow the generation of putative medical interventions for the alleviation of important diseases such as Parkinson’s disease, arthritis, and osteoporosis.

Relevance: 20.00%

Abstract:

The present work suggests that sentence processing requires both heuristic and algorithmic processing streams, with the heuristic strategy preceding the algorithmic phase. This conclusion is based on three self-paced reading experiments investigating the processing of two-sentence discourses in which context sentences exhibited quantifier scope ambiguity. Experiment 1 demonstrates that such sentences are processed in a shallow manner. Experiment 2 uses the same stimuli as Experiment 1 but adds questions to ensure deeper processing. Results indicate that reading times are consistent with a lexical-pragmatic interpretation of number associated with context sentences, whereas responses to questions are consistent with the algorithmic computation of quantifier scope. Experiment 3 shows the same pattern of results as Experiment 2, despite using stimuli with different lexical-pragmatic biases. These effects suggest that language processing can be superficial and that deeper processing, which is sensitive to structure, occurs only when required. Implications for recent studies of quantifier scope ambiguity are discussed.

Relevance: 20.00%

Abstract:

KAAD (Katholischer Akademischer Ausländer-Dienst)

Relevance: 20.00%

Abstract:

Context prediction allows, for example, services within a ubiquitous environment to be proactively adapted to users’ needs; it therefore plays a significant role in ubiquitous computing. To the best of our knowledge, current approaches to context prediction rely exclusively on the context history of the user whose contexts are to be predicted. If a user unexpectedly deviates from their usual behaviour, that context history contains no suitable information to guarantee a reliable prediction, so approaches that rely solely on it may fail. To close this gap in the user’s context history, we introduce collaborative context prediction (CCP). CCP exploits direct and indirect relations that may exist between the context histories of different users, and is based on higher-order singular value decomposition (HOSVD), which has already been applied successfully in recommender systems. To assess its prediction accuracy, CCP is evaluated in three experiments and compared against three well-known context prediction approaches: the Alignment approach, the StatePredictor, and the ActiveLeZi predictor. All three experiments use collaborative data sets as the evaluation basis. CCP is then applied to a real collaborative use case, the proactive protection of pedestrians: collaborative context prediction is used to detect, at an early stage, pedestrians at risk of colliding with an approaching car. Real pedestrian movement contexts, collected with smartphones carried in the pedestrians’ trouser pockets, serve as the collaborative data basis. Because context prediction approaches primarily use personal contexts, such as location data or behavioural patterns, as their data basis, legal evaluation criteria are derived from the user’s right to informational self-determination. Based on these criteria, CCP and other well-known context prediction approaches are examined for their legal compatibility, and the evaluation results show the compatibility of the examined approaches with this right. Finally, the dissertation presents an approach for distributed, collaborative context prediction, which offers a way to counter the legal problems identified in context prediction, and in collaborative context prediction in particular.
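As a rough sketch of the HOSVD machinery that CCP builds on (toy data and assumed tensor shapes, not the dissertation’s actual pipeline), the following computes a truncated higher-order SVD of a users × contexts × time-slots tensor and reconstructs a low-rank approximation whose scores could be used to rank candidate contexts per user.

    import numpy as np

    def hosvd_approx(T, ranks):
        # Truncated HOSVD: one orthonormal factor per mode, then project
        # onto the factors (core tensor) and expand back (approximation).
        factors = []
        for mode, r in enumerate(ranks):
            # mode-n unfolding: move `mode` to the front, flatten the rest
            unfold = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
            U, _, _ = np.linalg.svd(unfold, full_matrices=False)
            factors.append(U[:, :r])
        core = T
        for mode, U in enumerate(factors):        # core = T x_n U_n^T
            core = np.moveaxis(
                np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
        approx = core
        for mode, U in enumerate(factors):        # approx = core x_n U_n
            approx = np.moveaxis(
                np.tensordot(U, np.moveaxis(approx, mode, 0), axes=1), 0, mode)
        return approx

    # Toy users x contexts x time-slots tensor of observed context frequencies.
    rng = np.random.default_rng(0)
    T = rng.random((5, 4, 6))
    T_hat = hosvd_approx(T, ranks=(3, 3, 3))   # low-rank scores per (user, context, time)

The low-rank reconstruction pools information across users, which is what lets the collaborative approach fill gaps in an individual context history.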

Relevance: 20.00%

Abstract:

High-level introduction for web science students, rather than for computer science students.