959 results for VHDL (Computer hardware description language)
Abstract:
Web services from different partners can be combined into applications that realize a more complex business goal. Such applications, built as Web service compositions, define how interactions between Web services take place in order to implement the business logic. Web service compositions not only have to provide the desired functionality but also have to comply with certain Quality of Service (QoS) levels. Maximizing the users' satisfaction, also reflected as Quality of Experience (QoE), is a primary goal to be achieved in a Service-Oriented Architecture (SOA). Unfortunately, in a dynamic environment like SOA, unforeseen situations may arise, such as services becoming unavailable or not responding within the desired time frame. In such situations, appropriate actions need to be triggered in order to avoid the violation of QoS and QoE constraints. In this thesis, solutions are developed to manage Web services and Web service compositions with regard to QoS and QoE requirements. The Business Process Rules Language (BPRules) was developed to manage Web service compositions when undesired QoS or QoE values are detected. BPRules provides a rich set of management actions that may be triggered for controlling the service composition and for improving its quality behavior. Regarding the quality properties, BPRules distinguishes between the QoS values as promised by the service providers, the QoE values assigned by end-users, the monitored QoS as measured by our BPR framework, and the predicted QoS and QoE values. BPRules facilitates the specification of user groups characterized by different context properties and allows triggering a personalized, context-aware service selection tailored to the specified user groups. In a service market where a multitude of services with the same functionality but different quality values are available, the right services need to be selected for realizing the service composition. We developed new and efficient heuristic algorithms that choose high-quality services for the composition. BPRules offers the possibility to integrate multiple service selection algorithms. The selection algorithms are also applicable to non-linear objective functions and constraints. The BPR framework includes new approaches for context-aware service selection and quality property prediction. We consider the location of users and services as a context dimension for the prediction of response time and throughput. The BPR framework combines all new features and contributions into a comprehensive management solution. Furthermore, it facilitates flexible monitoring of QoS properties without requiring modifications to the description of the service composition. We show how the different modules of the BPR framework work together in order to execute the management rules. Our evaluation shows that our selection algorithms outperform a genetic algorithm from related research, and it reveals how context data can be used for a personalized prediction of response time and throughput.
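To make the selection step concrete, the following is a minimal sketch of QoS-aware service selection: a greedy heuristic that picks one candidate per composition task under an end-to-end response-time budget. It illustrates the problem setting only; all names and the scoring rule are hypothetical, and the thesis's own heuristics (including the handling of non-linear objectives) are more elaborate.

```python
# Illustrative sketch, not the thesis's actual heuristic: greedily pick one
# candidate service per composition task, maximizing a utility score while
# respecting an end-to-end response-time budget. All names are hypothetical.

def select_services(tasks, budget_ms):
    """tasks: list of candidate lists; each candidate is a dict with
    'name', 'response_ms', and 'quality' (higher is better)."""
    plan, remaining = [], budget_ms
    for candidates in tasks:
        # Keep only candidates that still fit the remaining time budget.
        feasible = [c for c in candidates if c["response_ms"] <= remaining]
        if not feasible:
            return None  # no feasible composition under this budget
        # Greedy criterion: best quality per unit of consumed response time.
        best = max(feasible, key=lambda c: c["quality"] / c["response_ms"])
        plan.append(best["name"])
        remaining -= best["response_ms"]
    return plan

tasks = [
    [{"name": "pay-A", "response_ms": 120, "quality": 0.9},
     {"name": "pay-B", "response_ms": 60, "quality": 0.7}],
    [{"name": "ship-A", "response_ms": 200, "quality": 0.8}],
]
print(select_services(tasks, budget_ms=400))  # ['pay-B', 'ship-A']
```

A real selection algorithm would also backtrack or re-optimize when an early greedy choice starves later tasks of budget, which is one reason heuristics beyond plain greedy are needed.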
Abstract:
A core piece of functionality in the use of semantic technologies is the process known as reasoning: deriving implicit facts from an explicitly given knowledge base. Against the background of the steadily growing volume of (semantic) information, reasoning increasingly poses a challenge with respect to the required resources and to execution speed. To meet these challenges, this thesis addresses reasoning through massive parallelization of the underlying algorithms and through the introduction of concepts for resource-efficient execution. These goals are pursued using a rule-based system which, in contrast to an implementation of a fixed semantics, allows the derivation rules to be defined at runtime and thus offers greater flexibility in the use of the system. Starting from a review of the foundations of reasoning and of related work in the areas of parallel and rule-based reasoning, the thesis first examines how production systems operate and the existing approaches to optimizing and, in particular, parallelizing them. Production systems describe the fundamental functionality of rule-based processing and are thus also the starting point for the RETE algorithm, which, to achieve the goals of this thesis, is parallelized and prepared for execution on graphics processors (GPUs). In contrast to existing approaches, this parallelization differs in particular in its chosen granularity, which is determined not by the rules to be applied but by the input data, and is thus oriented toward the target architecture. Building on the concept of parallel execution of the RETE algorithm, methods for partitioning and distributing the workload are introduced which, together with concepts for data compression and for moving data between main memory and disk, permit reasoning over data sets with several billion facts on a single machine. An evaluation of the introduced concepts through a prototype implementation shows, for the lightweight ontology languages addressed, that reasoning over a billion facts is possible on a laptop, enabled by a reduction of the memory footprint by roughly 90%, and that the achieved throughput is comparable to current state-of-the-art reasoners that use a large number of machines in a cluster.
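As a point of reference for the kind of derivation being parallelized, here is a minimal, purely sequential sketch of rule-based materialization over triples: fixpoint iteration for a single transitivity rule. It is a toy illustration of forward chaining, not the GPU-based RETE implementation the thesis describes.

```python
# Minimal sketch of rule-based forward chaining (fixpoint materialization)
# over triples, for the RDFS subclass-transitivity rule:
#   (a subClassOf b), (b subClassOf c)  =>  (a subClassOf c)
# Sequential toy only; the thesis parallelizes this kind of derivation
# via a data-granular RETE network on GPUs.

def materialize(triples):
    facts = set(triples)
    changed = True
    while changed:                          # iterate until no new facts appear
        changed = False
        sub = [(s, o) for (s, p, o) in facts if p == "subClassOf"]
        by_subject = {}
        for s, o in sub:
            by_subject.setdefault(s, set()).add(o)
        for a, b in sub:                    # join the rule's two premises
            for c in by_subject.get(b, ()):
                new = (a, "subClassOf", c)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

kb = {("Cat", "subClassOf", "Mammal"), ("Mammal", "subClassOf", "Animal")}
print(("Cat", "subClassOf", "Animal") in materialize(kb))  # True
```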
Abstract:
Free-word order languages have long posed significant problems for standard parsing algorithms. This thesis presents an implemented parser, based on Government-Binding (GB) theory, for a particular free-word order language, Warlpiri, an aboriginal language of central Australia. The words in a sentence of a free-word order language may swap about relatively freely with little effect on meaning: the permutations of a sentence mean essentially the same thing. It is assumed that this similarity in meaning is directly reflected in the syntax. The parser presented here properly processes free word order because it assigns the same syntactic structure to the permutations of a single sentence. The parser also handles fixed word order, as well as other phenomena. On the view presented here, there is no such thing as a "configurational" or "non-configurational" language. Rather, there is a spectrum of languages that are more or less ordered. The operation of this parsing system is quite different in character from that of more traditional rule-based parsing systems, e.g., context-free parsers. In this system, parsing is carried out via the construction of two different structures, one encoding precedence information and one encoding hierarchical information. This bipartite representation is the key to handling both free- and fixed-order phenomena. This thesis first presents an overview of the portion of Warlpiri that can be parsed. Following this is a description of the linguistic theory on which the parser is based. The chapter after that describes the representations and algorithms of the parser. In conclusion, the parser is compared to related work. The appendix contains a substantial list of test cases, both grammatical and ungrammatical, that the parser has actually processed.
Abstract:
This report examines why women pursue careers in computer science and related fields far less frequently than men do. In 1990, only 13% of PhDs in computer science went to women, and only 7.8% of computer science professors were female. Causes include the different ways in which boys and girls are raised, the stereotypes of female engineers, subtle biases that females face, problems resulting from working in predominantly male environments, and sexual biases in language. A theme of the report is that women's underrepresentation is not primarily due to direct discrimination but to subconscious behavior that perpetuates the status quo.
Abstract:
Formalizing linguists' intuitions of language change as a dynamical system, we quantify the time course of language change, including sudden as opposed to gradual change. We apply the resulting computer model to the historical loss of Verb Second from Old French to modern French, showing that otherwise adequate grammatical theories can fail our new evolutionary criterion.
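As a hedged illustration of what such a dynamical-systems formalization can look like (the paper's exact update map may differ):

```latex
% Illustrative sketch, not necessarily the paper's exact model: let p_t be
% the fraction of speakers whose grammar retains Verb Second at generation t.
% Learners exposed to the mixed linguistic environment define an update map
% f; the stability of its fixed points separates gradual drift from sudden
% change.
\[
  p_{t+1} = f(p_t), \qquad
  p^{*} = f(p^{*}) \ \text{ is stable iff } \ \lvert f'(p^{*}) \rvert < 1 .
\]
```

On such a reading, the loss of Verb Second would correspond to a trajectory escaping the neighborhood of an unstable fixed point, and a grammatical theory can be tested by whether its induced map f reproduces the attested trajectory.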
Abstract:
The memory hierarchy is the main bottleneck in modern computer systems, as the gap between processor speed and memory speed continues to grow. The situation in embedded systems is even worse: the memory hierarchy consumes a large amount of chip area and energy, which are precious resources in embedded systems. Moreover, embedded systems have multiple design objectives, such as performance, energy consumption, and area. Customizing the memory hierarchy for specific applications is an important way to take full advantage of limited resources and maximize performance. However, traditional custom memory hierarchy design methodologies are phase-ordered: they separate application optimization from memory hierarchy architecture design, which tends to result in locally optimal solutions. In traditional hardware-software co-design methodologies, much of the work has focused on utilizing reconfigurable logic to partition the computation; utilizing reconfigurable logic in memory hierarchy design is seldom addressed. In this paper, we propose a new framework for designing the memory hierarchy of embedded systems. The framework takes advantage of flexible reconfigurable logic to customize the memory hierarchy for specific applications, and it combines application optimization and memory hierarchy design to obtain a globally optimal solution. Using the framework, we performed a case study to design a new software-controlled instruction memory that showed promising potential.
Abstract:
Based on examples provided by 27 graduate psychology faculty, this self-test incorporates many of the more common errors in style, language, and referencing found in student papers. Taking this self-test helps students recognize common errors and encourages them to refer to the APA Publication Manual on a regular basis. In addition, students begin to think about how to use the language of psychological research correctly. This self-test should take about 30 minutes to complete and score. It is composed of three parts: (a) a mock Discussion section, where students are asked to act as editors and find the errors, p. 2 (10 minutes); (b) a corrected Discussion section, where students find the errors they missed, p. 3 (5 minutes); and (c) a full description of each error with illustrations of correct usage, pp. 4-7 (15 minutes). This exercise assumes some knowledge of APA style. Thus, it is best suited for advanced undergraduates who need to write research reports and for graduate students at all levels. It may be taken at home or in class. Although the self-test is designed to be fully self-directed, instructors may wish to use it at the beginning or end of a classroom discussion of APA style. It could also be used in a pretest-posttest fashion to evaluate students' learning over the course of a term.
Abstract:
Objective: This study describes the kinds of linguistic aids that adults offer while giving explanations to young children about different issues. In the midst of these conversations, children have several opportunities to learn, not only from objects and events but also from the utterances used to refer to them, learning how realities are named and, in many cases, how to participate in them. Methodology: We analyze eleven minutes of interaction between a young child and his maternal grandparents. Results: The adults offer the young child different discursive resources that help him add information and organize and clarify the topic he is talking about. Conclusions: The results allow us to identify different ways in which adults help a young child improve his explanations.
Abstract:
Previous work has established the value of goal-oriented approaches to requirements engineering. Achieving clarity and agreement about stakeholders’ goals and assumptions is critical for building successful software systems and managing their subsequent evolution. In general, this decision-making process requires stakeholders to understand the implications of decisions outside the domains of their own expertise. Hence it is important to support goal negotiation and decision making with description languages that are both precise and expressive, yet easy to grasp. This paper presents work in progress to develop a pattern language for describing goal refinement graphs. The language has a simple graphical notation, which is supported by a prototype editor tool, and a symbolic notation based on modal logic.
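To suggest what the symbolic side of such a notation might encode, here is a hedged sketch of a goal refinement graph as a data structure with AND/OR refinements. All names are hypothetical, and the paper's actual pattern language and modal-logic notation are richer than this toy.

```python
# Illustrative sketch only: a goal refinement graph where an AND-refined goal
# is satisfied when all subgoals are, and an OR-refined goal when any is.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    refinement: str = "AND"          # "AND" or "OR" over the subgoals
    subgoals: list = field(default_factory=list)
    achieved: bool = False           # leaf goals are marked directly

    def satisfied(self):
        if not self.subgoals:
            return self.achieved
        results = (g.satisfied() for g in self.subgoals)
        return all(results) if self.refinement == "AND" else any(results)

# Hypothetical example in the spirit of goal-oriented requirements models.
root = Goal("SafeTrainControl", "AND", [
    Goal("DoorsClosedWhileMoving", achieved=True),
    Goal("SpeedLimited", "OR", [Goal("AutoBrake", achieved=True),
                                Goal("DriverBrake")]),
])
print(root.satisfied())  # True
```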
Abstract:
The paper describes the implementation of an offline, low-cost Brain Computer Interface (BCI) as an alternative to more expensive commercial models. Using inexpensive general-purpose clinical EEG acquisition hardware (Truscan32, Deymed Diagnostic) as the base unit, a synchronisation module was constructed so that the EEG hardware could be operated precisely in time, allowing the recording of automatically time-stamped EEG signals. The synchronisation module allows the EEG recordings to be aligned in a stimulus-time-locked fashion for further processing by the classifier, which establishes the class of the stimulus sample by sample. This allows the acquisition of signals from the subject's brain for a goal-oriented BCI application based on the oddball paradigm. An appropriate graphical user interface (GUI) was constructed and implemented as the method to elicit the required responses (in this case event-related potentials, or ERPs) from the subject.
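The core of such stimulus-time-locked processing can be sketched as follows (hypothetical names and shapes, not the Truscan32 interface or the paper's classifier): epochs are cut from the continuous recording around each time-stamped stimulus and averaged to expose the ERP.

```python
# Hedged sketch of stimulus-locked epoch extraction for ERP analysis.
# Synthetic data and illustrative parameters only.
import numpy as np

def stimulus_locked_epochs(eeg, stim_samples, pre=50, post=300):
    """eeg: (channels, samples) array; stim_samples: stimulus onsets as
    sample indices. Returns a (n_events, channels, pre+post) array."""
    epochs = [eeg[:, s - pre:s + post]
              for s in stim_samples
              if s - pre >= 0 and s + post <= eeg.shape[1]]
    return np.stack(epochs)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 10_000))      # 8 channels of fake EEG
onsets = [1_000, 2_500, 4_000]              # time-stamped stimuli
erp = stimulus_locked_epochs(eeg, onsets).mean(axis=0)  # average ERP
print(erp.shape)                            # (8, 350)
```

In an oddball setting, the epochs would be averaged separately for target and non-target stimuli, and the classifier would discriminate the two classes sample by sample.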
Abstract:
Time correlation functions yield profound information about the dynamics of a physical system and hence are frequently calculated in computer simulations. For systems whose dynamics span a wide range of time scales, currently used methods require significant computer time and memory. In this paper, we discuss the multiple-tau correlator method for the efficient calculation of accurate time correlation functions on the fly during computer simulations. The multiple-tau correlator is efficient in terms of computational requirements and can be tuned to the desired level of accuracy. Further, we derive estimates for the error arising from the use of the multiple-tau correlator and extend it for use in the calculation of mean-square particle displacements and dynamic structure factors. The method described here is, in hardware implementations, routinely used in light scattering experiments, but it has not yet found widespread use in computer simulations.
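For orientation, here is a minimal single-channel sketch of the multiple-tau idea (parameters and structure are illustrative; see the paper for the accurate estimator and its error analysis): each level coarsens the signal by averaging adjacent pairs, so the lag grid grows quasi-logarithmically and long lags remain cheap in time and memory.

```python
# Illustrative multiple-tau autocorrelator sketch: fixed lags per level,
# with the signal halved (pair-averaged) between levels so the effective
# lag unit doubles. Post-hoc over a stored array here; the real method
# updates the same structure on the fly during the simulation.
import numpy as np

def multi_tau_autocorr(x, lags_per_level=8, levels=4):
    """Estimate <x(t) x(t+tau)> on a quasi-logarithmic lag grid."""
    taus, acf = [], []
    data, scale = np.asarray(x, dtype=float), 1
    for level in range(levels):
        # Higher levels skip the small lags already covered below them.
        start = 1 if level == 0 else lags_per_level // 2 + 1
        n = len(data)
        for lag in range(start, lags_per_level + 1):
            if lag >= n:
                break
            taus.append(lag * scale)
            acf.append(np.mean(data[:-lag] * data[lag:]))
        # Coarsen: average adjacent pairs; the effective lag unit doubles.
        data = 0.5 * (data[: (n // 2) * 2 : 2] + data[1 : (n // 2) * 2 : 2])
        scale *= 2
    return np.array(taus), np.array(acf)

rng = np.random.default_rng(1)
x = np.sin(0.01 * np.arange(100_000)) + rng.standard_normal(100_000)
taus, acf = multi_tau_autocorr(x)   # lag grid: 1..8, 10..16, 20..32, 40..64
```

The pair-averaging smooths the signal seen at long lags, which is the accuracy/cost trade-off the paper quantifies.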
Abstract:
An important goal in computational neuroanatomy is the complete and accurate simulation of neuronal morphology. We are developing computational tools to model three-dimensional dendritic structures based on sets of stochastic rules. This paper reports an extensive, quantitative anatomical characterization of simulated motoneurons and Purkinje cells. We used several local and global algorithms implemented in the L-Neuron and ArborVitae programs to generate sets of virtual neurons. Parameter statistics for all algorithms were measured from experimental data, thus providing a compact and consistent description of these morphological classes. We compared the emergent anatomical features of each group of virtual neurons with those of the experimental database in order to gain insights into the plausibility of the model assumptions, potential improvements to the algorithms, and non-trivial relations among morphological parameters. Algorithms mainly based on local constraints (e.g., branch diameter) were successful in reproducing many morphological properties of both motoneurons and Purkinje cells (e.g., total length, asymmetry, number of bifurcations). The addition of global constraints (e.g., trophic factors) improved the angle-dependent emergent characteristics (average Euclidean distance from the soma to the dendritic terminations, dendritic spread). Virtual neurons systematically displayed greater anatomical variability than real cells, suggesting the need for additional constraints in the models. For several emergent anatomical properties, a specific algorithm reproduced the experimental statistics better than the others did. However, relative performances were often reversed for different anatomical properties and/or morphological classes. Thus, combining the strengths of alternative generative models could lead to comprehensive algorithms for the complete and accurate simulation of dendritic morphology.
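As a loose illustration of growth from stochastic local rules (a toy, not the L-Neuron or ArborVitae algorithms, and with hypothetical parameter values): each segment either terminates or bifurcates, with daughter diameters constrained by a Rall-style power rule on the branch diameter.

```python
# Toy sketch of stochastic, diameter-constrained dendritic branching.
import random

def grow(diameter, min_diam=0.2, p_branch=0.6, e=1.5):
    """Return a nested tuple describing one stochastic dendritic tree."""
    if diameter < min_diam or random.random() > p_branch:
        return ("tip", round(diameter, 2))
    # Rall-style constraint: d_parent^e = d_1^e + d_2^e, with a random
    # split of the "diameter power" between the two daughters.
    share = random.uniform(0.3, 0.7)
    d1 = (share * diameter ** e) ** (1 / e)
    d2 = ((1 - share) * diameter ** e) ** (1 / e)
    return ("branch", round(diameter, 2), grow(d1), grow(d2))

random.seed(7)
print(grow(1.5))   # e.g. ('branch', 1.5, ('tip', 1.1), ('branch', 1.05, ...))
```

Because daughter diameters shrink geometrically, growth always terminates; in a serious model the probabilities and split rules would be parameter distributions measured from experimental data, as the paper describes.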
Abstract:
This paper, one of a simultaneously published set, describes the establishment in 1990 of the UK standards project for the Pop programming language, and the progress of the project to the end of 1993.
Abstract:
A unique parameterization of the perspective projections in all whole-numbered dimensions is reported. The algorithm for generating a perspective transformation from parameters, and for recovering parameters from a transformation, is a modification of the Givens orthogonalization algorithm. The algorithm for recovering a perspective transformation from a perspective projection is a modification of Roberts' classical algorithm. Both algorithms have been implemented in Pop-11 with call-out to the NAG Fortran libraries. Preliminary Monte Carlo tests show that the transformation algorithm is highly accurate, but that the projection algorithm cannot recover magnitude and shear parameters accurately. However, there is reason to believe that the projection algorithm might improve significantly with the use of many corresponding points, or with multiple perspective views of an object. Previous parameterizations of the perspective transformations in the computer graphics and computer vision literature are discussed.
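For readers unfamiliar with the building block the abstract modifies, here is a hedged sketch of plain Givens orthogonalization: zeroing one matrix entry at a time with plane rotations, the step underlying the factorization of a transformation into rotation-like parameters. This is standard QR by Givens rotations in Python, not the paper's modified algorithm or its Pop-11/NAG implementation.

```python
# Sketch of QR factorization by Givens plane rotations: A = Q @ R.
import numpy as np

def givens_qr(A):
    m, n = A.shape
    Q, R = np.eye(m), A.astype(float).copy()
    for j in range(n):
        for i in range(m - 1, j, -1):      # zero entries below the diagonal
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue                   # entry already zero
            c, s = a / r, b / r
            G = np.eye(m)                  # rotation in the (i-1, i) plane
            G[[i - 1, i - 1, i, i], [i - 1, i, i - 1, i]] = [c, s, -s, c]
            R = G @ R                      # annihilates R[i, j]
            Q = Q @ G.T                    # accumulate the orthogonal factor
    return Q, R

A = np.array([[4.0, 1.0], [2.0, 3.0], [1.0, 2.0]])
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A))  # True
```

Each rotation touches only two rows, which is why the rotation angles themselves can serve as a compact, recoverable set of parameters.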
Abstract:
In 1989, the computer programming language POP-11 turned 21 years old. This book looks at the reasons behind its invention and traces its rise from an experimental language to a major AI language, playing a major part in many innovative projects. There is a chapter on the inventor of the language, Robin Popplestone, and a discussion of the applications of POP-11 in a variety of areas. The efficiency of AI programming is covered, along with a comparison between POP-11 and other programming languages. The book concludes by reviewing the standardization of POP-11 into POP91.