985 results for Interactive Techniques
Abstract:
The work reported here lies in the area of overlap between artificial intelligence and software engineering. As research in artificial intelligence, it is a step towards a model of problem solving in the domain of programming. In particular, this work focuses on the routine aspects of programming which involve the application of previous experience with similar programs. I call this programming by inspection. Programming is viewed here as a kind of engineering activity. Analysis and synthesis by inspection are a prominent part of expert problem solving in many other engineering disciplines, such as electrical and mechanical engineering. The notion of inspection methods in programming developed in this work is motivated by similar notions in other areas of engineering. This work is also motivated by current practical concerns in the area of software engineering. The inadequacy of current programming technology is universally recognized. Part of the solution to this problem will be to increase the level of automation in programming. I believe that the next major step in the evolution of more automated programming will be interactive systems which provide a mixture of partially automated program analysis, synthesis and verification. One such system being developed at MIT, called the programmer's apprentice, is the immediate intended application of this work. This report concentrates on the knowledge base of the programmer's apprentice, which takes the form of a taxonomy of commonly used algorithms and data structures. To the extent that a programmer is able to construct and manipulate programs in terms of the forms in such a taxonomy, he may relieve himself of many details and generally raise the conceptual level of his interaction with the system, as compared with present-day programming environments.
Also, since it is practical to expend a great deal of effort pre-analyzing the entries in a library, the difficulty of verifying the correctness of programs constructed this way is correspondingly reduced. The feasibility of this approach is demonstrated by the design of an initial library of common techniques for manipulating symbolic data. This document also reports on the further development of a formalism called the plan calculus for specifying computations in a programming-language-independent manner. This formalism combines both data and control abstraction in a uniform framework that has facilities for representing multiple points of view and side effects.
Abstract:
Recent developments in higher education have seen the demise of much didactic, teacher-directed instruction which was aimed mainly towards lower-level educational objectives. This traditional educational approach has been largely replaced by methods which feature the teacher as an originator or facilitator of interactive and learner-centred learning - with higher-level aims in mind. The origins of, and need for, these changes are outlined, leading into an account of the emerging pedagogical approach to interactive learning, featuring facilitation and reflection. Some of the main challenges yet to be confronted effectively in consolidating a sound and comprehensive pedagogical approach to interactive development of higher level educational aims are outlined.
Abstract:
The PIC model by Gati and Asher describes three career decision-making stages: pre-screening, in-depth exploration, and choice of career options. We consider the role that three different forms of support (general career support by parents, emotional/instrumental support, and informational support) may play for young adults in each of these three decision-making stages. We further propose that different forms of support may predict career agency and occupational engagement, which are important antecedents of career decisions. In addition, we consider the role of personality traits and perceptions (the decision-making window) on these two outcomes. Using an online survey sample (N = 281), we found that general career support was important for career agency and occupational engagement. However, it was the combination of higher general career support with either emotional/instrumental support or informational support that led to both greater career agency and higher occupational engagement. Personality also played a role: greater proactivity led to greater occupational engagement, even when there was little urgency for participants to make decisions (the window of decision-making was wide open and not restricted). In practical terms, the findings suggest that the learning required in each of the three PIC processes (pre-screening, in-depth exploration, choice of career options) may benefit when the learner has access to the three support measures.
Abstract:
M. H. Lee, and S. M. Garrett, Qualitative modelling of unknown interface behaviour, International Journal of Human Computer Studies, Vol. 53, No. 4, pp. 493-515, 2000
Abstract:
Urquhart, C., Light, A., Thomas, R., Barker, A., Yeoman, A., Cooper, J., Armstrong, C., Fenton, R., Lonsdale, R. & Spink, S. (2003). Critical incident technique and explicitation interviewing in studies of information behavior. Library and Information Science Research, 25(1), 63-88. Sponsorship: JISC (for JUSTEIS element)
Abstract:
C.R. Bull, R. Zwiggelaar and R.D. Speller, 'Review of inspection techniques based on the elastic and inelastic scattering of X-rays and their potential in the food and agricultural industry', Journal of Food Engineering 33 (1-2), 167-179 (1997)
Abstract:
The poster presents the methods of communication with readers used at the Poznań University Library in digital media technology. Digital communication tools have become very helpful, almost indispensable, in attracting new readers and in maintaining and developing cooperation within the Web 2.0 community, both the global one and the local academic one. The library website, communicatively static, is supported by discussion forums, chats, videoconferences, and information workshops conducted in real time. The creative power of social relations with the library has been developed by interactive social networking services (Facebook) and instant messengers integrated on the Ask a Librarian platform. The library has become a Library 2.0, oriented towards communication with the reader. Active participation of readers in the creation of scholarly resources was implemented in the institutional repository project, the Adam Mickiewicz Repository (AMUR). The library is changing for readers and with readers. The platforms and social networking services in use provide unique data on the new information needs and expectations of the target Patron 2.0, which results in the improvement of existing services and the creation of new ones. The library monitors its services and readers' needs through social research. Digital communication technologies make the library closer and more accessible, so that it ultimately becomes a partner for regular and new readers. The Poznań University Library takes part in European programmes in the areas of cataloguing and digitisation of the WBC digital library collection, implementation of new technologies and solutions raising the quality of library services, cultural activities (Poznańska Dyskusyjna Akademia Komiksu, deBiUty) and information literacy education.
The Poznań University Library is a member of international organisations: LIBER (Association of European Research Libraries), IAML (International Association of Music Libraries, Archives and Documentation Centres), and CERL (Consortium of European Research Libraries).
Abstract:
Accurate knowledge of traffic demands in a communication network enables or enhances a variety of traffic engineering and network management tasks of paramount importance for operational networks. Directly measuring a complete set of these demands is prohibitively expensive because of the huge amounts of data that must be collected and the performance impact that such measurements would impose on the regular behavior of the network. As a consequence, we must rely on statistical techniques to produce estimates of actual traffic demands from partial information. The performance of such techniques is limited, however, by their reliance on partial information and by the heavy computation they incur, which constrains their convergence behavior. In this paper we study strategies to improve the convergence of a powerful statistical technique based on an Expectation-Maximization iterative algorithm. First we analyze modeling approaches to generating starting points. We call these starting points informed priors since they are obtained using actual network information such as packet traces and SNMP link counts. Second we provide a very fast variant of the EM algorithm which extends its computation range, increasing its accuracy and decreasing its dependence on the quality of the starting point. Finally, we study the convergence characteristics of our EM algorithm and compare it against a recently proposed Weighted Least Squares approach.
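The flavor of EM-based traffic estimation can be conveyed with a minimal sketch. This is not the paper's algorithm or its informed priors; it is the classical multiplicative EM update for a Poisson link-count model (the function name and the toy routing matrix are assumptions made for illustration):

```python
import numpy as np

def em_poisson_tomography(A, y, x0, iters=500):
    """EM (multiplicative) updates estimating per-flow traffic x
    from link counts y ~ A x under a Poisson traffic model.
    A: (links x flows) routing matrix; x0: the starting point,
    e.g. an informed prior."""
    x = np.asarray(x0, dtype=float).copy()
    col_sums = np.maximum(A.sum(axis=0), 1e-12)
    for _ in range(iters):
        pred = np.maximum(A @ x, 1e-12)          # current link-load estimate
        x *= (A.T @ (y / pred)) / col_sums        # reweight flows toward y
    return x

# two links shared by three flows: the link counts are consistent with
# many demand vectors; EM converges to one that reproduces them exactly
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x_true = np.array([2.0, 3.0, 5.0])
y = A @ x_true
x_hat = em_poisson_tomography(A, y, x0=np.ones(3))
```

The update is monotone in the Poisson likelihood, so a better starting point (an informed prior rather than the flat `np.ones(3)` used here) mainly buys fewer iterations, which is the convergence concern the abstract addresses.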
Abstract:
Dynamic service aggregation techniques can exploit skewed access popularity patterns to reduce the costs of building interactive VoD systems. These schemes seek to cluster and merge users into single streams by bridging the temporal skew between them, thus improving server and network utilization. Rate adaptation and secondary content insertion are two such schemes. In this paper, we present and evaluate an optimal scheduling algorithm for inserting secondary content in this scenario. The algorithm runs in polynomial time, and is optimal with respect to the total bandwidth usage over the merging interval. We present constraints on content insertion which make the overall QoS of the delivered stream acceptable, and show how our algorithm can satisfy these constraints. We report simulation results which quantify the excellent gains due to content insertion. We discuss dynamic scenarios with user arrivals and interactions, and show that content insertion reduces the channel bandwidth requirement to almost half. We also discuss differentiated service techniques, such as N-VoD and premium no-advertisement service, and show how our algorithm can support these as well.
Abstract:
An increasing number of applications, such as distributed interactive simulation, live auctions, distributed games and collaborative systems, require the network to provide a reliable multicast service. This service enables one sender to reliably transmit data to multiple receivers. Reliability is traditionally achieved by having receivers send negative acknowledgments (NACKs) to request from the sender the retransmission of lost (or missing) data packets. However, this Automatic Repeat reQuest (ARQ) approach results in the well-known NACK implosion problem at the sender. Many reliable multicast protocols have been recently proposed to reduce NACK implosion. But the message overhead due to NACK requests remains significant. Another approach, based on Forward Error Correction (FEC), requires the sender to encode additional redundant information so that a receiver can independently recover from losses. However, due to the lack of feedback from receivers, it is impossible for the sender to determine how much redundancy is needed. In this paper, we propose a new reliable multicast protocol, called ARM (Adaptive Reliable Multicast). Our protocol integrates ARQ and FEC techniques. The objectives of ARM are to (1) reduce the message overhead due to NACK requests, (2) reduce the amount of data transmission, and (3) reduce the time it takes for all receivers to receive the data intact (without loss). During data transmission, the sender periodically informs the receivers of the number of packets that are yet to be transmitted. Based on this information, each receiver predicts whether this amount is enough to recover its losses. Only if it is not enough does the receiver request the sender to encode additional redundant packets. Using ns simulations, we show the superiority of our hybrid ARQ-FEC protocol over the well-known Scalable Reliable Multicast (SRM) protocol.
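The receiver-side prediction step can be sketched as follows. This is an illustrative simplification, not ARM's actual decision rule; the function name and parameters are hypothetical:

```python
def receiver_needs_nack(k, received, announced_remaining):
    """A receiver of a k-of-n erasure-coded block already holds
    `received` distinct packets; the sender has announced that
    `announced_remaining` packets of the block are still to come.
    Any k distinct packets suffice to decode, so a repair request
    (NACK) is needed only when the pending packets cannot cover
    the receiver's losses."""
    return received + announced_remaining < k
```

Because each receiver suppresses its NACK whenever the announced pending transmissions already cover its losses, feedback traffic scales with the worst loss actually observed rather than with the number of receivers, which is the overhead reduction the abstract targets.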
Abstract:
Training data for supervised learning neural networks can be clustered such that the input/output pairs in each cluster are redundant. Redundant training data can adversely affect training time. In this paper we apply two clustering algorithms, ART2-A and the Generalized Equality Classifier, to identify training data clusters and thus reduce the training data and training time. The approach is demonstrated for a high-dimensional nonlinear continuous-time mapping. The demonstration shows a six-fold decrease in training time at little or no loss of accuracy in the handling of evaluation data.
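The effect of dropping redundant input/output pairs can be illustrated with a simple stand-in. ART2-A and the Generalized Equality Classifier are the paper's actual algorithms; the greedy distance-threshold filter below is only a minimal sketch of the same idea (function name and tolerance are assumptions):

```python
import numpy as np

def prune_redundant_pairs(X, Y, tol=0.1):
    """Keep a training pair only if its input is farther than `tol`
    (Euclidean) from every input already kept, so near-duplicate
    pairs within a cluster are represented by a single exemplar."""
    kept_X, kept_Y = [], []
    for x, y in zip(X, Y):
        if all(np.linalg.norm(x - c) > tol for c in kept_X):
            kept_X.append(x)
            kept_Y.append(y)
    return np.array(kept_X), np.array(kept_Y)

# two tight clusters of near-duplicates collapse to two exemplars
X = np.array([[0.00, 0.00], [0.01, 0.00], [1.00, 1.00], [1.00, 1.01]])
Y = np.array([0.0, 0.0, 1.0, 1.0])
Xr, Yr = prune_redundant_pairs(X, Y)
```

Training on the reduced set visits fewer near-identical gradient updates per epoch, which is how cluster-based reduction shortens training time.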
Abstract:
A massive change is currently taking place in the manner in which power networks are operated. Traditionally, power networks consisted of large power stations which were controlled from centralised locations. The trend in modern power networks is for generated power to be produced by a diverse array of energy sources which are spread over a large geographical area. As a result, controlling these systems from a centralised controller is impractical. Thus, future power networks will be controlled by a large number of intelligent distributed controllers which must work together to coordinate their actions. The term Smart Grid is the umbrella term used to denote this combination of power systems, artificial intelligence, and communications engineering. This thesis focuses on the application of optimal control techniques to Smart Grids, with a particular focus on iterative distributed MPC. A novel convergence and stability proof for iterative distributed MPC based on the Alternating Direction Method of Multipliers is derived. The performance of distributed MPC, centralised MPC, and an optimised PID controller is then compared when applied to a highly interconnected, nonlinear, MIMO testbed based on a part of the Nordic power grid. Finally, a novel tuning algorithm is proposed for iterative distributed MPC which simultaneously optimises both the closed-loop performance and the communication overhead associated with the desired control.
Abstract:
There is much common ground between the areas of coding theory and systems theory. Fitzpatrick has shown that a Gröbner basis approach leads to efficient algorithms in the decoding of Reed-Solomon codes and in scalar interpolation and partial realization. This thesis simultaneously generalizes and simplifies that approach and presents applications to discrete-time modeling, multivariable interpolation and list decoding. Gröbner basis theory has come into its own in the context of software and algorithm development. By generalizing the concept of polynomial degree, term orders are provided for multivariable polynomial rings and free modules over polynomial rings. The orders are not, in general, unique and this adds, in no small way, to the power and flexibility of the technique. As well as being generating sets for ideals or modules, Gröbner bases always contain an element which is minimal with respect to the corresponding term order. Central to this thesis is a general algorithm, valid for any term order, that produces a Gröbner basis for the solution module (or ideal) of elements satisfying a sequence of generalized congruences. These congruences, based on shifts and homomorphisms, are applicable to a wide variety of problems, including key equations and interpolations. At the core of the algorithm is an incremental step. Iterating this step lends a recursive/iterative character to the algorithm. As a consequence, not all of the input to the algorithm need be available from the start, and different "paths" can be taken to reach the final solution. The existence of a suitable chain of modules satisfying the criteria of the incremental step is a prerequisite for applying the algorithm.
Abstract:
This dissertation introduces and evaluates dramagrammar, a new concept for the teaching and learning of foreign language grammar. Grammar, traditionally taught in a predominantly cognitive, abstract mode, often fails to capture the minds of foreign language learners, who are then unable to integrate this grammatical knowledge into their use of the foreign language in a meaningful way. The consequences of this approach are manifested at university level in German departments in England and Ireland, where the outcomes are unconvincing at best, abysmal at worst. Language teaching research suggests that interaction plays an important role in foreign language acquisition. Recent studies also stress the significance of grammatical knowledge in the learning process. Dramagrammar combines both interactive negotiation of meaning and explicit grammar instruction in a holistic approach, taking up the concept of drama in foreign language education and applying it to the teaching and learning of grammar. Techniques from dramatic art forms allow grammar to be experienced not only cognitively but also in social, emotional, and bodily-kinaesthetic ways. Dramagrammar lessons confront the learner with fictitious situations in which grammar is experienced 'hands-on'. Learners have to use grammatical structures in a variety of contexts, reflect upon their use, and then enlarge and enrich the dramatic situations with their newly acquired or more finely nuanced knowledge. The initial hypothesis of this dissertation is that the dramagrammar approach is beneficial to the acquisition of foreign language grammar. This hypothesis is corroborated by research findings from language teaching pedagogy and drama in education.
It is further confirmed by empirical data gained from specifically designed dramagrammar modules that have been put into practice in German departments at the University of Leicester (England), the University Colleges Cork and Dublin (Ireland), the University of Bologna (Italy), as well as the Goethe-Institut Bratislava (Slovakia). The data suggests that dramagrammar has positive effects on both understanding of and attitudes towards grammar.
Abstract:
Error correcting codes are combinatorial objects, designed to enable reliable transmission of digital data over noisy channels. They are ubiquitously used in communication, data storage, etc. Error correction allows reconstruction of the original data from the received word. The classical decoding algorithms are constrained to output just one codeword. However, in the late 1950s researchers proposed a relaxed error correction model for potentially large error rates, known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from an algorithmic as well as an architectural standpoint. The codes in consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced as compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is shown to perform better than Kötter's decoder for high-rate codes. The thesis work also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, from which the dimension and a bound on the minimum distance are computed.
The algebraic structure of the polynomials that evaluate into the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to RS codes) for multihop wireless sensor network (WSN) applications.
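The evaluation-based view of RS codes mentioned above can be illustrated with a toy example. This is not the thesis's algorithm or architecture; it is a textbook sketch over a small prime field (the field size, evaluation points, and function names are choices made for this illustration):

```python
# Toy Reed-Solomon-style code over the prime field GF(P): encode k
# message symbols as evaluations of a degree < k polynomial at n
# distinct points; any k intact evaluations determine the polynomial,
# so up to n - k erasures can be tolerated.
P = 97  # small prime chosen for the sketch

def rs_encode(msg, n):
    """Treat msg as polynomial coefficients and evaluate at x = 1..n."""
    return [sum(m * pow(x, j, P) for j, m in enumerate(msg)) % P
            for x in range(1, n + 1)]

def lagrange_eval(points, x0):
    """Value at x0 of the unique degree < k polynomial through the k
    given (x, y) pairs, mod P (inverses via Fermat's little theorem)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x0 - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

code = rs_encode([3, 1, 4], 7)           # k = 3 symbols -> n = 7 codeword
pts = list(zip(range(1, 8), code))
recovered = lagrange_eval(pts[:3], 5)    # rebuild an erased evaluation
```

Encoding here is just n polynomial evaluations, which is what makes evaluation-based encoders cheap in hardware; the decoding side (interpolation) carries the complexity, as the abstract's comparison with Kötter's interpolation decoder reflects.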