631 results for algorithmic skeletons
Abstract:
We discuss three approaches to the use of technology as a teaching and learning tool that we are currently implementing for a target group of about one hundred second level engineering mathematics students. Central to these approaches is the underlying theme of motivating relatively poorly motivated students to learn, with the aim of improving learning outcomes. The approaches to be discussed have been used to replace, in part, more traditional mathematics tutorial sessions and lecture presentations. In brief, the first approach involves the application of constructivist thinking in the tertiary education arena, using technology as a computational and visual tool to create motivational knowledge conflicts or crises. The central idea is to model a realistic process of how scientific theory is actually developed, as proposed by Kuhn (1962), in contrast to more standard lecture and tutorial presentations. The second approach involves replacing procedural or algorithmic pencil-and-paper skills-consolidation exercises by software-based tasks. Finally, the third approach aims at creating opportunities for higher-order thinking via "on-line" exploratory or discovery mode tasks. The latter incorporates the incubation period method, as originally discussed by Rubinstein (1975) and others.
Abstract:
Solving large-scale all-to-all comparison problems using distributed computing is increasingly significant for various applications. Previous efforts to implement distributed all-to-all comparison frameworks have treated the two phases of data distribution and comparison task scheduling separately. This leads to high storage demands as well as poor data locality for the comparison tasks, thus creating a need to redistribute the data at runtime. Furthermore, most previous methods have been developed for homogeneous computing environments, so their overall performance is degraded even further when they are used in heterogeneous distributed systems. To tackle these challenges, this paper presents a data-aware task scheduling approach for solving all-to-all comparison problems in heterogeneous distributed systems. The approach formulates the requirements for data distribution and comparison task scheduling simultaneously as a constrained optimization problem. Then, metaheuristic data pre-scheduling and dynamic task scheduling strategies are developed along with an algorithmic implementation to solve the problem. The approach provides perfect data locality for all comparison tasks, avoiding rearrangement of data at runtime. It achieves load balancing among heterogeneous computing nodes, thus reducing the overall computation time. It also reduces data storage requirements across the network. The effectiveness of the approach is demonstrated through experimental studies.
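The abstract does not give the scheduling algorithm itself. As a rough illustration of the data-locality idea it describes, the following Python sketch uses a hypothetical greedy heuristic that co-locates both inputs of every pairwise comparison task on the node chosen to run it, weighting load by an assumed relative node speed; it is not the paper's constrained-optimization or metaheuristic method.

```python
"""Toy greedy sketch (not the paper's algorithm): place data items and
all-to-all comparison tasks so every task runs on a node that already
stores both of its inputs, balancing load by relative node speed."""
from itertools import combinations
from collections import defaultdict

def greedy_schedule(items, node_speeds):
    """items: list of item ids; node_speeds: dict node -> relative speed."""
    storage = defaultdict(set)             # node -> items stored locally
    load = {n: 0.0 for n in node_speeds}   # comparison tasks per node
    assignment = {}                        # (i, j) -> node

    for i, j in combinations(items, 2):
        # Prefer a node that already holds both items (perfect data locality).
        candidates = [n for n in node_speeds if {i, j} <= storage[n]]
        if not candidates:
            # Otherwise pick nodes needing the fewest new copies,
            # breaking ties by current load relative to speed.
            candidates = sorted(node_speeds,
                                key=lambda n: (len({i, j} - storage[n]),
                                               load[n] / node_speeds[n]))
        best = min(candidates, key=lambda n: load[n] / node_speeds[n])
        storage[best] |= {i, j}            # pre-place any missing inputs
        load[best] += 1.0
        assignment[(i, j)] = best
    return assignment, storage, load

if __name__ == "__main__":
    plan, placed, load = greedy_schedule(list(range(6)),
                                         {"fast": 2.0, "slow": 1.0})
    print(load, {n: sorted(s) for n, s in placed.items()})
```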
Abstract:
Social media platforms risk polarising public opinions by employing proprietary algorithms that produce filter bubbles and echo chambers. As a result, the ability of citizens and communities to engage in robust debate in the public sphere is diminished. In response, this paper highlights the capacity of urban interfaces, such as pervasive displays, to counteract this trend by exposing citizens to the socio-cultural diversity of the city. Engagement with different ideas, networks and communities is crucial to both innovation and the functioning of democracy. We discuss examples of urban interfaces designed to play a key role in fostering this engagement. Based on an analysis of works empirically grounded in field observations and design research, we call for a theoretical framework that positions pervasive displays and other urban interfaces as civic media. We argue that when designed for more than wayfinding, advertisement or television broadcasts, urban screens as civic media can rectify some of the pitfalls of social media by allowing the polarised user to break out of their filter bubble and embrace the cultural diversity and richness of the city.
Abstract:
Modern database systems incorporate a query optimizer to identify the most efficient "query execution plan" for executing the declarative SQL queries submitted by users. A dynamic-programming-based approach is used to exhaustively enumerate the combinatorially large search space of plan alternatives and, using a cost model, to identify the optimal choice. While dynamic programming (DP) works very well for moderately complex queries with up to around a dozen base relations, it usually fails to scale beyond this stage due to its inherent exponential space and time complexity. Therefore, DP becomes practically infeasible for complex queries with a large number of base relations, such as those found in current decision-support and enterprise management applications. To address the above problem, a variety of approaches have been proposed in the literature. Some completely jettison the DP approach and resort to alternative techniques such as randomized algorithms, whereas others have retained DP by using heuristics to prune the search space to computationally manageable levels. In the latter class, a well-known strategy is "iterative dynamic programming" (IDP) wherein DP is employed bottom-up until it hits its feasibility limit, and then iteratively restarted with a significantly reduced subset of the execution plans currently under consideration. The experimental evaluation of IDP indicated that by appropriate choice of algorithmic parameters, it was possible to almost always obtain "good" (within a factor of two of the optimal) plans, and in the few remaining cases, mostly "acceptable" (within an order of magnitude of the optimal) plans, and rarely, a "bad" plan. While IDP is certainly an innovative and powerful approach, we have found that there are a variety of common query frameworks wherein it can fail to consistently produce good plans, let alone the optimal choice. This is especially so when star or clique components are present, increasing the complexity of the join graphs. Worse, this shortcoming is exacerbated when the number of relations participating in the query is scaled upwards.
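For context on the enumeration that IDP truncates and restarts, here is a minimal sketch of classical bottom-up dynamic programming over subsets of relations. The cost model (join output cardinality with a fixed assumed selectivity) and the relation names are placeholders, not the paper's setup.

```python
"""Illustrative bottom-up DP over relation subsets: the exhaustive
enumeration whose exponential growth motivates IDP. Crude cost model."""
from itertools import combinations

def dp_join_order(cardinalities, selectivity=0.1):
    """cardinalities: dict relation name -> row count."""
    rels = sorted(cardinalities)
    best = {}   # frozenset of relations -> (cost, estimated rows, plan)
    for r in rels:
        best[frozenset([r])] = (0.0, cardinalities[r], r)

    for size in range(2, len(rels) + 1):
        for subset in combinations(rels, size):
            s = frozenset(subset)
            # Try every split of the subset into two non-empty halves.
            for k in range(1, size):
                for left in combinations(subset, k):
                    l, r = frozenset(left), s - frozenset(left)
                    lc, lrows, lplan = best[l]
                    rc, rrows, rplan = best[r]
                    rows = lrows * rrows * selectivity
                    cost = lc + rc + rows       # charge the join output size
                    if s not in best or cost < best[s][0]:
                        best[s] = (cost, rows, (lplan, rplan))
    return best[frozenset(rels)]

if __name__ == "__main__":
    print(dp_join_order({"A": 1000, "B": 500, "C": 2000, "D": 100}))
```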
Abstract:
This paper gives a new iterative algorithm for kernel logistic regression. It is based on the solution of a dual problem using ideas similar to those of the Sequential Minimal Optimization algorithm for Support Vector Machines. Asymptotic convergence of the algorithm is proved. Computational experiments show that the algorithm is robust and fast. The algorithmic ideas can also be used to give a fast dual algorithm for solving the optimization problem arising in the inner loop of Gaussian Process classifiers.
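The abstract does not spell out the dual updates. The sketch below only fixes the kernel logistic regression objective being solved, minimizing it in the primal with plain gradient descent over an RBF kernel; it is not the paper's SMO-like dual algorithm, and all hyperparameters and the synthetic data are illustrative.

```python
"""Primal gradient-descent sketch of kernel logistic regression:
minimize sum_i log(1 + exp(-y_i (K a)_i)) + (lam/2) a' K a."""
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_klr(X, y, lam=1e-2, lr=0.1, iters=500):
    """y in {-1, +1}; returns kernel-expansion coefficients alpha."""
    K = rbf_kernel(X, X)
    alpha = np.zeros(len(y))
    for _ in range(iters):
        f = K @ alpha
        p = 1.0 / (1.0 + np.exp(y * f))        # = sigmoid(-y * f)
        grad = K @ (-y * p + lam * alpha)      # gradient of the objective
        alpha -= lr * grad / len(y)
    return alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
    alpha = fit_klr(X, y)
    preds = np.sign(rbf_kernel(X, X) @ alpha)
    print("training accuracy:", (preds == y).mean())
```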
Abstract:
This contribution focuses on the accelerated loss of traditional sound patterning in music, parallel to the exponential loss of linguistic and cultural variety in a world increasingly 'globalized' by market policies and economic liberalization, in which scientific or technical justification plays a crucial role. As a suggestion for an alternative trend, composers and music theorists are invited to explore the world of design and patterning by grammar rules from non-dominant cultures, and to make an effort to understand their contextual usage and its transformation, in order to appreciate their symbolism and aesthetic depth. Practical examples are provided.
Abstract:
We present the theoretical foundations for the multiple rendezvous problem involving design of local control strategies that enable groups of visibility-limited mobile agents to split into subgroups, exhibit simultaneous taxis behavior towards, and eventually rendezvous at, multiple unknown locations of interest. The theoretical results are proved under a restricted set of assumptions. The algorithm used to solve the above problem is based on a glowworm swarm optimization (GSO) technique, developed earlier, that finds multiple optima of multimodal objective functions. The significant difference between our work and most earlier approaches to agreement problems is the use of a virtual local-decision domain by the agents in order to compute their movements. The range of the virtual domain is adaptive in nature and is bounded above by the maximum sensor/visibility range of the agent. We introduce a new decision domain update rule that enhances the rate of convergence by a factor of approximately two. We use some illustrative simulations to support the algorithmic correctness and theoretical findings of the paper.
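As a reference point for readers unfamiliar with GSO, the following sketch implements the basic luciferin update, probabilistic movement toward brighter neighbors, and adaptive local-decision domain radius from the standard GSO formulation. Parameter values, the two synthetic source locations, and the objective function are illustrative assumptions, and the sketch omits the paper's new domain-update rule.

```python
"""Compact sketch of the basic glowworm swarm optimization (GSO) updates
used for multi-source taxis/rendezvous; parameters are illustrative."""
import numpy as np

def gso_step(pos, luciferin, r_d, objective,
             rho=0.4, gamma=0.6, step=0.03, beta=0.08, n_t=5, r_s=3.0):
    """One synchronous GSO iteration for all agents."""
    luciferin = (1 - rho) * luciferin + gamma * objective(pos)   # luciferin update
    new_pos = pos.copy()
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = np.where((dist < r_d[i]) & (luciferin > luciferin[i]) &
                        (dist > 0))[0]
        if len(nbrs):
            w = luciferin[nbrs] - luciferin[i]
            j = np.random.choice(nbrs, p=w / w.sum())            # brighter neighbor
            d = pos[j] - pos[i]
            new_pos[i] = pos[i] + step * d / np.linalg.norm(d)   # move toward it
        # adaptive local-decision domain radius, capped by sensor range r_s
        r_d[i] = min(r_s, max(0.0, r_d[i] + beta * (n_t - len(nbrs))))
    return new_pos, luciferin, r_d

if __name__ == "__main__":
    sources = np.array([[0.0, 0.0], [4.0, 4.0]])                 # two rendezvous points
    J = lambda x: np.exp(-np.min(((x[:, None, :] - sources) ** 2).sum(-1), axis=1))
    pos = np.random.uniform(-1, 5, size=(30, 2))
    luc, r_d = np.full(30, 5.0), np.full(30, 2.0)
    for _ in range(300):
        pos, luc, r_d = gso_step(pos, luc, r_d, J)
    print(np.round(pos[:5], 2))
```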
Abstract:
The domination and Hamilton circuit problems are of interest both in algorithm design and complexity theory. The domination problem has applications in facility location, and the Hamilton circuit problem has applications in routing problems in communications and operations research. The problem of deciding if G has a dominating set of cardinality at most k, and the problem of determining if G has a Hamilton circuit, are NP-complete. Polynomial time algorithms are, however, available for a large number of restricted classes. A motivation for the study of these algorithms is that they not only give insight into the characterization of these classes but also require a variety of algorithmic techniques and data structures. So the search for efficient algorithms for these problems in many classes still continues. A class of perfect graphs which is practically important and mathematically interesting is the class of permutation graphs. The domination problem is polynomial time solvable on permutation graphs. Algorithms that are already available are of time complexity O(n²) or more, and space complexity O(n²), on these graphs. The Hamilton circuit problem is open for this class. We present a simple O(n) time and O(n) space algorithm for the domination problem on permutation graphs. Unlike the existing algorithms, we use the concept of geometric representation of permutation graphs. Further, exploiting this geometric notion, we develop an O(n²) time and O(n) space algorithm for the Hamilton circuit problem.
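To make the graph class concrete, here is a small brute-force illustration, deliberately not the linear-time algorithm of the paper: it builds a permutation graph, in which two positions are adjacent exactly when the permutation inverts them, and finds a minimum dominating set by exhaustive enumeration.

```python
"""Illustrative brute force: build a permutation graph from a permutation
and find a minimum dominating set by enumeration (exponential, for intuition
only -- the paper's algorithm runs in O(n) time and space)."""
from itertools import combinations

def permutation_graph(perm):
    """perm[i] is the value at position i; vertices i < j are adjacent
    exactly when perm inverts them, i.e. perm[i] > perm[j]."""
    n = len(perm)
    adj = {v: set() for v in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if perm[i] > perm[j]:          # inversion => edge
                adj[i].add(j)
                adj[j].add(i)
    return adj

def min_dominating_set(adj):
    verts = list(adj)
    for k in range(1, len(verts) + 1):
        for cand in combinations(verts, k):
            covered = set(cand)
            for v in cand:
                covered |= adj[v]
            if covered == set(verts):      # every vertex dominated
                return set(cand)

if __name__ == "__main__":
    g = permutation_graph([3, 1, 4, 5, 2])
    print(g, min_dominating_set(g))
```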
Abstract:
Extracting features from point-based representations of geometric surface models is becoming increasingly important for purposes such as model classification, matching, and exploration. In an earlier paper, we proposed a multiphase segmentation process to identify elongated features in point-sampled surface models without the explicit construction of a mesh or other surface representation. The preliminary results demonstrated the strength and potential of the segmentation process, but the resulting segmentations were still of low quality, and the segmentation process could be slow. In this paper, we describe several algorithmic improvements to overcome the shortcomings of the segmentation process. To demonstrate the improved quality of the segmentation and the superior time efficiency of the new segmentation process, we present segmentation results obtained for various point-sampled surface models. We also discuss an application of our segmentation process to extract ridge-separated features in point-sampled surfaces of CAD models.
Abstract:
The use of delayed coefficient adaptation in the least mean square (LMS) algorithm has enabled the design of pipelined architectures for real-time transversal adaptive filtering. However, the convergence speed of this delayed LMS (DLMS) algorithm, when compared with that of the standard LMS algorithm, is degraded and worsens as the adaptation delay increases. Existing pipelined DLMS architectures have large adaptation delay and hence degraded convergence speed. In this paper, we first present a pipelined DLMS architecture with minimal adaptation delay for any given sampling rate. The architecture is synthesized by using a number of function-preserving transformations on the signal flow graph representation of the DLMS algorithm. With the use of carry-save arithmetic, the pipelined architecture can support high sampling rates, limited only by the delay of a full adder and a 2-to-1 multiplexer. In the second part of this paper, we extend the synthesis methodology described in the first part to synthesize pipelined DLMS architectures whose power dissipation meets a specified budget. This low-power architecture exploits the parallelism in the DLMS algorithm to meet the required computational throughput. The architecture exhibits a novel tradeoff between algorithmic performance (convergence speed) and power dissipation. (C) 1999 Elsevier Science B.V. All rights reserved.
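A minimal behavioral sketch of the DLMS recursion may help: the coefficients are adapted with an error and input vector that are D samples old, which is what permits pipelining but slows convergence. The filter length, step size, delay and test signals below are illustrative, and the sketch says nothing about the carry-save hardware architecture itself.

```python
"""Behavioral sketch of delayed LMS (DLMS): adaptation uses data that is
`delay` samples old; filtering uses the current data."""
import numpy as np

def dlms(x, d, num_taps=8, mu=0.01, delay=4):
    """x: input signal, d: desired signal; returns error sequence and weights."""
    w = np.zeros(num_taps)
    xbuf = np.zeros(num_taps)          # most recent sample first
    err = np.zeros(len(x))
    history = []                       # (error, input vector) per sample
    for k in range(len(x)):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = x[k]
        err[k] = d[k] - w @ xbuf       # filtering uses the current data ...
        history.append((err[k], xbuf.copy()))
        if k >= delay:                 # ... but adaptation uses stale data
            e_old, x_old = history[k - delay]
            w = w + mu * e_old * x_old
    return err, w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.normal(size=4000)
    h = np.array([0.6, -0.3, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0])   # unknown channel
    d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))
    _, w = dlms(x, d)
    print(np.round(w, 2))              # approaches h, despite the delay
```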
Abstract:
ASICs offer the best realization of DSP algorithms in terms of performance, but the cost is prohibitive, especially when the volumes involved are low. However, if the architecture synthesis trajectory for such algorithms is such that the target architecture can be identified as an interconnection of elementary parameterized computational structures, then it is possible to attain a close match to an ASIC, in both performance and power, for any algorithmic parameters of the given algorithm. Such an architecture is weakly programmable (configurable) and can be viewed as an application-specific integrated processor (ASIP). In this work, we present a methodology to synthesize ASIPs for DSP algorithms. (C) 1999 Elsevier Science B.V. All rights reserved.
Abstract:
This paper looks at the complexity of four different incremental problems. The problems considered are: (1) interval partitioning of a flow graph; (2) breadth-first search (BFS) of a directed graph; (3) lexicographic depth-first search (DFS) of a directed graph; and (4) constructing the postorder listing of the nodes of a binary tree. The last problem arises out of the need for incrementally computing the Sethi-Ullman (SU) ordering [1] of the subtrees of a tree after it has undergone changes of a given type. These problems are among those that claimed our attention in the process of designing algorithmic techniques for incremental code generation. BFS and DFS certainly have numerous other applications, but as far as our work is concerned, incremental code generation is the common thread linking these problems. The study of the complexity of these problems is done from two different perspectives. The theory of incremental relative lower bounds (IRLBs) is given in [2]. We use this theory to derive the IRLBs of the first three problems. Then we use the notion of a bounded incremental algorithm [4] to prove the unboundedness of the fourth problem with respect to the locally persistent model of computation. The lower-bound result for lexicographic DFS is possibly the most interesting. In [5], the author considers lexicographic DFS to be a problem for which the incremental version may require recomputation of the entire solution from scratch. In that sense, our IRLB result provides further evidence for this possibility, with the proviso that the incremental DFS algorithms considered are ones that do not require too much preprocessing.
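For reference, the fourth problem concerns maintaining the Sethi-Ullman ordering under tree edits. The sketch below computes the batch (non-incremental) SU labels and evaluation order for a plain binary tree, using the simplified convention that every leaf needs one register; the Node class and the driver are illustrative assumptions, not the paper's model.

```python
"""Batch Sethi-Ullman labeling and evaluation order for a binary expression
tree (simplified variant: every leaf is labeled 1)."""
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    label: int = 0                     # registers needed to evaluate this subtree

def su_label(node: Optional[Node]) -> int:
    if node is None:
        return 0
    if node.left is None and node.right is None:
        node.label = 1                 # a leaf needs one register
        return node.label
    l, r = su_label(node.left), su_label(node.right)
    node.label = l + 1 if l == r else max(l, r)
    return node.label

def su_order(node: Optional[Node], out: list) -> list:
    """Postorder listing, evaluating the larger-label subtree first."""
    if node is None:
        return out
    first, second = ((node.left, node.right)
                     if su_label(node.left) >= su_label(node.right)
                     else (node.right, node.left))
    su_order(first, out)
    su_order(second, out)
    out.append(node)
    return out

if __name__ == "__main__":
    t = Node(Node(Node(), Node()), Node())
    print(su_label(t), len(su_order(t, [])))   # label of the root, node count
```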
Abstract:
A new approach to machine representation and analysis of three-dimensional objects is presented. The representation, based on the notion of the "skeleton" of an object, leads to a scheme for comparing two given object views for shape relations. The objects are composed of long, thin, rectangular prisms joined at their ends. The input picture to the program is the digitized line drawing portraying the three-dimensional object. To compare two object views, two characteristic vertices, called the "cardinal point" and the "end-cardinal point," occurring consistently at the bends and open ends of the object, are detected. The skeletons are then obtained as a connected path passing through these points. The shape relationships between the objects are then obtained from the matching characteristics of their skeletons. The method explores the possibility of a more detailed and finer analysis leading to the detection of features such as symmetry, asymmetry and other shape properties of an object.