991 results for TR-PEEM, XMCD, ultrafast magnetization processes, Permalloy, transient spatio-temporal domain patterns, spin-wave mode, stray fields


Relevance:

10.00%

Publisher:

Abstract:

This is an interactive new media art installation that explores how the sharing of images, normally hidden on mobile phones, can reveal more about people's sense of place and this ultimately shared experience. Traditional views on sense of place, as exemplified by Wagner (1972) and Relph (1976), characterise the experience as a fusion of meaning, act and context. Indeed, Relph suggests that it is not just the identity of a place that is important, but also the identity that a person or group has with that place, in particular whether they are experiencing it as an 'insider' or 'outsider'. This work stimulates debate concerning the impact of technology on sense of place. Technology offers a number of bridges between the real and virtual worlds, but in so doing places an increased tension on the sense of place and subsequently the identity of the individual. This, coupled with the increased use of camera phones, has enabled the documentation of all aspects of our lives: the things we do, the objects we encounter and the places we inhabit. The installation taps into these hidden electronic resources by letting people share the sense of place they associate with a large-scale event. The work explores the changing nature of the sense of place of performers, visitors and residents over the duration of the event. Interaction with the installation transforms the viewer into performer, echoing Relph's insider-outsider dichotomy.

Relevance:

10.00%

Publisher:

Abstract:

SFC FOLLOW-ON VOUCHER. The project was undertaken as an SFC Follow-on Voucher (£40K) alongside a student project with BDes (Hons) Design & Digital Arts (D&DA). James Blake (Centre for Media & Culture) brought together students and staff to develop digital content, including films, for a transmedia project and the induction video on the coaches to Ratho. Malcolm Innes, Ian Lambert, Andrew O'Dowd, and Euan Winton (Centre for Design Practice & Research) developed the Old Earth Museum (both physical and virtual), and transmedia designer and research student Beata Zemanek oversaw the transmedia strategy and the making of the Gatekeeper film, supported by D&DA students and graduates.

Relevance:

10.00%

Publisher:

Abstract:

The influence of process variables (pea starch, guar gum and glycerol) on the viscosity (V), solubility (SOL), moisture content (MC), transparency (TR), Hunter parameters (L, a, and b), total color difference (ΔE), yellowness index (YI), and whiteness index (WI) of pea starch based edible films was studied using a three-factor, three-level Box–Behnken response surface design. The individual linear effects of pea starch, guar gum and glycerol were significant (p < 0.05) for all the responses. However, the a value was significantly (p < 0.05) affected only by pea starch and guar gum, in a positive and negative linear term, respectively. The interaction of starch × glycerol also had a significant effect (p < 0.05) on the TR of the edible films. The interaction between the independent variables starch × guar gum had a significant impact on the b and YI values. The quadratic regression coefficient of pea starch showed a significant effect (p < 0.05) on V, MC, L, b, ΔE, YI, and WI; that of glycerol on ΔE and WI; and that of guar gum on ΔE and SOL. The results were analyzed by Pareto analysis of variance (ANOVA), and second-order polynomial models were developed from the experimental design with a reliable and satisfactory fit to the corresponding experimental data and high coefficients of determination (R2 > 0.93). Three-dimensional response surface plots were established to investigate the relationship between process variables and the responses. The optimized conditions, with the goal of maximizing TR and minimizing SOL, YI and MC, were 2.5 g pea starch, 25% glycerol and 0.3 g guar gum. The results revealed that pea starch/guar gum edible films with appropriate physical and optical characteristics can be effectively produced and successfully applied in the food packaging industry.
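As a rough illustration of the modelling step described above, the sketch below fits a second-order (quadratic) polynomial to a three-factor Box–Behnken design by least squares and reports R². The factor codings follow the standard 15-run design; the response values are synthetic placeholders, not the study's data.

```python
# A minimal sketch of second-order response surface fitting for a
# three-factor Box-Behnken design. Data are illustrative placeholders.
import numpy as np

def quadratic_design_matrix(X):
    """Expand three coded factors into intercept, linear, interaction,
    and quadratic terms: 1, x1, x2, x3, x1x2, x1x3, x2x3, x1^2, x2^2, x3^2."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([
        np.ones(len(X)),
        x1, x2, x3,
        x1 * x2, x1 * x3, x2 * x3,
        x1 ** 2, x2 ** 2, x3 ** 2,
    ])

# Standard Box-Behnken coded levels (-1, 0, +1) for three factors:
# 12 edge midpoints plus 3 center-point replicates.
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0]])

# Synthetic response generated from a known quadratic plus noise.
rng = np.random.default_rng(0)
y = 2 + 1.5 * X[:, 0] - 0.8 * X[:, 2] + 0.5 * X[:, 0] ** 2 \
    + rng.normal(0, 0.1, len(X))

A = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print("R^2 =", r2)
```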

Relevance:

10.00%

Publisher:

Abstract:

Quadsim is an intermediate code simulator. It allows you to "run" programs that your compiler generates in intermediate code format. Its user interface is similar to most debuggers in that you can step through your program, instruction by instruction, set breakpoints, examine variable values, and so on. The intermediate code format used by Quadsim is that described in [Aho 86]. If your compiler generates intermediate code in this format, you will be able to take intermediate-code files generated by your compiler, load them into the simulator, and watch them "run." You are provided with functions that hide the internal representation of intermediate code. You can use these functions within your compiler to generate intermediate code files that can be read by the simulator. Quadsim was inspired and greatly influenced by [Aho 86]. The material in chapter 8 (Intermediate Code Generation) of [Aho 86] should be considered background material for users of Quadsim.
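For readers unfamiliar with the quadruple format of [Aho 86], the following sketch shows the kind of interpretation loop a simulator like Quadsim performs over (op, arg1, arg2, result) tuples. The opcode names and the example program are illustrative, not Quadsim's actual instruction set or file format.

```python
# A minimal sketch of a quadruple interpreter in the style of [Aho 86].
def run_quads(quads):
    env = {}          # variable name -> value
    pc = 0            # index of the next quadruple to execute

    def val(x):
        # Strings name variables; anything else is a literal constant.
        return env[x] if isinstance(x, str) else x

    while pc < len(quads):
        op, a1, a2, res = quads[pc]
        if op == "assign":
            env[res] = val(a1)
        elif op == "add":
            env[res] = val(a1) + val(a2)
        elif op == "mul":
            env[res] = val(a1) * val(a2)
        elif op == "iflt":            # conditional jump: if a1 < a2 goto res
            if val(a1) < val(a2):
                pc = res
                continue
        elif op == "goto":            # unconditional jump to quad res
            pc = res
            continue
        # "halt" (or any unknown op) simply falls through to the end.
        pc += 1
    return env

# Compute 1 + 2 + ... + 10 as a quadruple program (illustrative).
prog = [
    ("assign", 0, None, "sum"),
    ("assign", 1, None, "i"),
    ("iflt", 10, "i", 6),             # exit loop once 10 < i
    ("add", "sum", "i", "sum"),
    ("add", "i", 1, "i"),
    ("goto", None, None, 2),
    ("halt", None, None, None),
]
print(run_quads(prog)["sum"])         # -> 55
```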

Relevance:

10.00%

Publisher:

Abstract:

Speculative Concurrency Control (SCC) [Best92a] is a new concurrency control approach especially suited for real-time database applications. It relies on the use of redundancy to ensure that serializable schedules are discovered and adopted as early as possible, thus increasing the likelihood of the timely commitment of transactions with strict timing constraints. In [Best92b], SCC-nS, a generic algorithm that characterizes a family of SCC-based algorithms, was described, and its correctness established by showing that it only admits serializable histories. In this paper, we evaluate the performance of the Two-Shadow SCC algorithm (SCC-2S), a member of the SCC-nS family, which is notable for its minimal use of redundancy. In particular, we show that SCC-2S (as a representative of SCC-based algorithms) provides significant performance gains over the widely used Optimistic Concurrency Control with Broadcast Commit (OCC-BC), under a variety of operating conditions and workloads.
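A heavily simplified sketch of the two-shadow idea follows: alongside an optimistic shadow that never waits, a second, speculative shadow waits just before the first access that conflicts with an uncommitted writer, so a detected conflict costs a partial redo rather than a full restart. All class and field names are illustrative, and the real SCC-2S protocol involves considerably more (validation, adoption, and abort rules).

```python
# An illustrative toy model of the two-shadow (SCC-2S) idea, not the
# actual protocol: track how far each shadow could safely progress.
class TwoShadowTxn:
    def __init__(self, name, ops):
        self.name = name
        self.ops = ops              # list of ("r" | "w", item) accesses
        self.optimistic_pos = 0     # progress of the optimistic shadow
        self.speculative_pos = 0    # progress of the speculative shadow

    def step(self, uncommitted_writes):
        """Advance both shadows by one operation where permitted."""
        if self.optimistic_pos < len(self.ops):
            self.optimistic_pos += 1        # the optimistic shadow never waits
        if self.speculative_pos < len(self.ops):
            kind, item = self.ops[self.speculative_pos]
            if item not in uncommitted_writes:
                self.speculative_pos += 1   # safe: no conflicting writer
            # else: the speculative shadow waits at the conflicting access

    def commit(self, conflict_detected):
        # On conflict, adopt the speculative shadow's progress instead of
        # restarting the whole transaction from the beginning.
        kept = self.speculative_pos if conflict_detected else self.optimistic_pos
        print(f"{self.name}: {len(self.ops) - kept} of {len(self.ops)} ops to redo")

t = TwoShadowTxn("T1", [("r", "x"), ("w", "y"), ("r", "z")])
for _ in range(3):
    t.step(uncommitted_writes={"y"})        # another txn holds a write on y
t.commit(conflict_detected=True)            # -> T1: 2 of 3 ops to redo
```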

Relevance:

10.00%

Publisher:

Abstract:

Swiss National Science Foundation; Austrian Federal Ministry of Science and Research; Deutsche Forschungsgemeinschaft (SFB 314); Christ Church, Oxford; Oxford University Computing Laboratory

Relevance:

10.00%

Publisher:

Abstract:

The proliferation of inexpensive workstations and networks has prompted several researchers to use such distributed systems for parallel computing. Attempts have been made to offer a shared-memory programming model on such distributed memory computers. Most systems provide a shared memory that is coherent in that all processes that use it agree on the order of all memory events. This dissertation explores the possibility of a significant improvement in the performance of some applications when they use non-coherent memory. First, a new formal model to describe existing non-coherent memories is developed. I use this model to prove that certain problems can be solved using asynchronous iterative algorithms on shared memory in which the coherence constraints are substantially relaxed. In the course of the development of the model I discovered a new type of non-coherent behavior called Local Consistency. Second, a programming model, Mermera, is proposed. It provides programmers with a choice of hierarchically related non-coherent behaviors along with one coherent behavior. Thus, one can trade off the ease of programming with coherent memory for improved performance with non-coherent memory. As an example, I present a program to solve a linear system of equations using an asynchronous iterative algorithm. This program uses all the behaviors offered by Mermera. Third, I describe the implementation of Mermera on a BBN Butterfly TC2000 and on a network of workstations. The performance of a version of the equation solving program that uses all the behaviors of Mermera is compared with that of a version that uses coherent behavior only. For a system of 1000 equations the former exhibits at least a 5-fold improvement in convergence time over the latter. The version using coherent behavior only does not benefit from employing more than one workstation to solve the problem, while the program using non-coherent behavior continues to achieve improved performance as the number of workstations is increased from 1 to 6. This measurement corroborates our belief that non-coherent shared memory can be a performance boon for some applications.
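The flavor of asynchronous iterative algorithm that tolerates relaxed coherence can be sketched as a Jacobi-style solver in which each update reads a possibly stale snapshot of the other components, refreshed only occasionally. This is an illustration of the algorithm class, not of Mermera's API; the problem data are synthetic.

```python
# A minimal sketch of an asynchronous (stale-read) Jacobi iteration.
import numpy as np

# A diagonally dominant system, so the iteration converges even with
# stale reads (synthetic data, for illustration only).
rng = np.random.default_rng(1)
n = 8
A = np.eye(n) * n + rng.random((n, n))
b = rng.random(n)

x = np.zeros(n)
stale = x.copy()                  # a possibly out-of-date view of x
for sweep in range(200):
    for i in range(n):
        # Jacobi update reading neighbours from the stale snapshot,
        # as a process on non-coherent shared memory might.
        x[i] = (b[i] - A[i] @ stale + A[i, i] * stale[i]) / A[i, i]
    if sweep % 3 == 0:            # the shared view refreshes only sometimes
        stale = x.copy()

print("residual:", np.linalg.norm(A @ x - b))
```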

Relevance:

10.00%

Publisher:

Abstract:

Coherent shared memory is a convenient, but inefficient, method of inter-process communication for parallel programs. By contrast, message passing can be less convenient, but more efficient. To get the benefits of both models, several non-coherent memory behaviors have recently been proposed in the literature. We present an implementation of Mermera, a shared memory system that supports both coherent and non-coherent behaviors in a manner that enables programmers to mix multiple behaviors in the same program [HS93]. A programmer can debug a Mermera program using coherent memory, and then improve its performance by selectively reducing the level of coherence in the parts that are critical to performance. Mermera permits a trade-off of coherence for performance. We analyze this trade-off through measurements of our implementation, and by an example that illustrates the style of programming needed to exploit non-coherence. We find that, even on a small network of workstations, the performance advantage of non-coherence is compelling. Raw non-coherent memory operations perform 20-40 times better than coherent memory operations. An example application program is shown to run 5-11 times faster when permitted to exploit non-coherence. We conclude by commenting on our use of the Isis Toolkit of multicast protocols in implementing Mermera.

Relevance:

10.00%

Publisher:

Abstract:

We investigate the problem of learning disjunctions of counting functions, which are general cases of parity and modulo functions, with equivalence and membership queries. We prove that, for any prime number p, the class of disjunctions of integer-weighted counting functions with modulus p over the domain Z_q^n (or Z^n) for any given integer q ≥ 2 is polynomial time learnable using at most n + 1 equivalence queries, where the hypotheses issued by the learner are disjunctions of at most n counting functions with weights from Z_p. The result is obtained through learning linear systems over an arbitrary field. In general a counting function may have a composite modulus. We prove that, for any given integer q ≥ 2, over the domain Z_2^n, the class of read-once disjunctions of Boolean-weighted counting functions with modulus q is polynomial time learnable with only one equivalence query, and the class of disjunctions of log log n Boolean-weighted counting functions with modulus q is polynomial time learnable. Finally, we present an algorithm for learning graph-based counting functions.
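The reduction to learning linear systems over a field suggests a small sketch of the core subroutine: Gaussian elimination over Z_p. The system solved at the end is illustrative.

```python
# A minimal sketch of solving a linear system over the finite field Z_p.
def solve_mod_p(A, b, p):
    """Gaussian elimination over Z_p (p prime); returns one solution or None."""
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]    # augmented matrix
    n, m = len(M), len(M[0]) - 1
    row, pivots = 0, []
    for col in range(m):
        piv = next((r for r in range(row, n) if M[r][col] % p), None)
        if piv is None:
            continue                                   # free column
        M[row], M[piv] = M[piv], M[row]
        inv = pow(M[row][col], p - 2, p)               # inverse via Fermat
        M[row] = [a * inv % p for a in M[row]]
        for r in range(n):                             # eliminate the column
            if r != row and M[r][col] % p:
                f = M[r][col]
                M[r] = [(a - f * c) % p for a, c in zip(M[r], M[row])]
        pivots.append(col)
        row += 1
    for r in range(row, n):                            # 0 = nonzero means
        if M[r][m] % p and not any(a % p for a in M[r][:m]):
            return None                                # inconsistent system
    x = [0] * m                                        # free variables -> 0
    for r, col in enumerate(pivots):
        x[col] = M[r][m]
    return x

# x + 2y = 1 and 3x + y = 2 over Z_7  ->  x = 2, y = 3
print(solve_mod_p([[1, 2], [3, 1]], [1, 2], p=7))
```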

Relevance:

10.00%

Publisher:

Abstract:

For communication-intensive parallel applications, the maximum degree of concurrency achievable is limited by the communication throughput made available by the network. In previous work [HPS94], we showed experimentally that the performance of certain parallel applications running on a workstation network can be improved significantly if a congestion control protocol is used to enhance network performance. In this paper, we characterize and analyze the communication requirements of a large class of supercomputing applications that fall under the category of fixed-point problems, amenable to solution by parallel iterative methods. This results in a set of interface and architectural features sufficient for the efficient implementation of the applications over a large-scale distributed system. In particular, we propose a direct link between the application and network layer, supporting congestion control actions at both ends. This in turn enhances the system's responsiveness to network congestion, improving performance. Measurements are given showing the efficacy of our scheme to support large-scale parallel computations.
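One common way to realize application-level reaction to congestion signals of the kind proposed here is an AIMD (additive-increase, multiplicative-decrease) send window. The sketch below shows only that control loop, with the congestion signal simulated rather than delivered by a real network layer; it is not the paper's protocol.

```python
# A minimal AIMD control-loop sketch for an iterative parallel worker.
import random

random.seed(2)
window = 8.0                              # messages allowed in flight
for step in range(20):
    congested = random.random() < 0.3     # stand-in for a network signal
    if congested:
        window = max(1.0, window / 2)     # back off multiplicatively
    else:
        window += 1.0                     # probe additively for bandwidth
    # ... exchange up to int(window) boundary values, then iterate ...
    print(f"step {step:2d}: window = {window:4.1f}")
```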

Relevance:

10.00%

Publisher:

Abstract:

We give a hybrid algorithm for parsing ϵ-grammars based on Tomita's non-ϵ-grammar parsing algorithm ([Tom86]) and Nozohoor-Farshi's ϵ-grammar recognition algorithm ([NF91]). The hybrid parser handles the same set of grammars handled by Nozohoor-Farshi's recognizer. The algorithm's details and an example of its use are given. We also discuss the deployment of the hybrid algorithm within a GB parser, and the reason an ϵ-grammar parser is needed in our GB parser.
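One piece of machinery any ϵ-grammar parser needs, and that a non-ϵ-grammar parser can omit, is the set of nullable nonterminals. A minimal sketch of that fixed-point computation follows; the grammar representation is illustrative.

```python
# A minimal sketch of the nullable-nonterminal computation used when
# handling epsilon-productions. Productions map a nonterminal to lists
# of right-hand-side symbols, with [] denoting an epsilon-production.
def nullable_set(productions):
    nullable = set()
    changed = True
    while changed:                 # iterate to a fixed point
        changed = False
        for lhs, rhss in productions.items():
            if lhs in nullable:
                continue
            # lhs is nullable if some RHS consists only of nullables
            # (vacuously true for an epsilon-production's empty RHS).
            if any(all(s in nullable for s in rhs) for rhs in rhss):
                nullable.add(lhs)
                changed = True
    return nullable

grammar = {
    "S": [["A", "B"]],
    "A": [["a"], []],              # A -> a | epsilon
    "B": [[]],                     # B -> epsilon
}
print(nullable_set(grammar))       # -> {'A', 'B', 'S'}
```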

Relevance:

10.00%

Publisher:

Abstract:

By utilizing structure sharing among its parse trees, a GB parser can increase its efficiency dramatically. Using a GB parser which has as its phrase structure recovery component an implementation of Tomita's algorithm (as described in [Tom86]), we investigate how a GB parser can preserve the structure sharing output by Tomita's algorithm. In this report, we discuss the implications of using Tomita's algorithm in GB parsing, and we give some details of the structure-sharing parser currently under construction. We also discuss a method of parallelizing a GB parser, and relate it to the existing literature on parallel GB parsing. Our approach to preserving sharing within a shared-packed forest is applicable not only to GB parsing, but to any setting where we want to preserve structure sharing in a parse forest in the presence of features.
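The essence of structure sharing in a parse forest can be sketched with hash-consing: identical (symbol, children) nodes are constructed once and reused, so common subtrees are physically shared. The sketch below omits the "packing" of ambiguous alternatives and feature handling, and its names are illustrative.

```python
# A minimal hash-consing sketch of structure sharing in a parse forest.
class Forest:
    def __init__(self):
        self._nodes = {}                   # (symbol, children) -> node id

    def node(self, symbol, children=()):
        key = (symbol, tuple(children))
        if key not in self._nodes:         # build each distinct node once
            self._nodes[key] = len(self._nodes)
        return self._nodes[key]

f = Forest()
np1 = f.node("NP", (f.node("the"), f.node("dog")))
np2 = f.node("NP", (f.node("the"), f.node("dog")))
print(np1 == np2, len(f._nodes))           # -> True 3 (shared, not copied)
```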

Relevance:

10.00%

Publisher:

Abstract:

The ML programming language restricts type polymorphism to occur only in the "let-in" construct and requires every occurrence of a formal parameter of a function (a lambda abstraction) to have the same type. Milner in 1978 refers to this restriction (which was adopted to help ML achieve automatic type inference) as a serious limitation. We show that this restriction can be relaxed enough to allow universal polymorphic abstraction without losing automatic type inference. This extension is equivalent to the rank-2 fragment of system F. We precisely characterize the additional program phrases (lambda terms) that can be typed with this extension and we describe typing anomalies both before and after the extension. We discuss how macros may be used to gain some of the power of rank-3 types without losing automatic type inference. We also discuss user-interface problems in how to inform the programmer of the possible types a program phrase may have.
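The classic term the rank-2 extension admits is (λf. (f 1, f true)) (λx. x), where the argument f must itself be polymorphic. The sketch below encodes that example in Python's type notation, using a Protocol with a generic __call__ to stand in for a rank-2 arrow; it illustrates the typing idea only, not ML or the paper's inference algorithm.

```python
# A minimal sketch of a rank-2 type: the *argument* f is polymorphic.
from typing import Protocol, TypeVar

T = TypeVar("T")

class Polymorphic(Protocol):
    def __call__(self, x: T) -> T: ...    # f : forall a. a -> a

def use_at_two_types(f: Polymorphic) -> tuple[int, bool]:
    # Under ML's restriction, f would get a single monomorphic type and
    # could not be applied at both int and bool.
    return (f(1), f(True))

print(use_at_two_types(lambda x: x))      # -> (1, True)
```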

Relevance:

10.00%

Publisher:

Abstract:

Predictability - the ability to foretell that an implementation will not violate a set of specified reliability and timeliness requirements - is a crucial, highly desirable property of responsive embedded systems. This paper overviews a development methodology for responsive systems, which enhances predictability by eliminating potential hazards resulting from physically-unsound specifications. The backbone of our methodology is the Time-constrained Reactive Automaton (TRA) formalism, which adopts a fundamental notion of space and time that restricts expressiveness in a way that allows the specification of only reactive, spontaneous, and causal computation. Using the TRA model, unrealistic systems - possessing properties such as clairvoyance, caprice, infinite capacity, or perfect timing - cannot even be specified. We argue that this "ounce of prevention" at the specification level is likely to spare a lot of time and energy in the development cycle of responsive systems - not to mention the elimination of potential hazards that would otherwise have gone unnoticed. The TRA model is presented to system developers through the CLEOPATRA programming language. CLEOPATRA features a C-like imperative syntax for the description of computation, which makes it easier to incorporate in applications already using C. It is event-driven, and thus appropriate for embedded process control applications. It is object-oriented and compositional, thus advocating modularity and reusability. CLEOPATRA is semantically sound; its objects can be transformed, mechanically and unambiguously, into formal TRA automata for verification purposes, which can be pursued using model-checking or theorem proving techniques. Since 1989, an ancestor of CLEOPATRA has been in use as a specification and simulation language for embedded time-critical robotic processes.
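The reactive, causal flavor of computation the TRA model permits can be sketched as a small event-driven simulator in which every reaction is triggered by an earlier event and fires only after a strictly positive delay, so no object reacts to an event before it happens or instantaneously. Names and structure here are illustrative, not CLEOPATRA syntax.

```python
# A minimal event-driven sketch of reactive, causal, time-constrained
# computation: reactions map an input channel to (delay, output channel).
import heapq

def run(reactions, initial_events, until=10.0):
    queue = list(initial_events)           # (time, channel) pairs
    heapq.heapify(queue)
    while queue:
        time, channel = heapq.heappop(queue)
        if time > until:
            break
        print(f"t={time:4.1f}  event on {channel!r}")
        if channel in reactions:
            delay, out = reactions[channel]
            assert delay > 0               # causality: strictly later
            heapq.heappush(queue, (time + delay, out))

# A sensor triggers a controller, which triggers an actuator, which
# eventually re-triggers the sensor (illustrative process-control loop).
run({"sensor": (1.0, "controller"),
     "controller": (0.5, "actuator"),
     "actuator": (2.0, "sensor")},
    initial_events=[(0.0, "sensor")])
```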

Relevance:

10.00%

Publisher:

Abstract:

We describe a GB parser implemented along the lines of those written by Fong [4] and Dorr [2]. The phrase structure recovery component is an implementation of Tomita's generalized LR parsing algorithm (described in [10]), with recursive control flow (similar to Fong's implementation). The major principles implemented are government, binding, bounding, trace theory, case theory, θ-theory, and barriers. The particular version of GB theory we use is that described by Haegeman [5]. The parser is minimal in the sense that it implements the major principles needed in a GB parser, and has fairly good coverage of linguistically interesting portions of the English language.