45 results for Transforms
Abstract:
We investigated the role of visual feedback of task performance in visuomotor adaptation. Participants produced novel two-degrees-of-freedom movements (elbow flexion-extension, forearm pronation-supination) to move a cursor towards visual targets. Following trials with no rotation, participants were exposed to a 60° visuomotor rotation, before returning to the non-rotated condition. A colour cue on each trial permitted identification of the rotated/non-rotated contexts. Participants could not see their arm but received continuous and concurrent visual feedback (CF) of a cursor representing limb position or post-trial visual feedback (PF) representing the movement trajectory. Separate groups of participants who received CF were instructed that online modifications of their movements either were, or were not, permissible as a means of improving performance. Feedforward-mediated performance improvements occurred for both CF and PF groups in the rotated environment. Furthermore, for CF participants this adaptation occurred regardless of whether feedback modifications of motor commands were permissible. Upon re-exposure to the non-rotated environment participants in the CF, but not PF, groups exhibited post-training aftereffects, manifested as greater angular deviations from a straight initial trajectory, with respect to the pre-rotation trials. Accordingly, the nature of the performance improvements that occurred was dependent upon the timing of the visual feedback of task performance. Continuous visual feedback of task performance during task execution appears critical in realising automatic visuomotor adaptation through a recalibration of the visuomotor mapping that transforms visual inputs into appropriate motor commands.
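A minimal sketch (not the authors' apparatus) of how a visuomotor rotation maps hand motion to cursor motion, assuming a planar reach represented as a 2-D vector:

```python
import math

def rotate(vec, degrees):
    """Rotate a 2-D movement vector counter-clockwise by `degrees`."""
    th = math.radians(degrees)
    x, y = vec
    return (x * math.cos(th) - y * math.sin(th),
            x * math.sin(th) + y * math.cos(th))

# A straight-ahead reach of 10 cm, as the participant produces it:
hand = (0.0, 10.0)

# Under the rotated condition the cursor is displaced by 60 degrees:
cursor = rotate(hand, 60.0)

# Full adaptation corresponds to aiming 60 degrees the other way,
# so that the rotated feedback lands back on the target:
compensated = rotate(hand, -60.0)
adapted_cursor = rotate(compensated, 60.0)  # approximately equal to `hand`
```

The aftereffect described above is exactly this compensation persisting once the rotation is removed: the hand keeps aiming along `compensated` even though the cursor is no longer rotated.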
Abstract:
Dynamic power consumption is heavily dependent on interconnect, so clever mapping of digital signal processing algorithms to parallelised realisations with data locality is vital. This is a particular problem for fast algorithm implementations, where designers will typically have sacrificed circuit structure for efficiency in software implementation. This study outlines an approach for reducing the dynamic power consumption of a class of fast algorithms by minimising the index space separation; this allows the generation of field programmable gate array (FPGA) implementations with reduced power consumption. It is shown how a 50% reduction in relative index space separation results in measured power gains of 36% and 37% over a Cooley-Tukey fast Fourier transform (FFT)-based solution, based on actual power measurements for a Xilinx Virtex-II FPGA implementation and circuit measurements for a Xilinx Virtex-5 implementation, respectively. The authors show the generality of the approach by applying it to a number of other fast algorithms, namely the discrete cosine, discrete Hartley and Walsh-Hadamard transforms.
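For reference, the Cooley-Tukey FFT used as the baseline above can be sketched in a few lines. This is a plain radix-2 software version, shown only to fix the algorithm being compared against; it says nothing about the index-space-separation mapping, which is specific to the study:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # DFT of even-indexed samples
    odd = fft(x[1::2])    # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

def dft(x):
    """Direct O(n^2) DFT, used here only as a reference for checking."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * t * k / n) for t in range(n))
            for k in range(n)]
```

The butterfly structure in the inner loop is the source of the long interconnections that hardware mappings of the FFT must route.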
Abstract:
Masonry arches are strong, durable, aesthetically pleasing and largely maintenance free, yet since 1900 there has been a dramatic decline in their use. However, designers, contractors and clients now have access to a new method of constructing arches incorporating precast concrete voussoirs interconnected via polymeric reinforcement and a concrete screed. No centring is necessary, as the FlexiArch, when lifted, transforms under gravity into the desired arch shape. After discussing general aspects of innovation, the basic concept of the arch bridge system is presented along with technological advances since it was patented. Experiences gained from building over 40 FlexiArch bridges in the UK and Ireland, and from model and full-scale tests carried out to validate the system during installation and in service, are described. These tests confirm that under load the system behaves like a traditional masonry arch, so existing analysis methods can be used for design and assessment.
Abstract:
In the digital age, the hyperspace of virtual reality systems stands out as a new spatial notion creating a parallel world to the space we live in. In this alternative realm, the body transforms into a hyperbody and begins to follow the white rabbit, not only in the real world but also in the Matrix world. The Matrix project of Andy and Larry Wachowski started with a feature film released in 1999. However, The Matrix is not only a film (trilogy). It is a concept, a universe that brings real space and hyperspace together. It is a world represented not only in science fiction films but also in The Animatrix, which comprises nine animated Matrix films directed by Peter Chung, Andy Jones, Yoshiaki Kawajiri and others, four of which were written by the Wachowskis. The same universe is used in Enter the Matrix, a digital game whose script was written and directed by the brothers, and in a comic book, The Matrix Comics, which includes twelve different stories by artists like Neil Gaiman and Geof Darrow. The Wachowskis played an active role in the creation and realization of all these “products” of different media. The comic book came last (November 2003); however, it is possible to argue that everything came out of comics – the storyboards of the original film. After all, the Wachowskis have a background in comics.
In this study, I will focus on the formal analysis of the science fiction world of The Matrix - as a representation of hyperspace - in different media (feature film, animated film, digital game and comic book), concentrating on the diverse forms of space that come into being as a result of medium differences. To unfold the different formal characters of film, animation, game and comics, concepts and features including framing, flattening, continuity, movement, montage, sound/text, light and color will be discussed. An analysis of these products will help to open up a discussion on the relation of form, media and representation.
Abstract:
Wavelet transforms provide basis functions for time-frequency analysis and have properties that are particularly useful for the compression of analogue point-on-wave transient and disturbance power system signals. This paper evaluates the compression properties of the discrete wavelet transform using actual power system data. The results presented in the paper indicate that reduction ratios of up to 10:1 with acceptable distortion are achievable. The paper discusses the application of the reduction method for expedient fault analysis and protection assessment.
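As a toy illustration of the thresholding idea behind wavelet compression (a single-level Haar transform, far simpler than the multi-level DWT and real power system waveforms evaluated in the paper):

```python
S = 2 ** -0.5  # 1/sqrt(2), making the Haar transform orthonormal

def haar_dwt(x):
    """One level of the Haar wavelet transform (len(x) must be even)."""
    approx = [(a + b) * S for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) * S for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    """Exact inverse of haar_dwt."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * S, (a - d) * S])
    return out

def compress(x, threshold):
    """Zero out small detail coefficients; larger thresholds give higher
    reduction ratios at the cost of more distortion."""
    approx, detail = haar_dwt(x)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    return approx, detail
```

On a smooth signal most detail coefficients fall below the threshold, so only the retained non-zero coefficients need be stored or transmitted.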
Abstract:
The transformation of vaterite into calcite may be effected by heating in either the presence or the absence of oxygen. Vaterite remains thermally stable up to a calcination temperature of 450°C. It transforms progressively to calcite up to 500°C, giving two exothermic peaks: 1) at 481°C, due to the transformation of the vaterite surface, which is in contact with a small amount of calcite phase already formed over time on the solid surface from atmospheric humidity; 2) at 491°C, due to the transformation of the pure vaterite bulk. The calcite phase remains stable up to 700°C. Above this temperature the formation of CaO is observed.
Abstract:
Burning seaweed to produce kelp, valued for its high potash and soda content, was formerly a significant industry in remote coastal areas of Scotland and elsewhere. Given the high concentrations of arsenic in seaweeds, up to 100 mg kg⁻¹, this study investigates the possibility that the kelp industry caused arsenic contamination of these pristine environments. A series of laboratory-scale seaweed burning experiments was conducted, and analysis of the products using HPLC ICP-MS shows that at least 40% of the arsenic originally in the seaweed could have been released into the fumes. The hypothesis that the burning process transforms arsenic from low toxicity arsenosugars in the original seaweeds (Fucus vesiculosus and Laminaria digitata) to highly toxic inorganic forms, predominantly arsenate, is consistent with As speciation analysis results. A field study conducted on Westray, Orkney, once a major centre for kelp production, shows that elevated arsenic levels (10.7 ± 3.0 mg kg⁻¹, compared to background levels of 1.7 ± 0.2 mg kg⁻¹) persist in soils in the immediate vicinity of the kelp burning pits. A model combining results from the burning experiments with data from historical records demonstrates the potential for arsenic deposition of 47 g ha⁻¹ year⁻¹ on land adjacent to the main kelp burning location on Westray, and for arsenic concentrations exceeding current UK soil guideline values during the 50-year period of peak kelp production.
Abstract:
A systolic array is an array of individual processing cells each of which has some local memory and is connected only to its nearest neighbours in the form of a regular lattice. On each cycle of a simple clock every cell receives data from its neighbouring cells and performs a specific processing operation on it. The resulting data is stored within the cell and passed on to neighbouring cells on the next clock cycle. This paper gives an overview of work to date and illustrates the application of bit-level systolic arrays by means of two examples: (1) a pipelined bit-slice circuit for computing matrix × vector transforms; and (2) a bit-serial structure for multi-bit convolution.
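The matrix × vector example lends itself to a simple behavioural simulation. The sketch below models the clocked data skew of a linear (word-level, not bit-level) systolic array, purely as an illustration of the dataflow described above:

```python
def systolic_matvec(A, x):
    """Behavioural simulation of a linear systolic array computing y = A x.

    Cell i holds row i of A and an accumulator; the elements of x are
    streamed past the cells, reaching cell i one clock cycle after cell i-1.
    """
    n_rows, n_cols = len(A), len(x)
    acc = [0] * n_rows                     # one accumulator per cell
    # Enough ticks for every x element to pass the last cell:
    for t in range(n_cols + n_rows - 1):
        for i in range(n_rows):
            j = t - i                      # x element at cell i on this tick
            if 0 <= j < n_cols:
                acc[i] += A[i][j] * x[j]   # local multiply-accumulate
    return acc
```

Each cell touches only the value currently passing it, which is precisely the nearest-neighbour-only communication that makes systolic arrays attractive for VLSI.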
Abstract:
A bit-level systolic array system is proposed for the Winograd Fourier transform algorithm. The design uses bit-serial arithmetic and, in common with other systolic arrays, features nearest-neighbor interconnections, regularity, and high throughput. The short interconnections in this method contrast favorably with the long interconnections between butterflies required in the FFT. The structure is well suited to VLSI implementations. It is demonstrated how long transforms can be implemented with components designed to perform short-length transforms. These components build into longer transforms, preserving the regularity and structure of the short-length transform design.
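The claim that long transforms can be built from short-length components can be illustrated with the Good-Thomas prime-factor index mapping, which (like the Winograd algorithm) combines coprime short DFTs without twiddle factors. The sketch below is a software illustration of that mapping for a length-6 transform, not the proposed bit-level hardware:

```python
import cmath

def dft(x):
    """Direct DFT, used as the short-length building block and as a check."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * t * k / n) for t in range(n))
            for k in range(n)]

def pfa_dft6(x):
    """Length-6 DFT built from length-2 and length-3 DFTs via the
    Good-Thomas prime-factor index mapping (no twiddle factors)."""
    N1, N2 = 2, 3
    M2 = pow(N2, -1, N1)   # N2^-1 mod N1 = 1
    M1 = pow(N1, -1, N2)   # N1^-1 mod N2 = 2
    # Input mapping: n = (N2*n1 + N1*n2) mod 6 lays x out on a 2x3 grid
    y = [[x[(N2 * n1 + N1 * n2) % 6] for n2 in range(N2)] for n1 in range(N1)]
    # Length-3 DFTs along the rows, then length-2 DFTs down the columns
    y = [dft(row) for row in y]
    cols = [dft([y[n1][k2] for n1 in range(N1)]) for k2 in range(N2)]
    # Output mapping: k = (N2*M2*k1 + N1*M1*k2) mod 6
    X = [0j] * 6
    for k1 in range(N1):
        for k2 in range(N2):
            X[(N2 * M2 * k1 + N1 * M1 * k2) % 6] = cols[k2][k1]
    return X
```

Because the cross terms of the index product vanish modulo 6, no twiddle multiplications appear between the short transforms, which is what lets the short-length structure carry over unchanged to the longer transform.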
Abstract:
Many plain text information hiding techniques demand deep semantic processing, and so suffer in reliability. In contrast, syntactic processing is a more mature and reliable technology. Assuming a perfect parser, this paper evaluates a set of automated and reversible syntactic transforms that can hide information in plain text without changing the meaning or style of a document. A large representative collection of newspaper text is fed through a prototype system. In contrast to previous work, the output is subjected to human testing to verify that the text has not been significantly compromised by the information hiding procedure, yielding a success rate of 96% and bandwidth of 0.3 bits per sentence. © 2007 SPIE-IS&T.
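A deliberately tiny sketch of the reversibility idea, using a hypothetical conjunct-swap transform (far simpler than the parser-based syntactic transforms the paper evaluates): the order of "X and Y" carries one bit, with alphabetical order as the canonical form.

```python
import re

def embed_bit(sentence, bit):
    """Hide one bit by (optionally) swapping the conjuncts of 'X and Y'.
    Convention: alphabetical order encodes 0, reversed order encodes 1.
    Returns None if the sentence offers no carrier site."""
    m = re.search(r'\b(\w+) and (\w+)\b', sentence)
    if not m:
        return None
    a, b = sorted([m.group(1), m.group(2)], key=str.lower)
    first, second = (a, b) if bit == 0 else (b, a)
    return sentence[:m.start()] + f"{first} and {second}" + sentence[m.end():]

def extract_bit(sentence):
    """Recover the hidden bit from the conjunct order."""
    m = re.search(r'\b(\w+) and (\w+)\b', sentence)
    if not m:
        return None
    return 0 if m.group(1).lower() <= m.group(2).lower() else 1
```

Even this toy shows the trade-off the paper measures: the transform is automated and reversible, but whether the altered order is acceptable to readers is an empirical question, hence the human testing.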
Abstract:
We break down photoelectron diffraction intensities into four terms in analogy to optical holography and discuss the effect of each term on reconstructed images. The second term, involving products of scattered waves Σ_iΣ_j O_iO_j^*, is in this case not structure-less. Theoretical analysis and simulations demonstrate that this term may lead to spurious features in real-space images in holographic transforms of medium energy electron diffraction patterns. If it is small enough, the problem may be overcome by an iterative correction process.
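In standard notation (R the reference wave, O_i the wave scattered by atom i), the four-term decomposition referred to above reads, consistent with the Σ_iΣ_j O_iO_j^* term quoted in the abstract:

```latex
I \;=\; \Bigl| R + \sum_i O_i \Bigr|^2
  \;=\; |R|^2 \;+\; \sum_{i,j} O_i O_j^{*} \;+\; R \sum_i O_i^{*} \;+\; R^{*} \sum_i O_i
```

Only the last two cross terms carry the holographic information; the |R|^2 term is structureless, while the scattered-wave products are the problematic second term discussed above.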
Abstract:
This workshop draws on an emerging collaborative body of research by Lovett, Morrow and McClean that aims to understand architecture and its processes as a form of pedagogical practice: a civic pedagogy.
Architectural education can be valued not only as a process that delivers architecture-specific skills and knowledges, but also as a process that transforms people into critically active contributors to society. We are keen to examine how and where those skills are developed in architectural education and to trace their existence and/or application within practice. We intend to examine whether some architectural and spatial practices are intrinsically pedagogical in their nature, and how the level of involvement of clients, users and communities can mimic the project-based learning of architectural education – particularly in the context of ‘live project learning’.
1. This workshop begins with a brief discussion paper from Morrow that sets out the arguments behind why and how architecture can be understood as pedagogy. It will do so by presenting firstly the relationship between architectural practice and pedagogy, drawing out both contemporary and historical examples of architecture and architects acting pedagogically. It will also consider some other forms of creative practice that explicitly frame themselves pedagogically, and focus on participatory approaches in architectural practice that overlap with inclusive and live pedagogies, concluding with a draft and tentative abstracted pedagogical framework for architectural practice.
2. Lovett will examine practices of architectural operation that have a pedagogical approach, or which recognise within themselves an educational subtext/current. He is most interested in a 'liveness' beyond the 'Architectural Education' of university institutions. The presentation will question the scope for both spatial empowerment / agency and a greater understanding and awareness of the value of good design when operating as architects with participant-clients younger than 18, older than 25 or across varied parts of society. Positing that the learning might be greatest when there are no prescribed 'Learning Outcomes' and that such work might depend on risk-taking and playfulness, the presentation will be a curated showcase drawing on his own ongoing work.
Both brief presentations will form the basis of the workshop’s discussion, which hopes to draw on participants’ views and experiences to enrich the research process. The intention is that the overall workshop will lead to a call for contributors and respondents to a forthcoming publication on ‘Architecture as Pedagogy’.
Abstract:
BACKGROUND: Smart tags attached to freely-roaming animals recording multiple parameters at infra-second rates are becoming commonplace, and are transforming our understanding of the way wild animals behave. Interpretation of such data is complex and currently limits the ability of biologists to realise the value of their recorded information.
DESCRIPTION: This work presents Framework4, an all-encompassing software suite which operates on smart sensor data to determine the 4 key elements considered pivotal for movement analysis from such tags (Endangered Species Res 4: 123-37, 2008). These are: animal trajectory, behaviour, energy expenditure and quantification of the environment in which the animal moves. The program transforms smart sensor data into dead-reckoned movements, template-matched behaviours, dynamic body acceleration-derived energetics and position-linked environmental data before outputting it all into a single file. Biologists are thus left with a single data set in which animal actions and environmental conditions can be linked across time and space.
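As a rough sketch of the dead-reckoning step (planar only, with illustrative variable names; Framework4 itself also incorporates further sensors and drift correction against known positions):

```python
import math

def dead_reckon(start, steps):
    """Integrate (heading_degrees, speed_m_s, dt_s) samples into a 2-D track.

    A simplified planar version of tag-based dead-reckoning: each sample's
    heading and speed are converted to a displacement and accumulated.
    """
    x, y = start
    track = [(x, y)]
    for heading, speed, dt in steps:
        th = math.radians(heading)
        # Compass convention: 0 deg = north (+y), 90 deg = east (+x)
        x += speed * dt * math.sin(th)
        y += speed * dt * math.cos(th)
        track.append((x, y))
    return track
```

Real deployments accumulate sensor error with every integrated step, which is why dead-reckoned tracks are periodically re-anchored to GPS or other position fixes.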
CONCLUSIONS: Framework4 is user-friendly software that assists biologists in elucidating 4 key aspects of wild animal ecology using data derived from tags with multiple sensors recording at high rates. Its use should enhance the ability of biologists to derive meaningful results rapidly from complex data.
Abstract:
Aims. Projected rotational velocities (v_e sin i) have been estimated for 334 targets in the VLT-FLAMES Tarantula Survey that do not manifest significant radial velocity variations and are not supergiants. They have spectral types from approximately O9.5 to B3. The estimates have been analysed to infer the underlying rotational velocity distribution, which is critical for understanding the evolution of massive stars. Methods. Projected rotational velocities were deduced from the Fourier transforms of spectral lines, with upper limits also being obtained from profile fitting. For the narrower-lined stars, metal and non-diffuse helium lines were adopted, and for the broader-lined stars, both non-diffuse and diffuse helium lines; the estimates obtained using the different sets of lines are in good agreement. The uncertainty in the mean estimates is typically 4% for most targets. The iterative deconvolution procedure of Lucy has been used to deduce the probability density distribution of the rotational velocities. Results. Projected rotational velocities range up to approximately 450 km s⁻¹ and show a bi-modal structure. This is also present in the inferred rotational velocity distribution, with 25% of the sample having 0 < v_e < 100 km s⁻¹ and the high-velocity component having v_e ∼ 250 km s⁻¹. There is no evidence from the spatial and radial velocity distributions of the two components that they represent either field and cluster populations or different episodes of star formation. Be-type stars have also been identified. Conclusions. The bi-modal rotational velocity distribution in our sample resembles that found for late-B and early-A type stars. While magnetic braking appears to be a possible mechanism for producing the low-velocity component, we cannot rule out alternative explanations. © ESO 2013.
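For context, the Fourier method mentioned above rests on the standard textbook relation between the first zero σ₁ of the line profile's Fourier transform and the projected rotational velocity (the constant 0.660 assumes a linear limb-darkening coefficient ε ≈ 0.6; this is background, not a formula quoted from the paper):

```latex
v_{\mathrm{e}} \sin i \;\simeq\; \frac{0.660\, c}{\lambda\, \sigma_1}
```

Broader lines push the first zero to lower frequencies, which is why the method degrades gracefully into the upper limits obtained from profile fitting for the narrowest-lined stars.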