918 results for sonic object
Abstract:
Early visual cortex (EVC) participates in visual feature memory and the updating of remembered locations across saccades, but its role in the trans-saccadic integration of object features is unknown. We hypothesized that if EVC is involved in updating object features relative to gaze, feature memory should be disrupted when saccades remap an object representation into a simultaneously perturbed EVC site. To test this, we applied transcranial magnetic stimulation (TMS) over functional magnetic resonance imaging-localized EVC clusters corresponding to the bottom left/right visual quadrants (VQs). During experiments, these VQs were probed psychophysically by briefly presenting a central object (Gabor patch) while subjects fixated gaze to the right or left (and above). After a short memory interval, participants were required to detect the relative change in orientation of a re-presented test object at the same spatial location. Participants either sustained fixation during the memory interval (fixation task) or made a horizontal saccade that either maintained or reversed the VQ of the object (saccade task). Three TMS pulses (coinciding with the pre-, peri-, and postsaccade intervals) were applied to the left or right EVC. This had no effect when (a) fixation was maintained, (b) saccades kept the object in the same VQ, or (c) the EVC quadrant corresponding to the first object was stimulated. However, as predicted, TMS reduced performance when saccades (especially larger saccades) crossed the remembered object location and brought it into the VQ corresponding to the TMS site. This suppression effect was statistically significant for leftward saccades and followed a weaker trend for rightward saccades. These causal results are consistent with the idea that EVC is involved in the gaze-centered updating of object features for trans-saccadic memory and perception.
Abstract:
The use of museum collections as a path to learning for university students is fast becoming a new pedagogy for higher education. Despite a strong tradition of using lectures as a way of delivering the curriculum, the positive benefits of ‘active’ and ‘experiential learning’ are being recognised in universities at both a strategic level and in daily teaching practice. As museum artefacts, specimens and art works are used to evoke, provoke, and challenge students’ engagement with their subject, so transformational learning can take place. This unique book presents the first comprehensive exploration of ‘object-based learning’ as a pedagogy for higher education in a broad context. An international group of authors offer a spectrum of approaches at work in higher education today. They explore contemporary principles and practice of object-based learning in higher education, demonstrating the value of using collections in this context and considering the relationship between academic discipline and object-based learning as a teaching strategy.
Abstract:
There is a perception amongst some of those learning computer programming that the principles of object-oriented programming (where behaviour is often encapsulated across multiple class files) can be difficult to grasp, especially when taught through a traditional, didactic ‘chalk-and-talk’ method or in a lecture-based environment.
We propose a non-traditional teaching method, developed for a government-funded teacher-training project delivered by Queen’s University, which we call bigCode. In this scenario, learners are provided with many printed, poster-sized fragments of code (in this case either Java or C#). The learners sit on the floor in groups and assemble these fragments into the many classes which make up an object-oriented program.
Early trials indicate that bigCode is an effective method for teaching object-orientation. The requirement to physically organise the code fragments imitates closely the thought processes of a good software developer when developing object-oriented code.
Furthermore, in addition to teaching the principles involved in object-orientation, bigCode is also an extremely useful technique for teaching learners the organisation and structure of individual classes in Java or C# (as well as the organisation of procedural code). The mechanics of organising fragments of code into complete, correct computer programs give the users first-hand practice of this important skill, and as a result they subsequently find it much easier to develop well-structured code on a computer.
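To make concrete the kind of multi-class structure that learners assemble, a minimal sketch in Java follows. The classes and names are invented for illustration and are not taken from the bigCode materials; they simply show behaviour encapsulated across several class files, as described above.

```java
// Hypothetical fragments of the kind learners might piece together:
// behaviour is split across several small classes rather than one file.

// Shape.java - a common interface the concrete classes implement
interface Shape {
    double area();
}

// Circle.java - one concrete class encapsulating its own behaviour
class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

// Rectangle.java - another concrete class with its own state and behaviour
class Rectangle implements Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    public double area() { return width * height; }
}

// Main.java - the entry point that ties the classes together
public class Main {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Rectangle(2.0, 3.0) };
        for (Shape s : shapes) {
            System.out.println(s.getClass().getSimpleName() + " area = " + s.area());
        }
    }
}
```

Printed at poster size and cut into fragments (interface declarations, fields, method bodies), pieces of this kind are what groups reassemble on the floor into complete classes.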
Yet, open questions remain. Is bigCode successful only because we have unknowingly predominantly targeted kinesthetic learners? Is bigCode also an effective teaching approach for other forms of learners, such as visual learners? How scalable is bigCode: in its current form can it be used with large class sizes, or outside the classroom?
Abstract:
The YSOVAR (Young Stellar Object VARiability) Spitzer Space Telescope observing program obtained the first extensive mid-infrared (3.6 and 4.5 μm) time series photometry of the Orion Nebula Cluster plus smaller footprints in 11 other star-forming cores (AFGL 490, NGC 1333, Mon R2, GGD 12-15, NGC 2264, L1688, Serpens Main, Serpens South, IRAS 20050+2720, IC 1396A, and Ceph C). There are ~29,000 unique objects with light curves in either or both IRAC channels in the YSOVAR data set. We present the data collection and reduction for the Spitzer and ancillary data, and define the "standard sample" on which we calculate statistics, consisting of fast-cadence data with epochs roughly twice per day for ~40 days. We also define a "standard sample of members" consisting of all the IR-selected and X-ray-selected members. We characterize the standard sample in terms of other properties, such as spectral energy distribution shape. We use three mechanisms to identify variables in the fast-cadence data: the Stetson index, a χ² fit to a flat light curve, and significant periodicity. We also identified variables on the longest timescales possible (six to seven years) by comparing measurements taken early in the Spitzer mission with the mean from our YSOVAR campaign. The fraction of members in each cluster that are variable on these longest timescales is a function of the ratio of Class I to total members in each cluster, such that clusters with a higher fraction of Class I objects also have a higher fraction of long-term variables. For objects with a YSOVAR-determined period and a [3.6]-[8] color, we find that stars with longer periods are more likely than those with shorter periods to have an IR excess. We do not find any evidence for variability that causes [3.6]-[4.5] excesses to appear or vanish within our data set; out of members and field objects combined, at most 0.02% may have transient IR excesses.
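As a concrete illustration of the simplest of these variability tests, the sketch below computes a reduced χ² of a light curve against a constant (flat) model, assuming independent Gaussian photometric errors. It is a generic Java example, not the YSOVAR pipeline itself, and the sample magnitudes and errors are invented.

```java
// Illustrative check of whether a light curve is consistent with a constant
// (flat) model: compute the inverse-variance-weighted mean magnitude and the
// reduced chi-squared of the data about that mean. Values well above ~1 flag
// candidate variables. Generic sketch only, not the YSOVAR reduction code.
public class FlatLightCurveTest {

    /** Returns the reduced chi-squared of the magnitudes about their weighted mean. */
    static double reducedChiSquared(double[] mag, double[] err) {
        double sumW = 0.0, sumWM = 0.0;
        for (int i = 0; i < mag.length; i++) {
            double w = 1.0 / (err[i] * err[i]);   // inverse-variance weight
            sumW += w;
            sumWM += w * mag[i];
        }
        double mean = sumWM / sumW;               // best-fit constant model
        double chi2 = 0.0;
        for (int i = 0; i < mag.length; i++) {
            double r = (mag[i] - mean) / err[i];
            chi2 += r * r;
        }
        return chi2 / (mag.length - 1);           // one fitted parameter (the mean)
    }

    public static void main(String[] args) {
        double[] mag = {12.01, 12.05, 11.98, 12.40, 12.02};  // invented epochs
        double[] err = {0.02, 0.02, 0.02, 0.02, 0.02};
        System.out.printf("reduced chi^2 = %.2f%n", reducedChiSquared(mag, err));
    }
}
```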
Abstract:
The modulation of neural activity in visual cortex is thought to be a key mechanism of visual attention. The investigation of attentional modulation in high-level visual areas, however, is hampered by the lack of clear tuning or contrast response functions. In the present functional magnetic resonance imaging study we therefore systematically assessed how small voxel-wise biases in object preference across hundreds of voxels in the lateral occipital complex were affected when attention was directed to objects. We found that the strength of attentional modulation depended on a voxel's object preference in the absence of attention, a pattern indicative of an amplificatory mechanism. Our results show that such attentional modulation effectively increased the mutual information between voxel responses and object identity. Further, these local modulatory effects led to improved information-based object readout at the level of multi-voxel activation patterns and to an increased reproducibility of these patterns across repeated presentations. We conclude that attentional modulation enhances object coding in local and distributed object representations of the lateral occipital complex.
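For readers unfamiliar with the quantity invoked here, the toy Java sketch below computes the mutual information between a discretized voxel response and a two-way object identity from a joint count table. The binarization, the counts, and the class names are assumptions of this illustration, not the study's actual analysis, which worked with continuous voxel responses and multi-voxel patterns.

```java
// Toy mutual-information calculation between a discretized voxel response
// (e.g. "low"/"high") and object identity (e.g. two object categories).
// Purely illustrative; the counts below are invented.
public class MutualInformationDemo {

    /** Mutual information (in bits) computed from a joint count table counts[x][y]. */
    static double mutualInformation(int[][] counts) {
        double total = 0.0;
        for (int[] row : counts) for (int c : row) total += c;

        double[] px = new double[counts.length];      // marginal over responses
        double[] py = new double[counts[0].length];   // marginal over identities
        for (int x = 0; x < counts.length; x++)
            for (int y = 0; y < counts[0].length; y++) {
                px[x] += counts[x][y] / total;
                py[y] += counts[x][y] / total;
            }

        double mi = 0.0;
        for (int x = 0; x < counts.length; x++)
            for (int y = 0; y < counts[0].length; y++) {
                double pxy = counts[x][y] / total;
                if (pxy > 0) mi += pxy * (Math.log(pxy / (px[x] * py[y])) / Math.log(2));
            }
        return mi;
    }

    public static void main(String[] args) {
        // rows: voxel response (low, high); columns: object identity (A, B)
        int[][] counts = { {18, 7}, {6, 19} };
        System.out.printf("MI = %.3f bits%n", mutualInformation(counts));
    }
}
```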
Abstract:
The Trembling Line is a film and multi-channel sound installation exploring the visual and acoustic echoes between decipherable musical gestures and abstract patterning, orchestral swells and extreme high-speed slow-motion close-ups of strings and percussion. It features a score by Leo Grant and a newly devised multichannel audio system by the Institute of Sound and Vibration Research (ISVR), University of Southampton. The multi-channel speaker array is devised as an intimate sound spatialisation system in which each element of sound can be pried apart and reconfigured, to create a dynamically disorienting sonic experience. It becomes the inside of a musical instrument, an acoustic envelope or cage of sorts, through which viewers are invited to experience the film and generate cross-sensory connections and counterpoints between the sound and the visuals. Funded by a Leverhulme Artist-in-Residence Award and John Hansard Gallery, with support from ISVR and the Music Department, University of Southampton. The project provided a rare opportunity to work creatively with cutting-edge developments in sound distribution devised by ISVR, devising a new speaker array: a multi-channel surround listening sphere which spatialises the auditory experience. The sphere is currently used by ISVR for outreach and teaching purposes, and has enabled future collaborations between music staff and students at the University of Southampton and staff at ISVR. Exhibitions: solo exhibition at John Hansard Gallery, Southampton (Dec 2015-Jan 2016), across 5 rooms, including a retrospective of five previous film-works and a new series of photographic stills. Public lectures: two within the gallery. Reviews and interviews: Art Monthly, Studio International, The Quietus, The Wire Magazine.
Abstract:
Contemporary studies of spatial and social cognition frequently use human figures as stimuli. The interpretation of such studies may be complicated by spatial compatibility effects that emerge when researchers employ spatial responses, and participants spontaneously code spatial relationships about an observed body. Yet, the nature of these spatial codes – whether they are location- or object-based, and coded from the perspective of the observer or the figure – has not been determined. Here, we investigated this issue by exploring spatial compatibility effects arising for objects held by a visually presented whole-bodied schematic human figure. In three experiments, participants responded to the colour of the object held in the figure’s left or right hand, using left or right key presses. Left-right compatibility effects were found relative to the participant’s egocentric perspective, rather than the figure’s. These effects occurred even when the figure was rotated by 90 degrees to the left or to the right, and the coloured objects were aligned with the participant’s midline. These findings are consistent with spontaneous spatial coding from the participant’s perspective and relative to the normal upright orientation of the body. This evidence for object-based spatial coding implies that the domain general cognitive mechanisms that result in spatial compatibility effects may contribute to certain spatial perspective-taking and social cognition phenomena.
Abstract:
Object categorisation is linked to detection, segregation and recognition. In the visual system, these processes are achieved in the ventral "what" and dorsal "where" pathways [3], with bottom-up feature extraction in areas V1, V2, V4 and IT (what) in parallel with top-down attention from PP via MT to V2 and V1 (where). The latter is steered by object templates in memory, i.e. in prefrontal cortex, with a "what" component in PF46v and a "where" component in PF46d.
Abstract:
Keypoints (junctions) provide important information for focus-of-attention (FoA) and object categorization/recognition. In this paper we analyze the multi-scale keypoint representation, obtained by applying a linear and quasi-continuous scaling to an optimized model of cortical end-stopped cells, in order to study its importance and possibilities for developing a visual, cortical architecture. We show that keypoints, especially those which are stable over larger scale intervals, can provide a hierarchically structured saliency map for FoA and object recognition. In addition, the application of non-classical receptive field inhibition to keypoint detection makes it possible to distinguish contour keypoints from texture (surface) keypoints.
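As a rough, generic illustration of how scale-stable keypoints could be accumulated into a saliency map, the Java sketch below lets each keypoint vote with a weight proportional to the number of scales over which it is stable. The Keypoint type, the Gaussian vote, and all parameters are assumptions of this sketch; the end-stopped-cell model and the non-classical receptive field inhibition described in the abstract are not reproduced here.

```java
import java.util.List;

// Generic sketch: build a saliency map by letting each detected keypoint vote
// with a weight proportional to the number of scales over which it is stable.
// Invented data structure and parameters, for illustration only.
public class KeypointSaliency {

    record Keypoint(int x, int y, int stableScales) {}

    static double[][] saliencyMap(List<Keypoint> keypoints, int width, int height) {
        double[][] map = new double[height][width];
        double sigma = 3.0;                          // spatial spread of each vote
        for (Keypoint k : keypoints) {
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    double d2 = (x - k.x()) * (x - k.x()) + (y - k.y()) * (y - k.y());
                    // keypoints stable over more scales contribute more saliency
                    map[y][x] += k.stableScales() * Math.exp(-d2 / (2 * sigma * sigma));
                }
            }
        }
        return map;
    }

    public static void main(String[] args) {
        List<Keypoint> kps = List.of(new Keypoint(10, 10, 5), new Keypoint(30, 20, 1));
        double[][] map = saliencyMap(kps, 64, 48);
        System.out.printf("saliency at (10,10) = %.2f%n", map[10][10]);
    }
}
```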