15 results for Archive of Underwater Imaging
in CentAUR: Central Archive University of Reading - UK
Abstract:
Group exhibition of artists' books, including Frozen Tears I - III (2003-7), curated by Gregorio Magnani.
Abstract:
Group exhibition of artists' books curated by Gregorio Magnani, including Frozen Tears I - III (2003-7), with books by Kasper Andreasen, Linus Bill and Adrien Horni, blisterZine, Daniel Gustav Cramer, Arnaud Desjardin, Michael Dean, Karl Holmqvist, Louis Luthi, Sara MacKillop, Dan Mitchell, Kristen Mueller, Sophie Nys, Simon Popper, Preston is my Paris, Alessandro Roma, Karin Ruggaber, John Russell, Erik Steinbrecher, Peter Tillessen and Erik van der Weijde.
Abstract:
In rapid scan Fourier transform spectrometry, we show that the noise in the wavelet coefficients resulting from the filter bank decomposition of the complex insertion loss function is linearly related to the noise power in the sample interferogram by a noise amplification factor. By maximizing an objective function composed of the power of the wavelet coefficients divided by the noise amplification factor, optimal feature extraction in the wavelet domain is performed. The performance of a classifier based on the output of a filter bank is shown to be considerably better than that of a Euclidean distance classifier in the original spectral domain. An optimization procedure results in a further improvement of the wavelet classifier. The procedure is suitable for enhancing the contrast or classifying spectra acquired by either continuous wave or THz transient spectrometers, as well as for increasing the dynamic range of THz imaging systems. © 2003 Optical Society of America.
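As a rough, hypothetical illustration of the objective described above (wavelet-coefficient power divided by a noise amplification factor), the following Python sketch decomposes a toy spectrum with a simple Haar filter bank, estimates each subband's noise gain by Monte Carlo, and ranks the subbands by the resulting objective. All names and the toy data are invented; for an orthonormal Haar bank the noise gains are close to one, whereas the complex filter banks considered in the paper give non-trivial amplification factors.

```python
import numpy as np

def haar_dwt(x):
    """One level of a Haar analysis filter bank: returns (approximation, detail)."""
    x = x[: len(x) // 2 * 2]                     # truncate to even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wavedec(x, levels):
    """Multi-level decomposition: detail subbands (fine to coarse) plus final approximation."""
    bands, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        bands.append(d)
    bands.append(a)
    return bands

def noise_amplification(levels, n, trials=200, seed=0):
    """Monte Carlo estimate of each subband's variance gain for unit-variance input noise."""
    rng = np.random.default_rng(seed)
    gains = np.zeros(levels + 1)
    for _ in range(trials):
        gains += np.array([np.var(b) for b in wavedec(rng.standard_normal(n), levels)])
    return gains / trials

# toy THz-style spectrum: two absorption features plus measurement noise
n, levels = 512, 4
freq = np.linspace(0.1, 3.0, n)
spectrum = np.exp(-((freq - 1.0) / 0.05) ** 2) + 0.5 * np.exp(-((freq - 2.2) / 0.08) ** 2)
spectrum += np.random.default_rng(1).normal(0.0, 0.02, n)

bands = wavedec(spectrum, levels)
power = np.array([np.sum(b ** 2) for b in bands])
objective = power / noise_amplification(levels, n)   # coefficient power over noise gain
print("subbands ranked by objective:", np.argsort(objective)[::-1])
```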
Abstract:
In this paper, we discuss the problem of globally computing sub-Riemannian curves on the Euclidean group of motions SE(3). In particular, we derive a global result for special sub-Riemannian curves whose Hamiltonian satisfies a particular condition. Sub-Riemannian curves are defined in the context of a constrained optimal control problem, and the maximum principle is applied to this problem to yield an appropriate left-invariant quadratic Hamiltonian. A number of integrable quadratic Hamiltonians are identified. We then derive convenient expressions for sub-Riemannian curves in SE(3) that correspond to particular extremal curves. These equations are then used to compute sub-Riemannian curves that could potentially be used for motion planning of underwater vehicles.
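The abstract does not reproduce the Hamiltonian itself; as a hedged general form, for a left-invariant sub-Riemannian structure on SE(3) specified by an orthonormal frame of controlled directions A_1, ..., A_k in the Lie algebra se(3), the Pontryagin maximum principle yields the normal extremals as projections of the flow of the quadratic Hamiltonian

```latex
H(g,p) \;=\; \tfrac{1}{2}\sum_{i=1}^{k} \langle p,\, A_i(g) \rangle^{2},
\qquad (g,p) \in T^{*}\mathrm{SE}(3),
```

where A_i(g) denotes the left-invariant vector field generated by A_i; the "particular condition" mentioned in the abstract singles out cases in which this Hamiltonian flow is integrable in closed form.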
Abstract:
Using the recently developed mean–variance of logarithms (MVL) diagram, together with the TIGGE archive of medium-range ensemble forecasts from nine different centres, an analysis is presented of the spatiotemporal dynamics of their perturbations, showing how the differences between models and perturbation techniques can explain the shape of their characteristic MVL curves. In particular, a divide is seen between ensembles based on singular vectors or empirical orthogonal functions, and those based on bred vector, Ensemble Transform with Rescaling or Ensemble Kalman Filter techniques. Consideration is also given to the use of the MVL diagram to compare the growth of perturbations within the ensemble with the growth of the forecast error, showing that there is a much closer correspondence for some models than others. Finally, the use of the MVL technique to assist in selecting models for inclusion in a multi-model ensemble is discussed, and an experiment suggested to test its potential in this context.
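As a hedged sketch of what an MVL point is (not code from the paper), the snippet below computes, for each lead time of a toy ensemble, the spatial mean and spatial variance of the logarithm of the perturbation amplitude; the MVL diagram plots these variance-versus-mean pairs as a curve over lead time. Field sizes, growth rates and member counts are invented for illustration.

```python
import numpy as np

def mvl_point(perturbation, eps=1e-12):
    """Spatial mean and variance of log|perturbation| for one field (eps guards log(0))."""
    logp = np.log(np.abs(perturbation) + eps)
    return logp.mean(), logp.var()

rng = np.random.default_rng(0)
n_lead, n_members, nx, ny = 10, 5, 40, 60

means, variances = [], []
for t in range(n_lead):
    # toy perturbation fields whose amplitude grows with lead time
    stats = [mvl_point((0.1 * (t + 1)) * rng.normal(size=(nx, ny)))
             for _ in range(n_members)]
    m, v = np.mean(stats, axis=0)
    means.append(m)
    variances.append(v)

# an MVL curve is the trajectory of (mean, variance) pairs across lead times
for t, (m, v) in enumerate(zip(means, variances)):
    print(f"lead {t:2d}: mean(log|dX|) = {m:+.3f}, var(log|dX|) = {v:.3f}")
```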
Abstract:
Terahertz (THz) frequency radiation, 0.1 THz to 20 THz, is being investigated for biomedical imaging applications following the introduction of pulsed THz sources that produce picosecond pulses and function at room temperature. Owing to the broadband nature of the radiation, spectral and temporal information is available from radiation that has interacted with a sample; this information is exploited in the development of biomedical imaging tools and sensors. In this work, models to aid interpretation of broadband THz spectra were developed and evaluated. THz radiation lies on the boundary between regions best considered using a deterministic electromagnetic approach and those better analysed using a stochastic approach incorporating quantum mechanical effects, so two computational models to simulate the propagation of THz radiation in an absorbing medium were compared. The first was a thin film analysis and the second a stochastic Monte Carlo model. The Cole–Cole model was used to predict the variation with frequency of the physical properties of the sample and scattering was neglected. The two models were compared with measurements from a highly absorbing water-based phantom. The Monte Carlo model gave a prediction closer to experiment over 0.1 to 3 THz. Knowledge of the frequency-dependent physical properties, including the scattering characteristics, of the absorbing media is necessary. The thin film model is computationally simple to implement but is restricted by the geometry of the sample it can describe. The Monte Carlo framework, despite being initially more complex, provides greater flexibility to investigate more complicated sample geometries.
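Since the analysis above relies on the Cole–Cole model for the frequency-dependent properties of the medium, a minimal sketch may help. The parameter values below are illustrative (roughly water-like) and are not taken from the paper; the phantom in the study would have its own fitted values.

```python
import numpy as np

def cole_cole(freq_hz, eps_s=78.4, eps_inf=4.9, tau=8.3e-12, alpha=0.02):
    """Complex permittivity eps* = eps_inf + (eps_s - eps_inf) / (1 + (i*w*tau)^(1-alpha)),
    written in the convention eps* = eps' - i*eps'' (loss is the negative imaginary part)."""
    w = 2.0 * np.pi * freq_hz
    return eps_inf + (eps_s - eps_inf) / (1.0 + (1j * w * tau) ** (1.0 - alpha))

c = 2.998e8                                    # speed of light, m/s
freq = np.linspace(0.1e12, 3.0e12, 6)          # 0.1-3 THz, the range compared in the study
eps = cole_cole(freq)
n_complex = np.sqrt(eps)                       # n - i*kappa in the convention above
n, kappa = n_complex.real, -n_complex.imag
absorption = 2.0 * (2.0 * np.pi * freq) * kappa / c    # power absorption coefficient, 1/m

for f, ni, a in zip(freq, n, absorption):
    print(f"{f / 1e12:4.2f} THz: n = {ni:4.2f}, absorption = {a / 100.0:6.1f} 1/cm")
```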
Abstract:
It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure-function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This 'Cartesian' description constitutes a completely accurate mapping of dendritic morphology but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise 'blueprint' of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of 'fundamental', measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense. If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the 'computational neuroanatomy' strategy for neuroscience databases.
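To make the algorithmic level of description concrete, here is a deliberately small, hypothetical sketch (not L-NEURON or ARBORVITAE): a virtual dendrite is grown by sampling a handful of parameter distributions and emitted as a list of cylinders, roughly in the spirit of a tracing file. The parameter names and values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# illustrative parameter "distributions" a real algorithm would fit to measured data
PARAMS = {
    "segment_length_um": (8.0, 2.0),     # mean, sd of cylinder length
    "taper_per_segment": 0.97,           # diameter multiplier per segment
    "bifurcation_prob": 0.12,            # chance a segment ends in a branch point
    "termination_diam_um": 0.4,          # stop growing below this diameter
    "branch_angle_deg": (35.0, 10.0),    # mean, sd of branching angle
}

def grow(parent_id, pos, direction, diam, cylinders):
    """Recursively extend one branch, appending (id, parent, x, y, z, diameter) cylinders."""
    while diam > PARAMS["termination_diam_um"]:
        length = max(1.0, rng.normal(*PARAMS["segment_length_um"]))
        pos = pos + length * direction
        cid = len(cylinders)
        cylinders.append((cid, parent_id, pos[0], pos[1], pos[2], diam))
        parent_id, diam = cid, diam * PARAMS["taper_per_segment"]
        if rng.random() < PARAMS["bifurcation_prob"]:
            for sign in (+1.0, -1.0):            # two daughter branches
                angle = np.deg2rad(sign * rng.normal(*PARAMS["branch_angle_deg"]))
                c, s = np.cos(angle), np.sin(angle)
                child_dir = np.array([c * direction[0] - s * direction[1],
                                      s * direction[0] + c * direction[1],
                                      direction[2]])
                child_dir /= np.linalg.norm(child_dir)
                grow(parent_id, pos, child_dir, diam * 0.8, cylinders)
            return                               # the parent branch ends at the bifurcation

cylinders = [(0, -1, 0.0, 0.0, 0.0, 2.0)]        # soma-attached root cylinder
grow(0, np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), 2.0, cylinders)
print(f"generated {len(cylinders)} cylinders")
```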
Abstract:
This paper contextualises the framework and methodology for producing the video performance Ballet, by Szuper Gallery (Susanne Clausen & Pavlo Kerestey), which was initiated through an encounter with an archive of rural information and propaganda films from the Museum of English Rural Life [MERL] in Reading, UK. This project looked at ways of extrapolating filmed gestures from the MERL films to choreograph a large-scale performance film and to consider how this practice-led research could instigate a new way of engaging with and interpreting the MERL film collection. The resulting video was produced in 2009 and was first exhibited at MERL, where it became part of the archive. This was followed by a series of international screenings. I will set out the surrounding research in and around the archive propaganda films, focusing on the performances by rural extras (background actors) in these films, while looking at the way one could understand the relation between a future-past, or tradition and accident in these films (Massumi, 1993). I will pair this with a reflection on the cultural reading of the extras (Didi-Huberman, 2009) and the notion of social choreography (Hewitt, 2005) in this context. I will then lay out reflections on artistic methods for the final performance, a Crash Choreography, based on calculated, but spontaneous encounters.
Abstract:
This book investigates the challenges that the presence of digital imaging within the cinematic frame can pose for the task of interpretation. Applying close textual analysis to a series of case studies, the book demystifies the relationship of digital imaging to processes of watching and reading films, and develops a methodology for approaching the digital in popular cinema. In doing so, the study places contemporary digital imaging practice in relation to historical traditions of filmmaking and special effects practice, and proposes a fresh, flexible approach to the close reading of film that can take appropriate account of the presence of the digital.
Abstract:
Proper scoring rules provide a useful means to evaluate probabilistic forecasts. Independently of scoring rules, it has been argued that reliability and resolution are desirable forecast attributes. The mathematical expectation value of the score allows for a decomposition into reliability- and resolution-related terms, demonstrating a relationship between scoring rules and reliability/resolution. A similar decomposition holds for the empirical (i.e. sample average) score over an archive of forecast–observation pairs. This empirical decomposition, however, provides an overly optimistic estimate of the potential score (i.e. the optimum score which could be obtained through recalibration), showing that a forecast assessment based solely on the empirical resolution and reliability terms will be misleading. The differences between the theoretical and empirical decompositions are investigated, and specific recommendations are given on how to obtain better estimators of reliability and resolution in the case of the Brier and Ignorance scoring rules.
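A minimal sketch of the empirical decomposition discussed above, assuming the Brier score and binned probability forecasts (the function and test data are invented, not from the paper): reliability and resolution are computed from bin-conditional observed frequencies, and, as the abstract warns, these sample-based terms are only estimates of the theoretical quantities.

```python
import numpy as np

def brier_decomposition(p, y, n_bins=10):
    """Return (brier, reliability, resolution, uncertainty) for probabilities p and binary outcomes y."""
    p, y = np.asarray(p, dtype=float), np.asarray(y, dtype=float)
    n = len(p)
    brier = np.mean((p - y) ** 2)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    which = np.digitize(p, edges[1:-1])          # bin index 0 .. n_bins-1 per forecast
    o_bar = y.mean()
    rel = res = 0.0
    for k in range(n_bins):
        mask = which == k
        n_k = mask.sum()
        if n_k == 0:
            continue
        f_k, o_k = p[mask].mean(), y[mask].mean()
        rel += n_k * (f_k - o_k) ** 2
        res += n_k * (o_k - o_bar) ** 2
    return brier, rel / n, res / n, o_bar * (1.0 - o_bar)

rng = np.random.default_rng(0)
true_p = rng.uniform(size=20000)
forecast = np.clip(true_p + rng.normal(0.0, 0.1, size=true_p.size), 0.0, 1.0)  # slightly miscalibrated
outcome = rng.uniform(size=true_p.size) < true_p

bs, rel, res, unc = brier_decomposition(forecast, outcome)
print(f"BS = {bs:.4f}  REL = {rel:.4f}  RES = {res:.4f}  UNC = {unc:.4f}")
print(f"REL - RES + UNC = {rel - res + unc:.4f}  (exact only if forecasts within a bin are identical)")
```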
Abstract:
This study investigated the contribution of stereoscopic depth cues to the reliability of ordinal depth judgments in complex natural scenes. Participants viewed photographs of cluttered natural scenes, either monocularly or stereoscopically. On each trial, they judged which of two indicated points in the scene was closer in depth. We assessed the reliability of these judgments over repeated trials, and how well they correlated with the actual disparities of the points between the left and right eyes' views. The reliability of judgments increased as their depth separation increased, was higher when the points were on separate objects, and deteriorated for point pairs that were more widely separated in the image plane. Stereoscopic viewing improved sensitivity to depth for points on the same surface, but not for points on separate objects. Stereoscopic viewing thus provides depth information that is complementary to that available from monocular occlusion cues.
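As a purely illustrative sketch (invented toy data, not the study's analysis), two of the summary measures described above can be computed as follows: the test-retest consistency of the ordinal judgments across repeats, and their agreement with the sign of the disparity difference between the two points.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs, n_repeats = 200, 4

# toy data: a signed disparity difference per point pair, and noisy binary judgments
disparity_diff = rng.normal(0.0, 1.0, n_pairs)                   # arbitrary units
noise = rng.normal(0.0, 0.8, (n_repeats, n_pairs))
judgments = (disparity_diff + noise) > 0                          # True = "first point closer"

# reliability: how often each pair receives its own majority response across repeats
majority = judgments.mean(axis=0) > 0.5
consistency = (judgments == majority).mean()

# agreement: proportion of responses matching the sign of the disparity difference
agreement = (judgments == (disparity_diff > 0)).mean()

print(f"test-retest consistency: {consistency:.2f}")
print(f"agreement with disparity sign: {agreement:.2f}")
```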