90 results for Poetic of the image
Abstract:
Examining the evolution of British and Australian policing, this comparative review of the literature considers the historical underpinnings of policing in these two countries and the impact of community legitimacy derived from the early concepts of policing by consent. Using the August 2011 disorder in Britain as a lens, this paper considers whether, in striving to maintain community confidence, undue emphasis is placed on the police's public image at the expense of community safety. Examining the path of policing reform, the impact of bureaucracy on policing and the evolving debate surrounding police performance, this review suggests that, while British and Australian police forces largely deliver on the ideal of an ethical and strong force, a preoccupation with self-image may in fact tarnish the very thing they strive to achieve: their standing with the public. This paper advocates the more realistic goal of earning public respect rather than affection, in order to strike the difficult balance between remaining an approachable, ethical entity and providing firm, confident policing in an ever-evolving, modern society.
Abstract:
There are several methods for determining the proteoglycan content of cartilage in biomechanics experiments. Many of these are assay-based, relying on histochemistry or spectrophotometry protocols in which quantification is determined biochemically. More recently, a method has emerged that quantifies proteoglycan content by applying image processing algorithms, e.g., in ImageJ, to histological micrographs, with advantages including time saving and low cost. However, it is unknown whether this image analysis method produces results comparable to those obtained from the biochemical methodology. This paper compares the results of a well-established chemical method with those obtained using image analysis to determine the proteoglycan content of visually normal cartilage samples (n=33) and their progressively degraded counterparts. The results reveal a strong linear relationship with a regression coefficient (R2) of 0.9928, leading to the conclusion that the image analysis methodology is a viable alternative to spectrophotometry.
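As a rough illustration of how such a comparison might be carried out, the sketch below derives an image-based proteoglycan measure by thresholding the stained area of each micrograph and regresses it against biochemically assayed values. The file names, threshold, and use of scikit-image and SciPy are illustrative assumptions, not the protocol used in the paper.

```python
# Minimal sketch: compare an image-derived proteoglycan measure with
# biochemical assay values via linear regression. Paths, threshold and
# libraries (scikit-image, SciPy, NumPy) are illustrative assumptions.
import numpy as np
from skimage import io, color
from scipy import stats

def stained_fraction(path, threshold=0.5):
    """Fraction of pixels whose staining intensity exceeds a threshold."""
    img = io.imread(path)
    gray = color.rgb2gray(img) if img.ndim == 3 else img / img.max()
    stain = 1.0 - gray                      # darker pixels = stronger staining (assumed)
    return float((stain > threshold).mean())

# Hypothetical paired data: one micrograph and one assay value per sample.
micrographs = ["sample_%02d.png" % i for i in range(1, 34)]
assay_values = np.loadtxt("assay_values.csv", delimiter=",")   # e.g. ug GAG / mg tissue

image_measure = np.array([stained_fraction(p) for p in micrographs])
slope, intercept, r, p, se = stats.linregress(image_measure, assay_values)
print("R^2 = %.4f" % r**2)   # a strong linear fit would support the image method
```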
Abstract:
Texture enhancement is an important component of image processing that finds extensive application in science and engineering. The quality of medical images, quantified using the imaging texture, plays a significant role in the routine diagnosis performed by medical practitioners. Most image texture enhancement is performed using classical integral-order differential mask operators. Recently, first-order fractional differential operators were used to enhance images. Experimentation with these methods led to the conclusion that fractional differential operators not only maintain the low-frequency contour features in the smooth areas of the image, but also nonlinearly enhance edges and textures corresponding to high-frequency image components. However, whilst these methods perform well in particular cases, they are not routinely useful across all applications. To this end, we apply the second-order Riesz fractional differential operator to improve upon existing approaches to texture enhancement. Compared with the classical integral-order differential mask operators and other first-order fractional differential operators, we find that our new algorithms provide higher signal-to-noise values and superior image quality.
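To make the general idea concrete, the sketch below applies a fractional differential filter in the Fourier domain, where a Riesz-type fractional derivative of order alpha acts as multiplication of the spectrum by |omega|^alpha, and adds the filtered component back to the image to boost high-frequency texture. The order, gain, and NumPy-based implementation are assumptions for illustration, not the operators evaluated in the paper.

```python
# Minimal sketch of Fourier-domain fractional texture enhancement.
# A Riesz-type fractional derivative of order `alpha` corresponds to
# multiplication by |omega|**alpha in the Fourier domain; adding the
# filtered component back to the image emphasises edges and texture.
# Parameter values are illustrative assumptions.
import numpy as np

def fractional_enhance(image, alpha=1.5, gain=0.4):
    image = image.astype(float)
    ny, nx = image.shape
    wy = np.fft.fftfreq(ny)[:, None]              # normalised vertical frequencies
    wx = np.fft.fftfreq(nx)[None, :]              # normalised horizontal frequencies
    magnitude = np.sqrt(wx**2 + wy**2) ** alpha   # |omega|^alpha filter symbol
    detail = np.real(np.fft.ifft2(np.fft.fft2(image) * magnitude))
    enhanced = image + gain * detail              # boost high-frequency texture
    return np.clip(enhanced, 0, 255)

# Usage (hypothetical 8-bit grayscale array `img`):
# out = fractional_enhance(img, alpha=1.5, gain=0.4)
```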
Abstract:
Faces are complex patterns that often differ in only subtle ways. Face recognition algorithms have difficulty coping with differences in lighting, cameras, pose, expression, etc. We propose a novel approach for facial recognition based on a new feature extraction method called fractal image-set encoding. This feature extraction method is a specialized fractal image coding technique that makes fractal codes more suitable for object and face recognition. A fractal code of a gray-scale image can be divided into two parts: geometrical parameters and luminance parameters. We show that fractal codes for an image are not unique and that we can change the set of fractal parameters without significant change in the quality of the reconstructed image. Fractal image-set coding keeps the geometrical parameters the same for all images in the database. Differences between images are captured in the non-geometrical or luminance parameters, which are faster to compute. Results on a subset of the XM2VTS database are presented.
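As a rough sketch of the split between geometrical and luminance parameters, the code below holds a domain-to-range block pairing fixed (the geometric part) and, for each range block, fits only the contrast and brightness terms of the affine luminance map by least squares. The block sizes, the particular fixed pairing, and the NumPy implementation are assumptions for illustration rather than the actual fractal image-set encoder.

```python
# Minimal sketch: with the geometric part of a fractal code held fixed
# (each range block paired with a predetermined domain block), only the
# luminance parameters (contrast s, brightness o) are fitted per block.
# Block size and the simple fixed pairing are illustrative assumptions.
import numpy as np

def downsample2(block):
    """Average 2x2 neighbourhoods: maps a 2Rx2R domain block to RxR."""
    return 0.25 * (block[0::2, 0::2] + block[1::2, 0::2]
                   + block[0::2, 1::2] + block[1::2, 1::2])

def luminance_parameters(image, R=8):
    """For a fixed range-to-domain pairing, fit s and o so that
    s * D + o approximates each range block in a least-squares sense."""
    image = image.astype(float)
    h, w = image.shape                     # assumed at least 2R x 2R
    params = []
    for y in range(0, h - R + 1, R):
        for x in range(0, w - R + 1, R):
            rng = image[y:y + R, x:x + R]
            # Fixed geometry (assumed): the 2Rx2R domain block at the same
            # corner, clipped to stay inside the image, then downsampled.
            dy, dx = min(y, h - 2 * R), min(x, w - 2 * R)
            dom = downsample2(image[dy:dy + 2 * R, dx:dx + 2 * R])
            A = np.column_stack([dom.ravel(), np.ones(R * R)])
            (s, o), *_ = np.linalg.lstsq(A, rng.ravel(), rcond=None)
            params.append((s, o))          # luminance part of the fractal code
    return np.array(params)                # one (s, o) pair per range block
```

Images sharing the same fixed geometry could then be compared directly through these per-block luminance parameter vectors.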
Abstract:
Sounds of the Suburb was a commissioned public art proposal based upon a brief set by Queensland Rail for the major redevelopment of their Brunswick Street Railway Station, Fortitude Valley, Brisbane. I proposed a large-scale electronic artwork distributed across the glass-fronted structure of the station’s new concourse building. It was designed as a network of LED-based ‘tracking’, along which would travel electronically animated ‘trains’ of text synchronised to the actual train timetables. Each message packet moved endlessly through a complex spatial network of ‘tracks’ and ‘stations’ set inside, outside and across the concourse. The design was underpinned by a large-scale image of sound waves etched onto the architecture’s glass and was accompanied by two inset monitors, each presenting ghosted images of passenger movements within the concourse, time-delay recorded and then cross-combined in real time to form new composites.

Each moving, reprogrammable phrase was conceived as a ‘train of thought’ and ostensibly contained an idea or concept about the popular cultures surrounding contemporary music, thereby meeting the brief that the work should speak to the diverse musical cultures central to Fortitude Valley’s image as an entertainment hub. These cultural ‘memes’, gathered from both passengers and the music press, were situated alongside quotes from philosophies of networking, speed and digital ecologies. These texts would continually propagate, replicate and cross-fertilise as they moved throughout the ‘network’, thereby writing a constantly evolving ‘textual soundscape’ of that place. This idea was further cemented through the pace, scale and rhythm of passenger movements continually recorded and re-presented on the smaller screens.
Abstract:
The requirement to monitor the rapid pace of environmental change due to global warming and human development is producing large volumes of data but placing much stress on the capacity of ecologists to store, analyse and visualise that data. To date, much of the data has been provided by low-level sensors monitoring soil moisture, dissolved nutrients, light intensity, gas composition and the like. However, a significant part of an ecologist’s work is to obtain information about species diversity, distributions and relationships. This task typically requires the physical presence of an ecologist in the field, listening and watching for species of interest. It is an extremely difficult task to automate because of the higher-order difficulties in bandwidth, data management and intelligent analysis if one wishes to emulate the highly trained eyes and ears of an ecologist. This paper is concerned with just one part of the bigger challenge of environmental monitoring: the acquisition and analysis of acoustic recordings of the environment. Our intention is to provide helpful tools to ecologists, tools that apply information and computational technologies to all aspects of the acoustic environment. The online system which we are building in conjunction with ecologists offers an integrated approach to recording, data management and analysis. The ecologists we work with have different requirements, and therefore we have adopted a toolbox approach; that is, we offer a number of different web services that can be concatenated according to need. In particular, one group of ecologists is concerned with identifying the presence or absence of species and their distributions in time and space. Another group, motivated by legislative requirements for measuring habitat condition, is interested in summary indices of environmental health. In both cases, the key issues are scalability and automation.
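As an illustration of the kind of summary index such a toolbox might expose as a web service, the sketch below computes a simple spectral-entropy style index from a recording. The file name, window length, and SciPy-based pipeline are assumptions, not the indices implemented by the system described here.

```python
# Minimal sketch of a summary acoustic index: spectral entropy of a recording.
# Low values suggest energy concentrated in few bands (e.g. one dominant call
# or machine noise); high values suggest a broadband, diverse soundscape.
# File name, window size and the SciPy-based pipeline are assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def spectral_entropy_index(path, nperseg=512):
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                       # mix stereo down to mono
        samples = samples.mean(axis=1)
    _, _, sxx = spectrogram(samples, fs=rate, nperseg=nperseg)
    power = sxx.mean(axis=1)                   # average power per frequency bin
    p = power / power.sum()
    entropy = -np.sum(p * np.log2(p + 1e-12))  # Shannon entropy of the spectrum
    return entropy / np.log2(len(p))           # normalised to [0, 1]

# Usage (hypothetical file): print(spectral_entropy_index("dawn_chorus.wav"))
```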
Abstract:
Introduction: Bone mineral density (BMD) is currently the preferred surrogate for bone strength in clinical practice. Finite element analysis (FEA) is a computer simulation technique that can predict the deformation of a structure when a load is applied, providing a measure of stiffness (N mm⁻¹). Finite element analysis of X-ray images (3D-FEXI) is a FEA technique whose analysis is derived from a single 2D radiographic image. Methods: 18 excised human femora had previously been scanned with quantitative computed tomography, from which 2D BMD-equivalent radiographic images were derived, and mechanically tested to failure in a stance-loading configuration. A 3D proximal femur shape was generated from each 2D radiographic image and used to construct 3D-FEA models. Results: The coefficient of determination (R2%) for predicting failure load was 54.5% for BMD and 80.4% for 3D-FEXI. Conclusions: This ex vivo study demonstrates that 3D-FEXI derived from a conventional 2D radiographic image has the potential to significantly increase the accuracy of failure load assessment of the proximal femur compared with that currently achieved with BMD. This approach may be readily extended to routine clinical BMD images derived by dual energy X-ray absorptiometry.
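The comparison reported above amounts to asking how much of the variance in measured failure load each predictor explains. The sketch below computes the coefficient of determination for two predictors against measured failure loads; the file names and arrays are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: compare how well two predictors (areal BMD and 3D-FEXI
# stiffness) explain measured femoral failure loads, via the coefficient of
# determination R^2 from simple linear regression. Files are assumed placeholders.
import numpy as np
from scipy import stats

def r_squared(predictor, failure_load):
    slope, intercept, r, p, se = stats.linregress(predictor, failure_load)
    return r ** 2

bmd = np.loadtxt("bmd.csv", delimiter=",")                        # g/cm^2 (assumed file)
fexi_stiffness = np.loadtxt("fexi_stiffness.csv", delimiter=",")  # N/mm (assumed file)
failure_load = np.loadtxt("failure_load.csv", delimiter=",")      # N, from mechanical testing

print("R^2 (BMD):     %.3f" % r_squared(bmd, failure_load))
print("R^2 (3D-FEXI): %.3f" % r_squared(fexi_stiffness, failure_load))
```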
Abstract:
The validation of Computed Tomography (CT) based 3D models is an integral part of studies involving 3D models of bones. This is of particular importance when such models are used for finite element studies. The validation of 3D models typically involves the generation of a reference model representing the bone's outer surface. Several different devices have been utilised for digitising a bone’s outer surface, such as mechanical 3D digitising arms, mechanical 3D contact scanners, electro-magnetic tracking devices and 3D laser scanners. However, none of these devices is capable of digitising a bone’s internal surfaces, such as the medullary canal of a long bone. Therefore, this study investigated the use of a 3D contact scanner, in conjunction with a microCT scanner, for generating a reference standard for validating the internal and external surfaces of a CT-based 3D model of an ovine femur. One fresh ovine limb was scanned using a clinical CT scanner (Phillips, Brilliance 64) with a pixel size of 0.4 mm² and slice spacing of 0.5 mm. The limb was then dissected to obtain the soft-tissue-free bone, while care was taken to protect the bone’s surface. A desktop mechanical 3D contact scanner (Roland DG Corporation, MDX 20, Japan) was used to digitise the surface of the denuded bone at a resolution of 0.3 × 0.3 × 0.025 mm. The digitised surfaces were reconstructed into a 3D model using reverse engineering techniques in Rapidform (Inus Technology, Korea). After digitisation, the distal and proximal parts of the bone were removed so that the shaft could be scanned with a microCT scanner (µCT40, Scanco Medical, Switzerland). The shaft, with the bone marrow removed, was immersed in water and scanned with a voxel size of 0.03 mm³. The bone contours were extracted from the image data using the Canny edge filter in Matlab (The MathWorks). The extracted bone contours were reconstructed into 3D models using Amira 5.1 (Visage Imaging, Germany). The 3D models of the bone’s outer surface reconstructed from CT and microCT data were compared against the 3D model generated using the contact scanner. The 3D model of the inner canal reconstructed from the microCT data was compared against the 3D models reconstructed from the clinical CT scanner data. The disparity between the surface geometries of two models was calculated in Rapidform and recorded as an average distance with standard deviation. The comparison of the 3D model of the whole bone generated from the clinical CT data with the reference model gave a mean error of 0.19±0.16 mm, while the shaft was more accurate (0.08±0.06 mm) than the proximal (0.26±0.18 mm) and distal (0.22±0.16 mm) parts. The comparison between the outer 3D model generated from the microCT data and the contact scanner model gave a mean error of 0.10±0.03 mm, indicating that the microCT-generated models are sufficiently accurate for validating 3D models generated from other methods. The comparison of the inner models generated from microCT data with those from the clinical CT data gave an error of 0.09±0.07 mm. Utilising a mechanical contact scanner in conjunction with a microCT scanner thus enabled validation of both the outer surface of a CT-based 3D model of an ovine femur and the surface of the model’s medullary canal.
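A rough sketch of the kind of surface comparison reported above is given below: it estimates the average disparity (with standard deviation) from the vertices of one reconstructed surface to their nearest neighbours on a reference surface. The point-cloud file names, and the use of NumPy/SciPy rather than Rapidform, are assumptions for illustration.

```python
# Minimal sketch: average surface disparity between a CT-based model and a
# reference model, estimated as the mean nearest-neighbour distance between
# their vertex clouds. File names and the SciPy-based approach (instead of
# Rapidform) are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def surface_disparity(model_vertices, reference_vertices):
    """Mean and standard deviation of distances (mm) from each model vertex
    to the closest vertex of the reference cloud."""
    tree = cKDTree(reference_vertices)
    distances, _ = tree.query(model_vertices)
    return distances.mean(), distances.std()

ct_model = np.loadtxt("ct_model_vertices.xyz")        # N x 3, assumed exported vertices
reference = np.loadtxt("contact_scan_vertices.xyz")   # M x 3, reference surface

mean_err, std_err = surface_disparity(ct_model, reference)
print("Disparity: %.2f ± %.2f mm" % (mean_err, std_err))
```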
Abstract:
This study focuses on trends in contemporary Australian playwrighting, discussing recent investigations into the playwrighting process. The study analyses the current state of this country’s playwrighting industry, with a particular focus on programming trends since 1998. It seeks to explore the implications of this current theatrical climate, in particular the types of work most commonly being favoured for production. It argues that Australian plays are under-represented (compared to non-Australian plays) on ‘mainstream’ stages and that audiences might benefit from more challenging modes of writing than the popular three-act realist play models. The thesis argues that ‘New Lyricism’ might fill this position of offering an innovative Australian playwrighting mode. New Lyricism is characterised by a set of common aesthetics, including a non-linear narrative structure, a poetic use of language and magic realism. Several Australian playwrights who have adopted this mode of writing are identified and their works examined. The author’s play Floodlands is presented as a case study and the author’s creative process is examined in light of the published critical discussions about experimental playwriting work.
Abstract:
Bone mineral density (BMD) is currently the preferred surrogate for bone strength in clinical practice. Finite element analysis (FEA) is a computer simulation technique that can predict the deformation of a structure when a load is applied, providing a measure of stiffness (N mm⁻¹). Finite element analysis of X-ray images (3D-FEXI) is a FEA technique whose analysis is derived from a single 2D radiographic image. This ex-vivo study demonstrates that 3D-FEXI derived from a conventional 2D radiographic image has the potential to significantly increase the accuracy of failure load assessment of the proximal femur compared with that currently achieved with BMD.
Abstract:
The Online Nail Artist (ONA) project aims to create a web-based application for nail salon customers. The application will help customers to customize their hands virtually and find suitable nail colors. The main research question is how to reconfigure the user experience of a product service through customization to user needs. As a result, the key function of the application will be to customize a virtual hand image by selecting a matching skin tone, nail length and nail shape in accordance with the customer's own hands. The objectives of the project are to 1) identify customers’ experience in relation to product features through preliminary research on existing products; 2) create a conceptual framework for the project's development in order to reflect the user experience identified; and 3) present a mock-up which includes the key features of the ONA for future development.
Abstract:
We describe the design and evaluation of a platform for networks of cameras in low-bandwidth, low-power sensor networks. In our work to date we have investigated two different DSP hardware/software platforms for undertaking the tasks of compression and object detection and tracking. We compare the relative merits of each of the hardware and software platforms in terms of both performance and energy consumption. Finally we discuss what we believe are the ongoing research questions for image processing in WSNs.
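As a hedged illustration of the kind of lightweight in-node processing such a platform might run, the sketch below performs simple frame-differencing motion detection, which keeps computation and radio traffic low by transmitting only when a scene change is detected. The thresholds, frame source, and NumPy implementation are assumptions, not the DSP firmware evaluated in the paper.

```python
# Minimal sketch of lightweight in-node object detection by frame differencing:
# only frames whose change against a slowly adapting background exceeds a
# threshold would be compressed and transmitted, saving bandwidth and energy.
# Thresholds, the frame source and the NumPy implementation are assumptions.
import numpy as np

class MotionDetector:
    def __init__(self, alpha=0.05, threshold=15.0, min_changed_fraction=0.01):
        self.alpha = alpha                            # background update rate
        self.threshold = threshold                    # per-pixel change (0-255 scale)
        self.min_changed_fraction = min_changed_fraction
        self.background = None

    def detect(self, frame):
        frame = frame.astype(float)
        if self.background is None:
            self.background = frame
            return False
        changed = np.abs(frame - self.background) > self.threshold
        # Slowly adapt the background to lighting changes.
        self.background = (1 - self.alpha) * self.background + self.alpha * frame
        return changed.mean() > self.min_changed_fraction

# Usage (hypothetical grayscale frames from a camera node):
# detector = MotionDetector()
# for frame in capture_frames():        # assumed frame source
#     if detector.detect(frame):
#         transmit(compress(frame))     # assumed node-side compression/radio calls
```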
Abstract:
Modern computer graphics systems are able to construct renderings of such high quality that viewers are deceived into regarding the images as coming from a photographic source. Large amounts of computing resources are expended in this rendering process, using complex mathematical models of lighting and shading. However, psychophysical experiments have revealed that viewers only regard certain informative regions within a presented image. Furthermore, it has been shown that these visually important regions contain low-level visual feature differences that attract the attention of the viewer. This thesis will present a new approach to image synthesis that exploits these experimental findings by modulating the spatial quality of image regions by their visual importance. Efficiency gains are therefore reaped, without sacrificing much of the perceived quality of the image. Two tasks must be undertaken to achieve this goal. Firstly, the design of an appropriate region-based model of visual importance, and secondly, the modification of progressive rendering techniques to effect an importance-based rendering approach. A rule-based fuzzy logic model is presented that computes, using spatial feature differences, the relative visual importance of regions in an image. This model improves upon previous work by incorporating threshold effects induced by global feature difference distributions and by using texture concentration measures. A modified approach to progressive ray-tracing is also presented. This new approach uses the visual importance model to guide the progressive refinement of an image. In addition, this concept of visual importance has been incorporated into supersampling, texture mapping and computer animation techniques. Experimental results are presented, illustrating the efficiency gains reaped from using this method of progressive rendering. This visual importance-based rendering approach is expected to have applications in the entertainment industry, where image fidelity may be sacrificed for efficiency purposes, as long as the overall visual impression of the scene is maintained. Different aspects of the approach should find many other applications in image compression, image retrieval, progressive data transmission and active robotic vision.
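To illustrate the general idea of importance-guided progressive rendering (not the thesis's rule-based fuzzy logic model itself), the sketch below scores each image region from simple feature differences in a low-resolution preview and distributes a fixed sample budget in proportion to those scores. The region size, the contrast-based importance measure, and the budget rule are illustrative assumptions.

```python
# Minimal sketch of importance-guided sample allocation for progressive
# rendering: regions judged more visually important receive a larger share
# of a fixed ray/sample budget. The importance measure (local contrast of a
# low-resolution preview) and the allocation rule are assumptions, not the
# thesis's fuzzy logic model.
import numpy as np

def region_importance(preview, region=16):
    """Crude importance map: standard deviation (local contrast) per region
    of a low-resolution preview image."""
    h, w = preview.shape
    scores = np.zeros((h // region, w // region))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            block = preview[i * region:(i + 1) * region,
                            j * region:(j + 1) * region]
            scores[i, j] = block.std()
    return scores

def allocate_samples(scores, total_samples):
    """Distribute a sample budget approximately proportionally to region
    importance, guaranteeing at least one sample per region."""
    weights = scores / scores.sum()
    return np.maximum(1, np.round(weights * total_samples)).astype(int)

# Usage (hypothetical preview array): more rays are traced in regions with
# higher counts during the next refinement pass.
# budget = allocate_samples(region_importance(preview), total_samples=500_000)
```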
Abstract:
Architecture for a Free Subjectivity reformulates the French philosopher Gilles Deleuze's model of subjectivity for architecture, by surveying the prolific effects of architectural encounter, and the spaces that figure in them. For Deleuze and his Lacanian collaborator Félix Guattari, subjectivity does not refer to a person, but to the potential for and event of matter becoming subject, and the myriad ways for this to take place. By extension, this book theorizes architecture as a self-actuating or creative agency for the liberation of purely "impersonal effects." Imagine a chemical reaction, a riot in the banlieues, indeed a walk through a city. Simone Brott declares that the architectural object does not merely take part in the production of subjectivity, but that it constitutes its own.