178 results for Shape Representation


Relevance:

20.00%

Publisher:

Abstract:

This thesis investigates Theatre for Young People (TYP) as a site of performance innovation. The inquiry focuses on contemporary dramaturgy, and its fieldwork aims to identify new dramaturgical principles operating in the creation and presentation of TYP. The research then assesses how these new principles contribute to Postdramatic Theatre theory. The inquiry springs from an imperative based in practice: young people under 25 have a literacy based on online hypertextual experiences, which take the reader outside the frames of a dramatic narrative and beyond principles such as linearity, dramatic unity, teleology and resolution. As a dramaturg and educator, I wanted to understand the new ways that young people engage with cultural products, and to identify and utilize the new principles of dramaturgy now in evidence. My research examines how two playwright/directors approach their work and the new principles that can be identified in their dramaturgy. The fieldwork is scoped into two case studies: the first on TJ Eckleberg, working in Australian Theatre for Young People, and the second on Kristo Šagor, working in German Children’s and Young People’s Theatre (KJT). These case studies address both types of production dramaturgy: the dramaturgy emergent through process in devised performance making, and that emergent in a performance based on a written playscript. In Case Study One the researcher, as participant observer, worked as production dramaturg on a large-scale, site-specific performance, observing the dramaturgy in process of its director and chief devisor. In Case Study Two the researcher, as observer and analyst, undertook a performance analysis of three playscripts and productions by a contemporary German playwright and director.
Utilizing participant observation, reflective practice and grounded analysis, the case studies identified two new principles animating the dramaturgy of these TYP practitioners, namely ‘displacement’ and ‘installation’. Taking practice into theory, the thesis concludes by demonstrating how displacement and installation contribute to Postdramatic Theatre’s “arsenal of expressive gestures which serve as theatre’s response to changed social communication under the conditions of generalized communication technologies” (Lehmann, 2006, p. 23). This research makes an original contribution to knowledge by evidencing that the principles of Postdramatic Theatre theory lie within the practice of contemporary Theatre for Young People. It also contributes valuable research to a specialized, often overlooked terrain, namely dramaturgy in Theatre for Young People, presented here with a contemporary, international and intercultural perspective.

Relevance:

20.00%

Publisher:

Abstract:

In this article we introduce the term “energy polarization” to explain the politics of energy market reform in the Russian Duma. Our model tests the impact of regional energy production, party cohesion and ideology, and electoral mandate on the energy policy decisions of Duma deputies (oil, gas, and electricity bills and resolution proposals) between 1994 and 2003. We find a strong divide between Single-Member District (SMD) and Proportional Representation (PR) deputies. The high statistical significance of gas production throughout the three Duma terms demonstrates Gazprom's key position in the post-Soviet Russian economy. Oil production is variably significant in the first two Dumas, when the main legislative debates on oil privatization occurred. There is no constant left–right continuum, which is consistent with the deputies' proclaimed party ideology. The pro- and anti-reform poles observed in our Poole-based single-dimensional scale are not necessarily connected with liberal and state-oriented regulatory policies, respectively. Party switching is a solid indicator of Russia's polarized legislative dynamics when it comes to energy sector reform.

Relevance:

20.00%

Publisher:

Abstract:

Recovering position from sensor information is an important problem in mobile robotics, known as localisation. Localisation requires a map or some other description of the environment to provide the robot with a context in which to interpret sensor data. The mobile robot system under discussion uses an artificial neural representation of position. Building a geometrical map of the environment with a single camera and artificial neural networks is difficult; it is simpler to learn position as a function of the visual input. Usually when learning images, an intermediate representation is employed. An appropriate starting point for a biologically plausible image representation is the complex cells of the visual cortex, which have invariance properties that appear useful for localisation. The effectiveness for localisation of two different complex cell models is evaluated. Finally, the ability of a simple neural network with single-shot learning to recognise these representations and localise a robot is examined.
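The abstract does not name the two complex cell models evaluated; a standard starting point in the literature is the Gabor energy model, in which the squared responses of a quadrature pair of Gabor filters are summed, cancelling the stimulus phase. A minimal sketch under that assumption (all filter parameter values are illustrative, not taken from the thesis):

```python
import numpy as np

def gabor_pair(size=21, wavelength=8.0, theta=0.0, sigma=4.0):
    """Quadrature pair of Gabor filters (cosine/even and sine/odd phase)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    wave = 2 * np.pi * xr / wavelength
    return env * np.cos(wave), env * np.sin(wave)

def complex_cell_energy(patch, even, odd):
    """Energy model: squared quadrature responses summed. Squaring and
    summing cancels the phase (local position) of the stimulus, giving
    the shift invariance that makes complex-cell-like representations
    attractive for recognising a place from slightly shifted views."""
    return np.sum(patch * even) ** 2 + np.sum(patch * odd) ** 2
```

For a grating matching the filter's frequency and orientation, the energy response is nearly constant as the grating's phase shifts, whereas either filter response alone oscillates with phase.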

Relevance:

20.00%

Publisher:

Abstract:

Severe spinal deformity in young children is a formidable challenge for optimal treatment. Standard interventions for adolescents, such as spinal deformity correction and fusion, may not be appropriate for young patients with considerable growth remaining. Alternative surgical options that provide deformity correction and protect the remaining growth in the spine are needed to treat this group of patients [1, 2]. One such method is the use of shape memory alloy staples. We report our experience to date using video-assisted thoracoscopic insertion of shape memory alloy staples. A retrospective review was conducted of 13 patients with scoliosis, aged 7 to 13 years, who underwent video-assisted thoracoscopic insertion of shape memory alloy staples. In our experience, video-assisted thoracoscopic insertion of shape memory alloy staples is a safe procedure, with no complications noted. It is a reliable method of providing curve stability; however, the follow-up results to date indicate that the effectiveness of the procedure is greater in younger patients.

Relevance:

20.00%

Publisher:

Abstract:

RatSLAM is a vision-based SLAM system based on extended models of the rodent hippocampus. RatSLAM creates environment representations that can be processed by the experience mapping algorithm to produce maps suitable for goal recall. The experience mapping algorithm also allows RatSLAM to map environments many times larger than could be achieved with a one-to-one correspondence between the map and the environment, by reusing the RatSLAM maps to represent multiple sections of the environment. This paper describes experiments investigating the effects of the environment-representation size ratio and visual ambiguity on mapping and goal navigation performance. The experiments demonstrate that system performance is only weakly dependent on either parameter in isolation, but strongly dependent on their joint values.

Relevance:

20.00%

Publisher:

Abstract:

The RatSLAM system can perform vision-based SLAM using a computational model of the rodent hippocampus. When the number of pose cells used to represent space in RatSLAM is reduced, artifacts are introduced that hinder its use for goal-directed navigation. This paper describes a new component for the RatSLAM system called an experience map, which provides a coherent representation for goal-directed navigation. Results are presented for two sets of real-world experiments, including a comparison with the original goal memory system's performance in the same environment. Preliminary results are also presented demonstrating the ability of the experience map to adapt to simple short-term changes in the environment.
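The core idea of an experience map can be sketched as a graph of experiences linked by transitions, so that goal recall reduces to a shortest-path search over accumulated travel times. The sketch below is illustrative only; the names, graph structure and use of Dijkstra's algorithm are assumptions for exposition, not the actual RatSLAM implementation:

```python
import heapq

def add_transition(graph, a, b, travel_time):
    """Link two experiences with the time taken to travel between them
    (assumed traversable in both directions for this sketch)."""
    graph.setdefault(a, {})[b] = travel_time
    graph.setdefault(b, {})[a] = travel_time

def route_to_goal(graph, start, goal):
    """Dijkstra over transition times: return the experience sequence
    with the minimum total travel time from start to goal, or None if
    the goal experience is unreachable."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, t in graph.get(node, {}).items():
            if nxt not in visited:
                heapq.heappush(queue, (cost + t, nxt, path + [nxt]))
    return None
```

With transitions A–B–C–D each taking 1 s and a direct A–D transition taking 5 s, the recalled route to the goal D is the faster three-hop path through B and C.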

Relevance:

20.00%

Publisher:

Abstract:

Aim. The paper is a report of a study to demonstrate how the use of schematics can provide procedural clarity and promote rigour in the conduct of case study research. Background. Case study research is a methodologically flexible approach to research design that focuses on a particular case – whether an individual, a collective or a phenomenon of interest. It is known as the 'study of the particular' for its thorough investigation of particular, real-life situations and is gaining increased attention in nursing and social research. However, the methodological flexibility it offers can leave the novice researcher uncertain of suitable procedural steps required to ensure methodological rigour. Method. This article provides a real example of a case study research design that utilizes schematic representation drawn from a doctoral study of the integration of health promotion principles and practices into a palliative care organization. Discussion. The issues discussed are: (1) the definition and application of case study research design; (2) the application of schematics in research; (3) the procedural steps and their contribution to the maintenance of rigour; and (4) the benefits and risks of schematics in case study research. Conclusion. The inclusion of visual representations of design with accompanying explanatory text is recommended in reporting case study research methods.

Relevance:

20.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
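The generalized Gaussian family mentioned above covers the Laplacian (shape parameter β = 1) and the Gaussian (β = 2) as special cases. The thesis fits β by least squares on a nonlinear function of the shape parameter; the sketch below instead uses the simpler moment-matching approach (inverting the E|x|/√E[x²] ratio by bisection) purely to illustrate how a shape parameter can be recovered from subband coefficients — it is not the thesis' estimator:

```python
import math
import random

def gg_ratio(beta):
    """Theoretical E|x| / sqrt(E[x^2]) for a generalized Gaussian with
    shape parameter beta (beta=1: Laplacian, beta=2: Gaussian). This
    ratio is monotonically increasing in beta, so it can be inverted."""
    return math.gamma(2.0 / beta) / math.sqrt(
        math.gamma(1.0 / beta) * math.gamma(3.0 / beta))

def estimate_shape(samples, lo=0.2, hi=5.0, iters=60):
    """Moment-matching estimate of the shape parameter: compute the
    sample mean-absolute-to-RMS ratio and invert gg_ratio by bisection."""
    n = len(samples)
    r = (sum(abs(x) for x in samples) / n) / math.sqrt(
        sum(x * x for x in samples) / n)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if gg_ratio(mid) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Wavelet detail coefficients of natural and fingerprint images are typically sharply peaked, so the fitted β usually falls well below 2; feeding the estimator Gaussian samples recovers β ≈ 2 as a sanity check.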

Relevance:

20.00%

Publisher:

Abstract:

Purpose: To investigate the influence of soft contact lenses on regional variations in corneal thickness and shape, while taking account of natural diurnal variations in these corneal parameters. Methods: Twelve young, healthy subjects wore 4 different types of soft contact lenses on 4 different days. The lenses were of two different materials (silicone hydrogel, hydrogel), designs (spherical, toric) and powers (–3.00 D, –7.00 D). Corneal thickness and topography measurements were taken before and after 8 hours of lens wear, and on two days without lens wear, using the Pentacam HR system. Results: The hydrogel toric contact lens caused the greatest corneal thickening in both the central (20.3 ± 10.0 microns) and peripheral cornea (24.1 ± 9.1 microns) (p < 0.001), with an obvious regional swelling of the cornea beneath the stabilizing zones. The anterior corneal surface generally showed slight flattening. All contact lenses resulted in central posterior corneal steepening, and this was weakly correlated with central corneal swelling (p = 0.03) and peripheral corneal swelling (p = 0.01). Conclusions: There was an obvious regional corneal swelling apparent after wear of the hydrogel soft toric lenses, due to the location of the thicker stabilization zones of these lenses. However, with the exception of the hydrogel toric lens, the magnitude of corneal swelling induced by the contact lenses over 8 hours of wear was less than the natural diurnal thinning of the cornea over the same period.