Abstract:
This paper proposes a generic decoupled image-based control scheme for cameras obeying the unified projection model. The scheme is based on the spherical projection model. Invariants to rotational motion are computed from this projection and used to control the translational degrees of freedom. Importantly, we form invariants which decrease the sensitivity of the interaction matrix to object depth variation. Finally, the proposed results are validated with experiments using a classical perspective camera as well as a fisheye camera mounted on a 6-DOF robotic platform.
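The rotational invariance that motivates this scheme can be illustrated with a minimal sketch (not the paper's actual features): points projected onto the unit sphere are all rotated together by a camera rotation, so pairwise dot products between spherical projections are unchanged.

```python
import numpy as np

def project_to_sphere(points):
    # Spherical projection: normalise each 3-D point to the unit sphere.
    return points / np.linalg.norm(points, axis=1, keepdims=True)

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Three illustrative scene points expressed in the camera frame.
P = np.array([[0.2, 0.1, 1.0], [-0.3, 0.4, 2.0], [0.5, -0.2, 1.5]])

s = project_to_sphere(P)
s_rot = project_to_sphere(P @ rotation_z(0.7).T)  # same points, rotated camera

# Pairwise dot products on the sphere are invariant to a pure rotation.
before = s @ s.T
after = s_rot @ s_rot.T
print(np.allclose(before, after))  # True
```

Quantities built from such rotation-invariant measurements can then be reserved for controlling translation alone, which is the decoupling idea the abstract describes.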
Abstract:
The purpose of this study is to contribute to the cross-disciplinary body of literature on identity and organisational culture. This study empirically investigated the Hatch and Schultz (2002) Organisational Identity Dynamics (OID) model to look at linkages between identity, image, and organisational culture. This study used processes defined in the OID model as a theoretical frame by which to understand the relationships between actual and espoused identity manifestations across visual identity, corporate identity, and organisational identity. The linking processes of impressing, mirroring, reflecting, and expressing were discussed at three unique levels in the organisation. The overarching research question, "How does the organisational identity dynamics process manifest itself in practice at different levels within an organisation?", was used as a means of providing empirical understanding to the previously theoretical OID model. Case study analysis was utilised to provide exploratory data across the organisational groups of: Level A - Senior Marketing and Corporate Communications Management, Level B - Marketing and Corporate Communications Staff, and Level C - Non-Marketing Managers and Employees. Data was collected via 15 in-depth interviews, with documentary analysis used as a supporting mechanism to provide triangulation in analysis. Data was analysed against the impressing, mirroring, reflecting, and expressing constructs, with specific criteria developed from the literature to provide a detailed analysis of each process. Conclusions revealed marked differences in the ways in which OID processes occurred across different levels, with implications for the ways in which VI, CI, and OI interact to develop holistic identity across organisational levels.
Implications for theory detail the need to understand and utilise cultural understanding in identity programs as well as the value in developing identity communications which represent an actual rather than an espoused position.
Abstract:
With an increasing body of literature linking the human resource management and marketing fields, one area receiving increased academic attention is how an organisation’s corporate reputation can be managed to attract potential recruits and shape their employment expectations through their psychological contracts. This paper seeks to enhance current models which focus on the interrelationship of corporate reputation and psychological contract theory. It is argued that a number of factors need to be considered in order to build a firmer foundation for such a theory. Firstly, a common understanding of the psychological contract needs to be established such that the focus on either expectations or promises is clarified. Secondly, the included components of the psychological contract need to be considered in light of their empirical founding and their relationship with one another. Thirdly, the interrelationship of corporate reputation, employer branding, identity and image needs to be explicated within the context of how they both influence and interrelate with the psychological contract. The final consideration surrounds the opportunity for potential employees to be considered within the corporate reputation literature as a significant stakeholder group.
Abstract:
This paper reports on the empirical comparison of seven machine learning algorithms in texture classification with application to vegetation management in power line corridors. Aiming at classifying tree species in power line corridors, an object-based method is employed. Individual tree crowns are segmented as the basic classification units and three classic texture features are extracted as the input to the classification algorithms. Several widely used performance metrics are used to evaluate the classification algorithms. The experimental results demonstrate that the classification performance depends on the performance metric, the characteristics of the datasets and the features used.
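The comparison protocol described above can be sketched in a few lines: several classifiers are trained on the same feature vectors and scored under more than one metric via cross-validation. The classifiers, metrics and synthetic data below are illustrative stand-ins, not the paper's actual seven algorithms or tree-crown texture features.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for three texture features per segmented tree crown.
X, y = make_classification(n_samples=200, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)

classifiers = {
    "SVM": SVC(),
    "kNN": KNeighborsClassifier(),
    "Tree": DecisionTreeClassifier(random_state=0),
}

results = {}
for name, clf in classifiers.items():
    # Score each algorithm under several metrics, as in the comparison.
    results[name] = {m: cross_val_score(clf, X, y, cv=5, scoring=m).mean()
                     for m in ("accuracy", "f1")}
    print(name, results[name])
```

Reporting the full metric-by-metric table, rather than a single score, is what lets the ranking of algorithms be seen to change with the chosen metric and dataset.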
Abstract:
Advances in digital technology have caused a radical shift in moving image culture. This has occurred in both modes of production and sites of exhibition, resulting in a blurring of boundaries that previously defined a range of creative disciplines. Re-Imagining Animation: The Changing Face of the Moving Image, by Paul Wells and Johnny Hardstaff, argues that as a result of these blurred disciplinary boundaries, the term “animation” has become a “catch all” for describing any form of manipulated moving image practice. Understanding animation therefore requires (re)defining the medium within contemporary moving image culture. Via a series of case studies, the book engages with a range of moving image works, interrogating “how the many and varied approaches to making film, graphics, visual artefacts, multimedia and other intimations of motion pictures can now be delineated and understood” (p. 7). The structure and clarity of content make this book ideally suited to any serious study of contemporary animation which accepts animation as a truly interdisciplinary medium.
Abstract:
Despite the global financial downturn, the Australian rail industry is in a period of expansion. Reports indicate that the industry is not attracting sufficient entry-level and mid-career engineers and skilled technicians from within the Australian labour market and is facing widespread retirements from an ageing workforce. This paper reports on a completed qualitative study that explores the perceptions of engineering students, their lecturers, careers advisors and recruitment consultants regarding rail as a brand and of careers in the rail industry. Findings are presented about career knowledge, job characteristic preferences, and branding and image, and indicate that rail as a brand has a dated image, that young people and their influencers have little knowledge of rail careers, and that rail could better focus its image and recruitment strategies. Conclusions include suggestions for more effective attraction and image strategies for the industry and for further research.
Abstract:
With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows the users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: what is the basic unit of semantic representation? how can the semantic unit be linked to high-level image knowledge? how can the contextual information be stored and utilized for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next-generation Web, aims at making the content of whatever type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industries are beginning to pay more attention to the Multimedia Semantic Web. The Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, is still a worthwhile investigation. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as below. 1) Salient object extraction.
A salient object serves as the basic unit in image semantic extraction as it captures the common visual properties of the objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms often fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework. 2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts. 3) Image object annotation. In this phase, each object is labelled with a mid-level concept from the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts. 4) Scene semantic annotation. The scene semantic extraction phase determines the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies. Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist more frequently with which scene type. The scene configuration is represented in a probabilistic graphical model, and probabilistic inference is employed to calculate the scene type given an annotated image.
To evaluate the proposed methods, a series of experiments was conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
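The scene-inference step described in phase 4 can be sketched with a deliberately simple co-occurrence model: score each scene type by how well the annotated objects fit its object-occurrence probabilities, then take the argmax. The scenes, objects and probabilities below are invented for illustration; the thesis uses a probabilistic graphical model built from ontology knowledge rather than this naive-Bayes-style reduction.

```python
import math

# Hypothetical P(object | scene), standing in for the learned scene
# configuration that records which objects co-occur with which scene.
cooccurrence = {
    "beach":  {"sky": 0.9, "sand": 0.8, "water": 0.7, "tree": 0.2},
    "forest": {"sky": 0.6, "sand": 0.05, "water": 0.1, "tree": 0.9},
}
prior = {"beach": 0.5, "forest": 0.5}

def scene_type(objects):
    # Score = log prior + sum of log-likelihoods of the observed object
    # labels; unseen objects get a small floor probability.
    best, best_score = None, -math.inf
    for scene, probs in cooccurrence.items():
        score = math.log(prior[scene])
        score += sum(math.log(probs.get(obj, 1e-3)) for obj in objects)
        if score > best_score:
            best, best_score = scene, score
    return best

print(scene_type(["sky", "water", "sand"]))  # beach
```

A full graphical model additionally encodes dependencies between objects, but the inference pattern, combining mid-level object labels with contextual co-occurrence knowledge to decide the scene, is the same.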
Abstract:
This manuscript took a 'top down' approach to understanding survival of inhabitant cells in the ecosystem bone, working from higher to lower length and time scales through the hierarchical ecosystem of bone. Our working hypothesis is that nature “engineered” the skeleton using a 'bottom up' approach, where mechanical properties of cells emerge from their adaptation to their local mechanical milieu. Cell aggregation and formation of higher order anisotropic structure result in emergent architectures through cell differentiation and extracellular matrix secretion. These emergent properties, including mechanical properties and architecture, result in mechanical adaptation at length scales and longer time scales which are most relevant for the survival of the vertebrate organism [Knothe Tate and von Recum 2009]. We are currently using insights from this approach to harness nature’s regeneration potential and to engineer novel mechanoactive materials [Knothe Tate et al. 2007, Knothe Tate et al. 2009]. In addition to potential applications of these exciting insights, these studies may provide important clues to the evolution and development of vertebrate animals. For instance, one might ask why mesenchymal stem cells condense at all. There is a putative advantage to self-assembly and cooperation, but this advantage is somewhat outweighed by the need for infrastructural complexity (e.g., circulatory systems comprised of specific differentiated cell types which in turn form conduits and pumps to overcome limitations of mass transport via diffusion; diffusion is untenable for multicellular organisms larger than 250 microns in diameter). A better question might be: why do cells build skeletal tissue? Once cooperating cells in tissues begin to deplete local sources of food in their aquatic environment, those that have evolved a means to locomote likely have an evolutionary advantage.
Once the environment becomes less aquatic and more terrestrial, self-assembled organisms with the ability to move on land might have conferred evolutionary advantages as well. So did the cytoskeleton evolve several length scales, enabling the emergence of skeletal architecture for vertebrate animals? Did the evolutionary advantage of motility over noncompliant terrestrial substrates (walking on land) favor adaptations including the emergence of intracellular architecture (changes in the cytoskeleton and upregulation of structural protein manufacture), intercellular condensation, mineralization of tissues, and the emergence of higher order architectures? How far does evolutionary Darwinism extend, and how can we exploit this knowledge to engineer smart materials and architectures on Earth and in new, exploratory environments? [Knothe Tate et al. 2008]. We are limited only by our ability to imagine. Ultimately, we aim to understand nature, mimic nature, guide nature and/or exploit nature’s engineering paradigms without engineering ourselves out of existence.
Abstract:
In this paper, a method has been developed for estimating pitch angle, roll angle and aircraft body rates based on horizon detection and temporal tracking using a forward-looking camera, without assistance from other sensors. Using an image processing front-end, we select several lines in an image that may or may not correspond to the true horizon. The optical flow at each candidate line is calculated, which may be used to measure the body rates of the aircraft. Using an Extended Kalman Filter (EKF), the aircraft state is propagated using a motion model and a candidate horizon line is associated using a statistical test based on the optical flow measurements and the location of the horizon. Once associated, the selected horizon line, along with the associated optical flow, is used as a measurement to the EKF. To test the accuracy of the algorithm, two flights were conducted, one using a highly dynamic Uninhabited Airborne Vehicle (UAV) in clear flight conditions and the other in a human-piloted Cessna 172 in conditions where the horizon was partially obscured by terrain, haze and smoke. The UAV flight resulted in pitch and roll error standard deviations of 0.42◦ and 0.71◦ respectively when compared with a truth attitude source. The Cessna flight resulted in pitch and roll error standard deviations of 1.79◦ and 1.75◦ respectively. The benefits of selecting and tracking the horizon using a motion model and optical flow, rather than naively relying on the image processing front-end, are also demonstrated.
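The predict/update cycle at the heart of this approach can be reduced to a one-state sketch: a single roll angle propagated with a rate input (playing the role of the optical-flow-derived body rate) and corrected with a horizon-derived angle measurement. The real filter is a full EKF over attitude and body rates; all values below are illustrative, not the paper's tuning.

```python
def predict(x, P, rate, dt, q):
    # Propagate the state with the motion model x' = x + rate * dt,
    # inflating the variance by the process noise q.
    return x + rate * dt, P + q

def update(x, P, z, r):
    # Fuse a horizon-derived angle measurement z (measurement model H = 1).
    K = P / (P + r)                      # Kalman gain
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 1.0                          # initial roll estimate and variance
# (horizon angle measurement, body-rate input) per frame, in radians.
for z, rate in [(0.10, 0.05), (0.12, 0.01), (0.11, -0.02)]:
    x, P = predict(x, P, rate, dt=1.0, q=0.01)
    x, P = update(x, P, z, r=0.05)

print(round(x, 3), round(P, 4))
```

The paper's association step, statistically testing each candidate line against the predicted horizon before accepting it as a measurement, slots in between `predict` and `update`: only a candidate consistent with the propagated state is fused.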
Abstract:
The main focus of this paper is the motion planning problem for a deeply submerged rigid body. The equations of motion are formulated and presented using the framework of differential geometry, and these equations incorporate external dissipative and restoring forces. We consider a kinematic reduction of the affine connection control system for the rigid body submerged in an ideal fluid, and present an extension of this reduction to the forced affine connection control system for the rigid body submerged in a viscous fluid. The motion planning strategy is based on kinematic motions; the integral curves of rank-one kinematic reductions. This method is of particular interest for autonomous underwater vehicles which cannot directly control all six degrees of freedom (such as torpedo-shaped AUVs) or in the case of actuator failure (i.e., an under-actuated scenario). A practical example is included to illustrate our technique.
Decoupled trajectory planning for a submerged rigid body subject to dissipative and potential forces
Abstract:
This paper studies the practical but challenging problem of motion planning for a deeply submerged rigid body. Here, we formulate the dynamic equations of motion of a submerged rigid body under the architecture of differential geometric mechanics and include external dissipative and potential forces. The mechanical system is represented as a forced affine-connection control system on the configuration space SE(3). Solutions to the motion planning problem are computed by concatenating and reparameterizing the integral curves of decoupling vector fields. We provide an extension to this inverse kinematic method to compensate for external potential forces caused by buoyancy and gravity. We present a mission scenario and implement the theoretically computed control strategy on a test-bed autonomous underwater vehicle. This scenario emphasizes the use of this motion planning technique in the under-actuated situation, in which the vehicle loses direct control of one or more degrees of freedom. We include experimental results to illustrate our technique and validate our method.
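For readers unfamiliar with the formalism, the general shape of a forced affine-connection control system can be sketched as follows (the notation is the standard geometric-mechanics one and is not necessarily the paper's own):

```latex
\nabla_{\gamma'(t)}\,\gamma'(t)
  \;=\; Y_{\mathrm{diss}}\bigl(\gamma'(t)\bigr)
  \;+\; Y_{\mathrm{pot}}\bigl(\gamma(t)\bigr)
  \;+\; \sum_{a=1}^{m} u^{a}(t)\, Y_{a}\bigl(\gamma(t)\bigr),
```

where $\gamma(t)$ is a curve on SE(3), $\nabla$ is the affine connection encoding the (added-mass) inertia, $Y_{\mathrm{diss}}$ and $Y_{\mathrm{pot}}$ are the dissipative and potential (buoyancy/gravity) force fields, and the $Y_a$ are the $m \le 6$ input vector fields of an under-actuated vehicle. A decoupling vector field is then one whose integral curves, suitably reparameterized in time, are trajectories of this system; concatenating such curves yields the motion plans described above.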
Abstract:
In this paper we analyze the equations of motion of a submerged rigid body. Our motivation is based on recent developments done in trajectory design for this problem. Our goal is to relate some properties of singular extremals to the existence of decoupling vector fields. The ideas displayed in this paper can be viewed as a starting point to a geometric formulation of the trajectory design problem for mechanical systems with potential and external forces.
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area based, transform based, feature based, phase based, hybrid, relaxation based, dynamic programming and object space methods. A number of area based matching metrics as well as the rank and census transforms were implemented, in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications. 
A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
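Two of the matching costs discussed above can be illustrated in a few lines: the Sum of Absolute Differences over a window, and the census transform scored by Hamming distance. The toy windows below include a constant radiometric offset between "cameras", which inflates the SAD cost but leaves the census cost untouched, the robustness property noted in the report.

```python
import numpy as np

def sad(left_win, right_win):
    # Sum of Absolute Differences between two image windows.
    return int(np.abs(left_win.astype(int) - right_win.astype(int)).sum())

def census(win):
    # Census transform: bit i is set where a neighbour is darker than the
    # centre pixel, giving a compact bit string per window.
    centre = win[win.shape[0] // 2, win.shape[1] // 2]
    bits = (win < centre).flatten()
    return sum(int(b) << i for i, b in enumerate(bits))

def hamming(a, b):
    # Matching cost between two census signatures.
    return bin(a ^ b).count("1")

L = np.array([[10, 20, 30], [40, 50, 60], [70, 80, 90]], dtype=np.uint8)
R = L + 5   # simulated radiometric offset between the two cameras

print(sad(L, R))                      # offset inflates the SAD cost: 45
print(hamming(census(L), census(R)))  # census cost is unaffected: 0
```

In a real matcher these costs would be evaluated along the epipolar line for each candidate disparity and the minimum-cost disparity selected per pixel.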
Abstract:
In this paper, we present the application of a non-linear dimensionality reduction technique for the learning and probabilistic classification of hyperspectral images. Hyperspectral image spectroscopy is an emerging technique for geological investigations from airborne or orbital sensors. It gives much greater information content per pixel than a normal colour image. This should greatly help with the autonomous identification of natural and man-made objects in unfamiliar terrain by robotic vehicles. However, the large information content of such data makes interpretation of hyperspectral images time-consuming and user-intensive. We propose the use of Isomap, a non-linear manifold learning technique, combined with Expectation Maximisation in graphical probabilistic models for learning and classification. Isomap is used to find the underlying manifold of the training data. This low-dimensional representation of the hyperspectral data facilitates the learning of a Gaussian Mixture Model representation, whose joint probability distributions can be calculated offline. The learnt model is then applied to the hyperspectral image at runtime and data classification can be performed.
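The pipeline just described, manifold learning followed by an EM-fitted mixture model, can be sketched with off-the-shelf components. Synthetic spectra lying near a one-dimensional manifold stand in for real hyperspectral pixels; the dimensions and parameters are illustrative, not the paper's.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic spectra: 200 pixels over 50 bands, varying smoothly with a
# latent parameter t (a stand-in for, e.g., mineral abundance).
t = rng.uniform(0.0, 1.0, 200)
X = np.outer(t, np.linspace(0.5, 1.5, 50)) + rng.normal(0.0, 0.01, (200, 50))

# Isomap recovers a low-dimensional embedding of the spectral manifold.
Z = Isomap(n_components=2, n_neighbors=10).fit_transform(X)

# A Gaussian Mixture Model is fitted to the embedding by Expectation
# Maximisation; predict_proba yields the probabilistic classification.
gmm = GaussianMixture(n_components=2, random_state=0).fit(Z)
labels = gmm.predict(Z)
posteriors = gmm.predict_proba(Z)

print(Z.shape)  # (200, 2)
```

Because the mixture is fitted in the low-dimensional embedding rather than the raw band space, the offline learning step is far cheaper, which is the practical point of the reduction.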
Abstract:
Road surface macro-texture is an indicator used to determine skid resistance levels in pavements. Existing methods of quantifying macro-texture include the sand patch test and the laser profilometer. These methods utilise the 3D information of the pavement surface to extract the average texture depth. Recently, interest has arisen in image processing techniques as a quantifier of macro-texture, mainly using the Fast Fourier Transform (FFT). This paper reviews the FFT method, and then proposes two new methods, one using the autocorrelation function and the other using wavelets. The methods are tested on images obtained from a pavement surface extending more than 2 km. About 200 images were acquired from the surface at approximately 10 m intervals, from a height of 80 cm above the ground. The results obtained from the image analysis methods using the FFT, the autocorrelation function and wavelets are compared with sensor measured texture depth (SMTD) data obtained from the same paved surface. The results indicate that coefficients of determination (R²) exceeding 0.8 are obtained when up to 10% of outliers are removed.
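The FFT idea reviewed here can be sketched on a synthetic profile: rougher texture puts more energy into the non-DC part of the spectrum, so a spectral summary of an image row rises with macro-texture. The index below is a hedged illustration, not the paper's exact formulation, and in practice it would be regressed against SMTD data for calibration.

```python
import numpy as np

def spectral_texture_index(profile):
    # Magnitude spectrum of the zero-mean profile; energy away from the
    # DC bin grows with surface roughness.
    spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
    return spectrum[1:].sum() / len(profile)

x = np.linspace(0.0, 1.0, 512, endpoint=False)
smooth = 0.1 * np.sin(2 * np.pi * 4 * x)                      # fine surface
rough = smooth + np.random.default_rng(0).normal(0, 0.2, 512)  # coarse surface

print(spectral_texture_index(smooth) < spectral_texture_index(rough))  # True
```

The autocorrelation and wavelet methods proposed in the paper summarise the same roughness information through different transforms of the image signal.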