Abstract:
Despite the global financial downturn, the Australian rail industry is in a period of expansion. Reports indicate that the industry is not attracting sufficient entry-level and mid-career engineers and skilled technicians from within the Australian labour market and is facing widespread retirements from an ageing workforce. This paper reports on a completed qualitative study that explores the perceptions of engineering students, their lecturers, careers advisors and recruitment consultants regarding rail as a brand and careers in the rail industry. Findings are presented on career knowledge, job characteristic preferences, and branding and image; they indicate that rail as a brand has a dated image, that young people and their influencers have little knowledge of rail careers, and that rail could better focus its image and recruitment strategies. Conclusions include suggestions for more effective attraction and image strategies for the industry and for further research.
Abstract:
With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: what is the basic unit of semantic representation? how can the semantic unit be linked to high-level image knowledge? how can contextual information be stored and utilized for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next generation of the Web, aims at making the content of whatever type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industry are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, are still worthwhile questions. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as follows. 1) Salient object extraction.
A salient object serves as the basic unit in image semantic extraction, as it captures the common visual properties of objects. Image segmentation is often used as the first step in detecting salient objects, but most segmentation algorithms fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework. 2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts. 3) Image object annotation. In this phase, each object is labelled with a mid-level concept from the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts. 4) Scene semantic annotation. The scene semantic extraction phase determines the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies. Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene type more frequently. The scene configuration is represented in a probabilistic graphical model, and probabilistic inference is employed to calculate the scene type given an annotated image.
To evaluate the proposed methods, a series of experiments was conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
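The scene-configuration idea above (objects that co-occur more frequently with a scene type provide evidence for that type) can be sketched as a simple probabilistic model. The concept names, priors and probabilities below are hypothetical illustrations, not the thesis's actual ontology; a full implementation would run probabilistic inference over a graphical model as described.

```python
from math import log

# Hypothetical co-occurrence knowledge P(object | scene), of the kind learned
# from annotated images during the ontology-construction phase.
cooccurrence = {
    "beach":  {"sand": 0.8, "water": 0.7, "sky": 0.9, "tree": 0.1},
    "forest": {"sand": 0.05, "water": 0.2, "sky": 0.6, "tree": 0.9},
}
prior = {"beach": 0.5, "forest": 0.5}

def scene_type(objects):
    """Pick the scene maximising log P(scene) + sum of log P(object | scene)."""
    scores = {}
    for scene, p_obj in cooccurrence.items():
        scores[scene] = log(prior[scene]) + sum(
            log(p_obj.get(o, 0.01)) for o in objects)  # 0.01 smooths unseen pairs
    return max(scores, key=scores.get)

print(scene_type(["sand", "water", "sky"]))  # -> beach
print(scene_type(["tree", "sky"]))           # -> forest
```

This is a naive-Bayes simplification: it assumes objects are conditionally independent given the scene, whereas a full graphical model can also encode dependencies between objects.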
Abstract:
This manuscript took a 'top down' approach to understanding survival of inhabitant cells in the ecosystem of bone, working from higher to lower length and time scales through the hierarchical ecosystem of bone. Our working hypothesis is that nature “engineered” the skeleton using a 'bottom up' approach, where mechanical properties of cells emerge from their adaptation to their local mechanical milieu. Cell aggregation and formation of higher order anisotropic structure result in emergent architectures through cell differentiation and extracellular matrix secretion. These emergent properties, including mechanical properties and architecture, result in mechanical adaptation at the length scales and longer time scales which are most relevant for the survival of the vertebrate organism [Knothe Tate and von Recum 2009]. We are currently using insights from this approach to harness nature’s regeneration potential and to engineer novel mechanoactive materials [Knothe Tate et al. 2007, Knothe Tate et al. 2009]. In addition to potential applications of these exciting insights, these studies may provide important clues to the evolution and development of vertebrate animals. For instance, one might ask why mesenchymal stem cells condense at all. There is a putative advantage to self-assembly and cooperation, but this advantage is somewhat outweighed by the need for infrastructural complexity (e.g., circulatory systems comprising specific differentiated cell types, which in turn form conduits and pumps to overcome the limits of mass transport via diffusion; diffusion is untenable for multicellular organisms larger than 250 microns in diameter). A better question might be: why do cells build skeletal tissue? Once cooperating cells in tissues begin to deplete local sources of food in their aquatic environment, those that have evolved a means to locomote likely have an evolutionary advantage.
Once the environment became less aquatic and more terrestrial, the ability to move on land might have conferred further evolutionary advantages on self-assembled organisms. So did the cytoskeleton evolve across several length scales, enabling the emergence of skeletal architecture in vertebrate animals? Did the evolutionary advantage of motility over noncompliant terrestrial substrates (walking on land) favor adaptations including the emergence of intracellular architecture (changes in the cytoskeleton and upregulation of structural protein manufacture), intercellular condensation, mineralization of tissues, and the emergence of higher order architectures? How far does evolutionary Darwinism extend, and how can we exploit this knowledge to engineer smart materials and architectures on Earth and in new, exploratory environments? [Knothe Tate et al. 2008]. We are limited only by our ability to imagine. Ultimately, we aim to understand nature, mimic nature, guide nature and/or exploit nature’s engineering paradigms without engineering ourselves out of existence.
Abstract:
In this paper, a method is developed for estimating pitch angle, roll angle and aircraft body rates based on horizon detection and temporal tracking using a forward-looking camera, without assistance from other sensors. Using an image processing front-end, we select several lines in an image that may or may not correspond to the true horizon. The optical flow at each candidate line is calculated, which may be used to measure the body rates of the aircraft. Using an Extended Kalman Filter (EKF), the aircraft state is propagated using a motion model, and a candidate horizon line is associated using a statistical test based on the optical flow measurements and the location of the horizon. Once associated, the selected horizon line, along with the associated optical flow, is used as a measurement to the EKF. To test the accuracy of the algorithm, two flights were conducted: one using a highly dynamic Uninhabited Airborne Vehicle (UAV) in clear flight conditions, and the other a human-piloted Cessna 172 in conditions where the horizon was partially obscured by terrain, haze and smoke. The UAV flight resulted in pitch and roll error standard deviations of 0.42◦ and 0.71◦ respectively when compared with a truth attitude source. The Cessna flight resulted in pitch and roll error standard deviations of 1.79◦ and 1.75◦ respectively. The benefits of selecting and tracking the horizon using a motion model and optical flow, rather than naively relying on the image processing front-end, are also demonstrated.
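A minimal sketch of the geometry that links a detected horizon line to attitude, assuming an ideal forward-looking pinhole camera and ignoring the paper's EKF machinery and sign conventions: roll follows from the slope of the line, and pitch from its perpendicular offset from the principal point. The function name and parameters are illustrative, not from the paper.

```python
import math

def horizon_to_attitude(slope, intercept_px, cx, cy, focal_px):
    """Approximate roll and pitch (degrees) from a horizon line
    y = slope * x + intercept_px in pixel coordinates.

    Roll is the bank angle of the line; pitch comes from the signed
    perpendicular distance of the line from the principal point (cx, cy),
    scaled by the focal length. Small-angle, zero-yaw simplification."""
    roll = math.atan(slope)
    # Signed distance from the principal point to the line, in pixels.
    dist = (slope * cx - cy + intercept_px) / math.hypot(slope, 1.0)
    pitch = math.atan2(dist, focal_px)
    return math.degrees(roll), math.degrees(pitch)

# A level horizon through the image centre gives zero roll and pitch.
print(horizon_to_attitude(0.0, 240.0, 320.0, 240.0, 800.0))  # -> (0.0, 0.0)
```

In practice this mapping is only one measurement; the value of the paper's approach lies in fusing it with optical-flow-derived body rates through the EKF.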
Abstract:
The main focus of this paper is the motion planning problem for a deeply submerged rigid body. The equations of motion are formulated and presented using the framework of differential geometry, and they incorporate external dissipative and restoring forces. We consider a kinematic reduction of the affine connection control system for a rigid body submerged in an ideal fluid, and present an extension of this reduction to the forced affine connection control system for a rigid body submerged in a viscous fluid. The motion planning strategy is based on kinematic motions: the integral curves of rank-one kinematic reductions. This method is of particular interest for autonomous underwater vehicles which cannot directly control all six degrees of freedom (such as torpedo-shaped AUVs) or in the case of actuator failure (i.e., an under-actuated scenario). A practical example is included to illustrate our technique.
Decoupled trajectory planning for a submerged rigid body subject to dissipative and potential forces
Abstract:
This paper studies the practical but challenging problem of motion planning for a deeply submerged rigid body. Here, we formulate the dynamic equations of motion of a submerged rigid body within the framework of differential geometric mechanics and include external dissipative and potential forces. The mechanical system is represented as a forced affine-connection control system on the configuration space SE(3). Solutions to the motion planning problem are computed by concatenating and reparameterizing the integral curves of decoupling vector fields. We provide an extension of this inverse kinematic method to compensate for external potential forces caused by buoyancy and gravity. We present a mission scenario and implement the theoretically computed control strategy on a test-bed autonomous underwater vehicle. This scenario emphasizes the use of the motion planning technique in the under-actuated situation, in which the vehicle loses direct control of one or more degrees of freedom. We include experimental results to illustrate our technique and validate our method.
Abstract:
In this paper we analyze the equations of motion of a submerged rigid body. Our motivation is based on recent developments done in trajectory design for this problem. Our goal is to relate some properties of singular extremals to the existence of decoupling vector fields. The ideas displayed in this paper can be viewed as a starting point to a geometric formulation of the trajectory design problem for mechanical systems with potential and external forces.
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area based, transform based, feature based, phase based, hybrid, relaxation based, dynamic programming and object space methods. A number of area based matching metrics as well as the rank and census transforms were implemented, in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications. 
A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
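The trade-off described above between computational cost and radiometric robustness can be illustrated with minimal implementations of the Sum of Absolute Differences metric and the census transform. The 3x3 window and the uniform-gain example are illustrative only, not the report's test conditions.

```python
import numpy as np

def sad(left_win, right_win):
    """Sum of Absolute Differences: cheap, but sensitive to gain/offset changes."""
    return np.abs(left_win.astype(int) - right_win.astype(int)).sum()

def census(window):
    """Census transform: encode each pixel as 1 if darker than the window centre.
    The signature depends only on local intensity ordering, so it is robust
    to radiometric distortion."""
    centre = window[window.shape[0] // 2, window.shape[1] // 2]
    return (window < centre).flatten()

def hamming(a, b):
    """Matching cost between two census signatures."""
    return int(np.count_nonzero(a != b))

# A uniform gain change inflates the SAD cost but leaves the census
# signature unchanged.
w = np.array([[10, 20, 30], [40, 50, 60], [70, 80, 90]], dtype=np.uint8)
print(sad(w, 2 * w))                       # -> 450, despite identical structure
print(hamming(census(w), census(2 * w)))   # -> 0
```

The Hamming distance over census bit strings also maps well to fixed-point hardware, which is one reason these transforms suit real-time sensors.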
Abstract:
In this paper, we present the application of a non-linear dimensionality reduction technique to the learning and probabilistic classification of hyperspectral images. Hyperspectral imaging spectroscopy is an emerging technique for geological investigations from airborne or orbital sensors. It gives much greater information content per pixel than a normal colour image, which should greatly help with the autonomous identification of natural and man-made objects in unfamiliar terrains for robotic vehicles. However, the large information content of such data makes the interpretation of hyperspectral images time-consuming and user-intensive. We propose the use of Isomap, a non-linear manifold learning technique, combined with Expectation Maximisation in graphical probabilistic models for learning and classification. Isomap is used to find the underlying manifold of the training data. This low-dimensional representation of the hyperspectral data facilitates the learning of a Gaussian Mixture Model representation, whose joint probability distributions can be calculated offline. The learnt model is then applied to the hyperspectral image at runtime and data classification can be performed.
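The pipeline described above (manifold learning, then an EM-fitted mixture model, then runtime classification by posterior probability) can be sketched with scikit-learn. The S-curve data below is a synthetic stand-in for hyperspectral pixels, and the library choices are assumptions about tooling, not the authors' implementation.

```python
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

# Stand-in for high-dimensional pixel spectra lying on a curved manifold
# (real inputs would be hyperspectral band vectors).
X, _ = make_s_curve(n_samples=500, noise=0.05, random_state=0)

# 1) Isomap recovers a low-dimensional representation of the manifold.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

# 2) EM fits a Gaussian Mixture Model in the embedded space; its joint
#    probabilities can be evaluated offline.
gmm = GaussianMixture(n_components=3, random_state=0).fit(embedding)

# 3) At runtime, embedded pixels are classified by posterior probability.
posteriors = gmm.predict_proba(embedding)
print(embedding.shape, posteriors.shape)  # -> (500, 2) (500, 3)
```

Note that Isomap has no parametric out-of-sample mapping, so applying the learnt model to a new image at runtime requires an embedding extension step, which the sketch omits.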
Abstract:
Road surface macro-texture is an indicator used to determine skid resistance levels in pavements. Existing methods of quantifying macro-texture include the sand patch test and the laser profilometer. These methods utilise the 3D information of the pavement surface to extract the average texture depth. Recently, interest has arisen in image processing techniques as quantifiers of macro-texture, mainly using the Fast Fourier Transform (FFT). This paper reviews the FFT method, then proposes two new methods, one using the autocorrelation function and the other using wavelets. The methods are tested on images obtained from a pavement surface extending more than 2 km. About 200 images were acquired from the surface at approximately 10 m intervals, from a height of 80 cm above the ground. The results obtained from image analysis methods using the FFT, the autocorrelation function and wavelets are compared with sensor measured texture depth (SMTD) data obtained from the same paved surface. The results indicate that coefficients of determination (R²) exceeding 0.8 are obtained when up to 10% of outliers are removed.
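One way an autocorrelation function can quantify macro-texture is via a correlation length: a coarse surface decorrelates slowly along an intensity profile, a fine one quickly. The sketch below is illustrative only; the threshold and the synthetic profiles are hypothetical, not the paper's method or data.

```python
import numpy as np

def correlation_length(profile, threshold=0.5):
    """Lag (in pixels) at which the normalised autocorrelation of a
    zero-mean intensity profile first drops below `threshold`.
    Coarser texture decorrelates more slowly, giving longer lengths."""
    x = profile - profile.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]  # lags 0..N-1
    acf = acf / acf[0]
    below = np.nonzero(acf < threshold)[0]
    return int(below[0]) if below.size else x.size

rng = np.random.default_rng(1)
noise = rng.normal(size=2000)
fine = noise                                                 # rapidly varying surface
coarse = np.convolve(noise, np.ones(25) / 25, mode="same")   # smoothed surface
print(correlation_length(fine), correlation_length(coarse))
```

The correlation length could then be regressed against sensor measured texture depth in the same way as the FFT-derived statistics.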
Abstract:
Pedestrian movement is known to cause significant effects on indoor MIMO channels. In this paper, a statistical characterization of the indoor MIMO-OFDM channel subject to pedestrian movement is reported. The experiment used 4 transmitting and 4 receiving antennas and 114 sub-carriers at 5.2 GHz. Measurement scenarios varied from zero to ten pedestrians walking randomly between the transmitter (Tx) and receiver (Rx) arrays. The empirical cumulative distribution function (CDF) of the received fading envelope fits the Ricean distribution with K factors ranging from 7 dB to 15 dB for the ten-pedestrian and vacant scenarios, respectively. In general, as the number of pedestrians increases, the CDF slope tends to decrease proportionally. Furthermore, as the number of pedestrians increases, increasing the multipath contribution, the dynamic range of channel capacity increases proportionally. These results are consistent with measurement results obtained in controlled scenarios for a fixed narrowband Single-Input Single-Output (SISO) link at 5.2 GHz in previous work. The described empirical characterization provides insight into the prediction of human-body shadowing effects for indoor MIMO-OFDM channels at 5.2 GHz.
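K factors like those quoted above can be estimated from fading envelope samples. A standard moment-based estimator (a common textbook approach, not necessarily the fitting procedure used in these measurements) uses the second and fourth moments of the envelope:

```python
import numpy as np

def ricean_k_db(envelope):
    """Moment-based Ricean K-factor estimate from envelope samples r.
    Uses E[r^2] = 2*sigma^2 + A^2 and E[r^4] = 8*sigma^4 + 8*sigma^2*A^2 + A^4,
    which give A^2 = sqrt(2*E[r^2]^2 - E[r^4]) and K = A^2 / (2*sigma^2)."""
    m2 = np.mean(envelope ** 2)
    m4 = np.mean(envelope ** 4)
    a2 = np.sqrt(max(2.0 * m2 ** 2 - m4, 0.0))  # line-of-sight power A^2
    k = a2 / (m2 - a2)                           # K = A^2 / (2*sigma^2)
    return 10.0 * np.log10(k)

# Synthetic Ricean channel: line-of-sight amplitude A over complex Gaussian
# scatter with per-component variance sigma^2.
rng = np.random.default_rng(0)
sigma, k_true_db = 1.0, 10.0
a = np.sqrt(2 * sigma**2 * 10 ** (k_true_db / 10))
r = np.abs(a + rng.normal(0, sigma, 200_000) + 1j * rng.normal(0, sigma, 200_000))
print(round(ricean_k_db(r), 1))  # close to 10.0 dB
```

A decreasing CDF slope with more pedestrians corresponds to a lower estimated K, i.e. a weaker line-of-sight component relative to the scattered power.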
Abstract:
Cell-based therapies, as they apply to tissue engineering and regenerative medicine, require cells capable of self-renewal and differentiation, and a prerequisite is the ability to prepare an effective dose of ex vivo expanded cells for autologous transplants. The in vivo identification of a source of physiologically relevant cell types suitable for cell therapies therefore figures as an integral part of tissue engineering. Stem cells serve as a reserve for biological repair, having the potential to differentiate into a number of specialised cell types within the body; they therefore represent the most useful candidates for cell-based therapies. The primary goal of stem cell research is to produce cells that are both patient-specific and have properties suitable for the specific conditions they are intended to remedy. From a purely scientific perspective, stem cells allow scientists to gain a deeper understanding of developmental biology and regenerative therapies. Stem cells have acquired a number of uses in regenerative medicine, immunotherapy and gene therapy, but it is in the area of tissue engineering that they generate most excitement, primarily as a result of their capacity for self-renewal and pluripotency. A unique feature of stem cells is their ability to maintain an uncommitted quiescent state in vivo and then, once triggered by conditions such as disease, injury or natural wear and tear, serve as a reservoir and natural support system to replenish lost cells. Although these cells retain the plasticity to differentiate into various tissues, being able to control this differentiation process is still one of the biggest challenges facing stem cell research. In an effort to harness the potential of these cells, a number of studies have been conducted using both embryonic/foetal and adult stem cells.
The use of embryonic stem cells (ESCs) has been hampered by strong ethical and political concerns, despite their perceived versatility due to their pluripotency. Ethical issues aside, other concerns raised with ESCs relate to the possibility of tumorigenesis, immune rejection and complications with immunosuppressive therapies, all of which add layers of complication to the application of ESCs in research and have led to the search for alternative sources of stem cells. The adult tissues of higher organisms harbour cells, termed adult stem cells, that are reminiscent of unprogrammed stem cells. A number of sources of adult stem cells have been described. Bone marrow is by far the most accessible source of two potent populations of adult stem cells, namely haematopoietic stem cells (HSCs) and bone marrow mesenchymal stem cells (BMSCs). Autologously harvested adult stem cells can, in contrast to embryonic stem cells, readily be used in autografts, since immune rejection is not an issue, and their use in scientific research has not attracted the ethical concerns that have been the case with embryonic stem cells. The major limitation to their use, however, is the fact that adult stem cells are exceedingly rare in most tissues. This makes identifying and isolating these cells problematic, bone marrow being perhaps the only notable exception. Unlike the case of HSCs, there are as yet no rigorous criteria for characterizing MSCs. Changing perceptions of the pluripotency of MSCs in recent studies have expanded their potential application; however, the underlying molecular pathways which impart the features distinctive to MSCs remain elusive. Furthermore, the sparse in vivo distribution of these cells imposes a clear limitation on their study in vitro. Also, when MSCs are cultured in vitro, there is a loss of the in vivo microenvironment, resulting in a progressive decline in proliferation potential and multipotentiality.
This is further exacerbated with increased passage numbers in culture, characterized by the onset of senescence-related changes. As a consequence, it is necessary to establish protocols for generating large numbers of MSCs without affecting their differentiation potential. MSCs are capable of differentiating into mesenchymal tissue lineages, including bone, cartilage, fat, tendon, muscle, and marrow stroma. Recent findings indicate that adult bone marrow may also contain cells that can differentiate into the mature, nonhematopoietic cells of a number of tissues, including cells of the liver, kidney, lung, skin, gastrointestinal tract, and myocytes of heart and skeletal muscle. MSCs can readily be expanded in vitro, can be genetically modified by viral vectors, and can be induced to differentiate into specific cell lineages by changing the microenvironment, properties which make these cells ideal vehicles for cellular gene therapy. MSCs can also exert profound immunosuppressive effects via modulation of both cellular and innate immune pathways, and this property allows them to overcome the issue of immune rejection. Despite the many attractive features associated with MSCs, there are still many hurdles to overcome before these cells are readily available for use in clinical applications. The main concern relates to the in vivo characterization and identification of MSCs. The lack of a universal biomarker, sparse in vivo distribution, and a steady age-related decline in their numbers make it necessary to decipher the reprogramming pathways and critical molecular players which govern the characteristics unique to MSCs. This book presents a comprehensive insight into the biology of adult stem cells and their utility in current regeneration therapies. The adult stem cell populations reviewed in this book include bone marrow derived MSCs, adipose derived stem cells (ASCs), umbilical cord blood stem cells, and placental stem cells.
Features such as MSC circulation and trafficking, neuroprotective properties, and the nurturing roles and differentiation potential of multiple lineages are discussed in detail. In terms of therapeutic applications, the strengths of MSCs are presented, along with their roles in the treatment of conditions such as osteoarthritis, Huntington’s disease and periodontal defects, and in pancreatic islet transplantation. An analysis comparing osteoblast differentiation of umbilical cord blood stem cells and MSCs is reviewed, as is a comparison of human placental stem cells and ASCs in terms of isolation, identification, and the therapeutic application of ASCs in bone, cartilage, and myocardial regeneration. It is my sincere hope that this book will update the reader on the research progress of MSC biology and the potential use of these cells in clinical applications. It will be the best reward for all contributors to this book if their efforts herein may in some way help the readers in any part of their study, research, and career development.
Abstract:
Script for non-verbal performance. ----- ----- ----- Research Component: Silent Treatment: Creating Non-verbal Performance Works for Children ----- ----- ----- The research field of theatre for young people draws on theories of child development and popular culture. SHOW explored personal and social development, friendship and creative play through the lens of the experience of girls aged 8-12. This project consolidated and refined innovative approaches to creating non-verbal theatre performance, and addressed challenges inherent in the creation of a performance by adults for young audiences. A significant finding of the project was the unanticipated convergence of creative practice and research into child behaviour and development: the congruence of content (female bullying) and theatrical form (non-verbal performance): “Within the hidden culture of aggression, girls fight with body language and relationships instead of fists and knives. In this world, friendship is a weapon, and the sting of a shout pales in comparison to a day of someone’s silence. There is no gesture more devastating than the back turning away.” (Simmons, Rachel (2002:3) Odd Girl Out: The Hidden Culture of Aggression in Girls, Schwartz Books.) The creative development and drafting process focussed on negotiating the conceptual design and practical constraints of incorporating diegetic music and video sources into the narrative. A further authorial (and production) challenge was creating a script that could facilitate the re-mounting of a non-verbal work by a company specialising in text-based theatre. ----- ----- ----- Show was commissioned by the Queensland Theatre Company in 2003, toured into Queensland Schools by the Queensland Arts Council, and in 2004 was performed at the Sydney Opera House.
Abstract:
There has recently been an emphasis within literacy studies on both the spatial dimensions of social practices (Leander & Sheehy, 2004) and the importance of incorporating design and multiple modes of meaning-making into contemporary understandings of literacy (Cope & Kalantzis, 2000; New London Group, 1996). Kress (2003) in particular has outlined the potential implications of the cultural shift from the dominance of writing, based on a logic of time and sequence in time, to the dominance of the mode of the image, based on a logic of space. However, the widespread re-design of curriculum and pedagogy by classroom teachers to allow students to capitalise on the various affordances of different modes of meaning-making – including the spatial – remains in an emergent stage. We report on a project in which university researchers’ expertise in architecture, literacy and communications enabled two teachers in one school to expand the forms of literacy that primary school children engaged in. Starting from the school community’s concerns about an urban renewal project in their neighbourhood, we worked together to develop a curriculum of spatial literacies with real-world goals and outcomes.
Abstract:
Aurora, an illustrated novella, is a retelling of the classic fairytale Sleeping Beauty, set on the Australian coast around the grounds of the family lighthouse. Instead of following in the footsteps of tradition, this tale focuses on the long time Aurora is cursed to sleep by the malevolent Minerva; we follow Aurora as she voyages into the unconscious. Hunted by Minerva through the shifting landscape of her dreams, Aurora is dogged by a nagging pull towards the light—there is something she has left behind. Eventually, realising she must face Minerva to break the curse, Aurora stages a battle of the minds and triumphs, having grasped the power of her thoughts, her words. Aurora, an Australian fairytale, is a story of self-empowerment, the ability to shape destiny and the power of the mind. The exegesis examines a two-pronged question: is the illustrated book for young adults—the graphic novel—relevant to a contemporary readership, and is the graphic novel, where text and image intersect, a suitably specular genre in which to explore the unconscious? It establishes the language of the unconscious and the meaning of the term ‘graphic novel’, before investigating the place of the illustrated book for an older readership in a contemporary market, particularly exploring visual literacy and the way text and image—a hybrid narrative—work together. It then studies the aptitude of graphic literature for representing the unconscious and looks at two pioneers of the form: Audrey Niffenegger, specifically her visual novel The Three Incestuous Sisters, and Shaun Tan, and his graphic novel The Arrival. Finally, it reflects upon the creative work, Aurora, in light of three concerns: how best to develop a narrative able to relay the dreaming story; how to bestow a certain ‘Australianness’ upon the text and images; and the dilemma of designing an illustrated book for an older readership.