906 results for HERBIG-HARO OBJECTS
Abstract:
This research paper explores the impact of product personalisation upon product attachment and aims to develop a deeper understanding of why, how and if consumers choose to personalise their products. Current research in this field is mainly based on attachment theories and is predominantly product specific. This paper investigates the link between product attachment and personalisation through in-depth, semi-structured interviews, with the data thematically analysed and broken down into three themes and nine sub-themes. It was found that participants did become more attached to products once they were personalised, and the reasons why this occurred varied. The most common drivers of personalisation were functionality and usability, the expression of personality through a product, and the complexity of personalisation. The reasons participants felt connected to their products included strong emotions and memories, the amount of time and effort invested in the personalisation, and a sense of achievement. Reasons behind the desire for personalisation included co-designing, the expression of uniqueness and individualism, and having a choice in how to personalise. Through theme and inter-theme relationships, many correlations were identified, which created the basis for design recommendations. These recommendations demonstrate how a designer could incorporate the emotions and reasoning behind personalisation into the design process.
Abstract:
Background subtraction is a fundamental low-level processing task in numerous computer vision applications. The vast majority of algorithms process images on a pixel-by-pixel basis, where an independent decision is made for each pixel. A general limitation of such processing is that rich contextual information is not taken into account. We propose a block-based method capable of dealing with noise, illumination variations, and dynamic backgrounds, while still obtaining smooth contours of foreground objects. Specifically, image sequences are analyzed on an overlapping block-by-block basis. A low-dimensional texture descriptor obtained from each block is passed through an adaptive classifier cascade, where each stage handles a distinct problem. A probabilistic foreground mask generation approach then exploits block overlaps to integrate interim block-level decisions into final pixel-level foreground segmentation. Unlike many pixel-based methods, ad-hoc postprocessing of foreground masks is not required. Experiments on the difficult Wallflower and I2R datasets show that the proposed approach obtains on average better results (both qualitatively and quantitatively) than several prominent methods. We furthermore propose the use of tracking performance as an unbiased approach for assessing the practical usefulness of foreground segmentation methods, and show that the proposed approach leads to considerable improvements in tracking accuracy on the CAVIAR dataset.
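The pipeline described above can be sketched in a few lines. This is a minimal illustrative toy, not the paper's actual method: the texture descriptor is stood in by block mean and standard deviation, the adaptive classifier cascade by a single distance threshold, and all names and thresholds are assumptions. It only shows the overlapping block decomposition and the vote-based fusion of block decisions into a pixel mask.

```python
# Toy sketch of block-wise background subtraction with overlapping blocks
# and probabilistic fusion of block-level votes. Descriptor, classifier and
# thresholds are illustrative stand-ins, not the published algorithm.
import numpy as np

BLOCK, STEP = 8, 4          # 8x8 blocks with 50% overlap
THRESH = 10.0               # toy decision threshold on descriptor distance

def descriptor(block):
    """Toy low-dimensional texture descriptor: mean and std of the block."""
    return np.array([block.mean(), block.std()])

def segment(frame, bg_model):
    """Classify each overlapping block against a per-block background
    descriptor grid, then fuse the votes into a pixel-level mask."""
    h, w = frame.shape
    votes = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - BLOCK + 1, STEP):
        for x in range(0, w - BLOCK + 1, STEP):
            d = descriptor(frame[y:y+BLOCK, x:x+BLOCK])
            # single-stage stand-in for the adaptive classifier cascade
            is_fg = np.linalg.norm(d - bg_model[y // STEP, x // STEP]) > THRESH
            votes[y:y+BLOCK, x:x+BLOCK] += float(is_fg)
            counts[y:y+BLOCK, x:x+BLOCK] += 1.0
    # fusion: a pixel is foreground if most of its covering blocks agree
    return votes / np.maximum(counts, 1.0) > 0.5
```

Because each pixel lies in several overlapping blocks, a single mis-classified block is outvoted by its neighbours, which is what yields smooth contours without ad-hoc postprocessing.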
Abstract:
Heavy Weather was a monumental sculptural work produced for the prestigious McClelland National Sculpture Survey in 2012. The work was a large cold-cast aluminium figure depicting the artist in athletic costume arching backwards across the top of a massive boulder. The pose of the figure was derived from the ‘Fosbury flop’, the awkward backwards manoeuvre associated with the high-jump event. The boulder was a portrait of a different kind: a remake of the Ian Fairweather memorial on Bribie Island, but elongated to tower upwards. The work thus emphasised two contrasting impressions of movement: immense inertia and writhing agility. Heavy Weather sought to bring these two opposing forces together as a way of representing the tensions that shape our relationship with objects. In so doing, the work contributed to the artist’s ongoing exploration of sculpture, self-portraiture and the civic monument. The work was promoted nationally, including in the Art Guide and the Melbourne Review, and was also the subject of an article in Australian Art Collector.
Abstract:
Excerpt: "To enter the shadow world cast by each installation of Moule is to enter the waking dream of the half remembered. Each object, in its own pool of light, connected to other objects by fields of the twilit, is evocative of some object from our waking world, but recast into that which cannot be and yet is here, palpably so - insistent. All that confronts us is so determinedly derived from some internal gesture and rendered into some partial-reality, without surety of line or contour to combat our internal world of meaning and sense."
Abstract:
Background: Non-fatal health outcomes from diseases and injuries are a crucial consideration in the promotion and monitoring of individual and population health. The Global Burden of Disease (GBD) studies done in 1990 and 2000 have been the only studies to quantify non-fatal health outcomes across an exhaustive set of disorders at the global and regional level. Neither effort quantified uncertainty in prevalence or years lived with disability (YLDs).

Methods: Of the 291 diseases and injuries in the GBD cause list, 289 cause disability. For 1160 sequelae of the 289 diseases and injuries, we undertook a systematic analysis of prevalence, incidence, remission, duration, and excess mortality. Sources included published studies, case notification, population-based cancer registries, other disease registries, antenatal clinic serosurveillance, hospital discharge data, ambulatory care data, household surveys, other surveys, and cohort studies. For most sequelae, we used a Bayesian meta-regression method, DisMod-MR, designed to address key limitations in descriptive epidemiological data, including missing data, inconsistency, and large methodological variation between data sources. For some disorders, we used natural history models, geospatial models, back-calculation models (models calculating incidence from population mortality rates and case fatality), or registration completeness models (models adjusting for incomplete registration with health-system access and other covariates). Disability weights for 220 unique health states were used to capture the severity of health loss. YLDs by cause at age, sex, country, and year levels were adjusted for comorbidity with simulation methods. We included uncertainty estimates at all stages of the analysis.

Findings: Global prevalence for all ages combined in 2010 across the 1160 sequelae ranged from fewer than one case per 1 million people to 350 000 cases per 1 million people. Prevalence and severity of health loss were weakly correlated (correlation coefficient −0·37). In 2010, there were 777 million YLDs from all causes, up from 583 million in 1990. The main contributors to global YLDs were mental and behavioural disorders, musculoskeletal disorders, and diabetes or endocrine diseases. The leading specific causes of YLDs were much the same in 2010 as they were in 1990: low back pain, major depressive disorder, iron-deficiency anaemia, neck pain, chronic obstructive pulmonary disease, anxiety disorders, migraine, diabetes, and falls. Age-specific prevalence of YLDs increased with age in all regions and has decreased slightly from 1990 to 2010. Regional patterns of the leading causes of YLDs were more similar compared with years of life lost due to premature mortality. Neglected tropical diseases, HIV/AIDS, tuberculosis, malaria, and anaemia were important causes of YLDs in sub-Saharan Africa.

Interpretation: Rates of YLDs per 100 000 people have remained largely constant over time but rise steadily with age. Population growth and ageing have increased YLD numbers and crude rates over the past two decades. Prevalences of the most common causes of YLDs, such as mental and behavioural disorders and musculoskeletal disorders, have not decreased. Health systems will need to address the needs of the rising numbers of individuals with a range of disorders that largely cause disability but not mortality. Quantification of the burden of non-fatal health outcomes will be crucial to understand how well health systems are responding to these challenges. Effective and affordable strategies to deal with this rising burden are an urgent priority for health systems in most parts of the world.

Funding: Bill & Melinda Gates Foundation.
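The core YLD arithmetic, and the reason a comorbidity adjustment is needed, can be sketched as follows. This is a hedged simplification: the GBD study used microsimulation for the comorbidity adjustment, and the function and disability-weight values below are illustrative assumptions, not the study's actual figures or method.

```python
# Sketch of the basic YLD calculation and a simplified comorbidity
# adjustment. The multiplicative combination below is a common
# simplification; GBD 2010 itself used simulation methods.

def ylds(prevalence, disability_weight):
    """Years lived with disability for one sequela in one population:
    prevalent cases multiplied by the disability weight."""
    return prevalence * disability_weight

def combined_dw(weights):
    """Disability weight for a person with several comorbid conditions,
    combined multiplicatively so the total can never exceed 1."""
    dw = 1.0
    for w in weights:
        dw *= (1.0 - w)
    return 1.0 - dw

# With two illustrative weights of 0.37 and 0.65, the combined weight is
# 1 - (1 - 0.37) * (1 - 0.65) = 0.7795, less than the naive sum of 1.02,
# which would overstate health loss for comorbid individuals.
```

Without such an adjustment, summing YLDs independently across the 1160 sequelae would double-count health loss in people who have more than one condition.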
Abstract:
The SimCalc Vision and Contributions, Advances in Mathematics Education 2013, pp 419-436. Modeling as a Means for Making Powerful Ideas Accessible to Children at an Early Age. Richard Lesh, Lyn English, Serife Sevis, Chanda Riggs. Abstract: In modern societies in the 21st century, significant changes have been occurring in the kinds of “mathematical thinking” that are needed outside of school. Even primary school children (grades K-2) encounter situations where numbers refer not only to sets of discrete objects that can be counted: numbers are also used to describe situations that involve continuous quantities (inches, feet, pounds, etc.), signed quantities, quantities that have both magnitude and direction, locations (coordinates, or ordinal quantities), transformations (actions), accumulating quantities, continually changing quantities, and other kinds of mathematical objects. Furthermore, if we ask what kinds of situations children can use numbers to describe, rather than restricting attention to situations where children should be able to calculate correctly, then this study shows that average-ability children in grades K-2 are (and need to be) able to productively mathematize situations that involve far more than simple counts. Similarly, whereas nearly the entire K-16 mathematics curriculum is restricted to situations that can be mathematized using a single input-output rule going in one direction, even the lives of primary school children are filled with situations that involve several interacting actions, and which involve feedback loops, second-order effects, and issues such as maximization, minimization, or stabilization (which, many years ago, needed to be postponed until students had been introduced to calculus).
…This brief paper demonstrates that, if children’s stories are used to introduce simulations of “real life” problem solving situations, then average ability primary school children are quite capable of dealing productively with 60-minute problems that involve (a) many kinds of quantities in addition to “counts,” (b) integrated collections of concepts associated with a variety of textbook topic areas, (c) interactions among several different actors, and (d) issues such as maximization, minimization, and stabilization.
Abstract:
Due to the development of XML and other data models such as OWL and RDF, sharing data is an increasingly common task, since these data models allow simple syntactic translation of data between applications. However, in order for data to be shared semantically, there must be a way to ensure that concepts are the same. One approach is to employ commonly used schemas, called standard schemas, which help guarantee that syntactically identical objects have semantically similar meanings. As a result of the spread of data sharing, there has been widespread adoption of standard schemas in a broad range of disciplines and for a wide variety of applications within a very short period of time. However, standard schemas are still in their infancy and have not yet matured or been thoroughly evaluated. It is imperative that the data management research community take a closer look at how well these standard schemas have fared in real-world applications to identify not only their advantages, but also the operational challenges that real users face. In this paper, we both examine the usability of standard schemas in a comparison that spans multiple disciplines, and describe our first step at resolving some of these issues in our Semantic Modeling System. We evaluate the Semantic Modeling System through a careful case study of the use of standard schemas in architecture, engineering, and construction, which we conducted with domain experts. We discuss how the Semantic Modeling System can help address the broader problem, and also discuss a number of challenges that still remain.
Abstract:
There are different ways to authenticate humans, which is an essential prerequisite for access control. The authentication process can be subdivided into three categories that rely on something someone i) knows (e.g. a password), and/or ii) has (e.g. a smart card), and/or iii) is (biometric features). Besides classical attacks on password solutions and the risk that identity-related objects can be stolen, traditional biometric solutions have their own disadvantages, such as the requirement for expensive devices, the risk of stolen bio-templates, etc. Moreover, in existing approaches authentication is usually performed only once, at the start of a session. Non-intrusive and continuous monitoring of user activities emerges as a promising solution for hardening the authentication process, adding a further category: iii-2) how someone behaves. In recent years, various keystroke-dynamics approaches have been published that are able to authenticate humans based on their typing behaviour. The majority focus on so-called static-text approaches, where users are requested to type a previously defined text. Relatively few techniques are based on free-text approaches, which allow transparent monitoring of user activities and provide continuous verification. Unfortunately, only a few solutions are deployable in application environments under realistic conditions. Unsolved problems include, for instance, scalability, high response times and high error rates. The aim of this work is the development of behaviour-based verification solutions. Our main requirement is to deploy these solutions under realistic conditions within existing environments, in order to enable transparent, free-text-based continuous verification of active users with low error rates and response times.
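The basic idea behind free-text keystroke-dynamics verification can be sketched as below. This is a minimal illustration under stated assumptions, not any of the published schemes: the feature is the latency between consecutive key presses (digraph timing), the profile is simply the set of observed latencies per digraph, and the acceptance threshold is an arbitrary illustrative value.

```python
# Minimal sketch of free-text keystroke-dynamics verification: compare the
# mean latencies of digraphs shared between an enrolled profile and a fresh
# sample. Feature choice and threshold are illustrative assumptions.
from statistics import mean

def digraph_latencies(keystrokes):
    """keystrokes: list of (key, press_time_ms) in typing order.
    Returns a map of digraph -> list of observed latencies in ms."""
    feats = {}
    for (k1, t1), (k2, t2) in zip(keystrokes, keystrokes[1:]):
        feats.setdefault(k1 + k2, []).append(t2 - t1)
    return feats

def verify(profile, sample, threshold=50.0):
    """Accept if the mean absolute difference of per-digraph mean latencies,
    taken over digraphs present in both, is below the threshold (ms)."""
    shared = profile.keys() & sample.keys()
    if not shared:
        return False
    diffs = [abs(mean(profile[d]) - mean(sample[d])) for d in shared]
    return mean(diffs) < threshold
```

Because verification only needs whichever digraphs happen to occur in the monitored text, the check can run continuously and transparently in the background, which is exactly the free-text property the abstract contrasts with static-text approaches.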
Abstract:
Objective: To describe unintentional injuries to children aged less than one year, using coded and textual information, in three-month age bands to reflect their development over the year. Methods: Data from the Queensland Injury Surveillance Unit were used. The Unit collects demographic, clinical and circumstantial details about injured persons presenting to selected emergency departments across the State. Only injuries coded as unintentional in children admitted to hospital were included for this analysis. Results: After editing, 1,082 children remained for analysis, 24 with transport-related injuries. Falls were the most common injury, but became proportionately less common over the year, whereas burns and scalds and foreign-body injuries increased. The proportion of injuries due to contact with persons or objects varied little, but poisonings were relatively more common in the first and fourth three-month periods. Descriptions indicated that family members were somehow causally involved in 16% of injuries. Our findings are in qualitative agreement with comparable previous studies. Conclusion: The pattern of injuries varies over the first year of life and is clearly linked to the child's increasing mobility. Implications: Injury patterns in the first year of life should be reported over shorter intervals. Preventive measures for young children need to be designed with their rapidly changing developmental stage in mind, using a variety of strategies, one of which could be opportunistic, developmentally specific education of parents. Injuries in young children are of abiding concern given their immediate health and emotional effects, and potential for long-term adverse sequelae. In Australia, in the financial year 2006/07, 2,869 children less than 12 months of age were admitted to hospital for an unintentional injury, a rate of 10.6 per 1,000, representing a considerable economic and social burden.
Given that many of these injuries are preventable, this is particularly concerning. Most epidemiologic studies analyse data in five-year age bands, so children less than five years of age are examined as a group. This study includes only those children younger than one year of age to identify injury detail lost in analyses of the larger group, as we hypothesised that the injury pattern varied with the developmental stage of the child. The authors of several North American studies have commented that in dealing with injuries in pre-school children, broad age groupings are inadequate to do justice to the rapid developmental changes in infancy and early childhood, and have in consequence analysed injuries in shorter intervals. To our knowledge, no similar analysis of Australian infant injuries has been published to date. This paper describes injury in children less than 12 months of age using data from the Queensland Injury Surveillance Unit (QISU).
Abstract:
Biomechanics involves research and analysis of the mechanisms of living organisms. This can be conducted on multiple levels and represents a continuum from the molecular, wherein biomaterials such as collagen and elastin are considered, to the tissue, organ and whole body level. Some simple applications of Newtonian mechanics can supply correct approximations on each level, but precise details demand the use of continuum mechanics. Sport biomechanics uses the scientific methods of mechanics to study the effects of forces on the sports performer and considers aspects of the behaviour of sports implements, equipment, footwear and surfaces. There are two main aims of sport biomechanics, that is, the reduction of injury and the improvement of performance (Bartlett, 1999). Aristotle (384-322 BC) wrote the first book on biomechanics, De Motu Animalium, translated as On the Movement of Animals. He saw animals' bodies as mechanical systems, but also pursued questions that might explain the physiological difference between imagining the performance of an action and actually doing it. Some simple examples of biomechanics research include the investigation of the forces that act on limbs, the aerodynamics of animals in flight, the hydrodynamics of objects moving through water and locomotion in general across all forms of life, from individual cells to whole organisms...
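One of the "simple applications of Newtonian mechanics" mentioned above, forces acting on limbs, can be made concrete with a static torque balance at a joint. The example below is a hedged illustration: the segment length, masses and centre-of-mass fraction are illustrative values I have assumed, not data from the text.

```python
# Illustrative Newtonian approximation: the static torque that elbow flexors
# must balance when holding a mass with the forearm horizontal. All segment
# parameters are assumed example values, not measured data.
G = 9.81  # gravitational acceleration, m/s^2

def elbow_torque(load_kg, forearm_len_m, forearm_mass_kg, com_frac=0.43):
    """Static torque (N*m) about the elbow: the load acting at the hand plus
    the forearm's own weight acting at its centre of mass (com_frac of the
    segment length from the elbow)."""
    load_torque = load_kg * G * forearm_len_m
    segment_torque = forearm_mass_kg * G * com_frac * forearm_len_m
    return load_torque + segment_torque

# e.g. a 5 kg dumbbell held in a 0.30 m, 1.5 kg forearm:
# elbow_torque(5, 0.30, 1.5) -> about 16.6 N*m
```

This is exactly the kind of first approximation the abstract describes: adequate at the whole-limb level, while precise tissue-level answers require continuum mechanics.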
Abstract:
Eleven Pro-Am curators of Australian television history were interviewed about their practice. The data helps us to understand the relationship between professional and Pro-Am approaches to Australian television history. There is no simple binary – the lines are blurred – but there are some differences. Pro-Am curators of Australian television history are not paid for their work and present other motivations for practice – particularly being that ‘weird child’ who was obsessed with gathering information and objects related to television. They have freedom to curate only programs and genres that interest them, and they tend to collect merchandise as much as program texts themselves. And they have less interest in formally cataloguing their material than do professional curators of Australian television history.
Abstract:
Laboratories and hands-on technical learning have always been a part of engineering- and science-based university courses. They provide the interface where theory meets practice, and students may develop professional skills through interacting with real objects in an environment that models appropriate standards and systems. Laboratories in many countries are facing challenges to their sustainable operation and effectiveness. In some countries, such as Australia, significantly reduced funding and staff reductions are eroding a once-strong base of technical infrastructure. Other countries, such as Thailand, are seeking to develop their laboratory infrastructure and need skill development, management and staffing structures in technical areas. In this paper the authors address the need for technical development with reference to work undertaken in Thailand and Australia. The authors identify the roads their respective university sectors are on and point out problems and opportunities. It is hoped that the crossroads where we meet will result in better directions for both.
Abstract:
Now as in earlier periods of acute change in the media environment, new disciplinary articulations are producing new methods for media and communication research. At the same time, established media and communication studies methods are being recombined, reconfigured, and remediated alongside their objects of study. This special issue of JOBEM seeks to explore the conceptual, political, and practical aspects of emerging methods for digital media research. It does so at the conjuncture of a number of important contemporary trends: the rise of a “third wave” of the Digital Humanities and the “computational turn” (Berry, 2011) associated with natively digital objects and the methods for studying them; the apparently ubiquitous Big Data paradigm, with its various manifestations across academia, business, and government, that brings with it a rapidly increasing interest in social media communication and online “behavior” from the “hard” sciences; along with the multisited, embodied, and emplaced nature of everyday digital media practice.
Abstract:
This paper presents a comparative study to evaluate the usability of a tag-based interface alongside the present ‘conventional’ interface in the Australian mobile banking context. The tag-based interface is based on user-assigned tags attached to banking resources, with support for different types of customization. The conventional interface is based on standard HTML objects such as select boxes, lists and tables, with limited customization. A total of 20 banking users evaluated both interfaces on a set of tasks and completed a post-test usability questionnaire. Efficiency, effectiveness, and user satisfaction were considered in evaluating the usability of the interfaces. Results of the evaluation show improved usability, in terms of user satisfaction, with the tag-based interface compared to the conventional interface. This outcome is most apparent among participants without prior mobile banking experience. Therefore, there is potential for the tag-based interface to improve user satisfaction with mobile banking and also to positively affect the adoption and acceptance of mobile banking, particularly in Australia.
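The mechanism behind a tag-based interface can be sketched with a small data structure: users attach their own tags to banking resources and later retrieve resources by tag rather than by navigating fixed menus. All class, method and resource names below are hypothetical illustrations, not the interface evaluated in the study.

```python
# Illustrative sketch of user-assigned tagging of banking resources.
# Names and example data are hypothetical.
class TagStore:
    def __init__(self):
        self._tags = {}  # tag -> set of resource ids

    def tag(self, resource_id, *tags):
        """Attach one or more user-chosen tags to a banking resource."""
        for t in tags:
            self._tags.setdefault(t.lower(), set()).add(resource_id)

    def lookup(self, tag):
        """Return all resources the user filed under this tag, sorted."""
        return sorted(self._tags.get(tag.lower(), set()))

# A user tags an account and a payee with their own vocabulary:
store = TagStore()
store.tag("acct:everyday", "bills", "rent")
store.tag("payee:landlord", "rent")
```

Because the vocabulary is the user's own, retrieval matches how the user already thinks about their finances, which is one plausible source of the satisfaction gains the study reports.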
Abstract:
This dissertation analyses how physical objects are translated into digital artworks using techniques which can lead to ‘imperfections’ in the resulting digital artwork, imperfections that are typically removed to arrive at a ‘perfect’ final representation. The dissertation discusses the adaptation of existing techniques into an artistic workflow that acknowledges and incorporates the imperfections of translation into the final pieces. It presents an exploration of the relationship between physical and digital artefacts and the processes used to move between the two. The work explores the ‘craft’ of digital sculpting and the technology used in producing what the artist terms ‘a naturally imperfect form’, incorporating knowledge of traditional sculpture, an understanding of anatomy and an interest in the study of bones (osteology). The outcomes of the research are presented as a series of digital sculptural works, exhibited as a collection of curiosities in multiple mediums, including interactive game spaces, augmented reality (AR), rapid prototype prints (RP) and video displays.