998 results for 7140-201
Abstract:
In this article we identify how computational automation achieved through programming has enabled a new class of music technologies with generative music capabilities. These generative systems can have a degree of music-making autonomy that impacts on our relationships with them; we suggest that this coincides with a shift in the music-equipment relationship from tool use to a partnership. This partnership relationship can occur when we use technologies that display qualities of agency. It raises questions about the kinds of skills and knowledge that are necessary to interact musically in such a partnership. These are qualities of musicianship we call eBility. In this paper we seek to define what eBility might consist of and how consideration of it might affect music education practice. The 'e' in eBility refers not only to the electronic nature of computing systems but also to the ethical, enabling, experiential and educational dimensions of the creative relationship with technologies with agency. We hope to initiate a discussion around differentiating what we term representational technologies from those with agency and begin to uncover the implications of these ideas for music educators in schools and communities. We hope also to elucidate the emergent theory and practice that has enabled the development of strategies for optimising this kind of eBility where the tool becomes partner. The identification of musical technologies with agency adds to the authors’ list of metaphors for technology use in music education that previously included tool, medium and instrument. We illustrate these ideas with examples and with data from our work with the jam2jam interactive music system. In this discussion we will outline our experiences with jam2jam as an example of a technology with agency and describe the aspects of eBility that interaction with it promotes.
Abstract:
Since the availability of 3D full body scanners and the associated software systems for operations with large point clouds, 3D anthropometry has been marketed as a breakthrough and milestone in ergonomic design. The assumptions made by the representatives of the 3D paradigm need to be critically reviewed, though. 3D anthropometry has advantages as well as shortfalls, which need to be carefully considered. While it is apparent that the measurement of a full body point cloud allows for easier storage of raw data and improves quality control, the difficulties in calculating standardized measurements from the point cloud are widely underestimated. Early studies that used 3D point clouds to derive anthropometric dimensions have shown unacceptable deviations from the standardized results measured manually. While 3D human point clouds provide a valuable tool for replicating specific individuals for further virtual studies, or for personalizing garments, their use in ergonomic design must be critically assessed. Ergonomic, volumetric problems are defined by their two-dimensional boundaries or one-dimensional sections. A 1D/2D approach is therefore sufficient to solve an ergonomic design problem. As a consequence, all modern 3D human manikins are defined by the underlying anthropometric girths (2D) and lengths/widths (1D), which can be measured efficiently using manual techniques. Traditionally, ergonomists have taken a statistical approach to design for generalized percentiles of the population rather than for a single user. The underlying method is based on the distribution function of meaningful one- and two-dimensional anthropometric variables. Compared to these variables, the distribution of human volume has no ergonomic relevance. On the other hand, if volume is to be seen as a two-dimensional integral or distribution function of length and girth, the calculation of combined percentiles (a common ergonomic requirement) is undefined. Consequently, we suggest critically reviewing the cost and use of 3D anthropometry. We also recommend making proper use of widely available one- and two-dimensional anthropometric data in ergonomic design.
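To make the combined-percentile point concrete, the following minimal Python sketch (with purely hypothetical means, standard deviations and correlation) simulates two correlated body dimensions and shows that designing to the univariate 95th percentile of each does not accommodate 95% of the population, which is why a single "combined percentile" is not defined by the univariate values alone.

```python
# Minimal sketch (not from the paper): illustrates why univariate percentiles
# cannot simply be combined for correlated anthropometric variables.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: stature (mm) and chest girth (mm), correlated r ~ 0.4
mean = [1750.0, 980.0]
cov = [[60.0**2, 0.4 * 60.0 * 80.0],
       [0.4 * 60.0 * 80.0, 80.0**2]]
stature, girth = rng.multivariate_normal(mean, cov, size=100_000).T

p95_stature = np.percentile(stature, 95)
p95_girth = np.percentile(girth, 95)

# Share of the population inside BOTH univariate 95th-percentile limits
both = np.mean((stature <= p95_stature) & (girth <= p95_girth))
print(f"Accommodated on both variables: {both:.1%}")  # noticeably below 95%
```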
Abstract:
Particulate matter (PM) emissions comprise a complex mixture of solid and liquid particles suspended in a gas, and PM emissions from diesel engines are a major contributor to ambient air pollution. Whilst epidemiological studies have shown a link between increased ambient PM emissions and respiratory morbidity and mortality, studies of this design are not able to identify the PM constituents responsible for driving adverse respiratory health effects. This review explores in detail the physico-chemical properties of diesel particulate matter (DPM), and identifies the constituents of this pollution source that are responsible for the development of respiratory disease. In particular, this review shows that the DPM surface area and adsorbed organic compounds play a significant role in driving chemical and cellular processes that, if sustained, can lead to the development of adverse respiratory health effects. The mechanisms of injury involved include inflammation, innate and acquired immunity, and oxidative stress. Understanding the mechanisms of lung injury from DPM will enhance efforts to protect at-risk individuals from the harmful respiratory effects of air pollutants.
Abstract:
When I was first invited to teach a women's studies course called Sex Trafficking in 2002, most of my students had never heard of the issue. Internet and literature searches for "trafficking" mostly turned up references to trafficking in drugs and weapons, not people. When I revised the course for a topical capstone in Criminology, Justice, and Policy Studies in 2006, all of my students had heard about human trafficking, and a handful had already studied it in other classes. The availability of books, films, scholarly articles, and advocacy pieces had all increased exponentially since I first became engaged in the field. This bounty provided a wealth of resources for teaching but also presented a greater challenge when it came to deciding which texts to include. It also added to the inevitable pedagogical angst over what to leave out. I came to know about trafficking by accident, when I was hired as a research assistant at The Protection Project (TPP) in 1999. In my time at TPP I authored a literature review on human trafficking. At that time, my comprehensive database of sources contained fewer than one hundred books and articles, a few UN documents, a handful of films, and some websites from nongovernmental organizations. My review of the literature inevitably reflected the ideological chasm between those who saw trafficking as primarily a labor, migration, and rights issue and those who saw it as primarily a sexual exploitation issue. On the policy end, these ideological orientations created bizarre bedfellows of individuals and organizations that otherwise would have been at odds. The ideological divide has not diminished in the intervening years, and it is important to be aware of and to negotiate this in designing a course on trafficking. As a feminist teacher, I was very aware of the divisions among feminists on the subject of trafficking, and was interested in communicating these differences to students who were not well versed in the varieties of feminist thought. I was also mindful of the difficulties my American students had in engaging with some of the course texts and issues the first time around. For some students, moral judgments about prostitutes were as far as they were able to go in engaging with the course. These students could not find a way in to think about the many issues involved in trafficking. How could I reach them? In this article, I share some of my texts and tactics with others who might find themselves in a position to teach about human trafficking. I include my case for why feminist teachers should teach trafficking, an overview of the debate that divides the field, my rationale for organizing the course the way that I did, issues to consider when designing a course on trafficking, and some suggested readings, films, and web resources.
Abstract:
Ultraendurance exercise training places large energy demands on athletes and causes a high turnover of vitamins through sweat losses, metabolism, and the musculoskeletal repair process. Ultraendurance athletes may not consume sufficient quantities or quality of food in their diet to meet these needs. Consequently, they may use oral vitamin and mineral supplements to maintain their health and performance. We assessed the vitamin and mineral intake of ultraendurance athletes in their regular diet, in addition to oral vitamin and mineral supplements. Thirty-seven ultraendurance triathletes (24 men and 13 women) completed a 7-day nutrition diary including a questionnaire to determine nutrition adequacy and supplement intake. Compared with dietary reference intakes for the general population, both male and female triathletes met or exceeded all recommended intakes except for vitamin D. In addition, female athletes consumed slightly less than the recommended daily intake for folate and potassium; however, the difference was trivial. Over 60% of the athletes reported using vitamin supplements, of which vitamin C (97.5%), vitamin E (78.3%), and multivitamins (52.2%) were the most commonly used. Almost half (47.8%) of the athletes who used supplements did so to prevent or reduce cold symptoms. Only 1 athlete used supplements on formal medical advice. Vitamin C and E supplementation was common in ultraendurance triathletes, despite no evidence of dietary deficiency in these 2 vitamins.
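As a rough illustration of the diary-versus-reference comparison described above, the short Python sketch below summarises hypothetical mean daily intakes against placeholder reference values; the nutrient names and figures are assumptions for illustration, not the study's data.

```python
# Hypothetical sketch of the diary-vs-reference comparison described above.
# Nutrient names and reference values are placeholders, not the study's data.

daily_intakes_mg = {           # mean of a 7-day food diary, per nutrient
    "vitamin_C": 120.0,
    "vitamin_D": 0.004,        # 4 micrograms expressed in mg
    "folate": 0.35,
}
reference_intake_mg = {        # illustrative reference values only
    "vitamin_C": 45.0,
    "vitamin_D": 0.015,
    "folate": 0.40,
}

for nutrient, intake in daily_intakes_mg.items():
    ref = reference_intake_mg[nutrient]
    status = "meets/exceeds" if intake >= ref else "below"
    print(f"{nutrient}: {intake / ref:.0%} of reference ({status})")
```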
Abstract:
Background and Objectives Laser tissue repair usually relies on hemoderivative protein solders based on serum albumin. These solders have intrinsic limitations that impair their widespread use, such as limited tensile strength of repaired tissue, poor solder solubility, and brittleness prior to laser denaturation. Furthermore, the required activation temperature of albumin solders (between 65 and 70°C) can induce significant thermal damage to tissue. In this study, we report on the design of a new polysaccharide adhesive for tissue repair that overcomes some of the shortcomings of traditional solders. Study Design/Materials and Methods Flexible and insoluble strips of chitosan adhesive (elastic modulus ~6.8 MPa, surface area ~34 mm2, thickness ~20 µm) were bonded onto rectangular sections of sheep intestine using a diode laser (continuous mode, 120 ± 10 mW, λ = 808 nm) through a multimode optical fiber with an irradiance of ~15 W/cm2. The adhesive was based on chitosan and also included indocyanine green dye (IG). The temperature between tissue and adhesive was measured using a small thermocouple (diameter ~0.25 mm) during laser irradiation. The repaired tissue was tested for tensile strength with a calibrated tensiometer. Murine fibroblasts were cultured in media extracted from the chitosan adhesive to assess cytotoxicity via cell growth inhibition over a 48-hour period. Results The chitosan adhesive successfully repaired intestine tissue, achieving a tensile strength of 14.7 ± 4.7 kPa (mean ± SD, n = 30) at a temperature of 60-65°C. Media extracted from the chitosan adhesive showed negligible toxicity to fibroblast cells under the culture conditions examined here. Conclusion A novel chitosan-based adhesive has been developed, which is insoluble, flexible, and adheres firmly to tissue upon infrared laser activation.
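For readers who want to relate the reported laser power to the stated irradiance, the short sketch below does the back-of-the-envelope arithmetic (I = P/A); it assumes a uniform beam at the adhesive surface and is not taken from the paper.

```python
# Back-of-the-envelope check (not from the paper) relating the reported laser
# power and irradiance to the implied spot size on the adhesive.
import math

power_W = 0.120            # 120 mW laser output
irradiance_W_cm2 = 15.0    # reported ~15 W/cm^2

spot_area_cm2 = power_W / irradiance_W_cm2            # I = P / A  ->  A = P / I
spot_diameter_mm = 2 * math.sqrt(spot_area_cm2 / math.pi) * 10

print(f"Implied spot area: {spot_area_cm2 * 100:.2f} mm^2")   # ~0.8 mm^2
print(f"Implied spot diameter: {spot_diameter_mm:.2f} mm")    # ~1 mm
```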
Abstract:
Fishtown is a series of mediated animated works which embody artistic conceptions of ambience and explore the interplay between foreground and background. The series draws upon a representation of natural patterns and rhythms in the ambient environment and is produced using a hybrid animation process that incorporates motion capture, dynamics and keyframe animation to construct a biomimetic peripheral rhythm. The display of the work is a crucial part of the project, and contributes considerably to the reception of the work. Based on the ambient conceptions defined by Cage, Eno and Bizzocchi, ambient animation should incorporate some form of ambient display. As Eno (1978) states, it should be as ignorable as it is interesting. The ultimate intention is to place the work outside the gallery setting, to provide a more neutral ambient setting for viewing; the use of an ambient display is therefore necessary if the work is to be situated in an ambient setting. Craig Walsh is a contemporary artist producing work for large-scale projections in ambient settings. Completing Walsh's masterclass in 2011 (Tanawha Arts and Ecology Centre) has been an important factor in arriving at a strategy for the display of the Fishtown series. The most recent work in the Fishtown series was developed during a residency at the Crane Arts studios in Philadelphia, USA, in August 2012, and comprises a screen-based animated work utilizing large-scale digital projection. Documentation of this work can be found at the Crane Arts Residency website: http://cranearts.qcagriffith.com/crane-arts-residency-chris-denaro
Abstract:
Discourses of public education reform, like that exemplified within the Queensland Government’s future vision document, Queensland State Education-2010 (QSE-2010), position schooling as a panacea to pervasive social instability and a means to achieve a new consensus. However, in unravelling the many conflicting statements that conjoin to form education policy and inform related literature (Ball, 1993), it becomes clear that education reform discourse is polyvalent (Foucault, 1977). Alongside visionary statements that speak of public education as a vehicle for social justice are the (re)visionary, or those reflecting neoliberal individualism and a conservative politics. In this paper, it is argued that the latter coagulate to form strategic discursive practices which work to (re)secure dominant relations of power. Further, discussion of the characteristics needed by the “ideal” future citizen of Queensland reflects efforts to ‘tame change through the making of the child’ (Popkewitz, 2004, p.201). The casualties of this (re)vision and the refusal to investigate the pathologies of “traditional” schooling are the children who, for whatever reason, do not conform to the norm of the desired school child as an “ideal” citizen-in-the-making and who become relegated to alternative educational settings.
Abstract:
MesoLite, a zeolite material manufactured by NanoChem Holdings Pty Ltd, is made by caustic reaction of kaolin at temperatures between 80 and 95°C. This material has a moderate surface area (9-12 m2/g) and a very high cation exchange capacity (500 meq/100 g). To measure the availability to plants of the K in K-MesoLite, wheat was grown with K-MesoLite or a soluble fertiliser (KCl) in non-leached pots in a glasshouse. The weights and elemental compositions of the plants were compared after four weeks of growth. Plants grown with K-MesoLite were slightly larger than those grown with KCl. The elemental compositions of the plants were similar except for Si, which was significantly higher in the plants grown with K-MesoLite than in those fertilised with KCl. K from K-MesoLite is therefore readily available to plants.
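As a rough check on what a cation exchange capacity of 500 meq/100 g implies, the sketch below estimates the maximum exchangeable potassium per 100 g of MesoLite, assuming the exchange sites are fully saturated with K+ (an assumption for illustration, not a reported measurement).

```python
# Rough estimate (assumption: fully K-saturated exchange sites) of how much
# potassium the stated cation exchange capacity could hold.

cec_meq_per_100g = 500.0      # reported CEC of MesoLite
k_mg_per_meq = 39.1           # K+ is monovalent: 39.1 g/mol -> 39.1 mg per meq

k_mg_per_100g = cec_meq_per_100g * k_mg_per_meq
print(f"Max exchangeable K: {k_mg_per_100g / 1000:.1f} g per 100 g "
      f"(~{k_mg_per_100g / 1000:.0f}% K by mass)")   # ~19.6 g per 100 g
```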
Abstract:
Rainfall has been identified as one of the main causes of embankment failures in areas that experience high annual rainfall. The inclination of the embankment slope is important for its stability during rainfall. In this study, instrumented model embankments were subjected to artificial rainfall to investigate the effects of slope inclination on their stability. The results suggested that when the slope inclination is greater than the friction angle of the soil, failure is initiated by the loss of soil suction, whereas when it is smaller than the friction angle, failure is initiated by the positive pore water pressure that develops at the toe of the slope. Further, slopes become more susceptible to sudden collapse during rainfall as the slope angle increases.
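The comparison of slope inclination with the soil friction angle echoes the textbook infinite-slope relation for a dry, cohesionless soil, FS = tan(φ)/tan(β). The sketch below evaluates this relation for a hypothetical friction angle; it illustrates why the two angles are compared and is not the study's analysis.

```python
# Textbook infinite-slope check for dry, cohesionless soil (not the study's
# model): factor of safety FS = tan(phi) / tan(beta). It illustrates why the
# slope angle is compared against the soil friction angle.
import math

def factor_of_safety(slope_deg: float, friction_deg: float) -> float:
    return math.tan(math.radians(friction_deg)) / math.tan(math.radians(slope_deg))

friction_angle = 33.0   # hypothetical soil friction angle (degrees)
for slope in (25.0, 35.0, 45.0):
    fs = factor_of_safety(slope, friction_angle)
    regime = ("steeper than phi (suction-loss mechanism)" if slope > friction_angle
              else "flatter than phi (pore-pressure mechanism)")
    print(f"slope {slope:>4.1f} deg: FS = {fs:.2f} -> {regime}")
```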
Abstract:
With the overwhelming increase in the amount of text on the web, it is almost impossible for people to keep abreast of up-to-date information. Text mining is a process by which interesting information is derived from text through the discovery of patterns and trends. Text mining algorithms are used to guarantee the quality of extracted knowledge. However, patterns extracted using text or data mining algorithms are often noisy and inconsistent. This raises several challenges: how to understand these patterns, whether the model that has been used is suitable, and whether all the extracted patterns are relevant. Furthermore, the research raises the question of how to assign a correct weight to the extracted knowledge. To address these issues, this paper presents a text post-processing method, which uses a pattern co-occurrence matrix to find the relations between extracted patterns in order to reduce noisy patterns. The main objective of this paper is not only to reduce the number of closed sequential patterns, but also to improve the performance of pattern mining. The experimental results on the Reuters Corpus Volume 1 data collection and TREC filtering topics show that the proposed method is promising.
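A pattern co-occurrence matrix of the kind described can be illustrated with a short sketch: count how often extracted patterns appear in the same document and prune patterns with weak co-occurrence. The patterns, threshold and pruning rule below are hypothetical placeholders, not the paper's algorithm.

```python
# Illustrative sketch (not the paper's algorithm) of a pattern co-occurrence
# matrix: count how often extracted patterns appear in the same document and
# prune patterns that rarely co-occur with any other pattern.
from itertools import combinations
from collections import defaultdict

# Each document is represented by the set of extracted patterns it contains.
docs_patterns = [
    {"oil price", "stock market"},
    {"oil price", "stock market", "interest rate"},
    {"football score"},                      # an isolated, possibly noisy pattern
]

cooccur = defaultdict(int)
for patterns in docs_patterns:
    for a, b in combinations(sorted(patterns), 2):
        cooccur[(a, b)] += 1

# Keep patterns whose strongest co-occurrence meets a (hypothetical) threshold.
min_support = 2
strength = defaultdict(int)
for (a, b), count in cooccur.items():
    strength[a] = max(strength[a], count)
    strength[b] = max(strength[b], count)

all_patterns = set().union(*docs_patterns)
kept = {p for p in all_patterns if strength[p] >= min_support}
print("kept:", kept)          # the isolated 'football score' pattern is pruned
```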
Abstract:
Background: Nurse practitioner education and practice has been guided by generic competency standards in Australia since 2006. Development of specialist competencies has been less structured, and there are no formal standards to guide education and continuing professional development for specialty fields. There is limited international research, and no Australian research, into the development of specialist nurse practitioner competencies; research into specialist emergency nurse practitioner competencies has not been conducted in Australia. This pilot study aimed to test data collection methods, tools and processes in preparation for a larger national study to investigate specialist competency standards for emergency nurse practitioners. Methods: Mixed methods research was conducted with a sample of experienced emergency nurse practitioners. Deductive analysis of data from a focus group workshop informed the development of a draft specialty competency framework. The framework was subsequently subjected to systematic scrutiny for consensus validation through a two-round Delphi study. Results: The first Delphi round had a 100% response rate; the second round a 75% response rate. The scores for all items in both rounds were above the 80% cut-off, with the lowest mean score being 4.1 (82%) in the first round. Conclusion: The authors collaborated with emergency nurse practitioners to produce preliminary data on the formation of specialty competencies as a first step in developing an Australian framework.
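For context on the quoted figures, the small sketch below shows how a mean rating on a 5-point Likert scale maps onto the percentage consensus values and the 80% cut-off (a mean of 4.1 corresponds to 82%); the item means other than 4.1 are hypothetical.

```python
# Simple check (illustrative, not the study's analysis code) of how a mean
# Likert rating maps onto the percentage consensus figures quoted above.

def consensus_percent(mean_score: float, scale_max: int = 5) -> float:
    return 100.0 * mean_score / scale_max

cutoff = 80.0
for item_mean in (4.1, 4.6, 5.0):      # hypothetical item means on a 5-point scale
    pct = consensus_percent(item_mean)
    verdict = "retained" if pct >= cutoff else "dropped"
    print(f"mean {item_mean:.1f} -> {pct:.0f}% ({verdict})")
```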