822 results for enlarged thought-storytelling


Relevance: 10.00%

Publisher:

Abstract:

The processes of digitisation and deregulation have transformed the production, distribution and consumption of information and entertainment media over the past three decades. Today, researchers are confronted with landscapes of domestic and personal media profoundly different from those faced by the pioneers of qualitative audience research whose work came to form much of the conceptual basis of Cultural Studies, first in Britain and North America and subsequently across all global regions. The process of media convergence, as a consequence of the dual forces of digitisation and deregulation, thus constitutes a central concept in the analysis of popular mass media. From the study of the internationalisation and globalisation of media content and changing regimes of media production, via the social shaping of communication technologies and, conversely, the impact of communication technology on social, cultural and political realities, to the emergence of transmedia storytelling, the interplay of intertextuality and genre, and the formation of mediated social networks, convergence informs and shapes contemporary conceptual debates in the field of popular communication and beyond. However, media convergence not only challenges the conceptual canon of (popular) communication research, but also poses profound methodological challenges. As boundaries between producers and consumers become increasingly fluid, formerly stable fields and categories of research such as industries, texts and audiences intersect and overlap, requiring combined and new research strategies. This preconference aims to offer a forum to present and discuss methodological innovations in the study of contemporary media and the analysis of the social, cultural, and political impact and challenges arising through media convergence.
The preconference thus aims to focus on the following methodological questions and challenges:
• New strategies of audience research responding to the increasing individualisation of popular media consumption.
• Methods of data triangulation in and through the integrated study of media production, distribution and consumption.
• Bridging the methodological and often associated conceptual gap between qualitative and quantitative research in the study of popular media.
• The future of ethnographic audience and production research in light of blurring boundaries between media producers and consumers.
• A critical re-examination of which textual configurations can be meaningfully described and studied as text.
• Methodological innovations aimed at assessing the macro social, cultural and political impact of mediatization (including, but not limited to, "creative methods").
• Methodological responses to the globalisation of popular media and the practicalities of international and transnational comparative research.
• An exploration of new methods required in the study of media flow and intertextuality.


Recently published studies have not only demonstrated that laser printers are often significant sources of ultrafine particles, but have also shed light on particle formation mechanisms. While the role of fuser roller temperature as a factor affecting particle formation rate has been postulated, its impact has never been quantified. To address this gap in knowledge, this study measured emissions from 30 laser printers in a chamber using a standardized printing sequence, as well as monitoring fuser roller temperature. Based on a simplified mass balance equation, the average emission rates of particle number, PM2.5 and O3 were calculated. The results showed that: almost all printers were found to be high particle number emitters (i.e. > 1.01×10^10 particles/min); colour printing generated more PM2.5 than monochrome printing; and all printers generated significant amounts of O3. Particle number emissions varied significantly during printing and followed the cycle of fuser roller temperature variation, which points to temperature being the strongest factor controlling emissions. For two sub-groups of printers using the same technology (heating lamps), systematic positive correlations, in the form of a power law, were found between average particle number emission rate and average roller temperature. Other factors, such as fuser material and structure, are also thought to play a role, since no such correlation was found for the remaining two sub-groups of printers using heating lamps, or for the printers using heating strips. In addition, O3 and total PM2.5 were not found to be statistically correlated with fuser temperature.
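The emission-rate calculation described above rests on a simplified mass balance for a well-mixed chamber. A minimal sketch of one common form of that balance is below; the study's exact equation is not reproduced here, and the chamber volume, air-exchange rate and concentrations are illustrative assumptions only.

```python
# Hedged sketch: simplified well-mixed chamber mass balance, a common
# form used to back out a source emission rate from measured
# concentrations. All numeric values below are invented for
# illustration, not taken from the study.

def emission_rate(c_now, c_prev, dt_min, volume_m3, aer_per_min):
    """Emission rate (particles/min) from two successive number
    concentrations (particles/m^3) in a chamber of volume V (m^3)
    with air-exchange rate a (1/min):  E = V * (dC/dt + a * C).
    """
    dc_dt = (c_now - c_prev) / dt_min
    return volume_m3 * (dc_dt + aer_per_min * c_now)

# Illustrative numbers only: 1 m^3 chamber, one air change per hour,
# concentration rising from 1e9 to 5e9 particles/m^3 over 2 minutes.
rate = emission_rate(5e9, 1e9, 2.0, 1.0, 1.0 / 60.0)
```

Averaging such instantaneous rates over a print job would give the kind of average emission rate the abstract reports.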


Compressed natural gas (CNG) engines are thought to be less harmful to the environment than conventional diesel engines, especially in terms of particle emissions. Although this is true with respect to particulate matter (PM) emissions, results of particle number (PN) emission comparisons have been inconclusive. In this study, results of on-road and dynamometer studies of buses were used to derive several important conclusions. We show that, although PN emissions from CNG buses are significantly lower than from diesel buses at low engine power, they become comparable at high power. For diesel buses, PN emissions are not significantly different between acceleration and operation at steady maximum power. However, the corresponding PN emissions from CNG buses when accelerating are an order of magnitude greater than when operating at steady maximum power. During acceleration under heavy load, PN emissions from CNG buses are an order of magnitude higher than from diesel buses. The particles emitted from CNG buses are too small to contribute to PM10 emissions or to a reduction of visibility, and may consist of semivolatile nanoparticles.


The Tamborine Mt area is a popular residential and tourist area in the Gold Coast hinterland, SE Qld. The 15 km² area occurs on elevated remnant Tertiary basalts of the Beechmont Group, which comprise a number of mappable flow units originally derived from the Tweed volcanic centre to the south. The older Albert Basalt (Tertiary), which underlies the Beechmont Basalt at the southern end of the investigation area, is thought to be derived from the Focal Peak volcanic centre to the south-west. The basalts contain a locally significant ‘un-declared’ groundwater resource, which is utilised by the Tamborine Mt community for:
• domestic purposes, to supplement rainwater tank supplies;
• commercial-scale horticulture; and
• commercial export off-Mountain for bottled water.
There is no reticulated water supply, and all waste water is treated on-site through domestic-scale WTPs. Rainforest and other riparian ecosystems that attract residents and tourist dollars to the area are also reliant on the groundwater that discharges to springs and surface streams on and around the plateau. Issues regarding a lack of compiled groundwater information, groundwater contamination, and groundwater sustainability are being investigated by QUT, utilising funding provided by the Federal Government’s ‘Caring for our Country’ programme through SEQ Catchments Ltd. The objectives of the two-year project, which started in April 2009, are to:
• characterise the nature and condition of groundwater / surface water systems in the Tamborine Mountain area in terms of the issues being raised;
• engage and build capacity within the community to source local knowledge, encourage participation, raise awareness and improve understanding of the impacts of land and water use;
• develop a stand-alone 3D visualisation model for dissemination into the community and use as a communication tool.


The structures of bis(guanidinium) rac-trans-cyclohexane-1,2-dicarboxylate, 2(CH6N3+) C8H10O4^2− (I), guanidinium 3-carboxybenzoate monohydrate, CH6N3+ C8H5O4− · H2O (II), and bis(guanidinium) benzene-1,4-dicarboxylate trihydrate, 2(CH6N3+) C8H4O4^2− · 3H2O (III), have been determined and the hydrogen bonding in each examined. All three compounds form three-dimensional hydrogen-bonded framework structures. In anhydrous (I), both guanidinium cations give classic cyclic R²₂(8) N-H...O,O'(carboxyl) and asymmetric cyclic R¹₂(6) hydrogen-bonding interactions, while one cation gives an unusual enlarged cyclic interaction with O acceptors of separate ortho-related carboxyl groups [graph set R²₂(11)]. Cations and anions also associate across inversion centres, giving cyclic R²₄(8) motifs. In the 1:1 guanidinium salt (II), the cation gives two separate cyclic R¹₂(6) interactions, one with a carboxyl O acceptor, the other with the water molecule of solvation. The structure is unusual in that both carboxyl groups give short inter-anion O...H...O contacts, one across a crystallographic inversion centre [2.483(2) Å], the other about a two-fold axis of rotation [2.462(2) Å], with a half-occupancy hydrogen delocalized on the symmetry element in each. The water molecule links the cation-anion ribbon structures into a three-dimensional framework. In (III), the repeating molecular unit comprises a benzene-1,4-dicarboxylate dianion which lies across a crystallographic inversion centre, two guanidinium cations and two water molecules of solvation (each set related by two-fold rotational symmetry), and a single water molecule which lies on a two-fold axis. Each guanidinium cation gives three types of cyclic interactions with the dianions: one R¹₂(6), the others R²₃(8) and R³₃(10) (both of these involving the water molecules), giving a three-dimensional structure through bridges down the b cell direction.
The water molecule at the general site also forms an unusual cyclic R²₂(4) homodimeric association across an inversion centre [O-H...O, 2.875(2) Å]. The work described here provides further examples of the common cyclic guanidinium cation...carboxylate anion hydrogen-bonding associations, as well as featuring other less common cyclic motifs.


Formal mentoring programs are accepted as a valuable strategy for developing young and emerging artists. This thesis presents the results of an evaluation of the SPARK National Young Artists Mentoring Program (SPARK). SPARK was a ten-month formal mentoring program managed by Youth Arts Queensland (YAQ) on behalf of the Australia Council for the Arts from 2003 to 2009. The program aimed to assist young and emerging Australian artists aged 18 to 26 to establish a professional career in the arts. It was a highly successful formal arts mentoring program that facilitated 58 mentorships between young and emerging artists and professional artists from across Australia in five program rounds over its seven-year lifespan. Interest from other cultural organisations looking to develop their own formal mentoring programs encouraged YAQ to commission this research to determine how the program works to achieve its effects. This study was conducted with young and emerging artists who participated in SPARK from 2003 to 2008. It took a theory-driven evaluation approach to examine SPARK as an example of what makes formal arts mentoring programs effective. It focused on understanding the program’s theory, or how the program worked to achieve its desired outcomes. The program activities and assumed responses to program activities were mapped out in a theories-of-change model. This theoretical framework was then used to plan the points for data collection. Through the process of data collection, actual program developments were compared to the theoretical framework to see what occurred as expected and what did not. The findings were then generalised for knowledge and wider application. The findings demonstrated that SPARK was a successful and effective program and an exemplar model of a formal mentoring program preparing young and emerging artists for professional careers in the arts.
They also indicate several ways in which this already strong program could be further improved, including: looking at the way mentoring relationships are set up and how the mentoring process is managed; considering the balance between artistic and professional development; developing career development competencies and networking skills; taking into account the needs of young and emerging artists to develop their professional identity and build confidence; and giving more thought to the desired program outcomes and considering the issue of timeliness and readiness for career transition. From these findings, together with principles outlined in the mentoring and career development literature, a number of necessary conditions have been identified for developing effective mentoring programs in the career development of young and emerging artists.


The human knee acts as a sophisticated shock absorber during landing movements. The ability of the knee to perform this function in the real world is remarkable given that the context of the landing movement may vary widely between performances. For this reason, humans must be capable of rapidly adjusting the mechanical properties of the knee under impact load in order to satisfy many competing demands. However, the processes involved in regulating these properties in response to changing constraints remain poorly understood. In particular, the effects of muscle fatigue on knee function during step landing are yet to be fully explored. Fatigue of the knee muscles is significant for 2 reasons. First, it is thought to have detrimental effects on the ability of the knee to act as a shock absorber and is considered a risk factor for knee injury. Second, fatigue of knee muscles provides a unique opportunity to examine the mechanisms by which healthy individuals alter knee function. A review of the literature revealed that the effect of fatigue on knee function during landing has been assessed by comparing pre- and post-fatigue measurements, with fatigue induced by a voluntary exercise protocol. The information is limited by inconsistent findings, with key measures such as knee stiffness showing increased stiffness, decreased stiffness or no detectable change across experiments. Further consideration of the literature questions the validity of the models used to induce and measure fatigue, as well as the pre-post study design, which may explain the lack of consensus in the results. These limitations cast doubt on the usefulness of the available information and identify a need to investigate alternative approaches.
Based on the results of this review, the aims of this thesis were to:
• evaluate the methodological procedures used in validation of a fatigue model;
• investigate the adaptation and regulation of post-impact knee mechanics during repeated step landings;
• use this new information to test the effects of fatigue on knee function during a step-landing task.
To address the aims of the thesis, 3 related experiments were conducted that collected kinetic, kinematic and electromyographic data from 3 separate samples of healthy male participants. The methodologies involved optoelectronic motion capture (VICON), isokinetic dynamometry (System3 Pro, BIODEX) and wireless surface electromyography (Zerowire, Aurion, Italy). Fatigue indicators and knee function measures used in each experiment were derived from the data. Study 1 compared the validity and reliability of repetitive stepping and isokinetic contractions with respect to fatigue of the quadriceps and hamstrings. Fifteen participants performed 50 repetitions of each exercise twice in randomised order, over 4 sessions. Sessions were separated by a minimum of 1 week’s rest, to ensure full recovery. Validity and reliability depended on a complex interaction between the exercise protocol, the fatigue indicator, the individual and the muscle of interest. Nevertheless, differences between exercise protocols indicated that stepping was less effective in eliciting valid and reliable changes in peak power and spectral compression, compared with isokinetic exercise. A key finding was that fatigue progressed in a biphasic pattern during both exercises. The point separating the 2 phases, known as the transition point, demonstrated superior between-test reliability during the isokinetic protocol, compared with stepping. However, a correction factor should be used to accurately apply this technique to the study of fatigue during landing.
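The transition point described above separates the two phases of a biphasic fatigue trend. One common way to locate such a breakpoint, sketched below under the assumption of two roughly linear phases, is to fit two least-squares line segments and choose the split that minimises total squared error; this is an illustrative method, not necessarily the one used in the thesis, and the data are synthetic.

```python
# Hedged sketch: two-segment piecewise-linear fit to locate the
# "transition point" of a biphasic series (e.g. a fatigue indicator
# over repetitions). Illustrative only; the data below are invented.

def _sse_linear(xs, ys):
    """Sum of squared errors of an ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
             if sxx else 0.0)
    return sum((y - (my + slope * (x - mx))) ** 2 for x, y in zip(xs, ys))

def transition_point(xs, ys, min_seg=3):
    """Index splitting the series into two segments with minimal SSE."""
    best_i, best_sse = None, float("inf")
    for i in range(min_seg, len(xs) - min_seg + 1):
        sse = _sse_linear(xs[:i], ys[:i]) + _sse_linear(xs[i:], ys[i:])
        if sse < best_sse:
            best_i, best_sse = i, sse
    return best_i

# Synthetic biphasic series: flat for 10 repetitions, then declining.
xs = list(range(20))
ys = [10.0] * 10 + [9.0 - 0.5 * k for k in range(10)]
split = transition_point(xs, ys)
```

The abstract's "correction factor" for applying the technique to landing data is not modelled here.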
Study 2 examined alterations in knee function during repeated landings, with a different sample (N = 12) performing 60 consecutive step-landing trials. Each landing trial was separated by 1-minute rest periods. The results provided new information in relation to the pre-post study design in the context of detecting adjustments in knee function during landing. First, participants significantly increased or decreased pre-impact muscle activity or post-impact mechanics despite environmental and task constraints remaining unchanged. This is the 1st study to demonstrate this effect in healthy individuals without external feedback on performance. Second, single-subject analysis was more effective in detecting alterations in knee function compared to group-level analysis. Finally, repeated landing trials did not reduce inter-trial variability of knee function in some participants, contrary to assumptions underpinning previous studies. The results of Studies 1 and 2 were used to modify the design of Study 3 relative to previous research. These alterations included a modified isokinetic fatigue protocol, multiple pre-fatigue measurements and single-subject analysis to detect fatigue-related changes in knee function. The study design incorporated new analytical approaches to investigate fatigue-related alterations in knee function during landing. Participants (N = 16) were measured during multiple pre-fatigue baseline trial blocks prior to the fatigue model. A final block of landing trials was recorded once the participant met the operational fatigue definition that was identified in Study 1. The analysis revealed that the effects of fatigue in this context are heavily dependent on the compensatory response of the individual. A continuum of responses was observed within the sample for each knee function measure. Overall, pre-impact preparation and post-impact mechanics of the knee were altered with highly individualised patterns.
Moreover, participants used a range of active or passive pre-impact strategies to adapt post-impact mechanics in response to quadriceps fatigue. The unique patterns identified in the data represented an optimisation of knee function based on the priorities of the individual. The findings of these studies explain the lack of consensus within the literature regarding the effects of fatigue on knee function during landing. First, functional fatigue protocols lack validity in inducing fatigue-related changes in mechanical output and spectral compression of surface electromyography (sEMG) signals, compared with isokinetic exercise. Second, fatigue-related changes in knee function during landing are confounded by inter-individual variation, which limits the sensitivity of group-level analysis. By addressing these limitations, the 3rd study demonstrated the efficacy of new experimental and analytical approaches to observe fatigue-related alterations in knee function during landing. Consequently, this thesis provides new perspectives on the effects of fatigue on knee function during landing. In conclusion:
• The effects of fatigue on knee function during landing depend on the response of the individual, with considerable variation present between study participants, despite similar physical characteristics.
• In healthy males, adaptation of pre-impact muscle activity and post-impact knee mechanics is unique to the individual and reflects their own optimisation of demands such as energy expenditure, joint stability, sensory information and loading of knee structures.
• The results of these studies should guide future exploration of adaptations in knee function to fatigue. However, research in this area should continue with reduced emphasis on the directional response of the population and a greater focus on individual adaptations of knee function.


This paper considers the scope to develop an approach to the spatial dimensions of media and culture that is informed by cultural-economic geography. I refer to cultural-economic geography as that strand of research in the field of geography that has been informed, on the one hand, by the ‘cultural turn’ in both geographical and economic thought, and which focuses on the relationship between space, knowledge and identity in the spheres of production and consumption, and, on the other, by work by geographers that has sought to map the scale and significance of the cultural or creative industries as new drivers of the global economy. The paper considers the extent to which this work enables those engaged with urban cultural policy to get beyond some of the impasses that have arisen with the development of “creative cities” policies derived from the work of authors such as Richard Florida, as well as from the business management literature on clusters. It frames these debates in the context of recent work by Michael Curtin on media capitals, and the question of whether cities in East Asia can emerge as media capitals from outside the US-Europe-dominated transnational cultural axis.


Process modeling is a central element in any approach to Business Process Management (BPM). However, what hinders both practitioners and academics is the lack of support for assessing the quality of process models – let alone realizing high-quality process models. Existing frameworks are either highly conceptual or too general. At the same time, various techniques, tools, and research results are available that cover fragments of the issue at hand. This chapter presents the SIQ framework, which on the one hand integrates concepts and guidelines from existing frameworks and on the other links these concepts to current research in the BPM domain. Three different types of quality are distinguished, and for each of these, concrete metrics, available tools, and guidelines are provided. While the basis of the SIQ framework is thought to be rather robust, its external pointers can be updated with newer insights as they emerge.
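As an illustration of the "concrete metrics" mentioned above, the sketch below computes the coefficient of network connectivity (arcs per node), a simple structural metric from the process-model-metrics literature. Whether the SIQ framework adopts this particular metric is an assumption, and the tiny model is invented.

```python
# Hedged sketch: a simple structural quality metric for a process
# model represented as a graph. The coefficient of network
# connectivity (CNC) is one example of a concrete, tool-computable
# metric; higher values suggest a densely connected, and often harder
# to understand, model. The minimal model below is invented.

def coefficient_of_connectivity(nodes, arcs):
    """CNC = |arcs| / |nodes| for a process model graph."""
    return len(arcs) / len(nodes)

# A minimal sequential model: start -> task A -> task B -> end.
nodes = ["start", "A", "B", "end"]
arcs = [("start", "A"), ("A", "B"), ("B", "end")]
cnc = coefficient_of_connectivity(nodes, arcs)  # 3 arcs / 4 nodes
```

A tool implementing a framework of this kind would report several such metrics per quality type, alongside guidelines for interpreting them.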


Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax which is attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent, due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results to a clinical setting. The resistivity of stationary blood is isotropic, as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The generation of shear forces across the width of the tube during flow causes the cells to align with the minimal cross-sectional area facing the direction of flow, in order to minimise the shear stress experienced by the cells. This in turn results in a larger cross-sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. Published literature investigates the resistivity variations for constant blood flow. In this case, the shear forces are constant and the impedance remains constant during flow, at a magnitude which is less than that for stationary blood.
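The parallel-conductor calculation of cardiac output referred to above is often illustrated with the classic Kubicek formulation, SV = ρ(L/Z0)² · LVET · (dZ/dt)max. The sketch below implements that textbook equation; the thesis may use a different variant, and the parameter values are purely illustrative.

```python
# Hedged sketch: the classic Kubicek equation, one widely cited way
# the parallel-conductor idea is applied in impedance cardiography.
# All parameter values below are illustrative assumptions.

def stroke_volume_kubicek(rho_ohm_cm, l_cm, z0_ohm, lvet_s, dzdt_max):
    """Stroke volume (mL) from the Kubicek equation.

    rho_ohm_cm : assumed (constant) blood resistivity, ohm*cm
    l_cm       : distance between voltage electrodes, cm
    z0_ohm     : baseline thoracic impedance, ohm
    lvet_s     : left-ventricular ejection time, s
    dzdt_max   : peak rate of impedance change, ohm/s
    """
    return rho_ohm_cm * (l_cm / z0_ohm) ** 2 * lvet_s * dzdt_max

# Illustrative values: rho = 150 ohm*cm, L = 30 cm, Z0 = 25 ohm,
# LVET = 0.3 s, (dZ/dt)max = 1.2 ohm/s.
sv = stroke_volume_kubicek(150.0, 30.0, 25.0, 0.3, 1.2)
```

Multiplying the stroke volume by heart rate gives the cardiac output; a velocity-dependent resistivity, the subject of this thesis, would make ρ vary within the beat rather than remain the assumed constant.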
The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood through rigid tubes. The impedance of flowing blood was theoretically calculated using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries such as aortic valve stenosis. The result of the analytical theoretical model was compared to the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial to assess the suitability of using bioimpedance techniques to assess the presence of aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow, with correlations of -0.72 and -0.74 respectively. The results for both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r² = 0.98, experimental and r² = 0.94, theoretical).
The relationship between the impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with the haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance given the same velocity of flow. However, when the velocity was divided by the radius of the tube (labelled the reduced average velocity), the impedance response was the same for two experimental tubes with equivalent reduced average velocity but with different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius. This has not previously been verified experimentally. Time-frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components occurring at the same time point in the cycle as the velocity signal contains. This suggests that the impedance contains many of the fluctuations of the velocity signal.
Application of a theoretical steady flow model to pulsatile flow presented here has verified that the steady flow model is not adequate in calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with an increase in the degree of stenosis. The clinical results also showed a statistically significant difference in time decay constant between control and test subjects (P = 0.03). The time decay constant calculated for test subjects (τ = 180-250 s) is consistently larger than that determined for control subjects (τ = 50-130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique using the time decay constant for screening of aortic stenosis provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study.
While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
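The time decay constant τ reported above can, for a signal assumed to relax exponentially, be recovered with a simple log-linear fit. The sketch below demonstrates the idea on a synthetic noiseless decay; it is an illustrative procedure, not the estimation method used in the study, and every numeric value is invented.

```python
import math

# Hedged sketch: extracting a time decay constant tau from a signal
# assumed to relax as z(t) = z_inf + a * exp(-t / tau). A least-squares
# fit of ln(z - z_inf) against t has slope -1/tau. Illustrative only;
# the synthetic signal below is invented.

def fit_tau(ts, zs, z_inf):
    """Return tau from the least-squares slope of ln(z - z_inf) vs t."""
    ys = [math.log(z - z_inf) for z in zs]
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mt) ** 2 for t in ts))
    return -1.0 / slope

# Synthetic decay with tau = 40 s (mid-range of the reported 10-50 s).
tau_true, z_inf, a = 40.0, 100.0, 5.0
ts = [float(t) for t in range(0, 200, 10)]
zs = [z_inf + a * math.exp(-t / tau_true) for t in ts]
tau_hat = fit_tau(ts, zs, z_inf)
```

With measured (noisy) impedance data, the same fit would be restricted to the deceleration phase and z_inf would itself have to be estimated.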


Social media and digital technologies surround us. We are moving into an age of ubiquitous (that is, everywhere) computing. New media and information and communication technologies already impact on many aspects of everyday life, including work, home and leisure. These new technologies are influencing how we develop social networks, understand places and location, navigate our cities, and provide information about utilities and services, and they are creating new ways to engage and participate in our communities, in planning, in governance and in other decisions. This paper presents the initial findings on the impacts that digital communication technologies are having on public urban spaces. It develops a contextual review of the nexus between urban planning and technological developments, with examples and case studies from around the world, to highlight some of the potential directions for urban planning in Queensland and Australia. It concludes with some thought-provoking discussion points for urban planners, architects, designers and placemakers on the future of urban informatics and urban design, addressing questions such as: how can technology enhance ‘place’? how can technology be used to improve public participation? and how will technology change our requirements of public places?


Background: Clinical practice and clinical research has made a concerted effort to move beyond the use of clinical indicators alone and embrace patient focused care through the use of patient reported outcomes such as healthrelated quality of life. However, unless patients give consistent consideration to the health states that give meaning to measurement scales used to evaluate these constructs, longitudinal comparison of these measures may be invalid. This study aimed to investigate whether patients give consideration to a standard health state rating scale (EQ-VAS) and whether consideration of good and poor health state descriptors immediately changes their selfreport. Methods: A randomised crossover trial was implemented amongst hospitalised older adults (n = 151). Patients were asked to consider descriptions of extremely good (Description-A) and poor (Description-B) health states. The EQ-VAS was administered as a self-report at baseline, after the first descriptors (A or B), then again after the remaining descriptors (B or A respectively). At baseline patients were also asked if they had considered either EQVAS anchors. Results: Overall 106/151 (70%) participants changed their self-evaluation by ≥5 points on the 100 point VAS, with a mean (SD) change of +4.5 (12) points (p < 0.001). A total of 74/151 (49%) participants did not consider the best health VAS anchor, of the 77 who did 59 (77%) thought the good health descriptors were more extreme (better) then they had previously considered. Similarly 85/151 (66%) participants did not consider the worst health anchor of the 66 who did 63 (95%) thought the poor health descriptors were more extreme (worse) then they had previously considered. Conclusions: Health state self-reports may not be well considered. An immediate significant shift in response can be elicited by exposure to a mere description of an extreme health state despite no actual change in underlying health state occurring. 
Caution should be exercised in research and clinical settings when interpreting subjective patient-reported outcomes that are dependent on brief anchors for meaning. Trial Registration: Australian and New Zealand Clinical Trials Registry (#ACTRN12607000606482) http://www.anzctr.org.au
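The headline analysis reported above — a mean within-person change in EQ-VAS score tested against zero, plus the proportion of participants shifting by at least 5 points — can be sketched as follows. All scores below are randomly simulated for illustration; they are not the trial's data, and the shift parameters merely echo the reported summary statistics.

```python
import random
import statistics

random.seed(7)
n = 151  # sample size matching the trial

# Simulated EQ-VAS self-reports on the 0-100 scale (hypothetical data).
baseline = [min(100, max(0, random.gauss(65, 15))) for _ in range(n)]
# Simulate a modest upward shift after reading extreme health-state descriptors.
followup = [min(100, max(0, b + random.gauss(4.5, 12))) for b in baseline]

changes = [f - b for f, b in zip(followup, baseline)]
mean_change = statistics.mean(changes)
sd_change = statistics.stdev(changes)

# Proportion whose rating moved by >= 5 points on the 100-point VAS.
moved = sum(1 for c in changes if abs(c) >= 5)

# Paired (one-sample) t statistic for mean change differing from zero.
t_stat = mean_change / (sd_change / n ** 0.5)

print(f"mean change {mean_change:+.1f} (SD {sd_change:.1f}), "
      f"moved >=5 points: {moved}/{n}, paired t = {t_stat:.2f}")
```

In a crossover design like this one, the paired statistic is computed on within-person differences, which is why only the change scores enter the test.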

Resumo:

Lumia: art/light/motion was an exciting new media exhibition presented by the State Library of Queensland in partnership with Queensland-based Kuuki collective artists Priscilla Bracks and Gavin Sade. The exhibition explored contemporary life and encouraged thought about the future through an extraordinary collection of hand-crafted and interactive electronic creatures and installations. The beautifully crafted new media artworks in Lumia: art/light/motion combined the bespoke with art and technology to create strange but intriguing objects. Lumia invited audiences to play, learn and then ponder the way we live and the environmental and social implications of our choices.

Resumo:

Many cities worldwide face the prospect of major transformation as the world moves towards a global information order. In this new era, urban economies are being radically altered by dynamic processes of economic and spatial restructuring. The result is the creation of 'informational cities' or, to use the newer and more popular term, 'knowledge cities'. For the last two centuries, social production had been primarily understood and shaped by neo-classical economic thought, which recognized only three factors of production: land, labor and capital. Knowledge, education, and intellectual capacity were secondary, if not incidental, factors. Human capital was assumed to be either embedded in labor or just one of numerous categories of capital. In recent decades, it has become apparent that knowledge is sufficiently important to deserve recognition as a fourth factor of production. Knowledge and information, and the social and technological settings for their production and communication, are now seen as keys to development and economic prosperity.

The rise of knowledge-based opportunity has, in many cases, been accompanied by a concomitant decline in traditional industrial activity. The replacement of physical commodity production by more abstract forms of production (e.g. information, ideas, and knowledge) has, however paradoxically, reinforced the importance of central places and led to the formation of knowledge cities. Knowledge is produced, marketed and exchanged mainly in cities. Therefore, knowledge cities aim to assist decision-makers in making their cities compatible with the knowledge economy and thus able to compete with other cities. Knowledge cities enable their citizens to foster knowledge creation, knowledge exchange and innovation. They also encourage the continuous creation, sharing, evaluation, renewal and updating of knowledge. To compete nationally and internationally, cities need knowledge infrastructure (e.g. universities and research and development institutes); a concentration of well-educated people; technological, mainly electronic, infrastructure; and connections to the global economy (e.g. international companies and finance institutions for trade and investment). Moreover, they must possess the people and things necessary for the production of knowledge and, as importantly, function as breeding grounds for talent and innovation. The economy of a knowledge city creates high value-added products using research, technology, and brainpower. Both the private and public sectors value knowledge, spend money on its discovery and dissemination and, ultimately, harness it to create goods and services. Although many cities call themselves knowledge cities, currently only a few cities around the world (e.g. Barcelona, Delft, Dublin, Montreal, Munich, and Stockholm) have earned that label. Many other cities aspire to the status of knowledge city through urban development programs that target knowledge-based urban development. Examples include Copenhagen, Dubai, Manchester, Melbourne, Monterrey, Singapore, and Shanghai.

Knowledge-Based Urban Development

To date, the development of most knowledge cities has proceeded organically, as a dependent and derivative effect of global market forces. Urban and regional planning has responded slowly, and sometimes not at all, to the challenges and the opportunities of the knowledge city. That is changing, however. Knowledge-based urban development potentially brings both economic prosperity and a sustainable socio-spatial order. Its goal is to produce and circulate abstract work. Globalization in the last decades of the twentieth century was a dialectical process. On one hand, as the tyranny of distance was eroded, economic networks of production and consumption were constituted at a global scale. At the same time, spatial proximity remained as important as ever, if not more so, for knowledge-based urban development. Mediated by information and communication technology, personal contact, and the medium of tacit knowledge, organizational and institutional interactions are still closely associated with spatial proximity. The clustering of knowledge production is essential for fostering innovation and wealth creation. The social benefits of knowledge-based urban development extend beyond aggregate economic growth. On the one hand is the possibility of a particularly resilient form of urban development secured in a network of connections anchored at local, national, and global coordinates. On the other hand, quality of place and life, defined by the level of public service (e.g. health and education) and by the conservation and development of the cultural, aesthetic and ecological values that give cities their character and attract or repel the creative class of knowledge workers, is a prerequisite for successful knowledge-based urban development. The goal is a secure economy in a human setting: in short, smart growth or sustainable urban development.

Resumo:

In this chapter we present a case study set in Beloi, a fishing village located on Ataúro Island, 30 km across the sea from Díli, capital of Timor-Leste (East Timor). We explore the tension between tourism development, food security and marine conservation in a developing-country context. In order to better understand the relationships between the social, ecological and economic issues that arise in tourism planning, we use an approach and associated methodology based on storytelling, complexity theory and concept mapping. By testing scenarios with this methodology, we hope to evaluate which trade-offs are acceptable to local people in return for the hoped-for economic boost from increased tourist visitation and associated developments.
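Scenario testing over a concept map is commonly formalised as a fuzzy cognitive map: concepts become nodes, hypothesised influences become signed edge weights, and a scenario is run by clamping a driver concept and iterating the map until it settles. The sketch below illustrates that mechanism only; the node names and weight values are invented for this example and are not taken from the Beloi study.

```python
import math

# Illustrative concept map: signed weights encode hypothesised influences.
# All node names and weight values are invented for this sketch.
weights = {
    ("tourism", "local_income"): 0.6,
    ("tourism", "fish_stocks"): -0.4,
    ("fish_stocks", "food_security"): 0.7,
    ("local_income", "food_security"): 0.5,
    ("conservation", "fish_stocks"): 0.5,
}
nodes = sorted({n for pair in weights for n in pair})

def step(state):
    """One fuzzy-cognitive-map update: weighted input sum squashed to (0, 1)."""
    new = {}
    for n in nodes:
        total = state[n] + sum(
            w * state[src] for (src, dst), w in weights.items() if dst == n
        )
        new[n] = 1 / (1 + math.exp(-total))  # sigmoid squashing
    return new

def run_scenario(drivers, iterations=20):
    """Clamp the scenario's driver nodes and iterate until the map settles."""
    state = {n: 0.5 for n in nodes}  # neutral starting activation
    state.update(drivers)
    for _ in range(iterations):
        state = step(state)
        state.update(drivers)  # keep scenario drivers clamped
    return state

# Scenario: a strong push on tourism development.
result = run_scenario({"tourism": 1.0})
for n in nodes:
    print(f"{n}: {result[n]:.2f}")
```

Comparing the settled activations across scenarios (e.g. tourism clamped high versus conservation clamped high) is one way to surface the trade-offs that can then be put to local stakeholders.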