932 results for Sit-stand


Relevance: 10.00%

Abstract:

Groundwater is increasingly recognised as an important yet vulnerable natural resource, and a key consideration in water cycle management. However, the behaviour of sub-surface water systems, an important part of encouraging better water management, is difficult to communicate visually. Modern 3D visualisation techniques can be used to communicate these complex behaviours effectively, and so engage and inform community stakeholders. Most software developed for this purpose is expensive and requires specialist skills. The Groundwater Visualisation System (GVS) developed by QUT integrates a wide range of surface and sub-surface data to produce a 3D visualisation of the behaviour, structure and connectivity of groundwater/surface water systems. Surface data (elevation, surface water, land use, vegetation and geology) and data collected from boreholes (bore locations and subsurface geology) are combined to visualise the nature, structure and connectivity of these systems. Time-series data (water levels, groundwater quality, rainfall, stream flow and groundwater abstraction) are displayed as an animation within the 3D framework, or graphically, to show how water system conditions change over time. GVS delivers an interactive, stand-alone 3D visualisation product that can be used in a standard PC environment; no specialised training or modelling skills are required. The software has been used extensively in the SEQ region to inform and engage water managers and the community alike. Examples will be given of GVS visualisations developed in areas where there have been community concerns around groundwater over-use and contamination.
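The core of this kind of data integration can be illustrated with a minimal sketch: static borehole attributes (location, ground elevation) are merged with time-series water-level observations into per-timestep point sets that a 3D viewer could animate. The data structures and field names below are hypothetical, chosen only for illustration; they are not the actual GVS schema.

# A minimal, hypothetical sketch of combining static borehole attributes with
# time-series water levels into per-date "frames" suitable for animating in a
# 3D scene. Field names and values are illustrative only, not the GVS schema.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Bore:
    bore_id: str
    easting: float           # surface coordinates (m)
    northing: float
    ground_elevation: float  # ground surface elevation (m)
    levels: Dict[str, float] = field(default_factory=dict)  # date -> depth to water (m)

def water_level_frames(bores: List[Bore]) -> Dict[str, List[Tuple[float, float, float]]]:
    """For each observation date, return the 3D points (x, y, z) of the water
    table at each bore, where z = ground elevation minus depth to water."""
    frames: Dict[str, List[Tuple[float, float, float]]] = {}
    for bore in bores:
        for date, depth in bore.levels.items():
            frames.setdefault(date, []).append(
                (bore.easting, bore.northing, bore.ground_elevation - depth)
            )
    return frames

bores = [
    Bore("MW1", 502100.0, 6945300.0, 520.4, {"2009-06-01": 12.3, "2009-12-01": 14.1}),
    Bore("MW2", 502450.0, 6945010.0, 515.8, {"2009-06-01": 9.8,  "2009-12-01": 11.0}),
]
for date, points in sorted(water_level_frames(bores).items()):
    print(date, points)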

Relevance: 10.00%

Abstract:

The Tamborine Mt area is a popular residential and tourist area in the Gold Coast hinterland, SE Qld. The 15 km2 area occurs on elevated remnant Tertiary basalts of the Beechmont Group, which comprise a number of mappable flow units originally derived from the Tweed volcanic centre to the south. The older Albert Basalt (Tertiary), which underlies the Beechmont Basalt at the southern end of the investigation area, is thought to be derived from the Focal Peak volcanic centre to the south west. The basalts contain a locally significant 'un-declared' groundwater resource, which is utilised by the Tamborine Mt community for:
• domestic purposes, to supplement rainwater tank supplies;
• commercial-scale horticulture; and
• commercial export off-Mountain for bottled water.
There is no reticulated water supply, and all waste water is treated on-site through domestic-scale WTPs. Rainforest and other riparian ecosystems that attract residents and tourist dollars to the area are also reliant on the groundwater that discharges to springs and surface streams on and around the plateau. Issues regarding a lack of compiled groundwater information, groundwater contamination, and groundwater sustainability are being investigated by QUT, utilising funding provided by the Federal Government's 'Caring for our Country' programme through SEQ Catchments Ltd. The objectives of the two-year project, which started in April 2009, are to:
• Characterise the nature and condition of groundwater / surface water systems in the Tamborine Mountain area in terms of the issues being raised;
• Engage and build capacity within the community to source local knowledge, encourage participation, raise awareness and improve understanding of the impacts of land and water use; and
• Develop a stand-alone 3D visualisation model for dissemination into the community and use as a communication tool.

Relevance: 10.00%

Abstract:

Polarising the issue of governance is the increasingly acknowledged role of airports in regional economic development, both as significant sources of direct employment and as attractants of commerce through enhanced mobility (Vickerman, Spiekermann & Wegener 1999; Hakfoort, Poot & Rietveld 2001). Most airports were once considered spatially removed from their cities, but as cities have expanded their airports no longer sit apart from the urban environment. This newfound spatial proximity means that decisions for land use and development on either city or airport land are likely to have impacts that affect one another in either or both the short and long term (Stevens, Baker and Freestone 2007). These impacts increase the demand for decision making to find ways of integrating strategies for future development, to ensure that airport developments do not impede the sustainable growth of the city, and likewise that city developments do not impede the sustainable growth of the airport (Gillen 2006). However, questions of how, under what conditions, and to what extent decision-making integration might be suitable for "airport regions" are yet to be explored, let alone answered.

Relevance: 10.00%

Abstract:

The emergence of mobile and ubiquitous computing technology has created what is often referred to as the hybrid space – a virtual layer of digital information and interaction opportunities that sits on top of and augments the physical environment. Embodied media materialise digital information as observable and sometimes interactive parts of the physical environment. The aim of this work is to explore ways to enhance people's situated real-world experience, and to find out what role embodied media can play, and what impact they can have, in achieving this goal. The Edge, an initiative of the State Library of Queensland in Brisbane, Australia, and the case study of this thesis, is envisioned as a physical place for people to meet, explore, experience, learn and teach each other creative practices in various areas related to digital technology and arts. Guided by an Action Research approach, this work applies Lefebvre's triad of space (1991) to investigate the Edge as a social space from a conceived, perceived and lived point of view. Based on its creators' vision and goals on the conceived level, different embodied media are iteratively designed, implemented and evaluated towards shaping and amplifying the Edge's visitor experience on the perceived and lived levels.

Relevance: 10.00%

Abstract:

Curriculum demands on school education systems continue to increase, with teachers at the forefront of implementing syllabus requirements. Education is frequently reported as a solution to most societal problems and, as a result of the world's information explosion, teachers are expected to cover more and more within teaching programs. How can teachers combine subjects in order to capitalise on the competing educational agendas within school timeframes? Fusing curricula requires the bonding of standards from two or more syllabuses. Both technology and ICT complement the learning of science. This study analyses selected examples of preservice teachers' overviews for fusing science, technology and ICT. These program overviews focused on primary students and the achievement of two standards (one from science and one from either technology or ICT). These primary preservice teachers' fused-curricula overviews included scientific concepts and related technology and/or ICT skills and knowledge. Findings indicated a range of innovative curriculum plans for teaching primary science through technology and ICT, demonstrating that these subjects can form cohesive links towards achieving the respective learning standards. Teachers can work more astutely by fusing curricula; however, further professional development may be required to advance thinking about these processes. Bonding subjects through their learning standards can extend beyond previous integration or thematic work where standards may not have been assessed. Education systems need to articulate through syllabus documents how effective fusing of curricula can be achieved.

It appears that education is a key avenue for addressing societal needs, problems and issues. Education is promoted as a universal solution, which has resulted in curriculum overload (Dare, Durand, Moeller, & Washington, 1997; Vinson, 2001). Societal and curriculum demands have placed added pressure on teachers, with many additional education issues increasing teachers' workloads (Mobilise for Public Education, 2002). For example, as Australia has weather conducive to outdoor activities, social problems and issues arise that are reported through the media with calls for action; consequently, schools have become involved in swimming programs, road and bicycle safety programs, and a wide range of activities that had been considered a parental responsibility in the past. Teachers are expected to plan, implement and assess these extra-curricular activities within their already overcrowded timetables. At the same time, key learning areas (KLAs) such as science and technology are mandatory requirements within all Australian education systems. These systems have syllabuses outlining levels of content and the anticipated learning outcomes (also known as standards, essential learnings, and frameworks). Time allocated for teaching science is obviously an issue. In 2001, it was estimated that the average time spent teaching science in Australian primary schools was almost an hour per week (Goodrum, Hackling, & Rennie, 2001). More recently, a study undertaken in the U.S. reported a similar finding: more than 80% of the teachers in K-5 classrooms spent less than an hour teaching science (Dorph, Goldstein, Lee, et al., 2007). More importantly, 16% spent no time teaching science in their classrooms at all. Teachers need to learn to work smarter by optimising the use of their in-class time.
Integration is proposed as one of the ways to address the issue of curriculum overload (Venville & Dawson, 2005; Vogler, 2003). Even though there may be a lack of definition for integration (Hurley, 2001), curriculum integration aims at covering key concepts in two or more subject areas within the same lesson (Buxton & Whatley, 2002). This implies covering the curriculum in less time than if the subjects were taught separately; therefore teachers should have more time to cover other educational issues. As might be expected, the reality can be decidedly different (e.g., Brophy & Alleman, 1991; Venville & Dawson, 2005). Nevertheless, teachers report that students expand their knowledge and skills as a result of subject integration (James, Lamb, Householder, & Bailey, 2000). There seems to be considerable value in integrating science with other KLAs beyond addressing teaching workloads. Over two decades ago, Cohen and Staley (1982) claimed that integration can bring a subject into the primary curriculum that may otherwise be left out. Integrating science education aims to develop a more holistic perspective. Indeed, life is not neat components of stand-alone subjects; life integrates subject content in numerous ways, and curriculum integration can assist students to make these real-life connections (Burnett & Wichman, 1997). Science integration can provide the scope for real-life learning and the possibility of targeting students' learning styles more effectively by providing more than one perspective (Hudson & Hudson, 2001). To illustrate, technology is essential to science education (Blueford & Rosenbloom, 2003; Board of Studies, 1999; Penick, 2002), and constructing technology immediately evokes a social purpose for such construction (Marker, 1992). For example, building a model windmill requires science and technology (Zubrowski, 2002) but has a key focus on sustainability and the social sciences. Science has the potential to be integrated with all KLAs (e.g., Cohen & Staley, 1982; Dobbs, 1995; James et al., 2000). Yet "integration" appears to be a confusing term. Integration has an educational meaning focused on special education students being assimilated into mainstream classrooms. The word was also used from the late seventies for approaches that generally focused on thematic teaching. For instance, a science theme about flight only had to have a student drawing a picture of a plane to show integration; it did not connect the anticipated outcomes from science and art. The term "fusing curricula" presents a seamless bonding between two subjects; hence standards (or outcomes) need to be linked from both subjects. This also goes beyond just embedding one subject within another. Embedding implies that one subject is dominant, while fusing curricula proposes an equal mix of learning within both subject areas. Primary education in Queensland has eight KLAs, each with its established content and each with a proposed structure for levels of learning. Primary teachers attempt to cover these syllabus requirements across the eight KLAs in less than five hours a day, and around the many extra-curricular activities occurring throughout a school year (e.g., Easter activities, Education Week, concerts, excursions, performances). In Australia, education systems have developed standards for all KLAs (e.g., Education Queensland, NSW Department of Education and Training, Victorian Education), usually designated by a code.
In the late 1990s, Queensland introduced "core learning outcomes" for strands across all KLAs. For example, LL2.1 in the Queensland Education science syllabus means Life and Living at Level 2, standard number 1. Thus, a teacher's planning requires the inclusion of standards as indicated by the relevant syllabus. More recently, the core learning outcomes were replaced by "essential learnings". These specify "what students should be taught and what is important for students to have opportunities to know, understand and be able to do" (Queensland Studies Authority, 2009, para. 1). Fusing science education with other KLAs may facilitate more efficient use of time and resources; however, this type of planning needs to combine standards from two syllabuses. To further assist in facilitating sound pedagogical practices, there are models proposed for learning science, technology and other KLAs, such as Bloom's Taxonomy (Bloom, 1956), Productive Pedagogies (Education Queensland, 2004), de Bono's Six Hats (de Bono, 1985), and Gardner's Multiple Intelligences (Gardner, 1999), that imply, warrant, or necessitate fused curricula. Bybee's 5Es model, for example, with its five levels of learning (engage, explore, explain, elaborate, and evaluate; Bybee, 1997), has the potential for fusing science and ICT standards.

Relevance: 10.00%

Abstract:

Gen Y beginning teachers have an edge: they’ve grown up in an era of educational accountability, so when their students have to sit a high-stakes test, they can relate.

Relevance: 10.00%

Abstract:

The travel and hospitality industry is one which relies especially heavily on word of mouth, both at the level of overall destinations (Australia, Queensland, Brisbane) and at the level of travellers' individual choices of hotels, restaurants and sights during their trips. The provision of such word-of-mouth information has been revolutionised over the past decade by the rise of community-based Websites which allow their users to share information about their past and future trips and advise one another on what to do or what to avoid during their travels. Indeed, the impact of such user-generated reviews, ratings, and recommendations sites has been such that established commercial travel advisory publishers such as Lonely Planet have experienced a pronounced downturn in sales – unless they have managed to develop their own ways of incorporating user feedback and contributions into their publications. This report examines the overall significance of ratings and recommendation sites to the travel industry, and explores the community, structural, and business models of a selection of relevant ratings and recommendations sites. We identify a range of approaches which are appropriate to the respective target markets and business aims of these organisations, and conclude that there remain significant opportunities for further operators, especially if they aim to cater for communities which are not yet appropriately served by specific existing sites. Additionally, we point to the increasing importance of connecting stand-alone ratings and recommendations sites with general social media spaces like Facebook, Twitter, and LinkedIn, and of providing mobile interfaces which enable users to provide updates and ratings directly from the locations they happen to be visiting. In this report, we profile the following sites:
* TripAdvisor, the international market leader for travel ratings and recommendations sites, with a membership of some 11 million users;
* IgoUgo, the other leading site in this field, which aims to distinguish itself from the market leader by emphasising the quality of its content;
* Zagat, a long-established publisher of restaurant guides which has translated its crowdsourcing model from the offline to the online world;
* Lonely Planet's Thorn Tree site, which attempts to respond to the rise of these travel communities by similarly harnessing user-generated content;
* Stayz, which attempts to enhance its accommodation search and booking services by incorporating ratings and reviews functionality;
* BigVillage, an Australian-based site attempting to cater for a particularly discerning niche of travellers;
* Dopplr, which connects travel and social networking in a bid to pursue the lucrative market of frequent and business travellers;
* Foursquare, which builds on its mobile application to generate a steady stream of 'check-ins' and recommendations for hospitality and other services around the world;
* Suite 101, which uses a revenue-sharing model to encourage freelance writers to contribute travel writing (amongst other genres of writing); and
* Yelp, the global leader in general user-generated product review and recommendation services.
In combination, these profiles provide an overview of current developments in the travel ratings and recommendations space (and beyond), and offer an outlook for further possibilities.
While no doubt affected by the global financial downturn and the reduction in travel that it has caused, travel ratings and recommendations remain important – perhaps even more so if a reduction in disposable income has resulted in consumers becoming more critical and discerning. The aggregated word of mouth from many tens of thousands of travellers which these sites provide certainly has a substantial influence on their users. Using these sites to research travel options has now become an activity which has spread well beyond the digirati. The same is true for many other consumer industries, especially where there is a significant variety of different products available – and so this report may also be read as a case study whose findings can be translated, mutatis mutandis, to purchasing decisions ranging from household goods through consumer electronics to automobiles.

Relevance: 10.00%

Abstract:

With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: what is the basic unit of semantic representation? How can the semantic unit be linked to high-level image knowledge? How can contextual information be stored and utilized for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next-generation web, aims at making the content of whatever type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industry are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, are still worthwhile investigations. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as below. 1) Salient object extraction. A salient object serves as the basic unit in image semantic extraction as it captures the common visual properties of the objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms often fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework. 2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts. 3) Image object annotation. In this phase, each object is labelled with a mid-level concept from the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts. 4) Scene semantic annotation. The scene semantic extraction phase determines the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies.
Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene type most frequently. The scene configuration is represented in a probabilistic graph model, and probabilistic inference is employed to calculate the scene type given an annotated image. To evaluate the proposed methods, a series of experiments has been conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
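The final phase can be illustrated with a much-simplified sketch: here the scene configuration is reduced to per-scene object co-occurrence probabilities and the inference to naive-Bayes-style scoring, rather than the full probabilistic graph model described above, and all concept names and probabilities are invented for illustration.

# A much-simplified, hypothetical sketch of scene annotation from object labels.
# The "scene configuration" is reduced to per-scene object co-occurrence
# probabilities; the thesis uses a richer probabilistic graph model.
from collections import defaultdict

# Hypothetical domain contextual knowledge: P(object present | scene type).
scene_config = {
    "beach":     {"sky": 0.90, "sea": 0.95, "sand": 0.90, "tree": 0.20},
    "forest":    {"sky": 0.60, "sea": 0.05, "sand": 0.10, "tree": 0.95},
    "cityscape": {"sky": 0.80, "sea": 0.10, "sand": 0.05, "building": 0.90},
}

def annotate_scene(object_labels, priors=None):
    """Score each scene type given the mid-level object labels produced by the
    earlier SVM + ontology stage, and return the most probable scene type."""
    priors = priors or defaultdict(lambda: 1.0 / len(scene_config))
    scores = {}
    for scene, cooccurrence in scene_config.items():
        score = priors[scene]
        for obj in object_labels:
            score *= cooccurrence.get(obj, 0.01)  # smoothing for unseen objects
        scores[scene] = score
    total = sum(scores.values())
    best = max(scores, key=scores.get)
    return best, {scene: s / total for scene, s in scores.items()}

print(annotate_scene(["sky", "sea", "sand"]))  # expected to favour "beach"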

Relevance: 10.00%

Abstract:

Much of the research on the delivery of advice by professionals such as physicians, health workers and counsellors, both on the telephone and in face-to-face interaction more generally, has focused on the theme of client resistance and the consequent need for professionals to adopt particular formats to assist in the uptake of the advice. In this paper we consider one setting, Kid's Helpline, the national Australian counselling service for children and young people, where there is an institutional mandate not to give explicit advice, in accordance with the values of self-direction and empowerment. The paper examines one practice, the use of script proposals by counsellors, which appears to offer a way of providing support that is consistent with these values. Script proposals entail the counsellors packaging their advice as something that the caller might say – at some future time – to a third party such as a friend, teacher, parent, or partner, and involve the counsellor adopting the speaking position of the caller in what appears as a rehearsal of a forthcoming strip of interaction. Although the core feature of a script proposal is the counsellor's use of direct reported speech, script proposals appear to be delivered not so much as exact words to be followed, but as the type of conversation that the client needs to have with the third party. Script proposals, in short, provide models of what to say as well as alluding to how these could be emulated by the client. In their design, script proposals invariably incorporate one or more of the most common rhetorical formats for maximising the persuasive force of an utterance, such as a three-part list or a contrastive pair. Script proposals, moreover, stand in a complex relation to the prior talk, and one of their functions appears to be to summarise, respecify or expand upon the client's own ideas or suggestions for problem solving that have emerged in these preceding sequences.

Relevance: 10.00%

Abstract:

This full-day workshop invites participants to consider the nexus where the interests of game design, the expectations of play and HCI meet: the game interface. Game interfaces seem different to the interfaces of other software, and there have been a number of observations to this effect. Shneiderman famously noticed that while most software designers are intent on following the tenets of the "invisible computer" and making access easy for the user, game interfaces are made for players: they embed challenge. Schell discusses a "strange" relationship between the player and the game enabled by the interface, and user interface designers frequently opine that much can be learned from the design of game interfaces. So where does the game interface actually sit? Even more interesting is the question as to whether the history of the relationship and subsequent expectations are now limiting the potential of game design as an expressive form. Recent innovations in I/O design such as Nintendo's Wii, Sony's Move and Microsoft's Kinect seem to usher in an age of physical, player-enabled interaction, experience and embodied, engaged design. This workshop intends to cast light on this often mentioned and sporadically examined area and to establish a platform for new and innovative design in the field.

Relevance: 10.00%

Abstract:

As we're moving toward the end of the year, it's not hard to notice everyone starting to rush a bit more. So much so, that in some cases people can lose their cool when they're out and about. Recently I viewed an episode of Jenny Brockie's Insight program on SBS on the topic of "rage". The program covered many areas of life, but it highlighted the issue of rage against taxi drivers in Melbourne and showed some archival footage of the recent taxi drivers' protest on the issue, next to Flinders Street Station. Serendipitously, perhaps, I picked up Brisbane's City News as I was eating lunch in town a few days later, and there was an article on Brisbane taxi stand supervisors, reporting that some feared going to work on Friday and Saturday nights because they were not infrequently assaulted by drunken revellers waiting in the long queues for their taxi ride home.

Relevance: 10.00%

Abstract:

Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance; capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of the processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance, through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration, to be calibrated using data acquired at those locations, and to have their outputs validated against data acquired at the same sites, so that the outputs are truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than the empiricism of the macroscopic models currently used, and the models needed to be adaptable to variable operating conditions, so that they could be applied, where possible, to other similar systems and facilities. It was not possible in this single study to produce a stand-alone model applicable to all facilities and locations; however, the scene has been set for the application of the models to a much broader range of operating conditions. Opportunities for further development of the models were identified, and procedures provided for the calibration and validation of the models to a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented, and driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled, as some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations: merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb-lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this activity. Kerb-lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams; on-ramp and total upstream flow are required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps where traffic leaves signalised intersections and those downstream of unsignalised intersections.
Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995). The minimum average minor stream delay and the corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows, and pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delays, which reach infinity at capacity. Minor stream delays were shown to be less when unsignalised intersections, rather than signalised intersections, are located upstream of on-ramps, and less still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities, a most useful performance measure, can be predicted for given taper lengths. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration of the traffic inputs, critical gap and minimum follow-on time is required for both merging and lane changing, and a general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and to provide further insight into the nature of operations.
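As a concrete illustration of the gap-acceptance machinery described above, the sketch below computes a minor-stream (on-ramp) capacity under Cowan's M3 headway model using the standard Troutbeck-style absolute-priority capacity form. The parameter values and the linear bunching relationship are illustrative assumptions, not the values calibrated in this study, and the limited-priority correction discussed above is omitted.

# A minimal sketch of absolute-priority gap-acceptance capacity under Cowan's
# M3 headway model. Parameter values are illustrative only, not the study's
# calibrated values; the limited-priority correction is not included.
import math

def m3_onramp_capacity(q_major, delta=1.0, t_c=1.6, t_f=1.1, prop_free=None):
    """Minor-stream (on-ramp) capacity in veh/h.
    q_major: major-stream (kerb-lane) flow in veh/s; delta: minimum headway (s);
    t_c: critical gap (s); t_f: follow-on time (s); prop_free: proportion of
    headways greater than delta (estimated from a simple, assumed linear
    bunching relationship if not supplied)."""
    if prop_free is None:
        prop_free = max(0.05, 1.0 - delta * q_major)  # illustrative, not calibrated
    lam = prop_free * q_major / (1.0 - delta * q_major)  # decay rate of free headways
    capacity_veh_per_s = (q_major * prop_free * math.exp(-lam * (t_c - delta))
                          / (1.0 - math.exp(-lam * t_f)))
    return 3600.0 * capacity_veh_per_s

# Example: kerb-lane flow of 1200 veh/h merging with an on-ramp stream.
print(round(m3_onramp_capacity(1200 / 3600.0)), "veh/h")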

Relevance: 10.00%

Abstract:

It is predicted that with increased life expectancy in the developed world, there will be a greater demand for synthetic materials to repair or regenerate lost, injured or diseased bone (Hench & Thompson 2010). There are still few synthetic materials having true bone inductivity, which limits their application for bone regeneration, especially in large-size bone defects. To solve this problem, growth factors, such as bone morphogenetic proteins (BMPs), have been incorporated into synthetic materials in order to stimulate de novo bone formation in the center of large-size bone defects. The greatest obstacle with this approach is the rapid diffusion of the protein from the carrier material, leading to a precipitous loss of bioactivity; the result is often insufficient local induction or failure of bone regeneration (Wei et al. 2007). It is critical that the protein is loaded into the carrier material under conditions which maintain its bioactivity (van de Manakker et al. 2009). For this reason, the efficient loading and controlled release of a protein from a synthetic material has remained a significant challenge. The use of microspheres as protein/drug carriers has received considerable attention in recent years (Lee et al. 2010; Pareta & Edirisinghe 2006; Wu & Zreiqat 2010). Compared to macroporous block scaffolds, the chief advantages of microspheres are their superior protein-delivery properties and their ability to fill bone defects with irregular and complex shapes and sizes. Upon implantation, the microspheres easily conform to the irregular implant site, and the interstices between the particles provide space for both tissue and vascular ingrowth, which are important for effective and functional bone regeneration (Hsu et al. 1999). Alginates are natural polysaccharides and their production does not carry the implicit risk of contamination with allo- or xeno-proteins or viruses (Xie et al. 2010). Because alginate is generally cytocompatible, it has been used extensively in medicine, including cell therapy and tissue engineering applications (Tampieri et al. 2005; Xie et al. 2010; Xu et al. 2007). Calcium cross-linked alginate hydrogel is considered a promising material as a delivery matrix for drugs and proteins, since its gel microspheres form readily in aqueous solutions at room temperature, eliminating the need for harsh organic solvents and thereby maintaining the bioactivity of proteins during loading into the microspheres (Jay & Saltzman 2009; Kikuchi et al. 1999). In addition, calcium cross-linked alginate hydrogel is degradable under physiological conditions (Kibat PG et al. 1990; Park K et al. 1993), which makes alginate stand out as an attractive candidate material as a protein carrier for bone regeneration (Hosoya et al. 2004; Matsuno et al. 2008; Turco et al. 2009). However, the major disadvantages of alginate microspheres are their low loading efficiency and rapid release of proteins due to the mesh-like networks of the gel (Halder et al. 2005). Previous studies have shown that a core-shell structure in drug/protein carriers can overcome the issues of limited loading efficiency and rapid release of drug or protein (Chang et al. 2010; Molvinger et al. 2004; Soppimath et al. 2007). We therefore hypothesized that introducing a core-shell structure into the alginate microspheres could overcome the shortcomings of pure alginate. Calcium silicate (CS) has been tested as a biodegradable biomaterial for bone tissue regeneration.
CS is capable of inducing bone-like apatite formation in simulated body fluid (SBF), and its apatite-formation rate in SBF is faster than that of Bioglass® and A-W glass-ceramics (De Aza et al. 2000; Siriphannon et al. 2002). Titanium alloys plasma-spray coated with CS have excellent in vivo bioactivity (Xue et al. 2005), and porous CS scaffolds have enhanced in vivo bone formation ability compared to porous β-tricalcium phosphate ceramics (Xu et al. 2008). In light of the many advantages of this material, we decided to prepare CS/alginate composite microspheres by combining a CS shell with an alginate core, to improve their protein delivery and mineralization for potential protein delivery and bone repair applications.

Relevance: 10.00%

Abstract:

Purpose: To evaluate the effectiveness of the new legislation.
• Are children more likely to sit in the rear seat now than previously?
• Are they more likely to wear an age-appropriate restraint?
• How easy is it for parents to comply (what are the barriers)?
• What more can be done?

Design: Two studies.
• Study 1: observational, across three time phases (pre-legislation; post-announcement; post-enactment).
• Study 2: intercept interviews, across two time phases (post-announcement; post-enactment, same parents).

Three data collection phases: T1 (before the announcement, 2007); T2 (after the announcement but before enactment, 2009-10); T3 (after the enactment, 2010).
Two regional cities: Toowoomba, Rockhampton.
Site types: schools, shopping areas.

Relevance: 10.00%

Abstract:

This paper draws upon the Australian case to argue that the case for supporting cultural production and cultural infrastructure has been strengthened overall by its alignment with economic policy goals. In this respect, the rise of creative industries policy discourses is consistent with trends in thinking about cultural policy that have their roots in the Creative Nation strategies of the early 1990s. In terms of the earlier discussion, cultural policy is as much driven by Schumpeterian principles as it is by Keynesian ones. Such an approach is not without attendant risks, and two stand out. The first is the risk of marginalizing the arts through a policy framework that gives priority to developing the digital content industries and views the creative industries primarily as an innovation platform. The second is that other trends in the economy, such as the strong Australian dollar resulting from the mining boom, undercut the development of cultural production in the sections of the creative industries where international trade and investment are most significant, such as the film industry and computer games. Nonetheless, after over a decade of vibrant debate, this focus on linking the cultural and economic policy goals of the creative industries has come to be consistent with broader international trends in the field.