Abstract:
The 21st century has been described as the “century of cities”. By 2030, 70 per cent of the world’s population will live in cities, with the most rapid urbanization occurring in the developing world. This paper will draw on geographer Ed Soja’s concept of the “spatial turn” in social theory to consider how the culture of cities can act as a catalyst for innovation and the development of new technologies. In doing so, the paper will develop a three-layered approach to culture as: the arts; the way of life of people and communities; and the embedded structure underpinning socio-economic relations. It will also consider technology as a three-layered element, comprising devices, practices and the ‘logics’ of technology, or what the Greeks termed techne. The paper will consider recent approaches to urban cultural policy, including cluster development and creative cities, and suggest some alternatives, noting that a problem with current approaches is that they focus excessively upon production (clusters) or consumption (creative cities). It will also consider the development of digital creative industries such as games, and the strategies of different cities to develop an innovation culture.
Abstract:
Background: On-site wastewater treatment system (OWTS) siting, design and management has traditionally been based on site-specific conditions, with little regard to the surrounding environment or the cumulative effect of other systems in the environment. The general approach has been to apply the same framework of standards and regulations to all sites equally, regardless of the sensitivity, or lack thereof, of the receiving environment. Consequently, this has led to the continuing poor performance and failure of on-site systems, resulting in environmental and public health consequences. As a result, there is increasing realisation that more scientifically robust evaluations of site assessment and the underlying ground conditions are needed. Risk-based approaches to on-site system siting, design and management are considered the most appropriate means of improving the current standards and codes for on-site wastewater treatment systems. The project: Research in relation to this project was undertaken within the Gold Coast City Council region, the major focus being the semi-urban, rural residential and hinterland areas of the city that are not serviced by centralised treatment systems. The Gold Coast has over 15,000 on-site systems in use, with approximately 66% being common septic tank-subsurface dispersal systems. A recent study evaluating the performance of these systems within the Gold Coast area showed approximately 90% were not meeting the specified guidelines for effluent treatment and dispersal. The main focus of this research was to incorporate strong scientific knowledge into an integrated risk assessment process to allow suitable management practices to be set in place to mitigate the inherent risks. To achieve this, research was undertaken focusing on three main aspects of the performance and management of OWTS. Firstly, an investigation into the suitability of soil for providing appropriate effluent renovation was conducted.
This involved detailed soil investigations, laboratory analysis and the use of multivariate statistical methods for analysing soil information. The outcomes of these investigations were developed into a framework for assessing soil suitability for effluent renovation. This formed the basis for the assessment of OWTS siting and design risks employed in the developed risk framework. Secondly, an assessment of the environmental and public health risks was performed, specifically related to the release of contaminants from OWTS. This involved detailed groundwater and surface water sampling and analysis to assess the current and potential risks of contamination throughout the Gold Coast region. Additionally, the assessment of public health risk incorporated the use of bacterial source tracking methods to identify the different sources of faecal contamination within monitored regions. Antibiotic resistance pattern analysis was utilised to determine the extent of human faecal contamination, with the outcomes utilised to provide a more indicative public health assessment. Finally, the outcomes of both the soil suitability assessment and the ground and surface water monitoring were utilised in the development of the integrated risk framework. The research outcomes achieved through this project enabled the primary research aims and objectives to be accomplished. This in turn would enable Gold Coast City Council to provide more appropriate assessment and management guidelines based on robust scientific knowledge, which will ultimately ensure that the potential environmental and public health impacts resulting from on-site wastewater treatment are minimised. As part of the implementation of suitable management strategies, a critical point monitoring (CPM) program was formulated. This entailed the identification of the key critical parameters that contribute to the characterised risks at monitored locations within the study area.
The CPM will allow more direct procedures to be implemented, targeting the specific hazards at sensitive areas throughout the Gold Coast region.
Abstract:
Background: The onsite treatment of sewage and effluent disposal is widely prevalent in rural and urban fringe areas due to the general unavailability of reticulated wastewater collection systems. Despite the low technology of the systems, failure is common, in many cases leading to adverse public health and environmental consequences. It is important therefore that careful consideration is given to the design and location of onsite sewage treatment systems. This requires an understanding of the factors that influence treatment performance. The use of subsurface absorption systems is the most common form of effluent disposal for onsite sewage treatment, particularly for septic tanks. Also, in the case of septic tanks, a subsurface disposal system is generally an integral component of the sewage treatment process. Site-specific factors play a key role in the onsite treatment of sewage. The project: The primary aims of the research project were: • to relate treatment performance of onsite sewage treatment systems to soil conditions at the site; • to evaluate current research relating to onsite sewage treatment; and • to identify key issues where there is currently a lack of relevant research. These tasks were undertaken with the objective of facilitating the development of performance-based planning and management strategies for onsite sewage treatment. The primary focus of this research project has been on septic tanks. By implication, the investigation has been confined to subsurface soil absorption systems. The design and treatment processes taking place within the septic tank chamber itself did not form a part of the investigation. Five broad categories of soil types prevalent in the Brisbane region have been considered in this project. The number of systems investigated was based on the proportionate area of urban development within the Brisbane region located on each of the different soil types.
In the initial phase of the investigation, the majority of the systems evaluated were septic tanks. However, a small number of aerobic wastewater treatment systems (AWTS) were also included. The primary aim was to compare the effluent quality of systems employing different generic treatment processes. It is important to note that the number of each different type of system investigated was relatively small. Consequently, this does not permit a statistical analysis of the results obtained for comparing different systems. This is an important issue considering the large number of soil physico-chemical parameters and landscape factors that can influence treatment performance, and their wide variability. The report: This report is the last in a series of three reports focusing on the performance evaluation of onsite treatment of sewage. The research project was initiated at the request of the Brisbane City Council. The project component discussed in the current report outlines the detailed soil investigations undertaken at a selected number of sites. In the initial field sampling, a number of soil chemical properties were assessed as indicators to investigate the extent of effluent flow and to help understand which soil factors renovate the applied effluent. The soil profile attributes, especially texture, structure and moisture regime, were examined in an engineering sense to determine the effect of movement of water into and through the soil. It is important to note that it is not only the physical characteristics but also the chemical characteristics of the soil, as well as landscape factors, that play a key role in the effluent renovation process. In order to understand the complex processes taking place in a subsurface effluent disposal area, influential parameters were identified using soil chemical concepts.
Accordingly, the primary focus of this final phase of the research project was to identify linkages between various soil chemical parameters and landscape patterns and their contribution to the effluent renovation process. The research outcomes will contribute to the development of robust criteria for evaluating the performance of subsurface effluent disposal systems. The outcomes: The key findings from the soil investigations undertaken are: • Effluent renovation is primarily undertaken by a combination of various soil physico-chemical parameters and landscape factors, thereby making the effluent renovation processes strongly site dependent. • Decisions regarding site suitability for effluent disposal should not be based purely on soil type. A number of other factors, such as the site location in the catena, the drainage characteristics and other physical and chemical characteristics, also exert a strong influence on site suitability. • Sites that are difficult to characterise in terms of suitability for effluent disposal will require a detailed soil physical and chemical analysis to a depth of at least 1.2 m. • The Ca:Mg ratio and Exchangeable Sodium Percentage (ESP) are important parameters in soil suitability assessment. A Ca:Mg ratio of less than 0.5 would generally indicate a high ESP. This in turn would mean that Na and possibly Mg are the dominant exchangeable cations, leading to probable clay dispersion. • A Ca:Mg ratio greater than 0.5 would generally indicate a low ESP in the profile, which in turn indicates increased soil stability. • In soils with a higher clay percentage, even a low ESP can have a significant effect. • The presence of high exchangeable Na can be counteracted by the presence of swelling clays and an exchange complex co-dominated by exchangeable Ca and exchangeable Mg. This aids adsorption of cations at depth, thereby reducing the likelihood of dispersion.
• Salt is continually added to the soil by the effluent, and problems may arise if the added salts accumulate to a concentration that is harmful to the soil structure. Under such conditions, good drainage is essential in order to allow continuous movement of water and salt through the profile. Therefore, for a site to be sustainable, there would be a maximum application rate of effluent, dependent on subsurface characteristics and the surface area available for effluent disposal. • The dosing regime for effluent disposal can play a significant role in the prevention of salt accumulation at poorly draining sites. Though intermittent dosing was not considered satisfactory for the removal of the clogging mat layer, it has positive attributes in the context of removal of accumulated salts from the soil.
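The Ca:Mg screening rule in the findings above lends itself to a simple decision function. A minimal sketch follows; the 0.5 threshold comes from the text, while the function name and result labels are illustrative, not part of the study:

```python
def dispersion_risk(ca_mg_ratio):
    """Screen a soil sample for clay-dispersion risk using the Ca:Mg rule.

    Per the findings above, a Ca:Mg ratio below 0.5 generally indicates a
    high Exchangeable Sodium Percentage (ESP), meaning Na (and possibly Mg)
    dominate the exchange complex and clay dispersion is probable; a ratio
    above 0.5 generally indicates a low ESP and increased soil stability.
    """
    if ca_mg_ratio < 0.5:
        return "high ESP likely: probable clay dispersion"
    return "low ESP likely: increased soil stability"
```

In practice such a screen would only be a first pass; as the findings note, catena position, drainage and other physico-chemical characteristics must also inform the site assessment.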
Abstract:
Aims: This study investigated the association between the basal (rest) insulin-signaling proteins Akt and the Akt substrate AS160 and metabolic risk factors, inflammatory markers and aerobic fitness in middle-aged women with varying numbers of metabolic risk factors for type 2 diabetes. Methods: Sixteen women aged 51.3 ± 5.1 years (mean ± SD) provided muscle biopsies and blood samples at rest. In addition, anthropometric characteristics and aerobic power were assessed, and the number of metabolic risk factors for each participant was determined (IDF criteria). Results: The mean number of metabolic risk factors was 1.6 ± 1.2. Total Akt was negatively correlated with IL-1 beta (r = -0.45, p = 0.046), IL-6 (r = -0.44, p = 0.052) and TNF-alpha (r = -0.51, p = 0.025). Phosphorylated AS160 was positively correlated with HDL (r = 0.58, p = 0.024) and aerobic fitness (r = 0.51, p = 0.047). Furthermore, a multiple regression analysis revealed that both HDL (t = 2.5, p = 0.032) and VO2peak (t = 2.4, p = 0.037) were better predictors of phosphorylated AS160 than TNF-alpha or IL-6 (p > 0.05). Conclusions: Elevated inflammatory markers and increased metabolic risk factors may inhibit insulin-signaling protein phosphorylation in middle-aged women, thereby increasing insulin resistance under basal conditions. Furthermore, higher HDL and fitness levels are associated with increased AS160 phosphorylation, which may in turn reduce insulin resistance.
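The r values reported above are bivariate Pearson correlations. A minimal pure-Python sketch of the coefficient, run on purely illustrative numbers (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical illustration only: HDL vs. phosphorylated AS160 (arbitrary units)
hdl = [1.0, 1.3, 1.1, 1.6, 1.8, 1.2]
p_as160 = [0.8, 1.1, 0.9, 1.4, 1.5, 1.0]
r = pearson_r(hdl, p_as160)
```

A positive r, as for HDL and phosphorylated AS160 in the study, indicates that higher values of one variable accompany higher values of the other.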
Abstract:
The Australian Securities Exchange (ASX) listing rule 3.1 requires listed companies to immediately disclose price-sensitive information to the market via the ASX’s Company Announcements Platform (CAP) prior to release through other disclosure channels. Since 1999, to improve the communication process, the ASX has permitted third-party mediation in the disclosure process that leads to the release of an Open Briefing (OB) through CAP. An OB is an interview between senior executives of the firm and an Open Briefing analyst employed by Orient Capital Pty Ltd (broaching topics such as current profit and outlook). Motivated by an absence of research on factors that influence firms to use OBs as a discretionary disclosure channel, this study examines: (1) Why do firms choose to release information to the market via OBs? (2) What firm characteristics explain the discretionary use of OBs as a disclosure channel? (3) What disclosure attributes influence firms’ decisions to regularly use OBs as a disclosure channel? Based on agency and information economics theories, a theoretical framework is developed to address the research questions. This theoretical framework comprises disclosure environments such as firm characteristics and external factors, disclosure attributes and disclosure consequences. In order to address the first research question, the study investigates (i) the purpose of using OBs, (ii) whether firms use OBs to provide information relating to previous public announcements, and (iii) whether firms use OBs to provide routine or non-routine disclosures. In relation to the second and third research questions, hypotheses are developed to test factors expected to explain the discretionary use of OBs and firms’ decisions to regularly use OBs, and to explore the factors influencing the nature of OB disclosure. Content analysis and logistic regression models are used to investigate the research questions and test the hypotheses.
Data are drawn from a hand-collected population of 1863 OB announcements issued by 239 listed firms between 2000 and 2010. The results show that the types of information disclosed via an OB announcement principally concern corporate strategies, performance and outlook. Most OB announcements are linked with a previous related announcement, with the lag between announcements significantly longer for loss-making firms than for profit-making firms. The main results show that firms which tend to be larger, have an analyst following, and have higher growth opportunities are more likely to release OBs. Further, older firms and firms that release OB announcements containing good news, historical information and less complex information tend to be regular OB users. Lastly, firms more likely to disclose strategic information via OBs tend to operate in industries facing greater uncertainty, to lack an analyst following, and to have higher growth opportunities; such firms are less likely to disclose good news, historical information and complex information via OBs. This study is expected to contribute to the disclosure literature in terms of the disclosure attributes and firm characteristics that influence behaviour in this unique (OB) disclosure channel. With regard to practical significance, regulators can gain an understanding of how OBs are disclosed, which can assist them in monitoring the use of OBs and improving the effectiveness of communications with stakeholders. In addition, investors can gain a better comprehension of the information contained in OB announcements, which may in turn better facilitate their investment decisions.
Abstract:
Purpose: The prevalence of refractive errors in children has been extensively researched. Comparisons between studies can, however, be compromised because of differences between accommodation control methods and techniques used for measuring refractive error. The aim of this study was to compare spherical refractive error results obtained at baseline and using two different accommodation control methods – extended optical fogging and cycloplegia – for two measurement techniques – autorefraction and retinoscopy. Methods: Participants comprised twenty-five school children aged between 6 and 13 years (mean age: 9.52 ± 2.06 years). The refractive error of one eye was measured at baseline and again under two different accommodation control conditions: extended optical fogging (+2.00 DS for 20 minutes) and cycloplegia (1% cyclopentolate). Autorefraction and retinoscopy were both used to measure most plus spherical power for each condition. Results: A significant interaction was demonstrated between measurement technique and accommodation control method (p = 0.036), with significant differences in spherical power evident between accommodation control methods for each of the measurement techniques (p < 0.005). For retinoscopy, refractive errors were significantly more positive for cycloplegia compared to optical fogging, which were in turn significantly more positive than baseline, while for autorefraction, there were significant differences between cycloplegia and extended optical fogging and between cycloplegia and baseline only. Conclusions: Determination of refractive error under cycloplegia elicits more plus than using extended optical fogging as a method to relax accommodation. These findings support the use of cycloplegic refraction compared with extended optical fogging as a means of controlling accommodation for population-based refractive error studies in children.
Abstract:
Purpose. The purpose of the study was to investigate the changes in axial length occurring with shifts in gaze direction. Methods. Axial length measurements were obtained from the left eye of 30 young adults (10 emmetropes, 10 low myopes, and 10 moderate myopes) through a rotating prism with 15° deviation, along the foveal axis, using a noncontact optical biometer in each of the nine different cardinal directions of gaze over 5 minutes. The subject's fellow eye fixated on an external distance (6 m) target to control accommodation, also with 15° deviation. Axial length measurements were also performed in 15° and 25° downward gaze with the biometer inclined on a tilting table, allowing gaze shifts to be achieved with either full head turn but no eye turn, or full eye turn with no head turn. Results. There was a significant influence of gaze angle and time on axial length (both P < 0.001), with the greatest axial elongation (+18 ± 8 μm) occurring with inferonasal gaze (P < 0.001) and a slight decrease in axial length in superior gaze (−12 ± 17 μm) compared with primary gaze (P < 0.001). In downward gaze, a significant axial elongation occurred when eye turn was used (P < 0.001), but not when head turn was used to shift gaze (P > 0.05). Conclusions. The angle of gaze has a small but significant short-term effect on axial length, with greatest elongation occurring in inferonasal gaze. The elongation of the eye appears to be due to the influence of the extraocular muscles, in particular the oblique muscles.
Abstract:
A fundamental prerequisite of population health research is the ability to establish an accurate denominator. This in turn requires that every individual in the study population is counted. However, this seemingly simple principle has become a point of conflict between researchers whose aim is to produce evidence of disparities in population health outcomes and governments whose policies promote (intentionally or not) the inequalities that are the underlying causes of health disparities. Research into the health of asylum seekers is a case in point. There is a growing body of evidence documenting the adverse effects of recent changes in asylum-seeking legislation, including mandatory detention. However, much of this evidence has been dismissed by some governments as unsound, biased and unscientific because, it is argued, the evidence is derived from small samples or from case studies. Yet it is the policies of governments that are the key barrier to the conduct of rigorous population health research on asylum seekers. In this paper, the authors discuss the challenges of counting asylum seekers and the limitations of data reported in some industrialized countries. They argue that the lack of accurate statistical data on asylum seekers has been an effective neo-conservative strategy for erasing the health inequalities in this vulnerable population, indeed a strategy that renders this population invisible. They describe some alternative strategies that may be used by researchers to obtain denominator data on hard-to-reach populations such as asylum seekers.
Abstract:
The purpose of this paper is to analyse the complex nature of practice within artistic research. This will be done by considering practice through the lens of Bourdieu’s conceptualisation of practice. The focus of the paper is on developing an understanding of practice-led approaches to research and how these are framed by what Coessens et al. (2009) call the artistic turn in research. The paper begins with a brief introduction to the nature of practice and then continues on to discuss the broader field of artistic research, describing the environment which has shaped its evolution and foregrounding several of its key dispositions. The paper aims not simply to describe existing methodology but to rethink what is meant by artistic research and practice-led strategies.
Abstract:
Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated, and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we attempted to explore the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation concerned the relationship between transcription factors grouped by their regulatory role and corresponding promoter strength.
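A comparison of promoter strength across regulator classes, such as the one described above, is typically assessed with a two-sample t-test. A minimal pure-Python sketch of the Welch t statistic, using entirely hypothetical site counts (not the thesis's data):

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance of b
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical activator-site counts at weak vs. strong promoters
weak = [3, 4, 2, 5, 4]
strong = [1, 2, 1, 3, 2]
t = welch_t(weak, strong)  # positive t is in the direction of the alternative
```

The resulting statistic is compared against the t distribution to obtain a p-value (as the thesis reports, e.g. p = 0.072), which is then judged against the chosen significance level.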
Our study of E.coli σ70 promoters found support at the 0.1 significance level for our hypothesis that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. Although the observations were specific to σ70, they nevertheless strongly encourage additional investigations when more experimentally confirmed data are available. In our preliminary exploration of relationships between the key regulatory components in E.coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. Of chief interest was the relationship observed between promoter strength and TFs with respect to their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters would have more transcription factors that enhance gene expression, whilst strong promoters would have more repressor binding sites. The t-tests assessed for E.coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters where the corresponding TFBSs are either all repressors or all activators. Nevertheless, such suggestive results strongly encourage additional investigations when more experimentally confirmed data become available. Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using the SVM and kernel methods, principally the spectrum kernel.
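The spectrum kernel mentioned above compares sequences by their shared k-mer content; the kernel value is the inner product of the two k-mer count vectors. A minimal sketch (illustrative only; the thesis applies the kernel inside an SVM classifier, which is not reproduced here):

```python
from collections import Counter

def spectrum(seq, k):
    """k-mer count vector (the spectrum feature map) of a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s1, s2, k=3):
    """Spectrum kernel: inner product of the two k-mer count vectors."""
    a, b = spectrum(s1, k), spectrum(s2, k)
    return sum(a[kmer] * b[kmer] for kmer in a if kmer in b)

# Sequences that share many k-mers score highly; unrelated ones score near zero
score = spectrum_kernel("ACGTACGT", "ACGTACGA")
```

In an SVM, this kernel lets the classifier separate candidate binding-site sequences without ever constructing the (exponentially large) explicit k-mer feature space.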
Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in ‘moderately’ conserved transcription factor binding sites, as represented by our E.coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDifi software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled ‘regulatory trees’, inspired by the phylogenetic tree concept.
Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to ‘hardware’, the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to ‘software’. In this context, we explored the ‘pan-regulatory network’ for the Fur system: the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes, the ‘core-regulatory-set’, and interactions found only in a subset of the genomes explored, the ‘sub-regulatory-set’. We found nine Fur target gene clusters present across the four genomes studied, with this core set potentially identifying basic regulatory processes essential for survival. Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y.pestis and P.aeruginosa respectively, but were not present in either E.coli or B.subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study.
We identified a set of promising feature attributes, demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
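One plausible way to score the "similar regulatory interactions" underlying the regulatory trees described above is the Jaccard similarity between each pair of genomes' inferred (TF, target-gene) sets. A minimal sketch; the gene names below are purely illustrative, not the thesis's inferred networks:

```python
def jaccard(interactions_a, interactions_b):
    """Jaccard similarity between two sets of (TF, target-gene) pairs."""
    a, b = set(interactions_a), set(interactions_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical Fur interaction sets for two strains
strain1 = {("Fur", "yfeA"), ("Fur", "ybtA"), ("Fur", "fhuA")}
strain2 = {("Fur", "yfeA"), ("Fur", "fhuA")}
sim = jaccard(strain1, strain2)  # 2 shared of 3 distinct pairs -> 2/3
```

A matrix of such pairwise similarities can then be fed to any standard hierarchical clustering routine to produce a tree, in the same way a distance matrix yields a phylogenetic tree.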
Abstract:
A key strategy in facilitating learning in Open Disclosure training is the use of hypothetical, interactive scenarios called ‘simulations’. According to Clapper (2010), the ‘advantages of using simulation are numerous and include the ability to help learners make meaning of complex tasks, while also developing critical thinking and cultural skills’. Simulation, in turn, functions largely through improvisation and role-play, in which participants ‘act out’ particular roles and characters according to a given scenario, without recourse to a script. To maximise efficacy in the Open Disclosure training context, role-play requires the specialist skills of professionally trained actors. Core capacities that professional actors bring to the training process include (among others) believability, an observable and teachable skill which underpins the western traditions of actor training; and flexibility, which pertains to the actor’s ability to vary performance strategies according to the changing dynamics of the learning situation. The Patient Safety and Quality Improvement Service of Queensland Health utilises professional actors as a key component of its Open Disclosure Training Program. In engaging actors in this work, it is essential that Facilitators of Open Disclosure training have a solid understanding of the acting process: what acting is; how actors work to a brief; how they improvise; and how they sustainably manage a wide range of emotional states. In the simulation context, the highly skilled actor can optimise learning outcomes by adopting or enacting, in collaboration with the Facilitator, a pedagogical function.
Resumo:
Hematopoietic stem cell (HSC) transplant is a well-established curative therapy for some hematological malignancies. However, achieving adequate supply of HSC from some donor tissues can limit both its application and ultimate efficacy. The theory that this limitation could be overcome by expanding the HSC population before transplantation has motivated numerous laboratories to develop ex vivo expansion processes. Pioneering work in this field utilized stromal cells as support cells in cocultures with HSC to mimic the HSC niche. We hypothesized that by translating this classic coculture system to a three-dimensional (3D) structure we could better replicate the niche environment and in turn enhance HSC expansion. Herein we describe a novel high-throughput 3D coculture system in which murine-derived HSC can be cocultured with mesenchymal stem/stromal cells (MSC) in 3D microaggregates, which we term “micromarrows.” Micromarrows were formed using surface-modified microwells, and their ability to support HSC expansion was compared to classic two-dimensional (2D) cocultures. While both 2D and 3D systems provided only a modest total cell expansion in the minimally supplemented medium, the micromarrow system supported the expansion of approximately twice as many HSC candidates as the 2D controls. Histology revealed that at day 7, the majority of bound hematopoietic cells resided in the outer layers of the aggregate. Quantitative polymerase chain reaction demonstrated that MSC maintained in 3D aggregates expressed significantly higher levels of key hematopoietic niche factors relative to their 2D equivalents. Thus, we propose that the micromarrow platform represents a promising first step toward a high-throughput HSC 3D coculture system that may enable in vitro HSC niche recapitulation and subsequent extensive in vitro HSC self-renewal.
Resumo:
Mass production of PhD training compromises graduate quality. As PhD quality becomes more stratified, industry will increasingly turn to quality-branded institutions and programs when distinguishing among job candidates.
Resumo:
While a rich body of literature in television and film studies and media policy studies has tended to focus on media activities in the formal sector, we know much less about informal media activities, their influence on state policies, and the dynamics between the formal and informal sectors. This article examines these issues with reference to a particularly revealing period following a large-scale government crackdown on peer-to-peer video-sharing sites in China in 2008. By analyzing the aims and consequences of the state action, I point to its counter-productive effects, namely cultural loss and the resurgence of offline piracy, and show its positive impact in forcing the informal into the formal sector and pressuring the formal sector to innovate. Meanwhile, an increasing rapprochement between professional and user-created content is leading to a new relationship between the formal and informal sectors. This case demonstrates the importance of considering the dynamics between the two sectors. It also offers compelling evidence of the role of the informal sector in engendering state action, which in turn shaped the co-evolution of the formal and informal sectors.
Resumo:
This project investigates musicalisation and intermediality in the writing and devising of composed theatre. Its research question asks: “How does the narrative of a musical play differ when it emerges from a setlist of original songs?” The aim is to create a performance event that is neither music nor theatre. This involves the composition of lyrics, music, action, spoken text and projected image, gathered in a script and presented in performance. Scholars such as Kulezic-Wilson (in Kendrick and Roesner 2011: 34) outline the acoustic dimension of the ‘performative turn’ (Mungen, Ernst and Bentzweizer, 2012) as heralding “…a shift of emphasis on how meaning is created (and veiled) and how the spectrum of theatrical creation and reception is widened.” Rebstock and Roesner (2012) capture similar approaches, building on Lehmann’s work on the post-dramatic, under the new term ‘composed theatre’. This practice-led research draws influence from these new theoretical frames, pushing beyond ‘the musical’. Springing from a set of original songs in dialogue with performed narrative, Bear with Me is a 45-minute, music-driven work for children involving projected image and participatory action. Bear with Me’s intermedial hybrid of theatrical, screen and concert presentations shows that a simple setlist of original songs can be the starting point for the structure of a complex intermedial performance. Bear with Me was programmed into the Queensland Performing Arts Centre’s Out of the Box Festival and was first performed in the Tony Gould Gallery at the Queensland Performing Arts Centre in June 2012. The season sold out. A masterclass on my playwriting methodology was presented at the Connecting The Dots Symposium, which ran alongside the festival.