214 results for Autosomal-dominant Hemochromatosis
Abstract:
Recent studies have detected a dominant accumulation mode (~100 nm) in the Sea Spray Aerosol (SSA) number distribution. There is evidence to suggest that particles in this mode are composed primarily of organics. To investigate this hypothesis we conducted experiments on NaCl, artificial SSA and natural SSA particles with a Volatility-Hygroscopicity-Tandem-Differential-Mobility-Analyser (VH-TDMA). NaCl particles were atomiser generated, and a bubble generator was constructed to produce artificial and natural SSA particles. Natural seawater samples for use in the bubble generator were collected from biologically active, terrestrially-affected coastal water in Moreton Bay, Australia. Differences in the VH-TDMA-measured volatility curves of artificial and natural SSA particles were used to investigate and quantify the organic fraction of natural SSA particles. Hygroscopic Growth Factor (HGF) data, also obtained by the VH-TDMA, were used to confirm the conclusions drawn from the volatility data. Both datasets indicated that the organic fraction of our natural SSA particles evaporated in the VH-TDMA over the temperature range 170–200°C. The organic volume fraction for 71–77 nm natural SSA particles was 8±6%. The organic volume fraction did not vary significantly with water residence time in the bubble generator (40 seconds to 24 hours) or with SSA particle diameter in the range 38–173 nm. At room temperature we measured shape- and Kelvin-corrected HGFs at 90% RH of 2.46±0.02 for NaCl, 2.35±0.02 for artificial SSA and 2.26±0.02 for natural SSA particles. Overall, these results suggest that the natural accumulation mode SSA particles produced in these experiments contained only a minor organic fraction, which had little effect on hygroscopic growth. Our measurement of 8±6% is an order of magnitude below two previous measurements of the organic fraction in SSA particles of comparable sizes. We stress that our results were obtained using coastal seawater and cannot necessarily be applied on a regional or global ocean scale. Nevertheless, considering the order of magnitude discrepancy between this and previous studies, further research with independent measurement techniques and a variety of different seawaters is required to better quantify how much organic material is present in accumulation mode SSA.
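As a worked illustration of how HGF data can constrain an organic fraction, the following minimal Python sketch applies a volume-weighted (ZSR-style) growth factor mixing rule. The natural and artificial SSA growth factors are the values quoted above; the organic growth factor and the mixing rule itself are assumptions for illustration, not the paper's retrieval method.

```python
# Hedged sketch (not from the paper): estimate an organic volume fraction from
# hygroscopic growth factors with a volume-weighted (ZSR-style) mixing rule:
#   HGF_mix^3 = eps_org * HGF_org^3 + (1 - eps_org) * HGF_inorg^3
# The natural and artificial SSA HGFs are from the abstract; the organic HGF is
# an illustrative assumption, so the result is indicative only.

def organic_volume_fraction(hgf_mix: float, hgf_inorg: float, hgf_org: float) -> float:
    """Solve the ZSR mixing rule for the organic volume fraction."""
    return (hgf_inorg**3 - hgf_mix**3) / (hgf_inorg**3 - hgf_org**3)

if __name__ == "__main__":
    hgf_natural = 2.26     # natural SSA at 90% RH (from the abstract)
    hgf_artificial = 2.35  # artificial (inorganic) SSA at 90% RH (from the abstract)
    hgf_organic = 1.10     # assumed growth factor of the organic fraction
    eps = organic_volume_fraction(hgf_natural, hgf_artificial, hgf_organic)
    print(f"Estimated organic volume fraction: {eps:.1%}")
```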
Abstract:
Atmospheric ions are produced by many natural and anthropogenic sources and their concentrations vary widely between different environments. There is very little information on their concentrations in different types of urban environments, how they compare across these environments and what their dominant sources are. In this study, we measured airborne concentrations of small ions, particles and net particle charge at 32 outdoor sites in and around a major city in Australia and identified the main ion sources. Sites were classified into seven groups: park, woodland, city centre, residential, freeway, power lines and power substation. Parks were generally situated away from ion sources and represented the urban background value of about 270 ions cm⁻³. Median concentrations in all other groups were significantly higher than in the parks. We show that motor vehicles and power transmission systems are two major ion sources in urban areas. Power lines and substations constituted strong unipolar sources, while motor vehicle exhaust constituted a strong bipolar source. The small ion concentration in urban residential areas was about 960 cm⁻³. At sites where ion sources were co-located with particle sources, ion concentrations were reduced by the ion-particle attachment process. These results improve our understanding of air ion distribution and its interaction with particles in the urban outdoor environment.
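The sketch below illustrates the kind of site-group comparison described above, using hypothetical measurements rather than the study's data: small-ion concentrations are grouped by site type and medians are compared against the park (urban background) value.

```python
# Illustrative sketch with hypothetical data (not the study's measurements):
# compare median small-ion concentrations across site groups.
import statistics
from collections import defaultdict

# (site_group, small-ion concentration in ions per cm^3) -- assumed values
measurements = [
    ("park", 250), ("park", 270), ("park", 290),
    ("residential", 900), ("residential", 960), ("residential", 1020),
    ("freeway", 1800), ("power lines", 2500), ("power substation", 2300),
]

by_group = defaultdict(list)
for group, concentration in measurements:
    by_group[group].append(concentration)

background = statistics.median(by_group["park"])
for group, values in sorted(by_group.items()):
    median = statistics.median(values)
    print(f"{group:>16s}: median = {median:5.0f} ions/cm^3 "
          f"({median / background:.1f}x the park background)")
```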
Abstract:
Neo-liberalism has become one of the boom concepts of our time. From its original reference point as a descriptor of the economics of “Chicago School” figures such as Milton Friedman, or of authors such as Friedrich von Hayek, neo-liberalism has become an all-purpose descriptor and explanatory device for phenomena as diverse as Bollywood weddings, standardized testing in schools, violence in Australian cinema, and the digitization of content in public libraries. Moreover, it has become an entirely pejorative term: no one refers to their own views as “neo-liberal”; rather, it refers to the erroneous views held by others, whether or not they acknowledge this. Neo-liberalism as it has come to be used, then, bears many of the hallmarks of a dominant ideology theory in the classical Marxist sense, even if it is often not explored in these terms. This presentation will take the opportunity provided by the English-language publication of Michel Foucault’s 1978-79 lectures, under the title The Birth of Biopolitics, to consider how he used the term neo-liberalism, and how this equates with its current uses in critical social and cultural theory. It will be argued that Foucault did not understand neo-liberalism as a dominant ideology in these lectures, but rather as marking a point of inflection in the historical evolution of liberal political philosophies of government. It will also be argued that his interpretation of neo-liberalism was more nuanced and more comparative than the more recent uses of Foucault in the literature on neo-liberalism. The presentation will also look at how Foucault develops comparative historical models of liberal capitalism in The Birth of Biopolitics, arguing that this dimension of his work has been lost in more recent interpretations, which tend to retro-fit Foucault to contemporary critiques of either U.S. neo-conservatism or the “Third Way” of Tony Blair’s New Labour in the UK.
Abstract:
Computer simulation is a versatile and commonly used tool for the design and evaluation of systems of differing complexity. Power distribution systems and electric railway networks are areas in which computer simulations are heavily applied. A dominant factor in evaluating the performance of a software simulator is its processing time, especially in the case of real-time simulation. Parallel processing provides a viable means of reducing computing time and is therefore suitable for building real-time simulators. In this paper, we present different issues related to solving the power distribution system with parallel computing on a multiple-CPU server, concentrating in particular on the speedup performance of such an approach.
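To make the relationship between parallelisation and processing time concrete, here is a short sketch based on Amdahl's law; the 10% serial fraction is an illustrative assumption, not a figure reported in the paper.

```python
# Illustrative sketch (assumed numbers, not the paper's measured results): the
# speedup achievable on a multiple-CPU server is bounded by the serial fraction
# of the power distribution solver, per Amdahl's law: S(p) = 1 / (f + (1 - f) / p).

def amdahl_speedup(serial_fraction: float, n_cpus: int) -> float:
    """Upper-bound speedup for a workload with the given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cpus)

if __name__ == "__main__":
    serial_fraction = 0.10  # hypothetical: 10% of the solve is inherently sequential
    for cpus in (2, 4, 8, 16):
        print(f"{cpus:2d} CPUs -> speedup <= {amdahl_speedup(serial_fraction, cpus):.2f}")
```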
Abstract:
Principal Topic: Venture ideas are at the heart of entrepreneurship (Davidsson, 2004). However, we are yet to learn what factors drive entrepreneurs’ perceptions of the attractiveness of venture ideas, and what the relative importance of these factors is for the decision to pursue an idea. The expected financial gain is one factor that will obviously influence the perceived attractiveness of a venture idea (Shepherd & DeTienne, 2005). In addition, the degree of novelty of venture ideas along one or more dimensions, such as new products/services, new methods of production, entry into new markets/customer segments and new methods of promotion, may affect their attractiveness (Schumpeter, 1934). Further, according to the notion of an individual-opportunity nexus, venture ideas are closely associated with certain individual characteristics (relatedness). Shane (2000) empirically identified that an individual’s prior knowledge is closely associated with the recognition of venture ideas. Sarasvathy’s (2001; 2008) effectuation theory proposes a high degree of relatedness between venture ideas and the resource position of the individual. This study examines how entrepreneurs weigh considerations of different forms of novelty and relatedness, as well as potential financial gain, in assessing the attractiveness of venture ideas. Method: I use conjoint analysis to determine how expert entrepreneurs develop preferences for venture ideas involving different degrees of novelty, relatedness and potential gain. The conjoint analysis estimates respondents’ preferences in terms of utilities (or part-worths) for each level of novelty, relatedness and potential gain of venture ideas. A sample of 32 expert entrepreneurs who had received young entrepreneurship awards was selected for the study. Each respondent was interviewed and presented with 32 scenarios explicating different combinations of possible profiles for consideration. Results and Implications: Results indicate that while respondents do not prefer mere imitation, they derive higher utility from low to medium degrees of newness, suggesting that high degrees of newness are fraught with greater risk and/or greater resource needs. Respondents place considerable weight on alignment with the knowledge and skills they already possess in choosing a particular venture idea. The initial resource position of entrepreneurs is not equally important. Even though expected potential financial gain provides substantial utility, results indicate that it is not a dominant factor in the attractiveness of a venture idea.
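As an illustration of how part-worth utilities are recovered in a conjoint design, the sketch below fits an ordinary least squares model to dummy-coded attribute levels (novelty, relatedness, potential gain) using hypothetical profiles and ratings; the study's actual attribute levels and data are not reproduced here.

```python
# Minimal conjoint-analysis sketch with hypothetical data (not the study's design):
# part-worth utilities are the regression coefficients on dummy-coded attribute levels.
import numpy as np

# Columns: novelty=medium, novelty=high, relatedness=high, gain=high
# (baseline profile: low novelty, low relatedness, low gain). Ratings are on a 1-7 scale.
profiles = np.array([
    [0, 0, 0, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
], dtype=float)
ratings = np.array([2.0, 4.5, 3.0, 4.0, 3.5, 6.0, 4.5, 5.0])

# Add an intercept column and fit by least squares.
X = np.hstack([np.ones((profiles.shape[0], 1)), profiles])
part_worths, *_ = np.linalg.lstsq(X, ratings, rcond=None)

labels = ["baseline utility", "novelty: medium", "novelty: high",
          "relatedness: high", "potential gain: high"]
for label, value in zip(labels, part_worths):
    print(f"{label:>22s}: {value:+.2f}")
```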
Abstract:
This paper presents a conceptual framework, informed by Foucault’s work on governmentality, which allows for new kinds of reflection on the practice of legal education. Put simply, this framework suggests that legal education can be understood as a form of government that relies on a specific rationalisation and programming of the activities of legal educators, students, and administrators, and is implemented by harnessing specific techniques and bodies of ‘know-how’. Applying this framework to assessment at three Australian law schools, this paper highlights how assessment practices are rationalised, programmed, and implemented, and points out how this government shapes students’ legal personae. In particular, this analysis focuses on the governmental effects of pedagogical discourses that are dominant within the design and scholarship of legal education. It demonstrates that the development of pedagogically-sound regimes of assessment has contributed to a reformulation of the terrain of government, by providing the conditions under which forms of legal personae may be more effectively shaped, and extending the power relations that achieve this. This analysis provides legal educators with an original way of reflecting on the power effects of teaching the law, and new opportunities for thinking about what is possible in legal education.
Abstract:
Curriculum demands on school education systems continue to increase, with teachers at the forefront of implementing syllabus requirements. Education is reported frequently as a solution to most societal problems and, as a result of the world’s information explosion, teachers are expected to cover more and more within teaching programs. How can teachers combine subjects in order to capitalise on the competing educational agendas within school timeframes? Fusing curricula requires the bonding of standards from two or more syllabuses. Both technology and ICT complement the learning of science. This study analyses selected examples of preservice teachers’ overviews for fusing science, technology and ICT. These program overviews focused on primary students and the achievement of two standards (one from science and one from either technology or ICT). These primary preservice teachers’ fused-curricula overviews included scientific concepts and related technology and/or ICT skills and knowledge. Findings indicated a range of innovative curriculum plans for teaching primary science through technology and ICT, demonstrating that these subjects can form cohesive links towards achieving the respective learning standards. Teachers can work more astutely by fusing curricula; however, further professional development may be required to advance thinking about these processes. Bonding subjects through their learning standards can extend beyond previous integration or thematic work where standards may not have been assessed. Education systems need to articulate through syllabus documents how effective fusing of curricula can be achieved. It appears that education is a key avenue for addressing societal needs, problems and issues. Education is promoted as a universal solution, which has resulted in curriculum overload (Dare, Durand, Moeller, & Washington, 1997; Vinson, 2001). Societal and curriculum demands have placed added pressure on teachers, with many associated education issues increasing teachers’ workloads (Mobilise for Public Education, 2002). For example, as Australia has weather conducive to outdoor activities, social problems and issues arise that are reported through the media with calls for action; consequently, schools have become involved in swimming programs, road and bicycle safety programs, and a wide range of activities that had been considered a parental responsibility in the past. Teachers are expected to plan, implement and assess these extra-curricular activities within their already overcrowded timetables. At the same time, key learning areas (KLAs) such as science and technology are mandatory requirements within all Australian education systems. These systems have syllabuses outlining levels of content and the anticipated learning outcomes (also known as standards, essential learnings, and frameworks). Time allocated for teaching science is obviously an issue. In 2001, it was estimated that on average the time spent teaching science in Australian primary schools was almost an hour per week (Goodrum, Hackling, & Rennie, 2001). More recently, a study undertaken in the U.S. reported a similar finding: more than 80% of teachers in K-5 classrooms spent less than an hour teaching science (Dorph, Goldstein, Lee, et al., 2007). More importantly, 16% did not spend any time teaching science in their classrooms. Teachers need to learn to work smarter by optimising the use of their in-class time.
Integration is proposed as one of the ways to address the issue of curriculum overload (Venville & Dawson, 2005; Vogler, 2003). Even though there may be a lack of definition for integration (Hurley, 2001), curriculum integration aims at covering key concepts in two or more subject areas within the same lesson (Buxton & Whatley, 2002). This implies covering the curriculum in less time than if the subjects were taught separately; therefore, teachers should have more time to cover other educational issues. Predictably, the reality can be decidedly different (e.g., Brophy & Alleman, 1991; Venville & Dawson, 2005). Nevertheless, teachers report that students expand their knowledge and skills as a result of subject integration (James, Lamb, Householder, & Bailey, 2000). There seems to be considerable value in integrating science with other KLAs beyond addressing teaching workloads. Over two decades ago, Cohen and Staley (1982) claimed that integration can bring a subject into the primary curriculum that may otherwise be left out. Integrating science education aims to develop a more holistic perspective. Indeed, life is not neat components of stand-alone subjects; life integrates subject content in numerous ways, and curriculum integration can assist students to make these real-life connections (Burnett & Wichman, 1997). Science integration can provide the scope for real-life learning and the possibility of targeting students’ learning styles more effectively by providing more than one perspective (Hudson & Hudson, 2001). To illustrate, technology is essential to science education (Blueford & Rosenbloom, 2003; Board of Studies, 1999; Penick, 2002), and constructing technology immediately evokes a social purpose for such construction (Marker, 1992). For example, building a model windmill requires science and technology (Zubrowski, 2002) but has a key focus on sustainability and the social sciences. Science has the potential to be integrated with all KLAs (e.g., Cohen & Staley, 1982; Dobbs, 1995; James et al., 2000). Yet, “integration” appears to be a confusing term. Integration has an educational meaning focused on special education students being assimilated into mainstream classrooms. The word integration was also used in the late seventies, generally in reference to thematic approaches to teaching. For instance, a science theme about flight needs only to have a student draw a picture of a plane to show integration; this does not connect the anticipated outcomes from science and art. The term “fusing curricula” denotes a seamless bonding between two subjects; hence, standards (or outcomes) from both subjects need to be linked. This also goes beyond just embedding one subject within another. Embedding implies that one subject is dominant, while fusing curricula proposes an equal mix of learning within both subject areas. Primary education in Queensland has eight KLAs, each with its established content and each with a proposed structure for levels of learning. Primary teachers attempt to cover these syllabus requirements across the eight KLAs in less than five hours a day, and around the many extra-curricular activities occurring throughout a school year (e.g., Easter activities, Education Week, concerts, excursions, performances). In Australia, education systems have developed standards for all KLAs (e.g., Education Queensland, NSW Department of Education and Training, Victorian Education), usually designated by a code.
In the late 1990s in Queensland, “core learning outcomes” were developed for strands across all KLAs. For example, LL2.1 in the Queensland Education science syllabus means Life and Living at Level 2, standard number 1. Thus, a teacher’s planning requires the inclusion of standards as indicated by the presiding syllabus. More recently, the core learning outcomes were replaced by “essential learnings”. These specify “what students should be taught and what is important for students to have opportunities to know, understand and be able to do” (Queensland Studies Authority, 2009, para. 1). Fusing science education with other KLAs may facilitate more efficient use of time and resources; however, this type of planning needs to combine standards from two syllabuses. To further assist in facilitating sound pedagogical practices, there are models proposed for learning science, technology and other KLAs, such as Bloom’s Taxonomy (Bloom, 1956), Productive Pedagogies (Education Queensland, 2004), de Bono’s Six Hats (de Bono, 1985), and Gardner’s Multiple Intelligences (Gardner, 1999), that imply, warrant, or necessitate fused curricula. Bybee’s 5 Es, for example, with its five phases of learning (engage, explore, explain, elaborate, and evaluate; Bybee, 1997), has the potential for fusing science and ICT standards.
Abstract:
“You need to be able to tell stories. Illustration is a literature, not a pure fine art. It’s the fine art of writing with pictures.” – Gregory Rogers. This paper reads two recent wordless picture books by Australian illustrator Gregory Rogers in order to consider how “Shakespeare” is produced as a complex object of consumption for the implied child reader: The Boy, The Bear, The Baron, The Bard (2004) and Midsummer Knight (2006). In these books, other worlds are constructed via time-travel and travel to a fantasy world; the books clearly presume reader competence in narrative temporality and structure, and cultural literacy (particularly in reference to Elizabethan London and William Shakespeare), even as they challenge normative concepts via use of the fantastic. Exploring both narrative sequences and individual images reveals a tension in the books between past and present, and real and imagined. Where children’s texts tend to privilege Shakespeare, the man and his works, as inherently valuable, Rogers’s work complicates any sense of cultural value. Even as these picture books depend on a lexicon of Shakespearean images for meaning and coherence, they represent William Shakespeare as both an enemy to children (The Boy) and a national traitor (Midsummer). The protagonists, a boy in the first book and the bear he rescues in the second, effect political change by defeating Shakespeare. However, where these texts might seem to be activating a postcolonial cultural critique, this is complicated both by presumed readerly competence in authorized cultural discourses and by repeated affirmation of monarchies as ideal political systems. Power, then, in these picture books is at once rewarded and withheld, in a dialectic of (possibly postcolonial) agency and (arguably colonial) subjection; even as they challenge dominant valuations of “Shakespeare”, they do not challenge understandings of the “Child”.
Abstract:
For almost a decade before Hollywood existed, the French firm Pathe towered over the early film industry, with estimates of its share of all films sold around the world varying between 50% and 70%. Pathe was the first global entertainment company. This paper analyses its rise to market leadership by applying a theoretical framework drawn from the business literature on the causes of industry dominance, which provides insights into how firms acquire and maintain market dominance, in this case in the film industry. Using evidence presented by film historians, the paper argues that Pathe "fits" the expected theoretical model of a dominant firm because it had a marketing orientation, used an effective quality-based competitive strategy and possessed the six critical marketing capabilities that business research shows enable the best-performing firms to consistently outperform rivals.
Abstract:
This paper raises the question of whether comparative national models of communications research can be developed, along the lines of Hallin and Mancini’s (2004) analysis of comparative media policy, or the work of Perraton and Clift (2004) on comparative national capitalisms. Taking consideration of communications research in Australia and New Zealand as its starting point, the paper will consider what the relevant variables are in shaping an “intellectual milieu” for communications research in these countries, as compared to those of Europe, North America and Asia. Some possibly relevant variables include: • the type of media system (e.g. how significant is public service media?); • political culture (e.g. are there significant left-of-centre political parties?); • dominant intellectual traditions; • the level and types of research funding; • the overall structure of the higher education system, and where communications sits within it. In considering whether such an exercise can or should be undertaken, we can also evaluate, as Hallin and Mancini do, the significance of potentially homogenizing forces. These would include globalization, new media technologies, and the rise of a global “audit culture”. The paper will raise these issues as questions that emerge as we consider, as Curran and Park (2000) and Thussu (2009) have proposed, what a “de-Westernized” media and communications research paradigm may look like.
Abstract:
The connections between the development of the creative industries and the growth of cities were noted by several sources over the 2000s, but explanations of the nature of the link have thus far proved insufficient. The two dominant ‘scripts’ have been the ‘creative clusters’ and ‘creative cities/creative class’ theories, but both have proved inadequate, not least because they privilege amenities-led, supply-driven accounts of urban development that fail to adequately situate cities in wider global circuits of culture and economic production. It is proposed that the emergent field of cultural economic geography provides some insights into redressing these lacunae, particularly in the possibilities for an original synthesis of cultural and economic geography, cultural studies and new strands of economic theory.
Abstract:
This article reflects on aspects of what is claimed to be the distinctiveness of Australian communication, cultural and media studies, focusing on two cases – the cultural policy debate in the 1990s, and the concept of creative industries in the 2000s – and the relations between them, which highlight the alignment of research and scholarship with industry and policy and with which the author has been directly involved. Both ‘moments’ have been controversial; the three main lines of critique of such alignment of research and scholarship with industry and policy (its untoward proximity to tenets of the dominant neo-liberal ideology; the evacuation of cultural value by the economic; and the possible loss of critical vocation of the humanities scholar) are debated.
Abstract:
Since the 1970s the internationalisation process of firms has attracted wide research interest. One of the dominant explanations of firm internationalisation resulting from this research activity is the Uppsala stages model. In this paper, a pre-internationalisation phase is incorporated into the traditional Uppsala model to address the question: what are the antecedents of this model? Four concepts are proposed as the key components that define the experiential learning process underlying a firm’s pre-export phase: export stimuli, attitudinal/psychological commitment, resources and lateral rigidity. Through a survey of 290 Australian exporting and non-exporting small and medium-sized firms, data relating to the four pre-internationalisation concepts are collected and an Export Readiness Index (ERI) is constructed through factor analysis. Using logistic regression, the ERI is tested as a tool for analysing export readiness among Australian SMEs.
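A hedged sketch of the analysis pipeline described above, run on synthetic data: a one-factor factor analysis over the four pre-internationalisation constructs yields an Export Readiness Index, which logistic regression then tests against exporter status. All variable names and data here are illustrative assumptions, not the study's survey instrument or exact procedure.

```python
# Synthetic-data sketch (not the paper's survey): build an Export Readiness Index
# via factor analysis, then test it against exporter status with logistic regression.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_firms = 290  # matches the sample size quoted in the abstract

# Hypothetical construct scores: export stimuli, commitment, resources, lateral rigidity.
readiness = rng.normal(size=(n_firms, 1))
loadings = np.array([[0.8, 0.7, 0.6, -0.5]])  # lateral rigidity assumed to load negatively
constructs = readiness @ loadings + rng.normal(scale=0.5, size=(n_firms, 4))

# One-factor solution: the factor scores act as the Export Readiness Index (ERI).
eri = FactorAnalysis(n_components=1, random_state=0).fit_transform(constructs)

# Hypothetical exporter status, generated from the latent readiness for illustration.
exporter = (readiness[:, 0] + rng.normal(scale=0.8, size=n_firms) > 0).astype(int)

# Logistic regression tests whether the ERI discriminates exporters from non-exporters.
model = LogisticRegression().fit(eri, exporter)
print(f"ERI coefficient: {model.coef_[0][0]:.2f}")
print(f"In-sample classification accuracy: {model.score(eri, exporter):.2f}")
```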
Abstract:
Against a background of already thin markets in some sectors of major public sector infrastructure in Australia and the desire of the Australian federal government to leverage private finance, concerns about ensuring sufficient levels of competition are prompting the federal government to seek new sources of inbound Foreign Direct Investment. The aim of this paper is to justify and develop a means of deploying the eclectic paradigm of internationalisation that forms part of an Australian federally funded research project designed to explain the determinants of multinational contractors' willingness to bid for Australian public sector major infrastructure projects. Despite the dominance of the eclectic paradigm as a theory of internationalisation for over two decades, it has seen limited application to multinational construction. The research project is expected to be the first empirical study to deploy the eclectic paradigm for inbound FDI to Australia while using the dominant economic theories advocated for use within the eclectic paradigm. Furthermore, the research project is anticipated to yield a number of practical benefits. These include estimates of the potential scope to attract more multinational contractors to bid for Australian public sector infrastructure, including the nature and extent to which this scope can be influenced by the Australian governments responsible for the delivery of infrastructure. On the other hand, the research is also expected to indicate the extent to which indigenous and other multinational contractors domiciled in Australia are investing in special purpose technology and achieving productivity gains relative to foreign multinational contractors.
Abstract:
Neo-liberalism has become one of the boom concepts of our time. From its original reference point as a descriptor of the economics of the ‘Chicago School’ or authors such as Friedrich von Hayek, neo-liberalism has become an all-purpose concept, explanatory device and basis for social critique. This presentation evaluates Michel Foucault’s 1978–79 lectures, published as The Birth of Biopolitics, to consider how he used the term neo-liberalism, and how this equates with its current uses in critical social and cultural theory. It will be argued that Foucault did not understand neo-liberalism as a dominant ideology in these lectures, but rather as marking a point of inflection in the historical evolution of liberal political philosophies of government. It will also be argued that his interpretation of neo-liberalism was more nuanced and more comparative than more recent contributions. The article points towards an attempt to theorize comparative historical models of liberal capitalism.