62 results for AL-2004-1

in Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

Chronic leg ulcers cause significant pain, cost, decreased quality of life and morbidity for a considerable segment of the older population (Graham et al., 2003a). At any given time, the prevalence of patients with open leg ulcers receiving treatment is between 0.11% and 0.18% (Briggs & Closs 2003). Chronic leg ulcers occur in approximately 1-2% of the over-60 population in the US, UK, Europe and Australia (Baker & Stacey 1994; Johnson 1995; Lees & Lambert 1992; Margolis et al. 2002). Considerable research has been undertaken to determine the best treatment practices that will aid in the management and healing of these ulcers, and practical and effective strategies and techniques for healing venous leg ulcers have been trialled to demonstrate their beneficial effects (Nelson et al. 2004; Cullum et al. 2001)...

Relevance: 90.00%

Abstract:

This thesis focuses on the volatile and hygroscopic properties of mixed aerosol species, in particular the influence that organic species of varying solubility have upon seed aerosols. Aerosol studies were conducted at the Paul Scherrer Institut Laboratory for Atmospheric Chemistry (PSI-LAC, Villigen, Switzerland) and at the Queensland University of Technology International Laboratory for Air Quality and Health (QUT-ILAQH, Brisbane, Australia). The primary measurement tool employed in this program was the Volatilisation and Hygroscopicity Tandem Differential Mobility Analyser (VHTDMA; Johnson et al. 2004). This system was initially developed at QUT within the ILAQH and was completely re-developed as part of this project (see Section 1.4 for a description of this process). The new VHTDMA was deployed to the PSI-LAC, where an analysis of the volatile and hygroscopic properties of ammonium sulphate seeds coated with organic species formed from the photo-oxidation of α-pinene was conducted. This investigation was driven by a desire to understand the influence of atmospherically prevalent organics upon water uptake by material with cloud forming capabilities. Of particular note from this campaign were the observed influences of partially soluble organic coatings upon inorganic ammonium sulphate seeds above and below their deliquescence relative humidity (DRH). Above the DRH of the seed, increasing the volume fraction of the organic component was shown to reduce the water uptake of the mixed particle. Below the DRH, the organic was shown to activate the water uptake of the seed. This was the first time this effect had been observed for α-pinene derived SOA. In contrast with the simulated aerosols generated at the PSI-LAC, a case study of the volatile and hygroscopic properties of diesel emissions was undertaken. During this stage of the project, ternary nucleation was shown, for the first time, to be one of the processes involved in the formation of diesel particulate matter. Furthermore, these particles were shown to be coated with a volatile hydrophobic material which prevented the water uptake of the highly hygroscopic material below. This result was a first and indicated that previous studies into the hygroscopicity of diesel emissions had erroneously reported the particles to be hydrophobic. Both of these results contradict the previously upheld Zdanovskii-Stokes-Robinson (ZSR) additive rule for water uptake by mixed species. This is an important contribution as it adds to the weight of evidence that limits the validity of this rule.
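For context, the ZSR additive rule referred to above is commonly applied in (V)HTDMA work in the volume-fraction growth-factor form sketched below; this is the standard textbook statement of the assumption being tested, shown here for orientation, not an equation quoted from the thesis.

```latex
% Standard ZSR additivity assumption in growth-factor form (illustrative only):
% GF_mixed is the hygroscopic diameter growth factor of the mixed particle at a
% given relative humidity, GF_k that of pure component k, and eps_k the dry
% volume fraction of component k. Deviations from this relation, as reported
% above, indicate interactions between the organic and inorganic fractions.
\[
  \mathrm{GF}_{\mathrm{mixed}}(\mathrm{RH}) \;=\;
  \left( \sum_{k} \varepsilon_{k}\, \mathrm{GF}_{k}(\mathrm{RH})^{3} \right)^{1/3},
  \qquad \sum_{k} \varepsilon_{k} = 1 .
\]
```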

Relevance: 90.00%

Abstract:

Principal Topic: It is well known that most new ventures suffer from a significant lack of resources, which increases the risk of failure (Shepherd, Douglas and Shanley, 2000) and makes it difficult to attract stakeholders and financing for the venture (Bhide & Stevenson, 1999). The Resource-Based View (RBV) (Barney, 1991; Wernerfelt, 1984) is a dominant theoretical base increasingly drawn on within Strategic Management. While theoretical contributions applying RBV in the domain of entrepreneurship can arguably be traced back to Penrose (1959), there has been renewed attention recently (e.g. Alvarez & Busenitz, 2001; Alvarez & Barney, 2004). This said, empirical work is in its infancy. In part, this may be due to a lack of well-developed measuring instruments for testing ideas derived from RBV. The purpose of this study is to develop measurement scales that can serve to assist such empirical investigations. In so doing, we will try to overcome three deficiencies in current empirical measures used for the application of RBV to the entrepreneurship arena. First, measures for resource characteristics and configurations associated with typical competitive advantages found in entrepreneurial firms need to be developed. These include such things as alertness and industry knowledge (Kirzner, 1973), flexibility (Ebben & Johnson, 2005), strong networks (Lee et al., 2001) and, within knowledge-intensive contexts, unique technical expertise (Wiklund and Shepherd, 2003). Second, the RBV has the important limitations of being relatively static and modelled on large, established firms. In that context, traditional RBV focuses on competitive advantages. However, newly established firms often face disadvantages, especially those associated with the liabilities of newness (Aldrich & Auster, 1986). It is therefore important in entrepreneurial contexts to expand to an investigation of responses to competitive disadvantage through an RBV lens. Conversely, recent research has suggested that resource constraints actually have a positive effect on firm growth and performance under some circumstances (e.g., George, 2005; Katila & Shane, 2005; Mishina et al., 2004; Mosakowski, 2002; cf. also Baker & Nelson, 2005). Third, current empirical applications of RBV measure levels or amounts of particular resources available to a firm and infer that these resources deliver firms a competitive advantage by establishing a relationship between these resource levels and performance (e.g. via regression on profitability). However, there is the opportunity to directly measure the characteristics of resource configurations that deliver competitive advantage, such as Barney's well-known VRIO (Valuable, Rare, Inimitable and Organized) framework (Barney, 1997). Key Propositions and Methods: The aim of our study is to develop and test scales for measuring resource advantages (and disadvantages) and inimitability for entrepreneurial firms. The study proceeds in three stages. The first stage developed our initial scales based on earlier literature. Where possible, we adapted scales based on previous work. The first block of the scales related to the level of resource advantages and disadvantages. Respondents were asked the degree to which each resource category represented an advantage or disadvantage relative to other businesses in their industry on a 5-point response scale: Major Disadvantage, Slight Disadvantage, No Advantage or Disadvantage, Slight Advantage and Major Advantage. Items were developed as follows.
Network capabilities (3 items) were adapted from Madsen, Alsos, Borch, Ljunggren and Brastad (2006). Knowledge resources, namely marketing expertise/customer service (3 items) and technical expertise (3 items), were adapted from Wiklund and Shepherd (2003). Flexibility (2 items) and costs (4 items) were adapted from JIBS B97. New scales were developed for industry knowledge/alertness (3 items) and product/service advantages. The second block asked the respondent to nominate the most important resource advantage (and disadvantage) of the firm. For the advantage, they were then asked four questions to determine how easy it would be for other firms to imitate and/or substitute this resource on a 5-point Likert scale. For the disadvantage, they were asked corresponding questions related to overcoming this disadvantage. The second stage involved two pre-tests of the instrument to refine the scales. The first was an on-line convenience sample of 38 respondents. The second pre-test was a telephone interview with a random sample of 31 Nascent firms and 47 Young firms (< 3 years in operation) generated using a PSED method of randomly calling households (Gartner et al. 2004). Several items were dropped or reworded based on the pre-tests. The third stage (currently in progress) is part of Wave 1 of CAUSEE (Nascent Firms) and FEDP (Young Firms), a PSED-type study being conducted in Australia. The scales will be tested and analysed with a random sample of approximately 700 Nascent and Young firms respectively. In addition, a judgement sample of approximately 100 high potential businesses in each category will be included. Findings and Implications: The results of the main study (stage 3; data collection currently in progress) will allow comparison of the level of resource advantage/disadvantage across various sub-groups of the population. Of particular interest will be a comparison of the high potential firms with the random sample. Based on the smaller pre-tests (N=38 and N=78), the factor structure of the items confirmed the distinctiveness of the constructs. The reliabilities are within an acceptable range: Cronbach's alpha ranged from 0.701 to 0.927. The study will provide an opportunity for researchers to better operationalize RBV theory in studies within the domain of entrepreneurship. This is a fundamental requirement for the ability to test hypotheses derived from RBV in systematic, large scale research studies.
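As a side note on the reported reliabilities, Cronbach's alpha for a multi-item block (e.g. a hypothetical 3-item scale scored 1-5) can be computed as in the short Python sketch below; the data and the scale are invented for illustration and are not the CAUSEE/FEDP data or instrument.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from 40 firms to a 3-item block scored 1-5
# (Major Disadvantage ... Major Advantage); purely illustrative data.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(40, 1))                              # shared "true" position
responses = np.clip(base + rng.integers(-1, 2, size=(40, 3)), 1, 5).astype(float)
print(round(cronbach_alpha(responses), 3))
```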

Relevance: 90.00%

Abstract:

Key topics: Since the birth of the Open Source movement in the mid-1980s, open source software has become more and more widespread. Amongst others, the Linux operating system, the Apache web server and the Firefox web browser have taken substantial market share from their proprietary competitors. Open source software is governed by particular types of licenses. Whereas proprietary licenses only allow the software's use in exchange for a fee, open source licenses grant users more rights, such as free use, copying, modification and distribution of the software, as well as free access to the source code. This new phenomenon has raised many managerial questions: organizational issues related to the system of governance that underlies such open source communities (Raymond, 1999a; Lerner and Tirole, 2002; Lee and Cole, 2003; Mockus et al., 2000; Tuomi, 2000; Demil and Lecocq, 2006; O'Mahony and Ferraro, 2007; Fleming and Waguespack, 2007), collaborative innovation issues (Von Hippel, 2003; Von Krogh et al., 2003; Von Hippel and Von Krogh, 2003; Dahlander, 2005; Osterloh, 2007; David, 2008), issues related to the nature as well as the motivations of developers (Lerner and Tirole, 2002; Hertel, 2003; Dahlander and McKelvey, 2005; Jeppesen and Frederiksen, 2006), public policy and innovation issues (Jullien and Zimmermann, 2005; Lee, 2006), technological competition issues related to standard battles between proprietary and open source software (Bonaccorsi and Rossi, 2003; Bonaccorsi et al., 2004; Economides and Katsamakas, 2005; Chen, 2007), and intellectual property rights and licensing issues (Laat, 2005; Lerner and Tirole, 2005; Gambardella, 2006; Determann et al., 2007). A major unresolved issue concerns open source business models and revenue capture, given that open source licenses imply no fee for users. On this topic, articles show that commercial activity based on open source software is possible, as they describe different possible ways of doing business around open source (Raymond, 1999; Dahlander, 2004; Daffara, 2007; Bonaccorsi and Merito, 2007). These studies usually look at open source-based companies, which encompass a wide range of firms with different categories of activities: providers of packaged open source solutions, IT services and software engineering firms, and open source software publishers. However, the business model implications are different for each of these categories: the activities of providers of packaged solutions and IT services and software engineering firms are based on software developed outside their boundaries, whereas commercial software publishers sponsor the development of the open source software. This paper focuses on open source software publishers' business models, as this issue is even more crucial for this category of firms, which take the risk of investing in the development of the software. To date, the literature identifies and depicts only two generic types of business models for open source software publishers: the 'bundling' business model (Pal and Madanmohan, 2002; Dahlander, 2004) and the dual licensing business model (Välimäki, 2003; Comino and Manenti, 2007). Nevertheless, these business models are not applicable in all circumstances. Methodology: The objectives of this paper are: (1) to explore in which contexts the two generic business models described in the literature can be implemented successfully, and (2) to depict an additional business model for open source software publishers which can be used in a different context.
To do so, this paper draws upon an explorative case study of IdealX, a French open source security software publisher. The case study consists of a series of three interviews conducted between February 2005 and April 2006 with the co-founder and the business manager. It aims to depict the process of IdealX's search for an appropriate business model between its creation in 2000 and 2006. This software publisher tried both generic types of open source software publishers' business models before designing its own. Consequently, through IdealX's trials and errors, I investigate the conditions under which such generic business models can be effective. Moreover, this study describes the business model finally designed and adopted by IdealX: an additional open source software publisher's business model based on the principle of 'mutualisation', which is applicable in a different context. Results and implications: This article contributes to ongoing empirical work within entrepreneurship and strategic management on open source software publishers' business models: it provides the characteristics of three generic business models (the bundling business model, the dual licensing business model and the mutualisation business model) as well as the conditions under which they can be successfully implemented (regarding the type of product developed and the competencies of the firm). This paper also goes further than the traditional concept of business model used by scholars in the open source-related literature. In this article, a business model is considered not only as a way of generating income (a 'revenue model'; Amit and Zott, 2001), but rather as the necessary conjunction of value creation and value capture, in line with the recent literature on business models (Amit and Zott, 2001; Chesbrough and Rosenbloom, 2002; Teece, 2007). Consequently, this paper analyses business models from the point of view of these two components.

Relevance: 90.00%

Abstract:

Principal Topic: High technology consumer products such as notebooks, digital cameras and DVD players are not introduced into a vacuum. Consumer experience with related earlier generation technologies, such as PCs, film cameras and VCRs, and the installed base of these products strongly impact the market diffusion of the new generation products. Yet technology substitution has received only sparse attention in the diffusion of innovation literature. Research for consumer durables has been dominated by studies of (first purchase) adoption (cf. Bass 1969), which do not explicitly consider the presence of an existing product/technology. More recently, considerable attention has also been given to replacement purchases (cf. Kamakura and Balasubramanian 1987). Only a handful of papers explicitly deal with the diffusion of technology/product substitutes (e.g. Norton and Bass, 1987; Bass and Bass, 2004). They propose diffusion-type aggregate-level sales models that are used to forecast the overall sales for successive generations. Lacking household data, these aggregate models are unable to give insights into the decisions by individual households - whether to adopt generation II, and if so, when and why. This paper makes two contributions. First, it is the first large-scale empirical study that collects household data for successive generations of technologies in an effort to understand the drivers of adoption. Second, in comparison with traditional analysis that evaluates technology substitution as an 'adoption of innovation' type process, we propose that from a consumer's perspective, technology substitution combines elements of both adoption (adopting the new generation technology) and replacement (replacing the generation I product with generation II). Based on this proposition, we develop and test a number of hypotheses. Methodology/Key Propositions: In some cases, successive generations are clear 'substitutes' for the earlier generation, in that they have almost identical functionality - for example, successive generations of PCs (Pentium I to II to III), or flat-screen TVs substituting for colour TVs. More commonly, however, the new technology (generation II) is a 'partial substitute' for the existing technology (generation I). For example, digital cameras substitute for film-based cameras in the sense that they perform the same core function of taking photographs. They have additional attributes, such as easier copying and sharing of images; however, the attribute of image quality is inferior. In cases of partial substitution, some consumers will purchase generation II products as substitutes for their generation I product, while other consumers will purchase generation II products as additional products to be used as well as their generation I product. We propose that substitute generation II purchases combine elements of both adoption and replacement, whereas additional generation II purchases are solely an adoption-driven process. Extensive research on innovation adoption has consistently shown consumer innovativeness to be the most important consumer characteristic that drives adoption timing (Goldsmith et al. 1995; Gielens and Steenkamp 2007). Hence, we expect consumer innovativeness also to influence both additional and substitute generation II purchases. Hypothesis 1a) More innovative households will make additional generation II purchases earlier. 1b) More innovative households will make substitute generation II purchases earlier.
1c) Consumer innovativeness will have a stronger impact on additional generation II purchases than on substitute generation II purchases. As outlined above, substitute generation II purchases act, in part, like a replacement purchase for the generation I product. Prior research (Bayus 1991; Grewal et al. 2004) identified product age as the most dominant factor influencing replacements. Hence, we hypothesise that: Hypothesis 2: Households with older generation I products will make substitute generation II purchases earlier. Our survey of 8,077 households investigates their adoption of two new generation products: notebooks as a technology change to PCs, and DVD players as a technology shift from VCRs. We employ Cox hazard modelling to study factors influencing the timing of a household's adoption of generation II products. We determine whether this is an additional or substitute purchase by asking whether the generation I product is still used. A separate hazard model is conducted for additional and substitute purchases. Consumer innovativeness is measured as domain innovativeness adapted from the scales of Goldsmith and Hofacker (1991) and Flynn et al. (1996). The age of the generation I product is calculated based on the most recent household purchase of that product. Control variables include age, size and income of household, and age and education of the primary decision-maker. Results and Implications: Our preliminary results confirm both our hypotheses. Consumer innovativeness has a strong influence on both additional purchases (exp = 1.11) and substitute purchases (exp = 1.09). Exp is interpreted as the increased probability of purchase for an increase of 1.0 on a 7-point innovativeness scale. Also consistent with our hypotheses, the age of the generation I product has a dramatic influence on substitute purchases of VCRs/DVD players (exp = 2.92) and a strong influence for PCs/notebooks (exp = 1.30). Here, exp is interpreted as the increased probability of purchase for an increase of 10 years in the age of the generation I product. Yet, also as hypothesised, there was no influence on additional purchases. The results lead to two key implications. First, there is a clear distinction between additional and substitute purchases of generation II products, each with different drivers; treating these as a single process will mask the true drivers of adoption. Second, for substitute purchases, product age is a key driver. Hence, marketers of high technology products can utilise data on generation I product age (e.g. from warranty or loyalty programs) to target customers who are more likely to make a purchase.
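As a point of reference for the modelling approach described above, the sketch below shows how a Cox proportional hazards model of generation II adoption timing might be fitted in Python with the lifelines package. The variable names and the synthetic data are hypothetical stand-ins for the kinds of covariates discussed (innovativeness, generation I product age, household controls); this is not the authors' dataset or code.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500

# Hypothetical household data for ONE purchase type (e.g. substitute DVD purchases):
# months until the generation II purchase (or censoring), an event flag, and covariates.
df = pd.DataFrame({
    "months_to_purchase": rng.exponential(36, n).round() + 1,   # observation time
    "purchased": rng.integers(0, 2, n),                         # 1 = adopted, 0 = censored
    "innovativeness": rng.normal(4, 1, n),                      # 7-point domain innovativeness
    "gen1_age_years": rng.uniform(0, 10, n),                    # age of the generation I product
    "household_income": rng.normal(60, 15, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_purchase", event_col="purchased")
cph.print_summary()   # the exp(coef) column corresponds to the 'exp' hazard ratios reported above
```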

Relevance: 90.00%

Abstract:

Realistic estimates of short- and long-term (strategic) budgets for the maintenance and rehabilitation of road assets should consider the stochastic characteristics of asset conditions across road networks, so that the overall variability of road asset condition data is taken into account. Probability theory has been used for assessing life-cycle costs for bridge infrastructures by Kong and Frangopol (2003), Zayed et al. (2002), Liu and Frangopol (2004), Noortwijk and Frangopol (2004), and Novick (1993). Salem et al. (2003) cited the importance of the collection and analysis of existing data on total costs for all life-cycle phases of existing infrastructure, including bridges, roads etc., and the use of realistic methods for calculating the probable useful life of these infrastructures. Zayed et al. (2002) reported conflicting results in life-cycle cost analysis using deterministic and stochastic methods. Frangopol et al. (2001) suggested that additional research was required to develop better life-cycle models and tools to quantify risks and benefits associated with infrastructures. It is evident from the review of the literature that there is very limited information on methodologies that use the stochastic characteristics of asset condition data for assessing budgets/costs for road maintenance and rehabilitation (Abaza 2002; Salem et al. 2003; Zhao et al. 2004). Given this limited information in the research literature, this report describes and summarises the methodologies presented by each publication and also suggests a methodology for the current research project funded under the Cooperative Research Centre for Construction Innovation, CRC CI project no. 2003-029-C.
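To illustrate the difference between deterministic and stochastic (probabilistic) life-cycle cost estimates discussed above, the sketch below runs a simple Monte Carlo simulation in which rehabilitation timing and unit cost are treated as random variables. All distributions and dollar figures are invented for illustration and are not drawn from the report or the cited studies.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims, horizon, discount = 10_000, 30, 0.05   # 30-year strategic budget horizon

# Illustrative assumptions: time to first rehabilitation ~ lognormal (condition-driven),
# rehabilitation cost per km ~ normal; in practice both would be fitted to condition data.
rehab_year = rng.lognormal(mean=np.log(12), sigma=0.35, size=n_sims).clip(1, horizon)
rehab_cost = rng.normal(250_000, 40_000, size=n_sims)        # $/km, hypothetical

pv_cost = rehab_cost / (1 + discount) ** rehab_year          # present value per simulation

deterministic = 250_000 / (1 + discount) ** 12               # single "expected" scenario
print(f"deterministic PV estimate : ${deterministic:,.0f}")
print(f"stochastic mean PV        : ${pv_cost.mean():,.0f}")
print(f"90th percentile budget    : ${np.percentile(pv_cost, 90):,.0f}")
```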

Relevance: 90.00%

Abstract:

The common brown leafhopper, Orosius orientalis (Matsumura) (Homoptera: Cicadellidae), previously described as Orosius argentatus (Evans), is an important vector of several viruses and phytoplasmas worldwide. In Australia, phytoplasmas vectored by O. orientalis cause a range of economically important diseases, including legume little leaf (Hutton & Grylls, 1956), tomato big bud (Osmelak, 1986), lucerne witches' broom (Helson, 1951), potato purple top wilt (Harding & Teakle, 1985), and Australian lucerne yellows (Pilkington et al., 2004). Orosius orientalis also transmits Tobacco yellow dwarf virus (TYDV; genus Mastrevirus, family Geminiviridae) to beans, causing bean summer death disease (Ballantyne, 1968), and to tobacco, causing tobacco yellow dwarf disease (Hill, 1937, 1941). To date, TYDV has only been recorded in Australia. Both diseases result in significant production and quality losses (Ballantyne, 1968; Thomas, 1979; Moran & Rodoni, 1999). Although direct damage caused by leafhopper feeding has been observed, it is relatively minor compared to the losses resulting from disease (P. Trębicki, unpubl.).

Relevance: 90.00%

Abstract:

This report focuses on risk-assessment practices in the private rental market, with particular consideration of their impact on low-income renters. It is based on the fieldwork undertaken in the second stage of the research process that followed completion of the Positioning Paper. The key research questions this study addressed were: What are the various factors included in ‘risk-assessments’ by real estate agents in allocating ‘affordable’ tenancies? How are these risks quantified and managed? What are the key outcomes of their decision-making? The study builds on previous research demonstrating that a relatively large proportion of low-cost private rental accommodation is occupied by moderate- to high-income households (Wulff and Yates 2001; Seelig 2001; Yates et al. 2004). This is occurring in an environment where the private rental sector is now the de facto main provider of rental housing for lower-income households across Australia (Seelig et al. 2005) and where a number of factors are implicated in patterns of ‘income–rent mismatching’. These include ongoing shifts in public housing assistance; issues concerning eligibility for rent assistance; ‘supply’ factors, such as loss of low-cost rental stock through upgrading and/or transfer to owner-occupied housing; patterns of supply and demand driven largely by middle- to high-income owner-investors and renters; and patterns of housing need among low-income households for whom affordable housing is not appropriate. In formulating a way of approaching the analysis of ‘risk-assessment’ in rental housing management, this study has applied three sociological perspectives on risk: Beck’s (1992) formulation of risk society as entailing processes of ‘individualisation’; a socio-cultural perspective which emphasises the situated nature of perceptions of risk; and a perspective which has drawn attention to different modes of institutional governance of subjects, as ‘carriers of specific indicators of risk’. The private rental market was viewed as a social institution, and the research strategy was informed by ‘institutional ethnography’ as a method of enquiry. The study was based on interviews with property managers, real estate industry representatives, tenant advocates and community housing providers. The primary focus of inquiry was on ‘the moment of allocation’. Six local areas across metropolitan and regional Queensland, New South Wales, and South Australia were selected as case study localities. In terms of the main findings, it is evident that access to private rental housing is not just a matter of ‘supply and demand’. It is also about assessment of risk among applicants. Risk – perceived or actual – is thus a critical factor in deciding who gets housed, and how. Risk and its assessment matter in the context of housing provision and in the development of policy responses. The outcomes from this study also highlight a number of salient points: 1. There are two principal forms of risk associated with property management: financial risk and risk of litigation. 2. Certain tenant characteristics and/or circumstances – ability to pay and ability to care for the rented property – are the main factors focused on in assessing risk among applicants for rental housing. Signals of either ‘(in)ability to pay’ and/or ‘(in)ability to care for the property’ are almost always interpreted as markers of high levels of risk.
3. The processing of tenancy applications entails a complex and variable mix of formal and informal strategies of risk-assessment and allocation, where sorting (out), ranking, discriminating and handing over characterise the process. 4. In the eyes of property managers, ‘suitable’ tenants can be conceptualised as those who are resourceful, reputable, competent, strategic and presentable. 5. Property managers clearly articulated concern about risks entailed in a number of characteristics or situations. Being on a low income was the principal and overarching factor which agents considered. Others included:
- unemployment
- ‘big’ families; sole parent families
- domestic violence
- marital breakdown
- shift from home ownership to private rental
- Aboriginality and specific ethnicities
- physical incapacity
- aspects of ‘presentation’.
The financial vulnerability of applicants in these groups can be invoked, alongside expressed concerns about compromised capacities to manage income and/or ‘care for’ the property, as legitimate grounds for rejection or a lower ranking. 6. At the level of face-to-face interaction between the property manager and applicants, more intuitive assessments of risk based upon past experience or ‘gut feelings’ come into play. These judgements are interwoven with more systematic procedures of tenant selection. The findings suggest that considerable ‘risk’ is associated with low-income status, either directly or insofar as it is associated with other forms of perceived risk, and that such risks are likely to impede access to the professionally managed private rental market. Detailed analysis suggests that opportunities for access to housing by low-income householders also arise where, for example:
- the ‘local experience’ of an agency and/or property manager works in favour of particular applicants
- applicants can demonstrate available social support and financial guarantors
- an applicant’s preference or need for longer-term rental is seen to provide a level of financial security for the landlord
- applicants are prepared to agree to specific, more stringent conditions for inspection of properties and review of contracts
- the particular circumstances and motivations of landlords lead them to consider a wider range of applicants
- in particular circumstances, property managers are prepared to give special consideration to applicants who appear worthy, albeit ‘risky’.
The strategic actions of demonstrating and documenting on the part of vulnerable (low-income) tenant applicants can improve their chances of being perceived as resourceful, capable and ‘savvy’. Such actions are significant because they help to persuade property managers not only that the applicant may have sufficient resources (personal and material), but also that they accept that the onus is on them to show they are reputable, that they have valued ‘competencies’ and that they understand ‘how the system works’. The parameters of the market do shape the processes of risk-assessment and, ultimately, the strategic relation of power between property manager and tenant applicant. Low vacancy rates and a limited supply of lower-cost rental stock, in all areas, mean that there are many more tenant applicants than available properties, creating a highly competitive environment for applicants. The fundamental problem of supply is an aspect of the market that severely limits the chances of access to appropriate and affordable housing for low-income rental housing applicants.
There is recognition of the impact of this problem of supply. The study indicates three main directions for future focus in policy and program development: providing appropriate supports to tenants to access and sustain private rental housing, addressing issues of discrimination and privacy arising in the processes of selecting suitable tenants, and addressing problems of supply.

Relevance: 90.00%

Abstract:

Context: The School of Information Technology at QUT has recently undertaken a major restructuring of their Bachelor of Information Technology (BIT) course. The aims of this restructuring include reducing first year attrition and providing an attractive degree course that meets both student and industry expectations. Emphasis has been placed on the first semester in the context of retaining students, by introducing a set of four units that complement one another and provide introductory material on technology, programming and related skills, and generic skills that will aid the students throughout their undergraduate course and in their careers. This discussion relates to one of these four first semester units, namely Building IT Systems. The aim of this unit is to create small Information Technology (IT) systems that use programming or scripting and databases, either as standalone applications or as web applications. In the prior history of teaching introductory computer programming at QUT, programming has been taught as a stand-alone subject, and integration of computer applications with other systems such as databases and networks was not undertaken until students had been given a thorough grounding in those topics as well. Feedback has indicated that students do not believe that working with a database requires programming skills. In fact, the teaching of the building blocks of computer applications has been compartmentalised, with each taught in isolation from the others. The teaching of introductory computer programming has been an industry requirement of IT degree courses, as many jobs require at least some knowledge of the topic. Yet computer programming is not a skill that all students have equal capabilities of learning (Bruce et al., 2004), and this is clearly shown by the volume of publications dedicated to this topic in the literature over a broad period of time (Eckerdal & Berglund, 2005; Mayer, 1981; Winslow, 1996). The teaching of this introductory material has been done in much the same way over the past thirty years. During the period that introductory computer programming courses have been taught at QUT, a number of different programming languages and programming paradigms have been used, and different approaches to teaching and learning have been attempted in an effort to find the golden thread that would allow students to learn this complex topic. Unfortunately, computer programming is not a skill that can be learnt in one semester. Some basics can be learnt, but it can take many years to master (Norvig, 2001). Faculty data have typically shown a bimodal distribution of results for students undertaking introductory programming courses, with a high proportion of students receiving a high mark and a high proportion of students receiving a low or failing mark. This indicates that there are students who understand and excel with the introductory material, while there is another group who struggle to understand the concepts and practices required to translate a specification or problem statement into a computer program that achieves what is being requested. The consequence of a large group of students failing the introductory programming course has been a high level of attrition amongst first year students. This attrition level does not provide good continuity in student numbers in later years of the degree program, and the current approach is not seen as sustainable.

Relevance: 90.00%

Abstract:

Air pollution is ranked by the World Health Organisation as one of the top ten contributors to the global burden of disease and injury. Exposure to gaseous air pollutants, even at a low level, has been associated with cardiorespiratory diseases (Vedal, Brauer et al. 2003). Most recent epidemiological studies of air pollution have used time-series analyses to explore the relationship between daily mortality or morbidity and ambient air pollution concentrations on the same day or previous days (Hajat, Armstrong et al. 2007). However, most previous studies have examined the association between air pollution and health outcomes using air pollution data from a single monitoring site, or average values from a few monitoring sites, to represent the whole population of the study area. In fact, for a metropolitan city, ambient air pollution levels may differ significantly among different areas. There is increasing concern that the relationships between air pollution and mortality may vary with geographical area (Chen, Mengersen et al. 2007). Additionally, some studies have indicated that socio-economic status can act as a confounder when investigating the relationship between geographical location and health (Scoggins, Kjellstrom et al. 2004). This study examined the spatial variation in the relationship between long-term exposure to gaseous air pollutants (including nitrogen dioxide (NO2), ozone (O3) and sulphur dioxide (SO2)) and cardiorespiratory mortality in Brisbane, Australia, during the period 1996-2004.
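For readers unfamiliar with the time-series approach referred to above, the sketch below fits a Poisson regression of daily deaths on lagged NO2 with simple trend and seasonal controls, in the general spirit of such studies. The data are synthetic and the variable names hypothetical, so this is illustrative only and not the Brisbane analysis itself.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
days = pd.date_range("1996-01-01", "2004-12-31", freq="D")
t = np.arange(len(days))

# Synthetic daily series standing in for monitoring and mortality data
df = pd.DataFrame({
    "t": t,
    "no2": 10 + 3 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 1.5, len(t)),  # ppb
    "temp": 20 + 6 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 2, len(t)),   # deg C
})
df["no2_lag1"] = df["no2"].shift(1)
df["deaths"] = rng.poisson(np.exp(1.5 + 0.01 * df["no2"]))
df = df.dropna()

# Poisson time-series model: previous-day NO2 plus simple trend and seasonal controls
model = smf.glm(
    "deaths ~ no2_lag1 + temp + t + np.sin(2*np.pi*t/365.25) + np.cos(2*np.pi*t/365.25)",
    data=df, family=sm.families.Poisson(),
).fit()
print(np.exp(model.params["no2_lag1"]))   # relative risk per 1 ppb increase in lagged NO2
```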

Relevance: 90.00%

Abstract:

Obesity and type 2 diabetes mellitus (T2D) have reached epidemic proportions in many parts of the world, with numbers projected to rise dramatically in coming decades (Wang and Lobstein, 2006; Zaninotto et al., 2006). In Australia, and consistent with much of the developed world, the problem has been described as a ‘juggernaut’ that is out of control (Zimmet and James, 2007). Unfortunately, the burgeoning problem of non-communicable diseases, including obesity and T2D, is also affecting developing nations as populations undergo a nutrition transition (Caballero, 2005). The increased prevalence of overweight and obesity in children, adolescents and adults in both the developed and developing world is consistent with reductions in all forms of physical activity (Brownson et al., 2005). This brief paper provides an overview of the importance of physical activity and an outline of physical activity intervention studies, with particular reference to the growing years. As many intervention studies involving physical activity have been undertaken in the context of childhood obesity prevention (Lobstein et al., 2004), and an increasing proportion of the childhood population is overweight or obese, this is a major focus of the discussion.

Relevance: 90.00%

Abstract:

The rise of the ‘practice-led’ research approach has given us a new way of understanding what creative practice in art, design and media can do in the academy and the world: it can materialise new ideas and forms into being as a form of experimental research. Yet, to date, attention around the world, and especially in Australia, has been chiefly directed at the postgraduate research degrees, most notably the PhD or doctoral equivalents. Recent mapping projects and surveys of practice-led research in Australia reveal much about the institutional conditions of higher degree researchers, supervisors, examiners and research training (Baker et al 2009; Evans et al 2003; Dally et al 2004; Paltridge et al 2009; Phillips et al 2009). Given this focus, we might well ask: is the practice-led approach destined to be a part of the higher degree ghetto only, or does it have an afterlife? What is the place of ‘practice-led’ beyond the postgraduate degree? After all, postgraduate researchers do not remain postgraduates forever, and perhaps the practice-led approach to research may have benefits in wider university, professional and communal contexts.

Relevance: 90.00%

Abstract:

Ecological networks are often represented as utopian webs of green meandering through cities, across states, through regions and even across a country (Erickson, 2006, p.28; Fabos, 2004, p.326; Walmsley, 2006). While this may be an inspiring goal for some in developed countries, the reality may be somewhat different in developing countries. China, in its shift to urbanisation and suburbanisation, is also being persuaded to adjust its planning schemes according to these aspirational representations of green spaces (Yu et al, 2006, p.237; Zhang and Wang, 2006, p.455). The failure of other countries to achieve regional goals of natural and cultural heritage protection on the ground in this way (Peterson et al, 2007; Ryan et al, 2006; von Haaren and Reich, 2006) suggests that there may be flaws in the underpinning concepts that are widely circulated in North American and Western European literature (Jongman et al, 2004; Walmsley, 2006). In China, regional open space networks, regional green infrastructure or regional ecological corridors as we know them in the West are also likely to be problematic, at least in the foreseeable future. Reasons supporting this view can be drawn from lessons learned from project experience in landscape planning and related fields of study in China and overseas. Implementation of valuable regional green space networks is problematic because:
• the concept of region as a spatial unit for planning green space networks is ambiguous and undefinable for practical purposes;
• regional green space networks traditionally require top down inter-governmental cooperation and coordination which are generally hampered by inequalities of influence between and within government agencies;
• no coordinating body with funding powers exists for regional green space development and infrastructure authorities are still in transition from engineering authorities;
• like other infrastructure projects, green space is likely to become a competitive rather than a complementary resource for city governments;
• stable long-term management, maintenance and uses of green space networks must fit into a ‘family’ social structure rather than a ‘public good’ social structure, particularly as rural and urban property rights are being re-negotiated with city governments; and
• green space provision is a performance indicator of urban improvement in cities within the city hierarchy and remains quantitatively-based (land area, tree number and per capita share) rather than qualitatively-based with local people as the focus.

Relevance: 90.00%

Abstract:

When asymptotic series methods are applied in order to solve problems that arise in applied mathematics in the limit that some parameter becomes small, they are unable to demonstrate behaviour that occurs on a scale that is exponentially small compared to the algebraic terms of the asymptotic series. There are many examples of physical systems where behaviour on this scale has important effects and, as such, a range of techniques known as exponential asymptotic techniques were developed that may be used to examine behaviour on this exponentially small scale. Many problems in applied mathematics may be represented by behaviour within the complex plane, which may subsequently be examined using asymptotic methods. These problems frequently demonstrate behaviour known as Stokes phenomenon, which involves rapid switches of behaviour on an exponentially small scale in the neighbourhood of some curve known as a Stokes line. Exponential asymptotic techniques have been applied in order to obtain an expression for this exponentially small switching behaviour in the solutions to ordinary and partial differential equations. The problem of potential flow over a submerged obstacle has been previously considered in this manner by Chapman & Vanden-Broeck (2006). By representing the problem in the complex plane and applying an exponential asymptotic technique, they were able to detect the switching, and subsequent behaviour, of exponentially small waves on the free surface of the flow in the limit of small Froude number, specifically considering the case of flow over a step with one Stokes line present in the complex plane. We consider an extension of this work to flow configurations with multiple Stokes lines, such as flow over an inclined step, or flow over a bump or trench. The resultant expressions are analysed, and demonstrate interesting implications, such as the presence of exponentially sub-subdominant intermediate waves and the possibility of trapped surface waves for flow over a bump or trench. We then consider the effect of multiple Stokes lines in higher order equations, particularly investigating the behaviour of higher-order Stokes lines in the solutions to partial differential equations. These higher-order Stokes lines switch off the ordinary Stokes lines themselves, adding a layer of complexity to the overall Stokes structure of the solution. Specifically, we consider the different approaches taken by Howls et al. (2004) and Chapman & Mortimer (2005) in applying exponential asymptotic techniques to determine the higher-order Stokes phenomenon behaviour in the solution to a particular partial differential equation.
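For orientation, the generic structure that exponential asymptotic analyses of this kind work with can be written as below; the notation is illustrative and generic, not the specific expansion used in the thesis.

```latex
% Illustrative, generic form only (not the thesis's specific expansion):
% the truncated algebraic series is corrected by an exponentially small term
% whose prefactor (the Stokes multiplier S) switches value across a Stokes line.
\[
  u(x;\epsilon) \;\sim\; \sum_{n=0}^{N-1} \epsilon^{n} a_{n}(x)
  \;+\; \mathcal{S}(x)\, F(x)\, \mathrm{e}^{-\chi(x)/\epsilon},
  \qquad \epsilon \to 0,
\]
\[
  \text{Stokes lines:}\quad \operatorname{Im}\chi(x) = 0,
  \quad \operatorname{Re}\chi(x) > 0,
\]
% with S varying rapidly (e.g. from 0 to 1) in a neighbourhood of width
% O(sqrt(epsilon)) about each Stokes line; higher-order Stokes lines are curves
% across which ordinary Stokes lines can themselves be switched on or off.
```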

Relevance: 90.00%

Abstract:

My research investigates why nouns are learned disproportionately more frequently than other kinds of words during early language acquisition (Gentner, 1982; Gleitman et al., 2004). This question must be considered in the context of cognitive development in general. Infants have two major streams of environmental information to make meaningful: perceptual and linguistic. Perceptual information flows in from the senses and is processed into symbolic representations by the primitive language of thought (Fodor, 1975). These symbolic representations are then linked to linguistic input to enable language comprehension and, ultimately, production. Yet how exactly does perceptual information become conceptualized? Although this question is difficult, there has been progress. One way that children might have an easier job is if they have structures that simplify the data. Thus, if particular sorts of perceptual information could be separated from the mass of input, then it would be easier for children to refer to those specific things when learning words (Spelke, 1990; Pylyshyn, 2003). It would be easier still if linguistic input were segmented in predictable ways (Gentner, 1982; Gleitman et al., 2004). Unfortunately, the frequency of patterns in lexical or grammatical input cannot explain the cross-cultural and cross-linguistic tendency to favor nouns over verbs and predicates. There are three examples of this failure: 1) a wide variety of nouns are uttered less frequently than a smaller number of verbs and yet are learnt far more easily (Gentner, 1982); 2) word order and morphological transparency offer no insight when you contrast the sentence structures and word inflections of different languages (Slobin, 1973); and 3) particular language teaching behaviors (e.g. pointing at objects and repeating names for them) have little impact on children's tendency to prefer concrete nouns in their first fifty words (Newport et al., 1977). Although the linguistic solution appears problematic, there has been increasing evidence that the early visual system does indeed segment perceptual information in specific ways before the conscious mind begins to intervene (Pylyshyn, 2003). I argue that nouns are easier to learn because their referents directly connect with innate features of the perceptual faculty. This hypothesis stems from work done on visual indexes by Zenon Pylyshyn (2001, 2003). Pylyshyn argues that the early visual system (the architecture of the "vision module") segments perceptual data into pre-conceptual proto-objects called FINSTs. FINSTs typically correspond to physical things such as Spelke objects (Spelke, 1990). Hence, before conceptualization, visual objects are picked out by the perceptual system demonstratively, like a pointing finger indicating ‘this’ or ‘that’. I suggest that this primitive system of demonstration elaborates on Gareth Evans's (1982) theory of nonconceptual content. Nouns are learnt first because their referents attract demonstrative visual indexes. This theory also explains why infants less often name stationary objects such as plate or table, but do name things that attract the focal attention of the early visual system, i.e., small objects that move, such as ‘dog’ or ‘ball’. This view leaves open the question of how blind children learn words for visible objects and why children learn category nouns (e.g. 'dog') rather than proper nouns (e.g. 'Fido') or higher taxonomic distinctions (e.g. 'animal').