895 results for AL-2004-1


Relevance: 100.00%

Publisher:

Abstract:

In the current managed Everglades system, the pre-drainage, patterned mosaic of sawgrass ridges, sloughs and tree islands has been substantially altered or reduced, largely as a result of human alterations to the historic ecological and hydrological processes that sustained landscape patterns. The pre-compartmentalization ridge and slough landscape was a mosaic of sloughs, elongated sawgrass ridges (50-200 m wide), and tree islands. The ridges, sloughs, and tree islands were elongated in the direction of water flow, with roughly equal areas of ridge and slough. Over the past decades, the ridge-slough topographic relief and spatial patterning have degraded in many areas of the Everglades. Nutrient-enriched areas have become dominated by Typha with little topographic relief; areas of reduced flow have lost the elongated ridge-slough topography; and ponded areas with excessively long hydroperiods have experienced a decline in ridge prevalence and shape, and in the number of tree islands (Sklar et al. 2004, Ogden 2005).

Relevance: 100.00%

Planktonic foraminifera populations were studied throughout the top 25 meters of IODP ACEX 302 Hole 4C from the central Arctic Ocean at a resolution varying from 5 cm (at the top of the record) to 10 cm. Planktonic foraminifera occur in high absolute abundances only in the uppermost fifty centimetres and are dominated by the taxon Neogloboquadrina pachyderma. Except for a few intermittent layers below this level, most samples are barren of calcareous microfossils. Within the topmost sediments, Neogloboquadrina pachyderma specimens present large morphological variability in the shape and number of chambers in the final whorl, chamber sphericity, size, and coiling direction. Five morphotypes were identified among the sinistral (sin.) population (Nps-1 to Nps-5), including a small form (Nps-5) that is similar to a non-encrusted normal form also previously identified in modern Arctic Ocean water masses. Twenty-five percent of the sinistral population is made up of large specimens (Nps-2, 3, 4), with a maximal mean diameter larger than 250 µm. Following observations made in peri-Arctic seas (Hillaire-Marcel et al. 2004, doi:10.1016/j.quascirev.2003.08.006), we propose that the occurrence of these large-sized specimens of N. pachyderma (sin.) in central Arctic Ocean sediments could signal sub-surface penetration of North Atlantic water.

Relevance: 100.00%

Introduction: Active transport (AT) can be an opportunity to increase the daily physical activity levels of children and adolescents, as well as being a practical, accessible, and sustainable long-term strategy. Objectives: The aim of this study was twofold: a) to analyse patterns of active commuting by bicycle to and from school, and b) to identify factors associated with bicycle use as AT, in a sample of children and adolescents attending public schools in Bogotá, Colombia. Material and methods: This is a sub-analysis of the FUPRECOL study of 8,060 children and adolescents aged 9-17 years. Each student's mode of commuting was determined with the question: "During the last 7 days, did you use a bicycle to travel to school and back home?" Responses were categorised as active ("Yes", commuting by bicycle) or passive ("No", commuting by motorised vehicle). Anthropometric parameters (weight, height, and waist circumference) were measured. The highest educational level attained by the mother/father (not reported; primary or secondary; technical or technological; university or postgraduate) and household composition (lives with father; with mother; with both parents; with grandparents; with other relatives) were self-reported by the parents. Associations between AT and the factors described above were analysed using binary logistic regression. Results: Overall, 21.9% of the sample reported using a bicycle as a means of transport, and 7.9% accumulated more than 120 minutes per day. A higher probability of cycling to school was observed among boys, among children aged 9-12 years, and among those whose father/mother reported a higher educational level ("university/postgraduate"). Conclusion: The findings of this study suggest that AT should be promoted from childhood onwards, with particular emphasis on the transition to adolescence and on girls, in order to increase their daily physical activity levels.
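The binary logistic regression described above links a dichotomous outcome (active vs. passive commuting) to predictors such as sex. As a minimal sketch with invented counts (not the FUPRECOL data): for a single dichotomous predictor, the fitted logistic coefficient equals the log of the sample odds ratio, which can be read directly from a 2x2 table.

```python
# Illustrative only: the counts below are hypothetical, not FUPRECOL data.
# With one dichotomous predictor (e.g. sex), the logistic regression
# coefficient is the log of the sample odds ratio from the 2x2 table.
import math

# hypothetical counts: cyclists ("Yes") vs non-cyclists ("No") by sex
boys_yes, boys_no = 310, 890
girls_yes, girls_no = 190, 1010

odds_boys = boys_yes / boys_no
odds_girls = girls_yes / girls_no
odds_ratio = odds_boys / odds_girls      # OR > 1: boys more likely to cycle
log_odds_ratio = math.log(odds_ratio)    # equals the logistic coefficient
```

With these assumed counts the odds ratio is about 1.85, i.e. boys' odds of cycling to school are nearly twice girls' odds, matching the direction of the reported result.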

Relevance: 90.00%

This thesis focuses on the volatile and hygroscopic properties of mixed aerosol species, in particular the influence that organic species of varying solubility have upon seed aerosols. Aerosol studies were conducted at the Paul Scherrer Institut Laboratory for Atmospheric Chemistry (PSI-LAC, Villigen, Switzerland) and at the Queensland University of Technology International Laboratory for Air Quality and Health (QUT-ILAQH, Brisbane, Australia). The primary measurement tool employed in this program was the Volatilisation and Hygroscopicity Tandem Differential Mobility Analyser (VHTDMA; Johnson et al. 2004). This system was initially developed at QUT within the ILAQH and was completely re-developed as part of this project (see Section 1.4 for a description of this process). The new VHTDMA was deployed to the PSI-LAC, where an analysis of the volatile and hygroscopic properties of ammonium sulphate seeds coated with organic species formed from the photo-oxidation of α-pinene was conducted. This investigation was driven by a desire to understand the influence of atmospherically prevalent organics upon water uptake by material with cloud-forming capabilities. Of particular note from this campaign were the observed influences of partially soluble organic coatings upon inorganic ammonium sulphate seeds above and below their deliquescence relative humidity (DRH). Above the DRH of the seed, increasing the volume fraction of the organic component was shown to reduce the water uptake of the mixed particle. Below the DRH, the organic was shown to activate the water uptake of the seed. This was the first time this effect had been observed for α-pinene-derived SOA. In contrast with the simulated aerosols generated at the PSI-LAC, a case study of the volatile and hygroscopic properties of diesel emissions was undertaken. During this stage of the project, ternary nucleation was shown, for the first time, to be one of the processes involved in the formation of diesel particulate matter. Furthermore, these particles were shown to be coated with a volatile hydrophobic material which prevented the water uptake of the highly hygroscopic material below. This result was a first, and indicated that previous studies into the hygroscopicity of diesel emissions had erroneously reported the particles to be hydrophobic. Both of these results contradict the previously upheld Zdanovskii-Stokes-Robinson (ZSR) additive rule for water uptake by mixed species. This is an important contribution, as it adds to the weight of evidence that limits the validity of this rule.
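The ZSR additive rule that these results contradict can be stated compactly: the water uptake of an internal mixture is the volume-fraction-weighted sum of the pure components' uptakes, which for hygroscopic growth factors gives gf_mix = (sum of eps_i * gf_i^3)^(1/3). A minimal sketch, using assumed growth factors and volume fractions rather than values measured in the thesis:

```python
# Sketch of the Zdanovskii-Stokes-Robinson (ZSR) additive rule for the
# hygroscopic growth of an internally mixed particle. All numbers are
# assumed for illustration; ZSR predicts the mixed growth factor as a
# volume-weighted sum of the pure-component water uptakes.

def zsr_growth_factor(components):
    """components: list of (volume_fraction, growth_factor) pairs."""
    assert abs(sum(vf for vf, _ in components) - 1.0) < 1e-9
    return sum(vf * gf ** 3 for vf, gf in components) ** (1.0 / 3.0)

# e.g. an ammonium sulphate seed with an alpha-pinene SOA coating,
# evaluated at a single relative humidity (assumed gf values)
mixed_gf = zsr_growth_factor([
    (0.6, 1.70),   # ammonium sulphate fraction, assumed gf
    (0.4, 1.10),   # organic coating fraction, assumed gf
])
```

Deviations of measured mixed-particle growth from this volume-weighted prediction, in either direction, are exactly the kind of evidence the thesis reports against the rule.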

Relevance: 90.00%

Principal Topic: It is well known that most new ventures suffer from a significant lack of resources, which increases the risk of failure (Shepherd, Douglas and Shanley, 2000) and makes it difficult to attract stakeholders and financing for the venture (Bhide & Stevenson, 1999). The Resource-Based View (RBV) (Barney, 1991; Wernerfelt, 1984) is a dominant theoretical base increasingly drawn on within Strategic Management. While theoretical contributions applying the RBV in the domain of entrepreneurship can arguably be traced back to Penrose (1959), there has been renewed attention recently (e.g. Alvarez & Busenitz, 2001; Alvarez & Barney, 2004). This said, empirical work is in its infancy, in part perhaps due to a lack of well-developed measurement instruments for testing ideas derived from the RBV. The purpose of this study is to develop measurement scales that can serve to assist such empirical investigations. In so doing, we try to overcome three deficiencies in the empirical measures currently used to apply the RBV to the entrepreneurship arena. First, measures need to be developed for the resource characteristics and configurations associated with the competitive advantages typically found in entrepreneurial firms. These include such things as alertness and industry knowledge (Kirzner, 1973), flexibility (Ebben & Johnson, 2005), strong networks (Lee et al., 2001) and, within knowledge-intensive contexts, unique technical expertise (Wiklund and Shepherd, 2003). Second, the RBV has the important limitations of being relatively static and modelled on large, established firms. In that context, the traditional RBV focuses on competitive advantages. However, newly established firms often face disadvantages, especially those associated with the liabilities of newness (Aldrich & Auster, 1986). It is therefore important in entrepreneurial contexts to expand the investigation to responses to competitive disadvantage through an RBV lens.
Conversely, recent research has suggested that resource constraints actually have a positive effect on firm growth and performance under some circumstances (e.g., George, 2005; Katila & Shane, 2005; Mishina et al., 2004; Mosakowski, 2002; cf. also Baker & Nelson, 2005). Third, current empirical applications of the RBV measure the levels or amounts of particular resources available to a firm. They infer that these resources deliver competitive advantage by establishing a relationship between resource levels and performance (e.g. via regression on profitability). However, there is the opportunity to directly measure the characteristics of resource configurations that deliver competitive advantage, such as Barney's well-known VRIO (Valuable, Rare, Inimitable and Organized) framework (Barney, 1997). Key Propositions and Methods: The aim of our study is to develop and test scales for measuring resource advantages (and disadvantages) and inimitability for entrepreneurial firms. The study proceeds in three stages. The first stage developed our initial scales based on the earlier literature; where possible, we adapted scales from previous work. The first block of the scales relates to the level of resource advantages and disadvantages. Respondents were asked the degree to which each resource category represented an advantage or disadvantage relative to other businesses in their industry on a 5-point response scale: Major Disadvantage, Slight Disadvantage, No Advantage or Disadvantage, Slight Advantage and Major Advantage. Items were developed as follows. Network capabilities (3 items) were adapted from Madsen, Alsos, Borch, Ljunggren & Brastad (2006). Knowledge resources for marketing expertise / customer service (3 items) and technical expertise (3 items) were adapted from Wiklund and Shepherd (2003). Flexibility (2 items) and costs (4 items) were adapted from JIBS B97. New scales were developed for industry knowledge / alertness (3 items) and product / service advantages.
The second block asked the respondent to nominate the most important resource advantage (and disadvantage) of the firm. For the advantage, they were then asked four questions to determine how easy it would be for other firms to imitate and/or substitute this resource on a 5-point Likert scale. For the disadvantage, they were asked corresponding questions related to overcoming this disadvantage. The second stage involved two pre-tests of the instrument to refine the scales. The first was an on-line convenience sample of 38 respondents. The second pre-test was a telephone interview with a random sample of 31 Nascent firms and 47 Young firms (< 3 years in operation) generated using a PSED method of randomly calling households (Gartner et al. 2004). Several items were dropped or reworded based on the pre-tests. The third stage (currently in progress) is part of Wave 1 of CAUSEE (Nascent Firms) and FEDP (Young Firms), a PSED-type study being conducted in Australia. The scales will be tested and analysed with random samples of approximately 700 Nascent and Young firms respectively. In addition, a judgement sample of approximately 100 high-potential businesses in each category will be included. Findings and Implications: The results of the main study (stage 3; data collection currently in progress) will allow comparison of the level of resource advantage / disadvantage across various sub-groups of the population. Of particular interest will be a comparison of the high-potential firms with the random sample. Based on the smaller pre-tests (N=38 and N=78), the factor structure of the items confirmed the distinctiveness of the constructs. The reliabilities are within an acceptable range: Cronbach alpha ranged from 0.701 to 0.927. The study will provide an opportunity for researchers to better operationalize RBV theory in studies within the domain of entrepreneurship.
This is a fundamental requirement for the ability to test hypotheses derived from RBV in systematic, large scale research studies.
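The reported reliabilities (Cronbach alpha between 0.701 and 0.927) come from the standard internal-consistency formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the scale totals). A sketch with invented 5-point responses, not the actual pre-test data:

```python
# Cronbach's alpha for a multi-item scale. The toy responses below are
# invented for illustration; they are not the pre-test data.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of k item-score lists, one list per scale item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent totals
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# three 5-point items answered by five hypothetical respondents
alpha = cronbach_alpha([
    [4, 5, 3, 2, 4],
    [4, 4, 3, 1, 5],
    [5, 5, 2, 2, 4],
])
```

For these toy data alpha is about 0.92, i.e. within the upper end of the range the pre-tests report.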

Relevance: 90.00%

Key topics: Since the birth of the Open Source movement in the mid-1980s, open source software has become more and more widespread. Amongst others, the Linux operating system, the Apache web server and the Firefox web browser have taken substantial market share from their proprietary competitors. Open source software is governed by particular types of licenses. Whereas proprietary licenses only allow use of the software in exchange for a fee, open source licenses grant users more rights, such as free use, free copying, free modification and free distribution of the software, as well as free access to the source code. This new phenomenon has raised many managerial questions: organizational issues related to the systems of governance that underlie such open source communities (Raymond, 1999a; Lerner and Tirole, 2002; Lee and Cole, 2003; Mockus et al., 2000; Tuomi, 2000; Demil and Lecocq, 2006; O'Mahony and Ferraro, 2007; Fleming and Waguespack, 2007), collaborative innovation issues (Von Hippel, 2003; Von Krogh et al., 2003; Von Hippel and Von Krogh, 2003; Dahlander, 2005; Osterloh, 2007; David, 2008), issues related to the nature as well as the motivations of developers (Lerner and Tirole, 2002; Hertel, 2003; Dahlander and McKelvey, 2005; Jeppesen and Frederiksen, 2006), public policy and innovation issues (Jullien and Zimmermann, 2005; Lee, 2006), technological competition issues related to standard battles between proprietary and open source software (Bonaccorsi and Rossi, 2003; Bonaccorsi et al., 2004; Economides and Katsamakas, 2005; Chen, 2007), and intellectual property rights and licensing issues (Laat, 2005; Lerner and Tirole, 2005; Gambardella, 2006; Determann et al., 2007). A major unresolved issue concerns open source business models and revenue capture, given that open source licenses imply no fee for users.
On this topic, articles show that a commercial activity based on open source software is possible, as they describe different possible ways of doing business around open source (Raymond, 1999; Dahlander, 2004; Daffara, 2007; Bonaccorsi and Merito, 2007). These studies usually look at open source-based companies, which encompass a wide range of firms with different categories of activities: providers of packaged open source solutions, IT services & software engineering firms, and open source software publishers. However, the business model implications differ for each of these categories: the activities of providers of packaged solutions and of IT services & software engineering firms are based on software developed outside their boundaries, whereas commercial software publishers sponsor the development of the open source software. This paper focuses on open source software publishers' business models, as this issue is even more crucial for this category of firms, which take the risk of investing in the development of the software. The existing literature identifies and depicts only two generic types of business models for open source software publishers: the ''bundling'' business models (Pal and Madanmohan, 2002; Dahlander, 2004) and the dual licensing business models (Välimäki, 2003; Comino and Manenti, 2007). Nevertheless, these business models are not applicable in all circumstances. Methodology: The objectives of this paper are: (1) to explore the contexts in which the two generic business models described in the literature can be implemented successfully, and (2) to depict an additional business model for open source software publishers which can be used in a different context. To do so, this paper draws upon an explorative case study of IdealX, a French open source security software publisher. This case study consists of a series of three interviews conducted between February 2005 and April 2006 with the co-founder and the business manager.
It aims at depicting the process of IdealX's search for an appropriate business model between its creation in 2000 and 2006. This software publisher tried both generic types of open source software publishers' business models before designing its own. Consequently, through IdealX's trials and errors, I investigate the conditions under which such generic business models can be effective. Moreover, this study describes the business model finally designed and adopted by IdealX: an additional open source software publisher's business model based on the principle of ''mutualisation'', which is applicable in a different context. Results and implications: Finally, this article contributes to ongoing empirical work within entrepreneurship and strategic management on open source software publishers' business models: it provides the characteristics of three generic business models (the bundling business model, the dual licensing business model and the mutualisation business model) as well as the conditions under which they can be successfully implemented (regarding the type of product developed and the competencies of the firm). This paper also goes beyond the traditional concept of business model used by scholars in the open source literature. In this article, a business model is not only considered as a way of generating income (a ''revenue model'' (Amit and Zott, 2001)), but rather as the necessary conjunction of value creation and value capture, in line with the recent literature on business models (Amit and Zott, 2001; Chesbrough and Rosenbloom, 2002; Teece, 2007). Consequently, this paper analyses business models from the point of view of these two components.

Relevance: 90.00%

Principal Topic: High technology consumer products such as notebooks, digital cameras and DVD players are not introduced into a vacuum. Consumer experience with related earlier-generation technologies, such as PCs, film cameras and VCRs, and the installed base of these products, strongly impacts the market diffusion of the new-generation products. Yet technology substitution has received only sparse attention in the diffusion-of-innovation literature. Research on consumer durables has been dominated by studies of (first purchase) adoption (cf. Bass 1969), which do not explicitly consider the presence of an existing product/technology. More recently, considerable attention has also been given to replacement purchases (cf. Kamakura and Balasubramanian 1987). Only a handful of papers explicitly deal with the diffusion of technology/product substitutes (e.g. Norton and Bass, 1987; Bass and Bass, 2004). They propose diffusion-type, aggregate-level sales models that are used to forecast the overall sales of successive generations. Lacking household data, these aggregate models are unable to give insights into the decisions of individual households: whether to adopt generation II, and if so, when and why. This paper makes two contributions. It is the first large-scale empirical study that collects household data for successive generations of technologies in an effort to understand the drivers of adoption. Second, in comparison to traditional analysis that treats technology substitution as an ''adoption of innovation'' type process, we propose that from a consumer's perspective, technology substitution combines elements of both adoption (adopting the new-generation technology) and replacement (replacing the generation I product with generation II). Based on this proposition, we develop and test a number of hypotheses.
Methodology/Key Propositions: In some cases, successive generations are clear ''substitutes'' for the earlier generation, in that they have almost identical functionality; for example, successive generations of PCs (Pentium I to II to III), or flat-screen TVs substituting for colour TVs. More commonly, however, the new technology (generation II) is a ''partial substitute'' for the existing technology (generation I). For example, digital cameras substitute for film-based cameras in the sense that they perform the same core function of taking photographs. They have some additional attributes, such as easier copying and sharing of images; however, the attribute of image quality is inferior. In cases of partial substitution, some consumers will purchase generation II products as substitutes for their generation I product, while other consumers will purchase generation II products as additional products to be used alongside their generation I product. We propose that substitute generation II purchases combine elements of both adoption and replacement, but that additional generation II purchases are a solely adoption-driven process. Extensive research on innovation adoption has consistently shown consumer innovativeness to be the most important consumer characteristic driving adoption timing (Goldsmith et al. 1995; Gielens and Steenkamp 2007). Hence, we expect consumer innovativeness also to influence both additional and substitute generation II purchases. Hypothesis 1a: More innovative households will make additional generation II purchases earlier. Hypothesis 1b: More innovative households will make substitute generation II purchases earlier. Hypothesis 1c: Consumer innovativeness will have a stronger impact on additional generation II purchases than on substitute generation II purchases. As outlined above, substitute generation II purchases act in part like a replacement purchase for the generation I product.
Prior research (Bayus 1991; Grewal et al. 2004) identified product age as the most dominant factor influencing replacements. Hence, we hypothesise: Hypothesis 2: Households with older generation I products will make substitute generation II purchases earlier. Our survey of 8,077 households investigates their adoption of two new-generation products: notebooks as a technology change from PCs, and DVD players as a technology shift from VCRs. We employ Cox hazard modelling to study the factors influencing the timing of a household's adoption of generation II products. We determine whether a purchase is additional or substitute by asking whether the generation I product is still used. Separate hazard models are conducted for additional and substitute purchases. Consumer innovativeness is measured as domain innovativeness, adapted from the scales of Goldsmith and Hofacker (1991) and Flynn et al. (1996). The age of the generation I product is calculated based on the most recent household purchase of that product. Control variables include the age, size and income of the household, and the age and education of the primary decision-maker. Results and Implications: Our preliminary results confirm both our hypotheses. Consumer innovativeness has a strong influence on both additional purchases (exp = 1.11) and substitute purchases (exp = 1.09), where exp is interpreted as the increased probability of purchase for an increase of 1.0 on a 7-point innovativeness scale. Also consistent with our hypotheses, the age of the generation I product has a dramatic influence on substitute purchases of VCRs/DVD players (exp = 2.92) and a strong influence for PCs/notebooks (exp = 1.30), where exp is interpreted as the increased probability of purchase for an increase of 10 years in the age of the generation I product. Yet, also as hypothesised, there was no influence on additional purchases. The results lead to two key implications.
First, there is a clear distinction between additional and substitute purchases of generation II products, each with different drivers. Treating these as a single process will mask the true drivers of adoption. For substitute purchases, product age is a key driver. Hence, marketers of high technology products can utilise data on generation I product age (e.g. from warranty or loyalty programs) to target customers who are more likely to make a purchase.
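The exp values reported for the Cox models scale multiplicatively: exp(beta) is the hazard ratio for a one-unit increase in a covariate, and for a change of d units the hazard multiplies by exp(beta) raised to the power d. A small sketch using the hazard ratios quoted in the abstract (the step sizes are the abstract's stated units):

```python
# Scaling Cox proportional-hazards coefficients. The values 1.09 and
# 2.92 are the exp(beta) figures quoted in the abstract; everything
# else here is straightforward arithmetic on them.

def hazard_ratio(exp_beta, delta):
    """Hazard ratio for a covariate change of `delta` units."""
    return exp_beta ** delta

# innovativeness: exp = 1.09 per scale point, so moving from the
# minimum (1) to the maximum (7) of the 7-point scale gives:
hr_innovativeness = hazard_ratio(1.09, 6)

# VCR age: exp = 2.92 is quoted per 10-year step, so per single year:
hr_vcr_per_year = hazard_ratio(2.92 ** (1 / 10), 1)
```

So a household at the top of the innovativeness scale has roughly 1.7 times the substitution hazard of one at the bottom, while each extra year of VCR age multiplies the hazard by about 1.11.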

Relevance: 90.00%

Realistic estimates of short- and long-term (strategic) budgets for the maintenance and rehabilitation of road assets should consider the stochastic characteristics of asset conditions across road networks, so that the overall variability of road asset condition data is taken into account. Probability theory has been used for assessing life-cycle costs for bridge infrastructures by Kong and Frangopol (2003), Zayed et al. (2002), Liu and Frangopol (2004), Noortwijk and Frangopol (2004), and Novick (1993). Salem et al. (2003) cited the importance of the collection and analysis of existing data on total costs for all life-cycle phases of existing infrastructure, including bridges, roads etc., and the use of realistic methods for calculating the probable useful life of these infrastructures. Zayed et al. (2002) reported conflicting results in life-cycle cost analysis using deterministic and stochastic methods. Frangopol et al. (2001) suggested that additional research was required to develop better life-cycle models and tools to quantify the risks and benefits associated with infrastructures. It is evident from the review of the literature that there is very limited information on methodologies that use the stochastic characteristics of asset condition data for assessing budgets/costs for road maintenance and rehabilitation (Abaza 2002; Salem et al. 2003; Zhao et al. 2004). Given this limited information in the research literature, this report describes and summarises the methodologies presented by each publication and also suggests a methodology for the current research project, funded under the Cooperative Research Centre for Construction Innovation (CRC CI) project no. 2003-029-C.
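The stochastic budgeting idea can be sketched with a Monte Carlo simulation: instead of a single deterministic rehabilitation cost, sample condition and treatment-cost distributions for each road segment and report budget percentiles. All distributions, thresholds and costs below are assumed for illustration; they are not data from the CRC CI project.

```python
# Minimal Monte Carlo sketch of stochastic life-cycle budgeting for a
# road network. Every distribution and cost figure is an assumption
# made for illustration only.
import random

random.seed(7)  # reproducible draws

def simulate_network_cost(n_segments=200, years=10):
    """Total maintenance cost for one random realisation of the network."""
    total = 0.0
    for _ in range(n_segments):
        condition = random.gauss(70, 10)       # assumed condition index
        for _ in range(years):
            condition -= random.uniform(1, 4)  # stochastic deterioration
            if condition < 50:                 # rehabilitation trigger
                total += random.gauss(100_000, 15_000)  # treatment cost
                condition = 85                 # condition after treatment
    return total

costs = sorted(simulate_network_cost() for _ in range(500))
p50 = costs[len(costs) // 2]         # median budget
p90 = costs[int(len(costs) * 0.9)]   # conservative budget
```

Reporting the 90th percentile rather than a single point estimate is one way the variability of condition data translates into a budget figure.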

Relevance: 90.00%

The common brown leafhopper, Orosius orientalis (Matsumura) (Homoptera: Cicadellidae), previously described as Orosius argentatus (Evans), is an important vector of several viruses and phytoplasmas worldwide. In Australia, phytoplasmas vectored by O. orientalis cause a range of economically important diseases, including legume little leaf (Hutton & Grylls, 1956), tomato big bud (Osmelak, 1986), lucerne witches' broom (Helson, 1951), potato purple top wilt (Harding & Teakle, 1985), and Australian lucerne yellows (Pilkington et al., 2004). Orosius orientalis also transmits Tobacco yellow dwarf virus (TYDV; genus Mastrevirus, family Geminiviridae) to beans, causing bean summer death disease (Ballantyne, 1968), and to tobacco, causing tobacco yellow dwarf disease (Hill, 1937, 1941). TYDV has only been recorded in Australia to date. Both diseases result in significant production and quality losses (Ballantyne, 1968; Thomas, 1979; Moran & Rodoni, 1999). Although direct damage caused by leafhopper feeding has been observed, it is relatively minor compared to the losses resulting from disease (P. Trębicki, unpubl.).

Relevance: 90.00%

This report focuses on risk-assessment practices in the private rental market, with particular consideration of their impact on low-income renters. It is based on fieldwork undertaken in the second stage of the research process, following completion of the Positioning Paper. The key research questions this study addressed were: What are the various factors included in ‘risk-assessments’ by real estate agents in allocating ‘affordable’ tenancies? How are these risks quantified and managed? What are the key outcomes of their decision-making? The study builds on previous research demonstrating that a relatively large proportion of low-cost private rental accommodation is occupied by moderate- to high-income households (Wulff and Yates 2001; Seelig 2001; Yates et al. 2004). This is occurring in an environment where the private rental sector is now the de facto main provider of rental housing for lower-income households across Australia (Seelig et al. 2005), and where a number of factors are implicated in patterns of ‘income–rent mismatching’. These include ongoing shifts in public housing assistance; issues concerning eligibility for rent assistance; ‘supply’ factors, such as loss of low-cost rental stock through upgrading and/or transfer to owner-occupied housing; patterns of supply and demand driven largely by middle- to high-income owner-investors and renters; and patterns of housing need among low-income households for whom affordable housing is not appropriate. In formulating a way of approaching the analysis of ‘risk-assessment’ in rental housing management, this study has applied three sociological perspectives on risk: Beck’s (1992) formulation of risk society as entailing processes of ‘individualisation’; a socio-cultural perspective which emphasises the situated nature of perceptions of risk; and a perspective which has drawn attention to different modes of institutional governance of subjects, as ‘carriers of specific indicators of risk’.
The private rental market was viewed as a social institution, and the research strategy was informed by ‘institutional ethnography’ as a method of enquiry. The study was based on interviews with property managers, real estate industry representatives, tenant advocates and community housing providers. The primary focus of inquiry was on ‘the moment of allocation’. Six local areas across metropolitan and regional Queensland, New South Wales, and South Australia were selected as case study localities. In terms of the main findings, it is evident that access to private rental housing is not just a matter of ‘supply and demand’. It is also about assessment of risk among applicants. Risk – perceived or actual – is thus a critical factor in deciding who gets housed, and how. Risk and its assessment matter in the context of housing provision and in the development of policy responses. The outcomes from this study also highlight a number of salient points: 1.There are two principal forms of risk associated with property management: financial risk and risk of litigation. 2. Certain tenant characteristics and/or circumstances – ability to pay and ability to care for the rented property – are the main factors focused on in assessing risk among applicants for rental housing. Signals of either ‘(in)ability to pay’ and/or ‘(in)ability to care for the property’ are almost always interpreted as markers of high levels of risk. 3. The processing of tenancy applications entails a complex and variable mix of formal and informal strategies of risk-assessment and allocation where sorting (out), ranking, discriminating and handing over characterise the process. 4. In the eyes of property managers, ‘suitable’ tenants can be conceptualised as those who are resourceful, reputable, competent, strategic and presentable. 5. Property managers clearly articulated concern about risks entailed in a number of characteristics or situations. 
Being on a low income was the principal and overarching factor which agents considered. Others included:

- unemployment
- ‘big’ families; sole parent families
- domestic violence
- marital breakdown
- shift from home ownership to private rental
- Aboriginality and specific ethnicities
- physical incapacity
- aspects of ‘presentation’.

The financial vulnerability of applicants in these groups can be invoked, alongside expressed concerns about compromised capacities to manage income and/or ‘care for’ the property, as legitimate grounds for rejection or a lower ranking.

6. At the level of face-to-face interaction between the property manager and applicants, more intuitive assessments of risk based upon past experience or ‘gut feelings’ come into play. These judgements are interwoven with more systematic procedures of tenant selection.

The findings suggest that considerable ‘risk’ is associated with low-income status, either directly or insofar as it is associated with other forms of perceived risk, and that such risks are likely to impede access to the professionally managed private rental market. Detailed analysis suggests that opportunities for access to housing by low-income householders also arise where, for example:

- the ‘local experience’ of an agency and/or property manager works in favour of particular applicants
- applicants can demonstrate available social support and financial guarantors
- an applicant’s preference or need for longer-term rental is seen to provide a level of financial security for the landlord
- applicants are prepared to agree to specific, more stringent conditions for inspection of properties and review of contracts
- the particular circumstances and motivations of landlords lead them to consider a wider range of applicants
- in particular circumstances, property managers are prepared to give special consideration to applicants who appear worthy, albeit ‘risky’.
The strategic actions of demonstrating and documenting on the part of vulnerable (low-income) tenant applicants can improve their chances of being perceived as resourceful, capable and ‘savvy’. Such actions are significant because they help to persuade property managers not only that the applicant may have sufficient resources (personal and material), but also that they accept that the onus is on them to show they are reputable, that they have valued ‘competencies’, and that they understand ‘how the system works’.

The parameters of the market shape the processes of risk-assessment and, ultimately, the strategic relation of power between property manager and tenant applicant. Low vacancy rates and a limited supply of lower-cost rental stock in all areas mean that there are many more tenant applicants than available properties, creating a highly competitive environment for applicants. This fundamental problem of supply severely limits the chances of access to appropriate and affordable housing for low-income rental housing applicants, and its impact is widely recognised.

The study indicates three main directions for future focus in policy and program development: providing appropriate supports to tenants to access and sustain private rental housing; addressing issues of discrimination and privacy arising in the processes of selecting suitable tenants; and addressing problems of supply.

Relevância: 90.00%

Resumo:

Context: The School of Information Technology at QUT has recently undertaken a major restructuring of its Bachelor of Information Technology (BIT) course. Some of the aims of this restructuring include reducing first-year attrition and providing an attractive degree course that meets both student and industry expectations. Emphasis has been placed on the first semester in the context of retaining students, by introducing a set of four units that complement one another and provide introductory material on technology, programming and related skills, and generic skills that will aid the students throughout their undergraduate course and in their careers. This discussion relates to one of these four first-semester units, namely Building IT Systems. The aim of this unit is to create small Information Technology (IT) systems that use programming or scripting and databases, as either standalone applications or web applications.

In the prior history of teaching introductory computer programming at QUT, programming has been taught as a stand-alone subject, and integration of computer applications with other systems such as databases and networks was not undertaken until students had been given a thorough grounding in those topics as well. Feedback has indicated that students do not believe that working with a database requires programming skills. In fact, the teaching of the building blocks of computer applications has been compartmentalized, with each taught in isolation from the others. The teaching of introductory computer programming has been an industry requirement of IT degree courses, as many jobs require at least some knowledge of the topic. Yet computer programming is not a skill that all students have equal capabilities of learning (Bruce et al., 2004), and this is clearly shown by the volume of publications dedicated to this topic in the literature over a broad period of time (Eckerdal & Berglund, 2005; Mayer, 1981; Winslow, 1996).
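The point that even simple database work involves programming can be illustrated with a short sketch of the kind of small system the unit describes. This is an invented example (the table, names and marks are assumptions, not course material), using Python's built-in sqlite3 module:

```python
import sqlite3

# Illustrative sketch only: the table, names and marks are invented.
# Even this small database task requires programming: connections,
# parameter binding, iteration and result handling are all code.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT, mark INTEGER)")
conn.executemany(
    "INSERT INTO students VALUES (?, ?)",
    [("Alice", 82), ("Bob", 45), ("Carol", 71)],
)

# Query the database and process the results in code.
passed = [name for (name,) in
          conn.execute("SELECT name FROM students WHERE mark >= 50 ORDER BY name")]
print(passed)  # ['Alice', 'Carol']
conn.close()
```

Integrating a script like this with a web front end would turn it into the sort of standalone or web application the unit targets.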
The teaching of this introductory material has been done in much the same way over the past thirty years. Over the period that introductory computer programming courses have been taught at QUT, a number of different programming languages and programming paradigms have been used, and different approaches to teaching and learning have been attempted in an effort to find the golden thread that would allow students to learn this complex topic. Unfortunately, computer programming is not a skill that can be learnt in one semester; some basics can be learnt, but the skill can take many years to master (Norvig, 2001).

Faculty data have typically shown a bimodal distribution of results for students undertaking introductory programming courses, with a high proportion of students receiving a high mark and a high proportion receiving a low or failing mark. This indicates that there are students who understand and excel with the introductory material, while another group struggle to understand the concepts and practices required to translate a specification or problem statement into a computer program that achieves what is being requested. The consequence of a large group of students failing the introductory programming course has been a high level of attrition amongst first-year students. This attrition does not provide good continuity in student numbers in later years of the degree program, and the current approach is not seen as sustainable.
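The bimodal pattern described above can be sketched with synthetic data. The group sizes, means and spreads below are illustrative assumptions, not actual faculty results:

```python
import random

random.seed(42)

# Synthetic marks out of 100: a mixture of two groups, one that excels and
# one that struggles. All numbers here are illustrative assumptions.
excel = [min(100, max(0, random.gauss(78, 8))) for _ in range(120)]
struggle = [min(100, max(0, random.gauss(35, 10))) for _ in range(90)]
marks = excel + struggle

# Bin into deciles and print a crude text histogram; a bimodal result
# profile shows two separate peaks rather than one central hump.
bins = [0] * 10
for m in marks:
    bins[min(9, int(m // 10))] += 1
for i, count in enumerate(bins):
    print(f"{i * 10:3d}-{i * 10 + 9:3d} | {'#' * count}")
```

The histogram shows peaks around the 30s and the 70s with a trough between them, which is the shape that distinguishes this pattern from a single normal spread of results.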

Relevância: 90.00%

Resumo:

Air pollution is ranked by the World Health Organisation as one of the top ten contributors to the global burden of disease and injury. Exposure to gaseous air pollutants, even at low levels, has been associated with cardiorespiratory diseases (Vedal, Brauer et al. 2003). Most recent epidemiological studies of air pollution have used time-series analyses to explore the relationship between daily mortality or morbidity and daily ambient air pollution concentrations on the same day or previous days (Hajat, Armstrong et al. 2007). However, most previous studies have examined the association between air pollution and health outcomes using air pollution data from a single monitoring site, or average values from a few monitoring sites, to represent the whole population of the study area. In fact, ambient air pollution levels may differ significantly among different areas of a metropolitan city. There is increasing concern that the relationships between air pollution and mortality may vary with geographical area (Chen, Mengersen et al. 2007). Additionally, some studies have indicated that socio-economic status can act as a confounder when investigating the relation between geographical location and health (Scoggins, Kjellstrom et al. 2004). This study examined the spatial variation in the relationship between long-term exposure to gaseous air pollutants (including nitrogen dioxide (NO2), ozone (O3) and sulphur dioxide (SO2)) and cardiorespiratory mortality in Brisbane, Australia, during 1996-2004.
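The same-day and previous-day lag structure that these time-series analyses examine can be sketched on synthetic data. The sketch below is illustrative only: all numbers are invented, and real studies typically fit regression models with confounder adjustment rather than the raw lagged correlations used here.

```python
import math
import random

random.seed(0)

# Synthetic daily series (invented scales): NO2 concentrations and mortality
# counts over two years, with mortality loosely driven by NO2 two days earlier.
days = 730
no2 = [random.gauss(20, 5) for _ in range(days)]
deaths = [max(0, round(5 + 0.3 * no2[max(0, d - 2)] + random.gauss(0, 2)))
          for d in range(days)]

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

# Correlate pollutant level at day t-lag with mortality at day t; the
# built-in two-day lag should stand out from the other lags.
corrs = {lag: pearson(no2[:days - lag], deaths[lag:]) for lag in range(5)}
for lag, r in corrs.items():
    print(f"lag {lag}: r = {r:.3f}")
```

Scanning a range of lags in this way is what lets a time-series study report which delay between exposure and outcome carries the strongest association.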

Relevância: 90.00%

Resumo:

Obesity and type 2 diabetes mellitus (T2D) have reached epidemic proportions in many parts of the world, with numbers projected to rise dramatically in coming decades (Wang and Lobstein, 2006; Zaninotto et al., 2006). In Australia, and consistent with much of the developed world, the problem has been described as a ‘juggernaut’ that is out of control (Zimmet and James, 2007). Unfortunately, the burgeoning problem of non-communicable diseases, including obesity and T2D, is also impacting developing nations as their populations undergo a nutrition transition (Caballero, 2005). The increased prevalence of overweight and obesity in children, adolescents and adults in both the developed and developing world is consistent with reductions in all forms of physical activity (Brownson et al., 2005). This brief paper provides an overview of the importance of physical activity and an outline of physical activity intervention studies, with particular reference to the growing years. As many intervention studies involving physical activity have been undertaken in the context of childhood obesity prevention (Lobstein et al., 2004), and an increasing proportion of the childhood population is overweight or obese, this is a major focus of discussion.