Abstract:
Self-reported health status measures are generally used to analyse Social Security Disability Insurance (SSDI) application and award decisions, as well as the relationship between the programme's generosity and labour force participation. Due to endogeneity and measurement error, the use of self-reported health and disability indicators as explanatory variables in economic models is problematic. We employ county-level aggregate data, instrumental variables and spatial econometric techniques to analyse the determinants of variation in SSDI rates, explicitly accounting for the endogeneity and measurement error of the self-reported disability measure. Two surprising results are found. First, measurement error is the dominant source of bias, and the main source of measurement error is sampling error. Second, the results suggest that there may be synergies in applying for SSDI when the disabled population is larger. © 2011 Taylor & Francis.
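The abstract does not spell out the estimating equations, but a minimal two-stage least squares (2SLS) sketch illustrates how instrumenting a mismeasured, endogenous self-reported disability rate can recover the structural coefficient. All variable names and the simulated data below are hypothetical, not the authors' specification.

```python
# Minimal 2SLS sketch for a county-level model in which self-reported
# disability is an endogenous, error-ridden regressor. Names (ssdi_rate,
# disability, z, controls) are illustrative, not the authors' specification.
import numpy as np

def two_stage_least_squares(y, X_endog, Z, X_exog):
    """y: (n,) outcome; X_endog: (n,k) endogenous regressors;
    Z: (n,m) instruments (m >= k); X_exog: (n,p) exogenous controls."""
    n = y.shape[0]
    const = np.ones((n, 1))
    # First stage: project the endogenous regressors on instruments + controls.
    W = np.hstack([const, Z, X_exog])
    X_hat = W @ np.linalg.lstsq(W, X_endog, rcond=None)[0]
    # Second stage: regress the outcome on fitted values + controls.
    X2 = np.hstack([const, X_hat, X_exog])
    beta = np.linalg.lstsq(X2, y, rcond=None)[0]
    return beta  # [intercept, endogenous coefficient(s), control coefficient(s)]

rng = np.random.default_rng(0)
n = 3000                                     # counties
z = rng.normal(size=(n, 1))                  # instrument
controls = rng.normal(size=(n, 1))           # e.g. county demographics
u = rng.normal(size=n)                       # unobserved confounder
true_disability = 0.8 * z[:, 0] + u
disability = true_disability + rng.normal(scale=0.5, size=n)  # sampling/measurement error
ssdi_rate = 1.5 * true_disability + 0.3 * controls[:, 0] + u + rng.normal(size=n)
print(two_stage_least_squares(ssdi_rate, disability[:, None], z, controls))
```

With a valid instrument (correlated with true disability, uncorrelated with the confounder and the sampling error), the disability coefficient converges to the true effect of 1.5, whereas naive OLS on the mismeasured regressor would be biased.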
Abstract:
A series of large-scale photographic collages and video works installed at Cockatoo Island for the 2010 Sydney Biennale, The Beauty of Distance: Songs of Survival in a Precarious Age (cat.). The work addresses her ongoing interest in feminist strategies for negotiating individual and collective identities, equality, and social activism.
Abstract:
Though difficult, the study of gene-environment interactions in multifactorial diseases is crucial for interpreting the relevance of non-heritable factors, and it prevents genetic associations with small but measurable effects from being overlooked. We propose a "candidate interactome" (i.e. a group of genes whose products are known to physically interact with environmental factors that may be relevant for disease pathogenesis) analysis of genome-wide association data in multiple sclerosis. We looked for statistical enrichment of associations among interactomes that, at the current state of knowledge, may be representative of gene-environment interactions of potential, uncertain or unlikely relevance for multiple sclerosis pathogenesis: Epstein-Barr virus, human immunodeficiency virus, hepatitis B virus, hepatitis C virus, cytomegalovirus, HHV8-Kaposi sarcoma, H1N1 influenza, JC virus, the human innate immunity interactome for type I interferon, autoimmune regulator, vitamin D receptor, aryl hydrocarbon receptor, and a panel of proteins targeted by 70 innate immune-modulating viral open reading frames from 30 viral species. Interactomes were either obtained from the literature or manually curated. The P values of all single nucleotide polymorphisms mapping to a given interactome were obtained from the latest genome-wide association study of the International Multiple Sclerosis Genetics Consortium and the Wellcome Trust Case Control Consortium 2. The interaction between genotype and Epstein-Barr virus emerges as relevant for multiple sclerosis etiology. However, in line with recent data on the coexistence of common and unique strategies used by viruses to perturb the human molecular system, other viruses also have a similar potential, though probably one less relevant in epidemiological terms. © 2013 Mechelli et al.
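The exact enrichment statistic is not given in the abstract; the sketch below shows one standard way to test whether the SNPs mapping to an interactome (e.g. the Epstein-Barr virus set) carry stronger associations than chance, via permutation against equally sized random SNP sets. Names and data are illustrative, and a real analysis would permute in a way that preserves linkage disequilibrium and gene size.

```python
# Permutation sketch of interactome enrichment on GWAS SNP p-values: does the
# interactome's SNP set have smaller p-values than random SNP sets of the
# same size? Illustrative stand-in, not the study's exact statistic.
import numpy as np

def interactome_enrichment(p_all, interactome_idx, n_perm=10_000, seed=1):
    rng = np.random.default_rng(seed)
    scores = -np.log10(p_all)                 # larger = stronger association
    observed = scores[interactome_idx].mean()
    k = len(interactome_idx)
    null = np.array([
        scores[rng.choice(len(p_all), size=k, replace=False)].mean()
        for _ in range(n_perm)
    ])
    # Empirical enrichment p-value with the standard +1 correction.
    return (1 + np.sum(null >= observed)) / (1 + n_perm)

rng = np.random.default_rng(0)
p_all = rng.uniform(size=50_000)                        # genome-wide SNP p-values
ebv_snps = rng.choice(50_000, size=400, replace=False)  # SNPs mapping to a hypothetical EBV interactome
p_all[ebv_snps[:100]] *= 0.05                           # simulate an enrichment signal
print(interactome_enrichment(p_all, ebv_snps))
```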
Abstract:
This paper provides an empirical estimation of energy efficiency and other proximate factors that explain energy intensity in Australia for the period 1978-2009. The analysis is performed by decomposing changes in energy intensity into energy efficiency, fuel mix and structural change components, using sectoral and sub-sectoral data. The results show that the driving forces behind the decrease in energy intensity in Australia are the efficiency effect and the sectoral composition effect, with the former found to be more prominent than the latter. Moreover, the favourable impact of the composition effect has slowed consistently in recent years. A perfect positive association characterizes the relationship between energy intensity and carbon intensity in Australia. The decomposition results indicate that Australia needs to improve energy efficiency further to reduce energy intensity and carbon emissions. © 2012 Elsevier Ltd.
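The abstract does not name the decomposition method, but the log-mean Divisia index (LMDI) is the standard tool for splitting changes in aggregate energy intensity into structural and efficiency effects. The sketch below is a hypothetical two-factor multiplicative version with made-up sectoral data; the paper additionally separates a fuel-mix effect, omitted here for brevity.

```python
# Two-factor multiplicative LMDI sketch: aggregate energy intensity
# I = E/Y = sum_i S_i * I_i, with S_i = Y_i/Y (structure) and
# I_i = E_i/Y_i (sectoral efficiency).
import numpy as np

def logmean(a, b):
    return a if np.isclose(a, b) else (a - b) / (np.log(a) - np.log(b))

def lmdi_two_factor(E0, Y0, ET, YT):
    S0, ST = Y0 / Y0.sum(), YT / YT.sum()       # sectoral output shares
    i0, iT = E0 / Y0, ET / YT                   # sectoral intensities
    I0, IT = E0.sum() / Y0.sum(), ET.sum() / YT.sum()
    # LMDI-I weights: log-mean of each sector's contribution to aggregate
    # intensity, normalised by the log-mean of the aggregate intensity.
    w = np.array([logmean(sT * jT, s0 * j0)
                  for s0, j0, sT, jT in zip(S0, i0, ST, iT)]) / logmean(IT, I0)
    D_str = np.exp(np.sum(w * np.log(ST / S0)))  # structural (composition) effect
    D_eff = np.exp(np.sum(w * np.log(iT / i0)))  # efficiency effect
    return D_str, D_eff, IT / I0

E0 = np.array([120.0, 80.0, 40.0]); Y0 = np.array([100.0, 100.0, 100.0])
ET = np.array([110.0, 85.0, 35.0]); YT = np.array([110.0, 130.0, 120.0])
D_str, D_eff, D_tot = lmdi_two_factor(E0, Y0, ET, YT)
print(D_str, D_eff, D_str * D_eff, D_tot)       # the product reproduces the total
```

Because the LMDI weights use logarithmic means, the structure and efficiency factors multiply exactly to the total intensity ratio, leaving no unexplained residual.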
Abstract:
Online technological advances are pioneering the wider distribution of geospatial information for general mapping purposes. The popularity of web-based applications such as Google Maps has made mapping applications commonplace among Internet users and has facilitated the rapid growth of geo-mashups. These user-generated creations enable Internet users to aggregate and publish information over specific geographical points. This article identifies privacy-invasive geo-mashups involving the unauthorized use of personal information, the inadvertent disclosure of personal information, and invasions of privacy. Building on Zittrain’s Privacy 2.0, the author contends that first-generation information privacy laws, founded on the notions of fair information practices or information privacy principles, may have a limited impact on resolving the privacy problems arising from privacy-invasive geo-mashups, principally because geo-mashups exhibit different patterns of personal information provision, collection, storage and use that reflect fundamental changes in the Web 2.0 environment. The author concludes by recommending embedded technical and social solutions to minimize the risks arising from privacy-invasive geo-mashups, which could lead to the establishment of guidelines for the general protection of privacy in geo-mashups.
Abstract:
In the globalizing world, knowledge and information (and the social and technological settings for their production and communication) are now seen as keys to economic prosperity. The economy of a knowledge city creates value-added products using research, technology, and brainpower. The social benefit of knowledge-based urban development (KBUD), however, extends beyond aggregate economic growth.
Abstract:
Principal Topic
High-technology consumer products such as notebooks, digital cameras and DVD players are not introduced into a vacuum. Consumer experience with related earlier-generation technologies, such as PCs, film cameras and VCRs, and the installed base of these products strongly impact the market diffusion of new-generation products. Yet technology substitution has received only sparse attention in the diffusion-of-innovation literature. Research on consumer durables has been dominated by studies of (first-purchase) adoption (cf. Bass 1969), which do not explicitly consider the presence of an existing product/technology. More recently, considerable attention has also been given to replacement purchases (cf. Kamakura and Balasubramanian 1987). Only a handful of papers explicitly deal with the diffusion of technology/product substitutes (e.g. Norton and Bass, 1987; Bass and Bass, 2004). They propose diffusion-type aggregate-level sales models that are used to forecast overall sales for successive generations. Lacking household data, these aggregate models are unable to give insights into the decisions of individual households - whether to adopt generation II, and if so, when and why. This paper makes two contributions. It is the first large-scale empirical study to collect household data for successive generations of technologies in an effort to understand the drivers of adoption. Second, in contrast to traditional analysis that evaluates technology substitution as an "adoption of innovation" type process, we propose that from a consumer's perspective, technology substitution combines elements of both adoption (adopting the new-generation technology) and replacement (replacing the generation I product with generation II). Based on this proposition, we develop and test a number of hypotheses.
Methodology/Key Propositions
In some cases, successive generations are clear "substitutes" for the earlier generation, in that they have almost identical functionality - for example, successive generations of PCs (Pentium I to II to III), or flat-screen TVs substituting for colour TVs. More commonly, however, the new technology (generation II) is a "partial substitute" for the existing technology (generation I). For example, digital cameras substitute for film-based cameras in the sense that they perform the same core function of taking photographs; they add easier copying and sharing of images, but image quality is inferior. In cases of partial substitution, some consumers will purchase generation II products as substitutes for their generation I product, while other consumers will purchase generation II products as additional products to be used alongside their generation I product. We propose that substitute generation II purchases combine elements of both adoption and replacement, whereas additional generation II purchases are a purely adoption-driven process. Extensive research on innovation adoption has consistently shown that consumer innovativeness is the most important consumer characteristic driving adoption timing (Goldsmith et al. 1995; Gielens and Steenkamp 2007). Hence, we expect consumer innovativeness to influence both additional and substitute generation II purchases.
Hypothesis 1a: More innovative households will make additional generation II purchases earlier.
Hypothesis 1b: More innovative households will make substitute generation II purchases earlier.
Hypothesis 1c: Consumer innovativeness will have a stronger impact on additional generation II purchases than on substitute generation II purchases.
As outlined above, substitute generation II purchases act in part like a replacement purchase for the generation I product. Prior research (Bayus 1991; Grewal et al. 2004) identified product age as the most dominant factor influencing replacements. Hence, we hypothesise:
Hypothesis 2: Households with older generation I products will make substitute generation II purchases earlier.
Our survey of 8,077 households investigates their adoption of two new-generation products: notebooks as a technology change from PCs, and DVD players as a technology shift from VCRs. We employ Cox hazard modelling to study the factors influencing the timing of a household's adoption of generation II products. We determine whether this is an additional or substitute purchase by asking whether the generation I product is still used. Separate hazard models are estimated for additional and substitute purchases. Consumer innovativeness is measured as domain innovativeness, adapted from the scales of Goldsmith and Hofacker (1991) and Flynn et al. (1996). The age of the generation I product is calculated from the household's most recent purchase of that product. Control variables include the age, size and income of the household, and the age and education of the primary decision-maker.
Results and Implications
Our preliminary results confirm both our hypotheses. Consumer innovativeness has a strong influence on both additional purchases (exp = 1.11) and substitute purchases (exp = 1.09), where exp is interpreted as the increased probability of purchase for an increase of 1.0 on a 7-point innovativeness scale. Also consistent with our hypotheses, the age of the generation I product has a dramatic influence on substitute purchases of VCRs/DVD players (exp = 2.92) and a strong influence for PCs/notebooks (exp = 1.30), where exp is interpreted as the increased probability of purchase for an increase of 10 years in the age of the generation I product. Yet, as hypothesised, there was no influence on additional purchases. The results lead to two key implications. First, there is a clear distinction between additional and substitute purchases of generation II products, each with different drivers; treating these as a single process will mask the true drivers of adoption. Second, for substitute purchases, product age is a key driver; hence, marketers of high-technology products can utilise data on generation I product age (e.g. from warranty or loyalty programs) to target customers who are more likely to make a purchase.
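As a concrete illustration of the modelling step, the sketch below fits a Cox proportional-hazards model with the lifelines library to simulated time-to-purchase data; the exp(coef) column in the summary corresponds to the hazard-ratio-style "exp" values quoted above. Column names, coefficients and the censoring scheme are invented for illustration.

```python
# Sketch of the Cox hazard analysis described above, using lifelines.
# Synthetic data only; the study fits separate models for additional
# vs. substitute purchases.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "innovativeness": rng.uniform(1, 7, n),    # 7-point domain innovativeness
    "gen1_age_years": rng.uniform(0, 10, n),   # age of the generation I product
    "household_income": rng.normal(0, 1, n),   # standardised control
})
# Simulate months until a substitute generation II purchase: the hazard rises
# with innovativeness and with the age of the generation I product.
risk = 0.10 * df["innovativeness"] + 0.11 * df["gen1_age_years"]
df["months"] = rng.exponential(1.0 / np.exp(risk - 2.5))
df["purchased"] = (df["months"] < 60).astype(int)   # right-censor at 5 years
df.loc[df["purchased"] == 0, "months"] = 60.0

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="purchased")
cph.print_summary()   # exp(coef) gives hazard ratios, analogous to the
                      # "exp" values reported in the abstract
```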
Abstract:
Engineering assets such as roads, rail, bridges and other forms of public works are vital to the effective functioning of societies (Herder, 2006). Proficient provision of this physical infrastructure is therefore one of the key activities of government (Lædre, 2006). In order to ensure engineering assets are procured and maintained on behalf of citizens, government needs to devise the appropriate policy and institutional architecture for this purpose. The changing institutional arrangements around the procurement of engineering assets are the focus of this paper. The paper describes and analyses the transition to new, more collaborative forms of procurement arrangements which are becoming increasingly prevalent in Australia and other OECD countries. Such fundamental shifts from competitive to more collaborative approaches to project governance can be viewed as a major transition in procurement system arrangements. In many ways such changes mirror the shift from New Public Management, with its emphasis on the use of market mechanisms to achieve efficiencies (Hood, 1991), towards more collaborative approaches to service delivery, such as those under network governance arrangements (Keast, 2007). However, just as traditional forms of procurement in a market context resulted in unexpected outcomes for industry, such as a fragmented industry afflicted by chronic litigation (Dubois, 2002), the change to more collaborative forms of procurement is unlikely to be a panacea for the problems of procurement, and may well have unintended consequences of its own. This paper argues that perspectives from complex adaptive systems (CAS) theory can contribute to the theory and practice of managing system transitions. In particular, the concept of emergence provides a key theoretical construct for understanding the aggregate effect that individual project governance arrangements can have upon the structure of specific industries, which in turn impacts individual projects. Emergence is understood here as the macro structure that emerges out of the interaction of agents in the system (Holland, 1998; Tang, 2006).
Abstract:
SEM observations of aqueous suspensions of kaolinite from Birdwood (South Australia) and Georgia (USA) show noticeable differences in a number of physical behaviours, which can be explained by differences in microstructure. Birdwood kaolinite dispersions gel at very low solid loadings, whereas Georgia KGa-1 kaolinite dispersions remain fluid at higher solids loadings. To explain this behaviour, specific particle interactions in Birdwood kaolinite, different from those in Georgia kaolinite, are proposed. These interactions may be brought about by the presence of nano-bubbles on clay crystal edges, which may force clay particles to aggregate through bubble coalescence. This explains the predominance of stair-stepped edge-edge (EE) contacts in suspensions of Birdwood kaolinite. Such EE-linked particles build long strings that form a spacious cellular structure. Hydrocarbon contamination of the colloidal kaolinite particles and a low aspect ratio are discussed as possible explanations for this unusual behaviour of Birdwood kaolinite. In Georgia KGa-1 kaolinite dispersions, instead of the EE contacts displayed by Birdwood kaolinite, most particles form edge-face (EF) contacts, building a card-house structure. Such an arrangement is much less voluminous than the Birdwood kaolinite cellular honeycomb structure observed previously in smectite aqueous suspensions. These structural characteristics enable KGa-1 kaolinite pulps to reach higher solid volume fractions before a significantly networked gel consistency is attained.
Abstract:
This paper reports on the early stages of a design experiment in educational assessment that challenges the dichotomous legacy evident in many assessment activities. Combining social networking technologies with the sociology of education, the paper proposes that assessment activities are best understood as a negotiable field of exchange. In this design experiment, students, peers and experts engage in explicit, "front-end" assessment (Wyatt-Smith, 2008) to translate holistic judgments into institutional, and potentially economic, capital without adhering to long lists of pre-set criteria. This approach invites participants to use social networking technologies to judge creative works using scatter graphs, keywords and tag clouds. In doing so, assessors will refine their evaluative expertise and negotiate the characteristics of creative works from which criteria will emerge (Sadler, 2008). The real-time advantages of web-based technologies will aggregate, externalise and democratise this transparent method of assessment for most, if not all, creative works that can be represented in a digital format.
Abstract:
Exposure to particles emitted by cooking activities may be responsible for a variety of respiratory health effects. However, the relationship between these exposures and their subsequent effects on health cannot be evaluated without understanding the properties of the emitted aerosol and the main parameters that influence particle emissions during cooking. Whilst traffic-related emissions, stack emissions and ultrafine particle concentrations (UFP, diameter < 100 nm) in urban ambient air have been widely investigated for many years, indoor exposure to UFPs is a relatively new field, and in order to evaluate indoor UFP emissions accurately, it is vital to improve scientific understanding of the main parameters that influence particle number, surface area and mass emissions. The main purpose of this study was to characterise the particle emissions produced during grilling and frying as a function of the food, source, cooking temperature and type of oil. Emission factors, along with particle number concentrations and size distributions, were determined in the size range 0.006-20 μm using a Scanning Mobility Particle Sizer (SMPS) and an Aerodynamic Particle Sizer (APS). An infrared camera was used to measure the temperature field. Overall, emission factors were observed to increase with cooking temperature. Cooking fatty foods also produced higher particle emission factors than vegetables, mainly in terms of mass concentration, and particle emission factors also varied significantly according to the type of oil used.
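The abstract does not detail how emission factors were computed; a common approach for indoor sources is a well-mixed mass-balance model, sketched below with invented numbers. The room volume, loss rate and concentration trace are placeholders, not values from the study.

```python
# Particle number emission-rate estimate from a concentration time series,
# using a standard well-mixed mass balance: dC/dt = S/V - (a + k) * C,
# where S is source strength, V room volume, a air-exchange rate and
# k deposition rate.
import numpy as np

def emission_rate(t_min, conc, volume_m3, loss_rate_per_min):
    """Rearrange the balance to S = V * (dC/dt + (a+k) * C), then average
    over the cooking period. conc in particles/cm^3 (background-subtracted)."""
    conc_m3 = conc * 1e6                           # particles/cm^3 -> particles/m^3
    dCdt = np.gradient(conc_m3, t_min)
    S = volume_m3 * (dCdt + loss_rate_per_min * conc_m3)
    return S.mean()                                # particles per minute

t = np.arange(0, 15.0, 0.5)                        # 15 min of frying, 30 s steps
background = 5e3
conc = background + 2e5 * (1 - np.exp(-t / 4.0))   # rising UFP number concentration
S = emission_rate(t, conc - background, volume_m3=32.0, loss_rate_per_min=0.03)
print(f"mean source strength ~ {S:.3e} particles/min")
```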
Abstract:
Objective: In the majority of exercise intervention studies, the aggregate reported weight loss is often small, and the efficacy of exercise as a weight loss tool remains in question. The aim of the present study was to investigate the variability in appetite and body weight when participants engaged in a supervised and monitored exercise programme.
Design: Fifty-eight obese men and women (BMI = 31.8 ± 4.5 kg/m²) were prescribed exercise to expend approximately 2092 kJ (500 kcal) per session, five times a week, at an intensity of 70% maximum heart rate for 12 weeks under supervised conditions in the research unit. Body weight and composition, total daily energy intake and various health markers were measured at weeks 0, 4, 8 and 12.
Results: The mean reduction in body weight (3.2 ± 1.98 kg) was significant (P < 0.001); however, there was large individual variability (−14.7 to +2.7 kg). This variability could be largely attributed to differences in energy intake over the 12-week intervention: participants who failed to lose meaningful weight increased their food intake and reduced their intake of fruits and vegetables.
Conclusion: These data demonstrate that even when exercise energy expenditure is high, a healthy diet is still required for weight loss to occur in many people.
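A back-of-envelope calculation clarifies why intake compensation matters: the prescribed exercise dose alone predicts roughly 3.9 kg of fat loss over 12 weeks, close to the observed mean but far from the extremes. The energy density of adipose tissue used below (~32,000 kJ/kg) is a common approximation, not a figure from the study.

```python
# Expected weight loss implied by the prescribed exercise dose, assuming no
# compensatory change in energy intake. The ~32,000 kJ/kg figure for adipose
# tissue is a common approximation, not from the study.
kj_per_session = 2092          # ~500 kcal, as prescribed
sessions = 5 * 12              # five per week for 12 weeks
kj_per_kg_fat = 32_000         # approx. energy content of adipose tissue

expected_loss_kg = kj_per_session * sessions / kj_per_kg_fat
print(f"expected loss with no compensation: {expected_loss_kg:.1f} kg")  # ~3.9 kg
# Observed mean loss was 3.2 kg with a range of -14.7 to +2.7 kg, so changes
# in energy intake dominate individual outcomes.
```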
Abstract:
To understand the diffusion of high-technology products such as PCs, digital cameras and DVD players, it is necessary to consider the dynamics of successive generations of technology. From the consumer's perspective, these technology changes may manifest themselves either as a new-generation product substituting for the old (for instance, digital cameras) or as multiple generations of a single product (for example, PCs). To date, research has been confined to aggregate-level sales models. These models consider the demand relationship between one generation of a product and a successor generation. However, they do not give insights into the disaggregate-level decisions of individual households - whether to adopt the newer generation, and if so, when. This paper makes two contributions. It is the first large-scale empirical study to collect household data for successive generations of technologies in an effort to understand the drivers of adoption. Second, in contrast to traditional analysis in diffusion research that conceptualizes technology substitution as an "adoption of innovation" type process, we propose that from a consumer's perspective, technology substitution combines elements of both adoption (adopting the new-generation technology) and replacement (replacing the generation I product with generation II).
Key Propositions
In some cases, successive generations are clear "substitutes" for the earlier generation (e.g. PCs, Pentium I to II to III). More commonly, the new generation II technology is a "partial substitute" for the existing generation I technology (e.g. DVD players and VCRs). Some consumers will purchase generation II products as substitutes for their generation I product, while other consumers will purchase generation II products as additional products to be used alongside their generation I product. We propose that substitute generation II purchases combine elements of both adoption and replacement, whereas additional generation II purchases are a purely adoption-driven process. Moreover, drawing on adoption theory, consumer innovativeness is the most important consumer characteristic for the adoption timing of new products. Hence, we hypothesize that consumer innovativeness influences the timing of both additional and substitute generation II purchases, but has a stronger impact on additional generation II purchases. We further propose that substitute generation II purchases act partially as a replacement purchase for the generation I product. Thus, we hypothesize that households with older generation I products will make substitute generation II purchases earlier.
Methods
We employ Cox hazard modeling to study the factors influencing the timing of a household's adoption of generation II products. Separate hazard models are estimated for additional and substitute purchases. The age of the generation I product is calculated from the household's most recent purchase of that product. Control variables include the size and income of the household, and the age and education of the decision-maker.
Results and Implications
Our preliminary results confirm both hypotheses. Consumer innovativeness has a strong influence on both additional and substitute purchases. Also consistent with our hypotheses, the age of the generation I product has a dramatic influence on substitute purchases of VCRs/DVD players and a strong influence for PCs/notebooks. Yet, as hypothesized, there was no influence on additional purchases. This implies a clear distinction between additional and substitute purchases of generation II products, each with different drivers. For substitute purchases, product age is a key driver. Therefore, marketers of high-technology products can utilize data on generation I product age (e.g. from warranty or loyalty programs) to target customers who are more likely to make a purchase.
Abstract:
Objective: We aimed to predict sub-national spatial variation in the numbers of people infected with Schistosoma haematobium, and the associated uncertainties, in Burkina Faso, Mali and Niger, prior to the implementation of national control programmes.
Methods: We used national field survey datasets covering a contiguous area of 2,750 × 850 km, from 26,790 school-aged children (5–14 years) in 418 schools. Bayesian geostatistical models were used to predict the prevalence of high- and low-intensity infections and associated 95% credible intervals (CrIs). Numbers infected were determined by multiplying the predicted prevalence by the number of school-aged children in each 1 km² pixel covering the study area.
Findings: The numbers of school-aged children with low-intensity infections were 433,268 in Burkina Faso, 872,328 in Mali and 580,286 in Niger. The numbers with high-intensity infections were 416,009 in Burkina Faso, 511,845 in Mali and 254,150 in Niger. The 95% CrIs (indicative of uncertainty) were wide; for example, the mean number of boys aged 10–14 years infected in Mali was 140,200 (95% CrI: 6,200-512,100).
Conclusion: National aggregate estimates of numbers infected mask important local variation; for example, most S. haematobium infections in Niger occur in the Niger River valley. The prevalence of high-intensity infections was strongly clustered in foci in western and central Mali, north-eastern and north-western Burkina Faso, and the Niger River valley in Niger. Populations in these foci are likely to carry the bulk of the urinary schistosomiasis burden and should receive priority for schistosomiasis control. Uncertainties in predicted prevalence and numbers infected should be acknowledged and taken into consideration by control programme planners.
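The final aggregation step described in the methods (prevalence × population per pixel, summarised with 95% CrIs) can be sketched as follows. Shapes and values are placeholders for real posterior output; because real posteriors are spatially correlated across pixels, genuine credible intervals would be wider than this toy example suggests.

```python
# Turn per-pixel posterior samples of predicted prevalence into a national
# count of infected school-aged children with a 95% credible interval.
# Illustrative only; the Bayesian geostatistical model is fitted upstream.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_draws = 5_000, 500
children = rng.integers(0, 400, size=n_pixels)   # school-aged children per 1 km^2 pixel
# Placeholder posterior draws of infection prevalence per pixel.
prevalence = rng.beta(2, 18, size=(n_draws, n_pixels))

infected_draws = prevalence @ children           # one national total per posterior draw
lo, hi = np.percentile(infected_draws, [2.5, 97.5])
print(f"infected: {infected_draws.mean():,.0f} (95% CrI {lo:,.0f}-{hi:,.0f})")
```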