773 results for IUCN categories and criteria
Abstract:
Master's thesis. Biology (Ecology and Environmental Management). Universidade de Lisboa, Faculdade de Ciências, 2014
Abstract:
Recessions impact the retail sector, so research into consumer decision making during such times is imperative. In response, our study takes an innovative approach to examining how the perceived importance of retail store categories in a shopping mall influences the relationship between consumers' shopping attitudes and their likelihood of purchasing in those categories during a recession. The overall findings show that the importance of a product category to a consumer, which is often overlooked, has a strong explanatory influence on consumer purchase intentions for the corresponding retail store categories in a shopping mall under recession conditions. Findings also show that for consumers who have altered their shopping behaviour, the perceived importance of a retail store category fully mediates the relationship for the Majors, Leisure, Food Catered and Mini Majors categories, and partially mediates it for Apparel. Importance has no mediating effect for the Food Retail, General Retail, Mobile Phone Services, Homewares, and Retail Services categories. Our study makes a key contribution to the retail management literature: the findings suggest that redefining and articulating the importance of the value offering for specific retail store categories can help reduce the impact of changes in consumers' recessionary shopping intentions across the mall tenant mix. Such actions can then help preserve the image of the shopping mall in the minds of consumers when the economic recovery begins.
Abstract:
More than 1200 wheat and 120 barley experiments conducted in Australia to examine yield responses to applied nitrogen (N) fertiliser are contained in a national database of field crops nutrient research (BFDC National Database). The yield responses are accompanied by various pre-plant soil test data to quantify plant-available N and other indicators of soil fertility status or mineralisable N. A web application (BFDC Interrogator), developed to access the database, enables construction of calibrations between relative crop yield ((Y0/Ymax) × 100) and N soil test value. In this paper we report the critical soil test values for 90% RY (CV90) and the associated critical ranges (CR90, defined as the 70% confidence interval around that CV90) derived from analysis of various subsets of these winter cereal experiments. Experimental programs were conducted throughout Australia's main grain-production regions in different eras, starting from the 1960s in Queensland through to Victoria during the 2000s. Improved management practices adopted during the period were reflected in increasing potential yields with research era, rising from an average Ymax of 2.2 t/ha in Queensland in the 1960s and 1970s, to 3.4 t/ha in South Australia (SA) in the 1980s, to 4.3 t/ha in New South Wales (NSW) in the 1990s, and 4.2 t/ha in Victoria in the 2000s. Various sampling depths (0.1–1.2 m) and methods of quantifying available N (nitrate-N or mineral-N) from pre-planting soil samples were used and provided useful guides to the need for supplementary N. The most regionally consistent relationships were established using nitrate-N (kg/ha) in the top 0.6 m of the soil profile, with regional and seasonal variation in CV90 largely accounted for through impacts on experimental Ymax. The CV90 for nitrate-N within the top 0.6 m of the soil profile for wheat crops increased from 36 to 110 kg nitrate-N/ha as Ymax increased over the range 1 to >5 t/ha.
Apparent variation in CV90 with seasonal moisture availability was entirely consistent with impacts on experimental Ymax. Further analyses of wheat trials with available grain protein (~45% of all experiments) established that grain yield and not grain N content was the major driver of crop N demand and CV90. Subsets of data explored the impact of crop management practices such as crop rotation or fallow length on both pre-planting profile mineral-N and CV90. Analyses showed that while management practices influenced profile mineral-N at planting and the likelihood and size of yield response to applied N fertiliser, they had no significant impact on CV90. A level of risk is involved with the use of pre-plant testing to determine the need for supplementary N application in all Australian dryland systems. In southern and western regions, where crop performance is based almost entirely on in-crop rainfall, this risk is offset by the management opportunity to split N applications during crop growth in response to changing crop yield potential. In northern cropping systems, where stored soil moisture at sowing is indicative of minimum yield potential, erratic winter rainfall increases uncertainty about actual yield potential as well as reducing the opportunity for effective in-season applications.
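The relative-yield calibration described above can be sketched in a few lines. This is a minimal illustration, not the BFDC Interrogator's actual method: the Mitscherlich-type response curve and its parameters `a` (asymptote, %) and `b` (curvature per kg nitrate-N/ha) are assumptions chosen only to show how a CV90 falls out of a fitted calibration.

```python
import numpy as np

def relative_yield(y0, ymax):
    """Relative yield as a percentage: RY = (Y0 / Ymax) * 100."""
    return 100.0 * y0 / ymax

def cv90_mitscherlich(a, b):
    """Soil test value at which an assumed Mitscherlich-style calibration
    RY = a * (1 - exp(-b * x)) reaches 90% relative yield.
    Returns None if the fitted curve never reaches 90%."""
    if a <= 90:
        return None
    return -np.log(1.0 - 90.0 / a) / b

# Unfertilised yield of 3.1 t/ha against a site maximum of 4.2 t/ha
ry = relative_yield(3.1, 4.2)            # about 74% relative yield

# Purely illustrative fitted parameters, not values from the paper
cv90 = cv90_mitscherlich(a=98.0, b=0.025)  # critical nitrate-N, kg/ha
```

With these made-up parameters the critical value lands near 100 kg nitrate-N/ha, in the middle of the 36–110 kg/ha span the abstract reports across the Ymax range.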
Abstract:
Soil testing is the most widely used tool to predict the need for fertiliser phosphorus (P) application to crops. This study examined factors affecting critical soil P concentrations and confidence intervals for wheat and barley grown in Australian soils by interrogating validated data from 1777 wheat and 150 barley field treatment series now held in the BFDC National Database. To narrow confidence intervals associated with estimated critical P concentrations, filters for yield, crop stress, or low pH were applied. Once treatment series with low yield (<1 t/ha), severe crop stress, or pHCaCl2 <4.3 were screened out, critical concentrations were relatively insensitive to wheat yield (>1 t/ha). There was a clear increase in critical P concentration from early trials when full tillage was common compared with those conducted in 1995–2011, which corresponds to a period of rapid shift towards adoption of minimum tillage. For wheat, critical Colwell-P concentrations associated with 90 or 95% of maximum yield varied among Australian Soil Classification (ASC) Orders and Sub-orders: Calcarosol, Chromosol, Kandosol, Sodosol, Tenosol and Vertosol. Soil type, based on ASC Orders and Sub-orders, produced critical Colwell-P concentrations at 90% of maximum relative yield from 15 mg/kg (Grey Vertosol) to 47 mg/kg (Supracalcic Calcarosols), with other soils having values in the range 19–27 mg/kg. Distinctive differences in critical P concentrations were evident among Sub-orders of Calcarosols, Chromosols, Sodosols, Tenosols, and Vertosols, possibly due to differences in soil properties related to P sorption. However, insufficient data were available to develop a relationship between P buffering index (PBI) and critical P concentration. In general, there was no evidence that critical concentrations for barley would be different from those for wheat on the same soils. 
Significant knowledge gaps that must be filled to improve the relevance and reliability of soil P testing for winter cereals were identified: lack of data for oats; the paucity of treatment series reflecting current cropping practices, especially minimum tillage; and inadequate metadata on soil texture, pH, growing season rainfall, gravel content, and PBI. The critical concentrations determined illustrate the importance of recent experimental data and of soil type, but also provide examples of interrogation pathways into the BFDC National Database to extract locally relevant critical P concentrations for guiding P fertiliser decision-making in wheat and barley.
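The screening step the abstract describes (dropping treatment series with yield <1 t/ha, severe crop stress, or pHCaCl2 <4.3) amounts to a simple filter over the database records. The records below are hypothetical, invented only to show the shape of the filter, not actual BFDC data.

```python
# Hypothetical treatment-series records:
# (wheat yield t/ha, severe stress flag, pH_CaCl2, Colwell-P mg/kg)
series = [
    (0.8, False, 5.1, 22),   # dropped: yield below 1 t/ha
    (2.4, True,  5.6, 30),   # dropped: severe crop stress
    (3.1, False, 4.1, 18),   # dropped: pH_CaCl2 below 4.3
    (2.9, False, 5.8, 25),   # retained
]

retained = [s for s in series
            if s[0] >= 1.0       # yield at least 1 t/ha
            and not s[1]         # no severe crop stress
            and s[2] >= 4.3]     # pH_CaCl2 at least 4.3
```

Only the last record survives the three filters; critical Colwell-P concentrations would then be estimated from the retained subset.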
Abstract:
Socioeconomic factors have long been incorporated into environmental research to examine the effects of human dimensions on coastal natural resources. Boyce (1994) proposed that inequality is a cause of environmental degradation, and the Environmental Kuznets Curve is a proposed relationship whereby income or GDP per capita is associated with initial increases in pollution followed by subsequent decreases (Torras and Boyce, 1998). To further examine this relationship within the CAMA counties, the emissions of sulfur dioxide and nitrogen oxides (as measured by the EPA in tons emitted), the Gini Coefficient, and income per capita were examined for 1999. A quadratic regression was utilized, and the results did not indicate that inequality, as measured by the Gini Coefficient, was significantly related to the level of criteria air pollutants within each county. Nor did the results indicate the existence of the Environmental Kuznets Curve. Further analysis of spatial autocorrelation using ArcMap 9.2 found a high level of spatial autocorrelation among pollution emissions, indicating that relation to other counties may be more important to the level of sulfur dioxide and nitrogen oxide emissions than income per capita and inequality. Lastly, the paper concludes that further Environmental Kuznets Curve and income inequality analyses of air pollutant levels should incorporate spatial patterns as well as other explanatory variables. (PDF contains 4 pages)
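The quadratic specification behind an Environmental Kuznets Curve test can be sketched as fitting emissions against income and income squared; an inverted-U requires a negative quadratic coefficient. The county data below are hypothetical, constructed only to illustrate the regression shape the study tested, not the EPA/CAMA values.

```python
import numpy as np

# Hypothetical county-level data: income per capita (thousand USD)
# and SO2 emissions (tons); the real study used 1999 EPA data.
income = np.array([18, 22, 25, 30, 34, 38, 45, 52, 60], dtype=float)
so2 = np.array([120, 160, 185, 210, 220, 215, 190, 150, 100], dtype=float)

# Quadratic fit: emissions = b2*income**2 + b1*income + b0
b2, b1, b0 = np.polyfit(income, so2, deg=2)

# An inverted-U (Kuznets) shape requires b2 < 0; the turning point
# sits where d(emissions)/d(income) = 0, i.e. income = -b1 / (2*b2).
turning_point = -b1 / (2.0 * b2)
```

With these invented points the fit is concave (b2 < 0) and peaks in the mid-income range; the study's actual finding was that no such significant relationship held in the CAMA counties.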
Abstract:
This paper presents nine tenets for management as formulated in the literature in recent decades. These tenets, and the principles behind them, form the foundation for systemic management. All tenets are interrelated and far from mutually exclusive or discrete. When we consider them seriously and simultaneously, these tenets expose serious flaws of conventional resource management and define systemic management. Systemic management requires that we manage inclusively and avoid restricting management to any particular interaction between humans and other elements of nature. The management tenets presented here are considered with particular attention to the interrelationships among both the tenets and principles upon which they are based. The case is made that the tenets are inseparable and should be applied collectively. Combined consideration of the tenets clarifies the role of science, contributes to progress in defining management, and leads to the development of ways we can avoid mistakes of past management. Systemic management emerges as at least one form of management that will consistently account for and apply to the complexities of nature.
Abstract:
In many passerine species, males sing more than one distinct song type. Commonly, songs are assigned to different song types or song categories based on phonological and syntactical dissimilarities. However, temporal aspects, such as song length and song rate, also need to be considered to understand the possible functions of different songs. Common nightingales (Luscinia megarhynchos) have large vocal repertoires of different song types, but their songs can additionally be grouped into two distinct categories (particular groups of song types): whistle songs and non-whistle songs. Whistle songs are hypothesised to be important in attracting migrating females. We studied temporal properties of whistle songs and non-whistle songs and examined the relationship between those song parameters and song output parameters, such as song rate and song length. To investigate how song parameters vary among males, we calculated the coefficients of variation for different song traits. We found that the variation in the proportion of whistle songs was significantly higher among males than variation in other song parameters. Furthermore, the proportion of whistle songs was negatively correlated with other song output parameters. These findings suggest that the production of whistle songs might be constrained and/or that whistle songs and their succeeding pauses may act as a functional unit in communication.
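The among-male comparison above rests on the coefficient of variation, CV = standard deviation / mean. A minimal sketch, with entirely hypothetical per-male values for two song traits, shows how one trait can vary far more among males than another:

```python
import statistics

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean, as a percentage."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical per-male measurements (one value per male)
whistle_proportion = [0.10, 0.35, 0.18, 0.50, 0.05]  # proportion of whistle songs
song_rate = [7.2, 8.0, 7.6, 8.4, 7.0]                # songs per minute

cv_whistle = coefficient_of_variation(whistle_proportion)
cv_rate = coefficient_of_variation(song_rate)
# A much higher CV for whistle proportion than for song rate would
# mirror the reported pattern of greater among-male variation in that trait.
```

Because the CV is scale-free, it lets traits measured in different units (proportions vs. songs per minute) be compared on equal footing.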
Abstract:
Rather than treating conservative Protestantism as a homogenous phenomenon, recent literature has underlined the importance of disaggregating this group to illuminate important attitudinal and behavioral differences among conservative Protestants. However, the methods used to empirically operationalize conservative Protestantism have not always been able to capture variations within the groupings. Based on analysis of the 2004 Northern Ireland Life and Times Survey, we argue that religious self-identification is a more useful way of analyzing conservative Protestant subgroups than denomination or religious belief. We show that many of these identifications are overlapping, rather than stand-alone, religious group identifications. Moreover, the 'born-again' identification category has seldom been included in surveys. We find a born-again identification to be a better predictor than the more frequently asked fundamentalist and evangelical categories of the religious and social beliefs that are seen as indicative of conservative Protestantism.
Abstract:
A diagnostic system for ICD-11 is proposed which commences with broad reorganization and simplification of the current categories and the use of clinically relevant specifiers. Such changes have implications for the positioning of diagnostic groups and lead to a range of possibilities for improving terminology and the juxtaposition of individual conditions. The development of ICD-11 provides the first opportunity in almost two decades to improve the validity and reliability of the international classification system. Widespread change in broad categories and criteria cannot be justified by research that has emerged since the last revision. It would also be disruptive to clinical practice and might devalue past research work. However, the case for reorganization of the categories is stronger and has recently been made by an eminent international group of researchers (Andrews et al., 2009). A simpler, interlinked diagnostic system is proposed here which is likely to have fewer categories than its predecessor. There are major advantages of such a system for clinical practice and research and it could also produce much needed simplification for primary care (Gask et al., 2008) and the developing world (Wig, 1990; Kohn et al., 2004).
Abstract:
Most studies of conceptual knowledge in the brain focus on a narrow range of concrete conceptual categories, rely on the researchers' intuitions about which objects belong to these categories, and assume a broadly taxonomic organization of knowledge. In this fMRI study, we focus on concepts with a variety of concreteness levels; we use a state-of-the-art lexical resource (WordNet 3.1) as the source for a relatively large number of category distinctions and compare a taxonomic style of organization with a domain-based model (associating concepts with scenarios). Participants mentally simulated situations associated with concepts when cued by text stimuli. Using multivariate pattern analysis, we find evidence that all Taxonomic categories and Domains can be distinguished from fMRI data and also observe a clear concreteness effect: Tools and Locations can be reliably predicted for unseen participants, but less concrete categories (e.g., Attributes, Communications, Events, Social Roles) can only be reliably discriminated within participants. A second concreteness effect relates to the interaction of Domain and Taxonomic category membership: Domain (e.g., relation to Law vs. Music) can be better predicted for less concrete categories. We repeated the analysis within anatomical regions, observing discrimination between all or most categories in the left middle occipital and temporal gyri, and more specialized discrimination for the concrete categories Tool and Location in the left precentral and fusiform gyri, respectively. Highly concrete/abstract Taxonomic categories and Domain were segregated in frontal regions. We conclude that both Taxonomic and Domain class distinctions are relevant for interpreting neural structuring of concrete and abstract concepts.
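The decoding logic of a multivariate pattern analysis can be sketched with a deliberately simple stand-in classifier: hold out one trial, build per-category centroids from the rest, and classify the held-out voxel pattern by nearest centroid. The toy "voxel" data, the nearest-centroid rule, and the category labels below are all assumptions for illustration; the study's actual classifiers and features may differ.

```python
import numpy as np

def nearest_centroid_loo(X, y):
    """Leave-one-out accuracy of a nearest-centroid classifier,
    a minimal stand-in for decoding category from voxel patterns."""
    X, y = np.asarray(X, float), np.asarray(y)
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i                    # hold out trial i
        centroids = {c: X[mask & (y == c)].mean(axis=0)  # per-class mean pattern
                     for c in np.unique(y[mask])}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += pred == y[i]
    return correct / len(y)

# Toy patterns: two well-separated categories, 50 "voxels" each
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 50)), rng.normal(1, 1, (20, 50))])
y = np.array(["Tool"] * 20 + ["Location"] * 20)
accuracy = nearest_centroid_loo(X, y)  # well above the 0.5 chance level
```

Above-chance leave-one-out accuracy is the kind of evidence used to claim a category is "discriminable" from the neural data; cross-participant prediction, as reported for Tools and Locations, is the stricter version of the same test.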
Abstract:
This study assesses the current state of adult skeletal age-at-death estimation in biological anthropology through analysis of data published in recent research articles from three major anthropological and archaeological journals (2004–2009). The most commonly used adult ageing methods, age of 'adulthood', age ranges and the maximum age reported for 'mature' adults were compared. The results showed a wide range of variability in the age at which individuals were determined to be adult (from 14 to 25 years), uneven age ranges, a lack of standardisation in the use of descriptive age categories and the inappropriate application of some ageing methods for the sample being examined. Such discrepancies make comparisons between skeletal samples difficult, while the inappropriate use of some techniques makes the resultant age estimations unreliable. At a time when national and even global comparisons of past health are becoming prominent, standardisation in the terminology and age categories used to define adults within each sample is fundamental. It is hoped that this research will prompt discussions in the osteological community (both nationally and internationally) about what defines an 'adult', how to standardise the age ranges that we use and how individuals should be assigned to each age category. Skeletal markers have been proposed to help physically identify 'adult' individuals.
Abstract:
Previous studies have demonstrated that there is a tight link between grammatical concepts and cognitive preferences in monolingual speakers (Lucy 1992, Lucy & Gaskins 2003, Imai & Gentner 1997, Imai & Mazuka 2003). Recent research has also shown that bilinguals with languages that differ in their concepts may shift their cognitive preferences as a function of their proficiency (Athanasopoulos, 2006) or cultural immersion (Cook, Bassetti, Kasai, Sasaki, & Takahashi, 2006). The current short paper assesses the relative impact of each of these variables, and furthermore asks whether bilinguals alternate between two distinct cognitive representations of language-specific concepts depending on the language used in the experiment. Results from an object classification task showed that Japanese–English bilinguals shifted their behaviour towards the second language (L2) pattern primarily as a function of their L2 proficiency, while cultural immersion and language of instruction played a minimal role. These findings suggest that acquisition of novel grammatical categories leads to cognitive restructuring in the bilingual mind and have implications for the relationship between language and cognitive processing.