985 results for lexicon from Biology
Abstract:
Knowledge of the particle emission characteristics associated with forest fires and, more generally, biomass burning is becoming increasingly important due to the impact of these emissions on human health. Of particular importance is developing a better understanding of the size distribution of particles generated from forest combustion under different environmental conditions, as well as the provision of emission factors for different particle size ranges. This study aimed to quantify particle emission factors from four types of wood found in South East Queensland forests: Spotted Gum (Corymbia citriodora), Red Gum (Eucalyptus tereticornis), Blood Gum (Eucalyptus intermedia) and Ironbark (Eucalyptus decorticans), under controlled laboratory conditions. The experimental setup included a modified commercial stove connected to a dilution system designed for the conditions of the study. Measurements of particle number size distribution and concentration resulting from the burning of woods with a relatively homogeneous moisture content (in the range of 15 to 26%) and at different burning rates were performed using a TSI Scanning Mobility Particle Sizer (SMPS) in the size range from 10 to 600 nm and a TSI DustTrak for PM2.5. The results, in terms of the relationship between particle number size distribution and burning conditions for the different species, show that particle number emission factors and PM2.5 mass emission factors depend on the type of wood and on the burning rate (fast or slow burning). The average particle number emission factors for fast burning conditions are in the range of 3.3 × 10^15 to 5.7 × 10^15 particles/kg, and for PM2.5 in the range of 139 to 217 mg/kg.
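The emission factors reported above (particles/kg and mg/kg) follow from normalising the total emitted particles by the mass of wood burned. A minimal schematic calculation, assuming an idealized well-mixed exhaust; every number below is hypothetical, not data from the study:

```python
def number_emission_factor(concentration_per_cm3, dilution_ratio,
                           exhaust_flow_m3_per_min, duration_min,
                           mass_burned_kg):
    """Particles emitted per kg of wood burned.

    Scales the diluted instrument reading back to the raw exhaust,
    integrates over the sampling period, and normalises by fuel mass.
    """
    cm3_per_m3 = 1e6
    total_particles = (concentration_per_cm3 * dilution_ratio * cm3_per_m3
                       * exhaust_flow_m3_per_min * duration_min)
    return total_particles / mass_burned_kg

# Hypothetical fast-burn measurement:
ef = number_emission_factor(
    concentration_per_cm3=2.0e6,  # counter reading after dilution
    dilution_ratio=20,            # dilution system factor
    exhaust_flow_m3_per_min=0.5,
    duration_min=30,
    mass_burned_kg=0.35,
)
print(f"{ef:.2e} particles/kg")   # on the order of 10^15, as reported
```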
Abstract:
Quantitative behaviour analysis requires the classification of behaviour to produce the basic data. In practice, much of this work will be performed by multiple observers, and maximising inter-observer consistency is of particular importance. Another discipline where consistency in classification is vital is biological taxonomy. A classification tool of great utility, the binary key, is designed to simplify the classification decision process and ensure consistent identification of proper categories. We show how this same decision-making tool - the binary key - can be used to promote consistency in the classification of behaviour. The construction of a binary key also ensures that the categories in which behaviour is classified are complete and non-overlapping. We discuss the general principles of design of binary keys, and illustrate their construction and use with a practical example from education research.
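The couplet structure of a binary key maps naturally onto a small decision-tree data structure. A minimal sketch, with invented behaviour categories rather than the paper's education-research example:

```python
# Each internal node is (question, outcome_if_yes, outcome_if_no);
# leaves are final behaviour categories. Because every observation
# follows exactly one path through the key, the resulting categories
# are complete and non-overlapping.
KEY = (
    "Is the subject interacting with another person?",
    ("Is the interaction verbal?", "discussing", "gesturing"),
    ("Is the subject handling task materials?", "manipulating", "off-task"),
)

def classify(answers, node=KEY):
    """Walk the key, answering each couplet in turn."""
    if isinstance(node, str):          # reached a leaf category
        return node
    question, yes_branch, no_branch = node
    return classify(answers, yes_branch if answers[question] else no_branch)

obs = {
    "Is the subject interacting with another person?": True,
    "Is the interaction verbal?": False,
}
print(classify(obs))  # -> gesturing
```

Because each observer answers the same explicit yes/no questions, two observers can disagree only at a specific, identifiable couplet, which is what drives inter-observer consistency.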
Abstract:
Introduction Many bilinguals will have had the experience of unintentionally reading something in a language other than the intended one (e.g. MUG to mean mosquito in Dutch rather than a receptacle for a hot drink, as one of the possible intended English meanings), of finding themselves blocked on a word for which many alternatives suggest themselves (but, somewhat annoyingly, not in the right language), of their accent changing when stressed or tired and, occasionally, of starting to speak in a language that is not understood by those around them. These instances where lexical access appears compromised and control over language behavior is reduced hint at the intricate structure of the bilingual lexical architecture and the complexity of the processes by which knowledge is accessed and retrieved. While bilinguals might tend to blame word finding and other language problems on their bilinguality, these difficulties per se are not unique to the bilingual population. However, what is unique, and yet far more common than is appreciated by monolinguals, is the cognitive architecture that subserves bilingual language processing. With bilingualism (and multilingualism) the rule rather than the exception (Grosjean, 1982), this architecture may well be the default structure of the language processing system. As such, it is critical that we understand more fully not only how the processing of more than one language is subserved by the brain, but also how this understanding furthers our knowledge of the cognitive architecture that encapsulates the bilingual mental lexicon. The neurolinguistic approach to bilingualism focuses on determining the manner in which the two (or more) languages are stored in the brain and how they are differentially (or similarly) processed. 
The underlying assumption is that the acquisition of more than one language requires at the very least a change to or expansion of the existing lexicon, if not the formation of language-specific components, and this is likely to manifest in some way at the physiological level. There are many sources of information, ranging from data on bilingual aphasic patients (Paradis, 1977, 1985, 1997) to lateralization (Vaid, 1983; see Hull & Vaid, 2006, for a review), recordings of event-related potentials (ERPs) (e.g. Ardal et al., 1990; Phillips et al., 2006), and positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) studies of neurologically intact bilinguals (see Indefrey, 2006; Vaid & Hull, 2002, for reviews). Following the consideration of methodological issues and interpretative limitations that characterize these approaches, the chapter focuses on how the application of these approaches has furthered our understanding of (1) selectivity of bilingual lexical access, (2) distinctions between word types in the bilingual lexicon and (3) control processes that enable language selection.
Abstract:
Following an early claim by Nelson & McEvoy suggesting that word associations can display 'spooky action at a distance' behaviour, a serious investigation of the potentially quantum nature of such associations is currently underway. In this paper, quantum theory is proposed as a framework suitable for modelling the mental lexicon, specifically the results obtained from both intralist and extralist word association experiments. Some initial models exploring this hypothesis are discussed, and they appear capable of substantial agreement with pre-existing experimental data. The paper concludes with a discussion of some experiments that will be performed in order to test these models.
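As a toy illustration of why a quantum framework is attractive here (this is a generic quantum-mechanics example, not the authors' model), a two-word system in an entangled state exhibits recall correlations that no pair of independently modelled words can reproduce:

```python
import math

# Basis states for each word: |0> = not recalled, |1> = recalled.
# Joint amplitudes are ordered |00>, |01>, |10>, |11>.

# Product state of two independent words, each recalled with amplitude a:
a = 1 / math.sqrt(2)
product = [a * a, a * a, a * a, a * a]

# Entangled state: the two words are recalled together or not at all.
entangled = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]

def p(state, idx):
    """Born rule for real amplitudes: probability of outcome idx."""
    return state[idx] ** 2

# Product state: joint recall factorises, P(11) = 0.5 * 0.5 = 0.25.
# Entangled state: P(11) = 0.5 even though each marginal is still 0.5,
# and P(01) = P(10) = 0, so recalling one word guarantees the other.
print(p(product, 3), p(entangled, 3), p(entangled, 1))
```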
Abstract:
Government figures put the current indigenous unemployment rate at around 23%, 3 times the unemployment rate for other Australians. This thesis aims to assess whether Australian indirect discrimination legislation can provide a remedy for one of the causes of indigenous unemployment - the systemic discrimination which can result from the mere operation of established procedures of recruitment and hiring. The impact of those practices on indigenous people is examined in the context of an analysis of anti-discrimination legislation and cases from all Australian jurisdictions from the time of the passing of the Racial Discrimination Act by the Commonwealth in 1975 to the present. The thesis finds a number of reasons why the legislation fails to provide equality of opportunity for indigenous people seeking to enter the workforce. In nearly all jurisdictions it is obscurely drafted, used mainly by educated middle class white women, and provides remedies which tend to be compensatory damages rather than change to recruitment policy. White dominance of the legal process has produced legislative and judicial definitions of "race" and "Aboriginality" which focus on biology rather than cultural difference. In the commissions and tribunals complaints of racial discrimination are often rejected on the grounds of being "vexatious" or "frivolous", not reaching the required standard of proof, or not showing a causal connection between race and the conduct complained of. In all jurisdictions the cornerstone of liability is whether a particular employment term, condition or practice is reasonable. The thesis evaluates the approaches taken by appellate courts, including the High Court, and concludes that there is a trend towards an interpretation of reasonableness which favours employer arguments such as economic rationalism, the maintenance of good industrial relations, managerial prerogative to hire and fire, and the protection of majority rights. 
The thesis recommends that separate, clearly drafted legislation should be passed to address indigenous disadvantage and that indigenous people should be involved in all stages of the process.
Abstract:
A non-destructive, diffuse reflectance near infrared spectroscopy (DR-NIRS) approach is considered as a potential tool for determining the component-level structural properties of articular cartilage. To this end, DR-NIRS was applied in vitro to detect structural changes, using principal component analysis as the statistical basis for characterization. The results show that this technique, particularly with first-derivative pretreatment, can distinguish normal, intact cartilage from enzymatically digested cartilage. Further, this paper establishes that DR-NIRS enables probing of the full depth of the uncalcified cartilage matrix, potentially allowing the assessment of degenerative changes in joint tissue independent of the site of initiation of the osteoarthritic process.
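The analysis pipeline described above (first-derivative pretreatment followed by principal component analysis) can be sketched as follows. The "spectra" are synthetic stand-ins rather than cartilage data, and PCA is implemented with a dependency-free power iteration:

```python
def first_derivative(spectrum):
    """Simple first-difference pretreatment, removing baseline offsets."""
    return [b - a for a, b in zip(spectrum, spectrum[1:])]

def mean_center(rows):
    n = len(rows)
    means = [sum(col) / n for col in zip(*rows)]
    return [[x - m for x, m in zip(row, means)] for row in rows]

def first_pc(rows, iters=200):
    """Leading principal component via power iteration on X^T X."""
    dim = len(rows[0])
    v = [1.0] * dim
    for _ in range(iters):
        xv = [sum(x * vi for x, vi in zip(row, v)) for row in rows]
        w = [sum(rows[i][j] * xv[i] for i in range(len(rows)))
             for j in range(dim)]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    return v

# Two synthetic groups whose spectra differ in slope; the per-sample
# baseline offset (0.01 * k) is removed by the derivative pretreatment.
intact   = [[0.1 * i + 0.01 * k for i in range(10)] for k in range(3)]
digested = [[0.2 * i + 0.01 * k for i in range(10)] for k in range(3)]

X = mean_center([first_derivative(s) for s in intact + digested])
pc1 = first_pc(X)
scores = [sum(x * v for x, v in zip(row, pc1)) for row in X]
print(scores)  # the two groups fall on opposite sides of zero along PC1
```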
Abstract:
This talk proceeds from the premise that IR should engage in a more substantial dialogue with cognitive science. After all, how users decide relevance, or how they choose terms to modify a query, are processes rooted in human cognition. Recently, there has been a growing literature applying quantum theory (QT) to model cognitive phenomena. This talk will survey recent research, in particular the modelling of interference effects in human decision making. One aspect of QT will be illustrated: how quantum entanglement can be used to model word associations in human memory. The implications of this will be briefly discussed in terms of a new approach to modelling concept combinations. Tentative links to human abductive reasoning will also be drawn. The basic theme behind this talk is that QT can potentially provide a new genre of information processing models (including search) more aligned with human cognition.
Abstract:
Lateral gene transfer (LGT) from prokaryotes to microbial eukaryotes is usually detected by chance through genome-sequencing projects. Here, we explore a different, hypothesis-driven approach. We show that the fitness advantage associated with the transferred gene, typically invoked only in retrospect, can be used to design a functional screen capable of identifying postulated LGT cases. We hypothesized that beta-glucuronidase (gus) genes may be prone to LGT from bacteria to fungi (thought to lack gus) because this would enable fungi to utilize glucuronides in vertebrate urine as a carbon source. Using an enrichment procedure based on a glucose-releasing glucuronide analog (cellobiouronic acid), we isolated two gus(+) ascomycete fungi from soils (Penicillium canescens and Scopulariopsis sp.). A phylogenetic analysis suggested that their gus genes, as well as the gus genes identified in genomic sequences of the ascomycetes Aspergillus nidulans and Gibberella zeae, had been introgressed laterally from high-GC gram(+) bacteria. Two such bacteria (Arthrobacter spp.), isolated together with the gus(+) fungi, appeared to be the descendants of a bacterial donor organism from which gus had been transferred to fungi. This scenario was independently supported by similar substrate affinities of the encoded beta-glucuronidases, the absence of introns from fungal gus genes, and the similarity between the signal peptide-encoding 5' extensions of some fungal gus genes and the Arthrobacter sequences upstream of gus. Differences in the sequences of the fungal 5' extensions suggested at least two separate introgression events after the divergence of the two main Euascomycete classes. We suggest that deposition of glucuronides on soils as a result of the colonization of land by vertebrates may have favored LGT of gus from bacteria to fungi in soils.
Abstract:
Nitrous oxide (N2O) is primarily produced by the microbially-mediated nitrification and denitrification processes in soils. It is influenced by a suite of climate (i.e. temperature and rainfall) and soil (physical and chemical) variables, interacting soil and plant nitrogen (N) transformations (either competing or supplying substrates) as well as land management practices. It is not surprising that N2O emissions are highly variable both spatially and temporally. Computer simulation models, which can integrate all of these variables, are required for the complex task of providing quantitative determinations of N2O emissions. Numerous simulation models have been developed to predict N2O production. Each model has its own philosophy in constructing simulation components as well as performance strengths. The models range from those that attempt to comprehensively simulate all soil processes to more empirical approaches requiring minimal input data. These N2O simulation models can be classified into three categories: laboratory, field and regional/global levels. Process-based field-scale N2O simulation models, which simulate whole agroecosystems and can be used to develop N2O mitigation measures, are the most widely used. The current challenge is how to scale up the relatively more robust field-scale model to catchment, regional and national scales. This paper reviews the development history, main construction components, strengths, limitations and applications of N2O emissions models, which have been published in the literature. The three scale levels are considered and the current knowledge gaps and challenges in modelling N2O emissions from soils are discussed.
Abstract:
There are a number of gel dosimeter calibration methods in contemporary usage. The present study is a detailed Monte Carlo investigation into the accuracy of several calibration techniques. Results show that for most arrangements the dose to gel accurately reflects the dose to water, with the most accurate method involving the use of a large-diameter flask of gel into which multiple small fields of varying dose are directed. The least accurate method was found to be that of a long test tube in a water phantom, coaxial with the beam. The large flask method is also the most straightforward and the least likely to introduce errors during setup, though, to its detriment, the volume of gel required is much greater than for the other methods.
Abstract:
In plant cells, myosin is believed to be the molecular motor responsible for actin-based motility processes such as cytoplasmic streaming and directed vesicle transport. In an effort to characterize plant myosin, a cDNA encoding a myosin heavy chain was isolated from Arabidopsis thaliana. The predicted product of the MYA1 gene is 173 kDa and is structurally similar to the class V myosins. It is composed of the highly conserved NH2-terminal "head" domain, a putative calmodulin-binding "neck" domain, an alpha-helical coiled-coil domain, and a COOH-terminal domain. Northern blot analysis shows that the Arabidopsis MYA1 gene is expressed in all the major plant tissues (flower, leaf, root, and stem). We suggest that the MYA1 myosin may be involved in a general intracellular transport process in plant cells.
Abstract:
This manuscript took a 'top down' approach to understanding the survival of inhabitant cells in the ecosystem of bone, working from higher to lower length and time scales through the hierarchical ecosystem of bone. Our working hypothesis is that nature "engineered" the skeleton using a 'bottom up' approach, where the mechanical properties of cells emerge from their adaptation to their local mechanical milieu. Cell aggregation and the formation of higher-order anisotropic structure result in emergent architectures through cell differentiation and extracellular matrix secretion. These emergent properties, including mechanical properties and architecture, result in mechanical adaptation at the length scales and longer time scales most relevant for the survival of the vertebrate organism [Knothe Tate and von Recum 2009]. We are currently using insights from this approach to harness nature's regeneration potential and to engineer novel mechanoactive materials [Knothe Tate et al. 2007, Knothe Tate et al. 2009]. In addition to potential applications of these exciting insights, these studies may provide important clues to the evolution and development of vertebrate animals. For instance, one might ask why mesenchymal stem cells condense at all. There is a putative advantage to self-assembly and cooperation, but this advantage is somewhat outweighed by the need for infrastructural complexity (e.g., circulatory systems comprised of specific differentiated cell types, which in turn form conduits and pumps to overcome the limitations of mass transport via diffusion; diffusion is untenable for multicellular organisms larger than 250 microns in diameter). A better question might be: why do cells build skeletal tissue? Once cooperating cells in tissues begin to deplete local sources of food in their aquatic environment, those that have evolved a means to locomote likely have an evolutionary advantage.
Once the environment becomes less aquatic and more terrestrial, self-assembled organisms with the ability to move on land might have conferred evolutionary advantages as well. So did the cytoskeleton evolve several length scales, enabling the emergence of skeletal architecture for vertebrate animals? Did the evolutionary advantage of motility over noncompliant terrestrial substrates (walking on land) favor adaptations including the emergence of intracellular architecture (changes in the cytoskeleton and upregulation of structural protein manufacture), intercellular condensation, mineralization of tissues, and the emergence of higher-order architectures? How far does evolutionary Darwinism extend, and how can we exploit this knowledge to engineer smart materials and architectures on Earth and in new, exploratory environments? [Knothe Tate et al. 2008]. We are limited only by our ability to imagine. Ultimately, we aim to understand nature, mimic nature, guide nature and/or exploit nature's engineering paradigms without engineering ourselves out of existence.
Abstract:
Algebuckina Waterhole exists as a permanent waterhole near a north-south dirt road and an old train line on the Oodnadatta Track - lines that once opened up the arid lands of central South Australia but are now bypassed. It is also the final and largest freshwater waterhole at the end of the Neales River system. It is a critical biodiversity site, a cultural place and a working environment. It is seen to need a resilient management plan that encompasses diverse interests and impacts. Its managers sense that the theories and practices emerging from the landscape disciplines may be of help. Work-in-progress research towards a management methodology is presented through scenarios on how landscape thinking and design, informed by an emergent textual and visual lexicon for water landscapes, can intersect with scientific fieldwork to produce useful and transferable outcomes for Algebuckina.