30 results for Multiple-use forestry
in Aston University Research Archive
Abstract:
SPOT simulation imagery was acquired for a test site in the Forest of Dean in Gloucestershire, U.K. These data were qualitatively and quantitatively evaluated for their potential application in forest resource mapping and management. A variety of techniques are described for enhancing the image with the aim of providing species-level discrimination within the forest. Visual interpretation of the imagery was more successful than automated classification. The heterogeneity within the forest classes, and in particular between the forest and urban classes, resulted in poor discrimination using traditional 'per-pixel' automated methods of classification. Different means of assessing classification accuracy are proposed. Two techniques for measuring textural variation were investigated in an attempt to improve classification accuracy. The first of these, a sequential segmentation method, was found to be beneficial. The second, a parallel segmentation method, resulted in little improvement, though this may be related to a combination of the image resolution and the size of the texture extraction area. The effect on classification accuracy of combining the SPOT simulation imagery with other data types is investigated. A grid cell encoding technique was selected as most appropriate for storing digitised topographic (elevation, slope) and ground truth data. Topographic data were shown to improve species-level classification, though with sixteen classes overall accuracies were consistently below 50%. Neither sub-division into age groups nor the incorporation of principal components and a band ratio significantly improved classification accuracy. It is concluded that SPOT imagery will not permit species-level classification within forested areas as diverse as the Forest of Dean. The imagery will be most useful as part of a multi-stage sampling scheme. The use of texture analysis is highly recommended for extracting maximum information content from the data.
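Classification accuracy in remote sensing is conventionally assessed from a confusion (error) matrix, from which overall accuracy and a chance-corrected agreement statistic such as Cohen's kappa can be derived. A minimal sketch, using invented class counts rather than any figures from the study:

```python
import numpy as np

def accuracy_measures(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                    # observed (overall) accuracy
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2  # expected chance agreement
    kappa = (po - pe) / (1 - pe)
    return po, kappa

# Toy two-class matrix (e.g. forest vs. non-forest); values are illustrative.
po, kappa = accuracy_measures([[40, 10],
                               [5, 45]])
print(f"overall accuracy = {po:.2f}, kappa = {kappa:.2f}")
```

Kappa discounts the agreement expected by chance alone, which is why it is often preferred to raw overall accuracy when class proportions are unbalanced.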
Incorporation of the imagery into a GIS will both aid discrimination and provide a useful management tool.
Abstract:
1. The responses of the electrically stimulated guinea-pig ileum and vas deferens to human and rat calcitonin gene-related peptide (CGRP) and amylin were investigated. 2. The inhibition of contraction of the ileum produced by human alpha CGRP was antagonized by human alpha CGRP8-37 (apparent pA2 estimated at 7.15 +/- 0.23) > human alpha CGRP19-37 (apparent pA2 estimated as 6.67 +/- 0.33) > [Tyr0]-human alpha CGRP28-37. The amylin antagonist, AC187, was threefold less potent than CGRP8-37 in antagonizing human alpha CGRP. 3. Both human beta- and rat alpha CGRP inhibited contractions of the ileum, but this was less sensitive to inhibition by CGRP8-37 than the effect of human alpha CGRP. However, CGRP19-37 was twenty times more effective in inhibiting the response to rat alpha CGRP (apparent pA2 estimated as 8.0 +/- 0.1) compared to human alpha CGRP. 4. Rat amylin inhibited contractions in about 10% of ileal preparations; this effect was not antagonized by any CGRP fragment. Human amylin had no action on this preparation. 5. Both human and rat alpha CGRP inhibited electrically stimulated contractions of the vas deferens, which were not antagonized by 3 microM CGRP8-37 or 10 microM AC187. 6. Rat amylin inhibited the stimulated contractions of the vas deferens (EC50 = 77 +/- 9 nM); human amylin was less potent (EC50 = 213 +/- 22 nM). The response to rat amylin was antagonized by 10 microM CGRP8-37 (EC50 = 242 +/- 25 nM) and 10 microM AC187 (EC50 = 610 +/- 22 nM). 7. It is concluded that human alpha CGRP relaxes the guinea-pig ileum via CGRP1-like receptors, but that human beta CGRP and rat alpha CGRP may use additional receptors. These are distinct from the CGRP2-like and amylin receptors on the guinea-pig vas deferens.
Abstract:
The availability of ‘omics’ technologies is transforming scientific approaches to physiological problems from a reductionist viewpoint to that of a holistic viewpoint. This is of profound importance in nutrition, since the integration of multiple systems at the level of gene expression on the synthetic side through to metabolic enzyme activity on the degradative side combine to govern nutrient availability to tissues. Protein activity is central to the process of nutrition from the initial absorption of nutrients via uptake carriers in the gut, through to distribution and transport in the blood, metabolism by degradative enzymes in tissues and excretion through renal tubule exchange proteins. Therefore, the global profiling of the proteome, defined as the entire protein complement of the genome expressed in a particular cell or organ, or in plasma or serum at a particular time, offers the potential for identification of important biomarkers of nutritional state that respond to alterations in diet. The present review considers the published evidence of nutritional modulation of the proteome in vivo which has expanded exponentially over the last 3 years. It highlights some of the challenges faced by researchers using proteomic approaches to understand the interactions of diet with genomic and metabolic–phenotypic variables in normal populations.
Abstract:
Researchers often use 3-way interactions in moderated multiple regression analysis to test the joint effect of 3 independent variables on a dependent variable. However, further probing of significant interaction terms varies considerably and is sometimes error prone. The authors developed a significance test for slope differences in 3-way interactions and illustrate its importance for testing psychological hypotheses. Monte Carlo simulations revealed that sample size, magnitude of the slope difference, and data reliability affected test power. Application of the test to published data yielded detection of some slope differences that were undetected by alternative probing techniques and led to changes of results and conclusions. The authors conclude by discussing the test's applicability for psychological research. Copyright 2006 by the American Psychological Association.
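A slope-difference test of the kind described can be expressed as a linear contrast on the fitted regression coefficients: the simple slope of x at given moderator values (z, w) is b_x + b_xz·z + b_xw·w + b_xzw·z·w, so the difference between two such slopes is a linear combination of b_xw and b_xzw. A hedged sketch with simulated data (variable names x, z, w and all coefficients are invented for illustration; this is not the authors' code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"x": rng.normal(size=n),
                   "z": rng.normal(size=n),
                   "w": rng.normal(size=n)})
# Simulated outcome with a genuine 3-way interaction
df["y"] = (0.5 * df.x + 0.3 * df.z + 0.2 * df.w
           + 0.4 * df.x * df.z + 0.25 * df.x * df.z * df.w
           + rng.normal(size=n))

m = smf.ols("y ~ x * z * w", data=df).fit()

# Simple slope of x at (z, w): b_x + b_xz*z + b_xw*w + b_xzw*z*w.
# Difference between the slopes at (z=+1, w=+1) and (z=+1, w=-1)
# reduces to 2*b_xw + 2*b_xzw; test it as a linear contrast.
names = list(m.params.index)
c = np.zeros(len(names))
c[names.index("x:w")] = 2.0
c[names.index("x:z:w")] = 2.0
slope_diff_test = m.t_test(c)
print(slope_diff_test)
```

In practice the moderator values +/-1 would be replaced by, say, one standard deviation above and below each moderator's mean, which changes only the contrast weights.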
Abstract:
Queueing theory is an effective tool in the analysis of computer communication systems. Many results in queueing analysis have been derived in the form of Laplace and z-transform expressions. Accurate inversion of these transforms is very important in the study of computer systems, but the inversion is very often difficult. In this thesis, methods for solving some of these queueing problems, by use of digital signal processing techniques, are presented. The z-transform of the queue length distribution for the M/G(Y)/1 system is derived. Two numerical methods for the inversion of the transform, together with the standard numerical technique for solving transforms with multiple queue-state dependence, are presented. Bilinear and Poisson transform sequences are presented as useful ways of representing continuous-time functions in numerical computations.
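One standard numerical route for inverting such transforms, in the digital-signal-processing spirit of the thesis, is to sample the z-transform on a circle inside its radius of convergence and apply an inverse DFT (a discretisation of the Cauchy inversion integral). A minimal sketch for the M/M/1 queue-length generating function, chosen here only because its coefficients are known in closed form, not because it is the system studied in the thesis:

```python
import numpy as np

rho = 0.6                                  # traffic intensity
P = lambda z: (1 - rho) / (1 - rho * z)    # M/M/1 queue-length PGF

N = 64                                     # number of coefficients to recover
r = 0.9                                    # contour radius, inside the pole at 1/rho
k = np.arange(N)
samples = P(r * np.exp(2j * np.pi * k / N))

# p_n = (1 / (N r^n)) * sum_k P(r e^{2*pi*i*k/N}) e^{-2*pi*i*k*n/N}
p = np.real(np.fft.fft(samples)) / (N * r**k)

exact = (1 - rho) * rho**k                 # known distribution: (1-rho)*rho^n
print(np.max(np.abs(p[:20] - exact[:20]))) # aliasing error is negligible here
```

The contour radius r trades off aliasing error (smaller is better) against round-off amplification by the r^{-n} factor (larger is better), which is exactly the kind of numerical care such inversions demand.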
Abstract:
The rapidly increasing demand for cellular telephony is placing greater demand on the limited bandwidth resources available. This research is concerned with techniques which enhance the capacity of a Direct-Sequence Code-Division-Multiple-Access (DS-CDMA) mobile telephone network. The capacity of both Private Mobile Radio (PMR) and cellular networks are derived and the many techniques which are currently available are reviewed. Areas which may be further investigated are identified. One technique which is developed is the sectorisation of a cell into toroidal rings. This is shown to provide an increased system capacity when the cell is split into these concentric rings and this is compared with cell clustering and other sectorisation schemes. Another technique for increasing the capacity is achieved by adding to the amount of inherent randomness within the transmitted signal so that the system is better able to extract the wanted signal. A system model has been produced for a cellular DS-CDMA network and the results are presented for two possible strategies. One of these strategies is the variation of the chip duration over a signal bit period. Several different variation functions are tried and a sinusoidal function is shown to provide the greatest increase in the maximum number of system users for any given signal-to-noise ratio. The other strategy considered is the use of additive amplitude modulation together with data/chip phase-shift-keying. The amplitude variations are determined by a sparse code so that the average system power is held near its nominal level. This strategy is shown to provide no further capacity since the system is sensitive to amplitude variations. When both strategies are employed, however, the sensitivity to amplitude variations is shown to reduce, thus indicating that the first strategy both increases the capacity and the ability to handle fluctuations in the received signal power.
Abstract:
Quantum dots (Qdots) are fluorescent nanoparticles that have great potential as detection agents in biological applications. Their optical properties, including photostability and narrow, symmetrical emission bands with large Stokes shifts, and the potential for multiplexing of many different colours, give them significant advantages over traditionally used fluorescent dyes. Here, we report the straightforward generation of stable, covalent quantum dot-protein A/G bioconjugates that will be able to bind to almost any IgG antibody, and therefore can be used in many applications. An additional advantage is that the requirement for a secondary antibody is removed, simplifying experimental design. To demonstrate their use, we show their application in multiplexed western blotting. The sensitivity of Qdot conjugates is found to be superior to fluorescent dyes, and comparable to, or potentially better than, enhanced chemiluminescence. We show a true biological validation using a four-colour multiplexed western blot against a complex cell lysate background, and have significantly improved previously reported non-specific binding of the Qdots to cellular proteins.
Abstract:
Investigations into the modelling techniques that depict the transport of discrete phases (gas bubbles or solid particles) and model biochemical reactions in a bubble column reactor are discussed here. The mixture model was used to calculate gas-liquid, solid-liquid and gas-liquid-solid interactions. Multiphase flow is a difficult phenomenon to capture, particularly in bubble columns where the major driving force is caused by the injection of gas bubbles. The gas bubbles cause a large density difference to occur that results in transient multi-dimensional fluid motion. Standard design procedures do not account for the transient motion, due to the simplifying assumptions of steady plug flow. Computational fluid dynamics (CFD) can assist in expanding the understanding of complex flows in bubble columns by characterising the flow phenomena for many geometrical configurations. Therefore, CFD has a role in the education of chemical and biochemical engineers, providing the examples of flow phenomena that many engineers may not experience, even through experimentation. The performance of the mixture model was investigated for three domains (plane, rectangular and cylindrical) and three flow models (laminar, k-ε turbulence and the Reynolds stresses). This investigation raised many questions about how gas-liquid interactions are captured numerically. To answer some of these questions the analogy between thermal convection in a cavity and gas-liquid flow in bubble columns was invoked. This involved modelling the buoyant motion of air in a narrow cavity for a number of turbulence schemes. The difference in density was caused by a temperature gradient that acted across the width of the cavity. Multiple vortices were obtained when the Reynolds stresses were utilised with the addition of a basic flow profile after each time step.
To implement the three-phase models an alternative mixture model was developed and compared against a commercially available mixture model for three turbulence schemes. The scheme where just the Reynolds stresses model was employed predicted the transient motion of the fluids quite well for both mixture models. Solid-liquid and then alternative formulations of the gas-liquid-solid model were compared against one another. The alternative form of the mixture model was found to perform particularly well for both gas and solid phase transport when calculating two- and three-phase flow. The improvement in the solutions obtained was a result of the inclusion of the Reynolds stresses model and differences in the mixture models employed. The differences between the alternative mixture models were found in the volume fraction equation (flux and deviatoric stress tensor terms) and the viscosity formulation for the mixture phase.
Abstract:
The thesis examines Kuhn's (1962, 1970) concept of paradigm, assesses how it is employed for mapping intellectual terrain in the social sciences, and evaluates its use in research based on multiple theory positions. In so doing it rejects both the theses of total paradigm 'incommensurability' (Kuhn, 1962), and also of liberal 'translation' (Popper, 1970), in favour of a middle ground through the 'language-game of everyday life' (Wittgenstein, 1953). The thesis ultimately argues for the possibility of being 'trained-into' new paradigms, given the premise that 'unorganised experience cannot order perception' (Phillips, 1977). In conducting multiple paradigm research the analysis uses the Burrell and Morgan (1979) model for examining the work organisation of a large provincial fire service. This analysis accounts for, firstly, a 'functionalist' assessment of work design, demonstrating inter alia the decrease in reported motivation with length of service; secondly, an 'interpretive' portrayal of the daily accomplishment of task routines, highlighting the discretionary and negotiated nature of the day's events; thirdly, a 'radical humanist' analysis of workplace ideology, demonstrating the hegemonic role of officer training practices; and finally, a 'radical structuralist' description of the labour process, focusing on the establishment of a 'normal working day'. Although the argument is made for the possibility of conducting multiple paradigm research, the conclusion stresses the many institutional pressures serving to offset development.
Abstract:
An investigator may also wish to select a small subset of the X variables which give the best prediction of the Y variable. In this case, the question is how many variables should the regression equation include? One method would be to calculate the regression of Y on every subset of the X variables and choose the subset that gives the smallest mean square deviation from the regression. Most investigators, however, prefer to use a 'stepwise multiple regression' procedure. There are two forms of this analysis called the 'step-up' (or 'forward') method and the 'step-down' (or 'backward') method. This Statnote illustrates the use of stepwise multiple regression with reference to the scenario introduced in Statnote 24, viz., the influence of climatic variables on the growth of the crustose lichen Rhizocarpon geographicum (L.) DC.
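The 'step-up' procedure described can be sketched as a greedy search that, at each step, adds whichever remaining predictor most reduces the residual sum of squares of the ordinary least-squares fit. This is a simplification: published step-up procedures typically use an F-to-enter criterion rather than raw RSS improvement, and the function below is illustrative only.

```python
import numpy as np

def forward_select(X, y, max_vars=None):
    """Greedy 'step-up' selection: at each step add the predictor that
    most reduces the residual sum of squares of the OLS fit."""
    n, p = X.shape
    chosen, remaining = [], list(range(p))

    def rss(cols):
        # OLS with an intercept on the selected columns
        A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        return resid @ resid

    best = rss([])
    while remaining and (max_vars is None or len(chosen) < max_vars):
        scores = {c: rss(chosen + [c]) for c in remaining}
        c_best = min(scores, key=scores.get)
        if scores[c_best] >= best:          # no improvement: stop
            break
        chosen.append(c_best)
        remaining.remove(c_best)
        best = scores[c_best]
    return chosen

# Toy example: y depends only on columns 0 and 2 of X.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] - 2 * X[:, 2] + 0.1 * rng.normal(size=200)
print(forward_select(X, y, max_vars=2))
```

A step-down method would run the same loop in reverse, starting from the full model and deleting the least useful predictor at each step.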
Abstract:
Satellite information, in combination with conventional point source measurements, can be a valuable source of information. This thesis is devoted to the spatial estimation of areal rainfall over a region using both the measurements from dense and sparse networks of rain-gauges and images from the meteorological satellites. A primary concern is to study the effects of such satellite-assisted rainfall estimates on the performance of rainfall-runoff models. Low-cost image processing systems and peripherals are used to process and manipulate the data. Both secondary and primary satellite images were used for analysis. The secondary data was obtained from the in-house satellite receiver and the primary data was obtained from an outside source. Ground truth data was obtained from the local Water Authority. A number of algorithms are presented that combine the satellite and conventional data sources to produce areal rainfall estimates and the results are compared with some of the more traditional methodologies. The results indicate that the satellite cloud information is valuable in the assessment of the spatial distribution of areal rainfall, for both half-hourly and daily estimates of rainfall. It is also demonstrated how the performance of the simple multiple regression rainfall-runoff model is improved when satellite cloud information is used as a separate input in addition to rainfall estimates from conventional means. The use of low-cost equipment, from image processing systems to satellite imagery, makes it possible for developing countries to introduce such systems in areas where the benefits are greatest.
Abstract:
OBJECTIVES: The objective of this research was to design a clinical decision support system (CDSS) that supports heterogeneous clinical decision problems and runs on multiple computing platforms. Meeting this objective required a novel design to create an extendable and easy-to-maintain clinical CDSS for point of care support. The proposed solution was evaluated in a proof of concept implementation. METHODS: Based on our earlier research with the design of a mobile CDSS for emergency triage we used ontology-driven design to represent essential components of a CDSS. Models of clinical decision problems were derived from the ontology and they were processed into executable applications during runtime. This allowed scaling applications' functionality to the capabilities of computing platforms. A prototype of the system was implemented using the extended client-server architecture and Web services to distribute the functions of the system and to make it operational in limited connectivity conditions. RESULTS: The proposed design provided a common framework that facilitated development of diversified clinical applications running seamlessly on a variety of computing platforms. It was prototyped for two clinical decision problems and settings (triage of acute pain in the emergency department and postoperative management of radical prostatectomy on the hospital ward) and implemented on two computing platforms: desktop and handheld computers. CONCLUSIONS: The requirement of the CDSS heterogeneity was satisfied with ontology-driven design. Processing of application models described with the help of ontological models allowed having a complex system running on multiple computing platforms with different capabilities. Finally, separation of models and runtime components contributed to improved extensibility and maintainability of the system.
Abstract:
Introduction - Rheumatoid arthritis (RA) associates with excessive cardiovascular morbidity and mortality, attributed to both traditional and novel cardiovascular risk factors. The metabolic syndrome, a cluster of classical cardiovascular risk factors, including hypertension, obesity, glucose intolerance, and dyslipidaemia, is highly prevalent in RA. Reports suggest that long-term glucocorticoid (GC) use may exacerbate individual cardiovascular risk factors, but there have been no studies in RA to assess whether it associates with the metabolic syndrome. We examined whether GC exposure associates with the presence of metabolic syndrome in patients with RA. Methods - RA patients (n = 398) with detailed clinical and laboratory assessments were categorised into three groups according to GC exposure: no/limited (<3 months) exposure (NE), low-dose (<7.5 mg/day) long-term exposure (LE), and medium-dose (greater than or equal to 7.5 mg to 30 mg/day) long-term exposure (ME). The metabolic syndrome was defined using the National Cholesterol Education Programme III guidelines. The association of GC exposure with the metabolic syndrome was evaluated using binary logistic regression. Results - The metabolic syndrome was present in 40.1% of this population and its prevalence did not differ significantly between the GC exposure groups (NE 37.9% versus LE 40.7% versus ME 50%, P = 0.241). Binary logistic regression did not demonstrate any increased odds for the metabolic syndrome when comparing ME with LE (odds ratio = 1.64, 95% confidence interval 0.92 to 2.92, P = 0.094) and remained non-significant after adjusting for multiple potential confounders. Conclusions - Long-term GC exposure does not appear to associate with a higher prevalence of the metabolic syndrome in patients with RA.
The components of the metabolic syndrome may already be extensively modified by other processes in RA (including chronic inflammation and treatments other than GCs), leaving little scope for additive effects of GCs.
Abstract:
Objective: To investigate the dynamics of communication within the primary somatosensory neuronal network. Methods: Multichannel EEG responses evoked by median nerve stimulation were recorded from six healthy participants. We investigated the directional connectivity of the evoked responses by assessing the Partial Directed Coherence (PDC) among five neuronal nodes (brainstem, thalamus and three in the primary sensorimotor cortex), which had been identified by using the Functional Source Separation (FSS) algorithm. We analyzed directional connectivity separately in the low (1-200 Hz, LF) and high (450-750 Hz, HF) frequency ranges. Results: LF forward connectivity showed peaks at 16, 20, 30 and 50 ms post-stimulus. An estimate of the strength of connectivity was modulated by feedback involving cortical and subcortical nodes. In HF, forward connectivity showed peaks at 20, 30 and 50 ms, with no apparent feedback-related strength changes. Conclusions: In this first non-invasive study in humans, we documented directional connectivity across the subcortical and cortical somatosensory pathway, discriminating transmission properties within the LF and HF ranges. Significance: The combined use of FSS and PDC in a simple protocol such as median nerve stimulation sheds light on how high and low frequency components of the somatosensory evoked response are functionally interrelated in sustaining somatosensory perception in healthy individuals. Thus, these components may potentially be explored as biomarkers of pathological conditions. © 2012 International Federation of Clinical Neurophysiology.
Abstract:
In this article, a social theory framework is developed to explain the common themes of recursive and adaptive practice underpinning the existing strategic management literature. In practice, there is a coexistent tension between recursive and adaptive forms of strategic action that spans multiple levels from macro-institutional and competitive contexts to within-firm levels of analysis to individual cognition. This tension may be better understood by examining how management practices are used to put strategy into practice. Such practices span multiple levels of context and are adaptable to their circumstances of use, serving to highlight both general characteristics and localized idiosyncrasies of strategy as practice. The article develops the concept of management practices-in-use into a research agenda and nine broad research questions that may be used to investigate empirically strategy as practice.