872 results for Thematic Text Analysis
Abstract:
The elucidation of spatial variation in the landscape can indicate potential wildlife habitats or breeding sites for vectors, such as ticks or mosquitoes, which cause a range of diseases. Information from remotely sensed data could aid the delineation of vegetation distribution on the ground in areas where local knowledge is limited. The data from digital images are often difficult to interpret because of pixel-to-pixel variation, that is, noise, and complex variation at more than one spatial scale. Landsat Enhanced Thematic Mapper Plus (ETM+) and Satellite Pour l'Observation de la Terre (SPOT) image data were analyzed for an area close to Douna in Mali, West Africa. The variograms of the normalized difference vegetation index (NDVI) from both types of image data were nested. The parameters of the nested variogram function from the Landsat ETM+ data were used to design the sampling for a ground survey of soil and vegetation data. Variograms of the soil and vegetation data showed that their variation was anisotropic and their scales of variation were similar to those of NDVI from the SPOT data. The short- and long-range components of variation in the SPOT data were filtered out separately by factorial kriging. The map of the short-range component appears to represent the patterns of vegetation and associated shallow slopes and drainage channels of the tiger bush system. The map of the long-range component also appeared to relate to broader patterns in the tiger bush and to gentle undulations in the topography. The results suggest that the types of image data analyzed in this study could be used to identify areas with more moisture in semiarid regions that could support wildlife and also be potential vector breeding sites.
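As an illustration of the two quantities this abstract builds on, the sketch below computes NDVI per pixel and an isotropic empirical variogram (a minimal Python sketch; the function names, lag binning, and tolerance parameter are assumptions, and the nested-variogram fitting and factorial kriging used in the study are not shown):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, computed per pixel."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def empirical_variogram(values, coords, lags, tol):
    """Isotropic empirical semivariance: for each lag h, average
    0.5*(z_i - z_j)**2 over point pairs separated by roughly h."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    gamma = []
    for h in lags:
        mask = np.triu(np.abs(d - h) <= tol, k=1)  # each pair counted once
        i, j = np.nonzero(mask)
        gamma.append(0.5 * np.mean((values[i] - values[j]) ** 2))
    return np.array(gamma)
```

A nested variogram would then be fitted to `gamma` as a sum of two structures with different ranges, matching the short- and long-range components described above.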
Abstract:
Standard form contracts are typically developed through a negotiated consensus, unless they are proffered by one specific interest group. Previously published plans of work and other descriptions of the processes in construction projects tend to focus on operational issues, or they tend to be prepared from the point of view of one or other of the dominant interest groups. Legal practice in the UK permits those who draft contracts to define their terms as they choose. There are no definitive rulings from the courts that give an indication as to the detailed responsibilities of project participants. The science of terminology offers useful guidance for discovering and describing terms and their meanings in their practical context, but has never been used for defining terms for responsibilities of participants in the construction project management process. Organizational analysis enables the management task to be deconstructed into its elemental parts in order that effective organizational structures can be developed. Organizational mapping offers a useful technique for reducing text-based descriptions of project management roles and responsibilities to a comparable basis. Research was carried out by means of a desk study, detailed analysis of nine plans of work and focus groups representing all aspects of the construction industry. No published plan of work offers definitive guidance. There is an enormous amount of variety in the way that terms are used for identifying responsibilities of project participants. A catalogue of concepts and terms (a “Terminology”) has been compiled and indexed to enable those who draft contracts to choose the most appropriate titles for project participants. The purpose of this terminology is to enable the selection and justification of appropriate terms in order to help define roles. 
The terminology brings an unprecedented clarity to the description of roles and responsibilities in construction projects and, as such, will be helpful for anyone seeking to assemble a team and specify roles for project participants.
Abstract:
A reference model of Fallible Endgame Play has been implemented and exercised with the chess engine WILHELM. Past experiments have demonstrated the value of the model and the robustness of decisions based on it: experimental results agree well with the predictions of a Markov-model theory. Here, the reference model is exercised on the well-known endgame KBBKN.
Abstract:
A reference model of Fallible Endgame Play has been implemented and exercised with the chess engine WILHELM. Various experiments have demonstrated the value of the model and the robustness of decisions based on it. Experimental results have also been compared with the theoretical predictions of a Markov model of the endgame and found to be in close agreement.
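The abstracts above do not specify the Markov model's structure, but for an absorbing Markov chain the expected number of moves to reach the won (absorbing) state from each transient state is conventionally t = (I − Q)⁻¹·1, where Q is the transient-to-transient transition matrix. A minimal, hypothetical sketch:

```python
import numpy as np

def expected_moves_to_win(q):
    """Expected number of moves to absorption from each transient state
    of an absorbing Markov chain: solve (I - Q) t = 1."""
    q = np.asarray(q, dtype=float)
    n = q.shape[0]
    return np.linalg.solve(np.eye(n) - q, np.ones(n))
```

For example, a single transient state that persists with probability 0.5 per move yields an expected depth of 2 moves.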
Abstract:
The potential of the τ-ω model for retrieving the volumetric moisture content of bare and vegetated soil from dual-polarisation passive microwave data acquired at single and multiple angles is tested. Measurement error and several additional sources of uncertainty will affect the theoretical retrieval accuracy. These include uncertainty in the soil temperature, in the vegetation structure and consequently its microwave single-scattering albedo, and in the soil microwave emissivity arising from its roughness. To test the effects of these uncertainties for simple homogeneous scenes, we attempt to retrieve soil moisture from a number of simulated microwave brightness temperature datasets generated using the τ-ω model. The uncertainties for each influence are estimated and applied to curves generated for typical scenarios, and an inverse model is used to retrieve the soil moisture content, vegetation optical depth and soil temperature. The effect of each influence on the theoretical soil moisture retrieval limit is explored, the likelihood of each sensor configuration meeting user requirements is assessed, and the most effective means of improving moisture retrieval are indicated.
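In its standard zeroth-order form, the forward τ-ω model used to generate such simulated brightness temperatures has three terms: attenuated soil emission, direct canopy emission, and canopy emission reflected by the soil. A minimal sketch (parameter names are assumptions; the study's treatment of roughness and angular sampling is not reproduced):

```python
import math

def tau_omega_tb(e_soil, t_soil, t_veg, tau, omega, theta_deg):
    """Zeroth-order tau-omega brightness temperature (K), one polarisation.
    e_soil: rough-soil emissivity; tau: vegetation optical depth (nadir);
    omega: single-scattering albedo; theta_deg: incidence angle."""
    gamma = math.exp(-tau / math.cos(math.radians(theta_deg)))  # canopy transmissivity
    r_soil = 1.0 - e_soil                                       # soil reflectivity
    tb_soil = e_soil * t_soil * gamma                           # attenuated soil emission
    tb_veg_up = (1.0 - omega) * t_veg * (1.0 - gamma)           # direct canopy emission
    tb_veg_down = tb_veg_up * r_soil * gamma                    # canopy term reflected by soil
    return tb_soil + tb_veg_up + tb_veg_down
```

Inversion then amounts to fitting (soil moisture via emissivity, τ, and soil temperature) to multi-angle, dual-polarisation observations of this quantity.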
Abstract:
Context: Learning can be regarded as knowledge construction, in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners’ competence in a competitive working environment. As information consumers, individual users demand personalised information provision which meets their own specific purposes, goals, and expectations. Objectives: The current methods in requirements engineering are capable of modelling the common user’s behaviour in the domain of knowledge construction. The users’ requirements can be represented as a case in the defined structure, which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled. However, there is a lack of suitable modelling methods to achieve this end. This paper presents a new ontological method for capturing an individual user’s requirements and transforming them into personalised information provision specifications. Hence the right information can be provided to the right user for the right purpose. Method: An experiment was conducted based on the qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulates, maps, configures, and learning content. The results were used as feedback for improvement. Result: The research work has produced an ontology model with a set of techniques which support the functions for profiling users’ requirements, reasoning over requirements patterns, generating workflows from norms, and formulating information provision specifications. Conclusion: The current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with its techniques, can further enhance RE approaches for modelling the individual user’s needs and discovering the user’s requirements.
Abstract:
We use the third perihelion pass by the Ulysses spacecraft to illustrate and investigate the “flux excess” effect, whereby open solar flux estimates from spacecraft increase with increasing heliocentric distance. We analyze the potential effects of small-scale structure in the heliospheric field (giving fluctuations in the radial component on timescales smaller than 1 h) and kinematic time-of-flight effects of longitudinal structure in the solar wind flow. We show that the flux excess is explained neither by very small-scale structure (timescales < 1 h) nor by the kinematic “bunching effect” on spacecraft sampling. The observed flux excess is, however, well explained by the kinematic effect of larger-scale (>1 day) solar wind speed variations on the frozen-in heliospheric field. We show that averaging over an interval T (that is long enough to eliminate structure originating in the heliosphere yet small enough to avoid cancelling opposite-polarity radial field that originates from genuine sector structure in the coronal source field) is only an approximately valid way of allowing for these effects and does not adequately explain or account for differences between the streamer belt and the polar coronal holes.
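The averaging procedure discussed here, i.e. taking the modulus of Br only after averaging over an interval T, can be sketched as follows (a hypothetical block-averaging illustration; the normalisation 4πr²⟨|Br|⟩ follows the conventional open-flux estimate, and the window is assumed to contain an integer number of samples):

```python
import numpy as np

def open_flux_estimate(br, r, window):
    """Open-flux estimate 4*pi*r**2 * <|<Br>_T|>: average the radial field
    over blocks of `window` samples, then take the modulus of each block
    mean, so structure on scales < T cancels while genuine sector
    polarity survives. br in consistent field units, r in metres."""
    n = (len(br) // window) * window
    block_means = br[:n].reshape(-1, window).mean(axis=1)
    return 4.0 * np.pi * r**2 * np.abs(block_means).mean()
```

With an alternating-polarity series, a window spanning both polarities cancels to zero while a one-sample window returns the full unsigned flux, which is exactly the T-dependence underlying the flux excess.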
Abstract:
Three existing models of Interplanetary Coronal Mass Ejection (ICME) transit between the Sun and the Earth are compared to coronagraph and in situ observations: all three models are found to perform with a similar level of accuracy (i.e. an average error between observed and predicted 1 AU transit times of approximately 11 h). To improve long-term space weather prediction, factors influencing CME transit are investigated. Both the removal of the plane-of-sky projection (as suffered by coronagraph-derived speeds of Earth-directed CMEs) and the use of observed values of solar wind speed fail to significantly improve transit time prediction. However, a correlation is found to exist between the late/early arrival of an ICME and the width of the preceding sheath region, suggesting that the error is a geometrical effect that can only be removed by a more accurate determination of a CME's trajectory and expansion. The correlation between magnetic field intensity and speed of ejecta at 1 AU is also investigated. It is found to be weak in the body of the ICME, but strong in the sheath, if the upstream solar wind conditions are taken into account.
Abstract:
Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measures to landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index which is correlated to its topological locality. A popular dimensionality reduction technique is the space-filling Hilbert curve, as it possesses good locality-preserving properties. However, there exists little comparison between the Hilbert curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated. The Hilbert curve, Sammon's mapping and Principal Component Analysis have been used to generate a one-dimensional space with locality-preserving properties. This work provides empirical evidence to support the use of the Hilbert curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial two-dimensional network model and with a realistic network topology model with a typical power-law distribution of node connectivity in the Internet. Nearest-neighbour analysis confirms the Hilbert curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results in the realistic network model show that there is scope for improvement, and better techniques to preserve locality information are required.
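For reference, the classic iterative mapping from 2-D integer coordinates to a Hilbert-curve index, of the kind used to reduce an (already quantized, two-component) landmark vector to a scalar peer identifier, can be sketched as follows (a generic textbook implementation, not the paper's code):

```python
def hilbert_index(order, x, y):
    """Map integer coordinates in [0, 2**order) onto the 1-D Hilbert index,
    so nearby (x, y) points tend to receive nearby indices."""
    d = 0
    s = 2 ** (order - 1)
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/reflect the quadrant so the curve keeps its orientation
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d
```

At order 1 the four cells are visited in the sequence (0,0), (0,1), (1,1), (1,0), so adjacent cells along the curve differ by exactly 1 in index, which is the locality-preserving property exploited for peer identifiers.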
Abstract:
The images taken by the Heliospheric Imagers (HIs), part of the SECCHI imaging package onboard the pair of STEREO spacecraft, provide information on the radial and latitudinal evolution of the plasma compressed inside corotating interaction regions (CIRs). A plasma density wave imaged by the HI instrument onboard STEREO-B was found to propagate towards STEREO-A, enabling a comparison between simultaneous remote-sensing and in situ observations of its structure to be performed. In situ measurements made by STEREO-A show that the plasma density wave is associated with the passage of a CIR. The magnetic field compressed after the CIR stream interface (SI) is found to have a planar distribution. Minimum variance analysis of the magnetic field vectors shows that the SI is inclined at 54° to the orbital plane of the STEREO-A spacecraft. This inclination of the CIR SI is comparable to the inclination of the associated plasma density wave observed by HI. A small-scale magnetic cloud with a flux rope topology and radial extent of 0.08 AU is also embedded prior to the SI. The pitch-angle distribution of suprathermal electrons measured by the STEREO-A SWEA instrument shows that an open magnetic field topology in the cloud replaced the heliospheric current sheet locally. These observations confirm that HI observes CIRs in difference images when a small-scale transient is caught up in the compression region.
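Minimum variance analysis of the kind applied here reduces to an eigen-decomposition of the magnetic variance matrix: the eigenvector with the smallest eigenvalue estimates the boundary normal, and its angle to the orbital plane gives the quoted inclination. A minimal sketch (function name assumed):

```python
import numpy as np

def minimum_variance_normal(b):
    """Minimum variance analysis: b is an (N, 3) array of magnetic field
    vectors. Returns the unit eigenvector of the variance matrix with the
    smallest eigenvalue, i.e. the estimated normal to the discontinuity."""
    b = np.asarray(b, dtype=float)
    m = np.cov(b, rowvar=False, bias=True)  # 3x3 magnetic variance matrix
    eigvals, eigvecs = np.linalg.eigh(m)    # eigh sorts eigenvalues ascending
    return eigvecs[:, 0]                    # minimum-variance direction
```

For a field rotating purely in the x-y plane, the recovered normal is the z axis, as expected for a planar structure.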
Abstract:
BACKGROUND: Serial Analysis of Gene Expression (SAGE) is a powerful tool for genome-wide transcription studies. Unlike microarrays, it has the ability to detect novel forms of RNA such as alternatively spliced and antisense transcripts, without the need for prior knowledge of their existence. One limitation of using SAGE on an organism with a complex genome and lacking detailed sequence information, such as the hexaploid bread wheat Triticum aestivum, is accurate annotation of the tags generated. Without accurate annotation it is impossible to fully understand the dynamic processes involved in such complex polyploid organisms. Hence we have developed and utilised novel procedures to characterise, in detail, SAGE tags generated from the whole grain transcriptome of hexaploid wheat. RESULTS: Examination of 71,930 Long SAGE tags generated from six libraries derived from two wheat genotypes grown under two different conditions suggested that SAGE is a reliable and reproducible technique for use in studying the hexaploid wheat transcriptome. However, our results also showed that in poorly annotated and/or poorly sequenced genomes, such as hexaploid wheat, considerably more information can be extracted from SAGE data by carrying out a systematic analysis of both perfect and "fuzzy" (partially matched) tags. This detailed analysis of the SAGE data shows first that while there is evidence of alternative polyadenylation this appears to occur exclusively within the 3' untranslated regions. Secondly, we found no strong evidence for widespread alternative splicing in the developing wheat grain transcriptome. However, analysis of our SAGE data shows that antisense transcripts are probably widespread within the transcriptome and appear to be derived from numerous locations within the genome. 
Examination of antisense transcripts showing sequence similarity to the Puroindoline a and Puroindoline b genes suggests that such antisense transcripts might have a role in the regulation of gene expression. CONCLUSION: Our results indicate that the detailed analysis of transcriptome data, such as SAGE tags, is essential to understand fully the factors that regulate gene expression, and that such analysis of the wheat grain transcriptome reveals that antisense transcripts may be widespread and hence probably play a significant role in the regulation of gene expression during grain development.
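The distinction between perfect and "fuzzy" (partially matched) tags described in the results can be illustrated as below (a simplified sketch; the study's annotation pipeline is more elaborate than this single-substitution rule, and the tag set and mismatch threshold here are assumptions):

```python
def match_tag(tag, references, max_mismatches=1):
    """Classify a SAGE tag against reference sequences: ('perfect', ref)
    on an exact hit, ('fuzzy', ref) if a same-length reference matches
    with at most max_mismatches substitutions, else ('unmatched', None)."""
    if tag in references:
        return ("perfect", tag)
    for ref in references:
        if len(ref) == len(tag):
            mismatches = sum(a != b for a, b in zip(tag, ref))
            if mismatches <= max_mismatches:
                return ("fuzzy", ref)
    return ("unmatched", None)
```

In a poorly sequenced polyploid genome such as hexaploid wheat, the fuzzy class recovers tags from homoeologous or incompletely sequenced transcripts that an exact-match-only analysis would discard.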
Abstract:
The recently described cupin superfamily of proteins includes the germin and germin-like proteins, of which the cereal oxalate oxidase is the best characterized. This superfamily also includes seed storage proteins, in addition to several microbial enzymes and proteins with unknown function. All these proteins are characterized by the conservation of two central motifs, usually containing two or three histidine residues presumed to be involved with metal binding in the catalytic active site. The present study on the coding regions of Synechocystis PCC6803 identifies a previously unknown group of 12 related cupins, each containing the characteristic two-motif signature. This group comprises 11 single-domain proteins, ranging in length from 104 to 289 residues, and includes two phosphomannose isomerases and two epimerases involved in cell wall synthesis, a member of the pirin group of nuclear proteins, a possible transcriptional regulator, and a close relative of a cytochrome c551 from Rhodococcus. Additionally, there is a duplicated, two-domain protein that has close similarity to an oxalate decarboxylase from the fungus Collybia velutipes and that is a putative progenitor of the storage proteins of land plants.
Abstract:
The results of an experimental study into the oxidative degradation of proxies for atmospheric aerosol are presented. We demonstrate that the laser Raman tweezers method can be used successfully to obtain uptake coefficients for gaseous oxidants on individual aqueous and organic droplets, whilst the size and composition of the droplets are simultaneously followed. A laser tweezers system was used to trap individual droplets containing an unsaturated organic compound in either an aqueous or organic (alkane) solvent. The droplet was exposed to gas-phase ozone and the reaction kinetics and products followed using Raman spectroscopy. The reactions of three different organic compounds with ozone were studied: fumarate anions, benzoate anions and α-pinene. The fumarate and benzoate anions in aqueous solution were used to represent components of humic-like substances (HULIS); α-pinene in an alkane solvent was studied as a proxy for biogenic aerosol. The kinetic analysis shows that for these systems the diffusive transport and mass accommodation of ozone are relatively fast, and that liquid-phase diffusion and reaction are the rate-determining steps. Uptake coefficients, γ, were found to be (1.1 ± 0.7) × 10⁻⁵, (1.5 ± 0.7) × 10⁻⁵ and (3.0–7.5) × 10⁻³ for the reactions of ozone with the fumarate-, benzoate- and α-pinene-containing droplets, respectively. Liquid-phase bimolecular rate coefficients for reactions of dissolved ozone molecules with fumarate, benzoate and α-pinene were also obtained: k(fumarate) = (2.7 ± 2) × 10⁵, k(benzoate) = (3.5 ± 3) × 10⁵ and k(α-pinene) = (1–3) × 10⁷ dm³ mol⁻¹ s⁻¹. The droplet size was found to remain stable over the course of the oxidation process for the HULIS proxies and for the oxidation of α-pinene in pentadecane.
The study of the α-pinene/ozone system is the first using organic seed particles to show that the hygroscopicity of the particle does not increase dramatically over the course of the oxidation. No products were detected by Raman spectroscopy for the reaction of benzoate ions with ozone. One product peak, consistent with aqueous carbonate anions, was observed when following the oxidation of fumarate ions by ozone. Product peaks observed in the reaction of ozone with α-pinene suggest the formation of new species containing carbonyl groups.
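Extracting a rate coefficient from the decay of a Raman band reduces, under pseudo-first-order conditions (dissolved ozone in excess), to fitting ln(I/I₀) = −k′t; the bimolecular coefficient then follows as k = k′/[O₃]. A minimal sketch (function name and the least-squares fit are assumptions, not the authors' analysis code):

```python
import numpy as np

def pseudo_first_order_k(times, intensities):
    """Estimate a pseudo-first-order rate coefficient k' (s^-1) from the
    decay of a Raman band intensity, by least-squares fitting a straight
    line to ln(I) versus t and returning minus the slope."""
    times = np.asarray(times, dtype=float)
    log_i = np.log(np.asarray(intensities, dtype=float))
    slope, _intercept = np.polyfit(times, log_i, 1)
    return -slope
```

Dividing k′ by the known dissolved-ozone concentration then gives bimolecular coefficients of the dm³ mol⁻¹ s⁻¹ form quoted above.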
Abstract:
A cross-platform field campaign, OP3, was conducted in the state of Sabah in Malaysian Borneo between April and July of 2008. Among the suite of observations recorded, the campaign included measurements of NOx and O3 – crucial outputs of any model chemistry mechanism. We describe the measurements of these species made from both the ground site and aircraft. We then use the output from two resolutions of the chemistry transport model p-TOMCAT to illustrate the ability of a global model chemical mechanism to capture the chemistry at the rainforest site. The basic model performance is good for NOx and poor for ozone. A box model containing the same chemical mechanism is used to explore the results of the global model in more depth and make comparisons between the two. Without some parameterization of the nighttime boundary layer – free troposphere mixing (i.e. the use of a dilution parameter), the box model does not reproduce the observations, pointing to the importance of adequately representing physical processes for comparisons with surface measurements. We conclude with a discussion of box model budget calculations of chemical reaction fluxes, deposition and mixing, and compare these results to output from p-TOMCAT. These show the same chemical mechanism behaves similarly in both models, but that emissions and advection play particularly strong roles in influencing the comparison to surface measurements.
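The dilution parameterisation mentioned above amounts to adding a relaxation term toward free-tropospheric concentrations in the box model's budget equation, dc/dt = P − L·c − k_dil·(c − c_ft). A minimal sketch (simple Euler integration; all names are assumptions and this is not p-TOMCAT or the campaign box model's code):

```python
import numpy as np

def integrate_box(c0, c_ft, production, loss_rate, k_dilution, dt, n_steps):
    """Euler integration of a single-species box model:
    dc/dt = P - L*c - k_dil*(c - c_ft),
    where the last term represents nighttime boundary layer to
    free-troposphere mixing (the dilution parameter)."""
    c = c0
    out = []
    for _ in range(n_steps):
        c += dt * (production - loss_rate * c - k_dilution * (c - c_ft))
        out.append(c)
    return np.array(out)
```

With production and chemical loss switched off, the boundary-layer concentration relaxes toward the free-tropospheric value at the rate k_dil, which is the behaviour the box model needed in order to reproduce the nighttime observations.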