855 results for synchrotron-based techniques
Abstract:
Many time series are measured monthly, either as averages or totals, and such data often exhibit seasonal variability: the values of the series are consistently larger for some months of the year than for others. A typical series of this type is the number of deaths each month attributed to SIDS (Sudden Infant Death Syndrome). Seasonality can be modelled in a number of ways. This paper describes and discusses various methods for modelling seasonality in SIDS data, though much of the discussion is relevant to other seasonally varying data. There are two main approaches: either fitting a circular probability distribution to the data, or using regression-based techniques to model the mean seasonal behaviour. Both are discussed in this paper.
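As a hedged illustration of the regression-based approach, the sketch below fits a single annual harmonic to monthly counts by least squares; the data, seed and variable names are invented stand-ins, not the SIDS data analysed in the paper.

import numpy as np

# Synthetic monthly counts standing in for a seasonally varying series
rng = np.random.default_rng(0)
months = np.arange(120)                               # ten years of monthly data
counts = 50 + 15 * np.cos(2 * np.pi * months / 12 - 0.5) + rng.poisson(3, months.size)

# Harmonic regression for the mean seasonal behaviour:
#   y_t = a0 + a1*cos(2*pi*t/12) + b1*sin(2*pi*t/12) + error
X = np.column_stack([np.ones(months.size),
                     np.cos(2 * np.pi * months / 12),
                     np.sin(2 * np.pi * months / 12)])
(a0, a1, b1), *_ = np.linalg.lstsq(X, counts, rcond=None)

amplitude = np.hypot(a1, b1)                          # size of the seasonal swing
peak_month = (np.arctan2(b1, a1) * 12 / (2 * np.pi)) % 12
print(f"mean level {a0:.1f}, seasonal amplitude {amplitude:.1f}, peak near month {peak_month:.1f}")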
Abstract:
Attitudes to floristics have changed considerably during the past few decades as a result of increasing and often more focused consumer demands, heightened awareness of the threats to biodiversity, information flow and overload, and the application of electronic and web-based techniques to information handling and processing. This paper will examine these concerns in relation to our floristic knowledge and needs in the region of SW Asia. Particular reference will be made to the experience gained from the Euro+Med PlantBase project for the preparation of an electronic plant-information system for Europe and the Mediterranean, with a single core list of accepted plant names and synonyms, based on consensus taxonomy agreed by a specialist network. The many challenges (scientific, technical and organisational) that it has presented will be discussed, as well as the problems of handling non-taxonomic information from fields such as conservation, karyology, biosystematics and mapping. The question of regional cooperation and the sharing of efforts and resources will also be raised and attention drawn to the recent planning workshop held in Rabat (May 2002) for establishing a technical cooperation network for taxonomic capacity building in North Africa as a possible model for the SW Asia region.
Abstract:
Mitochondrial DNA (mtDNA) mutations are an important cause of genetic disease and have been proposed to play a role in the ageing process. Quantification of total mtDNA mutation load in ageing tissues is difficult as mutational events are rare in a background of wild-type molecules, and detection of individual mutated molecules is beyond the sensitivity of most sequencing-based techniques. The methods currently most commonly used to document the incidence of mtDNA point mutations in ageing include post-PCR cloning, single-molecule PCR and the random mutation capture assay. The mtDNA mutation load obtained by these different techniques varies by orders of magnitude, but direct comparison of the three techniques on the same ageing human tissue has not been performed. We assess the procedures and practicalities involved in each of these three assays and discuss the results obtained by investigation of mutation loads in colonic mucosal biopsies from ten human subjects.
Abstract:
This paper presents recent developments to a vision-based traffic surveillance system which relies extensively on the use of geometrical and scene context. Firstly, a highly parametrised 3-D model is reported, able to adopt the shape of a wide variety of different classes of vehicle (e.g. cars, vans, buses etc.), and its subsequent specialisation to a generic car class which accounts for commonly encountered types of car (including saloon, hatchback and estate cars). Sample data collected from video images, by means of an interactive tool, have been subjected to principal component analysis (PCA) to define a deformable model having 6 degrees of freedom. Secondly, a new pose refinement technique using “active” models is described, able to recover both the pose of a rigid object and the structure of a deformable model; its performance is assessed in comparison with previously reported “passive” model-based techniques in the context of traffic surveillance. The new method is more stable and requires fewer iterations, especially when the number of free parameters increases, but shows somewhat poorer convergence. Typical applications for this work include robot surveillance and navigation tasks.
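The PCA step described above can be illustrated with a minimal, hedged sketch in which each training sample is a vectorised shape and the leading six principal components serve as the deformation parameters; the array sizes and random training data are placeholders, not the paper's vehicle samples.

import numpy as np

# Hypothetical training matrix: each row is one vectorised vehicle-shape sample
rng = np.random.default_rng(1)
shapes = rng.normal(size=(200, 60))                  # 200 samples, 60 shape parameters

mean_shape = shapes.mean(axis=0)
centred = shapes - mean_shape

# PCA via singular value decomposition; keep six modes of variation,
# mirroring a deformable model with 6 degrees of freedom
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
modes = Vt[:6]                                       # rows are principal deformation modes
variances = S[:6] ** 2 / (shapes.shape[0] - 1)

def synthesise(b):
    """Reconstruct a shape from a 6-vector of deformation parameters b."""
    return mean_shape + b @ modes

sample = synthesise(np.sqrt(variances))              # one standard deviation along each mode
print(sample.shape)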
Abstract:
Traditionally, the representation of competencies using computer-based techniques has been very difficult. This paper introduces competencies and how they are represented, the related concept of competency frameworks, and the difficulties of using traditional ontology techniques to formalise them. A “vaguely” formalised framework has been developed within the EU project TRACE and is presented. The framework can be used to represent different competencies and competency frameworks. Through a case study using an example from the IT sector, it is shown how these can be used by individuals and organisations to specify their individual competency needs. Furthermore, it is described how these representations are used for comparisons between different specifications applying ontologies and ontology toolsets. The end result is a comparison that is not binary but ternary, providing “definite matches”, “possible/partial matches” and “no matches” using a “traffic light” analogy.
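The traffic-light comparison lends itself to a small worked example. The sketch below is a hedged illustration only: the skill names, ordinal scale and matching rule are invented and are not the TRACE framework's actual representation.

# Required profile and a candidate's profile, each rating competencies on a
# simple ordinal scale (0 = absent, higher = more proficient)
REQUIRED = {"java": 3, "requirements_analysis": 2, "network_security": 2}
CANDIDATE = {"java": 3, "requirements_analysis": 1}

def compare(required, candidate):
    """Ternary (traffic light) comparison of a candidate against a requirement."""
    outcome = {}
    for skill, needed in required.items():
        have = candidate.get(skill, 0)
        if have >= needed:
            outcome[skill] = "green"    # definite match
        elif have > 0:
            outcome[skill] = "amber"    # possible / partial match
        else:
            outcome[skill] = "red"      # no match
    return outcome

print(compare(REQUIRED, CANDIDATE))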
Abstract:
This paper presents novel observer-based techniques for the estimation of flow demands in gas networks, from sparse pressure telemetry. A completely observable model is explored, constructed by incorporating difference equations that assume the flow demands are steady. Since the flow demands usually vary slowly with time, this is a reasonable approximation. Two techniques for constructing robust observers are employed: robust eigenstructure assignment and singular value assignment. These techniques help to reduce the effects of the system approximation. Modelling error may be further reduced by making use of known profiles for the flow demands. The theory is extended to deal successfully with the problem of measurement bias. The pressure measurements available are subject to constant biases which degrade the flow demand estimates, and such biases need to be estimated. This is achieved by constructing a further model variation that incorporates the biases into an augmented state vector, but now includes information about the flow demand profiles in a new form.
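To make the state-augmentation idea concrete, the hedged sketch below appends a steady flow demand to a tiny linear model and checks that the augmented system remains observable from a single pressure measurement; the matrices are invented placeholders, not a real gas-network model, and the robust eigenstructure and singular value assignment steps are not shown.

import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])                      # pressure-state dynamics (hypothetical)
Bd = np.array([[0.5],
               [0.2]])                          # how the unknown demand enters the state
C = np.array([[1.0, 0.0]])                      # one sparse pressure measurement

n, m = A.shape[0], Bd.shape[1]
# Augmented model: x_aug = [pressures; demand], with the demand assumed steady
A_aug = np.block([[A, Bd],
                  [np.zeros((m, n)), np.eye(m)]])
C_aug = np.hstack([C, np.zeros((1, m))])

# Observability matrix of the augmented pair (A_aug, C_aug)
obs = np.vstack([C_aug @ np.linalg.matrix_power(A_aug, k) for k in range(n + m)])
print("observability matrix rank:", np.linalg.matrix_rank(obs))   # 3, i.e. fully observable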
Abstract:
A new record of sea surface temperature (SST) for climate applications is described. This record provides independent corroboration of global variations estimated from SST measurements made in situ. Infrared imagery from Along-Track Scanning Radiometers (ATSRs) is used to create a 20 year time series of SST at 0.1° latitude-longitude resolution, in the ATSR Reprocessing for Climate (ARC) project. A very high degree of independence of in situ measurements is achieved via physics-based techniques. Skin SST and SST estimated for 20 cm depth are provided, with grid cell uncertainty estimates. Comparison with in situ data sets establishes that ARC SSTs generally have a bias of order 0.1 K or smaller. The precision of the ARC SSTs is 0.14 K during 2003 to 2009, from three-way error analysis. Over the period 1994 to 2010, ARC SSTs are stable, with better than 95% confidence, to within 0.005 K yr⁻¹ (demonstrated for tropical regions). The data set appears useful for cleanly quantifying interannual variability in SST and major SST anomalies. The ARC SST global anomaly time series is compared to the in situ-based Hadley Centre SST data set version 3 (HadSST3). Within known uncertainties in bias adjustments applied to in situ measurements, the independent ARC record and HadSST3 present the same variations in global marine temperature since 1996. Since the in situ observing system evolved significantly in its mix of measurement platforms and techniques over this period, ARC SSTs provide an important corroboration that HadSST3 accurately represents recent variability and change in this essential climate variable.
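The three-way error analysis mentioned above can be illustrated with a hedged triple-collocation sketch: given three collocated, independent estimates of the same quantity with mutually independent errors, each system's error variance follows from the covariance of its pairwise differences. The synthetic numbers below are illustrative only and are not the ARC, buoy or HadSST3 data.

import numpy as np

rng = np.random.default_rng(2)
truth = 290 + rng.normal(0.0, 1.0, 5000)             # hypothetical true SSTs (K)
x1 = truth + rng.normal(0.0, 0.14, truth.size)       # e.g. a satellite retrieval
x2 = truth + rng.normal(0.0, 0.20, truth.size)       # e.g. drifting buoys
x3 = truth + rng.normal(0.0, 0.30, truth.size)       # e.g. another in situ source

def error_std(a, b, c):
    """Error standard deviation of system a, assuming mutually independent errors."""
    d_ab, d_ac = a - b, a - c
    return np.sqrt(np.mean(d_ab * d_ac) - d_ab.mean() * d_ac.mean())

print([round(error_std(*trio), 3) for trio in [(x1, x2, x3), (x2, x1, x3), (x3, x1, x2)]])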
Abstract:
Numerical Weather Prediction (NWP) fields are used to assist the detection of cloud in satellite imagery. Simulated observations based on NWP are used within a framework based on Bayes' theorem to calculate a physically-based probability of each pixel within an imaged scene being clear or cloudy. Different thresholds can be set on the probabilities to create application-specific cloud masks. Here, this is done over both land and ocean using night-time (infrared) imagery. We use a validation dataset of difficult cloud detection targets for the Spinning Enhanced Visible and Infrared Imager (SEVIRI), achieving true skill scores of 87% and 48% for ocean and land, respectively, using the Bayesian technique, compared to 74% and 39%, respectively, for the threshold-based techniques associated with the validation dataset.
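A hedged sketch of the Bayes' theorem step for a single infrared channel follows: the observed-minus-simulated brightness temperature difference is compared under clear and cloudy hypotheses. The prior, the Gaussian likelihoods and all numerical values are invented for illustration and are not the operational distributions.

import numpy as np

def prob_clear(obs_bt, nwp_clear_bt, prior_clear=0.7,
               sigma_clear=1.0, sigma_cloud=10.0, cloud_offset=-15.0):
    """Posterior probability that a pixel is clear, given one brightness temperature."""
    d = obs_bt - nwp_clear_bt
    # Likelihood of the difference under each hypothesis (illustrative Gaussians)
    like_clear = np.exp(-0.5 * (d / sigma_clear) ** 2) / (sigma_clear * np.sqrt(2 * np.pi))
    like_cloud = np.exp(-0.5 * ((d - cloud_offset) / sigma_cloud) ** 2) / (sigma_cloud * np.sqrt(2 * np.pi))
    numerator = like_clear * prior_clear
    return numerator / (numerator + like_cloud * (1.0 - prior_clear))

print(prob_clear(289.5, 290.0))   # observation close to the clear-sky simulation: high probability
print(prob_clear(270.0, 290.0))   # observation much colder than simulated: low probability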
Abstract:
Numerical Weather Prediction (NWP) fields are used to assist the detection of cloud in satellite imagery. Simulated observations based on NWP are used within a framework based on Bayes' theorem to calculate a physically-based probability of each pixel within an imaged scene being clear or cloudy. Different thresholds can be set on the probabilities to create application-specific cloud masks. Here, the technique is shown to be suitable for daytime applications over land and sea, using visible and near-infrared imagery in addition to thermal infrared. We use a validation dataset of difficult cloud detection targets for the Spinning Enhanced Visible and Infrared Imager (SEVIRI), achieving true skill scores of 89% and 73% for ocean and land, respectively, using the Bayesian technique, compared to 90% and 70%, respectively, for the threshold-based techniques associated with the validation dataset.
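The true skill score used in both validations above is the hit rate minus the false-alarm rate from a 2x2 contingency table; the sketch below shows the calculation on made-up counts, which are not from the SEVIRI validation dataset.

def true_skill_score(hits, misses, false_alarms, correct_negatives):
    """Hanssen-Kuipers / true skill score: hit rate minus false-alarm rate."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate - false_alarm_rate

# Illustrative contingency-table counts for cloudy (positive) vs clear (negative) pixels
print(true_skill_score(hits=880, misses=120, false_alarms=30, correct_negatives=970))   # 0.85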
Abstract:
There is a range of studies in the low carbon arena which use various ‘futures’-based techniques as ways of exploring uncertainties. These techniques range from ‘scenarios’ and ‘roadmaps’ through to ‘transitions’ and ‘pathways’, as well as ‘vision’-based techniques. The overall aim of the paper is therefore to compare and contrast these techniques to develop a simple working typology, with the further objective of identifying the implications of this analysis for RETROFIT 2050. Using recent examples of city-based and energy-based studies throughout, the paper compares and contrasts these techniques and finds that the distinctions between them have often been blurred in the field of low carbon. Visions, for example, have been used in both transition theory and futures/Foresight methods, and scenarios have also been used in transition-based studies as well as futures/Foresight studies. Moreover, Foresight techniques, which capture expert knowledge and map existing knowledge to develop sets of scenarios and roadmaps that can inform the development of transitions and pathways, can not only help to overcome any ‘disconnections’ that may exist between the social and the technical lenses through which such future trajectories are mapped, but also promote a strong ‘co-evolutionary’ content.
Abstract:
Recently, Corpus Linguistics has become a popular research tool in the field of German as a Foreign Language. However, little attention has been paid to the teaching and learning potential that corpora and corpus-based teaching offer. This paper seeks to demonstrate some of the ways in which corpus-based techniques can be used for teaching purposes, even by those who have little experience in Corpus Linguistics. The focus will be on teaching and learning German for Academic Purposes in German Studies abroad.
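One simple corpus-based classroom technique is a keyword-in-context (KWIC) concordance. The hedged sketch below builds one from a tiny invented German sample; the sentences and the keyword are placeholders, not material from an actual academic corpus.

import re

corpus = (
    "In diesem Beitrag wird untersucht, wie sich der fachsprachliche Wortschatz entwickelt. "
    "Die Ergebnisse werden im folgenden Abschnitt dargestellt. "
    "Im vorliegenden Aufsatz wird dargestellt, welche Faktoren eine Rolle spielen."
)

def kwic(text, keyword, width=4):
    """Print each occurrence of keyword with `width` words of context on either side."""
    tokens = re.findall(r"\w+", text)
    for i, token in enumerate(tokens):
        if token.lower() == keyword.lower():
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            print(f"{left:>35}  [{token}]  {right}")

kwic(corpus, "dargestellt")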
Abstract:
Trypanosoma cruzi and Trypanosoma rangeli are human-infective blood parasites, largely restricted to Central and South America. They also infect a wide range of wild and domestic mammals and are transmitted by numerous species of triatomine bugs. There are significant overlaps in the host and geographical ranges of both species. The two species each consist of a number of distinct phylogenetic lineages. A range of PCR-based techniques have been developed to differentiate between these species and to assign their isolates to lineages. However, the existence of at least six and five lineages within T. cruzi and T. rangeli, respectively, makes identification of the full range of isolates difficult and time consuming. Here we have applied fluorescent fragment length barcoding (FFLB) to the problem of identifying and genotyping T. cruzi, T. rangeli and other South American trypanosomes. This technique discriminates species on the basis of length polymorphism of regions of the rDNA locus. FFLB was able to differentiate many trypanosome species known from South American mammals: T. cruzi cruzi, T. cruzi marinkellei, T. dionisii-like, T. evansi, T. lewisi, T. rangeli, T. theileri and T. vivax. Furthermore, all five T. rangeli lineages and many T. cruzi lineages could be identified, except the hybrid lineages TcV and TcVI, which could not be distinguished from lineages III and II respectively. This method also allowed identification of mixed infections of T. cruzi and T. rangeli lineages in naturally infected triatomine bugs. The ability of FFLB to genotype multiple lineages of T. cruzi and T. rangeli, together with other trypanosome species, using the same primer sets is an advantage over other currently available techniques. Overall, these results demonstrate that FFLB is a useful method for species diagnosis, genotyping and understanding the epidemiology of American trypanosomes.
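The genotyping logic of a fragment-length barcoding assay can be illustrated with a small, hedged matching sketch: each taxon is represented by a profile of expected fragment sizes, and an observed set of peak sizes is matched against every reference profile, so a trace containing two complete profiles flags a mixed infection. The taxa chosen and all fragment sizes below are invented placeholders, not the published FFLB reference values.

# Hypothetical reference profiles (fragment sizes in base pairs)
REFERENCE_PROFILES = {
    "T. cruzi (TcI)": [102, 155, 210, 334],
    "T. rangeli (A)": [98, 161, 224, 310],
    "T. theileri":    [110, 149, 237, 352],
}

def identify(observed_peaks, tolerance=2):
    """Return every taxon whose full size profile is present in the observed trace."""
    matches = []
    for taxon, profile in REFERENCE_PROFILES.items():
        if all(any(abs(peak - obs) <= tolerance for obs in observed_peaks) for peak in profile):
            matches.append(taxon)
    return matches or ["no match"]

# Peaks from both profiles in one trace suggest a mixed infection in the bug
print(identify([102, 97, 155, 162, 210, 225, 334, 309]))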
Abstract:
In all applications of clone detection it is important to have precise and efficient clone identification algorithms. This paper proposes and outlines a new algorithm, KClone, for clone detection that incorporates a novel combination of lexical and local dependence analysis to achieve precision while retaining speed. The paper also reports on the initial results of a case study using an implementation of KClone with which we have been experimenting. The results indicate the ability of KClone to find type-1, type-2 and type-3 clones compared to token-based and PDG-based techniques. The paper also reports results of an initial empirical study of the performance of KClone compared to CCFinderX.
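As a point of reference for the token-based techniques against which KClone is compared, the hedged sketch below normalises identifiers and literals so that fragments differing only in names (type-2 clones) yield identical token sequences; the tiny keyword set and the code fragments are illustrative and are not part of KClone itself.

import re

KEYWORDS = {"if", "else", "for", "while", "return", "int", "float"}

def normalise(code):
    """Map identifiers to ID and numeric literals to LIT, keeping keywords and punctuation."""
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", code)
    normalised = []
    for tok in tokens:
        if tok in KEYWORDS or not (tok[0].isalpha() or tok[0] == "_" or tok.isdigit()):
            normalised.append(tok)              # keyword or punctuation
        elif tok.isdigit():
            normalised.append("LIT")            # numeric literal
        else:
            normalised.append("ID")             # identifier
    return tuple(normalised)

frag_a = "int total = 0; for (int i = 0; i < n; i++) total += a[i];"
frag_b = "int acc = 0; for (int j = 0; j < m; j++) acc += xs[j];"
print(normalise(frag_a) == normalise(frag_b))   # True: a type-2 clone pair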
Abstract:
Formal methods and software testing are tools to obtain and control software quality. When used together, they provide mechanisms for software specification, verification and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to assure that a system is free of faults; thus, software testing techniques are necessary to complement the process of verification and validation of a system. Model Based Testing techniques allow tests to be generated from other software artifacts such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better quality tests, because these specifications are usually precise and free of ambiguity. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. This method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies; for instance, it did not fit into a well-defined coverage criteria classification. We started our work with a case study that applied the method to an example of a B specification from industry. Based on this case study, we obtained input to improve the method. In our work we evolved the proposed method, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information from the operation's behavior in the test case generation process, and to use new coverage criteria. In addition, we have implemented a tool to automate the method and we have submitted it to more complex case studies.
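The equivalence class partitioning and boundary value analysis step lends itself to a short worked example. The hedged sketch below derives positive and negative inputs from an invented precondition "0 <= x <= 100"; it stands in for, and is not, the actual rules applied to B Method preconditions.

# Invented precondition for one operation: 0 <= x <= 100
LOWER, UPPER = 0, 100

def boundary_value_tests(lower, upper):
    """Positive cases inside the valid partition, negative cases just outside it."""
    positive = [lower, lower + 1, (lower + upper) // 2, upper - 1, upper]
    negative = [lower - 1, upper + 1]
    return positive, negative

valid_inputs, invalid_inputs = boundary_value_tests(LOWER, UPPER)
print("positive test inputs:", valid_inputs)     # should satisfy the precondition
print("negative test inputs:", invalid_inputs)   # should violate the precondition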