129 results for INSTRUMENTATION TECHNIQUES


Relevance: 20.00%

Abstract:

Magnetic clouds (MCs) are a subset of interplanetary coronal mass ejections (ICMEs) which exhibit signatures consistent with a magnetic flux rope structure. Techniques for reconstructing flux rope orientation from single-point in situ observations typically assume the flux rope is locally cylindrical, e.g., minimum variance analysis (MVA) and force-free flux rope (FFFR) fitting. In this study, we outline a non-cylindrical magnetic flux rope model, in which the flux rope radius and axial curvature can both vary along the length of the axis. This model is not necessarily intended to represent the global structure of MCs, but it can be used to quantify the error in MC reconstruction resulting from the cylindrical approximation. When the local flux rope axis is approximately perpendicular to the heliocentric radial direction, which is also the effective spacecraft trajectory through a magnetic cloud, the error in using cylindrical reconstruction methods is relatively small (≈ 10°). However, as the local axis orientation becomes increasingly aligned with the radial direction, the spacecraft trajectory may pass close to the axis at two separate locations. This results in a magnetic field time series which deviates significantly from encounters with a force-free flux rope, and consequently the error in the axis orientation derived from cylindrical reconstructions can be as much as 90°. Such two-axis encounters can result in an apparent ‘double flux rope’ signature in the magnetic field time series, sometimes observed in spacecraft data. Analysing each axis encounter independently produces reasonably accurate axis orientations with MVA, but larger errors with FFFR fitting.
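
The abstract assumes familiarity with minimum variance analysis. As an illustration, here is a minimal Python sketch of MVA applied to a synthetic single-axis flux-rope crossing; the function name and the synthetic field are our own illustrative choices, not the paper's non-cylindrical model or its data. For a flux-rope encounter, the eigenvector of the magnetic variance matrix with intermediate variance approximates the local axis orientation.

```python
import numpy as np

def estimate_axis_mva(B):
    """Minimum variance analysis of an N x 3 magnetic field time series.

    Returns the eigenvalues and eigenvectors of the magnetic variance
    matrix, sorted from largest to smallest variance. For a flux-rope
    crossing, the intermediate-variance direction approximates the axis.
    """
    B = np.asarray(B, dtype=float)
    M = np.cov(B, rowvar=False)           # 3 x 3 variance matrix
    eigvals, eigvecs = np.linalg.eigh(M)  # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]     # reorder to descending variance
    return eigvals[order], eigvecs[:, order]

# Synthetic crossing of a cylindrical flux rope (illustrative data only):
# x is the axial direction, y the azimuthal, z a small noise floor.
t = np.linspace(-1.0, 1.0, 200)
B = np.column_stack([
    np.cos(0.5 * np.pi * t),          # axial field, peaks at the axis
    np.sin(0.5 * np.pi * t),          # azimuthal field, reverses sign
    0.05 * np.random.randn(t.size),   # small out-of-plane noise
])
variances, directions = estimate_axis_mva(B)
print("estimated axis direction:", directions[:, 1])  # intermediate variance
```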

Relevance: 20.00%

Abstract:

The solubility of penciclovir (C10H15N5O3) in a novel film formulation designed for the treatment of cold sores was determined using X-ray, thermal, microscopic and release rate techniques. The procedures yielded solubilities of 0.15–0.23, 0.44, 0.53 and 0.42% (w/w), respectively. Linear calibration lines were achieved for experimentally and theoretically determined differential scanning calorimetry (DSC) and X-ray powder diffractometry (XRPD) data. Intra- and inter-batch precision values were determined, with the intra-batch values proving more precise. Microscopy was additionally useful for examining crystal shape, size distribution and homogeneity of drug distribution within the film. DSC also determined the melting point, XRPD identified polymorphs, and the release data provided the relevant kinetics.
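
To illustrate the kind of linear calibration mentioned above, the following sketch fits a calibration line of crystalline peak response against known drug loading and extrapolates it to estimate solubility. All numbers, and the assumption that solubility corresponds to the x-intercept of the crystalline response, are illustrative; they are not the paper's data or exact procedure.

```python
import numpy as np

# Hypothetical calibration data: known penciclovir loading (% w/w) in the
# film versus measured crystalline peak response (e.g., XRPD peak intensity
# or DSC endotherm area). All numbers are made up for illustration.
loading = np.array([0.5, 1.0, 2.0, 3.0, 4.0])        # % w/w
response = np.array([0.06, 0.58, 1.55, 2.61, 3.57])  # arbitrary units

# Least-squares linear calibration: response = slope * loading + intercept
slope, intercept = np.polyfit(loading, response, 1)
r = np.corrcoef(loading, response)[0, 1]
print(f"slope={slope:.3f}, intercept={intercept:.3f}, r^2={r**2:.4f}")

# Extrapolating to zero crystalline response estimates the loading at which
# crystalline drug first appears, i.e., its solubility in the film matrix.
solubility = -intercept / slope
print(f"estimated solubility: {solubility:.2f} % w/w")
```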

Relevance: 20.00%

Abstract:

Optimal state estimation from given observations of a dynamical system by data assimilation is generally an ill-posed inverse problem. In order to solve the problem, a standard Tikhonov, or L2, regularization is used, based on certain statistical assumptions on the errors in the data. The regularization term constrains the estimate of the state to remain close to a prior estimate. In the presence of model error, this approach does not capture the initial state of the system accurately, as the initial state estimate is derived by minimizing the average error between the model predictions and the observations over a time window. Here we examine an alternative L1 regularization technique that has proved valuable in image processing. We show that for examples of flow with sharp fronts and shocks, the L1 regularization technique performs more accurately than standard L2 regularization.
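
As a toy illustration of the L2-versus-L1 comparison, the sketch below estimates a sharp-front state from noisy observations with each penalty in turn. The setup (identity observation operator, made-up noise level and regularization weight) is ours, not the paper's assimilation system.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# "True" initial state: a sharp front, which L2 penalties tend to smear.
n = 50
x_true = np.where(np.arange(n) < n // 2, 1.0, 0.0)
x_b = np.full(n, 0.5)                        # smooth prior (background)
y = x_true + 0.05 * rng.standard_normal(n)   # noisy observations, H = I

def cost(x, lam, p):
    """Observation misfit plus an L_p penalty on deviation from x_b."""
    misfit = 0.5 * np.sum((x - y) ** 2)
    dev = x - x_b
    penalty = lam * (np.abs(dev).sum() if p == 1 else 0.5 * (dev ** 2).sum())
    return misfit + penalty

lam = 0.1
# Note: L-BFGS-B treats the non-smooth L1 term only approximately; a
# dedicated solver (e.g., a proximal method) would be the rigorous choice.
x_l2 = minimize(cost, x_b, args=(lam, 2), method="L-BFGS-B").x
x_l1 = minimize(cost, x_b, args=(lam, 1), method="L-BFGS-B").x

print("L2 reconstruction error:", np.linalg.norm(x_l2 - x_true))
print("L1 reconstruction error:", np.linalg.norm(x_l1 - x_true))
```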

Relevance: 20.00%

Abstract:

Although modern control techniques such as eigenstructure assignment have received extensive coverage in the control literature, there is a reluctance to use them in practice, as they are often believed to be less 'visible' and less simple than classical methods. Using a simple aircraft example, it is shown that eigenstructure assignment can easily produce a more viable controller than simple classical techniques can.
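
For readers unfamiliar with the approach, the sketch below applies SciPy's pole-placement routine to a made-up two-state aircraft-style model. Full eigenstructure assignment also shapes the closed-loop eigenvectors; place_poles exposes only the eigenvalue choice and uses the leftover multi-input freedom internally for robustness, so this is a simplified stand-in, and the matrices and desired poles are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative two-state, two-input short-period-style model (not from the
# paper): states = [angle of attack, pitch rate], inputs = [elevator, canard].
A = np.array([[-0.7,  1.0],
              [-2.0, -1.2]])
B = np.array([[0.1, 0.05],
              [4.0, 1.5]])

# Desired closed-loop eigenvalues for improved handling qualities.
poles = np.array([-3.0 + 3.0j, -3.0 - 3.0j])

# With multiple inputs the closed-loop eigenvectors are not unique;
# place_poles uses the remaining freedom to improve robustness.
result = place_poles(A, B, poles)
K = result.gain_matrix

A_cl = A - B @ K   # closed-loop dynamics under u = -K x
print("closed-loop eigenvalues:", np.linalg.eigvals(A_cl))
```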

Relevance: 20.00%

Abstract:

Grassland restoration is the dominant activity funded by agri-environment schemes (AES). However, the re-instatement of biodiversity and ecosystem services is limited by a number of severe abiotic and biotic constraints resulting from previous agricultural management. These appear to be less severe on ex-arable sites than on permanent grassland. We report findings of a large research programme into practical solutions to these constraints. The key abiotic constraint was high residual soil fertility, particularly phosphorus. This can most easily be addressed by targeting sites of low nutrient status. The chief biotic constraints were a lack of propagules of desirable species and of suitable sites for their establishment. Addition of seed mixtures or green hay to gaps created by either mechanical disturbance or herbicide was the most effective means of overcoming these constraints. Finally, manipulation of biotic interactions, including the use of hemiparasitic plants to reduce competition from grasses and control of mollusc herbivory of sown species, significantly improved the effectiveness of these techniques.

Relevance: 20.00%

Abstract:

This paper presents findings of our study of peer-reviewed papers published in the International Conference on Persuasive Technology from 2006 to 2010. Of the 44 systems reviewed, 23 were reported to be successful, 2 to be unsuccessful, and 19 did not specify whether or not they were successful. A total of 56 different techniques were mentioned, and it was observed that most designers use ad hoc definitions for the techniques or methods used in their designs. Hence, we propose that research is needed to establish unambiguous definitions of techniques and methods in the field.

Relevance: 20.00%

Abstract:

Aims: Quinolone antibiotics are the agents of choice for treating systemic Salmonella infections. Resistance to quinolones is usually mediated by mutations in the DNA gyrase gene gyrA. Here we report the evaluation of standard HPLC equipment for the detection of mutations (single nucleotide polymorphisms; SNPs) in gyrA, gyrB, parC and parE by denaturing high performance liquid chromatography (DHPLC). Methods: A panel of Salmonella strains was assembled, comprising strains with different known mutations in gyrA (n = 8) and fluoroquinolone-susceptible and -resistant strains (n = 50) that had not been tested for mutations in gyrA. Additionally, antibiotic-susceptible strains of serotypes other than Salmonella enterica serovar Typhimurium were examined for serotype-specific mutations in gyrB (n = 4), parC (n = 6) and parE (n = 1). Wild-type (WT) control DNA was prepared from Salmonella Typhimurium NCTC 74. The DNA of the respective strains was amplified by PCR using Optimase® proofreading DNA polymerase. Duplex DNA samples were analysed using an Agilent A1100 HPLC system with a Varian Helix™ DNA column. Sequencing was used to validate mutations detected by DHPLC in the strains with unknown mutations. Results: Using this HPLC system, mutations in gyrA, gyrB, parC and parE were readily detected by comparison with control chromatograms. Sequencing confirmed the gyrA mutations predicted by DHPLC in the unknown strains and also confirmed serotype-associated sequence changes in non-Typhimurium serotypes. Conclusions: The results demonstrate that a non-specialist standard HPLC machine fitted with a generally available column can be used to detect SNPs in the gyrA, gyrB, parC and parE genes by DHPLC. Wider applications should be possible.

Relevance: 20.00%

Abstract:

Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertising campaigns; and finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and to the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
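
As a minimal sketch of the data-parallel pattern the chapter describes, the Python example below partitions a transaction dataset across worker processes, mines local item counts in each partition, and merges the partial results. Here multiprocessing stands in for a Grid or Cloud framework, and the data and function names are illustrative.

```python
from collections import Counter
from multiprocessing import Pool

def mine_partition(records):
    """Local mining step: count item occurrences in one data partition.
    In a real system this could be a partial frequent-itemset pass."""
    counts = Counter()
    for record in records:
        counts.update(record)
    return counts

def parallel_item_counts(records, n_workers=4):
    # Split the dataset into roughly equal partitions, one per worker.
    partitions = [records[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partial = pool.map(mine_partition, partitions)
    # Merge step: combine the partial counts into a global result.
    total = Counter()
    for counts in partial:
        total += counts
    return total

if __name__ == "__main__":
    # Hypothetical transaction data; real inputs would be far larger.
    transactions = [["milk", "bread"], ["milk", "eggs"],
                    ["bread", "eggs", "milk"], ["eggs"]] * 1000
    print(parallel_item_counts(transactions).most_common(3))
```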

Relevance: 20.00%

Abstract:

Keyphrases are added to documents to help identify the areas of interest they contain. However, in a significant proportion of papers the author-selected keyphrases are not appropriate for the document they accompany: for instance, they can be classificatory rather than explanatory, or they are not updated when the focus of the paper changes. As such, automated methods for improving the use of keyphrases are needed, and various methods have been published. However, each method was evaluated using a different corpus, typically one relevant to the field of study of the method's authors. This not only makes it difficult to incorporate the useful elements of algorithms in future work, but also makes comparing the results of each method inefficient and ineffective. This paper describes the work undertaken to compare five methods across a common baseline of corpora. The methods chosen were Term Frequency, Inverse Document Frequency, the C-Value, the NC-Value, and a Synonym-based approach. These methods were analysed to evaluate performance and quality of results, and to provide a future benchmark. It is shown that Term Frequency and Inverse Document Frequency were the best algorithms, with the Synonym approach following them. Following these findings, a study was undertaken into the value of using human evaluators to judge the outputs. The Synonym method was compared to the original author keyphrases of the Reuters' News Corpus. The findings show that the authors of Reuters' news articles provide good keyphrases when they provide them, but that more often than not they provide none.
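
To make the two best-performing baselines concrete, here is a minimal Python sketch of term frequency weighted by inverse document frequency over a toy corpus; the tokenizer, scoring details and corpus are illustrative assumptions, not the implementations evaluated in the paper.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def keyphrases_tf_idf(doc, corpus, top_n=5):
    """Score terms in `doc` by term frequency weighted by inverse
    document frequency over `corpus` (a list of documents)."""
    tf = Counter(tokenize(doc))
    n_docs = len(corpus)
    scores = {}
    for term, count in tf.items():
        df = sum(1 for d in corpus if term in set(tokenize(d)))
        idf = math.log(n_docs / (1 + df))  # smoothed document frequency
        scores[term] = count * idf
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Toy corpus for illustration; an evaluation like the one described would
# use a shared benchmark corpus instead.
corpus = [
    "flux rope reconstruction from spacecraft magnetic field data",
    "keyphrase extraction methods compared on a common corpus",
    "parallel data mining on grid and cloud infrastructure",
]
print(keyphrases_tf_idf(corpus[1], corpus))
```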