936 results for Data Acquisition Methods.
Abstract:
Objective: Our objective in this paper is to assess diets in the European Union (EU) against the recommendations of the recent World Health Organization/Food and Agriculture Organization expert consultation and to show how diets changed between 1961 and 2001. Data and methods: Computations make use of FAOSTAT data on food availability at country level, linked to a food composition database to convert foods to nutrients. We further explore the growing similarity of diets in the EU using a consumption similarity index, which provides a single-number measure of dietary overlap between countries. Results: The data confirm the excessive consumption of saturated fats, cholesterol and sugars by almost all countries, and the convergence of nutrient intakes across the EU. Whereas in 1961 diets in several European countries were more similar to US diets than to those of other European countries, this is no longer the case; moreover, while EU diets have become more homogeneous, the EU as a whole and the USA have become less similar over time. Conclusions: Although the dominant cause of greater similarity in EU diets over the period studied is increased intakes of saturated fats, cholesterol and sugar in Mediterranean countries, reductions in saturated fat and sugar in some Northern European countries are also important. This suggests that healthy-eating messages are finally having an impact on diets; a distinctly European diet may also be emerging.
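The abstract does not give the functional form of the consumption similarity index; as a purely illustrative sketch (all names and numbers below are hypothetical), a common single-number overlap measure sums, over food categories, the smaller of the two countries' consumption shares:

```python
import numpy as np

def consumption_similarity(shares_a, shares_b):
    """Hypothetical single-number dietary overlap between two countries.

    Each vector holds the share of total calories from each food group
    (entries sum to 1); identical diets score 1, disjoint diets score 0.
    """
    return np.minimum(np.asarray(shares_a), np.asarray(shares_b)).sum()

# Illustrative shares for cereals, fats, sugar, other (not real data):
diet_mediterranean = [0.45, 0.20, 0.15, 0.20]
diet_northern = [0.30, 0.30, 0.20, 0.20]
print(consumption_similarity(diet_mediterranean, diet_northern))  # 0.85
```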
Abstract:
We have combined several key sample preparation steps for the use of a liquid matrix system to provide high analytical sensitivity in automated ultraviolet matrix-assisted laser desorption/ionisation mass spectrometry (UV-MALDI-MS). This new sample preparation protocol employs a matrix-mixture based on the glycerol matrix-mixture described by Sze et al. (J. Am. Soc. Mass Spectrom. 1998, 9, 166-174). The low-femtomole sensitivity achievable with this new preparation protocol enables proteomic analysis of protein digests comparable to solid-state matrix systems. For automated data acquisition and analysis, the MALDI performance of this liquid matrix surpasses the conventional solid-state MALDI matrices. Besides the inherent general advantages of liquid samples for automated sample preparation and data acquisition, the use of the presented liquid matrix significantly reduces the extent of unspecific ion signals in peptide mass fingerprints compared to typically used solid matrices, such as 2,5-dihydroxybenzoic acid (DHB) or alpha-cyano-4-hydroxycinnamic acid (CHCA). In particular, matrix and low-mass ion signals and ion signals resulting from cation adduct formation are dramatically reduced. Consequently, the confidence level of protein identification by peptide mass mapping of in-solution and in-gel digests is generally higher.
Abstract:
Ovarian cancer is characterized by vague, non-specific symptoms, advanced stage at diagnosis and poor overall survival. A nested case-control study was undertaken on stored serial serum samples from women who developed ovarian cancer and healthy controls (matched for serum processing and storage conditions as well as attributes such as age) in a pilot randomized controlled trial of ovarian cancer screening. The unique feature of this study is that the women were screened for up to 7 years. The serum samples underwent prefractionation using a reversed-phase batch extraction protocol prior to MALDI-TOF MS data acquisition. Our exploratory analysis shows that combining a single MS peak with CA125 allows statistically significant discrimination at the 5% level between cases and controls up to 12 months in advance of the original diagnosis of ovarian cancer. Such combinations work much better than a single peak or CA125 alone. This paper demonstrates that mass spectra from the low-molecular-weight serum proteome carry information useful for early detection of ovarian cancer. The next step is to identify the specific biomarkers that make early detection possible.
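The abstract does not name the statistical model used to combine markers; a minimal sketch of the idea, pairing CA125 with a single spectral peak in a logistic regression on synthetic stand-in data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100  # synthetic stand-in data, not the study's measurements

# Feature columns: log CA125 and the intensity of one MALDI-TOF MS peak
controls = rng.normal([2.8, 1.0], 0.4, size=(n, 2))
cases = rng.normal([3.4, 1.6], 0.4, size=(n, 2))
X = np.vstack([controls, cases])
y = np.r_[np.zeros(n), np.ones(n)]

# Combining both markers in one classifier; either feature alone would
# separate cases from controls less well than the pair does.
clf = LogisticRegression().fit(X, y)
print("combined training accuracy:", clf.score(X, y))
```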
Abstract:
Pulsed Phase Thermography (PPT) has proven effective for depth retrieval of flat-bottomed holes in different materials such as plastics and aluminum. In PPT, amplitude and phase delay signatures are available after data acquisition (carried out in a similar way as in classical Pulsed Thermography) by applying a transformation algorithm such as the Fourier Transform (FT) to the thermal profiles. The authors have recently presented an extended review of PPT theory, including a new inversion technique for depth retrieval that correlates the depth with the blind frequency fb (the frequency at which a defect produces enough phase contrast to be detected). An automatic defect depth retrieval algorithm has also been proposed, demonstrating PPT's capabilities as a practical inversion technique. In addition, the use of normalized parameters to account for defect size variation, as well as depth retrieval from complex-shape composites (GFRP and CFRP), is currently under investigation. In this paper, steel plates containing flat-bottomed holes at different depths (from 1 to 4.5 mm) are tested by quantitative PPT. Least-squares regression results show excellent agreement between depth and the inverse square root of the blind frequency, which can be used for depth inversion. Experimental results on steel plates with simulated corrosion are presented as well. It is worth noting that results are improved by performing PPT on reconstructed (synthetic) rather than raw thermal data.
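The reported linear relation between depth and the inverse square root of the blind frequency suggests a simple least-squares inversion, consistent with the thermal diffusion length mu = sqrt(alpha / (pi * f)). A sketch with illustrative numbers (not the paper's measurements):

```python
import numpy as np

# Illustrative calibration data: known hole depths and the blind
# frequencies observed for them (values are made up for the sketch).
depths_mm = np.array([1.0, 2.0, 3.0, 4.5])
f_blind_hz = np.array([2.4, 0.60, 0.27, 0.12])

# Fit depth = C / sqrt(f_b) + c0 by least squares.
x = 1.0 / np.sqrt(f_blind_hz)
C, c0 = np.polyfit(x, depths_mm, 1)

# Invert a newly measured blind frequency to a depth estimate.
f_new = 0.45
print(f"estimated depth: {C / np.sqrt(f_new) + c0:.2f} mm")
```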
Abstract:
The paper draws from three case studies of regional construction firms operating in the UK. The case studies provide new insights into the ways in which such firms strive to remain competitive. Empirical data were derived from multiple interactions with senior personnel from each firm. Data collection methods included semi-structured interviews, informal interactions, archival research, and workshops. The initial research question was informed by existing resource-based theories of competitiveness and an extensive review of construction-specific literature. However, subsequent emergent empirical findings progressively pointed towards the need to mobilise alternative theoretical models that emphasise localised learning and embeddedness. The findings point towards the importance of de-centralised structures that enable multiple business units to become embedded within localised markets. A significant degree of autonomy is essential to facilitate entrepreneurial behaviour. In essence, sustained competitiveness was found to rest on the way de-centralised business units enact ongoing processes of localised learning. Once local business units have become embedded within localised markets, the essential challenge is how to encourage continued entrepreneurial behaviour while maintaining some degree of centralised control and coordination. This presents a number of tensions and challenges which play out differently across each of the three case studies.
Abstract:
Although extensively studied within the lidar community, the multiple scattering phenomenon has always been considered a rare curiosity by radar meteorologists. Until a few years ago its appearance had only been associated with two- or three-body scattering features (e.g. hail flares and mirror images) involving highly reflective surfaces. Recent atmospheric research aimed at better understanding the water cycle and the role played by clouds and precipitation in affecting the Earth's climate has driven the deployment of high-frequency radars in space. Examples are the TRMM 13.5 GHz, the CloudSat 94 GHz, the upcoming EarthCARE 94 GHz, and the GPM dual 13-35 GHz radars. These systems are able to detect the vertical distribution of hydrometeors and thus provide crucial feedback for radiation and climate studies. The shift towards higher frequencies increases the sensitivity to hydrometeors, improves the spatial resolution and reduces the size and weight of the radar systems. On the other hand, higher-frequency radars are affected by stronger extinction, especially in the presence of large precipitating particles (e.g. raindrops or hail particles), which may eventually drive the signal below the minimum detection threshold. In such circumstances the interpretation of the radar equation via the single scattering approximation may be problematic. Errors will be large when radiation emitted from the radar, after interacting more than once with the medium, still contributes substantially to the received power. This is the case if the transport mean free path becomes comparable with the instrument footprint (determined by the antenna beam-width and the platform altitude). This situation resembles what has already been experienced in lidar observations, but with a predominance of wide-angle versus small-angle scattering events: at millimeter wavelengths, hydrometeors diffuse radiation rather isotropically compared to the visible or near-infrared region, where scattering is predominantly in the forward direction. A complete understanding of radiation transport modeling and data analysis methods under wide-angle multiple scattering conditions is mandatory for a correct interpretation of echoes observed by space-borne millimeter radars. This paper reviews the status of research in this field. Different numerical techniques currently implemented to account for higher-order scattering are reviewed and their weaknesses and strengths highlighted. Examples of simulated radar backscattering profiles are provided, with particular emphasis on situations in which the multiple scattering contributions become comparable to or overwhelm the single scattering signal. We show evidence of multiple scattering effects from airborne and CloudSat observations, i.e. unique signatures which cannot be explained by single scattering theory. Ideas on how to identify and tackle multiple scattering effects are discussed. Finally, perspectives and suggestions for future work are outlined. This work represents a reference guide for studies focused on modeling the radiation transport and on interpreting data from high-frequency space-borne radar systems that probe highly opaque scattering media such as thick ice clouds or precipitating clouds.
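A back-of-the-envelope check of the criterion stated above, comparing the transport mean free path of the cloud with the footprint set by beamwidth and altitude; the instrument and cloud parameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np

altitude_m = 705e3        # assumed CloudSat-like orbit altitude
beamwidth_deg = 0.108     # assumed antenna 3 dB beamwidth

# Footprint diameter at the cloud from beamwidth and platform altitude.
footprint_m = 2 * altitude_m * np.tan(np.radians(beamwidth_deg) / 2)

k_ext = 2e-3              # assumed extinction coefficient [1/m]
g = 0.1                   # asymmetry parameter; near-isotropic at 94 GHz
transport_mfp_m = 1.0 / (k_ext * (1.0 - g))

print(f"footprint ~ {footprint_m:.0f} m, transport MFP ~ {transport_mfp_m:.0f} m")
if transport_mfp_m < 10 * footprint_m:
    print("comparable scales: multiple scattering may matter")
```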
Abstract:
The success of matrix-assisted laser desorption/ionisation (MALDI) in fields such as proteomics has partially, but not exclusively, been due to the development of improved data acquisition and sample preparation techniques. This has been required to overcome some of the shortcomings of the commonly used solid-state MALDI matrices such as α-cyano-4-hydroxycinnamic acid (CHCA) and 2,5-dihydroxybenzoic acid (DHB). Solid-state matrices form crystalline samples with highly inhomogeneous topography and morphology, which results in large fluctuations in analyte signal intensity from spot to spot and between positions within a spot. This means that efficient tuning of the mass spectrometer can be hampered and the use of MALDI MS for quantitative measurements is severely impeded. Recently, new MALDI liquid matrices have been introduced which promise to be an effective alternative to crystalline matrices. Generally, the liquid matrices comprise either ionic liquid matrices (ILMs) or a usually viscous liquid matrix which is doped with a UV light-absorbing chromophore [1-3]. The advantages are that the droplet surface is smooth and relatively uniform, with the analyte homogeneously distributed within. They have the ability to replenish a sampling position between shots, negating the need to search for sample hot-spots. Also, the liquid nature of the matrix allows for the use of additional additives to change the environment to which the analyte is added.
Abstract:
When competing strategies for development programs, clinical trial designs, or data analysis methods exist, the alternatives need to be evaluated in a systematic way to facilitate informed decision making. Here we describe a refinement of the recently proposed clinical scenario evaluation framework for the assessment of competing strategies. The refinement is achieved by subdividing previously proposed key elements into new categories, distinguishing between quantities that can be estimated from preexisting data and those that cannot, and between aspects under the control of the decision maker and those determined by external constraints. The refined framework is illustrated by an application to a design project for an adaptive seamless design for a clinical trial in progressive multiple sclerosis.
Abstract:
A sampling oscilloscope is one of the main units in an automatic pulse measurement system (APMS). Time jitter in waveform samplers is an important error source that affects the precision of data acquisition. In this paper, this kind of error is greatly reduced using a deconvolution method. First, the probability density function (PDF) of the time jitter distribution is determined by a statistical approach; this PDF is then used as the convolution kernel to deconvolve the acquired waveform data, with additional averaging. The result is waveform data from which the effect of time jitter has been removed, greatly improving the measurement precision of the APMS. In addition, computer simulations are presented which demonstrate the success of the method.
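A minimal sketch of the idea, assuming the averaged acquisition equals the true waveform convolved with the jitter PDF, undone here by regularised (Wiener-style) frequency-domain division; all signal parameters are synthetic:

```python
import numpy as np

n = 1024
t = np.arange(n)
true_wave = np.exp(-0.5 * ((t - 400) / 12.0) ** 2)   # ideal pulse shape

# Jitter PDF estimated statistically; here an assumed Gaussian, centred
# so that convolving with it introduces no extra delay.
sigma_j = 8.0
pdf = np.exp(-0.5 * ((t - n // 2) / sigma_j) ** 2)
pdf /= pdf.sum()
H = np.fft.fft(np.fft.ifftshift(pdf))

# Averaged acquisition = true waveform blurred by the jitter PDF.
measured = np.real(np.fft.ifft(np.fft.fft(true_wave) * H))

# Deconvolution with a small regulariser to avoid dividing by ~0 bins.
eps = 1e-3
restored = np.real(np.fft.ifft(np.fft.fft(measured) * np.conj(H)
                               / (np.abs(H) ** 2 + eps)))
print("max restoration error:", np.abs(restored - true_wave).max())
```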
Abstract:
A form of three-dimensional X-ray imaging, called Object 3-D, is introduced, where the relevant subject material is represented as discrete ‘objects’. The surface of each such object is derived accurately from the projections of its outline, and of its other discontinuities, in about ten conventional X-ray views, distributed in solid angle. This technique is suitable for many applications, and permits dramatic savings in radiation exposure and in data acquisition and manipulation. It is well matched to user-friendly interactive displays.
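Deriving each object's surface from outline projections in roughly ten views resembles shape-from-silhouette reconstruction; the paper's actual algorithm is not detailed here, so the following is a minimal 2-D carving sketch under that assumption:

```python
import numpy as np

# Keep a pixel only if it projects inside the object's outline in every
# view; with ~10 views this recovers a convex object's shape closely.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
phantom = (xx - 40) ** 2 + (yy - 28) ** 2 < 12 ** 2  # hidden test object

angles = np.linspace(0, np.pi, 10, endpoint=False)   # about ten views
carved = np.ones((n, n), dtype=bool)
for a in angles:
    # Orthographic projection: detector coordinate of every pixel.
    u = (xx - n / 2) * np.cos(a) + (yy - n / 2) * np.sin(a)
    bins = np.round(u).astype(int) + n
    silhouette = np.zeros(2 * n + 1, dtype=bool)
    np.logical_or.at(silhouette, bins[phantom], True)  # object's outline
    carved &= silhouette[bins]   # discard pixels outside any silhouette

print("true area:", phantom.sum(), "| carved area:", carved.sum())
```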
Integrated cytokine and metabolic analysis of pathological responses to parasite exposure in rodents
Abstract:
Parasitic infections cause a myriad of responses in their mammalian hosts, at the immune as well as the metabolic level. A multiplex panel of cytokines and metabolites derived from four parasite-rodent models, namely Plasmodium berghei-mouse, Trypanosoma brucei brucei-mouse, Schistosoma mansoni-mouse, and Fasciola hepatica-rat, was statistically coanalyzed. 1H NMR spectroscopy and multivariate statistical analysis were used to characterize the urine and plasma metabolite profiles in infected and noninfected animals. Each parasite generated a unique metabolic signature in the host. Plasma cytokine concentrations were obtained using the Meso Scale Discovery multi-cytokine assay platform. Multivariate data integration methods were subsequently used to elucidate the component of the metabolic signature associated with inflammation and to determine specific metabolic correlates with parasite-induced changes in plasma cytokine levels. For example, the relative levels of acetyl glycoproteins extracted from the plasma metabolite profile in the P. berghei-infected mice were statistically correlated with IFN-γ, whereas the same cytokine was anticorrelated with glucose levels. Both the metabolic and the cytokine data showed a similar spatial distribution in principal component analysis scores plots constructed for the combined murine data, with samples from all infected animals clustering according to parasite species, and with the protozoan infections (P. berghei and T. b. brucei) grouping separately from the helminth infection (S. mansoni). For S. mansoni, the main infection-responsive cytokines were IL-4 and IL-5, which covaried with lactate, choline, and D-3-hydroxybutyrate. This study demonstrates that the inherently differential immune response to single- and multicellular parasites not only manifests in cytokine expression but consequently also imprints on the metabolic signature, and it calls for in-depth analysis to further explore direct links between immune features and biochemical pathways.
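A minimal sketch of the correlation step in such an integration, run on synthetic placeholder values rather than the study's data; it reproduces the style of association described above (a metabolite correlated with IFN-γ, glucose anticorrelated):

```python
import numpy as np

rng = np.random.default_rng(1)
n_mice = 20  # synthetic placeholder cohort

ifn_gamma = rng.normal(5.0, 1.0, n_mice)                    # cytokine
acetyl_gp = 0.8 * ifn_gamma + rng.normal(0.0, 0.5, n_mice)  # correlated
glucose = 8.0 - 0.6 * ifn_gamma + rng.normal(0.0, 0.5, n_mice)

# Pearson correlations between cytokine and metabolite levels.
print("r(acetyl glycoproteins, IFN-g):",
      round(np.corrcoef(acetyl_gp, ifn_gamma)[0, 1], 2))
print("r(glucose, IFN-g):",
      round(np.corrcoef(glucose, ifn_gamma)[0, 1], 2))
```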
Abstract:
The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited to distributed memory systems with reliable interconnection networks. However, in large-scale, geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. This work proposes a fully decentralised algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art distributed K-Means algorithms based on sampling methods. The experimental analysis confirms that the proposed algorithm is a practical and accurate distributed K-Means implementation for networked systems of very large and extreme scale.
Abstract:
This paper describes an experimental application of constrained predictive control and feedback linearisation based on dynamic neural networks. It also verifies experimentally a method for handling input constraints, which are transformed by the feedback linearisation mappings. A performance comparison with a PID controller is also provided. The experimental system consists of a laboratory-based single-link manipulator arm, which is controlled in real time using MATLAB/SIMULINK together with data acquisition equipment.
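A small sketch of how input constraints transform under feedback linearisation in general, assuming a single-link arm model J*q'' + b*q' + mgl*sin(q) = u with placeholder parameters (not the laboratory rig's):

```python
import numpy as np

# Placeholder plant parameters for J*q'' + b*q' + mgl*sin(q) = u.
J, b, mgl = 0.05, 0.1, 0.6
u_min, u_max = -2.0, 2.0          # fixed actuator torque limits

def v_bounds(q, qd):
    """The linearising law u = J*v + b*qd + mgl*sin(q) turns the fixed
    torque limits into state-dependent bounds on the virtual input v,
    which the predictive controller must enforce at every step."""
    offset = b * qd + mgl * np.sin(q)
    return (u_min - offset) / J, (u_max - offset) / J

print(v_bounds(q=0.0, qd=0.0))        # symmetric bounds at rest
print(v_bounds(q=np.pi / 3, qd=1.0))  # shifted under gravity and damping
```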
Abstract:
The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited to distributed memory systems with reliable interconnection networks, such as massively parallel processors and clusters of workstations. However, in large-scale, geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. The lack of scalable and fault-tolerant global communication and synchronisation methods in large-scale systems has hindered the adoption of the K-Means algorithm for applications in large networked systems such as wireless sensor networks, peer-to-peer systems and mobile ad hoc networks. This work proposes a fully distributed K-Means algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art sampling methods and shows that the proposed method overcomes the limitations of the sampling-based approaches for skewed cluster distributions. The experimental analysis confirms that the proposed algorithm is very accurate and fault tolerant under unreliable network conditions (message loss and node failures) and is suitable for asynchronous networks of very large and extreme scale.
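A minimal sketch of the epidemic principle behind such an algorithm: every node converges to the global average of the cluster statistics purely through repeated pairwise gossip exchanges, with no coordinator. Peer sampling, failure handling and the surrounding K-Means loop are abstracted away here:

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes, dim = 50, 2

# Each node holds local cluster statistics (e.g. per-cluster sums); the
# K-Means update needs their global average, estimated by gossip alone.
local_stats = rng.normal(size=(n_nodes, dim))
estimates = local_stats.copy()

for _ in range(1000):  # gossip rounds: random pairwise averaging
    i, j = rng.choice(n_nodes, size=2, replace=False)
    mean = (estimates[i] + estimates[j]) / 2.0
    estimates[i] = mean
    estimates[j] = mean

print("true global average:", local_stats.mean(axis=0))
print("node 0's estimate:  ", estimates[0])
```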