896 results for analytical techniques
Abstract:
The literature contains many examples of digital procedures for the analytical treatment of electroencephalograms, but there is as yet no standard by which those techniques may be judged or compared. This paper proposes one method of generating an EEG, based on a computer program for Zetterberg's simulation. It is assumed that the statistical properties of an EEG may be represented by stationary processes having rational transfer functions, realised by a system of software filters and random number generators. The model represents neither the neurological mechanism responsible for generating the EEG, nor any particular type of EEG record; transient phenomena such as spikes, sharp waves and alpha bursts are also excluded. The basis of the program is a valid ‘partial’ statistical description of the EEG; that description is then used to produce a digital representation of a signal which, if plotted sequentially, might or might not by chance resemble an EEG; that is unimportant. What is important is that the statistical properties of the series remain those of a real EEG; it is in this sense that the output is a simulation of the EEG. There is considerable flexibility in the form of the output, i.e. its alpha, beta and delta content, which may be selected by the user, the same selected parameters always producing the same statistical output. The filtered outputs from the random number sequences may be scaled to provide realistic power distributions in the accepted EEG frequency bands and then summed to create a digital output signal, the ‘stationary EEG’. It is suggested that the simulator might act as a test input to digital analytical techniques for the EEG, enabling at least a substantial part of those techniques to be compared and assessed in an objective manner. The equations necessary to implement the model are given.
The program has been run on a DEC1090 computer but is suitable for any microcomputer having more than 32 kBytes of memory; the execution time required to generate a 25 s simulated EEG is in the region of 15 s.
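The band-filtered-noise scheme described above can be sketched in a few lines. This is a minimal illustration rather than the original DEC1090 program: the sampling rate, filter order and band limits below are assumptions not stated in the abstract.

```python
# Minimal sketch of a "stationary EEG" generator: Gaussian noise is passed
# through bandpass filters for standard EEG bands, scaled to user-chosen
# relative powers, and summed. Sampling rate and filter design are assumed.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 256  # sampling rate in Hz (an assumption; the abstract does not state one)
BANDS = {"delta": (0.5, 4.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def simulate_eeg(duration_s, band_powers, seed=0):
    """Return a pseudo-EEG whose per-band RMS follows `band_powers`.

    A fixed seed makes the output deterministic, mirroring the abstract's
    note that the same parameters always produce the same output.
    """
    rng = np.random.default_rng(seed)
    n = int(duration_s * FS)
    signal = np.zeros(n)
    for band, power in band_powers.items():
        lo, hi = BANDS[band]
        sos = butter(4, [lo, hi], btype="band", fs=FS, output="sos")
        noise = sosfilt(sos, rng.standard_normal(n))
        noise *= np.sqrt(power) / noise.std()  # scale band to target RMS
        signal += noise
    return signal

eeg = simulate_eeg(25, {"delta": 1.0, "alpha": 2.0, "beta": 0.3})
```

In the abstract's framing the individual waveform is unimportant; only the band power statistics matter, which can be checked against the requested values with a periodogram.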
Abstract:
A study was performed to investigate the value of near infrared reflectance spectroscopy (NIRS) as an alternative to conventional analytical techniques for identifying QTL associated with feed quality traits. Milled samples from an F6-derived recombinant inbred Tallon/Scarlett population were incubated in the rumen of fistulated cattle, recovered, washed and dried to determine the in-situ dry matter digestibility (DMD). Both pre- and post-digestion samples were analysed using NIRS to quantify key quality components relating to acid detergent fibre, starch and protein. These phenotypic data were used to identify trait-associated QTL and compare them to previously identified QTL. Although a number of genetic correlations were identified between the phenotypic data sets, the correlation of most interest was between DMD and starch digested (r = -0.382). The significance of this genetic correlation was that the NIRS data set identified a putative QTL on chromosome 7H (LOD = 3.3) associated with starch digested. A QTL for DMD occurred in the same region of chromosome 7H, with flanking markers fAG/CAT63 and bPb-0758. The significant correlation and the identification of this putative QTL highlight the potential of technologies like NIRS in QTL analysis.
Abstract:
BACKGROUND: In order to rapidly and efficiently screen potential biofuel feedstock candidates for quintessential traits, robust high-throughput analytical techniques must be developed and honed. The traditional methods of measuring lignin syringyl/guaiacyl (S/G) ratio can be laborious, involve hazardous reagents, and/or be destructive. Vibrational spectroscopy can furnish high-throughput instrumentation without the limitations of the traditional techniques. Spectral data from mid-infrared, near-infrared, and Raman spectroscopies were combined with S/G ratios, obtained using pyrolysis molecular beam mass spectrometry, from 245 different eucalypt and Acacia trees across 17 species. Iterations of spectral processing allowed the assembly of robust predictive models using partial least squares (PLS). RESULTS: The PLS models were rigorously evaluated using three different randomly generated calibration and validation sets for each spectral processing approach. Root mean square errors of prediction for validation sets were lowest for models comprising Raman (0.13 to 0.16) and mid-infrared (0.13 to 0.15) spectral data, while near-infrared spectroscopy led to more erroneous predictions (0.18 to 0.21). Correlation coefficients (r) for the validation sets followed a similar pattern: Raman (0.89 to 0.91), mid-infrared (0.87 to 0.91), and near-infrared (0.79 to 0.82). These statistics signify that Raman and mid-infrared spectroscopy led to the most accurate predictions of S/G ratio in a diverse consortium of feedstocks. CONCLUSION: Eucalypts present an attractive option for biofuel and biochemical production. Given the assortment of over 900 different species of Eucalyptus and Corymbia, in addition to various species of Acacia, it is necessary to isolate those possessing ideal biofuel traits.
This research has demonstrated the validity of vibrational spectroscopy to efficiently partition different potential biofuel feedstocks according to lignin S/G ratio, significantly reducing experiment and analysis time and expense while providing non-destructive, accurate, global, predictive models encompassing a diverse array of feedstocks.
Abstract:
Reverse osmosis (RO) brine produced at a full-scale coal seam gas (CSG) water treatment facility was characterized with spectroscopic and other analytical techniques. A number of potential scalants, including silica, calcium, magnesium, sulphates and carbonates, all of which were present in dissolved and non-dissolved forms, were characterized. The presence of spherical particles with a size range of 10–1000 nm and aggregates of 1–10 microns was confirmed by transmission electron microscopy (TEM). Those particulates contained the following metals in decreasing order: K, Si, Sr, Ca, B, Ba, Mg, P, and S. Characterization showed that nearly one-third of the total silicon in the brine was present in the particulates. Further, analysis of the RO brine suggested that supersaturation and precipitation of metal carbonates and sulphates should take place during the RO process and could be responsible for subsequently capturing silica in the solid phase. However, the precipitation of crystalline carbonates and sulphates is complex. X-ray diffraction analysis did not confirm the presence of common calcium carbonates or sulphates but instead showed the presence of a suite of complex minerals, to which amorphous silica and/or silica-rich compounds could have adhered. A filtration study showed that the majority of the siliceous particles were less than 220 nm in size, but could still potentially be captured using a low molecular weight ultrafiltration membrane.
Using Big Data to manage safety-related risk in the upstream oil and gas industry: A research agenda
Abstract:
Despite considerable effort and a broad range of new approaches to safety management over the years, the upstream oil & gas industry has been frustrated by the sector’s stubbornly high rate of injuries and fatalities. This short communication points out, however, that the industry may be in a position to make considerable progress by applying “Big Data” analytical tools to the large volumes of safety-related data that have been collected by these organizations. Toward making this case, we examine existing safety-related information management practices in the upstream oil & gas industry, and specifically note that data in this sector often tend to be highly customized, difficult to analyze using conventional quantitative tools, and frequently ignored. We then contend that the application of new Big Data analytical techniques could potentially reveal patterns and trends that have been hidden or unknown thus far, and argue that these tools could help the upstream oil & gas sector to improve its injury and fatality statistics. Finally, we offer a research agenda toward accelerating the rate at which Big Data and new analytical capabilities could play a material role in helping the industry to improve its health and safety performance.
Abstract:
Protein modification via enzymatic cross-linking is an attractive way of altering food structure so as to create products with increased quality and nutritional value. These modifications are expected to affect not only the structure and physico-chemical properties of proteins but also their physiological characteristics, such as digestibility in the GI tract and allergenicity. Protein cross-linking enzymes such as transglutaminases are currently commercially available, but other types of cross-linking enzymes are also being explored intensively. In this study, enzymatic cross-linking of β-casein, the most abundant bovine milk protein, was studied. Enzymatic cross-linking reactions were performed with fungal Trichoderma reesei tyrosinase (TrTyr), and the performance of the enzyme was compared to that of transglutaminase from Streptoverticillium mobaraense (Tgase). The cross-linking reactions were followed by different analytical techniques, such as size-exclusion chromatography with ultraviolet/visible detection and multi-angle laser light scattering (SEC-UV/Vis-MALLS), phosphorus-31 nuclear magnetic resonance spectroscopy (31P NMR), atomic force microscopy (AFM) and matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF MS). The results showed that in both cases cross-linking of β-casein resulted in the formation of high-molecular-mass (MM ca. 1 350 kg mol-1), disk-shaped nanoparticles when the highest enzyme dosage and longest incubation times were used. According to SEC-UV/Vis-MALLS data, commercial β-casein was cross-linked almost completely when TrTyr and Tgase were used as cross-linking enzymes. In the case of TrTyr, the high degree of cross-linking was confirmed by 31P NMR, which showed that 91 % of the tyrosine side-chains were involved in the cross-linking. The impact of enzymatic cross-linking of β-casein on in vitro digestibility by pepsin was also followed by various analytical techniques.
The results demonstrated that enzymatically cross-linked β-casein was stable under the acidic conditions present in the stomach. Furthermore, cross-linked β-casein was found to be more resistant to pepsin digestion than non-modified β-casein. The effects of enzymatic cross-linking of β-casein on allergenicity were also studied by different biochemical test methods. On the basis of the results, enzymatic cross-linking decreased the allergenicity of native β-casein by 14 % when cross-linked by TrTyr and by 6 % after treatment with Tgase. It can be concluded that, in addition to providing a basic understanding of the reaction mechanism of TrTyr on a protein matrix, the results obtained in this study can have a high impact on various application sectors such as food, cosmetics, medicine, textiles and packaging.
Abstract:
The Earth's climate is a highly dynamic and complex system in which atmospheric aerosols have been increasingly recognized to play a key role. Aerosol particles affect the climate through a multitude of processes, directly by absorbing and reflecting radiation and indirectly by changing the properties of clouds. Because of this complexity, quantification of the effects of aerosols continues to be a highly uncertain science. Better understanding of the effects of aerosols requires more information on aerosol chemistry. Before the determination of aerosol chemical composition by the various available analytical techniques, aerosol particles must be reliably sampled and prepared. Indeed, sampling is one of the most challenging steps in aerosol studies, since all available sampling techniques harbor drawbacks. In this study, novel methodologies were developed for sampling and determination of the chemical composition of atmospheric aerosols. In the particle-into-liquid sampler (PILS), aerosol particles grow in saturated water vapor with further impaction and dissolution in liquid water. Once in water, the aerosol sample can then be transported and analyzed by various off-line or on-line techniques. In this study, PILS was modified and the sampling procedure was optimized to obtain less altered aerosol samples with good time resolution. A combination of denuders with different coatings was tested to adsorb gas-phase compounds before PILS. Mixtures of water with alcohols were introduced to increase the solubility of aerosols. The minimum sampling time required was determined by collecting samples off-line every hour and proceeding with liquid-liquid extraction (LLE) and analysis by gas chromatography-mass spectrometry (GC-MS). The laboriousness of LLE followed by GC-MS analysis next prompted an evaluation of solid-phase extraction (SPE) for the extraction of aldehydes and acids in aerosol samples. These two compound groups are thought to be key for aerosol growth.
Octadecylsilica, hydrophilic-lipophilic balance (HLB), and mixed-phase anion exchange (MAX) were tested as extraction materials. MAX proved to be efficient for acids, but no tested material offered sufficient adsorption for aldehydes. Thus, PILS samples were extracted only with MAX to guarantee good results for organic acids determined by high-performance liquid chromatography-mass spectrometry (HPLC-MS). On-line coupling of SPE with HPLC-MS is relatively easy, and here on-line coupling of PILS with HPLC-MS through the SPE trap produced some interesting data on relevant acids in atmospheric aerosol samples. A completely different approach to aerosol sampling, namely differential mobility analyzer (DMA)-assisted filter sampling, was employed in this study to provide information about the size-dependent chemical composition of aerosols and understanding of the processes driving aerosol growth from nano-size clusters to climatically relevant particles (>40 nm). The DMA was set to sample particles with diameters of 50, 40, and 30 nm, and aerosols were collected on Teflon or quartz fiber filters. To clarify the gas-phase contribution, zero gas-phase samples were collected by switching off the DMA for alternating 15-minute periods. Gas-phase compounds were adsorbed equally well on both types of filter and were found to contribute significantly to the total compound mass. Gas-phase adsorption is especially significant during the collection of nanometer-size aerosols and must always be taken into account. A further aim of this study was to determine the oxidation products of β-caryophyllene (the major sesquiterpene in boreal forest) in aerosol particles. Since reference compounds are needed for verification of the accuracy of analytical measurements, three oxidation products of β-caryophyllene were synthesized: β-caryophyllene aldehyde, β-nocaryophyllene aldehyde, and β-caryophyllinic acid.
All three were identified for the first time in ambient aerosol samples, at relatively high concentrations, and their contribution to the aerosol mass (and probably growth) was concluded to be significant. Methodological and instrumental developments presented in this work enable fuller understanding of the processes behind biogenic aerosol formation and provide new tools for more precise determination of biosphere-atmosphere interactions.
Abstract:
Metabolomics is a rapidly growing research field that studies the response of biological systems to environmental factors, disease states and genetic modifications. It aims at measuring the complete set of endogenous metabolites, i.e. the metabolome, in a biological sample such as plasma or cells. Because metabolites are the intermediates and end products of biochemical reactions, metabolite compositions and metabolite levels in biological samples can provide a wealth of information on ongoing processes in a living system. Due to the complexity of the metabolome, metabolomic analysis poses a challenge to analytical chemistry. Adequate sample preparation is critical to accurate and reproducible analysis, and the analytical techniques must have high resolution and sensitivity to allow detection of as many metabolites as possible. Furthermore, as the information contained in the metabolome is immense, the data sets collected in metabolomic studies are very large. In order to extract the relevant information from such large data sets, efficient data processing and multivariate data analysis methods are needed. In the research presented in this thesis, metabolomics was used to study mechanisms of polymeric gene delivery to retinal pigment epithelial (RPE) cells. The aim of the study was to detect differences in metabolomic fingerprints between transfected cells and non-transfected controls, and thereafter to identify the metabolites responsible for the discrimination. The plasmid pCMV-β was introduced into RPE cells using the vector polyethyleneimine (PEI). The samples were analyzed using high-performance liquid chromatography (HPLC) and ultra-performance liquid chromatography (UPLC) coupled to a triple quadrupole (QqQ) mass spectrometer (MS). The software MZmine was used for raw data processing, and principal component analysis (PCA) was used in statistical data analysis.
The results revealed differences in metabolomic fingerprints between transfected cells and non-transfected controls. However, reliable fingerprinting data could not be obtained because of low analysis repeatability. Therefore, no attempts were made to identify metabolites responsible for discrimination between sample groups. Repeatability and accuracy of analyses can be influenced by protocol optimization. However, in this study, optimization of analytical methods was hindered by the very small number of samples available for analysis. In conclusion, this study demonstrates that obtaining reliable fingerprinting data is technically demanding, and the protocols need to be thoroughly optimized in order to approach the goals of gaining information on mechanisms of gene delivery.
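The fingerprinting comparison described above can be sketched in a few lines: a feature table (samples × metabolite features, as exported from a tool like MZmine) is autoscaled and projected with PCA. The data below are simulated, and the between-group shift is a hypothetical effect size, not a result from the thesis.

```python
# Sketch of metabolomic fingerprinting by PCA on a simulated feature table.
# Group labels and the 0.4 mean shift are hypothetical illustration values.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
controls = rng.normal(0.0, 1.0, size=(6, 200))     # non-transfected cells
transfected = rng.normal(0.4, 1.0, size=(6, 200))  # transfected cells
X = np.vstack([controls, transfected])

# Autoscale each feature, then project onto the first two principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
# Each row of `scores` places one sample in the PC1/PC2 plane; separation
# between the two groups there is what a fingerprint comparison inspects.
```

With only a handful of samples per group, as in the study, poor analysis repeatability dominates the score plot, which is why no metabolite identification was attempted.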
Abstract:
Results of an investigation dealing with the behaviour of grid-connected induction generators (GCIGs) driven by typical prime movers such as mini-hydro/wind turbines are presented. Certain practical operational problems of such systems are identified, and analytical techniques are developed to study their behaviour. The system consists of the induction generator (IG) feeding an 11 kV grid through a step-up transformer and a transmission line. Terminal capacitors to compensate for the lagging VAr are included in the study. Computer simulation was carried out to predict the system performance at the given input power from the turbine. Effects of variations in grid voltage, frequency, input power, and terminal capacitance on the machine and system performance are studied. An analysis of self-excitation conditions on disconnection from the supply was carried out. The behaviour of a 220 kW hydel system and of 55/11 kW and 22 kW wind-driven systems corresponding to actual field conditions is discussed.
Abstract:
A combination of numerical and analytical techniques is used to analyse the effect of magnetic field and encapsulated layer on the onset of oscillatory Marangoni instability in a two layer system. Oscillatory Marangoni instability is possible for a deformed free surface only when the system is heated from above. It is observed that the existence of a second layer has a positive effect on Marangoni overstability with magnetic field whereas it has an opposite effect without magnetic field.
Abstract:
We are concerned with the situation in which a wireless sensor network is deployed in a region for the purpose of detecting an event occurring at a random time and at a random location. The sensor nodes periodically sample their environment (e.g., for acoustic energy), process the observations (in our case, using a CUSUM-based algorithm) and send a local decision (which is binary in nature) to the fusion centre. The fusion centre collects these local decisions and uses a fusion rule to process the sensors’ local decisions and infer the state of nature, i.e., whether an event has occurred or not. Our main contribution is in analyzing two local detection rules in combination with a simple fusion rule. The local detection algorithms are based on the nonparametric CUSUM procedure from sequential statistics. We also propose two ways to operate the local detectors after an alarm. These alternatives, when combined in various ways, yield several approaches. A further contribution is a set of analytical techniques to calculate false alarm measures, by means of which the local detector thresholds can be set. Simulation results are provided to evaluate the accuracy of our analysis. As an illustration we provide a design example. We also use simulations to compare the detection delays incurred in these algorithms.
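The two-tier scheme above can be illustrated with a standard nonparametric CUSUM recursion at each sensor and a k-out-of-n fusion rule at the centre. The drift and threshold values below are placeholders, not the thresholds derived in the paper.

```python
# Illustration of local CUSUM detection plus simple fusion: each sensor runs
# S_k = max(0, S_{k-1} + x_k - drift) and alarms when S_k >= threshold; the
# fusion centre declares an event if at least m sensors alarm.
def cusum_detector(samples, drift, threshold):
    """Return the index of the first alarm, or None if no alarm occurs."""
    s = 0.0
    for k, x in enumerate(samples):
        s = max(0.0, s + x - drift)
        if s >= threshold:
            return k          # local binary decision: alarm at sample k
    return None

def fuse(local_decisions, m):
    """k-out-of-n fusion: declare an event if at least m sensors alarmed."""
    return sum(d is not None for d in local_decisions) >= m

# Mean shifts from 0 to 2 at sample 10; the statistic then climbs 1.5 per step.
alarm_at = cusum_detector([0.0] * 10 + [2.0] * 10, drift=0.5, threshold=4.0)
event = fuse([alarm_at, None, 11], m=2)
```

Raising the threshold lowers the local false alarm rate at the cost of detection delay, which is exactly the trade-off the paper's false alarm analysis is used to set.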
Abstract:
NiTi thin films were deposited by DC magnetron sputtering from a single alloy target (Ni/Ti: 45/55 at.%). The deposition rate and thickness of the sputter-deposited films were maintained at ~35 nm min-1 and 4 μm, respectively. A set of sputter-deposited NiTi films was selected for specific chemical treatment with a solution comprising de-ionized water, HF and HNO3. The influence of chemical treatment on the surface characteristics of the NiTi films was investigated before and after treatment in terms of structure, microstructure and composition using different analytical techniques. Prior to chemical treatment, the composition of the NiTi films, determined using energy-dispersive X-ray spectroscopy (EDS), was found to be 51.8 atomic percent Ti and 48.2 atomic percent Ni. The structure and morphology of these films were investigated by X-ray diffraction (XRD) and scanning electron microscopy (SEM). XRD investigations demonstrated the presence of a dominant austenite (110) phase along with a martensite phase for untreated NiTi films, whereas some additional diffraction peaks, viz. (100), (101) and (200), corresponding to the rutile and anatase phases of titanium dioxide (TiO2), along with the parent austenite (110) phase, were observed for chemically treated NiTi films. From FTIR studies it can be concluded that chemically treated films have a higher tendency to form metal oxide/hydroxide than untreated NiTi films. XPS investigations demonstrated a Ni-free surface and the formation of a protective metal oxide (TiO2) layer on the surface of the films in both cases. The extent of surface oxide layer formation on the NiTi films was enhanced after chemical treatment.
Abstract:
Mesoporous quaternary bioactive glasses and a glass-ceramic with alkali-alkaline-earth oxide were successfully synthesized using the non-ionic block copolymer P123 and an evaporation-induced self-assembly (EISA) process, followed by an acid-treatment-assisted sol-gel method. The as-prepared samples were characterized for their structural, morphological and textural properties with various analytical techniques. Glass dissolution/ion release rate in simulated body fluid (SBF) was monitored by inductively coupled plasma (ICP) emission spectroscopy, whereas the formation of an apatite phase and its crystallization at the glass and glass-ceramic surface were examined by structural, textural and microscopic probes. The influence of alkaline-earth oxide content on the glass structure, and in turn on the textural properties, is evident. The pristine glass samples exhibit a wormhole-like mesoporous structure, whereas the glass-ceramic composition is found to comprise three different phases, namely crystalline hydroxyapatite, wollastonite and a residual glassy phase, as observed in Cerabone® A/W. The existence of a calcium orthophosphate phase is closely associated with the pore walls comprising nanometric-sized "inclusions". The observed high surface area, in conjunction with the structural features, provides a possible explanation for the experimentally observed enhanced bioactivity through easy access of ions to the fluid. On the other hand, the presence of multiple phases in the glass-ceramic sample inhibits or delays the kinetics of apatite formation.
Abstract:
Nonhomologous end joining (NHEJ) of DNA double strand breaks (DSBs) inside cells can be selectively inhibited by 5,6-bis-(benzylideneamino)-2-mercaptopyrimidin-4-ol (SCR7), which possesses anticancer properties. The hydrophobicity of SCR7 decreases its bioavailability, which is a major setback in the utilization of this compound as a therapeutic agent. To circumvent this drawback, we prepared a polymer-encapsulated form of SCR7. The physical interaction between SCR7 and the Pluronic® copolymer is evident from different analytical techniques. The in vitro cytotoxicity of the drug formulations was established using the MTT assay.
Abstract:
We consider a server serving a time-slotted queued system of multiple packet-based flows, where not more than one flow can be serviced in a single time slot. The flows have exogenous packet arrivals and time-varying service rates. At each time, the server can observe instantaneous service rates for only a subset of flows (selected from a fixed collection of observable subsets) before scheduling a flow in the subset for service. We are interested in queue-length-aware scheduling to keep the queues short. The limited availability of instantaneous service rate information requires the scheduler to make a careful choice of which subset of service rates to sample. We develop scheduling algorithms that use only partial service rate information from subsets of channels, and that minimize the likelihood of queue overflow in the system. Specifically, we present a new joint subset-sampling and scheduling algorithm called Max-Exp that uses only the current queue lengths to pick a subset of flows, and subsequently schedules a flow using the Exponential rule. When the collection of observable subsets is disjoint, we show that Max-Exp achieves the best exponential decay rate of the tail of the longest queue, among all scheduling algorithms that base their decision on the current (or any finite past history of) system state. To accomplish this, we employ novel analytical techniques for studying the performance of scheduling algorithms using partial state, which may be of independent interest. These include new sample-path large deviations results for processes obtained by non-random, predictable sampling of sequences of independent and identically distributed random variables. A consequence of these results is that scheduling with partial state information yields a rate function significantly different from scheduling with full channel information.
In the special case when the observable subsets are singleton flows, i.e., when there is effectively no a priori channel state information, Max-Exp reduces to simply serving the flow with the longest queue; thus, our results show that to always serve the longest queue in the absence of any channel state information is large deviations optimal.
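A toy rendering of the sampling-then-scheduling loop described above: a subset is chosen from current queue lengths alone, rates are observed for that subset only, and the Exponential rule picks the flow to serve. The subset-selection heuristic (largest total backlog) and the exponent form used here are simplified stand-ins for the paper's actual Max-Exp definitions, shown only to fix ideas.

```python
# Toy sketch of queue-aware subset sampling followed by the Exponential rule.
# The subset choice and the exponent form are simplified stand-ins, not the
# paper's exact Max-Exp algorithm.
import math

def exp_rule(queues, rates):
    """One common form of the Exponential rule: serve the flow maximising
    r_i * exp(q_i / (1 + sqrt(mean queue length)))."""
    qbar = sum(queues) / len(queues)
    return max(range(len(queues)),
               key=lambda i: rates[i] * math.exp(queues[i] / (1.0 + math.sqrt(qbar))))

def max_exp_schedule(queue_lengths, observable_subsets, sample_rates):
    # 1) Pick a subset using current queue lengths only (no channel state yet).
    subset = max(observable_subsets,
                 key=lambda s: sum(queue_lengths[i] for i in s))
    # 2) Observe instantaneous rates for that subset alone, then schedule.
    rates = sample_rates(subset)          # list aligned with `subset`
    idx = exp_rule([queue_lengths[i] for i in subset], rates)
    return subset[idx]                    # flow served in this slot

queues = [1, 10, 2, 3]
served = max_exp_schedule(queues, [(0, 1), (2, 3)], lambda s: [1.0] * len(s))
```

With singleton observable subsets and identical rates this reduces to serving the flow with the longest queue, matching the special case noted above.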