977 results for Label-free techniques
Abstract:
Background: In ∼5% of advanced NSCLC tumours, the ALK tyrosine kinase is constitutively activated after translocation of ALK. ALK+ NSCLC was shown to be highly sensitive to the first approved ALK inhibitor, crizotinib. However, all pts eventually relapse on crizotinib, mainly due to secondary ALK mutations/amplification or CNS metastases. Alectinib is a highly selective, potent, oral next-generation ALK inhibitor. Clinical phase II alectinib data in 46 crizotinib-naïve pts with ALK+ NSCLC reported an objective response rate (ORR) of 93.5% and a 1-year progression-free rate of 83% (95% CI: 68-92) (Inoue et al. J Thorac Oncol 2013). CNS activity was seen: of 14 pts with baseline brain metastases, 11 had prior CNS radiation, and 9 of these experienced CNS and systemic PFS of >12 months; of the 3 pts without prior CNS radiation, 2 were >15 months progression-free. Trial design: Randomised, multicentre, phase III, open-label study in pts with treatment-naïve ALK+ advanced, recurrent, or metastatic NSCLC. All pts must provide pretreatment tumour tissue to confirm ALK rearrangement (by IHC). Pts (∼286 from ∼180 centres in ∼30 countries worldwide) will be randomised to alectinib (600 mg oral bid, with food) or crizotinib (250 mg oral bid, with/without food) until disease progression (PD), unacceptable toxicity, withdrawal of consent, or death. Stratification factors are: ECOG PS (0/1 vs 2), race (Asian vs non-Asian), and baseline CNS metastases (yes vs no). Primary endpoint: investigator-assessed PFS (RECIST v1.1). Secondary endpoints: PFS by Independent Review Committee (IRC); ORR; duration of response; OS; safety; pharmacokinetics; quality of life. Additionally, time to CNS progression will be evaluated (by MRI), for the first time in a prospective randomised NSCLC trial, as a secondary endpoint. Pts with isolated asymptomatic CNS progression will be allowed to continue treatment beyond documented progression until systemic PD and/or symptomatic CNS progression, according to investigator opinion. Time to CNS progression will be retrospectively assessed by the IRC using two separate criteria, RECIST and RANO. Further details: ClinicalTrials.gov (NCT02075840). Disclosure: T.S.K. Mok: Advisory boards: AZ, Roche, Eli Lilly, Merck Serono, Eisai, BMS, AVEO, Pfizer, Taiho, Boehringer Ingelheim, Novartis, GSK Biologicals, Clovis Oncology, Amgen, Janssen, BioMarin; board of directors: IASLC; corporate-sponsored research: AZ. M. Perol: Advisory boards: Roche. S.I. Ou: Consulting: Pfizer, Chugai, Genentech; Speaker Bureau: Pfizer, Genentech, Boehringer Ingelheim. I. Bara: Employee: F. Hoffmann-La Roche Ltd. V. Henschel: Employee and stock: F. Hoffmann-La Roche Ltd. D.R. Camidge: Honoraria: Roche/Genentech. All other authors have declared no conflicts of interest.
Abstract:
Analyzing the state of the art in a given field in order to tackle a new problem is always a mandatory task. The literature provides surveys that summarize previous studies, which are often based on theoretical descriptions of the methods. An engineer, however, requires evidence from experimental evaluations in order to make an appropriate decision when selecting a technique for a problem. This is what we have done in this paper: we have experimentally analyzed a set of representative state-of-the-art techniques for the problem we are dealing with, namely, the road passenger transportation problem. This is an optimization problem in which drivers must be assigned to transport services, fulfilling some constraints and minimizing a cost function. The experimental results have provided us with good knowledge of the properties of several methods, such as modeling expressiveness, anytime behavior, computational time, memory requirements, parameters, and freely downloadable tools. Based on this experience, we are able to choose a technique to solve our problem. We hope that this analysis is also helpful for other engineers facing a similar problem.
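To make the abstract's problem concrete, the following is a minimal sketch (not the paper's actual formulation) of a driver-to-service assignment that minimizes total cost, assuming a known cost matrix and exactly one driver per service; real instances add constraints such as shifts and rest periods that this simplification omits.

```python
# Minimal sketch of a driver-to-service assignment, assuming each service
# needs exactly one driver and costs are known up front (invented data).
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j]: cost of assigning driver i to service j (e.g., deadhead km)
cost = np.array([
    [4.0, 1.0, 3.0],
    [2.0, 0.0, 5.0],
    [3.0, 2.0, 2.0],
])

drivers, services = linear_sum_assignment(cost)  # Hungarian algorithm
for d, s in zip(drivers, services):
    print(f"driver {d} -> service {s} (cost {cost[d, s]})")
print("total cost:", cost[drivers, services].sum())
```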
Abstract:
This book is dedicated to celebrating the 60th birthday of Professor Rainer Huopalahti. Professor Rainer “Repe” Huopalahti has had, and in fact is still enjoying, a distinguished career in the analysis of food and food-related flavor compounds. One will find it hard to make any progress in this particular field without a valid and innovative sample handling technique, and this is a field in which Professor Huopalahti has made great contributions. The title and the front cover of this book honor Professor Huopalahti’s early steps in science. His PhD thesis, published in 1985, is entitled “Composition and content of aroma compounds in the dill herb, Anethum graveolens L., affected by different factors”. At that time, the thesis introduced new technology applied to sample handling and the analysis of the flavoring compounds of dill. Sample handling is an essential task in just about every analysis. If one is working with minor compounds in a sample or trying to detect trace levels of the analytes, one of the aims of sample handling may be to increase the sensitivity of the analytical method. On the other hand, if one is working with a challenging matrix such as the kind found in biological samples, one of the aims is to increase the selectivity. However, quite often the aim is to increase both the selectivity and the sensitivity. This book provides good and representative examples of the necessity of valid sample handling and of the role of sample handling in the analytical method. The contributors to the book are leading Finnish scientists in the field of organic instrumental analytical chemistry. Some of them are also Repe’s personal friends and former students from the University of Turku, Department of Biochemistry and Food Chemistry. Importantly, the authors all know Repe in one way or another and are well aware of his achievements in the field of analytical chemistry. The editorial team had a great time during the planning phase and during the “hard work editorial phase” of the book. For example, we came up with many ideas on how to publish the book. After many long discussions, we decided to have a limited edition as an “old school hard cover book” - and to acknowledge more modern ways of disseminating knowledge by publishing an internet version of the book on the webpages of the University of Turku. Downloading the book from the webpage for personal use is free of charge. We believe and hope that the book will be read with great interest by scientists working in the fascinating field of organic instrumental analytical chemistry. We decided to publish our book in English for two main reasons. First, we believe that in the near future, more and more teaching in Finnish universities will be delivered in English. To facilitate this process and encourage students to develop good language skills, we decided to publish the book in English. Secondly, we believe that the book will also interest scientists outside Finland - particularly in the other member states of the European Union. The editorial team thanks all the authors for their willingness to contribute to this book - and to adhere to the very strict schedule. We also want to thank the various individuals and enterprises who financially supported the book project. Without that support, it would not have been possible to publish the hardcover book.
Abstract:
The aim of this study was to investigate the occurrence of Toxoplasma gondii and to compare the results obtained by the Modified Agglutination Test (MAT), Polymerase Chain Reaction (PCR) and bioassay in mice. To accomplish this, 40 free-range chickens from eight farms in areas neighboring the Pantanal in Nhecolândia, Mato Grosso do Sul, were euthanized, and blood samples, brain and heart were collected. The occurrence of anti-T. gondii antibodies in the chickens was 67.5% (27 samples), considering the 1:5 dilution as the cutoff point. Among the samples analyzed, 7 (25.9%) were positive at the 1:5 dilution, 3 (11.1%) at 1:10, 2 (7.4%) at 1:20, 3 (11.1%) at 1:320, 1 (3.7%) at 1:640, 3 (11.1%) at 1:1280, 2 (7.4%) at 1:2560, 4 (14.8%) at 1:5120 and 2 (7.4%) at 1:10,240. Of the mixed tissue samples (brain and heart) from the chickens analyzed, 16 (40%) presented electrophoretic bands compatible with T. gondii by PCR (B1 gene). Comparing the techniques, 59.26% of the animals that were seropositive by MAT (cutoff 1:5) were positive by PCR. Of the 141 inoculated mice, six (4.44%) died of acute toxoplasmosis between 15 and 23 days after inoculation. Surviving mice were sacrificed 74 days after inoculation, and a total of 28 cysts were found in the brains of 10 distinct groups. From the seropositive hens, 27 bioassays were performed and 11 (40.7%) isolates were obtained. A greater number of isolations occurred in mice inoculated with tissues from chickens that had high anti-T. gondii antibody titers. Chronic infection in mice was observed in nine groups (33.3%) from five different properties. Among the surviving mice, 25.6% were positive for T. gondii by MAT (1:25). Of the mice positive by PCR, 87.5% were also positive by MAT; among the PCR-negative mice, 5.2% were positive for T. gondii by MAT. It can be concluded from this study that the occurrence of T. gondii infection in the rural properties studied was high; that PCR directed at the B1 gene does not confirm the viability of the parasite, but can be used as a screening method for selecting chickens infected by T. gondii; that animals with titers greater than 10 should be prioritized when selecting animals for bioassay, since the chances of isolating the parasite from them are greater; and that seroconversion in experimentally infected mice is not a good indicator for isolating the agent.
Abstract:
In the present study, using noise-free simulated signals, we performed a comparative examination of several preprocessing techniques that are used to transform a cardiac event series into a regularly sampled time series appropriate for spectral analysis of heart rhythm variability (HRV). First, a group of noise-free simulated point event series, representing time series of heartbeats, was generated by an integral pulse frequency modulation model. To evaluate the performance of the preprocessing methods, the differences between the spectra of the preprocessed simulated signals and the true spectrum (the spectrum of the model's input modulating signals) were surveyed by visual analysis and by contrasting merit indices. Estimated spectra should match the true spectrum as closely as possible, showing a minimum of harmonic components and other artifacts. The merit indices proposed to quantify these mismatches were the leakage rate, defined as the proportion of leakage components (located outside narrow windows centered at the frequencies of the model's input modulating signals) relative to all spectral components, and the numbers of leakage components with amplitudes greater than 1%, 5% and 10% of the total spectral components. Our data, obtained from a noise-free simulation, indicate that using heart rate values instead of heart period values in the derivation of signals representative of heart rhythm results in more accurate spectra. Furthermore, our data support the efficiency of the widely used preprocessing technique based on the convolution of inverse interval function values with a rectangular window, and suggest the preprocessing technique based on cubic polynomial interpolation of inverse interval function values followed by spectral analysis as another efficient and fast method for the analysis of HRV signals.
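A minimal sketch of the cubic-interpolation preprocessing favoured in the abstract, using synthetic beat times (a real study would use detected R-wave times): the inverse interval function is attached to interval midpoints, resampled on a regular grid with a cubic spline, and a spectrum is estimated.

```python
# Sketch: resample the instantaneous heart rate (inverse interval function)
# on a regular grid, then estimate the HRV spectrum. Beat times below are
# synthetic, modulated so the spectrum has a known component.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import periodogram

beat_times = np.cumsum(0.8 + 0.05 * np.sin(2 * np.pi * 0.25 * np.arange(200)))
rr = np.diff(beat_times)            # inter-beat intervals (s)
ihr = 1.0 / rr                      # inverse interval function (Hz)
t_mid = beat_times[:-1] + rr / 2    # attach each value to its interval midpoint

fs = 4.0                            # resampling rate (Hz)
t_reg = np.arange(t_mid[0], t_mid[-1], 1 / fs)
ihr_reg = CubicSpline(t_mid, ihr)(t_reg)

f, pxx = periodogram(ihr_reg - ihr_reg.mean(), fs=fs)  # HRV spectrum estimate
```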
Abstract:
This study evaluated the rancidity of 18 pet food samples using Diamed FATS kits and official AOCS methods for quantifying free fatty acids, peroxide value and the concentrations of malonaldehyde and alkenal in the extracted lipid. Although the expiration dates had passed, the samples presented good quality, showing little oxidative rancidity. The results of this study suggest that the Brazilian pet food market is replete with products of excellent quality, owing to the competitiveness of this market sector.
Abstract:
The aim of this study was to determine the physical and microbiological characteristics of extruded broken-bean flour and to develop gluten-free cake mixtures with this flour, evaluating their technological and sensory quality. Gluten-free formulations were prepared with 45%, 60% and 75% extruded broken beans. All analyses of the flours and cake mixtures were performed according to standard techniques from the literature. Sensory analysis of the cakes used a 9-point structured hedonic scale. Results were submitted to analysis of variance and a comparison-of-means test (Tukey, p<0.05). The use of extruded broken beans improved the water absorption and water solubility indices of the gluten-free cake mixtures and resulted in lower viscosity and retrogradation compared with the standard formulation. All cakes were accepted (score ≥ 7) for all the analyzed attributes. From the technological and sensory standpoints, the development of gluten-free cake mixtures with up to 75% extruded broken beans is feasible.
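As a hedged illustration of the statistical treatment mentioned (analysis of variance followed by Tukey's test at p<0.05), the sketch below runs both on invented hedonic scores; it is not the study's actual analysis script.

```python
# Sketch: compare mean hedonic scores across cake formulations with
# one-way ANOVA and Tukey's HSD (alpha = 0.05). Scores are invented.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = {
    "standard": [8, 7, 8, 9, 7, 8],
    "beans_45": [7, 8, 7, 8, 8, 7],
    "beans_75": [7, 7, 8, 7, 8, 7],
}
print(f_oneway(*scores.values()))  # overall F-test across formulations

values = np.concatenate(list(scores.values()))
groups = np.repeat(list(scores.keys()), [len(v) for v in scores.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise comparisons
```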
Abstract:
Staphylococcal enterotoxin B (SEB) is a highly heat-resistant enteric toxin and is responsible for more than 50% of the cases of food poisoning caused by an enterotoxin. The main objective of this Master's project is to develop and validate a method based on new analytical strategies for the detection and quantification of SEB in food matrices. A tryptic peptide map was produced, and 3 specific tryptic peptides were selected as reporter peptides from the 9 proteolytic fragments identified (35% sequence coverage). Acetic anhydride and its deuterated form were used to synthesize peptide standards labeled with light and heavy isotopes. Mixtures of the two isotopes at different molar concentrations were used to establish linearity, and the results demonstrated that measurements made by isotope dilution combined with LC-MS/MS met the generally accepted criteria for bioassays, with slope values close to 1, R2 values above 0.98 and coefficients of variation (CV%) below 8%. The precision and accuracy of the method were evaluated using chicken meat homogenate samples into which SEB was introduced. SEB was spiked at 0.2, 1 and 2 pmol/g. The analytical results show that the method provides an accuracy range of 84.9 to 91.1%. Overall, the results presented in this thesis demonstrate that proteomic methods can be used effectively to detect and quantify SEB in food matrices. Keywords: mass spectrometry; isotopic labeling; quantitative proteomics; enterotoxins
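As a worked illustration of the isotope-dilution principle described above (not the thesis' exact procedure), the endogenous amount follows from the light/heavy peak-area ratio and the known spike of heavy-labeled standard, assuming equal response factors for the two forms:

```python
# Sketch of isotope-dilution quantification: the light (endogenous) to
# heavy (spiked standard) peak-area ratio, scaled by the known amount of
# heavy standard, gives the endogenous amount. Numbers are illustrative.
def amount_light(area_light: float, area_heavy: float,
                 heavy_spike_pmol: float) -> float:
    """Endogenous amount (pmol) assuming equal response factors."""
    return (area_light / area_heavy) * heavy_spike_pmol

# e.g., peak areas 8.5e5 (light) vs 1.0e6 (heavy), 1 pmol heavy spiked:
print(amount_light(8.5e5, 1.0e6, 1.0))  # -> 0.85 pmol
```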
Abstract:
Supervised learning of large-scale hierarchical networks is currently enjoying spectacular success. Despite this excitement, unsupervised learning remains, according to many researchers, a key element of Artificial Intelligence, where agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses various research topics related to the density estimation problem through Boltzmann machines (BMs), probabilistic graphical models at the heart of deep learning. Our contributions touch on sampling, partition function estimation, optimization, and the learning of invariant representations. The thesis begins by presenting a new adaptive sampling algorithm, which automatically adjusts the temperature of the Markov chains under simulation in order to maintain a high convergence rate throughout learning. When used in the context of stochastic maximum likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate, as well as a better convergence rate. Our results are presented for BMs, but the method is general and applicable to learning any probabilistic model that relies on Markov chain sampling. While the maximum-likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. Unlike traditional approaches, which treat a given model as a black box, we propose instead to exploit the dynamics of learning by estimating the successive changes in log-partition incurred at each parameter update. The estimation problem is reformulated as an inference problem similar to Kalman filtering, but on a two-dimensional graph whose dimensions correspond to the time axis and the temperature parameter. On the topic of optimization, we also present an algorithm for efficiently applying the natural gradient to Boltzmann machines with thousands of units. Until now, its adoption was limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids explicitly computing the Fisher information matrix (and its inverse) by exploiting a linear solver combined with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML. Its implementation unfortunately remains inefficient in computation time. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of "spike & slab" restricted Boltzmann machines (ssRBM), which we modify so as to model binary and sparse distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (called "slabs"). This translates into increased invariance in the representation and a better classification rate when few labeled data are available.
We close this thesis with an ambitious topic: learning representations that can separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of "pooling" in complementary vector subspaces.
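A hedged sketch of the metric-free idea behind MFNG: solve Fx = g for the natural-gradient step with a matrix-free linear solver, supplying only Fisher-vector products. The empirical outer-product Fisher and the toy data below are stand-ins; the thesis' actual estimator for Boltzmann machines differs.

```python
# Sketch of a metric-free natural gradient step: never form the Fisher
# matrix F; give the conjugate-gradient solver only v -> F @ v products.
# The empirical Fisher J.T @ J / n used here is a stand-in estimator.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n, d = 512, 100                      # samples, parameters (toy sizes)
J = rng.normal(size=(n, d))          # per-sample score vectors (invented)
g = rng.normal(size=d)               # plain gradient (invented)
damping = 1e-3                       # Tikhonov damping for stability

def fisher_vec(v):
    return J.T @ (J @ v) / n + damping * v   # computes F @ v without F

F = LinearOperator((d, d), matvec=fisher_vec)
step, info = cg(F, g)                # natural-gradient direction ~ F^-1 g
print("CG converged:", info == 0)
```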
Abstract:
This thesis studies high-dimensional sequence models based on recurrent neural networks (RNN) and their application to music and speech. Although in principle RNNs can represent the long-term dependencies and complex temporal dynamics characteristic of sequences of interest such as video, audio and natural language, they have not been used to their full potential since their introduction by Rumelhart et al. (1986a), owing to the difficulty of training them effectively by gradient descent. Recently, the successful application of Hessian-free optimization and other advanced training techniques has led to a resurgence of their use in several state-of-the-art systems. The work in this thesis is part of that development. The central idea is to exploit the flexibility of RNNs to learn a probabilistic description of sequences of symbols, i.e., high-level information associated with the observed signals, which in turn can serve as a prior to improve the accuracy of information retrieval. For example, by modeling the evolution of groups of notes in polyphonic music, of chords in a harmonic progression, of phonemes in a spoken utterance, or of individual sources in an audio mixture, we can significantly improve methods for polyphonic transcription, chord recognition, speech recognition and audio source separation, respectively. The practical application of our models to these tasks is detailed in the last four articles presented in this thesis. In the first article, we replace the output layer of an RNN with conditional restricted Boltzmann machines to describe much richer multimodal output distributions. In the second article, we evaluate and propose advanced methods for training RNNs. In the last four articles, we examine different ways of combining our symbolic models with deep networks and non-negative matrix factorization, notably through products of experts, input/output architectures, and generative frameworks that generalize hidden Markov models. We also propose and analyze efficient inference methods for these models, such as greedy chronological search, high-dimensional beam search, pruned beam search and gradient descent. Finally, we address the issues of label bias, teacher forcing, temporal smoothing, regularization and pre-training.
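A minimal sketch of the beam search listed among the inference methods, assuming a generic log_probs(prefix) next-symbol interface; the placeholder model below is hypothetical, and the thesis' high-dimensional and pruned variants add task-specific refinements.

```python
# Sketch of beam search over symbol sequences: keep the K best prefixes
# by cumulative log-probability at each step. The scoring model below is
# a hypothetical stand-in for an RNN's next-symbol distribution.
import math
from typing import List, Tuple

VOCAB = ["C", "E", "G", "rest"]

def log_probs(prefix: Tuple[str, ...]) -> List[float]:
    # Placeholder: uniform distribution; a real model conditions on prefix.
    return [math.log(1.0 / len(VOCAB))] * len(VOCAB)

def beam_search(steps: int, beam_width: int = 3):
    beams = [((), 0.0)]                       # (prefix, cumulative logp)
    for _ in range(steps):
        candidates = []
        for prefix, score in beams:
            for sym, lp in zip(VOCAB, log_probs(prefix)):
                candidates.append((prefix + (sym,), score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]       # prune to the K best prefixes
    return beams

print(beam_search(4))
```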
Abstract:
The thesis mainly focuses on material characterization in different environments: freely available samples taken in planar form, biological samples available in small quantities, and buried objects. The free-space method finds many applications in the fields of industry, medicine and communication. As it is a non-contact method, it can be employed for monitoring the electrical properties of materials moving along a conveyor belt in real time; measurement on such systems at high temperature is also possible. NID theory can be applied to the characterization of thin films: the dielectric properties of thin films deposited on any dielectric substrate can be determined. In the chemical industry, the stages of a chemical reaction can be monitored online, which is more efficient as it saves time and avoids the risk of sample collection. Dielectric contrast is one of the main factors that decides the detectability of a system. It could be noted that two dielectric objects of the same dielectric constant 3.2 (εr of a plastic mine) placed in a medium of dielectric constant 2.56 (εr of sand) could even be detected employing time-domain analysis of the reflected signal. This type of detection is of strategic importance as it provides a solution to the problem of clearing non-metallic mines, whose demining using conventional techniques has proved futile. The studies on the detection of voids and leakage in pipes also find many applications. The determined electrical properties of tissues can be used for numerical modeling of cells, microwave imaging, SAR tests, etc. All these techniques need accurate determination of the dielectric constant. In the modern world, the use of cellular and other wireless communication systems is booming; at the same time, people are concerned about the hazardous effects of microwaves on living cells. The effect is usually studied on human phantom models, whose construction requires knowledge of the dielectric parameters of the various body tissues. It is in this context that the present study gains significance. The case study on biological samples shows that the properties of normal and infected body tissues are different. Even though a change in the dielectric properties of infected samples relative to normal ones may not be clear evidence of an ailment, it is an indication of some disorder. In the medical field, the free-space method may be adapted for imaging biological samples. This method can also be used in wireless technology: the electrical properties and attenuation of obstacles in the path of RF waves can be evaluated using free waves, and an intelligent system may be developed for controlling the power output or frequency depending on the fed-back attenuation values. The simulation employed in GPR can be extended to explore the effects of factors such as different proportions of water content in the soil and the level and roughness of the soil on the reflected signal; this may find applications in geological exploration. In the detection of mines, a state-of-the-art technique for scanning and imaging an active minefield can be developed using GPR: the probing antenna can be attached to a robotic arm capable of three degrees of rotation, and the whole detecting system can be housed in a military vehicle. In industry, a system based on the GPR principle can be developed for monitoring liquid or gas through a pipe, as a pipe with and without the sample gives different reflection responses.
It may also be implemented for the online monitoring of the different stages of extraction and purification of crude petroleum in a plant. Since biological samples show fluctuations in their dielectric nature with time and other physiological conditions, more investigation in this direction should be done. Infected cells at various stages of advancement, as well as normal cells, should be analysed. The results of such comparative studies can be utilized to detect the onset of such diseases; by studying the properties of infected tissues at different stages, the threshold of detectability of infected cells can be determined.
Abstract:
Lead-free magnetoelectrics with a strong sub-resonant (broad frequency range) magnetoelectric coupling coefficient (MECC) are the goal of the day and could revolutionise the microelectronics and microelectromechanical systems (MEMS) industries. We report a giant resonant MECC in lead-free nanograined Barium Titanate–CoFe (alloy)–Barium Titanate [BTO-CoFe-BTO] sandwiched thin films. The resonant MECC values obtained here are the highest recorded in thin films/multilayers, and the sub-resonant MECC values are comparable to the highest MECC reported in 2-2 layered structures. The MECC was enhanced by two orders of magnitude at a low-frequency resonance. The results show the potential of these thin films for transducers, magnetic-field-assisted energy harvesters, switching devices, and storage applications. Some possible device integration techniques are also discussed.
Abstract:
In today's semiconductor and MEMS technologies, photolithography is the workhorse for the fabrication of functional devices. The conventional way of microstructuring (the so-called top-down approach) starts with photolithography, followed by patterning the structures using etching, especially dry etching. The requirement for smaller and hence faster devices leads to a decrease of the feature size to the range of several nanometers. However, the production of devices in this scale range needs photolithography equipment that overcomes the diffraction limit. New photolithography techniques have therefore been developed recently, but they are rather expensive and restricted to plane surfaces. Recently a new route has been presented - the so-called bottom-up approach - in which functional devices are obtained starting from a single atom or molecule. This creates a new field - nanotechnology - concerned with structures of dimensions 1-100 nm, which has the possibility to replace conventional photolithography through its integral part, self-assembly. However, this technique requires additional and special equipment and is therefore not yet widely applicable. This work presents a general scheme for the fabrication of silicon and silicon dioxide structures with lateral dimensions of less than 100 nm that avoids high-resolution photolithography processes. For the self-aligned formation of extremely small openings in silicon dioxide layers at sharpened surface structures, the angle-dependent etching rate distribution of silicon dioxide under plasma etching with a fluorocarbon gas (CHF3) was exploited. Subsequent anisotropic plasma etching of the silicon substrate material through the perforated silicon dioxide masking layer results in high-aspect-ratio trenches of approximately the same lateral dimensions. The latter can be reduced and precisely adjusted between 0 and 200 nm by thermal oxidation of the silicon structures, owing to the volume expansion of silicon during oxidation. On this basis, a technology for the fabrication of SNOM calibration standards is presented. Additionally, the trenches so formed were used as a template for CVD deposition of diamond, resulting in a high-aspect-ratio diamond knife. A lithography-free method for the production of periodic and nonperiodic surface structures using the angular dependence of the etching rate is also presented. It combines the self-assembly of masking particles with the conventional plasma etching techniques known from microelectromechanical system technology. The method is generally applicable to bulk as well as layered materials. In this work, layers of glass spheres of different diameters were assembled on the sample surface, forming a mask against plasma etching. Silicon surface structures with a periodicity of 500 nm and feature dimensions of 20 nm were produced in this way. Thermal oxidation of the so-structured silicon substrate offers the capability to vary the fill factor of the periodic structure, owing to the volume expansion during oxidation, but also to define silicon dioxide surface structures by selective plasma etching. Similar structures can be obtained simply by structuring silicon dioxide layers on silicon. The method offers a simple route for bridging nano- and microtechnology and, moreover, an uncomplicated way to fabricate photonic crystals.
In vitro cumulative gas production techniques: History, methodological considerations and challenges
Abstract:
Methodology used to measure in vitro gas production is reviewed to determine the impacts of sources of variation on the resultant gas production profiles (GPP). Current methods include measurement of gas production at constant pressure (e.g., use of gas-tight syringes), a system that is inexpensive but may be less sensitive than others, thereby affecting its suitability in some situations. Automated systems that measure gas production at constant volume allow pressure to accumulate in the bottle, which is recorded at different times to produce a GPP; the pressure may become sufficiently high that the solubility of evolved gases in the medium is affected, resulting in a recorded volume of gas lower than that predicted from stoichiometric calculations. Several other methods measure gas production at constant pressure and volume with either pressure transducers or sensors, and these may be manual, semi-automated or fully automated in operation. In these systems, gas is released as pressure increases, and vented gas is recorded. Agitating the medium does not consistently produce more gas with automated systems, and little or no effect of agitation was observed with manual systems. The apparatus affects the GPP, but mathematical manipulation may enable effects of the apparatus to be removed. The amount of substrate affects the volume of gas produced, but not the rate of gas production, provided there is sufficient buffering capacity in the medium. Systems that use a very small amount of substrate are prone to experimental error in sample weighing. The effect of sample preparation on GPP has been found to be important, but further research is required to determine the optimum preparation that mimics animal chewing. The inoculum is the single largest source of variation in measuring GPP, as rumen fluid is variable, and sampling schedules, diets fed to donor animals and ratios of rumen fluid to medium must be selected such that microbial activity is sufficiently high that it does not affect the rate and extent of fermentation. The species of donor animal may also cause differences in GPP. End-point measures can be mathematically manipulated to account for species differences, but rates of fermentation are not related. Other sources of inocula that have been used include caecal fluid (primarily for investigating hindgut fermentation in monogastrics), effluent from simulated rumen fermentation (e.g., 'Rusitec', which was as variable as rumen fluid), faeces, and frozen or freeze-dried rumen fluid (both less active than fresh rumen fluid). Use of mixtures of cell-free enzymes, or pure cultures of bacteria, may be a way of increasing GPP reproducibility while reducing reliance on surgically modified animals; however, more research is required to develop these inocula. A number of media have been developed that buffer the incubation and provide the relevant micro-nutrients to the microorganisms. To date, little research has been completed on relationships between the composition of the medium and measured GPP. However, comparing GPP from media either rich in N or N-free allows assessment of the contributions of N-containing compounds in the sample. (c) 2005 Published by Elsevier B.V.
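As a hedged illustration of handling GPP data (a common practice, not a method evaluated in this review), cumulative gas volumes are often summarized by fitting a lag-exponential curve; the sketch below does so with scipy on invented readings.

```python
# Sketch: fit a lag-exponential model V(t) = A * (1 - exp(-k*(t - L)))
# to cumulative gas production readings. Data points are invented.
import numpy as np
from scipy.optimize import curve_fit

def gpp(t, A, k, L):
    # Clamp t - L at zero so no gas is produced before the lag ends.
    return A * (1.0 - np.exp(-k * np.clip(t - L, 0.0, None)))

t = np.array([2, 4, 8, 12, 24, 48, 72, 96], dtype=float)   # incubation (h)
v = np.array([3, 9, 25, 38, 61, 78, 83, 85], dtype=float)  # gas (mL/g)

(A, k, L), _ = curve_fit(gpp, t, v, p0=(90.0, 0.05, 2.0))
print(f"asymptote A={A:.1f} mL/g, rate k={k:.3f} /h, lag L={L:.1f} h")
```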