976 results for Standard method
Abstract:
Enzymatic digestion of proteins is a fundamental method for proteomic studies as well as for bottom-up sequencing. Enzymes are added either in solution (homogeneous phase) or directly on the polyacrylamide gel, depending on the method already used to isolate the protein. Immobilized, i.e., insoluble, proteolytic enzymes offer several advantages, such as enzyme reuse, a high enzyme-to-substrate ratio, and easy integration with fluidic systems. In this study, chymotrypsin (CT) was immobilized by cross-linking with glutaraldehyde (GA), which creates insoluble particles. The immobilization efficiency, determined by absorbance spectrophotometry, was 96% of the total mass of CT added. Several different immobilization (i.e., cross-linking) conditions, such as buffer composition/pH and the mass of CT during cross-linking, as well as different storage conditions for the GA-CT particles, such as temperature, duration, and humidity, were evaluated by comparing capillary electrophoresis (CE) peptide maps of standard proteins digested by the particles. The GA-CT particles were used to digest BSA as an example of a large folded protein requiring denaturation prior to digestion, and to digest casein labeled with fluorescein isothiocyanate (FITC) as an example of a derivatized substrate, in order to verify the enzymatic activity of GA-CT in the presence of fluorescent groups bound to the substrate. Peptide mapping of the digests produced by the GA-CT particles was performed by CE with ultraviolet (UV) absorbance or laser-induced fluorescence detection. FITC-casein was indeed digested by GA-CT to the same extent as by free (i.e., soluble) CT.
An immobilized enzyme microreactor (IMER) was fabricated by immobilizing CT in a fused-silica capillary of 250 µm internal diameter pretreated with 3-aminopropyltriethoxysilane to functionalize the inner wall with amine groups. GA was reacted with the amine groups, and CT was then immobilized by cross-linking with the GA. The GA-CT IMERs were prepared using an automated CE system and then used to digest BSA, myoglobin, a 9-residue peptide, and a dipeptide as examples of large, medium, and small substrates, respectively. Comparison of the peptide maps of the digests obtained by CE-UV or CE-mass spectrometry allowed us to study the immobilization conditions as a function of buffer composition and pH and of the cross-linking reaction time. A fluorescence microscopy study, a tool used to examine the extent and location of GA-CT immobilization in the IMER, showed that immobilization occurred mainly on the wall and that the cross-linking did not extend as far toward the center of the capillary as anticipated.
Abstract:
Cystic fibrosis (CF) is a genetic disease that leads to progressive lung destruction and, eventually, death. The main secondary complication is CF-related diabetes (CFRD). Accelerated clinical deterioration (loss of weight and of pulmonary function) is observed before diagnosis. The main objective of my doctoral project is to determine, using the oral glucose tolerance test (OGTT), whether there is a link between hyperglycemia and/or hypoinsulinemia and the clinical deterioration observed before the diagnosis of CFRD. We will thus evaluate the usefulness of the intermediate OGTT time points in order to simplify the diagnosis of dysglycemia and to establish new markers that identify patients at risk of clinical deterioration. The OGTT is the standard method used in CF for the diagnosis of CFRD. We demonstrated that the glucose values obtained at the 90-min time point of the OGTT are sufficient to predict the glucose tolerance of adult patients with CF, otherwise established using the 2-h OGTT values. We propose 90-min OGTT glucose thresholds above 9.3 mmol/L and above 11.5 mmol/L to detect glucose intolerance and CFRD, respectively. An important cause of CFRD is a defect in insulin secretion. Women with CF have a higher risk of developing CFRD than men, so we explored whether their insulin secretion was impaired. Contrary to our hypothesis, we observed that women with CF had higher total insulin secretion than men with CF, but at levels comparable to healthy women. The recently proposed glucose tolerance group termed indeterminate (INDET: 60-min OGTT > 11.0 but 2-h OGTT < 7.8 mmol/L) is at high risk of developing CFRD.
However, the clinical characteristics of this group in adult patients with CF had not been established. We observed that the INDET group has reduced pulmonary function, similar to the de novo CFRD group, and none of the glucose or insulin parameters explained this observation. In a pediatric population of patients with CF, an association has been reported between elevated 60-min OGTT glucose and decreased pulmonary function. In our group of adult patients with CF, there is a negative association between 60-min OGTT glucose and pulmonary function and a positive correlation between 60-min OGTT insulin and body mass index (BMI). Moreover, patients with 60-min OGTT glucose > 11.0 mmol/L have decreased pulmonary function and low insulin sensitivity, whereas those with 60-min OGTT insulin < 43.4 μU/mL have decreased BMI as well as decreased pulmonary function. In conclusion, we are the first group to demonstrate that 1) the OGTT can be shortened by 30 min without compromising glucose tolerance categorization, 2) women with CF show preserved insulin secretion, 3) the INDET group presents early pulmonary function abnormalities comparable to the de novo CFRD group, and 4) first-hour OGTT glucose and insulin are associated with the two key elements of clinical deterioration. It is crucial to elucidate the important pathophysiological mechanisms in order to better predict the onset of the clinical deterioration preceding CFRD.
Abstract:
The fabrication and analytical applications of two types of potentiometric sensors for the determination of ketoconazole (KET) are described. The sensors are based on the use of the KET-molybdophosphoric acid (MPA) ion pair as electroactive material. The fabricated sensors include both polymer membrane and carbon paste electrodes. Both sensors showed a linear, stable and near-Nernstian slope of 57.8 mV/decade and 55.2 mV/decade for the PVC membrane and carbon paste sensors, respectively, over a relatively wide range of KET concentration (1×10⁻²–5×10⁻⁵ M and 1×10⁻²–1×10⁻⁶ M). The sensors showed fast response times of <30 s and <45 s, respectively. A useful pH range of 3–6 was obtained for both types of sensors. A detection limit of 2.96×10⁻⁵ M was obtained for the PVC membrane sensor and 6.91×10⁻⁶ M for the carbon paste sensor. The proposed sensors proved to have a good selectivity for KET with respect to a large number of ions. The proposed sensors were successfully applied for the determination of KET in pharmaceutical formulations. The results obtained are in good agreement with the values obtained by the standard method.
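As a minimal illustration of how an electrode slope in mV/decade is extracted from calibration data, the sketch below fits EMF against log10 concentration by least squares. The calibration points are hypothetical, not taken from the paper:

```python
import math

def nernstian_slope(concentrations, emfs_mV):
    """Least-squares slope of EMF (mV) versus log10(concentration),
    i.e. the electrode slope in mV/decade."""
    x = [math.log10(c) for c in concentrations]
    n = len(x)
    mx = sum(x) / n
    my = sum(emfs_mV) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, emfs_mV))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Hypothetical calibration spanning 1e-5 to 1e-2 M; the EMF values
# are constructed to follow an exactly 57.8 mV/decade response.
conc = [1e-5, 1e-4, 1e-3, 1e-2]
emf = [100.0 + 57.8 * (math.log10(c) + 5) for c in conc]
print(round(nernstian_slope(conc, emf), 1))  # 57.8
```

A slope close to the theoretical Nernstian value (59.2 mV/decade at 25 °C for a monovalent ion) indicates a well-behaved electrode.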
Abstract:
Material synthesis and characterization has been one of the major areas of scientific research for the past few decades. Various techniques have been suggested for the preparation and characterization of thin films and bulk samples according to the industrial and scientific applications. Material characterization implies the determination of the electrical, magnetic, optical or thermal properties of the material under study. Though it is possible to study all these properties of a material, we concentrate on the thermal and optical properties of certain polymers. The thermal properties are determined using the photothermal beam deflection technique, and the optical properties are obtained from various spectroscopic analyses. In addition, thermal properties of a class of semiconducting compounds, the copper delafossites, are determined by the photoacoustic technique. The photothermal technique is one of the most powerful tools for non-destructive characterization of materials. It forms a broad class of techniques, which includes laser calorimetry, the pyroelectric technique, photoacoustics, the photothermal radiometric technique, the photothermal beam deflection technique, etc. However, the choice of a suitable technique depends upon the nature of the sample and its environment, the purpose of the measurement, the nature of the light source used, etc. The polymer samples under the present investigation are thermally thin and optically transparent at the excitation (pump beam) wavelength. The photothermal beam deflection technique is advantageous in that it can be used for the determination of the thermal diffusivity of samples irrespective of them being thermally thick or thermally thin and optically opaque or optically transparent. Hence, of all the abovementioned techniques, the photothermal beam deflection technique is employed for the determination of the thermal diffusivity of these polymer samples.
However, the semiconducting samples studied are thermally thick and optically opaque, and therefore a much simpler photoacoustic technique is used for their thermal characterization. The production of polymer thin film samples has gained considerable attention for the past few years. Different techniques like plasma polymerization, electron bombardment, ultraviolet irradiation and thermal evaporation can be used for the preparation of polymer thin films from their respective monomers. Among these, plasma polymerization, or glow discharge polymerization, has been widely used for polymer thin film preparation. At the earlier stages of its discovery, the plasma polymerization technique was not treated as a standard method for the preparation of polymers. This method gained importance only when it was used to make special coatings on metals and began to be recognized as a technique for synthesizing polymers. The well-recognized concept of conventional polymerization is based on molecular processes by which the size of the molecule increases and rearrangement of atoms within a molecule seldom occurs. However, polymer formation in plasma is recognized as an atomic process, in contrast to the above molecular process. These films are pinhole free, highly branched and cross-linked, heat resistant, exceptionally dielectric, etc. The optical properties, like the direct and indirect bandgaps, refractive indices, etc., of certain plasma polymerized thin films are determined from the UV-VIS-NIR absorption and transmission spectra. The possible linkage in the formation of the polymers is suggested by comparing the FTIR spectra of the monomer and the polymer. The thermal diffusivity has been measured using the photothermal beam deflection technique as stated earlier.
This technique measures the refractive index gradient established at the sample surface and in the adjacent coupling medium by passing another optical beam (probe beam) through this region, hence the name probe beam deflection. The deflection is detected using a position sensitive detector, and its output is fed to a lock-in amplifier from which the amplitude and phase of the deflection can be directly obtained. The amplitude and phase of the deflection signal are suitably analyzed for determining the thermal diffusivity. Another class of compounds under the present investigation is the copper delafossites. These samples, in the form of pellets, are thermally thick and optically opaque. The thermal diffusivity of such semiconductors is investigated using the photoacoustic technique, which measures the pressure change using an electret microphone. The output of the microphone is fed to a lock-in amplifier to obtain the amplitude and phase, from which the thermal properties are obtained. The variation in thermal diffusivity with composition is studied.
Abstract:
The fabrication and electrochemical response characteristics of two novel potentiometric sensors for the selective determination of domperidone (DOM) are described. The two fabricated sensors incorporate the DOM–PTA (phosphotungstic acid) ion pair as the electroactive material. The sensors include a PVC membrane sensor and a carbon paste sensor. The sensors showed a linear, stable, and near-Nernstian slope of 56.5 and 57.8 mV/decade for the PVC membrane and carbon paste sensors, respectively, over a relatively wide range of DOM concentration (1.0×10⁻¹–1.0×10⁻⁵ and 1.0×10⁻¹–3.55×10⁻⁶ M). The response time of the DOM–PTA membrane sensor was less than 25 s, and that of the carbon paste sensor was less than 20 s. A useful pH range of 4–6 was obtained for both types of sensors. A detection limit of 7.36×10⁻⁵ M was obtained for the PVC membrane sensor and 1.0×10⁻⁶ M for the carbon paste sensor. The proposed sensors showed very good selectivity to DOM in the presence of a large number of other interfering ions. The analytical application of the developed sensors in the determination of the drug in pharmaceutical formulations such as tablets was investigated. The results obtained are in good agreement with the values obtained by the standard method. The sensors were also applied for the determination of DOM in real samples such as urine by the standard addition method.
Abstract:
In this study, an improved technology was developed to convert recalcitrant coir pith into environmentally friendly organic manure. The improvement over the standard method of composting is the substitution of urea with nitrogen-fixing bacteria, viz. Azotobacter vinelandii and Azospirillum brasilense. The combined action of the microorganisms enhances the biodegradation of coir pith. In the present study, Pleurotus sajor caju, an edible mushroom with the ability to degrade coir pith, was used, and the addition of nitrogen-fixing bacteria such as Azotobacter vinelandii and Azospirillum brasilense accelerated the action of the fungus on coir pith. The use of these microorganisms brings about definite changes in the NPK, ammonia, organic carbon and lignin contents of coir pith. This study will encourage the use of biodegraded coir pith as organic manure for agricultural/horticultural purposes to obtain better yields, and it can serve as a better technology to solve the problem of accumulated coir pith in coir-based industries.
Abstract:
In [4], Guillard and Viozat propose a finite volume method for the simulation of inviscid steady as well as unsteady flows at low Mach numbers, based on a preconditioning technique. The scheme satisfies the results of a single scale asymptotic analysis in a discrete sense and has the advantage that it can be derived by a slight modification of the dissipation term within the numerical flux function. Unfortunately, numerical experiments show that the preconditioned approach combined with an explicit time integration scheme turns out to be unstable if the time step Δt does not satisfy the requirement to be O(M²) as the Mach number M tends to zero, whereas the corresponding standard method remains stable up to Δt = O(M), M → 0, which results from the well-known CFL condition. We present a comprehensive mathematical substantiation of this numerical phenomenon by means of a von Neumann stability analysis, which reveals that, in contrast to the standard approach, the dissipation matrix of the preconditioned numerical flux function possesses an eigenvalue growing like M⁻² as M tends to zero, thus causing the diminishment of the stability region of the explicit scheme. Thereby, we present statements for both the standard preconditioner used by Guillard and Viozat [4] and the more general one due to Turkel [21]. The theoretical results are afterwards confirmed by numerical experiments.
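The stability scalings discussed above can be summarized schematically; this is a restatement of the abstract's claims, not a derivation:

```latex
% Explicit-scheme stability limits as M -> 0:
\Delta t_{\text{prec}} = O(M^{2}), \qquad \Delta t_{\text{std}} = O(M).
% Cause: the preconditioned dissipation matrix has an eigenvalue
% \lambda_{\max} = O(M^{-2}), and von Neumann stability of the explicit
% scheme requires
\Delta t \, \lambda_{\max} = O(1) \;\Longrightarrow\; \Delta t = O(M^{2}).
```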
Abstract:
In this thesis, (outlier-)robust estimators and tests for the unknown parameter of a continuous density function are developed using likelihood depth, introduced by Mizera and Müller (2004). The developed methods are then applied to three different distributions. For one-dimensional parameters, the likelihood depth of a parameter in a data set is computed as the minimum of the proportion of observations for which the derivative of the log-likelihood function with respect to the parameter is non-negative and the proportion for which this derivative is non-positive. The parameter with the greatest depth is thus the one for which both counts are equal. This parameter is initially chosen as the estimator, since likelihood depth is meant to measure how well a parameter fits the data set. Asymptotically, the parameter with the greatest depth is the one for which the probability that the derivative of the log-likelihood function with respect to the parameter is non-negative for an observation equals one half. If this does not hold for the underlying parameter, the estimator based on likelihood depth is biased. This thesis shows how this bias can be corrected so that the corrected estimators are consistent. To develop tests for the parameter, the simplicial likelihood depth introduced by Müller (2005), which is a U-statistic, is used. It turns out that for the same distributions for which likelihood depth yields biased estimators, the simplicial likelihood depth is an unbiased U-statistic. In particular, its asymptotic distribution is known, and tests for various hypotheses can be formulated. The shift in depth, however, leads to poor power of the corresponding test for some hypotheses. Corrected tests are therefore introduced, and conditions are given under which they are consistent. The thesis consists of two parts.
The first part presents the general theory of the estimators and tests and proves their consistency. In the second part, the theory is applied to three different distributions: the Weibull distribution and the Gaussian and Gumbel copulas. This demonstrates how the methods of the first part can be used to derive (robust) consistent estimators and tests for the unknown parameter of the distribution. Overall, it is shown that robust estimators and tests can be found for the three distributions using likelihood depths. On uncontaminated data, existing standard methods are sometimes superior, but the advantage of the new methods shows in contaminated data and data with outliers.
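The one-dimensional likelihood-depth computation described above can be sketched in a few lines. The exponential model and the data below are hypothetical illustrations, not taken from the thesis:

```python
def likelihood_depth(theta, data, score):
    """Likelihood depth of parameter theta in the sample: the minimum
    of the fraction of observations with non-negative score (derivative
    of the log-likelihood w.r.t. theta) and the fraction with
    non-positive score."""
    n = len(data)
    nonneg = sum(1 for x in data if score(theta, x) >= 0) / n
    nonpos = sum(1 for x in data if score(theta, x) <= 0) / n
    return min(nonneg, nonpos)

# Score for an Exp(lambda) model: d/d-lambda log(lambda * exp(-lambda*x))
score_exp = lambda lam, x: 1.0 / lam - x

data = [0.2, 0.5, 0.9, 1.4, 2.1, 3.0]
# Depth is maximal where half the scores are >= 0 and half <= 0,
# i.e. near lambda = 1 / median(data).
print(likelihood_depth(1.0, data, score_exp))  # 0.5
```

The maximal attainable depth of 0.5 is reached here because exactly half of the observations lie below 1/λ = 1.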
Abstract:
The 11-yr solar cycle temperature response to spectrally resolved solar irradiance changes and associated ozone changes is calculated using a fixed dynamical heating (FDH) model. Imposed ozone changes are from satellite observations, in contrast to some earlier studies. A maximum of 1.6 K is found in the equatorial upper stratosphere and a secondary maximum of 0.4 K in the equatorial lower stratosphere, forming a double peak in the vertical. The upper maximum is primarily due to the irradiance changes while the lower maximum is due to the imposed ozone changes. The results compare well with analyses using the 40-yr ECMWF Re-Analysis (ERA-40) and NCEP/NCAR datasets. The equatorial lower stratospheric structure is reproduced even though, by definition, the FDH calculations exclude dynamically driven temperature changes, suggesting an important role for an indirect dynamical effect through ozone redistribution. The results also suggest that differences between the Stratospheric Sounding Unit (SSU)/Microwave Sounding Unit (MSU) and ERA-40 estimates of the solar cycle signal can be explained by the poor vertical resolution of the SSU/MSU measurements. The adjusted radiative forcing of climate change is also investigated. The forcing due to irradiance changes was 0.14 W m−2, which is only 78% of the value obtained by employing the standard method of simple scaling of the total solar irradiance (TSI) change. The difference arises because much of the change in TSI is at wavelengths where ozone absorbs strongly. The forcing due to the ozone change was only 0.004 W m−2 owing to strong compensation between negative shortwave and positive longwave forcings.
Abstract:
We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration by time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently the 90% confidence interval for the differences in the means on the natural log scale should be within the interval (-0.2231, 0.2231). We compare the gold standard method for calculation of the sample size based on the non-central t distribution with those based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desirable level of power might be under- or overestimated compared to the gold standard method. However, in some situations the approximate methods produce very similar sample sizes to the gold standard method. Copyright © 2005 John Wiley & Sons, Ltd.
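As a minimal sketch of the bioequivalence criterion described above, the following uses the normal-approximation quantile (1.6449) rather than the noncentral t distribution of the gold standard method, and the study numbers are hypothetical:

```python
import math

def be_decision_log_scale(diff_log, se_log, z90=1.6449):
    """90% CI for the difference of log-means; bioequivalence is claimed
    when the CI lies inside (-0.2231, 0.2231), i.e. the ratio of means
    lies inside (0.80, 1.25).  z90 is the normal quantile for a
    two-sided 90% interval."""
    lo = diff_log - z90 * se_log
    hi = diff_log + z90 * se_log
    return (lo, hi), (-math.log(1.25) < lo and hi < math.log(1.25))

# Hypothetical study: estimated log-difference 0.05, standard error 0.08
ci, bioequivalent = be_decision_log_scale(0.05, 0.08)
print(bioequivalent)  # True: CI = (-0.082, 0.182) lies inside (-0.2231, 0.2231)
```

Note that 0.2231 = ln(1.25) = -ln(0.80), which is why the two formulations of the criterion are equivalent.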
Abstract:
Nitrogen adsorption on carbon nanotubes is widely studied because nitrogen adsorption isotherm measurement is a standard method applied for porosity characterization. A further reason is that carbon nanotubes are potential adsorbents for the separation of nitrogen from oxygen in air. The study presented here describes the results of GCMC simulations of nitrogen (three-site model) adsorption on single- and multi-walled closed nanotubes. The results obtained are described by a new adsorption isotherm model proposed in this study. The model can be treated as the tube analogue of the GAB isotherm, taking into account the lateral adsorbate-adsorbate interactions. We show that the model describes the simulated data satisfactorily. Next, this new approach is applied to the description of experimental data measured on different commercially available (and HRTEM-characterized) carbon nanotubes. We show that generally a quite good fit is observed, and it is therefore suggested that the observed mechanism of adsorption in the studied materials is mainly determined by adsorption on tubes separated at large distances, so the tubes behave almost independently.
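For orientation, the classical (flat-surface) GAB isotherm that the proposed model generalizes can be sketched as follows; the parameter values are illustrative only, not fitted to the paper's data:

```python
def gab_isotherm(x, n_m, C, K):
    """Classical GAB adsorption isotherm (flat-surface form; the paper's
    model is a tube analogue of this equation).  x is the relative
    pressure p/p0, n_m the monolayer capacity, C and K the energetic
    constants for the first and higher layers."""
    return n_m * C * K * x / ((1.0 - K * x) * (1.0 - K * x + C * K * x))

# Illustrative parameters; at small x the isotherm reduces to
# Henry's law, n ~ n_m * C * K * x.
print(round(gab_isotherm(0.2, n_m=1.0, C=50.0, K=0.8), 3))  # 1.077
```

Setting K = 1 recovers the BET equation, which is the usual sanity check on a GAB implementation.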
Abstract:
The dynamics of Northern Hemisphere major midwinter stratospheric sudden warmings (SSWs) are examined using transient climate change simulations from the Canadian Middle Atmosphere Model (CMAM). The simulated SSWs show good overall agreement with reanalysis data in terms of composite structure, statistics, and frequency. Using observed or model sea surface temperatures (SSTs) is found to make no significant difference to the SSWs, indicating that the use of model SSTs in the simulations extending into the future is not an issue. When SSWs are defined by the standard (wind-based) definition, an absolute criterion, their frequency is found to increase by ~60% by the end of this century, in conjunction with a ~25% decrease in their temperature amplitude. However, when a relative criterion based on the northern annular mode index is used to define the SSWs, no future increase in frequency is found. The latter is consistent with the fact that the variance of 100-hPa daily heat flux anomalies is unaffected by climate change. The future increase in frequency of SSWs using the standard method is a result of the weakened climatological mean winds resulting from climate change, which make it easier for the SSW criterion to be met. A comparison of winters with and without SSWs reveals that the weakening of the climatological westerlies is not a result of SSWs. The Brewer–Dobson circulation is found to be stronger by ~10% during winters with SSWs, a value that does not change significantly in the future.
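The standard (wind-based) SSW definition mentioned above can be sketched as a simple wind-reversal check. This is a deliberate simplification that omits the usual final-warming and event-separation rules, and the wind values are invented for illustration:

```python
def is_major_ssw(u_10hPa_60N):
    """Sketch of the standard wind-based SSW criterion: a major
    midwinter warming is flagged when the zonal-mean zonal wind at
    10 hPa, 60N reverses from westerly (positive) to easterly
    (negative).  Input: daily wind values in m/s."""
    return any(u < 0 for u in u_10hPa_60N)

print(is_major_ssw([22.0, 15.0, 4.0, -3.0, -8.0, 2.0]))  # True
print(is_major_ssw([25.0, 18.0, 12.0, 9.0]))             # False
```

Because this is an absolute threshold, a future weakening of the climatological westerlies makes reversals easier to achieve, which is exactly the mechanism for the frequency increase described in the abstract.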
Abstract:
An analysis method for diffusion tensor (DT) magnetic resonance imaging data is described, which, contrary to the standard method (multivariate fitting), does not require a specific functional model for diffusion-weighted (DW) signals. The method uses principal component analysis (PCA) under the assumption of a single fibre per pixel. PCA and the standard method were compared using simulations and human brain data. The two methods were equivalent in determining fibre orientation. PCA-derived fractional anisotropy and DT relative anisotropy had similar signal-to-noise ratio (SNR) and dependence on fibre shape. PCA-derived mean diffusivity had similar SNR to the respective DT scalar, and it depended on fibre anisotropy. Appropriate scaling of the PCA measures resulted in very good agreement between PCA and DT maps. In conclusion, the assumption of a specific functional model for DW signals is not necessary for characterization of anisotropic diffusion in a single fibre.
Abstract:
The structural and electronic properties of the perylene diimide liquid crystal PPEEB are studied using ab initio methods based on density functional theory (DFT). Using available experimental crystallographic data as a guide, we propose a detailed structural model for the packing of solid PPEEB. We find that due to the localized nature of the band edge wave function, theoretical approaches beyond the standard method, such as the hybrid functional PBE0, are required to correctly characterize the band structure of this material. Moreover, unlike previous assumptions, we observe the formation of hydrogen bonds between the side chains of different molecules, which leads to a dispersion of the energy levels. This result indicates that the side chains of the molecular crystal not only are responsible for its structural conformation but also can be used for tuning the electronic and optical properties of these materials.
Abstract:
In this article, we present the EM algorithm for performing maximum likelihood estimation of an asymmetric linear calibration model under the assumption of skew-normally distributed errors. A simulation study is conducted to evaluate the performance of the calibration estimator in interpolation and extrapolation situations. As an application to a real data set, we fitted the studied model to a dimensional measurement method that calculates testicular volume with a caliper, calibrated against ultrasonography as the standard method. By applying this methodology, we do not need to transform the variables to obtain symmetrical errors. Another interesting aspect of the approach is that the transformation developed to make the information matrix nonsingular when the skewness parameter is near zero leaves the parameter of interest unchanged. Model fitting is implemented, and the best choice between the usual calibration model and the model proposed in this article was evaluated using the Akaike information criterion, Schwarz's Bayesian information criterion and the Hannan-Quinn criterion.
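The three model-selection criteria named above follow standard formulas, sketched here with hypothetical log-likelihood values (not values from the article):

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Schwarz's Bayesian information criterion: k ln n - 2 ln L."""
    return k * math.log(n) - 2 * loglik

def hqc(loglik, k, n):
    """Hannan-Quinn criterion: 2k ln(ln n) - 2 ln L."""
    return 2 * k * math.log(math.log(n)) - 2 * loglik

# Hypothetical comparison: usual calibration model (3 parameters)
# vs skew-normal model (4 parameters), n = 100 observations.
n = 100
print(aic(-120.0, 3))     # 246.0
print(bic(-118.5, 4, n))
print(hqc(-118.5, 4, n))
```

For each criterion, the model with the smaller value is preferred, so the extra skewness parameter must buy enough log-likelihood to offset its penalty.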