262 results for Soxhlet Extractor
Abstract:
There is scientific evidence demonstrating the benefits of mushroom ingestion due to their richness in bioactive compounds such as mycosterols, in particular ergosterol [1]. Agaricus bisporus L. is the most consumed mushroom worldwide, with ergosterol representing 90% of its sterol fraction [2]. Thus, it is an interesting matrix from which to obtain ergosterol, a molecule with high commercial value. According to the literature, ergosterol concentration can vary between 3 and 9 mg per g of dried mushroom. Nowadays, traditional methods such as maceration and Soxhlet extraction are being replaced by emerging methodologies such as ultrasound-assisted (UAE) and microwave-assisted extraction (MAE) in order to decrease the amount of solvent used and the extraction time while increasing the extraction yield [2]. In the present work, A. bisporus was extracted varying several parameters relevant to UAE and MAE. UAE: solvent type (hexane and ethanol), ultrasound amplitude (50-100%) and sonication time (5-15 min); MAE: the solvent was fixed as ethanol, while time (0-20 min), temperature (60-210 °C) and solid-liquid ratio (1-20 g/L) were varied. Moreover, in order to decrease process complexity, the pertinence of applying a saponification step was evaluated. Response surface methodology was applied to generate mathematical models that allow maximizing and optimizing the response variables that influence the extraction of ergosterol. Concerning UAE, ethanol proved to be the best solvent to achieve higher levels of ergosterol (671.5 ± 0.5 mg/100 g dw, at 75% amplitude for 15 min), since hexane was only able to extract 152.2 ± 0.2 mg/100 g dw under the same conditions. Nevertheless, the hexane extract showed higher purity (11%) when compared with the ethanol counterpart (4%).
Furthermore, in the case of the ethanolic extract, the saponification step increased its purity to 21%, while for the hexane extract the purity remained similar; in fact, hexane shows higher selectivity for lipophilic compounds than ethanol. Regarding the MAE technique, the results showed that the optimal conditions (19 ± 3 min, 133 ± 12 °C and 1.6 ± 0.5 g/L) allowed higher ergosterol extraction levels (556 ± 26 mg/100 g dw). The values obtained with MAE are close to those obtained with conventional Soxhlet extraction (676 ± 3 mg/100 g dw) and UAE. Overall, UAE and MAE proved to be efficient technologies to maximize ergosterol extraction yields.
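The response-surface step described above can be sketched in a few lines: once a second-order model has been fitted to the experimental design, the optimum is found by evaluating the model over the MAE domain (0-20 min, 60-210 °C). The coefficients below are hypothetical placeholders for illustration, not values fitted in this work.

```python
# Sketch: evaluating a fitted second-order response surface and locating its
# maximum over the experimental domain (grid search). In RSM the model is
#   y = b0 + b1*t + b2*T + b11*t**2 + b22*T**2 + b12*t*T
# with y the ergosterol yield (mg/100 g dw), t the time (min), T the
# temperature (deg C). Coefficients are illustrative only.

def response(t, T, b=(120.0, 18.0, 4.2, -0.45, -0.016, 0.01)):
    b0, b1, b2, b11, b22, b12 = b
    return b0 + b1*t + b2*T + b11*t**2 + b22*T**2 + b12*t*T

def grid_maximum(t_range=(0, 20), T_range=(60, 210), steps=200):
    best = None
    for i in range(steps + 1):
        t = t_range[0] + (t_range[1] - t_range[0]) * i / steps
        for j in range(steps + 1):
            T = T_range[0] + (T_range[1] - T_range[0]) * j / steps
            y = response(t, T)
            if best is None or y > best[0]:
                best = (y, t, T)
    return best

y_max, t_opt, T_opt = grid_maximum()
```

With these illustrative coefficients the optimum lands near the domain's long-time boundary at an intermediate temperature, qualitatively like the 19 ± 3 min, 133 ± 12 °C optimum reported above.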
Abstract:
Variable Data Printing (VDP) has brought new flexibility and dynamism to the printed page. Each printed instance of a specific class of document can now have different degrees of customized content within the document template. This flexibility comes at a cost. If every printed page is potentially different from all others it must be rasterized separately, which is a time-consuming process. Technologies such as PPML (Personalized Print Markup Language) attempt to address this problem by dividing the bitmapped page into components that can be cached at the raster level, thereby speeding up the generation of page instances. A large number of documents are stored in Page Description Languages at a higher level of abstraction than the bitmapped page. Much of this content could be reused within a VDP environment provided that separable document components can be identified and extracted. These components then need to be individually rasterisable so that each high-level component can be related to its low-level (bitmap) equivalent. Unfortunately, the unstructured nature of most Page Description Languages makes it difficult to extract content easily. This paper outlines the problems encountered in extracting component-based content from existing page description formats, such as PostScript, PDF and SVG, and how the differences between the formats affect the ease with which content can be extracted. The techniques are illustrated with reference to a tool called COG Extractor, which extracts content from PDF and SVG and prepares it for reuse.
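The contrast drawn above is easiest to see on the SVG side: because SVG is well-formed XML, grouped content is directly addressable, whereas PostScript and unstructured PDF content streams are not. A minimal sketch (not the COG Extractor itself; the sample document and function names are assumptions) of splitting an SVG into standalone, individually rasterisable components:

```python
# Sketch: pulling each top-level <g> group out of an SVG document and
# re-wrapping it as a standalone SVG, so each component can be rasterized
# on its own. Uses only the standard library XML parser.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"

def extract_components(svg_text):
    """Return each top-level <g> group as a standalone SVG document string."""
    root = ET.fromstring(svg_text)
    components = []
    for group in root.findall(f"{{{SVG_NS}}}g"):
        doc = ET.Element(f"{{{SVG_NS}}}svg")   # fresh wrapper document
        doc.append(group)
        components.append(ET.tostring(doc, encoding="unicode"))
    return components

# Hypothetical two-component page: a header and a logo.
sample = (
    f'<svg xmlns="{SVG_NS}">'
    '<g id="header"><text>Title</text></g>'
    '<g id="logo"><circle r="5"/></g>'
    '</svg>'
)
parts = extract_components(sample)
```

The equivalent operation on PostScript would require interpreting the program to discover component boundaries, which is precisely the difficulty the paper discusses.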
Abstract:
The description of terms in traditional terminological resources is limited to certain information, such as the term (mainly nominal), its definition and its equivalent in a foreign language. This description rarely provides other information that can be very useful to the user, especially when the resources are consulted in order to deepen knowledge of a specialized domain, master professional writing, or find contexts in which the term occurs. Information useful in this sense includes the description of the actantial structure of terms, contexts drawn from authentic sources, and the inclusion of other parts of speech such as verbs. Verbs and deverbal nouns, or predicative terminological units (PTUs), often ignored by classical terminology, are of great importance when it comes to expressing an action, a process or an event. However, describing these units requires a terminological description model that accounts for their particularities. A number of terminologists (Condamines 1993, Mathieu-Colas 2002, Gross and Mathieu-Colas 2001, and L'Homme 2012, 2015) have indeed proposed description models based on different theoretical frameworks. Our research proposes a methodology for the terminological description of PTUs in Arabic, specifically Modern Standard Arabic (MSA), according to Fillmore's theory of Frame Semantics (1976, 1977, 1982, 1985) and its application, the FrameNet project (Ruppenhofer et al. 2010). The specialized domain of interest is computing. In our research, we rely on a corpus collected from the web and draw on an existing terminological resource, DiCoInfo (L'Homme 2008), to compile our own resource. Our objectives can be summarized as follows.
First, we aim to lay the groundwork for an MSA version of this resource. This version has its own particularities: 1) we target very specific units, namely verbal and deverbal PTUs; 2) the methodology developed for compiling the original DiCoInfo must be adapted to accommodate a Semitic language. Next, we aim to create a framed version of this resource, in which PTUs are grouped into semantic frames, following the FrameNet model. To this resource we add English and French PTUs, since this part of the work has a multilingual scope. The methodology consists of automatically extracting verbal and nominal terminological units (VTUs and NTUs), such as Ham~ala (حمل) (to download) and taHmiyl (تحميل) (downloading). To do so, we adapted an existing automatic term extractor, TermoStat (Drouin 2004). Then, using terminological validation criteria (L'Homme 2004), we validate the terminological status of a subset of the candidates. After validation, we create terminological records, using an XML editor, for each retained VTU and NTU. These records include elements such as the actantial structure of the PTUs and up to twenty annotated contexts. The last step consists of creating semantic frames from the MSA PTUs. We also associate English and French PTUs with the frames created. This association led to the creation of a terminological resource called "DiCoInfo: A Framed Version". In this resource, PTUs sharing the same semantic properties and actantial structures are grouped into semantic frames. For example, the semantic frame Product_development groups PTUs such as Taw~ara (طور) (to develop), to develop and développer.
Following these steps, we obtained a total of 106 MSA PTUs compiled in the MSA version of DiCoInfo and 57 semantic frames associated with these units in the framed version of DiCoInfo. Our research shows that MSA can be described with the methodology we developed.
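The grouping described above can be sketched as a simple data structure: one semantic frame holding its actantial structure and the predicative terminological units from the three languages. The entries follow the Product_development example given in the abstract; the actant labels and field names are illustrative assumptions, not the resource's actual schema.

```python
# Sketch: a FrameNet-style grouping of predicative terminological units
# (PTUs) under a shared semantic frame, as in "DiCoInfo: A Framed Version".
frames = {
    "Product_development": {
        "actants": ["Agent", "Patient"],          # illustrative actant labels
        "units": [
            {"lang": "ar", "term": "Taw~ara", "script": "طور", "pos": "V"},
            {"lang": "en", "term": "to develop", "pos": "V"},
            {"lang": "fr", "term": "développer", "pos": "V"},
        ],
    },
}

def units_in_frame(frame_name, lang=None):
    """List the terms grouped under a frame, optionally for one language."""
    units = frames[frame_name]["units"]
    if lang is not None:
        units = [u for u in units if u["lang"] == lang]
    return [u["term"] for u in units]
```

Looking up `units_in_frame("Product_development", "ar")` then returns the MSA unit, mirroring how the framed resource makes cross-lingual equivalents retrievable through the frame rather than through pairwise equivalence links.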
Abstract:
Amoebiasis is one of the most important parasitic diseases in developing countries with tropical and subtropical climates; Mexico is a country in which epidemiological data show that 20% of the population is infected by the protozoan Entamoeba histolytica. The main chemotherapy for this disease is based on the use of imidazoles, mainly metronidazole, which causes numerous side effects after administration, so it is necessary to obtain new antiprotozoal compounds that help achieve better treatments. Traditional medicine in northeastern Mexico mentions numerous plants that can be used to treat intestinal disorders caused by both protozoan and helminth parasites. The objective of the present investigation was to evaluate the amoebicidal activity of 15 medicinal plants used in Mexican traditional medicine, and to isolate and identify the compounds responsible for the main amoebicidal activity from the extract of the plant showing the highest percentage of inhibition of parasite growth. Ruta chalepensis was the plant with the highest inhibition percentage; 660 g of R. chalepensis leaves were subjected to Soxhlet extraction using methanol as the extraction solvent. After removing the solvent, the amoebicidal activity of the methanolic extract and of its hexane and ethyl acetate partitions was evaluated. The methanolic extract showed an amoebicidal activity of 90.50% at 150 µg/mL, while the hexane partition showed 93.47% and the ethyl acetate partition 84.82%, both evaluated at the same concentration of 150 µg/mL. Since outstanding inhibition percentages were obtained for the two partitions, chromatographic separation of the components of both partitions was carried out.
From the chromatographic fractionation, the following compounds were identified by various NMR techniques and mass spectrometry: a mixture of psoralen and bergapten (IC50 of 57.09 µg/mL), a mixture of xanthotoxin and isopimpinellin (IC50 of 26.22 µg/mL), chalepensin (IC50 of 38.71 µg/mL), graveoline, rutamarin (IC50 of 6.54 µg/mL) and chalepin (IC50 of 28.67 µg/mL). As can be seen, the amoebicidal effect of R. chalepensis is supported by the presence of furanocoumarins.
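The two quantities reported throughout this abstract, percent inhibition and IC50, are related by a simple calculation that can be sketched as follows. The trophozoite counts and dose-response points below are hypothetical; the abstract reports only the final values.

```python
# Sketch: percent growth inhibition from treated vs. control counts, and an
# IC50 estimated by linear interpolation between the two concentrations that
# bracket 50 % inhibition.

def percent_inhibition(treated_count, control_count):
    return 100.0 * (1 - treated_count / control_count)

def ic50_interpolated(curve):
    """curve: list of (concentration, % inhibition), ascending concentration."""
    for (c_lo, i_lo), (c_hi, i_hi) in zip(curve, curve[1:]):
        if i_lo < 50.0 <= i_hi:
            return c_lo + (50.0 - i_lo) * (c_hi - c_lo) / (i_hi - i_lo)
    raise ValueError("50 % inhibition not bracketed by the tested range")

# Hypothetical dose-response data (concentration in ug/mL, % inhibition).
dose_response = [(10, 22.0), (25, 48.0), (50, 71.0), (100, 90.0)]
ic50 = ic50_interpolated(dose_response)
```

In practice IC50 values such as those above are usually obtained by fitting a sigmoidal dose-response model rather than by linear interpolation; the interpolation here just makes the definition concrete.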
Abstract:
Purpose: To evaluate the cytotoxic, anti-inflammatory and antioxidant activities of four different solvent extracts obtained from the aerial parts of Galega officinalis L. Methods: The hexane, DCM, methanol and water extracts of G. officinalis were successively obtained by the Soxhlet extraction method. The cytotoxic activity of the extracts was assessed against human lung carcinoma (A-549), human colorectal adenocarcinoma (HT-29), human brain glioblastoma (U-87), and colon adenocarcinoma (DLD-1) by the resazurin test. The antioxidant activity of the extracts was determined by Folin-Ciocalteu, oxygen radical absorbance capacity (ORAC), and 2′,7′-dichlorofluorescin diacetate (DCFH-DA) cell-based assays, while their anti-inflammatory activity was determined by nitric oxide (NO) assay. Results: The DCM extract showed strong cytotoxic activity against the lung adenocarcinoma and brain glioblastoma cell lines, with IC50 (concentration inhibiting 50% of cell growth) values of 11 ± 0.4 and 16 ± 3 μg/mL, respectively. The hexane extract showed moderate anticancer activity against the same cell lines (59 ± 13 and 63 ± 16 μg/mL, respectively). The DCM extract also showed significant anti-inflammatory activity, inhibiting NO release by 86.7% at 40 μg/mL in lipopolysaccharide (LPS)-stimulated murine RAW 264.7 macrophages. Of all the test extracts, the methanol extract of G. officinalis showed the highest antioxidant activity, with 2.33 ± 0.09 μmol Trolox/mg, 7.10 ± 0.9 g tannic acid equivalent (TAE), and an IC50 of 44 ± 4 μg/mL. Conclusion: The findings of this study suggest that the DCM extract may possess anticancer effect against lung adenocarcinoma and brain glioblastoma, as well as serve as an anti-inflammatory agent.
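The "μmol Trolox/mg" figure above comes from the ORAC assay, whose arithmetic can be sketched briefly: the antioxidant capacity is the net area under the fluorescence-decay curve (sample minus blank) expressed against a Trolox standard. The readings below are hypothetical and the function names are assumptions; they only illustrate the calculation, not the study's assay conditions.

```python
# Sketch: deriving an ORAC value in Trolox equivalents from fluorescence-
# decay curves via trapezoidal areas under the curve (AUC).

def auc(readings):
    """Trapezoidal area under a fluorescence-decay curve, normalized to f0."""
    f0 = readings[0]
    norm = [f / f0 for f in readings]
    return sum((a + b) / 2 for a, b in zip(norm, norm[1:]))

def trolox_equivalents(sample, blank, trolox, trolox_conc_uM):
    """Net sample AUC expressed against a single Trolox standard."""
    net_sample = auc(sample) - auc(blank)
    net_trolox = auc(trolox) - auc(blank)
    return trolox_conc_uM * net_sample / net_trolox

# Hypothetical readings at equal time steps: blank decays fastest,
# the antioxidant-protected sample slowest.
blank      = [100, 50, 20, 5, 1]
sample     = [100, 95, 85, 70, 50]
trolox_std = [100, 80, 60, 40, 20]
te = trolox_equivalents(sample, blank, trolox_std, trolox_conc_uM=20.0)
```

A full assay would use a Trolox calibration series rather than a single standard; the single-point version keeps the sketch short.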
Abstract:
Oily sludge is a complex mixture of hydrocarbons, organic and inorganic impurities, and water. One of the major problems currently found in the petroleum industry is the management (packaging, storage, transport and fate) of waste. Mesoporous and microporous nanomaterials (catalysts) are considered promising for refining processes and as adsorbents for environmental protection. The aim of this work was to study oily sludge from primary processing (raw and treated) and vacuum residue, applying thermal analysis techniques (pyrolysis) and thermal and catalytic pyrolysis with nanomaterials, aiming at the production of petroleum derivatives. The sludge and vacuum residue were analyzed using a Soxhlet extraction system, elemental analysis, thin-layer chromatography, thermogravimetry and pyrolysis coupled to gas chromatography/mass spectrometry (Py-GC-MS). The catalysts AlMCM-41, AlSBA-15.1 and AlSBA-15.2 were synthesized with a silicon-to-aluminum molar ratio of 50 (Si/Al = 50), using tetraethylorthosilicate as the silicon source and pseudoboehmite (AlOOH) as the aluminum source. The analyses of the catalysts indicate that the materials showed hexagonal structure and surface areas of 783.6 m²/g for AlMCM-41, 600 m²/g for AlSBA-15.1 and 377 m²/g for AlSBA-15.2. The extracted oily sludge showed a range of 65 to 95% organic components (oil), 5 to 35% inorganic components (salts and oxides), and different compositions of derivatives. The AlSBA-15 catalysts showed better performance in the analyses for the production of petroleum derivatives, with a 20% increase in the production of kerosene and light gas oil. The energy potential of the sludge was high, and it can be used as fuel alongside other feedstocks processed in the refinery.
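The organic/inorganic split reported above follows from a simple Soxhlet mass balance: whatever the solvent carries out of the thimble is counted as extractable oil, and the insoluble residue as inorganic matter (salts and oxides). A minimal sketch with hypothetical sample masses (the function and values are illustrative, not the work's actual procedure):

```python
# Sketch: estimating oily-sludge composition from Soxhlet extraction masses.
# organic fraction  = solvent-extractable mass / initial dry mass
# inorganic fraction = insoluble thimble residue / initial dry mass

def sludge_composition(initial_mass_g, residue_mass_g):
    """Mass fractions from the dry sample and the insoluble residue left
    in the thimble after exhaustive solvent extraction."""
    organic = initial_mass_g - residue_mass_g  # mass removed by the solvent
    return {
        "organic_pct": 100.0 * organic / initial_mass_g,
        "inorganic_pct": 100.0 * residue_mass_g / initial_mass_g,
    }

# Hypothetical 10 g dry sample leaving 1.8 g of insoluble residue.
comp = sludge_composition(initial_mass_g=10.0, residue_mass_g=1.8)
```

A residue of 1.8 g from a 10 g sample gives 82% organic matter, which sits inside the 65-95% range reported for the extracted sludge.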
Abstract:
This dissertation describes an in-depth study of the Visual Odometry (VO) problem tackled with transformer architectures. Existing VO algorithms are based on heavily hand-crafted features and are not able to generalize well to new environments; to train them, we need to carefully fine-tune the hyper-parameters and the network architecture. We propose to tackle the VO problem with a transformer because it is a general-purpose architecture and because it was designed to transform sequences of data from one domain to another, which is the case of the VO problem. Our first goal is to create a synthetic dataset using the BlenderProc2 framework to mitigate the problem of dataset scarcity. The second goal is to tackle the VO problem by using different versions of the transformer architecture, which will be pre-trained on the synthetic dataset and fine-tuned on a real dataset, KITTI. Our approach is defined as follows: we use a feature extractor to extract feature embeddings from a sequence of images, then we feed this sequence of embeddings to the transformer architecture, and finally an MLP is used to predict the sequence of camera poses.
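The three-stage pipeline described in the closing sentence (feature extractor, transformer over the embedding sequence, MLP pose head) can be sketched structurally. Every stage below is a deliberately trivial stand-in with no learned weights; the names, shapes and the 6-DoF pose format are assumptions for illustration, not the dissertation's actual architecture.

```python
# Structural sketch of the pipeline: images -> feature embeddings ->
# sequence mixing (the transformer's role) -> per-frame pose prediction.

def extract_features(image):
    """Stand-in feature extractor: flatten and truncate to a fixed-size embedding."""
    flat = [px for row in image for px in row]
    return flat[:4]

def transformer_encoder(embeddings):
    """Stand-in for self-attention: mix each embedding with the sequence mean."""
    dim = len(embeddings[0])
    mean = [sum(e[d] for e in embeddings) / len(embeddings) for d in range(dim)]
    return [[(x + m) / 2 for x, m in zip(e, mean)] for e in embeddings]

def pose_head(embedding):
    """Stand-in MLP: map an embedding to a 6-DoF pose (tx, ty, tz, roll, pitch, yaw)."""
    s = sum(embedding)
    return [s * w for w in (0.1, 0.2, 0.3, 0.01, 0.02, 0.03)]

def predict_poses(image_sequence):
    embeddings = [extract_features(img) for img in image_sequence]
    mixed = transformer_encoder(embeddings)
    return [pose_head(e) for e in mixed]

# Two tiny 2x2 "images" stand in for a KITTI frame sequence.
poses = predict_poses([[[0, 1], [2, 3]], [[4, 5], [6, 7]]])
```

The point of the sketch is the data flow: the transformer stage sees the whole sequence at once, which is what lets it relate frames to each other before the per-frame pose head is applied.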