Abstract:
Essential oils of ripe fruits from Schinus terebinthifolius (Anacardiaceae), obtained using a pilot extractor and a Clevenger apparatus, were chemically characterized. Owing to the high content of (-)-α-pinene in both oils, this monoterpene was tested against the protozoan parasite Trypanosoma cruzi, showing moderate potency (IC50 63.56 µg/mL) compared with benznidazole (IC50 43.14 µg/mL). In contrast, (-)-α-pinene oxide showed no anti-trypanosomal activity (IC50 > 400 µg/mL), while (-)-pinane showed an IC50 of 56.50 µg/mL. These results indicate that epoxidation of α-pinene results in loss of the anti-parasitic activity, whereas its hydrogenation product contributed slightly to increased activity.
Abstract:
The liquid-liquid equilibria of systems composed of rice bran oil, free fatty acids, ethanol and water were investigated at temperatures ranging from 10 to 60 °C. The results indicated that the mutual solubility of the compounds decreased with an increase in the water content of the solvent and a decrease in the temperature of the solution. The experimental data set was correlated using the UNIQUAC model. The average deviation between the experimental and calculated compositions was 0.35%, indicating that the model can accurately describe the behavior of the compounds at different temperatures and degrees of solvent hydration. The adjusted interaction parameters enable both the simulation of liquid-liquid extractors for deacidification of vegetable oil and the prediction of the compositions of the oil-rich and alcohol-rich phases generated during cooling of the stream exiting the extractor (when ethanol is used as the solvent).
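As an illustration of the thermodynamic model mentioned above, the following is a minimal UNIQUAC activity-coefficient sketch; the r, q and interaction parameters in the demo call are generic placeholders, not the values fitted in the study.

```python
import numpy as np

Z = 10.0  # lattice coordination number used in the combinatorial term

def uniquac_gamma(x, r, q, a, T):
    """UNIQUAC activity coefficients for an N-component mixture.
    x: mole fractions; r, q: size/surface parameters;
    a: interaction parameter matrix a[i][j] in kelvin; T: temperature [K]."""
    x, r, q = map(np.asarray, (x, r, q))
    tau = np.exp(-np.asarray(a) / T)          # tau_ij = exp(-a_ij / T)
    phi = r * x / np.dot(r, x)                # segment fractions
    theta = q * x / np.dot(q, x)              # area fractions
    l = Z / 2 * (r - q) - (r - 1)
    # combinatorial (size/shape) contribution
    ln_gc = (np.log(phi / x) + Z / 2 * q * np.log(theta / phi)
             + l - phi / x * np.dot(x, l))
    # residual (energetic) contribution
    s = theta @ tau                           # s_j = sum_i theta_i tau_ij
    ln_gr = q * (1 - np.log(s) - (tau * (theta / s)).sum(axis=1))
    return np.exp(ln_gc + ln_gr)

# Illustrative call (placeholder parameters, NOT the fitted values):
gamma = uniquac_gamma(x=[0.3, 0.7], r=[3.83, 2.11], q=[3.39, 1.97],
                      a=[[0.0, 150.0], [80.0, 0.0]], T=313.15)
print(gamma)
```

With identical r and q and zero interaction parameters the mixture is ideal and both coefficients reduce to 1, which is a convenient sanity check.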
Abstract:
A high-performance liquid chromatography (HPLC) method for the enantioselective determination of donepezil (DPZ), 5-O-desmethyl donepezil (5-ODD), and 6-O-desmethyl donepezil (6-ODD) in Czapek culture medium, to be applied to biotransformation studies with fungi, is described for the first time. The HPLC analysis was carried out on a Chiralpak AD-H column with hexane/ethanol/methanol (75:20:5, v/v/v) plus 0.3% triethylamine as the mobile phase and UV detection at 270 nm. Sample preparation was carried out by liquid-liquid extraction using ethyl acetate as the extraction solvent. The method was linear over the concentration range of 100-10,000 ng mL−1 for each enantiomer of DPZ (r ≥ 0.9985) and of 100-5,000 ng mL−1 for each enantiomer of 5-ODD (r ≥ 0.9977) and 6-ODD (r ≥ 0.9951). Within-day and between-day precision and accuracy, evaluated by relative standard deviations and relative errors, respectively, were lower than 15% for all analytes. The validated method was used to assess DPZ biotransformation by the fungi Beauveria bassiana American Type Culture Collection (ATCC) 7159 and Cunninghamella elegans ATCC 10028B. With B. bassiana ATCC 7159, predominant formation of (R)-5-ODD was observed, while C. elegans ATCC 10028B biotransformed DPZ to (R)-6-ODD with an enantiomeric excess of 100%.
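The precision and accuracy criteria above (relative standard deviation and relative error below 15%) can be computed as follows; the replicate concentrations and the QC level are hypothetical.

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation (%) of replicate measurements."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

def relative_error_percent(measured_mean, nominal):
    """Relative error (%) of the mean against the nominal concentration."""
    return 100 * (measured_mean - nominal) / nominal

# Hypothetical replicate concentrations (ng/mL) at a 500 ng/mL QC level
replicates = [512.0, 498.5, 505.2, 489.9, 510.3]
mean = statistics.mean(replicates)
print(round(rsd_percent(replicates), 2))            # precision ≈ 1.81 %
print(round(relative_error_percent(mean, 500.0), 2))  # accuracy ≈ 0.64 %
```

Both values fall well inside the 15% acceptance limit used in the validation.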
Abstract:
The purpose of this investigation is to record porosity and permeability data derived from physical tests on Devonian dolomites occurring in northwestern Montana, and to summarize the results as an aid in determining whether those dolomites could serve as suitable reservoir rocks for oil and gas accumulation.
Abstract:
In situ and simultaneous measurement of the three most abundant isotopologues of methane using mid-infrared laser absorption spectroscopy is demonstrated. A field-deployable, autonomous platform is realized by coupling a compact quantum cascade laser absorption spectrometer (QCLAS) to a preconcentration unit, called trace gas extractor (TREX). This unit enhances CH4 mole fractions by a factor of up to 500 above ambient levels and quantitatively separates interfering trace gases such as N2O and CO2. The analytical precision of the QCLAS isotope measurement on the preconcentrated (750 ppm, parts-per-million, µmole mole−1) methane is 0.1 and 0.5 ‰ for δ13C- and δD-CH4 at 10 min averaging time. Based on repeated measurements of compressed air during a 2-week intercomparison campaign, the repeatability of the TREX–QCLAS was determined to be 0.19 and 1.9 ‰ for δ13C- and δD-CH4, respectively. In this intercomparison campaign the new in situ technique is compared to isotope-ratio mass spectrometry (IRMS) based on glass flask and bag sampling and to real-time CH4 isotope analysis by two commercially available laser spectrometers. Both laser-based analyzers were limited to methane mole fraction and δ13C-CH4 analysis, and only one of them, a cavity ring-down spectrometer, was capable of delivering meaningful data for the isotopic composition. After correcting for scale offsets, the average differences between TREX–QCLAS data and bag/flask sampling–IRMS values are within the extended WMO compatibility goals of 0.2 and 5 ‰ for δ13C- and δD-CH4, respectively. This also demonstrates the potential to improve interlaboratory compatibility based on the analysis of a reference air sample with accurately determined isotopic composition.
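The δ13C and δD values above use standard delta notation relative to a reference ratio. A minimal sketch of the conversion; the VPDB 13C/12C ratio shown is the commonly cited value and is used here only for illustration.

```python
# Conversion between an isotope ratio and delta notation (per mil).
# R_VPDB is the commonly cited 13C/12C ratio of the VPDB standard;
# treat it as an illustrative constant here.
R_VPDB = 0.011180

def delta_per_mil(r_sample, r_standard):
    """delta = (R_sample / R_standard - 1) * 1000, in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

def ratio_from_delta(delta, r_standard):
    """Invert delta notation back to an absolute isotope ratio."""
    return r_standard * (1.0 + delta / 1000.0)

# A sample depleted by 47 per mil vs. VPDB (typical of ambient CH4):
r = ratio_from_delta(-47.0, R_VPDB)
print(round(delta_per_mil(r, R_VPDB), 6))   # round-trip recovers -47.0
```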
Abstract:
The chemical composition of bamboo (Guadua angustifolia) extracts is presented. Random sampling of mature and over-mature culms was carried out in the Botanical Garden of the Universidad Tecnológica de Pereira (Colombia), combining diaphragm, node and internode with the basal, middle and upper culm sections, for a total of 54 samples. The samples were cut into discs about 2-3 cm high, separating nodes, diaphragms and internodes. The ground samples were sieved, and aliquots of 3-5 grams were weighed for extraction. Extractions were performed by ultrasound, with Soxhlet and with a Randall extractor, using petroleum ether 40-60 °C, acetone, methanol and water sequentially as solvents. The extracts were analyzed by gas chromatography-mass spectrometry and HPLC. The total extract content is on the order of 11.1% in the nodes, 16.5% in the internodes and 28.3% in the diaphragms. The compounds identified include sterols, vitamin E, saturated hydrocarbons, 4-hydroxy-4-methyl-2-pentanone, neophytadiene, phenols, aldehydes, palmitic and linoleic acids, and diethylene glycol.
Design of electronic warfare and radar algorithms for implementation in real-time systems
Abstract:
This thesis focuses on the study and development of electronic warfare (EW) and radar algorithms for implementation in real-time systems. The arrival of radio, radar and navigation systems in the military domain led to the development of technologies to counter them. Thus, the objective of electronic warfare systems is control of the electromagnetic spectrum. One of the functions of electronic warfare is signals intelligence (SIGINT), whose task is to detect, store, analyze, classify and locate the origin of all types of signals present in the spectrum. The SIGINT subsystem devoted to radar signals is electronic intelligence (ELINT). A real-time system is one whose figure of merit depends both on the result provided and on the time in which that result is delivered. Radar and electronic warfare systems must provide information as quickly as possible and continuously, so they can be classified as real-time systems. The introduction of real-time constraints implies a feedback process between the design of the algorithm and its implementation on hardware platforms. The real-time constraints are two: latency and implementation area. In this thesis, all the algorithms presented have been implemented on field programmable gate array (FPGA) platforms, since they offer a good trade-off among speed, total cost, power consumption and reconfigurability. The first part of the thesis is centered on the study of different subsystems of an ELINT system: signal detection using a channelized detector, extraction of radar pulse parameters, modulation classification and passive location.
The discrete Fourier transform (DFT) is a quasi-optimal detector and frequency estimator for narrow-band signals in white noise. The development of efficient algorithms for computing the DFT, known as the fast Fourier transform (FFT), has made the FFT the most widely used algorithm for narrow-band signal detection under real-time requirements. Accordingly, a detection and spectral analysis algorithm has been designed and implemented for real-time operation. The most characteristic parameters of a radar pulse are its time of arrival and pulse width. An algorithm capable of extracting these parameters has been designed and implemented. This algorithm can be used for several purposes: generic recognition of the radar transmitting the signal, locating the position of that radar, or serving as the preprocessing stage of an automatic modulation classifier. Automatic modulation classification is extremely difficult in non-cooperative environments. An automatic modulation classifier is divided into two parts: preprocessing and the classification algorithm itself. Feature-based classification algorithms compute different statistics of the input signal and classify it by processing those statistics. Location algorithms can be divided into two types: triangulation and quadratic systems. In triangulation-based algorithms, the position is estimated from the intersection of the lines of bearing given by the signal's direction of arrival. In quadratic systems, by contrast, the position is estimated from the intersection of surfaces of equal time difference of arrival (TDOA) or frequency difference of arrival (FDOA).
Although only TDOA and FDOA estimation from time-of-arrival and frequency differences has been implemented, exhaustive studies of the different algorithms for TDOA and FDOA estimation and for passive location via TDOA-FDOA are presented. The second part of the thesis is devoted to the design and implementation of finite impulse response (FIR) discrete filters for two radar applications: wideband phased arrays using true-time-delay (TTD) filters, and the improvement of a radar's range without modifying the existing hardware, so that the solution is low-cost. Operating a wideband phased array with phase shifters is not feasible, since the time delay cannot be approximated by a phase shift. The adopted and implemented solution consists of replacing the phase shifters with digital filters with programmable delay. The maximum range of a radar depends on the average signal-to-noise ratio at the receiver. The signal-to-noise ratio in turn depends on the transmitted signal energy, that is, power multiplied by pulse width. Any hardware change entails a high cost. The proposed solution is to use a pulse compression technique, consisting of introducing an internal modulation into the signal, decoupling range and resolution.
ABSTRACT
This thesis is focused on the study and development of electronic warfare (EW) and radar algorithms for real-time implementation. The arrival of radar, radio and navigation systems in the military sphere led to the development of technologies to fight them. Therefore, the objective of EW systems is the control of the electromagnetic spectrum. Signals intelligence (SIGINT) is one of the EW functions, whose mission is to detect, collect, analyze, classify and locate all kinds of electromagnetic emissions. Electronic intelligence (ELINT) is the SIGINT subsystem that is devoted to radar signals.
A real-time system is one whose correctness depends not only on the provided result but also on the time in which this result is obtained. Radar and EW systems must provide information as fast as possible on a continuous basis, and they can be defined as real-time systems. The introduction of real-time constraints implies a feedback process between the design of the algorithms and their hardware implementation. Moreover, a real-time constraint consists of two parameters: latency and area of the implementation. All the algorithms in this thesis have been implemented on field programmable gate array (FPGA) platforms, presenting a trade-off among performance, cost, power consumption and reconfigurability. The first part of the thesis is related to the study of different key subsystems of ELINT equipment: signal detection with channelized receivers, pulse parameter extraction, modulation classification for radar signals and passive location algorithms. The discrete Fourier transform (DFT) is a nearly optimal detector and frequency estimator for narrow-band signals buried in white noise. The introduction of fast algorithms to calculate the DFT, known as the FFT, reduces the complexity and the processing time of the DFT computation. These properties have placed the FFT as one of the most conventional methods for narrow-band signal detection in real-time applications. An algorithm for real-time spectral analysis with user-defined bandwidth, instantaneous dynamic range and resolution is presented. The most characteristic parameters of a pulsed signal are its time of arrival (TOA) and pulse width (PW). The estimation of these basic parameters is a fundamental task in ELINT equipment. A basic pulse parameter extractor (PPE) that is able to estimate all these parameters is designed and implemented.
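The FFT-based narrow-band detection described above can be sketched as follows; the sample rate, tone frequency and threshold factor are illustrative choices, not the thesis design values.

```python
import numpy as np

rng = np.random.default_rng(0)

# FFT-based narrow-band detection sketch: declare a detection in any
# periodogram bin whose power exceeds a threshold derived from a robust
# estimate of the noise floor. All parameters below are illustrative.
fs = 1000.0                    # sample rate [Hz]
n = 1024                       # FFT length
f0 = 125.0                     # tone frequency [Hz] (falls on bin 128)
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.5, n)  # tone + noise

spec = np.abs(np.fft.rfft(x)) ** 2 / n
noise_floor = np.median(spec)          # median is robust to the tone bin
threshold = 20 * noise_floor           # illustrative scale factor
hits = np.flatnonzero(spec > threshold)
freqs = hits * fs / n                  # bin index -> frequency [Hz]
print(freqs)                           # contains the 125 Hz tone
```

The median-based noise floor stands in for the CFAR-style thresholding a real channelized detector would use.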
The PPE may be used to perform a generic radar recognition process, to support an emitter location technique, or as the preprocessing part of an automatic modulation classifier (AMC). Modulation classification is a difficult task in a non-cooperative environment. An AMC consists of two parts: signal preprocessing and the classification algorithm itself. Feature-based algorithms obtain different characteristics or features of the input signals. Once these features are extracted, the classification is carried out by processing them. A feature-based AMC for pulsed radar signals with real-time requirements is studied, designed and implemented. Emitter passive location techniques can be divided into two classes: triangulation systems, in which the emitter location is estimated from the intersection of the different lines of bearing created from the estimated directions of arrival, and quadratic position-fixing systems, in which the position is estimated through the intersection of iso-time-difference-of-arrival (TDOA) or iso-frequency-difference-of-arrival (FDOA) quadratic surfaces. Although TDOA and FDOA are implemented only from time-of-arrival and frequency differences, different algorithms for TDOA, FDOA and position estimation are studied and analyzed. The second part is dedicated to FIR filter design and implementation for two different radar applications: wideband phased arrays with true-time-delay (TTD) filters, and the range improvement of an operative radar with no hardware changes, to minimize costs. Wideband operation of phased arrays with phase shifters is unfeasible because time delays cannot be approximated by phase shifts. The presented solution is based on the substitution of the phase shifters by FIR discrete delay filters. The maximum range of a radar depends on the averaged signal-to-noise ratio (SNR) at the receiver. Among other factors, the SNR depends on the transmitted signal energy, that is, power times pulse width.
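TDOA estimation from time differences, as mentioned above, can be illustrated by locating the cross-correlation peak between two noisy sensor signals; the waveform, delay and noise levels are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

# TDOA sketch: estimate the delay between two sensors as the lag that
# maximizes the cross-correlation of their (noisy) received signals.
n = 512
true_delay = 37                       # samples; illustrative value
s = rng.normal(0, 1, n)               # wideband emitter waveform
x1 = s + rng.normal(0, 0.1, n)                      # sensor 1
x2 = np.concatenate([np.zeros(true_delay), s])[:n]  # delayed copy
x2 = x2 + rng.normal(0, 0.1, n)                     # sensor 2

corr = np.correlate(x2, x1, mode="full")   # lags -(n-1) .. (n-1)
lag = np.argmax(corr) - (n - 1)            # shift argmax to signed lag
print(lag)                                 # estimated delay in samples
```

Dividing the estimated lag by the sample rate gives the TDOA in seconds, which a position-fixing algorithm would then intersect with other sensor pairs' measurements.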
Any possible hardware change implies high costs. The proposed solution lies in the use of a signal processing technique known as pulse compression, which consists of introducing an internal modulation within the pulse width, decoupling range and resolution.
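The pulse-compression idea above can be sketched with a linear-FM pulse and its matched filter; the pulse width, bandwidth and sample rate are illustrative, not the values of the operative radar.

```python
import numpy as np

# Pulse-compression sketch: a long linear-FM (chirp) pulse keeps the
# transmitted energy high, and its matched filter compresses it to a
# narrow peak, decoupling range (energy) from resolution (bandwidth).
fs = 1e6                 # sample rate [Hz]
tp = 100e-6              # pulse width [s]  -> 100 samples
bw = 200e3               # swept bandwidth [Hz]
t = np.arange(int(fs * tp)) / fs
chirp = np.exp(1j * np.pi * (bw / tp) * t**2)    # LFM pulse

h = np.conj(chirp[::-1])                         # matched filter
y = np.convolve(chirp, h)                        # compressed output
peak = np.abs(y).max()

# -3 dB main-lobe width of the compressed pulse, in samples
above = np.abs(y) >= peak / np.sqrt(2)
width = int(np.count_nonzero(above))
print(len(chirp), width)   # long pulse in, narrow peak out
```

The compressed main lobe is roughly 1/bandwidth wide (about 5 samples here) against the 100-sample transmitted pulse, i.e. a time-bandwidth compression ratio of about 20.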
Abstract:
Processes such as methane (CH4) purification and hydrogen gas (H2) production involve CO2 separation steps. Currently, ethanolamines such as monoethanolamine (MEA), diethanolamine (DEA), methyldiethanolamine (MDEA) and triethanolamine (TEA) are the substances most widely used for CO2 separation/capture in industrial processes. However, their use has drawbacks owing to high volatility, the difficulty of handling liquid material, the high energy cost of the regeneration steps, and low thermal and chemical stability. Against this background, the aim of this work was the synthesis of a highly ordered mesoporous silica (SBA-15) for use in CO2 capture. The work was divided into four experimental stages: synthesis of SBA-15, study of the thermal behavior of some free ethanolamines, synthesis and characterization of adsorbent materials prepared by incorporating ethanolamines into SBA-15, and study of the CO2 capture efficiency of these materials. New alternatives for SBA-15 synthesis were investigated in this work, aiming to improve the textural properties of the product. These alternatives are based on removing the surfactant, used as a template in the mesoporous silica synthesis, by Soxhlet extraction with different solvents. This process improved the properties of the material obtained, avoiding the structural shrinkage that can occur during the calcination step. Physico-chemical and thermoanalytical characterization of MEA, DEA, MDEA and TEA was carried out by techniques such as TG/DTG, DSC, FTIR and C, H and N elemental analysis, in order to better understand the characteristics of these substances.
Kinetic studies based on isothermal and non-isothermal thermogravimetric methods (Ozawa method) were performed, allowing the determination of the kinetic parameters involved in the volatilization/thermal decomposition of the ethanolamines. Besides the techniques mentioned above, SEM, TEM, SAXS and N2 adsorption measurements were used to characterize SBA-15 before and after the incorporation of the ethanolamines. Among the ethanolamines studied, TEA showed the highest thermal stability; however, owing to its greater steric hindrance, it is the ethanolamine with the lowest affinity for CO2. Unlike the other ethanolamines studied, the thermal decomposition of DEA involves an intramolecular reaction, leading to the formation of MEA and ethylene oxide. Incorporation of these materials into SBA-15 increased the thermal stability of the ethanolamines, since part of the material remains inside the silica pores. The CO2 adsorption tests showed that the incorporation of MEA into SBA-15 catalyzed its thermal decomposition. MDEA was the ethanolamine with the highest CO2 capture capacity, and its thermal stability was considerably increased when incorporated into SBA-15, which also increased its CO2 capture potential.
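The Ozawa (Ozawa-Flynn-Wall) method mentioned above extracts an activation energy from the slope of log10(heating rate) versus 1/T at a fixed conversion. A synthetic sketch with an assumed Ea, built so the data satisfy the OFW line exactly:

```python
import numpy as np

# Ozawa-Flynn-Wall sketch: at a fixed conversion, log10(heating rate)
# plotted against 1/T is a straight line of slope -0.4567*Ea/R.
# Ea and the derived temperatures below are synthetic, not measured.
R = 8.314                    # gas constant [J mol^-1 K^-1]
Ea_true = 120e3              # assumed activation energy [J/mol]

betas = np.array([5.0, 10.0, 20.0, 40.0])     # heating rates [K/min]
const = 10.0                                   # arbitrary OFW intercept
inv_T = (const - np.log10(betas)) * R / (0.4567 * Ea_true)
# ...so these 1/T values lie on the OFW line exactly by construction.

slope = np.polyfit(inv_T, np.log10(betas), 1)[0]
Ea_est = -slope * R / 0.4567                   # invert the slope
print(round(Ea_est / 1e3, 1))                  # recovers ≈ 120.0 kJ/mol
```

With real TG data the 1/T values come from the temperatures at which each heating rate reaches the chosen conversion, and the fit is repeated over several conversions.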
Abstract:
Presented at the 11th European Symposium of the Working Party on Computer Aided Process Engineering, Kolding, Denmark, May 27-30, 2001.
Abstract:
This work studies the use of 3D point clouds, i.e., sets of points in a Cartesian reference system in R3, for the identification and characterization of the discontinuities exposed on a rock mass, and their application to the field of rock mechanics. The point clouds used were acquired by three techniques: synthetic generation, 3D laser scanning, and the Structure from Motion (SfM) digital photogrammetry technique. The focus is on the extraction and characterization of discontinuity sets and their application to the evaluation of rock slope quality by means of the Slope Mass Rating (SMR) geomechanical classification. The content is divided into three blocks: (1) a methodology for discontinuity extraction and classification of the 3D point cloud; (2) analysis of normal spacings in 3D point clouds; and (3) analysis of the geomechanical quality evaluation of rock slopes using the SMR classification from 3D point clouds. The first line of research consists of studying 3D point clouds in order to extract and characterize the planar discontinuities present on the surface of a rock mass. First, information on existing methodologies and the availability of software for their study was compiled. This motivated the decision to investigate and design a novel classification process that shows all the steps of its programming, offering the programmed code to the scientific community under a GNU GPL license. A novel methodology was thus designed, and software was written that analyzes 3D point clouds semi-automatically, allowing the user to interact with the classification process. This software is called Discontinuity Set Extractor (DSE). The method was validated using synthetic point clouds and point clouds acquired with a 3D laser scanner.
First, the code analyzes the point cloud by performing a coplanarity test on each point and its nearest neighbors, and then computes the normal vector of the surface at the studied point. Second, the poles of the normal vectors computed in the previous step are plotted on a stereographic net. The pole density is then calculated, along with the highest-density poles, or principal poles. These indicate the most represented surface orientations and hence the discontinuity sets. Third, each point is assigned to a set depending on the angle between the point's normal vector and that of the set. At this stage the user can visualize the point cloud classified into the determined discontinuity sets in order to validate the intermediate result. Fourth, a cluster analysis is performed that groups the points of each set into planes (clusters). Clusters without a sufficient number of points are then filtered out, and the equation of each plane is determined. Finally, the classification results are exported to a text file for analysis and representation in other programs. The second line of research consists of studying the spacing between planar discontinuities exposed on rock masses from 3D point clouds. A methodology was developed to compute spacings from previously classified 3D point clouds in order to determine the spatial relationships between the planes of each set and to compute the normal spacing. The novel foundation of the proposed method is to determine the normal set spacing based on the same principles as in the field, but without the spatial restrictions, safety issues and difficulties inherent in that process.
Two aspects of the discontinuities were considered: finite or infinite persistence, the former being the most novel aspect of this work. The development and application of the method to several case studies made it possible to determine its scope of application. Validation was carried out with synthetic point clouds and point clouds acquired with a 3D laser scanner. The third line of research consists of analyzing the application of the information obtained from 3D point clouds to the evaluation of rock slope quality using the SMR geomechanical classification. The analysis focused on the influence of using orientations determined from different information sources (field data and remote acquisition techniques) on the determination of the adjustment factors and on the value of the SMR index. The results of this analysis show that the use of widely accepted information sources and techniques can change the evaluation of rock slope quality by up to one geomechanical class (i.e., 20 units). The analyses carried out also confirmed the validity of the SMR index for mapping unstable zones of a slope. The methods and software developed represent an important scientific advance in the use of 3D point clouds for: (1) the study and characterization of rock mass discontinuities, and (2) their application to the evaluation of rock slope quality by means of geomechanical classifications. The conclusions obtained, and the means and methods employed in this doctoral thesis, can be tested and used by other researchers, as they are available on the author's website under a GNU GPL license.
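The per-point normal estimation and coplanarity test at the core of the first methodology can be sketched via PCA of each point's neighborhood. This is a generic illustration of the technique, not the DSE implementation; the synthetic surface below is invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def plane_normal(points):
    """Estimate the unit normal of a near-planar 3D point set as the
    eigenvector of its covariance matrix with the smallest eigenvalue."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
    return eigvecs[:, 0]                     # smallest-variance direction

def coplanarity(points):
    """Flatness measure: share of total variance along the normal
    (near 0 for coplanar points, larger for non-planar neighborhoods)."""
    centered = points - points.mean(axis=0)
    eigvals = np.linalg.eigvalsh(centered.T @ centered / len(points))
    return eigvals[0] / eigvals.sum()

# Synthetic near-horizontal discontinuity surface: z is small noise
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       rng.normal(0, 0.01, 200)])
n_vec = plane_normal(pts)
print(np.round(np.abs(n_vec), 2))   # ≈ [0, 0, 1] for a horizontal plane
```

In a full pipeline the recovered normals are converted to poles, their density is evaluated on a stereographic net, and points are assigned to the nearest principal pole.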
Abstract:
The semantic localization problem in robotics consists in determining the place where a robot is located by means of semantic categories. The problem is usually addressed as a supervised classification process, where input data correspond to robot perceptions and classes to semantic categories, such as kitchen or corridor. In this paper we propose a framework, implemented in the PCL library, which provides a set of valuable tools to easily develop and evaluate semantic localization systems. The implementation includes the generation of 3D global descriptors following a Bag-of-Words approach. This allows the generation of fixed-dimensionality descriptors from any combination of keypoint detector and feature extractor. The framework has been designed, structured and implemented to be easily extended with different keypoint detectors and feature extractors, as well as classification models. The proposed framework has also been used to evaluate the performance of a set of already implemented descriptors when used as input for a specific semantic localization system. The obtained results are discussed, paying special attention to the internal parameters of the BoW descriptor generation process. Moreover, we also review the combination of some keypoint detectors with different 3D descriptor generation techniques.
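A minimal sketch of the Bag-of-Words descriptor generation described above, using a tiny NumPy k-means in place of PCL's implementation; the feature dimensions and counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

# Bag-of-Words global descriptor sketch: local features are quantized
# against a learned codebook and accumulated into a normalized histogram,
# giving a fixed-dimensional descriptor regardless of feature count.
def build_codebook(features, k, iters=20):
    """Tiny Lloyd k-means over local feature vectors."""
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # skip empty clusters
                centers[j] = features[labels == j].mean(axis=0)
    return centers

def bow_descriptor(features, centers):
    """Histogram of nearest-codeword assignments, L1-normalized."""
    d = np.linalg.norm(features[:, None] - centers[None], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(centers))
    return hist / hist.sum()

local_feats = rng.normal(0, 1, (300, 8))        # e.g. 8-D local descriptors
codebook = build_codebook(local_feats, k=16)
desc = bow_descriptor(rng.normal(0, 1, (57, 8)), codebook)
print(desc.shape, round(float(desc.sum()), 6))  # fixed 16-D, sums to 1
```

Whatever keypoint detector and feature extractor produced the local features, the resulting global descriptor always has the codebook's dimensionality, which is what makes it usable as input to a standard classifier.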
Abstract:
A study of the hydrodynamics and mass transfer characteristics of a liquid-liquid extraction process in a 450 mm diameter, 4.30 m high Rotating Disc Contactor (R.D.C.) has been undertaken. The literature relating to this type of extractor and the relevant phenomena, such as droplet break-up and coalescence, drop mass transfer and axial mixing, has been reviewed. Experiments were performed using the system Clairsol-350-acetone-water, and the effects of drop size, drop size distribution and dispersed phase hold-up on the performance of the R.D.C. were established. The results obtained for the two-phase system Clairsol-water have been compared with published correlations: since most of these correlations are based on data obtained from laboratory-scale R.D.C.s, a wide divergence was found. The hydrodynamic data from this study have therefore been correlated to predict the drop size and the dispersed phase hold-up, and agreement has been obtained with the experimental data to within ±8% for the drop size and ±9% for the dispersed phase hold-up. The correlations obtained were modified to include terms involving column dimensions, and the data have been correlated with the results obtained from this study together with published data; agreement was generally within ±17% for drop size and within ±14% for the dispersed phase hold-up. The experimental drop size distributions obtained were in excellent agreement with the upper-limit log-normal distribution, which should therefore be used in preference to other distribution functions. In the calculation of the overall experimental mass transfer coefficient, the mean driving force was determined from the concentration profile along the column using Simpson's rule, and a novel method was developed to calculate the overall theoretical mass transfer coefficient Kca1, using the drop size distribution diagram to determine the volume percentage of stagnant, circulating and oscillating drops in the sample population.
Individual mass transfer coefficients were determined for the corresponding droplet states using different single-drop mass transfer models. Kca1 was then calculated as the fractional sum of these individual coefficients and their proportions in the drop sample population. Very good agreement was found between the experimental and theoretical overall mass transfer coefficients. Drop sizes under mass transfer conditions were strongly dependent upon the direction of mass transfer. Drop sizes in the absence of mass transfer were generally larger than those with solute transfer from the continuous to the dispersed phase, but smaller than those with solute transfer in the opposite direction at corresponding phase flowrates and rotor speeds. Under similar operating conditions hold-up was also affected by mass transfer; it was higher when solute transferred from the continuous to the dispersed phase, and lower when the direction was reversed, compared with non-mass-transfer operation.
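The two calculation steps described above, the Simpson's-rule mean driving force and the fractional-sum overall coefficient, can be sketched with made-up numbers; the concentration profile, volume fractions and single-drop coefficients are all illustrative.

```python
import numpy as np

def simpson(y, x):
    """Composite Simpson integration over an odd number of evenly
    spaced points (even number of intervals)."""
    h = x[1] - x[0]
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

# (1) Mean driving force from a hypothetical concentration-difference
#     profile along the 4.30 m column, via Simpson's rule.
z = np.linspace(0.0, 4.30, 9)                  # column height [m]
driving_force = 0.05 * np.exp(-0.4 * z)        # illustrative profile
mean_df = simpson(driving_force, z) / (z[-1] - z[0])

# (2) Overall coefficient as the fractional sum of single-drop
#     coefficients for stagnant, circulating and oscillating drops.
fractions = np.array([0.15, 0.55, 0.30])       # volume fractions (assumed)
k_single = np.array([1.0e-5, 4.0e-5, 9.0e-5])  # single-drop coeffs [m/s]
K_overall = float(fractions @ k_single)
print(round(mean_df, 4), K_overall)
```

The weighting step is the essence of the novel method: each drop population contributes to the overall coefficient in proportion to its volume share read off the drop size distribution diagram.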
Abstract:
2000 Mathematics Subject Classification: 62H30
Abstract:
Methods for accessing data on the Web have been the focus of active research over the past few years. In this thesis we propose a method for representing Web sites as data sources. We designed Data Extractor, a data retrieval solution that allows us to define queries to Web sites and process the resulting data sets. Data Extractor is being integrated into the MSemODB heterogeneous database management system. With its help, database queries can be distributed over both local and Web data sources within the MSemODB framework. Data Extractor treats Web sites as data sources, controlling query execution and data retrieval. It works as an intermediary between the applications and the sites. Data Extractor utilizes a twofold “custom wrapper” approach for information retrieval: wrappers for the majority of sites are easily built using a powerful and expressive scripting language, while complex cases are processed using Java-based wrappers that utilize a specially designed library of data retrieval, parsing and Web access routines. In addition to wrapper development, we thoroughly investigate issues associated with Web site selection, analysis and processing. Data Extractor is designed to act as a data retrieval server, as well as an embedded data retrieval solution. We also use it to create mobile agents that are shipped over the Internet to the client's computer to perform data retrieval on behalf of the user. This approach allows Data Extractor to distribute and scale well. This study confirms the feasibility of building custom wrappers for Web sites. This approach provides accuracy of data retrieval, and power and flexibility in the handling of complex cases.
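A toy “custom wrapper” in the spirit described above, written in Python rather than the thesis's scripting language or Java library; the HTML structure and cell contents are invented for illustration.

```python
from html.parser import HTMLParser

# Site-specific wrapper sketch: turn an HTML table listing into records.
# A real wrapper would be written against the target site's actual markup.
class TableWrapper(HTMLParser):
    """Collects the text of each <td> cell, grouped by <tr> row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag == "td":
            self._in_td = False

    def handle_data(self, data):
        if self._in_td:
            self._row.append(data.strip())

page = ("<table><tr><td>alpha</td><td>1</td></tr>"
        "<tr><td>beta</td><td>2</td></tr></table>")
w = TableWrapper()
w.feed(page)
print(w.rows)   # [['alpha', '1'], ['beta', '2']]
```

The wrapper exposes the page as row records that a query engine such as the one described here could then join with local data sources.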