914 results for reaction acceleration
Abstract:
With a growing world population and an increasing number of developed countries, the rising demand for basic chemical building blocks, such as ethylene and propylene, must be properly addressed in the coming decades. The methanol-to-olefins (MTO) process is an attractive route to produce these alkenes from coal, natural gas or alternative sources such as biomass, using syngas as the feedstock for methanol production. This technology has been widely applied since 1985, and most processes use zeolites as catalysts, particularly ZSM-5. Although its selectivity is not especially biased towards light olefins, ZSM-5 resists rapid deactivation by coke deposition, which makes it quite attractive in industrial environments. Nevertheless, the reaction is highly exothermic and hard to control, making it difficult to anticipate problems such as temperature runaways or hot spots inside the catalytic bed. The main focus of this project is to study these temperature effects on two fronts: an experimental front, in which the catalytic performance and the temperature profiles are studied, and a modelling front, consisting of a five-step strategy to predict weight fractions and catalyst activity. The mind-set of catalytic testing is present in all the assays developed. It was verified that the selectivity towards light olefins increases with temperature, although this also leads to much faster catalyst deactivation. To counter this effect, experiments were carried out using a diluted bed, which increased the catalyst lifetime by 32% to 47%. Additionally, experiments with three thermocouples placed inside the catalytic bed were performed, allowing the deactivation wave and the temperature peaks throughout the bed to be analysed.
Regeneration was performed between consecutive runs, and it was concluded that it can be a powerful means of increasing the catalyst lifetime while maintaining a constant selectivity towards light olefins, through the loss of acid strength in a steam-stabilised zeolitic structure. On the modelling front, a basic preliminary model able to predict weight fractions was built; it should be further tuned to become a tool for predicting deactivation and temperature profiles.
Abstract:
Introduction: Dogs play a primary role in the zoonotic cycle of visceral leishmaniasis (VL). Therefore, the accurate diagnosis of infected dogs, especially asymptomatic dogs, is crucial to the efficiency of VL control programs. Methods: We investigated the agreement of four diagnostic tests for canine visceral leishmaniasis (CVL): parasite detection, either after myeloculture or by direct microscopic examination of tissue imprints; kinetoplast deoxyribonucleic acid polymerase chain reaction (kDNA-PCR); and an immunochromatographic test (ICT). An enzyme-linked immunosorbent assay (ELISA) and an indirect immunofluorescence test (IFAT), both of which were adopted as part of the screening-culling program in Brazil, were used as reference tests. Our sample set consisted of 44 seropositive dogs, 25 of which were clinically asymptomatic and 19 of which were symptomatic for CVL according to ELISA-IFAT. Results: The highest and lowest test co-positivities were observed for ICT (77.3%) and myeloculture (58.1%), respectively. When analyzed together, the overall percentage of co-positive tests was significantly higher for the symptomatic group than for the asymptomatic group. However, only ICT was significantly different based on the results of a separate analysis per test for each group of dogs. The majority (93.8%) of animals exhibited at least one positive test result, with an average of 2.66 positive tests per dog. Half of the symptomatic dogs tested positive in all four tests administered. Conclusions: The variability between test results reinforces the need for more efficient and reliable methods to accurately diagnose canine VL, particularly in asymptomatic animals.
Abstract:
INTRODUCTION: The present study was designed to assess the occurrence of co-infection or cross-reaction in the serological techniques used to detect anti-Leishmania spp., anti-Babesia canis vogeli and anti-Ehrlichia canis antibodies in urban dogs from an area endemic for these parasites. METHODS: Canine serum samples were tested against the Babesia canis vogeli strain Belo Horizonte antigen and the Ehrlichia canis strain São Paulo antigen by the immunofluorescence antibody test (IFAT), and anti-Leishmania immunoglobulin G (IgG) antibodies were detected to assess Leishmania infection. We used the following four commercial kits for canine visceral leishmaniasis: ELISA, IFAT, Dual Path Platform (DPP) (Bio-Manguinhos®/FIOCRUZ/MS) and an rK39 RDT (Kalazar Detect Canine Rapid Test; InBios). RESULTS: Of the 96 serum samples submitted to serological assays, 4 (4.2%) were positive for Leishmania as determined by ELISA; 12 (12.5%) by IFAT; 14 (14.6%) by rK39 RDT; and 20 (20.8%) by DPP. Antibodies against Ehrlichia and Babesia were detected in 23/96 (23.9%) and 30/96 (31.2%) samples, respectively. No significant association was identified between the results of the tests for detecting Babesia or Ehrlichia and those for detecting Leishmania (p-value > 0.05). CONCLUSIONS: In the present study, we demonstrated co-infection with Ehrlichia or Babesia and Leishmania in dogs from Minas Gerais (Brazil); we also found that the serological tests used did not cross-react.
Abstract:
INTRODUCTION: In the Americas, mucosal leishmaniasis is primarily associated with infection by Leishmania (Viannia) braziliensis. However, Leishmania (Viannia) guyanensis is another important cause of this disease in the Brazilian Amazon. In this study, we aimed at detecting Leishmania deoxyribonucleic acid (DNA) within paraffin-embedded fragments of mucosal tissues, and characterizing the infecting parasite species. METHODS: We evaluated samples collected from 114 patients treated at a reference center in the Brazilian Amazon by polymerase chain reaction (PCR) and restriction fragment length polymorphism (RFLP) analyses. RESULTS: Direct examination of biopsy imprints detected parasites in 10 of the 114 samples, while evaluation of hematoxylin and eosin-stained slides detected amastigotes in an additional 17 samples. Meanwhile, 31/114 samples (27.2%) were positive for Leishmania spp. kinetoplast deoxyribonucleic acid (kDNA) by PCR analysis. Of these, 17 (54.8%) yielded amplification of the mini-exon PCR target, thereby allowing for PCR-RFLP-based identification. Six of the samples were identified as L. (V.) braziliensis, while the remaining 11 were identified as L. (V.) guyanensis. CONCLUSIONS: The results of this study demonstrate the feasibility of applying molecular techniques for the diagnosis of human parasites within paraffin-embedded tissues. Moreover, our findings confirm that L. (V.) guyanensis is a relevant causative agent of mucosal leishmaniasis in the Brazilian Amazon.
Abstract:
Accepted Manuscript
Abstract:
In this work, we compare two different numerical schemes for the solution of the time-fractional diffusion equation with a variable diffusion coefficient and a nonlinear source term. The two methods are the implicit numerical scheme presented in [M.L. Morgado, M. Rebelo, Numerical approximation of distributed order reaction-diffusion equations, Journal of Computational and Applied Mathematics 275 (2015) 216-227], adapted to our type of equation, and a collocation method in which Chebyshev polynomials are used to reduce the fractional differential equation to a system of ordinary differential equations.
Abstract:
Doctoral Thesis (Doctoral Programme in Biomedical Engineering)
Abstract:
Objective: To adapt a Portuguese version of the Caregiver Reaction Assessment (CRA) for Brazil and to generate preliminary validity and reliability indicators for its application to caregivers of hospitalized cancer patients. Methods: Fifty-three caregivers participated voluntarily, answering a sociodemographic questionnaire, the CRA and the Psychological Well-Being Scale (EBEP). The unidimensionality and homogeneity of the CRA scores were assessed through principal component analysis and internal consistency, respectively. Pearson correlations between CRA and EBEP scores were examined and used as indicators of divergent and construct validity. Results: The five scales that make up the CRA showed good levels of unidimensionality and homogeneity, but the impact-on-finances and impact-on-health scales yielded insufficient alphas (< 0.7). The CRA total score showed a high alpha (0.886). Correlations between the CRA and the EBEP produced theoretically interpretable coefficients, with magnitudes ranging from null to moderate. Conclusion: The CRA showed good validity and reliability indicators. However, some adaptations to the content of certain items are still needed in order to calibrate them to the context of people served by services subsidized by the Sistema Único de Saúde.
Abstract:
In this chapter, a complete characterization of the angular velocity and angular acceleration of rigid bodies in spatial multibody systems is presented. For both quantities, local and global formulations are described, taking into account the advantages of using Euler parameters. In this process, the transformations between the global and local components of the angular velocity and the time derivative of the Euler parameters are analyzed and discussed.
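The global/local relations summarized in this record can be sketched numerically. The following is a minimal illustration, not taken from the chapter itself: it assumes the common multibody convention (e.g. Nikravesh's formulation) in which the global angular velocity is ω = 2·E·ṗ and the local one is ω' = 2·G·ṗ, with p = (e0, e1, e2, e3) the Euler parameters; all function names are illustrative.

```python
import math

# Euler parameters p = (e0, e1, e2, e3), assumed normalized: e0^2+e1^2+e2^2+e3^2 = 1.
# E and G are the standard 3x4 transformation matrices of the assumed convention.

def E_matrix(p):
    e0, e1, e2, e3 = p
    return [[-e1,  e0, -e3,  e2],
            [-e2,  e3,  e0, -e1],
            [-e3, -e2,  e1,  e0]]

def G_matrix(p):
    e0, e1, e2, e3 = p
    return [[-e1,  e0,  e3, -e2],
            [-e2, -e3,  e0,  e1],
            [-e3,  e2, -e1,  e0]]

def matvec(M, v):
    # 3x4 matrix times 4-vector
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(3)]

def angular_velocity_global(p, pdot):
    # omega = 2 * E(p) * pdot, expressed in the global frame
    return [2.0 * w for w in matvec(E_matrix(p), pdot)]

def angular_velocity_local(p, pdot):
    # omega' = 2 * G(p) * pdot, expressed in the body-fixed frame
    return [2.0 * w for w in matvec(G_matrix(p), pdot)]

# Example: rotation about the global z-axis, angle theta, rate theta_dot.
theta, theta_dot = 0.7, 2.0
p    = [math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2)]
pdot = [-math.sin(theta / 2) * theta_dot / 2, 0.0, 0.0,
         math.cos(theta / 2) * theta_dot / 2]
omega_g = angular_velocity_global(p, pdot)  # expected [0, 0, theta_dot]
omega_l = angular_velocity_local(p, pdot)   # same here, since the axis is z in both frames
```

For a pure rotation about z, both formulations return [0, 0, θ̇], since the rotation axis coincides in the global and body frames; in general the two differ by the rotation matrix built from the same Euler parameters.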
Abstract:
"Series title: Springerbriefs in applied sciences and technology, ISSN 2191-530X"
Abstract:
Today's advances in computing power are driven by the parallel processing capabilities of modern hardware architectures. Using this hardware properly accelerates the execution of algorithms (programs). However, converting an algorithm into a suitable parallel form is complex, and that form is, in turn, specific to each type of parallel hardware. The most common general-purpose processors today are multicore parallel processors, also known as Symmetric Multi-Processors (SMP). Nowadays it is hard to find a desktop processor without some form of SMP-style parallelism, and the development trend is towards processors with an ever larger number of cores. Meanwhile, video processing devices (Graphics Processor Units, GPU) have developed their computing power by integrating multiple processing units into their electronics, to the point that current GPU boards can run 200 to 400 parallel processing threads. These processors are very fast and specialized for the task for which they were developed, mainly video processing. However, since this type of processing has much in common with scientific computing, these devices have been reoriented under the name General Processing Graphics Processor Unit (GPGPU). Unlike the SMP processors mentioned above, GPGPUs are not general-purpose devices: their general use is complicated by the limited amount of memory available on each board and by the type of parallel processing required for their use to be productive.
Finally, programmable logic devices (FPGA) are capable of performing large numbers of operations in parallel, so they can be used to implement specific algorithms that exploit the parallelism they offer. Their drawback stems from the complexity of programming and testing the algorithm instantiated on the device. Given this diversity of parallel processors, the goal of our work is to analyze the specific characteristics of each of them and their impact on the structure of algorithms, so that their use can achieve processing performance in line with the number of resources employed, and to combine them in such a way that they complement each other beneficially. Specifically, starting from the characteristics of the hardware, we determine the properties a parallel algorithm must have in order to be accelerated; the characteristics of the parallel algorithms, in turn, determine which of these new types of hardware is most suitable for their instantiation. In particular, we consider the level of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
Abstract:
Magdeburg, University, Faculty of Mathematics, Dissertation, 2011
Abstract:
Convex cone, toric variety, graph theory, electrochemical catalysis, oxidation of formic acid, feedback loops, bifurcations, enzymatic catalysis, peroxidase reaction, Shil'nikov chaos