6 results for Glomerular filtration rate (Taxa de filtração glomerular)
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
Eutrophication is a growing process in the water sources of northeastern Brazil. Among the main consequences of these changes in the trophic level of a source, the added complexity of the treatment required to meet drinking-water standards stands out. In view of these considerations, this study aimed to define, at laboratory scale, the products and operational conditions to be applied in the treatment steps, using raw water from the Gargalheiras dam, RN, Brazil. The dam shows a high number of cyanobacteria, with a concentration of cells/mL above the limit established by Ordinance No. 518/04 of the Brazilian Ministry of Health (MS), and the same source was classified as hypereutrophic by the state environmental agency in 2009. The bench-scale (static) tests developed in this research simulated direct filtration (laboratory filters), pre-oxidation with chlorine, and adsorption onto powdered activated carbon (PAC). The research evaluated the coagulants aluminum hydroxychloride (HCA) and aluminum sulfate (SA). Rapid-mix conditions, coagulant dosages, and coagulation pH were investigated through coagulation diagrams; the influence of the filtration rate and of the grain size of the filter media was evaluated, and contact times were tested for chlorine and for activated carbon. The characterization of the raw water showed a high pH (7.34). The true color was significant (29 uH) in relation to the apparent color and the turbidity (66 uH and 13.60 NTU), which was reflected in the organic matter measurements: natural organic matter (NOM, 8.41 mg.L-1) and Abs254 (0.065 cm-1). Optimization of the rapid mix defined a time of 17 s with a velocity gradient of 700 s-1 for coagulation with HCA, and a time of 20 s with a velocity gradient of 800 s-1 for SA. Smaller grain sizes of the sand filter media improved the treatment, and variations in the filtration rate did not significantly affect process efficiency. The evaluation of the treatment steps showed compliance with the color and turbidity standards of Ordinance No. 518/04 MS, considering the average values found in the raw water. In the treatment with HCA and direct filtration, the potability standard for apparent color was met with a dose of 25 mg.L-1; with the addition of the pre-oxidation step, the standard was met with the dose reduced to 12 mg.L-1 of HCA. The turbidity standard was met by direct filtration when the HCA dose exceeded 25 mg.L-1; with pre-oxidation, the dose could be reduced to 20 mg.L-1. The addition of PAC adsorption produced drinking water for both parameters at an even lower dose, 13 mg.L-1 of HCA. With coagulation by SA, the removal required for the apparent-color parameter was achieved with pre-oxidation and 22 mg.L-1 of SA. Despite the satisfactory results of the treatment with aluminum sulfate, it was not possible to produce water with turbidity below 1.00 NTU, even with the use of all treatment stages.
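For reference, rapid-mix conditions such as those reported above are often compared through the Camp number (G·t) and the mixing power P = G²·μ·V. The short sketch below only illustrates that relation for the two optimized conditions; the water viscosity and jar volume are assumptions made for the example and do not come from the thesis.

    # Illustrative sketch: Camp number (G*t) and mixing power for the rapid-mix
    # conditions reported in the abstract. The water viscosity and jar volume
    # are assumptions for the example, not values from the thesis.
    import math

    MU = 8.9e-4    # dynamic viscosity of water at ~25 C (Pa.s) -- assumed
    VOLUME_L = 2.0 # jar volume in litres, converted to m3 below -- assumed

    def camp_number(G, t):
        """Dimensionless G*t product used to compare rapid-mix intensities."""
        return G * t

    def mixing_power(G, volume_m3, mu=MU):
        """Power (W) needed to sustain a velocity gradient G: P = G^2 * mu * V."""
        return G**2 * mu * volume_m3

    for label, G, t in [("HCA", 700.0, 17.0), ("SA", 800.0, 20.0)]:
        P = mixing_power(G, VOLUME_L / 1000.0)
        print(f"{label}: G*t = {camp_number(G, t):.0f}, P = {P:.2f} W")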
Abstract:
Deep bed filtration occurs in several industrial and environmental processes, such as water filtration and soil contamination. In the petroleum industry, deep bed filtration takes place near injection wells during water injection, causing injectivity decline; it also occurs during well drilling, sand production control, disposal of produced water in aquifers, etc. Particle capture in porous media can be caused by different physical mechanisms (size exclusion, electrical forces, bridging, gravity, etc.). A statistical model for filtration in porous media is proposed and analytical solutions for the concentrations of suspended and retained particles are derived. The model, which incorporates a particle retention probability, is compared with the classical deep bed filtration model, allowing a physical interpretation of the filtration coefficients. Comparison of the analytical solutions of the proposed model with those of the classical model shows that the larger the particle capture probability, the larger the discrepancy between the proposed and the classical models.
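For context, the classical deep bed filtration model referred to above couples a particle mass balance with linear capture kinetics, ∂σ/∂t = λUc, whose steady-state injection profile decays exponentially with depth. The sketch below is a minimal illustration of that classical profile under an assumed filtration coefficient; it is not the statistical model proposed in the work.

    # Minimal sketch of the classical deep bed filtration steady-state profile:
    # capture kinetics d(sigma)/dt = lambda * U * c gives c(x) = c0 * exp(-lambda * x)
    # for continuous injection. lambda and c0 below are illustrative values only.
    import math

    LAMBDA = 5.0   # filtration coefficient (1/m) -- assumed for the example
    C0 = 1.0       # injected suspended concentration (normalized) -- assumed

    def suspended_concentration(x_m):
        """Steady-state suspended particle concentration at depth x (classical model)."""
        return C0 * math.exp(-LAMBDA * x_m)

    for x in (0.0, 0.1, 0.2, 0.5):
        print(f"x = {x:.1f} m -> c/c0 = {suspended_concentration(x) / C0:.3f}")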
Abstract:
Telecommunications is one of the most dynamic and strategic areas in the world. Many technological innovations have modified the way information is exchanged: information and knowledge are now shared in networks, and broadband Internet is the new way of sharing content. This dissertation deals with performance indicators related to maintenance services of telecommunications networks and uses multivariate regression models to estimate churn, which is the loss of customers to other companies. In a competitive environment, telecommunications companies have devised strategies to minimize the loss of customers, since losing customers represents a higher cost than obtaining new ones. Corporations have plenty of data stored in a diversity of databases, but usually these data are not explored properly. This work uses Knowledge Discovery in Databases (KDD) to establish rules and new models that explain how churn, as a dependent variable, is related to a diversity of service indicators, such as time to deploy the service (in hours), time to repair (in hours), and so on. Extracting meaningful knowledge is, in many cases, a challenge. The models were tested and statistically analyzed. The work also presents results that allow the analysis and identification of which service quality indicators influence churn, and actions are proposed to solve, at least in part, this problem.
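A minimal sketch of the kind of multivariate regression described above is shown below; the indicator names, the synthetic data, and the use of scikit-learn are placeholders for illustration and do not reproduce the models or databases of the dissertation.

    # Illustrative sketch (not the dissertation's code): a multivariate regression
    # relating churn to service-quality indicators such as time to deploy and time
    # to repair. The data below are synthetic placeholders showing the model shape.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # columns: time_to_deploy_h, time_to_repair_h  (hypothetical indicators)
    X = np.array([[48.0, 4.0],
                  [72.0, 8.0],
                  [24.0, 2.0],
                  [96.0, 12.0]])
    churn_rate = np.array([0.03, 0.06, 0.02, 0.09])  # dependent variable

    model = LinearRegression().fit(X, churn_rate)
    print("coefficients:", model.coef_, "intercept:", model.intercept_)
    print("predicted churn for 36 h deploy / 3 h repair:",
          model.predict([[36.0, 3.0]])[0])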
Abstract:
Hebb postulated that memories could be stored through the synchronous activity of many neurons forming a neural assembly. Given the importance of the hippocampus for the formation of new explicit memories, we used electrophysiological recordings of multiple neurons to assess the relevance of rate coding, based on neural firing rates, in comparison with the temporal coding expressed by neural assembly activity in the consolidation of an aversive memory in rats. Animals were trained in a discriminative avoidance task using a modified elevated plus-maze, and slow-wave sleep (SWS) periods were recorded during the experimental sessions. Our results show an increase in the activity of the identified neural assemblies during post-training SWS, but no corresponding increase in neural firing rates. In summary, we demonstrate that, for this particular task, the information needed for proper memory consolidation lies in the temporal patterns of synchronized neural activity, not in firing rates.
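One common way to contrast the two codes, not necessarily the procedure used in the thesis, is to bin the spike trains, compare mean firing rates across recording periods, and score assembly expression by projecting z-scored binned activity onto a principal-component pattern. The toy sketch below illustrates that contrast with synthetic data only.

    # Toy sketch contrasting rate coding and assembly (temporal) coding on binned
    # spike counts. Generic illustration, not the analysis from the thesis; the
    # synthetic data and the PCA-based assembly score are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    pre = rng.poisson(2.0, size=(1000, 20))    # bins x neurons, pre-training SWS
    post = rng.poisson(2.0, size=(1000, 20))   # post-training SWS (synthetic)

    def zscore(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-9)

    # Rate coding: compare mean firing rate per neuron across the two periods.
    rate_change = post.mean(axis=0) - pre.mean(axis=0)

    # Assembly coding: take the leading PCA pattern of the pre period and track
    # how strongly it is expressed, bin by bin, during the post period.
    z_pre, z_post = zscore(pre), zscore(post)
    eigvals, eigvecs = np.linalg.eigh(np.cov(z_pre, rowvar=False))
    pattern = eigvecs[:, -1]                       # leading assembly pattern
    activation_post = (z_post @ pattern) ** 2      # expression strength per bin

    print("mean rate change per neuron:", rate_change.mean())
    print("mean assembly activation (post):", activation_post.mean())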
Abstract:
Nowadays, many electronic devices support digital video; examples include cell phones, digital cameras, video cameras, and digital televisions. However, raw video, represented as captured, amounts to a huge volume of data, millions of bits. Storing it in this primary form would require a huge amount of disk space, and transmitting it would require a huge bandwidth. Video compression therefore becomes essential to make the storage and transmission of this information possible. Motion estimation is a technique used in the video coder that exploits the temporal redundancy present in video sequences to reduce the amount of data needed to represent the information. This work presents a hardware architecture of a motion estimation module for high-resolution video according to the H.264/AVC standard. H.264/AVC is the most advanced video coding standard, with several new features that allow it to achieve high compression rates. The architecture presented in this work was developed to provide a high degree of data reuse; the adopted data reuse scheme reduces the bandwidth required to perform motion estimation. Motion estimation is the task responsible for the largest share of the gains obtained with the H.264/AVC standard, so this module is essential for the performance of the final video coder. This work is part of the Rede H.264 project, which aims to develop Brazilian technology for the Brazilian Digital Television System.
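The block-matching operation that a motion estimation module performs can be illustrated in software as a full search over a window using the sum of absolute differences (SAD). The sketch below is a generic illustration only; the block size and search range are arbitrary, and it does not reflect the hardware architecture or the data reuse scheme of the work.

    # Generic software sketch of block-matching motion estimation (full search
    # with sum of absolute differences). Block size and search range are
    # arbitrary choices for the example.
    import numpy as np

    def sad(block_a, block_b):
        """Sum of absolute differences between two equally sized blocks."""
        return np.abs(block_a.astype(int) - block_b.astype(int)).sum()

    def motion_vector(ref, cur, top, left, block=8, search=8):
        """Best (dy, dx) for the current block inside a +/-search window of ref."""
        target = cur[top:top + block, left:left + block]
        best, best_cost = (0, 0), float("inf")
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                    continue
                cost = sad(ref[y:y + block, x:x + block], target)
                if cost < best_cost:
                    best_cost, best = cost, (dy, dx)
        return best, best_cost

    rng = np.random.default_rng(1)
    reference = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    current = np.roll(reference, shift=(2, 3), axis=(0, 1))   # synthetic motion
    print(motion_vector(reference, current, top=16, left=16))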
Abstract:
This work addresses the optimization problem in high dose rate brachytherapy for the treatment of cancer patients, aiming at the definition of the set of dwell times. The solution technique adopted was Computational Transgenetics supported by the L-BFGS method. The developed algorithm was used to generate non-dominated solutions whose dose distributions are able to eliminate the cancer while, at the same time, preserving the normal regions.
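As a rough illustration of the kind of local refinement an L-BFGS step can perform on dwell times, the sketch below minimizes a quadratic deviation from a prescribed dose using SciPy's L-BFGS-B. The dose-influence matrix, the prescription, and the objective are assumptions made for the example; the transgenetic algorithm itself is not reproduced here.

    # Illustrative sketch of a local-refinement step only: L-BFGS-B adjusting dwell
    # times so that delivered doses approach a prescription. The dose-influence
    # matrix, prescription values, and quadratic penalty are assumptions; the
    # transgenetic algorithm from the thesis is not reproduced.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    dose_matrix = rng.uniform(0.0, 1.0, size=(30, 8))   # dose per unit dwell time
    prescription = np.full(30, 5.0)                     # target dose at each point

    def objective(dwell_times):
        """Quadratic deviation of delivered dose from the prescribed dose."""
        delivered = dose_matrix @ dwell_times
        return np.sum((delivered - prescription) ** 2)

    x0 = np.ones(8)                                     # initial dwell times (s)
    result = minimize(objective, x0, method="L-BFGS-B",
                      bounds=[(0.0, None)] * 8)         # dwell times must be >= 0
    print("optimized dwell times:", np.round(result.x, 2))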