30 results for "Métodos matemáticos"

at the Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

60.00%

Publisher:

Abstract:

The usual programs for load flow calculation were generally developed to simulate electric energy transmission, subtransmission and distribution systems. However, the mathematical methods and algorithms used in their formulations were mostly based on the characteristics of transmission systems, which were the main focus of engineers and researchers. The physical characteristics of transmission systems, however, are quite different from those of distribution systems. In transmission systems, voltage levels are high and lines are generally very long, so the capacitive and inductive effects that appear in the system have a considerable influence on the quantities of interest and must be taken into account. Moreover, loads in transmission systems have a macro nature, representing, for example, cities, neighborhoods or large industries. These loads are generally almost balanced, which reduces the need for a three-phase load flow methodology. Distribution systems, on the other hand, present different characteristics: voltage levels are low compared to transmission, which almost annuls the capacitive effects of the lines, and the loads are transformers whose secondaries feed small, often single-phase consumers, so the probability of finding an unbalanced circuit is high. The use of three-phase methodologies thus becomes important. Besides, equipment such as voltage regulators, which simultaneously use the concepts of phase and line voltage in their operation, requires a three-phase methodology in order to simulate its real behavior. For these reasons, a method for three-phase load flow calculation was initially developed within the scope of this work, in order to simulate the steady-state behavior of distribution systems.
To achieve this goal, the Power Summation Algorithm was used as the basis for developing the three-phase method. This algorithm had already been widely tested and approved by researchers and engineers for the simulation of radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between phases, while the earth effect is taken into account through the Carson reduction. It is important to point out that, although loads are normally connected to the transformers' secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used that allows the simulation of various configurations, according to their real operation. Finally, the representation of switches with current measurement at various points of the feeder was considered: the loads are adjusted during the iterative process so that the current in each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived from the described load flow in order to support subsequent optimization processes. These parameters are obtained by calculating the partial derivatives of one variable with respect to another, in general voltages, losses and reactive powers. After describing the calculation of the sensitivity parameters, the Gradient Method is presented, which uses these parameters to optimize an objective function defined for each type of study. The first study concerns the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second concerns the correction of the voltage profile through the installation of capacitor banks or voltage regulators.
For loss reduction, the objective function is the sum of the losses in all parts of the system. For voltage profile correction, the objective function is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of applying the described methods to some feeders are presented, in order to give insight into their performance and accuracy.
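As an illustration of the underlying idea, a minimal single-phase power summation sweep can be sketched as follows. This is a simplification of the three-phase formulation described above; the feeder data, per-unit values and convergence settings are hypothetical, not taken from the thesis.

```python
import numpy as np

def power_summation_load_flow(z, s_load, v_source=1.0, tol=1e-8, max_iter=50):
    """Single-phase power summation sketch for a radial feeder.

    z      : per-section series impedances (complex, pu), root to leaf
    s_load : complex power demand at each node (pu)
    Returns node voltage magnitudes (pu); node 0 is the source.
    """
    n = len(z)
    v = np.full(n + 1, v_source, dtype=complex)
    for _ in range(max_iter):
        # Backward sweep: accumulate downstream powers plus series losses.
        s_flow = np.zeros(n, dtype=complex)
        acc = 0j
        for k in range(n - 1, -1, -1):
            acc += s_load[k]
            i = np.conj(acc / v[k + 1])
            acc += z[k] * abs(i) ** 2          # section loss
            s_flow[k] = acc
        # Forward sweep: propagate the source voltage down the feeder.
        v_new = v.copy()
        for k in range(n):
            i = np.conj(s_flow[k] / v_new[k])
            v_new[k + 1] = v_new[k] - z[k] * i
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    return np.abs(v)
```

Each backward sweep sums the demand below every section and adds the series losses; the forward sweep then updates the voltage profile from the source until it stops changing.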

Relevance:

60.00%

Publisher:

Abstract:

Waste stabilization ponds (WSP) have been widely used for sewage treatment in hot-climate regions because they are economical and environmentally sustainable. In the present study, a WSP complex comprising a primary facultative pond (PFP) followed by two maturation ponds (MP-1 and MP-2) was studied in the city of Natal-RN. The main objective was to study the biodegradability of organic matter through the determination of the kinetic constant k throughout the system. The work was carried out in two phases. In the first, the variability of BOD, COD and TOC concentrations was studied, together with an analysis of the relations between these parameters, in the influent raw sewage, in the pond effluents and in specific areas inside the ponds. In the second phase, the decay rate for organic matter (k) was determined throughout the system, based on BOD tests on the influent sewage, pond effluents and water column samples taken from fixed locations within the ponds, using the mathematical methods of Least Squares and the Thomas equation. Subsequently, k was estimated as a function of a hydrodynamic model determined from the dispersion number (d), using empirical methods and a Partial Hydrodynamic Evaluation (PHE) obtained from tracer studies in a section of the primary facultative pond corresponding to 10% of its total length. The concentrations of biodegradable organic matter, measured as BOD and COD, were gradually reduced through the series of ponds, giving overall removal efficiencies of 71.95% for BOD and 52.45% for COD. Determining the values of k in the influent and effluent samples of the ponds using the Least Squares method gave the following values, respectively: primary facultative pond (0.23 day-1 and 0.09 day-1), maturation pond 1 (0.04 day-1 and 0.03 day-1) and maturation pond 2 (0.03 day-1 and 0.08 day-1).
Using the Thomas method, the values of k in the influents and effluents of the ponds were: primary facultative pond (0.17 day-1 and 0.07 day-1), maturation pond 1 (0.02 day-1 and 0.01 day-1) and maturation pond 2 (0.01 day-1 and 0.02 day-1). From the Partial Hydrodynamic Evaluation in the first section of the facultative pond, corresponding to 10% of its total length, the dispersion number obtained, d = 0.04, indicates a dispersed-flow hydraulic regime with a kinetic constant of 0.20 day-1.
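The Thomas equation mentioned above linearizes the first-order BOD model BODt = L(1 − e^(−kt)) by fitting (t/BODt)^(1/3) as a straight line in t. A small sketch of this fit, using synthetic sample data rather than the study's measurements:

```python
import numpy as np

def thomas_k(t, y):
    """Thomas slope method for the first-order BOD model.

    Linearize (t/y)^(1/3) = A + B*t, then k = 6*B/A and the
    ultimate BOD is L = 1/(k*A**3).
    t : incubation times (days), y : exerted BOD (mg/L).
    """
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    f = (t / y) ** (1.0 / 3.0)
    B, A = np.polyfit(t, f, 1)   # slope, intercept
    k = 6.0 * B / A
    L = 1.0 / (k * A ** 3)
    return k, L
```

Because the Thomas linearization is an approximation, the recovered k is close to, but not exactly equal to, the k used to generate the data.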

Relevance:

60.00%

Publisher:

Abstract:

Stabilization ponds are the main technology used for wastewater treatment in northeast Brazil, due to their lower cost of deployment, operation and maintenance compared to other technologies. Most stabilization pond systems have been in operation for some time, on average 10 years, receiving high organic loads, and no longer achieve good removal efficiencies for the main parameters for which they were designed. It is therefore necessary to quantify the efficiency of the current systems. This study evaluated the biodegradability of organic matter in raw sewage, the removal of organic matter in the reactors, and the kinetic constant of organic matter removal (k), both in the reactors and in the raw sewage, based on laboratory analyses and on mathematical methods proposed in the literature, in nine stabilization pond systems located in Rio Grande do Norte. Regarding degradation kinetics in stabilization ponds, it was observed that many results published in the literature were obtained in pilot-scale systems, which, often due to the action of external factors such as wind and temperature, cannot be taken as a reference for the analysis of the kinetic constant k; hence the need for more research on full-scale systems. This study had three distinct and simultaneous phases: routine monitoring, study of the daily cycle, and determination of the kinetic constant of organic matter degradation (k). The monitoring showed that the organic matter removal efficiencies of most systems were lower than suggested by the literature, the best being around 76% (BOD) and 72% (COD) and the worst around 48% (BOD) and 55% (COD). The values of k calculated for the raw sewage (Ke) were within the range expected in the literature (0.35 to 0.60 day-1).
The values obtained for k in the reactors (Kr), however, were well below those recommended in the literature (0.25 to 0.40 day-1 for complete mix and 0.13 to 0.17 day-1 for dispersed flow), consistent with the organic overloads to which the systems are subjected.
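For reference, the idealized hydraulic regimes mentioned above give different remaining fractions C/C0 for the same k and detention time t. A sketch using the standard plug-flow, complete-mix and Wehner-Wilhelm dispersed-flow expressions (the numbers in the test are illustrative, not results from the study):

```python
import math

def plug_flow(k, t):
    """Remaining fraction C/C0 for an ideal plug-flow reactor."""
    return math.exp(-k * t)

def complete_mix(k, t):
    """Remaining fraction C/C0 for a single completely mixed reactor."""
    return 1.0 / (1.0 + k * t)

def dispersed_flow(k, t, d):
    """Wehner-Wilhelm solution for C/C0 at dispersion number d.

    As d -> 0 this approaches plug flow; large d approaches
    complete mix, so the result lies between the two ideals.
    """
    a = math.sqrt(1.0 + 4.0 * k * t * d)
    num = 4.0 * a * math.exp(1.0 / (2.0 * d))
    den = ((1.0 + a) ** 2 * math.exp(a / (2.0 * d))
           - (1.0 - a) ** 2 * math.exp(-a / (2.0 * d)))
    return num / den
```

With a small dispersion number such as the d = 0.04 found for the facultative pond, the dispersed-flow prediction sits just above the plug-flow one and well below complete mix.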

Relevance:

30.00%

Publisher:

Abstract:

Forecasting is the basis for strategic, tactical and operational business decisions. In financial economics, several techniques have been used over the past decades to predict the behavior of assets. There are thus several methods to assist in the task of time series forecasting; however, conventional modeling techniques, such as statistical models and those based on theoretical mathematical models, have produced unsatisfactory predictions, which has increased the number of studies on more advanced prediction methods. Among these, Artificial Neural Networks (ANN) are a relatively new and promising method for business forecasting, a technique that has attracted much interest in the financial environment and has been used successfully in a wide variety of financial modeling applications, in many cases proving superior to ARIMA-GARCH statistical models. In this context, this study aimed to examine whether ANNs are a more appropriate method for predicting the behavior of capital market indices than traditional methods of time series analysis. For this purpose, a quantitative study based on financial economic indices was carried out, and two supervised-learning feedforward ANN models were developed, whose structures consisted of 20 inputs, 90 neurons in one hidden layer, and one output (the Ibovespa). These models used backpropagation, an activation function based on the tangent sigmoid, and a linear output function.
Since the aim was to analyze the adherence of the Artificial Neural Network method in carrying out predictions of the Ibovespa, this analysis was performed by comparing its results with those of the GARCH time series predictive model, for which a GARCH(1,1) model was developed. Once both methods (ANN and GARCH) were applied, the results were analyzed by comparing the forecasts with the historical data and by studying the forecast errors through the MSE, RMSE, MAE, standard deviation, Theil's U and forecast encompassing tests. It was found that the models developed by means of ANNs had lower MSE, RMSE and MAE than the GARCH(1,1) model, and the Theil's U test indicated that the three models have smaller errors than a naïve forecast. Although the ANN based on returns had poorer precision indicators than the ANN based on prices, the forecast encompassing test rejected the hypothesis that one model is better than the other, indicating that the ANN models have a similar level of accuracy. It was concluded that, for the data series studied, the ANN models provide a more appropriate Ibovespa forecast than the traditional time series models, represented by the GARCH model.
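The error measures used in the comparison can be sketched as follows. This uses one common form of Theil's U, relative to a naive no-change forecast; definitions in the literature vary slightly, so this is illustrative rather than the exact statistic of the study.

```python
import numpy as np

def forecast_errors(actual, predicted):
    """MSE, RMSE, MAE and Theil's U against a naive no-change forecast.

    U < 1 means the model beats the naive "tomorrow equals today"
    forecast; U = 1 means it does no better.
    """
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    e = a - p
    mse = np.mean(e ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(e))
    # Naive benchmark: predict a[t] with a[t-1].
    naive_mse = np.mean((a[1:] - a[:-1]) ** 2)
    theil_u = np.sqrt(np.mean(e[1:] ** 2) / naive_mse)
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "U": theil_u}
```

A perfect forecast gives zero for all four measures, and a forecast identical to the naive benchmark gives U = 1.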

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this work is to find mathematical models, based on linear parametric estimation techniques, for the problem of calculating gas flow in oil wells. In particular, we focus on obtaining flow models for wells that produce by the plunger-lift technique, in which case there are high peaks in the flow values that hinder their direct measurement by instruments. To this end, we developed estimators based on recursive least squares and performed an analysis of statistical measures such as the autocorrelation, cross-correlation, variogram and cumulative periodogram, which are calculated recursively as data are obtained in real time from the plant in operation; the values obtained for these measures indicate how accurate the model in use is and how it can be changed to better fit the measured values. The models were tested in a pilot plant that emulates the gas production process in oil wells.
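A recursive least-squares update of the kind described can be sketched as follows. This is the generic linear-in-parameters form; the regressors, forgetting factor and plant model in the usage below are hypothetical, not those of the thesis.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.99):
    """One recursive least-squares step with forgetting factor lam.

    theta : current parameter estimate, shape (n, 1)
    P     : covariance matrix, shape (n, n)
    x     : regressor vector for the new sample, shape (n,)
    y     : new scalar measurement
    Returns the updated (theta, P), processing one sample at a time,
    which is what allows the statistics to be tracked in real time.
    """
    x = x.reshape(-1, 1)
    Px = P @ x
    g = Px / (lam + (x.T @ Px).item())     # gain vector
    err = y - (x.T @ theta).item()         # a-priori prediction error
    theta = theta + g * err
    P = (P - g @ Px.T) / lam
    return theta, P
```

Starting from a zero estimate with a large initial covariance, feeding noiseless samples of a linear plant drives the estimate to the true parameters.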

Relevance:

20.00%

Publisher:

Abstract:

Although it has been proposed that the retinal vasculature has a fractal structure, no standardization of the segmentation method or of the fractal dimension calculation method has been carried out. This study aimed to determine whether the estimation of the fractal dimensions of the retinal vasculature depends on the vascular segmentation methods and on the dimension calculation methods. Methods: Ten retinographic images were segmented to extract their vascular trees by four computational methods ("multithreshold", "scale-space", "pixel classification" and "ridge based detection"). Their "information", "mass-radius" and "box-counting" fractal dimensions were then calculated and compared with the dimensions of the same vascular trees obtained by manual segmentation (the gold standard). Results: The means of the fractal dimensions varied across the groups of different segmentation methods from 1.39 to 1.47 for the box-counting dimension, from 1.47 to 1.52 for the information dimension, and from 1.48 to 1.57 for the mass-radius dimension. The use of different computational methods of vascular segmentation, as well as of different dimension calculation methods, introduced statistically significant differences in the fractal dimension values of the vascular trees. Conclusion: The estimation of the fractal dimensions of the retinal vasculature depended both on the vascular segmentation methods and on the dimension calculation methods used.
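A minimal box-counting estimator of the kind compared above can be sketched in a few lines, for a binary vessel mask; the box sizes are illustrative, not the study's protocol.

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a binary image.

    For each box size s, count the boxes N(s) containing at least one
    foreground pixel, then fit the slope of log N against log(1/s).
    """
    img = np.asarray(img, dtype=bool)
    counts = []
    for s in sizes:
        h = (img.shape[0] // s) * s
        w = (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

Sanity checks: a filled square yields a dimension of 2 and a straight line a dimension of 1, with fractal structures such as vessel trees falling in between.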

Relevance:

20.00%

Publisher:

Abstract:

Natural ventilation is an efficient bioclimatic strategy that provides thermal comfort, healthfulness and cooling to a building. However, disregard for environmental quality, the uncertainties involved in the phenomenon, and the popularization of artificial climate systems serve as an excuse for those who neglect the benefits of passive cooling. Unfamiliarity with the concept may be lessened if ventilation is considered at every step of the design, especially in the initial phase, in which decisions have a great impact on the construction process. The tools available to quantify the impact of design decisions consist basically of air change rate calculations or simulations of fluid flow, commonly dubbed CFD (Computational Fluid Dynamics), both somewhat removed from design practice and ill-suited to parametric studies. Thus, we chose to verify, through computer simulation, the representativeness of the results of a simplified air change rate calculation method, as well as to make it more compatible with the questions relevant to the first phases of the design process. The case object consists of a model resulting from the recommendations of the Código de Obras de Natal/RN, customized according to NBR 15220. The study showed the complexity of incorporating a CFD tool into the process and the need for a method capable of generating data at a rate compatible with the flow of ideas that are proposed and discarded during design development. At the end of the study, we discuss the concessions necessary to carry out the simulations, the applicability and limitations of both the tools used and the method adopted, and the representativeness of the results obtained.
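For orientation, the simplest form of the air change calculation mentioned above can be sketched as follows. This is a textbook wind-driven cross-ventilation estimate with an assumed discharge coefficient, not the method evaluated in the thesis.

```python
def air_changes_per_hour(a_inlet, a_outlet, wind_speed, volume, cd=0.6):
    """Simplified wind-driven ventilation estimate (illustrative).

    Two openings in series are combined into an effective area,
    the airflow is Q = cd * A_eff * v (m^3/s), and the renovation
    rate is 3600 * Q / room volume (air changes per hour).
    """
    a_eff = (a_inlet * a_outlet) / (a_inlet ** 2 + a_outlet ** 2) ** 0.5
    q = cd * a_eff * wind_speed
    return 3600.0 * q / volume
```

For two 1 m^2 openings, a 2 m/s wind and a 100 m^3 room, this gives on the order of 30 air changes per hour, the kind of single-number answer such simplified methods trade against CFD detail.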

Relevance:

20.00%

Publisher:

Abstract:

The assessment of building thermal performance is often carried out using HVAC energy consumption data, when available, or measurements of thermal comfort variables for free-running buildings. Both types of data can be obtained by monitoring or by computer simulation. Assessment based on thermal comfort variables is the most complex, because it depends on the determination of the thermal comfort zone. For these reasons, this master's thesis explores methods of building thermal performance assessment using thermal comfort variables simulated with the DesignBuilder software. The main objective is to contribute to the development of methods that support architectural decisions during the design process and that feed energy and sustainability rating systems. The research method consists of selecting thermal comfort methods and modeling them in spreadsheets, with output charts developed to streamline the analyses, which are then used to assess the simulation results of low-cost house configurations. The house models consist of a base case, already built, and variations in thermal transmittance, absorptance and shading. The simulation results are assessed with each thermal comfort method in order to identify their sensitivity. The final results show the limitations of the methods, the importance of a method that considers thermal radiation and wind speed, and the contribution of the proposed chart.

Relevance:

20.00%

Publisher:

Abstract:

Chitin is an important structural component of the cell wall of fungi and of the exoskeleton of many invertebrate pests, such as insects and nematodes. In the digestive systems of insects, it forms a matrix named the peritrophic membrane. One of the most studied protein-carbohydrate interaction models is the one involving chitin-binding proteins. Among the domains already characterized in this interaction are the hevein domain (HD), from Hevea brasiliensis (rubber tree); the R&R consensus domain (R&R), found in cuticular proteins of insects; and the motif called in this study the conglycinin motif (CD), found in the crystallographic structure of β-conglycinin bound to GlcNAc. These three chitin-binding domains were used to determine, in silico, which of them could be involved in the interaction of Canavalia ensiformis and Vigna unguiculata vicilins with chitin, and to associate these results with the WD50 of these vicilins for Callosobruchus maculatus larvae. The comparative modeling technique was used to build a 3D model of the V. unguiculata vicilin, which was not found in the databases. Using the ClustalW program, these domains were located in the vicilins' primary structure. The R&R and CD domains were found with greater homology in the vicilins' primary sequences and became the targets of the interaction studies. Through the GRAMM program, docking models of the vicilins with GlcNAc were obtained. The results showed that, by in silico analysis, HD is not part of the vicilin structures, confirming the result obtained with the alignment of the primary sequences; the R&R domain, although it has no structural similarity in the vicilins, probably participates in their interaction with GlcNAc; whereas the CD domain participates directly in the interaction of the vicilins with GlcNAc.
These in silico results show that the number of amino acids and the types and number of bonds made by the CD motif with GlcNAc seem to be directly associated with the deleterious power of these vicilins toward C. maculatus larvae. This may be an initial step toward clarifying how the vicilins interact with chitin in vivo and exert their toxic effect on insects that possess a peritrophic membrane.

Relevance:

20.00%

Publisher:

Abstract:

Shrimp farming is one of the activities that contributes most to the growth of global aquaculture. However, this business has suffered significant economic losses due to the onset of viral diseases such as Infectious Myonecrosis (IMN). IMN is already widespread throughout northeastern Brazil and affects other countries such as Indonesia, Thailand and China. The main symptom of the disease is myonecrosis, the necrosis of the striated muscles of the abdomen and cephalothorax of the shrimp. IMN is caused by the infectious myonecrosis virus (IMNV), a non-enveloped virus with protrusions along its capsid. The viral genome consists of a single molecule of double-stranded RNA and has two Open Reading Frames (ORFs). ORF1 encodes the major capsid protein (MCP) and a potential RNA-binding protein (RBP); ORF2 encodes a probable RNA-dependent RNA polymerase (RdRp), which places IMNV in the Totiviridae family. The objective of this research was thus to study the complete IMNV genome and the encoded proteins in order to develop a system to differentiate virus isolates based on the presence of polymorphisms. The phylogenetic relationship among totiviruses was investigated and revealed a new group for IMNV within the Totiviridae family. Two new genomes were sequenced, analyzed and compared with two other genomes already deposited in GenBank. The new genomes were more similar to each other than to those already described. Conserved and variable regions of the genome were identified through similarity graphs and alignments using the four IMNV sequences. This analysis allowed the mapping of polymorphic sites and revealed that the most variable region of the genome is in the first half of ORF1, which coincides with the regions that possibly encode the viral protrusion, while the most stable regions of the genome were found in conserved domains of proteins that interact with RNA.
Moreover, secondary structures were predicted for all proteins using various software packages, and protein structural models were calculated using threading and ab initio modeling approaches. From these analyses it was possible to observe that the IMNV proteins have motifs and shapes similar to the proteins of other totiviruses, and possible new protein functions were proposed. The genome and protein study was essential for the development of a PCR-based detection system able to discriminate the four IMNV isolates based on the presence of polymorphic sites.

Relevance:

20.00%

Publisher:

Abstract:

Water injection is the most widely used method for supplementary recovery in many oil fields, for various reasons: water is an effective displacing agent for low-viscosity oils, water injection projects are relatively simple to set up, and water is available at relatively low cost. Designing water injection projects requires reservoir studies to define the various parameters needed to increase the effectiveness of the method. For this kind of study, several mathematical models can be used, classified into two general categories: analytical or numerical. The present work aims to perform a comparative analysis between the results of a streamline simulator and those of a conventional finite-difference simulator; both types of simulator are based on numerical methods designed to model light-oil reservoirs subjected to water injection. Two reservoir models were therefore defined: the first was a heterogeneous model whose petrophysical properties vary along the reservoir, and the other was created using average petrophysical properties obtained from the first. Comparisons were made with the two models always under the same operational conditions. Some rock and fluid parameters were then changed in both models and the results were compared again. From the factorial design carried out for the sensitivity analysis of reservoir parameters, a few cases were chosen to study the role of the water injection rate and of the vertical position of the well perforations in the production forecast. It was observed that the results from the two simulators are quite similar in most cases; differences were found only in those cases where the gas solubility ratio of the model was increased.
Thus, it was concluded that, in flow simulation of reservoirs analogous to those studied here, mainly when the gas solubility ratio is low, the conventional finite-difference simulator may be replaced by the streamline simulator: the production forecast is compatible and the computational processing time is lower.

Relevance:

20.00%

Publisher:

Abstract:

The relation between metabolic demand and maximal oxygen consumption during exercise has been investigated in different areas of knowledge. In the health field, the determination of maximal oxygen consumption (VO2max) is considered a method to classify the level of physical fitness or the risk of cardiocirculatory diseases. Accurate data provide a better evaluation of functional responses and reduce the margin of error in risk classification, as well as in the determination of aerobic exercise workload. In Brazil, the use of respirometry associated with ergometric testing has become an option in cardiorespiratory evaluation. This equipment allows inferences concerning the oxidoreductive process, making it possible to identify physiological responses to physical effort, such as the respiratory thresholds. This thesis focused on the development of mathematical models built by multiple regression and validated by the stepwise method, aiming to predict VO2max from respiratory responses to physical effort. The sample was composed of 181 randomly selected healthy individuals, men and women, randomized into two groups: a regression group and a cross-validation group (GV). The volunteers underwent an incremental treadmill test to determine the second respiratory threshold (LVII) and the peak VO2. Using the forward addition method, 11 models for predicting treadmill VO2max were developed. No significant differences were found between the measured VO2max and that predicted by the models when compared using one-way ANOVA and Tukey's post hoc test. We conclude that the developed mathematical models allow the prediction of the VO2max of healthy young individuals based on the LVII.
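The regression step underlying such prediction models can be sketched as an ordinary least-squares fit. This is a generic illustration with synthetic predictors, not the thesis data or its 11 published equations.

```python
import numpy as np

def fit_linear_model(X, y):
    """Ordinary least squares for a VO2max-style prediction model.

    X : (n_samples, n_features) predictor matrix
    y : (n_samples,) measured outcome
    Returns the coefficient vector, intercept first, so that
    y ≈ b0 + b1*x1 + ... + bk*xk.
    """
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict(beta, X):
    """Apply a fitted coefficient vector to new predictors."""
    A = np.column_stack([np.ones(len(X)), X])
    return A @ beta
```

On noiseless synthetic data the fit recovers the generating coefficients exactly; with real measurements, a cross-validation group like the GV described above is what guards against overfitting.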

Relevance:

20.00%

Publisher:

Abstract:

Although it has been suggested that the retinal vasculature is a diffusion-limited aggregation (DLA) fractal, no study has been dedicated to standardizing its fractal analysis. The aims of this project were to standardize a method to estimate the fractal dimensions of the retinal vasculature and to characterize their normal values; to determine whether this estimation depends on skeletonization and on the segmentation and calculation methods; to assess the suitability of the DLA model; and to determine the usefulness of log-log graphs in characterizing the fractality of the vasculature. To achieve these aims, the information, mass-radius and box-counting dimensions of the vasculatures of 20 eyes were compared when the vessels were segmented manually or computationally; the fractal dimensions of the vasculatures of 60 eyes of healthy volunteers were compared with those of 40 DLA models; and the log-log graphs obtained were compared with those of known fractals and of non-fractals. The main results were: the fractal dimensions of the vascular trees were dependent on the segmentation methods and on the dimension calculation methods, but there was no difference between manual segmentation and the scale-space, multithreshold and wavelet computational methods; the means of the information and box-counting dimensions for the arteriolar trees were 1.29, against 1.34 and 1.35 for the venular trees; the dimensions of the DLA models were higher than those of the vessels; and the log-log graphs were straight, but with varying local slopes, both for the vascular trees and for the fractals and non-fractals.
These results lead to the following conclusions: the estimation of the fractal dimensions of the retinal vasculature depends on its skeletonization and on the segmentation and calculation methods; log-log graphs are not suitable as a fractality test; the means of the information and box-counting dimensions for the normal eyes were 1.47 and 1.43, respectively; and the DLA model with optic disc seeding is not sufficient for modeling the retinal vascularization.

Relevance:

20.00%

Publisher:

Abstract:

Tuberculosis is a serious disease, but curable in practically 100% of new cases, provided the principles of modern chemotherapy are followed. Isoniazid (ISN), rifampicin (RIF), pyrazinamide (PYR) and ethambutol hydrochloride (ETA) are considered first-line drugs in the treatment of tuberculosis, combining the highest level of efficacy with an acceptable degree of toxicity. According to USP 33 - NF28 (2010), the chromatographic analysis of 3 of the 4 drugs (ISN, PYR and RIF) takes on average 15 minutes, plus 10 more minutes for the 4th drug (ETA) using a different column and mobile phase mixture, which makes its industrial application unfavorable. Thus, many studies have been carried out to minimize this problem. An alternative is UFLC, which is based on the same principles as HPLC but uses stationary phases with particles smaller than 2 μm. Therefore, this study aimed to develop and validate new analytical methods to determine the four drugs simultaneously by HPLC/DAD and UFLC/DAD. To this end, an analytical screening was carried out, which showed that a gradient of mobile phases A (acetate buffer:methanol 94:6 v/v) and B (acetate buffer:acetonitrile 55:45 v/v) is necessary. After the development and optimization of the method on HPLC and UFLC, with system suitability values within the criteria required for both techniques, the validations began. Standard solutions and tablet test solutions were prepared and injected into the HPLC and UFLC, containing 0.008 mg/mL ISN, 0.043 mg/mL PYR, 0.030 mg/mL ETA and 0.016 mg/mL RIF. The validation of the analytical methods for HPLC and UFLC was carried out by determining specificity/selectivity, analytical curve, linearity, precision, limits of detection and quantification, accuracy and robustness. The methods were adequate for the determination of the 4 drugs separately, without one interfering with the others.
The methods were precise, since, both under day-to-day variation and in repeatability, the values remained within the level required by the regulatory agency. They were linear (R > 0.99), producing results directly proportional to the concentration of the analyte in the sample within the specified range. They were accurate, with coefficients of variation and recovery percentages within the required limits (98 to 102%). The methods showed very low LOD and LOQ values, demonstrating their high sensitivity for the four drugs. The robustness of the methods was evaluated against temperature and flow changes; they proved robust only under the previously established temperature and flow conditions, and abrupt changes may influence the results of the methods.