29 results for Interpolation methods
at the Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
In this work we develop a spline-based method for solving initial value problems involving ordinary differential equations, with emphasis on linear equations. The method can be seen as an alternative to traditional solvers such as Runge-Kutta, and avoids root calculations in the linear time-invariant case. The method is then applied to a central problem of control theory, namely the step response problem for linear ODEs with possibly varying coefficients, where root calculations do not apply. We implemented an efficient algorithm that uses exclusively matrix-vector operations. The working interval (up to the settling time) was determined by computing the least stable mode with a modified power method. Several variants of the method were compared by simulation. For general linear problems on a fine grid, the proposed method compares favorably with the Euler method. In the time-invariant case, where the alternative is root calculation, there are indications that the proposed method is competitive for equations of sufficiently high order.
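The abstract does not detail the modified power method used; the sketch below, under the assumption that the system is written as x' = A x and that the least stable mode is its slowest-decaying eigenvalue, estimates that mode's decay rate by power iteration on the matrix exponential and derives a settling-time estimate from it (all values illustrative):

```python
import numpy as np
from scipy.linalg import expm

def least_stable_rate(A, h=0.01, iters=400, seed=0):
    """Estimate Re(lambda) of the slowest-decaying (least stable) mode of
    x' = A x by power iteration on exp(h*A): for a stable system the
    dominant eigenvalue of exp(h*A) is exp(h*lambda) for the eigenvalue
    lambda whose real part is closest to zero."""
    M = expm(h * A)
    v = np.random.default_rng(seed).standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    log_growth = 0.0
    for _ in range(iters):
        w = M @ v
        n = np.linalg.norm(w)
        log_growth += np.log(n)
        v = w / n
    return log_growth / (iters * h)   # average growth rate ~ Re(lambda)

# Example: damped oscillator x'' + 0.5 x' + 4 x = 0 in companion form.
A = np.array([[0.0, 1.0],
              [-4.0, -0.5]])
rate = least_stable_rate(A)
settling_time = 4.0 / abs(rate)       # ~4 time constants to settle within ~2%
print(rate, settling_time)
```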
Abstract:
Present-day weather forecast models usually cannot provide realistic descriptions of local and, particularly, extreme weather conditions. However, for lead times of a few days, they provide reliable forecasts of the atmospheric circulation that encompasses the subscale processes leading to extremes. Hence, forecasts of extreme events can only be achieved through a combination of dynamical and statistical analysis methods, where a stable and significant statistical model based on prior physical reasoning establishes a posterior statistical-dynamical link between the local extremes and the large-scale circulation. Here we present the development and application of such a statistical model, calibrated on the basis of extreme value theory, in order to derive probabilistic forecasts of extreme local temperature. The downscaling is applied to NCEP/NCAR reanalysis data, in order to derive estimates of daily temperature at weather stations in the Brazilian northeastern region.
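The abstract does not specify the exact extreme value model; as an illustration of the statistical step, the sketch below fits a GEV distribution to annual temperature maxima (synthetic stand-in data, not reanalysis output) and derives an exceedance probability and a return level, the kind of probabilistic output such a calibration produces:

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical daily maximum temperatures (deg C) for one station; in the
# study these would come from NCEP/NCAR reanalysis-driven downscaling.
rng = np.random.default_rng(1)
daily_tmax = 30 + 3 * rng.standard_normal((30, 365))   # 30 years x 365 days

annual_maxima = daily_tmax.max(axis=1)                  # block maxima per year
shape, loc, scale = genextreme.fit(annual_maxima)       # GEV parameters

# Probability that a year's maximum exceeds 40 deg C, and the 10-year return level.
p_exceed_40 = genextreme.sf(40.0, shape, loc=loc, scale=scale)
ret_level_10y = genextreme.isf(1.0 / 10.0, shape, loc=loc, scale=scale)
print(p_exceed_40, ret_level_10y)
```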
Abstract:
Although it has been proposed that the retinal vasculature has a fractal structure, no standardization of the segmentation method or of the fractal dimension calculation method has been established. This study aimed to determine whether the estimation of the fractal dimensions of the retinal vasculature depends on the vascular segmentation methods and on the dimension calculation methods. Methods: Ten retinal photographs were segmented to extract their vascular trees by four computational methods ("multithreshold", "scale-space", "pixel classification" and "ridge based detection"). Their "information", "mass-radius" and "box-counting" fractal dimensions were then calculated and compared with the dimensions of the same vascular trees obtained by manual segmentation (gold standard). Results: The mean fractal dimensions varied across the groups of different segmentation methods, from 1.39 to 1.47 for the box-counting dimension, from 1.47 to 1.52 for the information dimension, and from 1.48 to 1.57 for the mass-radius dimension. The use of different computational methods of vascular segmentation, as well as of different dimension calculation methods, introduced statistically significant differences in the fractal dimension values of the vascular trees. Conclusion: The estimation of the fractal dimensions of the retinal vasculature depended both on the vascular segmentation methods and on the dimension calculation methods used.
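As an illustration of one of the dimension calculation methods compared (box counting), the sketch below estimates the box-counting dimension of a binary vessel mask; the toy input and box sizes are illustrative only:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting dimension of a binary vessel mask:
    count occupied boxes N(s) at several box sizes s and fit
    log N(s) = -D log s + c by least squares."""
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Toy example: a diagonal line, whose dimension should be close to 1.
img = np.zeros((256, 256), dtype=bool)
np.fill_diagonal(img, True)
print(box_counting_dimension(img))
```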
Abstract:
Forecasting is the basis for making strategic, tactical and operational business decisions. In financial economics, several techniques have been used over the past decades to predict the behavior of assets. There are thus several methods to assist in the task of time series forecasting; however, conventional modeling techniques such as statistical models and those based on theoretical mathematical models have produced unsatisfactory predictions, increasing the number of studies on more advanced prediction methods. Among these, Artificial Neural Networks (ANN) are a relatively new and promising method for business forecasting that has attracted much interest in the financial environment and has been used successfully in a wide variety of financial modeling applications, in many cases proving its superiority over ARIMA-GARCH statistical models. In this context, this study aimed to examine whether ANNs are a more appropriate method for predicting the behavior of capital market indices than the traditional methods of time series analysis. For this purpose we developed a quantitative study, based on financial economic indices, and built two supervised-learning feedforward ANN models, whose structures consisted of 20 inputs, 90 neurons in one hidden layer and one output (Ibovespa). These models used backpropagation, a tangent-sigmoid activation function and a linear output function. Since the aim was to analyze the adequacy of Artificial Neural Networks for forecasting the Ibovespa, we chose to perform this analysis by comparing their results with those of a GARCH time series predictive model, developing a GARCH(1,1) model. Once both methods (ANN and GARCH) were applied, we analyzed the results by comparing the forecasts with the historical data and by studying the forecast errors through the MSE, RMSE, MAE, standard deviation, Theil's U and forecast encompassing tests. It was found that the models developed by means of ANNs had lower MSE, RMSE and MAE than the GARCH(1,1) model, and the Theil's U test indicated that the three models have smaller errors than those of a naïve forecast. Although the ANN based on returns had lower values of the precision indicators than the ANN based on prices, the forecast encompassing test rejected the hypothesis that one model is better than the other, indicating that the ANN models have a similar level of accuracy. It was concluded that, for the data series studied, the ANN models provide a more appropriate Ibovespa forecast than the traditional time series models, represented by the GARCH model.
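To illustrate the error measures used in the comparison, the sketch below computes MSE, RMSE, MAE and a common simplified variant of Theil's U2 (the ratio of the model's RMSE to that of a naïve last-value forecast; values below 1 indicate the model beats the naïve forecast). The index values and forecasts are hypothetical:

```python
import numpy as np

def forecast_errors(y_true, y_pred):
    """MSE, RMSE, MAE and a simplified Theil's U2 (model RMSE over the
    RMSE of a naive last-value forecast)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    naive_rmse = np.sqrt(np.mean((y_true[1:] - y_true[:-1]) ** 2))
    theil_u2 = np.sqrt(np.mean(err[1:] ** 2)) / naive_rmse
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "Theil_U2": theil_u2}

# Hypothetical index values and one-step-ahead forecasts.
actual   = [68000, 68500, 67900, 68200, 68800]
forecast = [67900, 68300, 68100, 68150, 68600]
print(forecast_errors(actual, forecast))
```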
Abstract:
Natural ventilation is an efficient bioclimatic strategy, providing thermal comfort, healthiness and passive cooling to the building. However, the disregard for environmental quality, the uncertainties involved in the phenomenon and the popularization of artificial climate systems serve as an excuse for those who neglect the benefits of passive cooling. The unfamiliarity with the concept may be lessened if ventilation is considered at every step of the design, especially in the initial phase, in which decisions bear a great impact on the construction process. The tools available to quantify the impact of design decisions consist basically of air renovation rate calculations or computational fluid simulations, commonly dubbed CFD (Computational Fluid Dynamics), both somewhat removed from design practice and poorly suited to parametric studies. Thus, we chose to verify, through computer simulation, the representativeness of the results of a simplified air renovation rate calculation method, as well as to make it more compatible with the questions relevant to the first phases of the design process. The case object is a model resulting from the recommendations of the Código de Obras de Natal/RN, customized according to NBR 15220. The study has shown the complexity of incorporating a CFD tool into the process and the need for a method capable of generating data at a rate compatible with the flow of ideas that are generated and discarded during the design's development. At the end of the study, we discuss the concessions necessary for carrying out the simulations, the applicability and limitations of both the tools used and the method adopted, as well as the representativeness of the results obtained.
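The abstract mentions simplified air renovation (air change) rate calculations as an alternative to CFD; the sketch below shows one common simplified wind-driven formula (Q = Cv·A·v), not necessarily the method adopted in the study, with illustrative inputs:

```python
def air_changes_per_hour(opening_area_m2, wind_speed_ms, room_volume_m3,
                         effectiveness=0.5):
    """Simplified wind-driven ventilation estimate: airflow Q = Cv * A * v
    (Cv ~ 0.5-0.6 for wind perpendicular to the opening), converted to
    air changes per hour (ACH)."""
    q_m3_per_s = effectiveness * opening_area_m2 * wind_speed_ms
    return 3600.0 * q_m3_per_s / room_volume_m3

# Hypothetical room: 0.5 m2 of effective opening, 1.5 m/s wind, 40 m3 volume.
print(air_changes_per_hour(0.5, 1.5, 40.0))
```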
Abstract:
The assessment of building thermal performance is often carried out using HVAC energy consumption data, when available, or measurements of thermal comfort variables for free-running buildings. Both types of data can be obtained by monitoring or by computer simulation. The assessment based on thermal comfort variables is the most complex because it depends on the determination of the thermal comfort zone. For these reasons, this master's thesis explores methods of building thermal performance assessment using thermal comfort variables simulated with the DesignBuilder software. The main objective is to contribute to the development of methods to support architectural decisions during the design process, as well as energy and sustainability rating systems. The research method consists of selecting thermal comfort methods and modeling them in spreadsheets with output charts developed to streamline the analyses, which are then used to assess the simulation results of low-cost house configurations. The house models consist of a base case, already built, and variations in thermal transmittance, absorptance and shading. The simulation results are assessed with each thermal comfort method in order to identify their sensitivity. The final results show the limitations of the methods, the importance of a method that considers thermal radiation and wind speed, and the contribution of the proposed chart.
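The abstract does not list which thermal comfort methods were selected; as an illustration of the kind of spreadsheet calculation involved, the sketch below applies the ASHRAE 55 adaptive comfort model (one method commonly used for free-running buildings) to hypothetical simulated operative temperatures:

```python
def adaptive_comfort_zone(t_out_running_mean):
    """ASHRAE 55 adaptive model for naturally ventilated buildings:
    neutral temperature and the 80% acceptability band (deg C)."""
    t_neutral = 0.31 * t_out_running_mean + 17.8
    return t_neutral - 3.5, t_neutral, t_neutral + 3.5

def fraction_outside_comfort(operative_temps, t_out_running_mean):
    """Fraction of occupied hours outside the 80% acceptability band,
    a simple thermal performance indicator."""
    low, _, high = adaptive_comfort_zone(t_out_running_mean)
    outside = [t for t in operative_temps if t < low or t > high]
    return len(outside) / len(operative_temps)

# Hypothetical simulated operative temperatures for occupied hours.
temps = [27.1, 28.4, 29.9, 31.2, 30.5, 26.8, 25.9, 32.0]
print(fraction_outside_comfort(temps, t_out_running_mean=27.0))
```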
Abstract:
Untreated effluents that reach surface waters affect aquatic life and humans. This study aimed to evaluate the toxicity of the wastewaters (municipal, industrial and shrimp pond effluents) released into the Jundiaí-Potengi Estuarine Complex, Natal/RN, through chronic quantitative and qualitative toxicity tests using the test organism Mysidopsis juniae, CRUSTACEA, MYSIDACEA (Silva, 1979). For this, a new methodology for observing chronic effects on M. juniae was used (single renewal), based on an existing methodology for another test organism very similar to M. juniae, M. bahia (daily renewal). Toxicity tests of 7 days' duration were used to detect effects on the survival and fecundity of M. juniae. The 50% Lethal Concentration (LC50) was determined by the Trimmed Spearman-Karber method; the 50% Inhibition Concentration (IC50) for fecundity was determined by Linear Interpolation. One-way ANOVA tests (p = 0.05) were used to determine the No Observed Effect Concentration (NOEC) and the Lowest Observed Effect Concentration (LOEC). Effluent flows were measured and the toxic load of the effluents was estimated. Multivariate analyses - Principal Component Analysis (PCA) and Correspondence Analysis (CA) - identified the physico-chemical parameters that best explain the patterns of toxicity found in the survival and fecundity of M. juniae. We verified the feasibility of applying the single-renewal system in chronic tests with M. juniae. Most effluents proved toxic to the survival and fecundity of M. juniae, except for some shrimp pond effluents. The most toxic effluents were ETE Lagoa Aerada (LC50, 6.24%; IC50, 4.82%), ETE Quintas (LC50, 5.85%), Giselda Trigueiro Hospital (LC50, 2.05%), CLAN (LC50, 2.14%) and COTEMINAS (LC50, 38.51%; IC50, 6.94%). The greatest toxic load originated from inefficient, high-flow ETE effluents, textile effluents and CLAN. The organic load was related to the toxic effects of wastewater and hospital effluents on the survival of M. juniae, as were heavy metals, total residual chlorine and phenols. In industrial effluents, a relationship was found between toxicity and organic load, phenols, oils and greases, and benzene. The effects on fecundity were related, in turn, to chlorine and heavy metals. Toxicity tests using other organisms of different trophic levels, as well as sediment toxicity analyses, are recommended to confirm the patterns found with M. juniae. Nevertheless, the results indicate the need for implementation and improvement of treatment systems for the sewage flowing into the Potengi estuary.
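The IC50 by Linear Interpolation can be illustrated as below: the effluent concentration at which mean fecundity falls to 50% of the control is found by interpolating between the two bracketing test concentrations. This is a simplified sketch (the usual procedure, e.g. ICPIN, adds smoothing and bootstrap confidence limits) and the data are hypothetical:

```python
def ic50_linear_interpolation(concentrations, responses):
    """Estimate the IC50 by linear interpolation: the effluent
    concentration at which the mean response (e.g. fecundity) falls
    to 50% of the control response. Assumes responses decrease
    monotonically with concentration."""
    target = 0.5 * responses[0]          # 50% of the control (0% effluent)
    pairs = list(zip(concentrations, responses))
    for (c_lo, r_lo), (c_hi, r_hi) in zip(pairs, pairs[1:]):
        if r_hi <= target <= r_lo:
            return c_lo + (r_lo - target) * (c_hi - c_lo) / (r_lo - r_hi)
    return None                          # target not bracketed by the data

# Hypothetical fecundity (young per female) at increasing effluent concentrations (%).
conc = [0.0, 1.0, 2.5, 5.0, 10.0]
fec  = [18.0, 16.5, 12.0, 7.0, 2.0]
print(ic50_linear_interpolation(conc, fec))   # falls between 2.5% and 5.0%
```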
Abstract:
Chitin is an important structural component of the cell wall of fungi and of the exoskeleton of many invertebrate pests, such as insects and nematodes. In the digestive system of insects it forms a matrix called the peritrophic membrane. One of the most studied protein-carbohydrate interaction models is the one involving chitin-binding proteins. Among the domains already characterized as involved in this interaction are the hevein domain (HD), from Hevea brasiliensis (rubber tree), the R&R consensus domain (R&R), found in cuticular proteins of insects, and the motif called in this study the conglycinin motif (CD), found in the crystallographic structure of β-conglycinin bound to GlcNAc. These three chitin-binding domains were used to determine, in silico, which of them could be involved in the interaction of Canavalia ensiformis and Vigna unguiculata vicilins with chitin, and to associate these results with the WD50 of these vicilins for Callosobruchus maculatus larvae. Comparative modeling was used to build the 3D model of the V. unguiculata vicilin, which was not found in the databases. Using the ClustalW program, the location of these domains in the primary structure of the vicilins was obtained. The R&R and CD domains showed the greatest homology with the vicilin primary sequences and were the target of the interaction studies. Models of the interaction (dockings) of the vicilins with GlcNAc were obtained with the GRAMM program. The results showed that, by in silico analysis, HD is not part of the vicilin structures, confirming the result obtained with the alignment of the primary sequences; the R&R domain, although lacking structural similarity in the vicilins, probably participates in their interaction with GlcNAc; whereas the CD domain participates directly in the interaction of the vicilins with GlcNAc. These in silico results show that the number of amino acids and the types and number of bonds made by the CD motif with GlcNAc seem to be directly associated with the deleterious effect these vicilins have on C. maculatus larvae. This may be a first step in clarifying how vicilins interact with chitin in vivo and exert their toxic effect on insects that possess a peritrophic membrane.
Abstract:
Shrimp farming is one of the activities that contribute most to the growth of global aquaculture. However, this business has suffered significant economic losses due to the onset of viral diseases such as Infectious Myonecrosis (IMN). IMN is already widespread throughout Northeastern Brazil and affects other countries such as Indonesia, Thailand and China. The main symptom of the disease is myonecrosis, which consists of necrosis of the striated muscles of the abdomen and cephalothorax of the shrimp. IMN is caused by the infectious myonecrosis virus (IMNV), a non-enveloped virus which has protrusions along its capsid. The viral genome consists of a single molecule of double-stranded RNA and has two Open Reading Frames (ORFs). ORF1 encodes the major capsid protein (MCP) and a potential RNA-binding protein (RBP). ORF2 encodes a probable RNA-dependent RNA polymerase (RdRp), which places IMNV in the Totiviridae family. Thus, the objective of this research was to study the complete IMNV genome and its encoded proteins in order to develop a system to differentiate virus isolates based on the presence of polymorphisms. The phylogenetic relationship among several totiviruses was investigated and revealed a new group for IMNV within the Totiviridae family. Two new genomes were sequenced, analyzed and compared to two other genomes already deposited in GenBank. The new genomes were more similar to each other than to those already described. Conserved and variable regions of the genome were identified through similarity graphs and alignments using the four IMNV sequences. This analysis allowed the mapping of polymorphic sites and revealed that the most variable region of the genome lies in the first half of ORF1, coinciding with the regions that possibly encode the viral protrusion, while the most stable regions of the genome were found in conserved domains of proteins that interact with RNA. Moreover, secondary structures were predicted for all proteins using various software tools, and protein structural models were calculated using threading and ab initio modeling approaches. From these analyses it was possible to observe that the IMNV proteins have motifs and folds similar to proteins of other totiviruses, and new possible protein functions were proposed. The genome and protein study was essential for the development of a PCR-based detection system able to discriminate the four IMNV isolates based on the presence of polymorphic sites.
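The similarity graphs mentioned can be illustrated with a simple sliding-window identity calculation over two aligned sequences; window and step sizes here are arbitrary and the toy fragments stand in for aligned IMNV genomes:

```python
def sliding_window_identity(seq_a, seq_b, window=200, step=50):
    """Percent identity between two aligned sequences in sliding windows;
    plotting these points gives the kind of similarity graph used to
    locate conserved and variable genome regions."""
    assert len(seq_a) == len(seq_b), "sequences must come from the same alignment"
    points = []
    for start in range(0, len(seq_a) - window + 1, step):
        a = seq_a[start:start + window]
        b = seq_b[start:start + window]
        matches = sum(x == y and x != '-' for x, y in zip(a, b))
        points.append((start, 100.0 * matches / window))
    return points

# Toy aligned fragments (real input would be two aligned IMNV genomes).
s1 = "ATGCGT" * 100
s2 = "ATGCGA" * 100
print(sliding_window_identity(s1, s2, window=60, step=30)[:3])
```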
Abstract:
Tuberculosis is a serious disease, but curable in practically 100% of new cases, provided the principles of modern chemotherapy are followed. Isoniazid (ISN), Rifampicin (RIF), Pyrazinamide (PYR) and Ethambutol Hydrochloride (ETA) are considered first-line drugs in the treatment of tuberculosis, combining the highest level of efficacy with an acceptable degree of toxicity. According to USP 33 - NF28 (2010), the chromatographic analysis of 3 of the 4 drugs (ISN, PYR and RIF) lasts on average 15 minutes, and 10 more minutes are needed for the 4th drug (ETA), using a different column and mobile phase mixture, which makes its industrial application unfavorable. Thus, many studies have been carried out to minimize this problem. An alternative is UFLC, which is based on the same principles as HPLC but uses stationary phases with particles smaller than 2 μm. Therefore, this study aims to develop and validate new analytical methods to determine the four drugs simultaneously by HPLC/DAD and UFLC/DAD. For this, an analytical screening was carried out, which showed that a gradient of mobile phases A (acetate buffer:methanol 94:6 v/v) and B (acetate buffer:acetonitrile 55:45 v/v) is necessary. After development and optimization of the methods in HPLC and UFLC, with system suitability values within the criteria limits required for both techniques, the validations began. Standard solutions and tablet test solutions were prepared and injected into the HPLC and UFLC, containing 0.008 mg/mL ISN, 0.043 mg/mL PYR, 0.030 mg/mL ETA and 0.016 mg/mL RIF. The validation of the analytical methods for HPLC and UFLC comprised the determination of specificity/selectivity, analytical curve, linearity, precision, limits of detection and quantification, accuracy and robustness. The methods were adequate for the determination of the 4 drugs separately, without one interfering with the others. They were precise, since both in the between-day variation and in repeatability the values were within the level required by the regulatory agency. They were linear (R > 0.99), since the methods produced results directly proportional to the concentration of the analyte in the sample within the specified range. They were accurate, since the coefficient of variation and recovery percentage values were within the required limits (98 to 102%). The methods showed very low LOD and LOQ values, demonstrating their high sensitivity for the four drugs. The robustness of the methods was evaluated against temperature and flow changes; the methods proved robust only under the previously established temperature and flow conditions, and abrupt changes may affect the results.
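Limits of detection and quantification from a calibration curve are commonly estimated with the ICH approach (LOD = 3.3σ/S, LOQ = 10σ/S, where σ is the residual standard deviation and S the slope); the sketch below illustrates this with hypothetical calibration data, not the study's own measurements:

```python
import numpy as np

# Hypothetical calibration points around the ISN working concentration.
conc = np.array([0.004, 0.006, 0.008, 0.010, 0.012])   # mg/mL
area = np.array([1020., 1540., 2050., 2570., 3090.])    # detector response

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)            # residual standard deviation (n - 2 dof)

lod = 3.3 * sigma / slope                # limit of detection
loq = 10.0 * sigma / slope               # limit of quantification
r = np.corrcoef(conc, area)[0, 1]        # linearity check (R > 0.99 expected)
print(lod, loq, r)
```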
Abstract:
This is a descriptive study with a qualitative approach, of the action-research type, which aimed to analyze changes in knowledge about contraceptive methods in a group of teenagers attended at the Igapó Family Healthcare Unit, in the city of Natal/RN, after consent and institutional approval by the Ethics Committee of the Federal University of Rio Grande do Norte (Protocol No. 131/07). Sixteen teenagers of both sexes, aged 11 to 16 years, took part in the study. We used two structured questionnaires, one for the initial diagnosis and another during the seven meetings of the focus group, in addition to field notes and transcriptions of the meeting discussions. Data collection was carried out over two months by a team composed of a nurse (the research coordinator), a dentist, a nursing assistant, a community health worker and a nursing student. The quantitative and qualitative data were organized, coded and categorized in a Microsoft Excel spreadsheet, and a thematic analysis of the participants' statements was performed. The results were presented as tables, graphs, photos, drawings and excerpts of speech. The educational strategy developed in the focus group allowed the adolescents to discuss and exchange ideas and opinions on several contraceptive methods, broadening their knowledge of all the contraceptives discussed, especially the natural and surgical ones, which were less mentioned at the beginning of the study. Among the advantages of the contraceptive methods listed by the teenagers, avoiding pregnancy and STDs through the use of the condom as a barrier method stood out. As for the disadvantages most frequently noted, the teenagers highlighted becoming pregnant and acquiring STDs with the misuse of barrier methods, and the fact that hormonal, natural and surgical methods do not prevent STDs. The adolescents showed consistency between the advantages, disadvantages and types of contraceptive methods, showing a broadening of their knowledge. It may be said that, in general, those surveyed had a good understanding of the use of the various contraceptive methods. The study participants evaluated positively all the criteria used to qualify the focus group meetings. The focus group strategy should be encouraged among professionals who work with teenagers, since teenagers prefer to be in groups, a characteristic of adolescence.
Abstract:
This study presents a comparative analysis of methodologies based on weighted factors considered in the selection of areas for the deployment of sanitary landfills, applying the classification methodologies with scoring bands of Gomes, Coelho, Erba & Veronez (2000) and Waquil et al. (2000), that is, the Scoring System used by the Union of Municipalities of Bahia and the Waste Landfill Quality Index (IQR), which are applied in this study to the Massaranduba Sanitary Landfill, located in the municipality of Ceará-Mirim/RN, in northeastern Brazil. The study was conducted in order to classify the methodologies and to support future studies in the environmental management field, with the main goal of proposing suitable methodologies that allow safety and rigor during the selection, deployment and management of sanitary landfills in Brazilian municipalities, helping them in the process of closing their dumps, in accordance with the Brazilian National Plan of Solid Waste. During this investigation we studied the morphological, hydrogeological, environmental and socio-economic characteristics of the site that allow the installation. We also consider it important to mention the need for the Rio Grande do Norte State Secretariat of Environment and Water Resources (SEMARH), the Institute of Sustainable Development and Environment of RN (IDEMA), as well as the Federal and Municipal Governments, to implement public policies for the integrated management of urban solid waste that address environmental preservation and the improvement of the health conditions of the population of Rio Grande do Norte.
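Both methodologies compared rest on weighted scoring of site criteria; the sketch below shows the general form of such a calculation with hypothetical criteria, bands and weights (the actual IQR and scoring-band items differ):

```python
# Illustrative weighted-factor scoring for landfill site evaluation.
# Criteria names, weights and band scores are hypothetical.
criteria = {
    #  name                 (weight, score assigned to the site on a 0-10 band)
    "distance_to_water":    (0.25, 8),
    "soil_permeability":    (0.20, 7),
    "depth_to_water_table": (0.20, 9),
    "distance_to_housing":  (0.20, 6),
    "road_access":          (0.15, 8),
}

def weighted_score(items):
    """Weighted sum of band scores, normalized back to the 0-10 scale."""
    total_weight = sum(w for w, _ in items.values())
    return sum(w * s for w, s in items.values()) / total_weight

print(f"Site suitability score: {weighted_score(criteria):.1f} / 10")
```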
Abstract:
The reconfiguration of a distribution network is a change in its topology, performed by changing the status of its switches, aiming to provide specific operating conditions for the network. It can be performed regardless of any system anomaly. Service restoration is a particular case of reconfiguration and should be performed whenever there is a network failure or whenever one or more sections of a feeder have been taken out of service for maintenance. In such cases, loads supplied through line sections downstream of the portions removed for maintenance may be supplied by closing switches to other feeders. With classical reconfiguration methods, several switching operations may be required beyond those used to perform the service restoration, including switching to feeders in the same substation or to substations that have no direct connection to the faulted feeder. These operations can cause discomfort, losses and dissatisfaction among consumers, as well as a negative reputation for the energy company. The purpose of this thesis is to develop a heuristic for the reconfiguration of a distribution network, upon the occurrence of a failure, performing switching only on the feeders directly involved with the failed segment. The switching applied is related exclusively to isolating the failed sections and buses and to supplying electricity to the islands created by the fault, with a significant reduction in the number of load flow runs, due to the use of sensitivity parameters to estimate voltages and currents on the buses and lines of the feeders directly involved with the failed segment. A comparison between this procedure and classical methods is performed on different test networks from the literature on network reconfiguration.
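A schematic illustration (not the thesis's actual heuristic) of how sensitivity parameters avoid repeated load flows: with a sensitivity matrix S = ∂V/∂P obtained from a base-case load flow, post-transfer bus voltages can be estimated linearly for each candidate switching operation; all numbers below are hypothetical:

```python
import numpy as np

V0 = np.array([1.000, 0.985, 0.978, 0.970])   # p.u. base-case bus voltages
S = np.array([                                 # p.u. voltage per MW injected at each bus
    [0.000, 0.000, 0.000, 0.000],
    [0.002, 0.003, 0.003, 0.003],
    [0.002, 0.003, 0.005, 0.005],
    [0.002, 0.003, 0.005, 0.007],
])
dP = np.array([0.0, 0.0, -1.5, -2.0])          # MW of load picked up from the faulted feeder
                                               # (negative = extra load at the bus)

V_est = V0 + S @ dP                            # estimated voltages without a new load flow
print(V_est)                                   # check against voltage limits, e.g. 0.93-1.05 p.u.
```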
Abstract:
The Electrical Submersible Pump (ESP) has been one of the most appropriate artificial lift solutions for onshore and offshore applications. Typical conditions for this application are adverse temperatures, viscous fluids and gassy environments. The difficulties in equipment maintenance and setup contribute to the increasing costs of oil production in deep water; therefore, optimization through automation can be an excellent approach to reducing costs and failures in subsurface equipment. This work describes a computer simulator for the ESP artificial lift method. The tool reproduces the dynamic behavior of an ESP installation, considering the electric power source and transmission model for the motor, the electric motor model (including thermal calculation), tubing flow simulation, the centrifugal pump behavior with fluid-nature effects, and reservoir requirements. In addition, there is a three-dimensional animation for each ESP subsystem (transformer, motor, pump, seal, gas separator, control unit). This simulator is proposed as an improvement to the monitoring of oil wells, aiming at the maximization of well production. Currently, proprietary simulators are tied to specific equipment manufacturers, so it is not possible to simulate equipment from other manufacturers. The proposed approach supports equipment from diverse manufacturers.
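As a schematic fragment of the kind of steady-state balance such a simulator resolves (pump, tubing and reservoir models meeting at an operating point), the sketch below intersects a hypothetical multi-stage pump head curve with a hypothetical well head requirement; it is not the simulator's actual model and all coefficients are illustrative:

```python
import numpy as np

def pump_head(q, stages=100):
    """Head (m) from a catalogue-like per-stage curve H = a - b*q^2, times stages."""
    return stages * (6.0 - 0.002 * q ** 2)

def required_head(q, static_lift=450.0, k_friction=0.015):
    """Head (m) the well demands: static lift plus quadratic tubing friction."""
    return static_lift + k_friction * q ** 2

q_grid = np.linspace(0.0, 60.0, 601)              # candidate flow rates
balance = pump_head(q_grid) - required_head(q_grid)
q_op = q_grid[np.argmin(np.abs(balance))]         # crude intersection search
print(q_op, pump_head(q_op))                      # operating flow rate and head
```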
Abstract:
This paper presents an evaluative study of the effects of using a machine learning technique on the main features of a self-organizing, multiobjective genetic algorithm (GA). A typical GA can be seen as a search technique usually applied to problems of non-polynomial complexity. Originally, these algorithms were designed to provide methods that seek acceptable solutions to problems where the global optimum is inaccessible or difficult to obtain. At first, GAs considered only one evaluation function and single-objective optimization. Today, however, implementations that consider several optimization objectives simultaneously (multiobjective algorithms) are common, as are implementations that allow many components of the algorithm to change dynamically (self-organizing algorithms). Combinations of GAs with machine learning techniques to improve their performance and usability are also common. In this work, a GA combined with a machine learning technique was analyzed and applied to an antenna design problem. We used a variant of bicubic interpolation, the 2D spline, as the machine learning technique to estimate the behavior of a dynamic fitness function, based on the knowledge obtained from a set of laboratory experiments. This fitness function, also called the evaluation function, is responsible for determining the fitness of a candidate solution (individual) relative to the others in the same population. The algorithm can be applied in many areas, including telecommunications, such as the design of antennas and frequency selective surfaces. In this particular work, the presented algorithm was developed to optimize the design of a microstrip antenna, commonly used in wireless communication systems, for Ultra-Wideband (UWB) applications. The algorithm optimized two variables of the antenna geometry - the length (Ls) and width (Ws) of a slit in the ground plane - with respect to three objectives: radiated signal bandwidth, return loss and central frequency deviation. These two dimensions (Ws and Ls) are used as variables in three different interpolation functions, one spline for each optimization objective, which compose a multiobjective, aggregate fitness function. The final result proposed by the algorithm was compared with the simulation program result and with measurements of a physical prototype of the antenna built in the laboratory. The algorithm was analyzed with respect to its degree of success in relation to four important characteristics of a self-organizing multiobjective GA: performance, flexibility, scalability and accuracy. At the end of the study, an increase in execution time compared to a standard GA was observed, due to the time required by the machine learning process. On the other hand, we noticed a considerable gain in flexibility and in the accuracy of results, and a promising path indicating directions for extending the algorithm to optimization problems with "η" variables.
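A minimal sketch of the surrogate-fitness idea described above: one 2D (bicubic) spline is fitted per objective over a grid of the two geometry variables, and candidates are evaluated through the interpolators and aggregated with weights. The grid data and weights here are stand-ins, not the laboratory measurements:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical grids of the two geometry variables (slit length Ls, width Ws).
Ls = np.linspace(2.0, 10.0, 9)            # mm
Ws = np.linspace(0.5, 4.0, 8)             # mm
rng = np.random.default_rng(0)
bandwidth   = rng.random((9, 8))          # stand-ins for measured/simulated objective values
return_loss = rng.random((9, 8))
freq_dev    = rng.random((9, 8))

# One bicubic spline surrogate per optimization objective.
surrogates = [RectBivariateSpline(Ls, Ws, z, kx=3, ky=3)
              for z in (bandwidth, return_loss, freq_dev)]
weights = (0.4, 0.4, 0.2)                 # aggregation weights (illustrative)

def aggregate_fitness(ls, ws):
    """Weighted sum of the three spline-interpolated objectives for one individual."""
    return sum(w * s(ls, ws)[0, 0] for w, s in zip(weights, surrogates))

print(aggregate_fitness(5.3, 1.7))        # fitness of a candidate (Ls, Ws) pair
```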