13 results for "tamanho ótimo de parcela" (optimal plot size)

at Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

20.00%

Publisher:

Abstract:

An unfolding method for linear intercept and section area distributions was implemented for structures with spherical grains. Although the unfolding routine depends on the grain shape, structures with spheroidal grains can also be treated by it; grains of non-spheroidal shape can be treated only approximately. A software package was developed in two parts: the first calculates the probability matrix, and the second uses this matrix and minimizes the chi-square. The results can be presented with any number of size classes, as required. The probability matrix was determined from linear intercept and section area distributions created by computer simulation; using curve fitting, the probability matrix for spheres of any size could be determined. Two kinds of tests were carried out to verify the efficiency of the technique. The theoretical tests represent ideal cases, and the software was able to recover the proposed grain size distribution exactly. In the second test, a structure was simulated in the computer and images of its slices were used to produce the corresponding linear intercept and section area distributions, which were then unfolded. This test is closer to reality, and the results show deviations from the real size distribution caused by statistical fluctuation. The unfolding of the linear intercept distribution works perfectly, but the unfolding of the section area distribution fails because of a problem in the chi-square minimization: the minimization method uses a matrix inversion routine, and the matrix generated by this procedure cannot be inverted. Another minimization method must be used.
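
A minimal sketch of the unfolding step described above, not the thesis software: assuming the probability matrix P (rows = measured intercept classes, columns = true sphere-size classes) is already known, the true size-class frequencies can be recovered by non-negative least squares, which avoids the explicit matrix inversion the abstract reports as failing. The numbers below are purely illustrative.

```python
# Sketch of the unfolding step: solve P @ f ~ measured with f >= 0 (NNLS),
# sidestepping the ill-conditioned matrix inversion mentioned in the abstract.
import numpy as np
from scipy.optimize import nnls

def unfold(P, measured):
    """Estimate true size-class frequencies f such that P @ f ~ measured, f >= 0."""
    f, residual = nnls(P, measured)
    return f, residual

# Toy example with 3 size classes (illustrative numbers only).
P = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
true_f = np.array([100.0, 50.0, 25.0])
measured = P @ true_f                      # ideal, noise-free measurement
estimated, _ = unfold(P, measured)
print(estimated)                           # recovers ~[100, 50, 25]
```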

Relevance:

20.00%

Publisher:

Abstract:

Malaria is a disease of global distribution, recognized by governments around the world as a serious public health problem, affecting more than 109 countries and territories and endangering more than 3.3 billion people. The economic costs of the disease are also relevant: the African continent alone bears malaria-related costs of about $12 billion annually. Nowadays, in addition to chloroquine, Plasmodium falciparum is resistant to many drugs used in the treatment of malaria, such as amodiaquine, mefloquine, quinine and sulfadoxine-pyrimethamine; resistance of Plasmodium vivax to treatment, although less studied, is also reported. Nature, in general, is responsible for the production of most known organic substances, and the plant kingdom accounts for most of the chemical diversity known and reported in the literature. Most medicinal plants commercialized in Brazil, however, are of exotic origin, which makes the search for endemic medicinal plants not only an evident necessity but also a fascinating subject for academic research and development. This study aimed to: (i) verify the antimalarial activity of the ethanolic and hydroalcoholic extracts of Boerhavia paniculata Rich. and of the acetonic extract of Clethra scabra Pers. in Swiss albino mice infected by Plasmodium berghei NK65, (ii) observe possible combined effects between the course of infection by P. berghei NK65 and the administration of these extracts in Swiss albino mice, and (iii) conduct a preliminary study of the acute toxicity of these extracts in Swiss albino mice. All extracts showed notable pharmacological activity, with inhibition of parasite infection ranging from 22% to 54%. These results suggest that the activities are relevant, although lower than the activity displayed by the positive control group (always above 90%). The overall survival analysis shows a general reduction in survival times for all groups. Necroscopy revealed no change in color, shape, size and/or consistency of the evaluated organs; the only exception was the livers of the animals treated with the hydroalcoholic extract, which presented a slightly congested appearance and a mass roughly 28% higher than that of the other two groups (p = 0.0365). Dunn's post-hoc test identified the 250 mg/kg ethanolic group as the only group differing simultaneously (p < 0.05) from both the positive and the negative control groups. The extracts, notably the ethanolic extract, do have a vestigial antimalarial activity, although well below that observed in the chloroquine-treated groups; nevertheless, the survival times of the animals given the extracts did not increase with this therapy. Both the toxicopharmacological study of the synergism between the clinical course of malaria and the administration of the extracts and the isolated evaluation of toxicity allow us to affirm the absence of toxicity of the extracts at the level of the CNS and ANS, as well as their non-influence on food and water consumption patterns, up to doses of 500 mg/kg. The necroscopic analysis suggests a possible hepatotoxic effect of the hydroalcoholic extract at a dose of 500 mg/kg, and no tissue effect of the ethanolic extract at the same dose. We propose continuing the studies of these extracts, with protocol modifications capable of addressing their pharmacological and toxicological aspects more clearly and objectively.

Relevance:

20.00%

Publisher:

Abstract:

In northeastern Brazil, Octopus insularis is the most commercially important cephalopod species, and its capture has been carried out for several years by the lobster fishermen of the region. In order to obtain information on its reproductive biology, 1108 specimens were collected between November 2009 and September 2011 at the landings and fish markets of Rio do Fogo (RN). For each specimen, the mantle length (CM) and total fresh weight (PT) were recorded. The gonads of 264 males and 295 females were examined macroscopically and histologically to assess sexual maturation and determine reproductive indices. Four reproductive stages were defined for males (immature, maturing, mature and post-mature) and for females (immature, early maturing, late maturing and mature). The average number of eggs recorded in the females' gonads was 93,820, and the average number of spermatophores found in the males' Needham's complex was 39. Spermathecae containing sperm were found in immature females of 69 mm CM. Males and females become sexually mature at 64.41 and 98.50 mm CM, respectively; the weight at sexual maturity was 270 g for males and 630 g for females. The size and weight at sexual maturity found in this study show that males mature at smaller sizes than females. For both sexes, maturation peaks occurred in February and November 2010 and again in September 2011. The periods of maximum reproductive activity lasted about 3 months and seem to occur every 7 to 10 months. Only one spent female (stage V) was found, and the number of mature females was low. Presumably, mature females migrate to deeper waters with complex habitat to protect the offspring, indicating that the snorkeling fishery, with a maximum depth of 15 meters, does not reach this part of the stock. Finally, these results highlight the importance of establishing management strategies for the exploitation of O. insularis different from those used for O. vulgaris, since the two species have distinct biological features.
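
Sizes at sexual maturity of the kind reported above are commonly estimated by fitting a logistic maturity ogive to the proportion of mature individuals per size class. The abstract does not state which estimator was used, so the sketch below is only an illustration of that standard approach; the data and names are hypothetical.

```python
# Illustrative sketch of estimating size at 50% maturity (L50) with a logistic
# ogive. The thesis does not specify its estimator; the data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def ogive(L, L50, k):
    """Logistic proportion-mature curve: P(mature | mantle length L)."""
    return 1.0 / (1.0 + np.exp(-k * (L - L50)))

# Hypothetical data: mid-points of mantle-length classes (mm) and the observed
# proportion of mature individuals in each class.
length_mm = np.array([40, 50, 60, 70, 80, 90, 100, 110])
prop_mature = np.array([0.0, 0.1, 0.4, 0.6, 0.8, 0.9, 1.0, 1.0])

params, _ = curve_fit(ogive, length_mm, prop_mature, p0=[65.0, 0.1])
L50, k = params
print(f"Estimated L50 = {L50:.1f} mm, slope k = {k:.3f}")
```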

Relevance:

20.00%

Publisher:

Abstract:

For a long time, it was believed that tropical and southern-hemisphere bird species have high survival rates. Recent results have begun to contradict this pattern, indicating the need for further studies. Despite the advanced state of the study of bird population parameters, little is known about their variation throughout the year and the factors affecting them. Reproduction, for example, is one factor that may alter adult survival rates, because during this process the breeding pair allocates resources both to maintain itself and to raise its offspring, making it more susceptible to disease and predation. The aim of this study was to estimate the survival and population size of a Central and South American passerine, Tachyphonus rufus (Boddaert, 1783), testing hypotheses about the factors that define these parameters. Data were collected between Nov/2010 and Aug/2012 in a 12 ha plot in a fragment of Atlantic Forest in northeastern Brazil. We used capture-mark-recapture methods and generated estimates with the closed Robust Design model in the program MARK. Multi-state models were built to test some assumptions inherent to the closed Robust Design. The influence of covariates (time, rainfall and reproductive cycle) and the effect of transient individuals were measured. Capture, recapture and apparent survival parameters were defined by the reproductive cycle, while temporary dispersal was influenced by rainfall. The estimates showed higher apparent survival during the non-breeding period (92% ± 1%) than during breeding (40% ± 9%), revealing a cost of reproduction and suggesting a trade-off between surviving and reproducing. The low annual survival observed (34%) did not corroborate the pattern of high rates expected for a tropical bird. The largest population size, 56 individuals, was estimated for Nov/11 and is explained by high recruitment of juveniles, while the lowest, 10 individuals, was observed in May/12, probably as a result of a massive influx of a competitor species. The results of this study add to the growing literature on the life history of Neotropical species. We encourage studies like this one, especially in Brazil, where such information is scarce, and suggest that covariates related to habitat quality and environmental change should also be tested, so that increasingly reliable models can be generated.
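
Apparent survival from capture-mark-recapture data is estimated by maximum likelihood; the thesis used the Robust Design in program MARK, which is considerably richer than the toy constant-parameter Cormack-Jolly-Seber (CJS) sketch below. The sketch only illustrates how apparent survival (phi) and recapture probability (p) can be fitted to encounter histories; the data are hypothetical.

```python
# Toy Cormack-Jolly-Seber fit with constant apparent survival and recapture
# probability; a simplified stand-in for the Robust Design models of the thesis.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, histories):
    """Negative log-likelihood of a constant-parameter CJS model.
    theta holds the logits of apparent survival (phi) and recapture (p)."""
    phi = 1 / (1 + np.exp(-theta[0]))
    p = 1 / (1 + np.exp(-theta[1]))
    T = histories.shape[1]
    chi = np.ones(T)                 # chi[t] = P(never seen again after t)
    for t in range(T - 2, -1, -1):
        chi[t] = (1 - phi) + phi * (1 - p) * chi[t + 1]
    ll = 0.0
    for h in histories:
        seen = np.flatnonzero(h)
        if seen.size == 0:
            continue
        first, last = seen[0], seen[-1]
        for t in range(first + 1, last + 1):
            ll += np.log(phi) + (np.log(p) if h[t] else np.log(1 - p))
        ll += np.log(chi[last])
    return -ll

# Hypothetical encounter histories (rows = individuals, columns = occasions).
histories = np.array([[1, 1, 0, 1],
                      [1, 0, 1, 1],
                      [1, 0, 0, 0],
                      [0, 1, 1, 0],
                      [1, 1, 1, 1]])
fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(histories,))
phi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
print(f"apparent survival ~ {phi_hat:.2f}, recapture probability ~ {p_hat:.2f}")
```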

Relevance:

20.00%

Publisher:

Abstract:

This Master's thesis proposes the application of Data Envelopment Analysis (DEA) to evaluate the performance of sales teams, based on a study of their coverage areas. Data were collected from the company contracted to distribute the products in the state of Ceará. Thirteen sales coverage areas were analyzed, considering first the output-oriented constant returns to scale model (CCR-O), then this model with an assurance region (AR-O-C) and finally the variable returns to scale model with an assurance region (AR-O-V). The first approach proved inappropriate for this study, since it inconveniently generates zero-valued weights, allowing an area under evaluation to obtain the maximal score without producing. Using weight restrictions, through the assurance-region models AR-O-C and AR-O-V, decreasing returns to scale are identified, meaning that the improvement in performance is not proportional to the size of the areas being analyzed. From the data generated by the analysis, a study is carried out to design improvement goals for the inefficient areas. Complementing this study, GDP data for each area were compared with the scores obtained from the AR-O-V analysis. The results presented in this work show that DEA is a useful methodology for assessing sales team performance and may contribute to improving the quality of the management process.
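
A minimal sketch of the output-oriented CCR envelopment model (CCR-O) mentioned above, solved as a linear program with scipy. It omits the assurance-region weight restrictions (AR-O-C / AR-O-V) used in the thesis, and the sales-area data are hypothetical.

```python
# Output-oriented CCR (CCR-O) envelopment model solved with linprog.
import numpy as np
from scipy.optimize import linprog

def ccr_output_efficiency(X, Y, o):
    """Return the CCR-O expansion factor phi for DMU o.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs). Efficiency = 1 / phi."""
    n, m = X.shape
    _, s = Y.shape
    # Decision variables: [phi, lambda_1, ..., lambda_n]; maximize phi.
    c = np.zeros(n + 1)
    c[0] = -1.0
    A_ub, b_ub = [], []
    for i in range(m):                       # sum_j lambda_j * x_ij <= x_io
        A_ub.append(np.concatenate(([0.0], X[:, i])))
        b_ub.append(X[o, i])
    for r in range(s):                       # phi * y_ro - sum_j lambda_j * y_rj <= 0
        A_ub.append(np.concatenate(([Y[o, r]], -Y[:, r])))
        b_ub.append(0.0)
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[0]

# Hypothetical sales areas: inputs (sales reps, visits) and one output (revenue).
X = np.array([[4.0, 120.0], [6.0, 150.0], [5.0, 100.0]])
Y = np.array([[200.0], [220.0], [180.0]])
for o in range(len(X)):
    phi = ccr_output_efficiency(X, Y, o)
    print(f"Area {o}: phi = {phi:.3f}, efficiency = {1 / phi:.3f}")
```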

Relevance:

20.00%

Publisher:

Abstract:

This study presents the development of a simplified network representation model applied to a hybrid load flow for calculating the steady-state voltage variations caused by a wind farm on the power system. It also proposes an optimal load flow able to control the power factor at the connection bus and to minimize losses. The analysis, carried out by the wind producer, is based on technical data supplied by the grid operator. The proposed network simplification therefore requires knowledge only of the data referring to the internal network, that is, the part of the network of interest for the analysis. In this way, it is intended to help systematize the relations between the sector agents. The proposed simplified network model identifies the internal network, the external network and the boundary buses from a vulnerability study of the network, assigning them net power injections and slack-bus models. The model was applied with Newton-Raphson and with a hybrid load flow composed of the Gauss-Seidel Zbus method and the Power Summation method. Finally, the results obtained in a computational environment developed in SCILAB and FORTRAN are presented, with their respective analysis and conclusions, and compared with ANAREDE.
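
The hybrid load flow above combines Gauss-Seidel and power-summation steps; the thesis implementation is in SCILAB/FORTRAN and is not reproduced here. As a point of reference only, the sketch below shows a plain Gauss-Seidel load-flow iteration on the bus admittance matrix of a toy 3-bus system; all numbers are hypothetical.

```python
# Plain Gauss-Seidel load flow for a small PQ-only system (toy data, per unit).
import numpy as np

def gauss_seidel_load_flow(Ybus, S, slack_voltage, tol=1e-8, max_iter=200):
    """Solve bus voltages with bus 0 as slack; S is the injected power per bus."""
    n = Ybus.shape[0]
    V = np.ones(n, dtype=complex)
    V[0] = slack_voltage
    for _ in range(max_iter):
        max_dv = 0.0
        for k in range(1, n):                      # skip the slack bus
            I_k = np.conj(S[k] / V[k])             # current from scheduled power
            sum_yv = Ybus[k, :] @ V - Ybus[k, k] * V[k]
            V_new = (I_k - sum_yv) / Ybus[k, k]
            max_dv = max(max_dv, abs(V_new - V[k]))
            V[k] = V_new
        if max_dv < tol:
            break
    return V

# Hypothetical 3-bus network: two lines with series admittance y = 1/(0.01 + 0.05j).
y = 1 / (0.01 + 0.05j)
Ybus = np.array([[2 * y, -y, -y],
                 [-y, y, 0],
                 [-y, 0, y]], dtype=complex)
S = np.array([0.0, -0.8 - 0.4j, -0.5 - 0.2j])      # loads as negative injections
V = gauss_seidel_load_flow(Ybus, S, slack_voltage=1.05 + 0j)
print(np.abs(V))
```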

Relevance:

20.00%

Publisher:

Abstract:

The usual programs for load flow calculation were, in general, developed for the simulation of electric energy transmission, subtransmission and distribution systems. However, the mathematical methods and algorithms used in these formulations were based mostly on the characteristics of transmission systems, which were the main focus of engineers and researchers. The physical characteristics of transmission systems are quite different from those of distribution systems. In transmission systems, the voltage levels are high and the lines are generally very long; these aspects cause the capacitive and inductive effects of the system to have a considerable influence on the quantities of interest, which is why they must be taken into consideration. Also, in transmission systems the loads have a macro nature, for example cities, neighborhoods or large industries. These loads are generally close to balanced, which reduces the need for a three-phase methodology for load flow calculation. Distribution systems, on the other hand, present different characteristics: the voltage levels are low compared with transmission, which practically cancels the capacitive effects of the lines. The loads are, in this case, transformers whose secondaries supply small consumers, often single-phase, so that the probability of finding an unbalanced circuit is high. The use of a three-phase methodology therefore assumes an important dimension. Besides, equipment such as voltage regulators, which use simultaneously the concepts of phase and line voltage in their operation, requires a three-phase methodology in order to simulate its real behavior. For these reasons, a method for three-phase load flow calculation was first developed within the scope of this work in order to simulate the steady-state behavior of distribution systems. The Power Summation algorithm, already widely tested and approved by researchers and engineers in the simulation of radial electric energy distribution systems, mainly in single-phase representation, was used as the basis for the three-phase method. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between phases; the earth effect is taken into account through the Carson reduction. It is important to point out that, although loads are normally connected to the transformer secondaries, the possibility of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used, allowing the simulation of various configurations according to their real operation. Finally, the representation of switches with current measurement at various points of the feeder was considered; the loads are adjusted during the iterative process so that the current in each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived from the described load flow in order to support subsequent optimization processes. These parameters are obtained by calculating partial derivatives of one variable with respect to another, in general voltages, losses and reactive powers.
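
The Power Summation algorithm mentioned above is a backward/forward sweep for radial feeders: a backward pass accumulates branch power flows (loads plus downstream losses) and a forward pass updates node voltages. The sketch below is a single-phase illustration of that idea only, not the three-phase formulation with mutual coupling and Carson reduction developed in this work; the feeder data are hypothetical.

```python
# Single-phase backward/forward sweep in the spirit of the Power Summation
# algorithm; radial feeder data are hypothetical (per unit).
import numpy as np

# Node 0 is the substation; parent[k] is the upstream node of node k.
parent = [None, 0, 1, 1]                                   # 4-node radial feeder
z = {1: 0.02 + 0.04j, 2: 0.03 + 0.06j, 3: 0.03 + 0.05j}    # branch impedances
load = {1: 0.3 + 0.1j, 2: 0.4 + 0.2j, 3: 0.2 + 0.1j}       # complex loads

V = np.full(4, 1.0 + 0j)                                   # flat start
V[0] = 1.02 + 0j                                           # substation voltage

for _ in range(20):
    # Backward sweep: accumulate power entering each branch (load + downstream + losses).
    S_branch = {k: load[k] for k in z}
    for k in sorted(z, reverse=True):                      # leaves first on this numbering
        loss = z[k] * abs(S_branch[k] / V[k]) ** 2
        if parent[k] in z:
            S_branch[parent[k]] += S_branch[k] + loss
    # Forward sweep: update voltages from the substation toward the leaves.
    for k in sorted(z):
        I = np.conj(S_branch[k] / V[k])
        V[k] = V[parent[k]] - z[k] * I

print(np.abs(V))
```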
After describing the calculation of the sensitivity parameters, the gradient method is presented, using these parameters to optimize an objective function defined for each type of study. The first study concerns the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second concerns the correction of the voltage profile through the installation of capacitor banks or voltage regulators. For loss reduction, the objective function is the sum of the losses in all parts of the system; for voltage-profile correction, it is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of applying the described methods to some feeders are presented, in order to give insight into their performance and accuracy.
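
A minimal sketch of the gradient idea described above, under the assumption that the sensitivity of total losses to the reactive power injected at each candidate node is available; here it is approximated by finite differences on a stand-in quadratic loss model, whereas in the thesis the sensitivities come from the three-phase load flow. All numbers are hypothetical.

```python
# Gradient-descent sketch for capacitor sizing aimed at loss reduction.
import numpy as np

def total_losses(qc):
    """Hypothetical total feeder losses (pu) as a function of the reactive
    power qc injected by capacitor banks at two candidate nodes."""
    q_load = np.array([0.4, 0.3])                 # uncompensated reactive demand
    return 0.05 + 0.2 * np.sum((q_load - qc) ** 2)

def gradient(f, x, h=1e-6):
    """Central finite-difference gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

qc = np.zeros(2)                                  # start with no compensation
for _ in range(100):
    qc -= 0.5 * gradient(total_losses, qc)        # steepest-descent step
    qc = np.clip(qc, 0.0, 0.5)                    # banks are non-negative and bounded

print(qc, total_losses(qc))                       # approaches qc ~ [0.4, 0.3]
```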

Relevance:

20.00%

Publisher:

Abstract:

In this work, Markov chains are the tool used for the modeling and convergence analysis of the genetic algorithm, both in its standard version and in the other versions the genetic algorithm allows. In addition, we intend to compare the performance of the standard version with the fuzzy version, believing that the latter gives the genetic algorithm a greater ability to find a global optimum, characteristic of global optimization algorithms. The choice of this algorithm is due to the fact that, over the past thirty years, it has become one of the most important tools used to solve optimization problems. It is also due to its effectiveness in finding a good-quality solution, given that a good-quality solution is acceptable when no other algorithm is able to obtain the optimal solution for many of these problems. However, the algorithm's behavior depends not only on how the problem is represented but also on how some of its operators are defined, ranging from the standard version, in which the parameters are kept fixed, to versions with variable parameters. Therefore, achieving good performance with this algorithm requires an adequate criterion for choosing its parameters, especially the mutation rate, the crossover rate and even the population size. It is important to remember that, in implementations in which the parameters are kept fixed throughout the execution, modeling the algorithm by a Markov chain results in a homogeneous chain, whereas when the parameters are allowed to vary during the execution, the Markov chain that models the algorithm becomes non-homogeneous. In an attempt to improve the algorithm's performance, some studies have tried to set the parameters through strategies that capture the intrinsic characteristics of the problem. These characteristics are extracted from the current state of the execution, in order to identify and preserve patterns related to good-quality solutions while discarding patterns of low quality. Strategies for feature extraction can use either precise techniques or fuzzy techniques, the latter implemented through a fuzzy controller. A Markov chain is used for the modeling and convergence analysis of the algorithm, both in its standard version and in the others. In order to evaluate the performance of the non-homogeneous algorithm, tests are applied comparing the standard genetic algorithm with the fuzzy genetic algorithm, whose mutation rate is adjusted by a fuzzy controller. To this end, optimization problems whose number of solutions grows exponentially with the number of variables are chosen.
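
A minimal sketch of the comparison described above: a genetic algorithm with a fixed mutation rate versus one whose mutation rate is adapted from population diversity, a crude stand-in for the thesis's fuzzy controller (whose rule base is not reproduced here). The test problem, parameters and adaptation rule are illustrative only.

```python
# Fixed-mutation GA versus a GA with a diversity-driven adaptive mutation rate.
import numpy as np

rng = np.random.default_rng(0)

def fitness(pop):
    return pop.sum(axis=1)                        # OneMax-style toy objective

def adapt_mutation(pop, low=0.005, high=0.05):
    """Raise the mutation rate when the population has lost diversity."""
    diversity = np.mean(np.std(pop, axis=0))      # in [0, 0.5] for binary genes
    return high - (high - low) * (diversity / 0.5)

def run_ga(n_bits=60, pop_size=40, generations=200, adaptive=False, pm=0.01):
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(generations):
        f = fitness(pop)
        # Binary tournament selection.
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(f[idx[:, 0]] >= f[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # One-point crossover on consecutive pairs.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_bits)
            children[i, cut:] = parents[i + 1, cut:]
            children[i + 1, cut:] = parents[i, cut:]
        # Bit-flip mutation, with the rate adapted if requested.
        rate = adapt_mutation(children) if adaptive else pm
        mask = rng.random(children.shape) < rate
        children[mask] ^= 1
        pop = children
    return fitness(pop).max()

print("fixed mutation rate:", run_ga(adaptive=False))
print("adaptive mutation rate:", run_ga(adaptive=True))
```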

Relevance:

20.00%

Publisher:

Abstract:

Hard metals are composites developed in 1923 by Karl Schröter, widely used because of their high hardness, wear resistance and toughness. They are composed of a brittle phase (WC) and a ductile phase (Co). The mechanical properties of hard metals depend strongly on the microstructure of the WC-Co and are additionally affected by the microstructure of the WC powders before sintering. An important feature is that toughness and hardness increase simultaneously as the WC is refined; therefore, the development of nanostructured WC-Co hard metals has been extensively studied. There are many methods of manufacturing WC-Co hard metals, including the spray conversion process, co-precipitation, the displacement reaction process, mechanochemical synthesis and high-energy ball milling. High-energy ball milling is a simple and efficient way of producing fine powders with nanostructure. In this process, the continuous impacts on the powders promote pronounced changes: the brittle phase is refined down to the nanometric scale and embedded in the ductile matrix, while the ductile phase is deformed, re-welded and hardened. The goal of this work was to investigate the effects of high-energy milling time on the microstructural changes of the WC-Co particulate composite, particularly the refinement of the crystallite size and the lattice strain. The starting powders were WC (average particle size D50 = 0.87 μm), supplied by Wolfram Bergbau- und Hütten GmbH, and Co (average particle size D50 = 0.93 μm), supplied by H.C. Starck. A mixture of 90% WC and 10% Co was milled in a planetary ball mill for 2, 10, 20, 50, 70, 100 and 150 hours, with a ball-to-powder ratio of 15:1 at 400 rpm. The starting powders and the milled particulate composite samples were characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM) to identify phases and morphology. The crystallite size and lattice strain were measured by the Rietveld method, which allowed more precise information to be obtained about the influence of each factor on the microstructure. The results show that high-energy milling is an efficient process for manufacturing the WC-Co composite, and that the milling time has a great influence on the microstructure of the final particles, crushing the WC finely to nanometric order and dispersing it in the Co particles.
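
Crystallite size and lattice strain in this work were obtained from a Rietveld refinement of the XRD patterns, which is not reproduced here. As a simpler related illustration only, the sketch below estimates both quantities from peak broadening with a Williamson-Hall plot; the peak positions, widths and shape factor are hypothetical.

```python
# Williamson-Hall estimate of crystallite size and lattice strain from XRD peak
# broadening (an illustration; the thesis uses Rietveld refinement instead).
import numpy as np

wavelength = 1.5406e-10          # Cu K-alpha, in meters
K = 0.9                          # Scherrer shape factor (assumed)

# Hypothetical WC peaks: 2-theta (deg) and FWHM (deg), already corrected for
# instrumental broadening.
two_theta_deg = np.array([31.5, 35.6, 48.3, 64.0, 73.1])
fwhm_deg = np.array([0.45, 0.48, 0.55, 0.68, 0.75])

theta = np.deg2rad(two_theta_deg) / 2
beta = np.deg2rad(fwhm_deg)

# Williamson-Hall relation: beta*cos(theta) = K*lambda/D + 4*strain*sin(theta)
x = 4 * np.sin(theta)
y = beta * np.cos(theta)
slope, intercept = np.polyfit(x, y, 1)

crystallite_size_nm = K * wavelength / intercept * 1e9
print(f"crystallite size ~ {crystallite_size_nm:.1f} nm, lattice strain ~ {slope:.4f}")
```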

Relevance:

20.00%

Publisher:

Abstract:

In the northeastern Brazilian semiarid region, the seasonality of the temporal distribution of precipitation, high-intensity storm events and inadequate management of the native vegetation can promote soil erosion. Vegetation removal exposes the soil surface, reduces the soil water storage capacity and can trigger degradation processes. In this context, this study aims to analyze water and soil erosion processes on a 250 m² undisturbed experimental plot with native vegetation and a 2.5% slope, using monitoring data from 2006 and 2007. The site was instrumented to monitor rainfall, overland flow and erosion by means of a 5 m³ tank downstream of the plot. Soil erosion was monitored by collecting the transported sediment and organic matter after each event. Field infiltration experiments were carried out at 16 points randomly distributed within the plot, using a constant-head infiltrometer during the dry and rainy seasons, respectively. The infiltration data revealed high spatial and temporal variability. At the beginning of the rainy period, 77% of the events had a runoff coefficient of less than 0.05. As the rainy season progressed, the increase in soil water produced the germination of annual species. High-intensity storms resulted in runoff coefficients between 0.33 and 0.42. Once the annual species were established, approximately 39% of the events produced no runoff, reflecting an increase in the soil water retention capacity caused by the vegetation. A gradual reduction of runoff during the rainy season emphasizes the effect of the increase in vegetation density. The observed soil erosion data allowed fitting an empirical relationship between soil loss and precipitation depth, which was used to analyze the impact of the plot installation on soil erosion. The observed soil loss in 2006 and 2007 was 230 kg/ha and 54 kg/ha, respectively.
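
A minimal sketch of fitting an empirical soil loss versus precipitation depth relationship of the kind mentioned above. The abstract does not give the functional form used, so a power law is assumed here, and the per-event data are hypothetical.

```python
# Sketch of fitting an empirical soil-loss vs. precipitation-depth relation.
# A power law A = a * P**b is assumed; the event data below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def power_law(P, a, b):
    """Soil loss per event (kg/ha) as a function of precipitation depth P (mm)."""
    return a * P ** b

# Hypothetical per-event observations.
precip_mm = np.array([12, 18, 25, 33, 41, 55, 68])
soil_loss = np.array([0.8, 1.9, 3.5, 6.2, 9.0, 15.5, 24.0])

(a, b), _ = curve_fit(power_law, precip_mm, soil_loss, p0=[0.01, 1.5])
print(f"A = {a:.4f} * P^{b:.2f}")
print("predicted loss for a 50 mm event:", power_law(50, a, b))
```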

Relevance:

20.00%

Publisher:

Abstract:

Following the study of Andrade et al. (2009) on regular square lattices, here we investigate the problem of optimal path cracks (OPC) in complex networks. In this problem, each site is assigned a given energy. The optimal path is defined as the path crossing the system, among all possible paths, with the minimum cost, namely the sum of the energies along the path. Once the optimal path is determined, at each step its site with the highest energy is blocked, and a new optimal path is calculated. This procedure is repeated until the set of blocked sites forms a macroscopic fracture connecting the opposite sides of the system. The method is applied to a lattice of size L, and the density of removed sites is computed. As observed in the work of Andrade et al. (2009), the fractured system studied here also presents different behaviors depending on the level of disorder, namely weak, moderate and strong disorder intensities. In the regimes of weak and moderate disorder, the density of removed sites does not depend on the size L in the case of regular lattices, whereas in the regime of strong disorder the density becomes substantially dependent on L. We performed the same type of study for complex networks, in which each new site is connected to m previous ones. As in the previous work, we observe that the density of removed sites presents a similar behavior. Moreover, a new result is obtained: we analyze the dependence of the disorder behavior on the attachment parameter m.
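
A minimal sketch of the optimal path crack procedure described above, on a Barabási-Albert network grown with attachment parameter m. The node energies, the conversion of node energies into edge weights, and the use of two fixed terminal nodes as stand-ins for "opposite sides" are illustrative assumptions, not the exact protocol of the thesis.

```python
# Optimal-path-crack sketch on a Barabasi-Albert network.
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
N, m = 200, 3
G = nx.barabasi_albert_graph(N, m, seed=1)
energy = {v: rng.random() for v in G.nodes}      # weak disorder: uniform energies

# Encode node energies as edge weights so shortest_path minimizes the summed energy.
for u, v in G.edges:
    G[u][v]["weight"] = 0.5 * (energy[u] + energy[v])

source, target = 0, N - 1                        # stand-ins for "opposite sides"
removed = 0
while nx.has_path(G, source, target):
    path = nx.shortest_path(G, source, target, weight="weight")
    interior = path[1:-1]
    if not interior:                             # only a direct edge remains
        break
    # Block the interior site with the highest energy along the current optimal path.
    blocked = max(interior, key=lambda v: energy[v])
    G.remove_node(blocked)
    removed += 1

print("density of removed sites:", removed / N)
```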

Relevance:

20.00%

Publisher:

Abstract:

Seismic reflection is used on a large scale in oil exploration. In marine acquisition, the high impedance contrast at the water/air interface generates multiple reflection events. Such multiples can mask primary events; thus, from the interpretation viewpoint, it is necessary to attenuate them. In this manuscript we compare two methods of multiple attenuation: predictive multichannel deconvolution (DPM) and F-K filtering (FKF). DPM is based on the periodicity of the multiples, while FKF is based on the separation of multiples and primaries in the F-K domain. DPM and FKF were applied to common-offset and CDP gathers, respectively. DPM is quite sensitive to the correct identification of the period and the filter length, while FKF is quite sensitive to an adequate choice of velocity for separating multiples and primaries in the F-K domain. DPM is designed to act on a specific event, so, when the parameters are well chosen, it is very efficient in removing the specified multiple; it can therefore be optimized by applying it several times, each time with a different parameterization. A deficiency of DPM occurs when a multiple is superposed on a primary event: in this situation, DPM can also attenuate the primary. On the other hand, FKF presents almost the same performance for all multiples located in the same sector of the F-K domain. The two methods can be combined in order to take advantage of their respective strengths: DPM is applied first, focused on the sea-bed multiples, and FKF is then applied to attenuate the remaining multiples.
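
A minimal sketch of single-trace predictive deconvolution, the basic operation that DPM generalizes to several channels: a prediction filter with a lag equal to the multiple period is designed from the trace autocorrelation via the Wiener normal equations, and the predictable (multiple) part is subtracted from the trace. The synthetic trace, lag and filter length are illustrative; the multichannel formulation of the manuscript is not reproduced.

```python
# Single-trace predictive deconvolution sketch (illustration of the principle
# behind DPM; synthetic data and parameters are hypothetical).
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, lag, filt_len, prewhiten=0.01):
    """Return the trace with its predictable part (period = lag samples) removed."""
    full = np.correlate(trace, trace, mode="full")
    r = full[len(trace) - 1:]                    # autocorrelation, lags >= 0
    r0 = r[:filt_len].copy()
    r0[0] *= 1.0 + prewhiten                     # stabilize the normal equations
    g = r[lag:lag + filt_len]
    f = solve_toeplitz(r0, g)                    # Wiener prediction filter
    predicted = np.zeros_like(trace)
    for k, fk in enumerate(f):
        shift = lag + k
        predicted[shift:] += fk * trace[:len(trace) - shift]
    return trace - predicted

# Synthetic trace: a primary at sample 60 plus a decaying sea-bed reverberation
# repeating every 50 samples.
n, period = 400, 50
trace = np.zeros(n)
trace[60] = 1.0
for i in range(1, 7):
    trace[60 + i * period] = (-0.6) ** i
out = predictive_decon(trace, lag=period, filt_len=20)
print("multiple energy before/after:",
      np.sum(trace[100:] ** 2).round(3), np.sum(out[100:] ** 2).round(3))
```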
