550 results for "Limiar de detectabilidade" (detectability threshold)


Relevance:

10.00%

Publisher:

Abstract:

The objectives of this study were to estimate genetic parameters for the categorical traits muscularity, physical structure, breed characteristics, conformation, navel, pigmentation and sacrum, and to predict breeding values using Bayesian statistics under a threshold animal model, considering different ages of Nelore cattle. The visual score records were obtained from 2000 to 2005 on cattle from 13 farms participating in the Nelore Brasil Program. In the two-trait analyses, 500,000 to 1,100,000 cycles were needed to reach convergence of the Gibbs chain. The burn-in and the sampling interval were 100,000 and 1,000 cycles, respectively. The visual score traits evaluated at 8 and 22 months of age showed moderate heritability estimates, indicating that a rapid response to direct selection is possible, and they should therefore be incorporated into breeding programs as selection criteria. The estimates of genetic correlations between muscularity, physical structure and conformation also indicate that direct selection for one of these traits will bring genetic progress to the others. It is recommended that visual scores be used as selection criteria in at least two phases of the animal's life, at weaning and as a yearling.
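
The chain settings above (burn-in and sampling interval) determine which Gibbs samples enter the posterior summaries. The Python sketch below illustrates only that discard-and-thin step, using a simulated stand-in chain rather than the study's actual threshold-model sampler; the constants are the ones quoted in the abstract.

    import numpy as np

    # Settings quoted in the abstract: up to 1,100,000 Gibbs cycles,
    # burn-in of 100,000 cycles and a sampling interval of 1,000.
    N_CYCLES, BURN_IN, THIN = 1_100_000, 100_000, 1_000

    rng = np.random.default_rng(42)

    # Stand-in for the sampler output: a slowly mixing AR(1) chain of
    # heritability values around an arbitrary level of 0.30.
    chain = np.empty(N_CYCLES)
    chain[0] = 0.30
    for t in range(1, N_CYCLES):
        chain[t] = 0.99 * chain[t - 1] + 0.01 * 0.30 + rng.normal(0.0, 0.005)

    # Discard the burn-in, then keep every THIN-th sample.
    posterior = chain[BURN_IN::THIN]
    print(f"{posterior.size} retained samples, mean h2 = {posterior.mean():.3f}")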

Relevance:

10.00%

Publisher:

Abstract:

The objective in the facility location problem with limited distances is to minimize the sum of distance functions from the facility to the customers, but with a limit on each distance, after which the corresponding function becomes constant. The problem has applications in situations where the service provided by the facility is insensitive beyond a given threshold distance (e.g., fire station location). In this work, we propose a global optimization algorithm for the case in which there are lower and upper limits on the number of customers that can be served.
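
The objective just described can be written as f(x) = sum_i min(d(x, a_i), lambda_i): each customer contributes its distance to the facility, capped at its own limit. The sketch below merely evaluates this objective for a planar instance; the function name and toy data are illustrative and not taken from the paper, whose contribution is the global optimization algorithm itself.

    import numpy as np

    def limited_distance_objective(facility, customers, limits):
        """Sum of limited distances: customer i contributes
        min(d(facility, customer_i), limit_i), so its cost stops
        growing beyond the threshold distance."""
        d = np.linalg.norm(customers - facility, axis=1)  # Euclidean distances
        return np.minimum(d, limits).sum()

    # Toy usage: five customers in the plane, common limit of 2.0.
    customers = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 1.0],
                          [8.0, 8.0], [2.0, 2.0]])
    limits = np.full(len(customers), 2.0)
    print(limited_distance_objective(np.array([1.5, 1.5]), customers, limits))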

Relevance:

10.00%

Publisher:

Abstract:

Rising healthcare costs are a central topic for supplementary health companies in Brazil. In 2011, these expenses consumed more than 80% of the monthly health insurance premiums in Brazil. Once administrative costs are considered, the companies operating in this market work, on average, at the threshold between profit and loss. This paper presents the results of an investigation into the healthcare costs of a health plan company in Brazil, based on the KDD process and exploratory data mining. A variety of results is presented, such as data summarization, providing compact descriptions of the data and revealing common features and intrinsic observations. Among the key findings, it was observed that a small portion of the population is responsible for most of the resources devoted to healthcare.
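
The concentration finding can be checked with a few lines of code: sort per-member costs and measure the share of total spending attributable to the most expensive members. The sketch below uses simulated lognormal costs as a stand-in for the company's claims data; the 10% cutoff is an illustrative choice.

    import numpy as np

    # Simulated per-member annual costs with a heavy right tail,
    # qualitatively similar to health claims data.
    rng = np.random.default_rng(0)
    costs = rng.lognormal(mean=6.0, sigma=1.5, size=10_000)

    # Share of total cost generated by the top 10% of members.
    costs_sorted = np.sort(costs)[::-1]
    top10 = int(0.10 * costs.size)
    share = costs_sorted[:top10].sum() / costs.sum()
    print(f"Top 10% of members account for {share:.0%} of total cost")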

Relevance:

10.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

10.00%

Publisher:

Abstract:

In this work we present a new clustering method that groups the points of a data set into classes. The method is based on an algorithm that links auxiliary clusters obtained with traditional vector quantization techniques. We describe approaches developed during this work that are based on measures of distance or dissimilarity (divergence) between the auxiliary clusters. The new method requires only two pieces of a priori information: the number of auxiliary clusters Na and a threshold distance dt used to decide whether or not to link auxiliary clusters. The number of classes can be found automatically by the method, based on the chosen threshold distance dt, or it can be given as additional information to help in the choice of the correct threshold. Several analyses are carried out and the results are compared with traditional clustering methods. Different dissimilarity metrics are analyzed, and a new one based on the concept of negentropy is proposed. Besides grouping the points of a set into classes, a method is proposed for statistically modeling the classes, aiming at an expression for the probability that a point belongs to one of the classes. Experiments with several values of Na and dt are performed on test sets, and the results are analyzed to study the robustness of the method and to devise heuristics for the choice of the correct threshold. Aspects of information theory applied to the calculation of the divergences are also explored, specifically the different measures of information and divergence based on the Rényi entropy. The results obtained with the different metrics are compared and discussed. The work also includes an appendix presenting real applications of the proposed method.
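
A minimal sketch of the two-stage idea follows, with assumed details: k-means stands in for the vector quantization step, and plain centroid distance stands in for the dissimilarity measures studied in the work (which also considers divergence-based metrics such as the negentropy one). Auxiliary clusters whose centroids lie closer than dt are linked, and each connected group becomes one final class.

    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.spatial.distance import cdist
    from scipy.sparse.csgraph import connected_components

    def link_auxiliary_clusters(X, Na=20, dt=1.0, seed=0):
        # Stage 1: vector quantization into Na auxiliary clusters.
        km = KMeans(n_clusters=Na, n_init=10, random_state=seed).fit(X)
        centroids = km.cluster_centers_
        # Stage 2: link auxiliary clusters whose centroids are closer
        # than dt; connected groups become the final classes.
        adj = cdist(centroids, centroids) < dt
        n_classes, centroid_class = connected_components(adj, directed=False)
        return n_classes, centroid_class[km.labels_]

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (200, 2)), rng.normal(3, 0.3, (200, 2))])
    n_classes, labels = link_auxiliary_clusters(X, Na=10, dt=0.8)
    print(n_classes, "classes found")  # the two Gaussian blobs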

Relevance:

10.00%

Publisher:

Abstract:

Image compression consists in representing an image with a small amount of data without loss of visual quality. Compression is important when large images are used, for example satellite images. Full-color digital images typically use 24 bits to specify the color of each pixel, with 8 bits for each of the primary components red, green and blue (RGB). Compressing an image with three or more bands (multispectral) is fundamental to reducing transmission, processing and storage times; many applications depend on such images, including medical imaging, satellite imaging and sensing. In this work a new color image compression method is proposed, based on a measure of the information in each band. The technique is called Self-Adaptive Compression (SAC): each band of the image is compressed with a different threshold so as to preserve information with the best result. SAC applies strong compression to highly redundant bands, that is, those carrying less information, and soft compression to bands with a larger amount of information. Two image transforms are used in the technique: the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA). The first step converts the data into uncorrelated bands with PCA; the DCT is then applied to each band. Loss is introduced when a threshold discards coefficients. This threshold is calculated from two elements: the PCA result and a user parameter that defines the compression rate. The system produces three different thresholds, one for each band of the image, proportional to its amount of information. For image reconstruction, the inverse DCT and inverse PCA are applied. SAC was compared with the JPEG (Joint Photographic Experts Group) standard and with YIQ compression, and better results were obtained in terms of mean squared error (MSE). Tests showed that SAC achieves better quality at strong compression rates, with two advantages: (a) being adaptive, it is sensitive to the image type and presents good results for diverse kinds of images (synthetic, landscapes, people, etc.), and (b) it needs only one user parameter, so little human intervention is required.
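
The pipeline can be sketched end to end in a few lines. The version below is a minimal illustration under assumed details (the exact threshold rule is the thesis's own; here a simple variance-based scaling plays its role): PCA decorrelates the bands, a 2-D DCT is applied per principal band, coefficients below a per-band threshold are zeroed, and both transforms are inverted.

    import numpy as np
    from scipy.fft import dctn, idctn

    def sac_sketch(img, user_rate=0.02):
        h, w, nb = img.shape
        X = img.reshape(-1, nb).astype(float)
        mean = X.mean(axis=0)
        # PCA via eigendecomposition of the band covariance matrix.
        evals, evecs = np.linalg.eigh(np.cov(X - mean, rowvar=False))
        bands = (X - mean) @ evecs  # decorrelated bands
        out = np.empty_like(bands)
        for k in range(nb):
            coef = dctn(bands[:, k].reshape(h, w), norm="ortho")
            # More redundant (lower-variance) band -> larger threshold.
            thr = user_rate * np.abs(coef).max() \
                * (evals.max() / (evals[k] + 1e-12)) ** 0.5
            coef[np.abs(coef) < thr] = 0.0
            out[:, k] = idctn(coef, norm="ortho").ravel()
        return (out @ evecs.T + mean).reshape(h, w, nb)

    img = np.random.rand(64, 64, 3)  # stand-in RGB image
    rec = sac_sketch(img)
    print("MSE:", np.mean((img - rec) ** 2))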

Relevance:

10.00%

Publisher:

Abstract:

Among the main challenges in industrial beer production is supplying the market at the lowest cost and with high quality, in order to meet the expectations of customers and consumers. The fermentation stage represents approximately 70% of the total time necessary for beer production and requires strict process controls to keep it from becoming a bottleneck. This stage is responsible for the formation of a series of by-products that compose the aroma/bouquet of beer; some of these by-products, if produced in larger quantities, confer unpleasant taste and odor on the final product. Among the by-products formed during fermentation, total vicinal diketones are the main concern, since they limit transfer of the product to the subsequent steps, besides having a low perception threshold for the consumer and giving undesirable taste and odor. Given the unstable quality of the main raw materials and of the process controls during fermentation, developing alternative forms of beer production without impacting total fermentation time and final product quality is a great challenge for breweries. In this work, a prior acidification of the yeast slurry was carried out with food-grade phosphoric acid, reducing the yeast pH from about 5.30 to 2.20 and altering its character from flocculent to powdery during fermentation. A six-fold increase was observed in the number of yeast cells in suspension in the second fermentation stage compared with fermentations using yeast without prior acidification. By altering two input variables, the temperature curve and cell multiplication, with the goal of minimizing the maximum diketone values detected in the fermenter tank, the peak of formed diacetyl was reduced, which in turn contributed to reducing fermentation time and total process time. Several experiments were performed with these process changes to verify their influence on total fermentation time and on the total vicinal diketone concentration at the end of fermentation. The best production result was a total fermentation time of 151 hours with a total vicinal diketone concentration of 0.08 ppm. The yeast in suspension in the second phase of fermentation increased from 2.45 × 10^6 to 16.38 × 10^6 cells/mL, which is key to greater efficiency in reducing the total vicinal diketones in the medium, confirming that prior yeast acidification, together with control of temperature and yeast cell multiplication during fermentation, enhances diketone reduction and consequently shortens total fermentation time while keeping the diketone concentration below the specified limit (max. 0.10 ppm).

Relevance:

10.00%

Publisher:

Abstract:

Expanded Bed Adsorption (EBA) is an integrative process that combines concepts of chromatography and fluidization of solids. The many parameters involved and their synergistic effects complicate the optimization of the process. Fortunately, some mathematical tools have been developed to guide the investigation of the EBA system. In this work, the application of experimental design, phenomenological modeling and artificial neural networks (ANN) to understanding chitosanase adsorption on the ion exchange resin Streamline® DEAE was investigated. The strain Paenibacillus ehimensis NRRL B-23118 was used for chitosanase production. EBA experiments were carried out in a column of 2.6 cm inner diameter and 30.0 cm height coupled to a peristaltic pump, with a 3.0 cm high distributor of glass beads at the bottom. Residence time distribution (RTD) assays revealed a high degree of mixing; however, the Richardson-Zaki coefficients showed that the column was at the threshold of stability. Isotherm models fitted the adsorption equilibrium data in the presence of lyotropic salts. The experimental design results indicated that ionic strength and superficial velocity are important for the recovery and purity of the chitosanases. The molecular masses of the two chitosanases were approximately 23 kDa and 52 kDa, as estimated by SDS-PAGE. The phenomenological modeling aimed to describe the operations in batch and column chromatography; the simulations were performed in Microsoft Visual Studio. The kinetic rate constant model fitted the kinetic curves efficiently at initial enzyme activities of 0.232, 0.142 and 0.079 UA/mL. The simulated breakthrough curves showed some differences from the experimental data, especially regarding the slope. Sensitivity tests of the model with respect to superficial velocity, axial dispersion and initial concentration agreed with the literature. The neural network was built in MATLAB with the Neural Network Toolbox, and cross-validation was used to improve its generalization ability. The ANN parameters were tuned to the configurations 6-6 (enzyme activity) and 9-6 (total protein), with the tansig transfer function and the Levenberg-Marquardt training algorithm. The neural network simulations, covering all steps of the cycle, showed good agreement with the experimental data, with a correlation coefficient of approximately 0.974. The effects of the input variables on the profiles of the loading, washing and elution stages were consistent with the literature.
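
As an illustration of the ANN component, the sketch below reproduces the general setup in Python (the original work used MATLAB's Neural Network Toolbox; scikit-learn's tanh activation stands in for tansig, and the L-BFGS solver for Levenberg-Marquardt, which scikit-learn does not provide). The 6-6 configuration is read here as two hidden layers of six neurons, and the data are simulated stand-ins.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    X = rng.uniform(size=(120, 4))  # stand-in process inputs
    y = X @ np.array([0.5, -0.2, 0.8, 0.1]) + 0.05 * rng.normal(size=120)

    # 6-6 architecture, tanh (tansig-like) units, cross-validation
    # to gauge generalization, as described in the abstract.
    ann = MLPRegressor(hidden_layer_sizes=(6, 6), activation="tanh",
                       solver="lbfgs", max_iter=5000, random_state=7)
    scores = cross_val_score(ann, X, y, cv=5, scoring="r2")
    print(f"cross-validated R^2: {scores.mean():.3f}")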

Relevance:

10.00%

Publisher:

Abstract:

Few studies make it possible to verify which physiological responses are associated with performance in a sample of national elite cyclists. Therefore, the aim of the present study was to determine different aerobic physiological indices and relate them to performance in 4 and 20 km time trials in high-level cyclists. The sample comprised 14 male professional cyclists of national elite level (28.5 ± 4.7 years, 73.47 ± 8.29 kg, 176 ± 6.76 cm), who performed a progressive laboratory test to determine maximal oxygen uptake (VO2max: 62.23 ± 8.28 ml·kg^-1·min^-1), the intensity associated with VO2max (iVO2max: 500.83 ± 58.65 W), movement economy (ME: 0.1166 ± 0.0362 ml·kg^-1·min^-1·W^-1) and the first and second ventilatory thresholds (VT1: 348.21 ± 43.26 W; VT2: 417.86 ± 60.79 W). They also performed 4 and 20 km time trials. Pearson's correlation coefficient (p < 0.05) was used to correlate the physiological indices with performance. No correlation was found between the physiological indices (absolute and relative VO2max, iVO2max, ME, VT1 and VT2) and 4 km (r = 0.38; 0.16; -0.33; 0.20; -0.50; -0.20, respectively) or 20 km (r = 0.24; 0.01; -0.13; -0.12; -0.48; -0.19, respectively) time trial performance in these high-level athletes. These results suggest that such variables cannot explain time trial performance over these distances, probably because of the homogeneity of the subjects.
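
The statistical test used above is a plain Pearson correlation with significance at p < 0.05. The sketch below shows the computation for one index-performance pair, with simulated stand-in values (n = 14, as in the study) rather than the study's data.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(3)
    vo2max = rng.normal(62.2, 8.3, size=14)  # ml/kg/min, n = 14 cyclists
    tt4km = rng.normal(330, 15, size=14)     # 4 km time-trial time, seconds

    r, p = pearsonr(vo2max, tt4km)
    print(f"r = {r:.2f}, p = {p:.3f}", "significant" if p < 0.05 else "ns")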

Relevance:

10.00%

Publisher:

Abstract:

The rating of perceived exertion (RPE) is determined non-invasively and is used together with the blood lactate response as an intensity indicator during incremental tests. In the field, especially in swimming, blood sampling is difficult, so alternative protocols are used to estimate the anaerobic threshold. The aims of this study were: to prescribe an incremental test based on RPE (Borg 6-20) to estimate the metabolic thresholds determined by lactate-based methods [bi-segmented fit (V_LL), fixed concentration of 3.5 mM (V_3.5mM) and maximal distance (V_Dmax)]; to relate the RPE reported at each stage to heart rate (HR) and to mechanical swimming parameters [stroke rate (SR) and stroke length (SL)]; to analyze the use of the 6-20 scale for the regularity of the velocity increments in the test; and to correlate the metabolic thresholds with critical velocity (CV). For this, 12 swimmers (16.4 ± 1.3 years) performed two maximal efforts (200 and 400 m), whose data were used to determine CV, the 400 m velocity (V400m) and the critical stroke rate (CSR), followed by an incremental test with stage intensities based on RPE values of 9, 11, 13, 15 and 17, respectively; at every stage, HR, blood lactate and the times of four stroke cycles and of the 20 m (central part of the pool) and 50 m distances were monitored. The stage velocities, SR, SL, V_LL, V_3.5mM and V_Dmax were then calculated. ANOVA and Pearson's correlation were used in the analysis. No differences were found between CV, V_Dmax and V_LL, but V_3.5mM was lower than the other velocities (P < 0.05). Significant correlations (P < 0.05) were observed between CV and V400m, V_Dmax and V_3.5mM; between V400m and V_3.5mM and V_Dmax; between V_Dmax and V_LL; and, in the incremental test, between RPE and velocity, [Lac], HR, SR and SL (P < 0.05). We conclude that RPE is a reliable tool for controlling stage velocity during incremental tests in swimming.
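
Of the three lactate-based methods named above, the maximal-distance (Dmax) one is the easiest to show compactly: the threshold is the point on the lactate-velocity curve farthest from the straight line joining its first and last points. The sketch below implements that definition with illustrative values, not the study's data.

    import numpy as np

    def dmax_threshold(velocity, lactate):
        p0 = np.array([velocity[0], lactate[0]])
        p1 = np.array([velocity[-1], lactate[-1]])
        line = (p1 - p0) / np.linalg.norm(p1 - p0)  # unit chord direction
        pts = np.column_stack([velocity, lactate]) - p0
        # Perpendicular distance of each point from the chord p0 -> p1.
        dist = np.abs(pts[:, 0] * line[1] - pts[:, 1] * line[0])
        return velocity[np.argmax(dist)]

    velocity = np.array([1.10, 1.18, 1.26, 1.34, 1.42])  # m/s per stage
    lactate = np.array([1.2, 1.6, 2.4, 4.1, 7.3])        # mM
    print(f"V_Dmax ~ {dmax_threshold(velocity, lactate):.2f} m/s")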

Relevance:

10.00%

Publisher:

Abstract:

In this thesis we study some problems related to petroleum reservoirs using methods and concepts of statistical physics. The thesis is divided into two parts. The first one introduces a study of the percolation problem on a random multifractal support, motivated by its potential application in modelling oil reservoirs. We developed a heterogeneous and anisotropic grid that follows a random multifractal distribution of its sites. We then determine the percolation threshold for this grid, the fractal dimension of the percolating cluster and the critical exponents β and ν. In the second part, we propose an alternative systematic way of modelling and simulating oil reservoirs. We introduce a statistical model based on a stochastic formulation of Darcy's law; in this model, the distribution of permeabilities is locally equivalent to the basic model of bond percolation.
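
A percolation threshold of the kind determined in the first part can be estimated numerically by checking for a spanning cluster at increasing occupation probabilities. The sketch below does this on an ordinary square lattice, a stand-in for the multifractal grid of the thesis (whose threshold differs from the square-lattice site value of about 0.593).

    import numpy as np
    from scipy.ndimage import label

    def spans(p, L, rng):
        occupied = rng.random((L, L)) < p
        labels, _ = label(occupied)  # 4-connected site clusters
        # Spanning cluster: some label appears in both top and bottom rows.
        return bool((set(labels[0]) - {0}) & (set(labels[-1]) - {0}))

    rng = np.random.default_rng(0)
    L, trials = 128, 50
    for p in np.arange(0.55, 0.66, 0.02):
        frac = sum(spans(p, L, rng) for _ in range(trials)) / trials
        print(f"p = {p:.2f}: spanning fraction = {frac:.2f}")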

Relevance:

10.00%

Publisher:

Abstract:

The complex behavior of a wide variety of phenomena of interest to physicists, chemists and engineers has been quantitatively characterized using the ideas of fractal and multifractal distributions, which correspond in a unique way to the geometrical shape and dynamical properties of the systems under study. In this thesis we present the space of fractals and the Hausdorff-Besicovitch, box-counting and scaling methods for calculating the fractal dimension of a set. We also investigate percolation phenomena in multifractal objects that are built in a simple way. The central object of our analysis is a multifractal object that we call Qmf, in which the multifractality comes directly from the geometric tiling. We identify some differences between percolation in the proposed multifractals and in a regular lattice. There are basically two sources of these differences: the first is related to the coordination number, c, which changes along the multifractal; the second comes from the way the weight of each cell in the multifractal affects the percolation cluster. We use many samples of finite-size lattices and draw the histogram of percolating lattices against the site occupation probability p. Depending on a parameter ρ characterizing the multifractal and on the lattice size L, the histogram can have two peaks. We observe that the occupation probability at the percolation threshold, pc, is lower for the multifractal than for the square lattice. We compute the fractal dimension of the percolating cluster and the critical exponent β. Despite the topological differences, we find that percolation on a multifractal support is in the same universality class as standard percolation. The area and the number of neighbors of the blocks of Qmf show non-trivial behavior, and a general view of the object Qmf reveals an anisotropy. The value of pc is a function of ρ, which is related to this anisotropy, and we investigate the relation between pc and the average number of neighbors of the blocks as well as the anisotropy of Qmf. We likewise study the distribution of shortest paths in percolation systems at the percolation threshold in two dimensions (2D), considering paths from one given point to multiple other points. In oil recovery terminology, the given single point can be mapped to an injection well (injector) and the multiple other points to production wells (producers). In the standard case of one injection well and one production well separated by a Euclidean distance r, the distribution of shortest paths l, P(l|r), shows power-law behavior with exponent g_l = 2.14 in 2D. Here we analyze the situation of one injector and an array A of producers. Symmetric arrays of producers lead to one peak in the distribution P(l|A), the probability that the shortest path between the injector and any of the producers is l, while asymmetric configurations lead to several peaks in the distribution. We analyze configurations in which the injector is outside and inside the set of producers. The peak in P(l|A) for the symmetric arrays decays faster than in the standard case. For very long paths, all the studied arrays exhibit power-law behavior with exponent g ≈ g_l.
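
The shortest-path quantity l studied here is a chemical distance through occupied sites. The sketch below measures it by breadth-first search for one injector and a small symmetric array of producers on a square lattice at the site-percolation threshold (pc ≈ 0.5927); the lattice size, well positions and connectivity rule are illustrative choices, not the thesis's exact setup.

    import numpy as np
    from collections import deque

    def shortest_path_length(occupied, start, targets):
        """BFS length of the shortest occupied path from the injector
        (start) to the nearest producer in targets, or None."""
        L = occupied.shape[0]
        targets, dist = set(targets), {start: 0}
        queue = deque([start])
        while queue:
            x, y = queue.popleft()
            if (x, y) in targets:
                return dist[(x, y)]
            for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                if 0 <= nx < L and 0 <= ny < L and occupied[nx, ny] \
                        and (nx, ny) not in dist:
                    dist[(nx, ny)] = dist[(x, y)] + 1
                    queue.append((nx, ny))
        return None  # injector not connected to any producer

    rng = np.random.default_rng(5)
    L = 64
    occupied = rng.random((L, L)) < 0.5927
    injector = (L // 2, L // 2)
    producers = [(8, 8), (8, L - 8), (L - 8, 8), (L - 8, L - 8)]
    for site in [injector] + producers:
        occupied[site] = True  # force wells onto occupied sites
    print(shortest_path_length(occupied, injector, producers))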

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior