67 results for "Colapso de dados"


Relevância:

20.00%

Publicador:

Resumo:

Nonionic surfactants in aqueous solution have the property of separating into two phases: a dilute phase, with a low concentration of surfactant, and a surfactant-rich phase called the coacervate. The application of this kind of surfactant in extraction processes from aqueous solutions has been growing over time, which creates a need for knowledge of the thermodynamic properties of these surfactants. In this study the cloud points of polyethoxylated surfactants were determined for the nonylphenol ethoxylate family (ethoxylation degrees 9.5, 10, 11, 12 and 13), the octylphenol ethoxylate family (10 and 11) and polyethoxylated lauryl alcohols (6, 7, 8 and 9), varying the degree of ethoxylation. The cloud point was determined by observing the turbidity of the solution while heating at a ramp of 0.1 °C/minute; for the pressure studies, a high-pressure cell (maximum 300 bar) was used. The Flory-Huggins, UNIQUAC and NRTL models were fitted to the experimental cloud-point curves, and the influence of NaCl concentration and of pressure on the cloud point was studied. The latter parameter is important for oil-recovery processes, in which surfactant solutions are used at high pressures, while the NaCl effect yields cloud points at temperatures closer to room temperature, making it possible to run processes without temperature control. The Levenberg-Marquardt numerical method was used to adjust the parameters. For the Flory-Huggins model the adjusted parameters were the mixing enthalpy, the mixing entropy and the aggregation number; for the UNIQUAC and NRTL models, interaction parameters aij with a quadratic temperature dependence were adjusted. The parameters obtained gave a good fit to the experimental data (RMSD < 0.3%).
The results showed that both the ethoxylation degree and the pressure increase the cloud point, whereas NaCl decreases it.
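
The parameter-fitting step can be sketched as follows. This is a minimal illustration, not the thesis implementation: the cloud-point data are synthetic, and a simple quadratic curve stands in for the Flory-Huggins/UNIQUAC/NRTL expressions; SciPy's `least_squares` with `method='lm'` selects the Levenberg-Marquardt algorithm named above.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic cloud-point data (temperature vs. surfactant mass fraction);
# illustrative values only, not the thesis measurements.
w = np.array([0.01, 0.02, 0.05, 0.10, 0.15, 0.20])
T_obs = np.array([62.1, 58.4, 54.9, 53.2, 53.8, 55.6])  # degrees C

def model(p, w):
    # Parabolic cloud-point curve T(w) = p0 + p1*w + p2*w**2, a stand-in
    # for the thermodynamic models fitted in the thesis.
    return p[0] + p[1] * w + p[2] * w ** 2

def residuals(p):
    return model(p, w) - T_obs

# method='lm' runs the Levenberg-Marquardt algorithm.
fit = least_squares(residuals, x0=[60.0, -50.0, 100.0], method='lm')
rmsd = np.sqrt(np.mean(fit.fun ** 2))
```

The same pattern applies to the real models: only `model` and the parameter vector change.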

Relevância:

20.00%

Publicador:

Resumo:

This work first establishes the fundamental thermodynamic relationships governing equilibrium between phases and the models used to describe the non-ideal behavior of the liquid and vapor phases at low pressures. It then addresses the determination of vapor-liquid equilibrium (VLE) data for a series of multicomponent mixtures of saturated aliphatic hydrocarbons, prepared synthetically from analytical-grade substances, and the development of a new dynamic cell with circulation of the vapor phase. The apparatus and the experimental procedures developed are described and applied to the determination of VLE data. Isobaric VLE data were obtained with a Fischer ebulliometer with circulation of both phases for the systems pentane + dodecane, heptane + dodecane and decane + dodecane. Using two new specially designed dynamic cells with circulation of the vapor phase, of easy operation and low cost, data were measured for the systems heptane + decane + dodecane, acetone + water, Tween 20 + dodecane and phenol + water, as well as distillation curves of a gasoline without additives. The compositions of the equilibrium phases were determined by densimetry, chromatography and total organic carbon analysis. Calibration curves of density versus composition were prepared from synthetic mixtures, and the excess volumes were evaluated. The VLE data obtained experimentally for the hydrocarbon and aqueous systems were submitted to thermodynamic consistency tests based on the Gibbs-Duhem equation, together with literature data for other binary systems, mainly from the Dortmund Data Bank (DDB), yielding a satisfactory database. The results of the consistency tests for the binary and ternary systems were evaluated in terms of deviations for applications such as model development.
These approved data sets were then used in the KijPoly program to determine the binary kij parameters of the Peng-Robinson cubic equation of state, both in its original form and with an expanded alpha function. The parameters obtained can be applied, through simulators, to the simulation of petroleum-reservoir conditions and of the several distillation processes found in the petrochemical industry. The two dynamic cells designed, built with national technology for the determination of VLE data, were successful, demonstrating efficiency and low cost (Humberto Neves Maia de Oliveira, Tese de Doutorado, PPGEQ/PRH-ANP 14/UFRN). Multicomponent systems, mixtures of components of different molecular weights and dilute solutions may all be studied in these VLE cells.
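
The role of the binary kij parameters can be illustrated with a short sketch of the original Peng-Robinson equation with van der Waals one-fluid mixing rules. The critical constants, the composition and the kij value below are illustrative, not the thesis results.

```python
import numpy as np

R = 8.314  # J/(mol.K)

def pr_pure(Tc, Pc, omega, T):
    # Pure-component Peng-Robinson parameters (original alpha function).
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
    a = 0.45724 * R ** 2 * Tc ** 2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    return a, b

def pr_pressure(x, a, b, kij, T, V):
    # van der Waals one-fluid mixing rules; kij corrects the geometric-mean
    # combining rule for each binary pair.
    n = len(x)
    a_mix = sum(x[i] * x[j] * np.sqrt(a[i] * a[j]) * (1.0 - kij[i][j])
                for i in range(n) for j in range(n))
    b_mix = sum(x[i] * b[i] for i in range(n))
    return R * T / (V - b_mix) - a_mix / (V * (V + b_mix) + b_mix * (V - b_mix))

# Illustrative constants for heptane and dodecane at 400 K.
a1, b1 = pr_pure(540.2, 27.4e5, 0.350, T=400.0)
a2, b2 = pr_pure(658.0, 18.2e5, 0.576, T=400.0)

x = [0.5, 0.5]
kij = [[0.0, 0.0], [0.0, 0.0]]       # kij = 0: pure combining rule
P0 = pr_pressure(x, [a1, a2], [b1, b2], kij, T=400.0, V=0.03)

kij[0][1] = kij[1][0] = 0.02         # a hypothetical fitted kij
P1 = pr_pressure(x, [a1, a2], [b1, b2], kij, T=400.0, V=0.03)
```

A positive kij weakens the cross attraction, so the predicted pressure rises slightly; fitting kij to consistent VLE data is exactly what the KijPoly program does.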

Relevância:

20.00%

Publicador:

Resumo:

This thesis studies the use of argumentation as a discursive element in digital media, particularly blogs. We analyzed the blog "Fatos e Dados" [Facts and Data], created by Petrobras in the context of allegations of corruption that culminated in the installation, within the National Congress, of a Parliamentary Commission of Inquiry to investigate the company. We seek to understand the influence that the discursive elements mobilized by argumentation exert in blogs and on agenda-setting. To this end, we work with notions of argumentation in dialogue with questions of language and discourse, drawing on Charaudeau (2006), Citelli (2007), Perelman & Olbrechts-Tyteca (2005), Foucault (2007, 2008a), Bakhtin (2006) and Breton (2003). We also approach our subject from the perspective of social representations, seeking to clarify concepts such as public image and the use of representations as argumentative elements, following Moscovici (2007). We further consider reflections on hypertext and the context of cyberculture, with authors such as Lévy (1993, 1999, 2003), Castells (2003) and Chartier (1999, 2002), and questions of discourse analysis, especially in Orlandi (1988, 1989, 1996 and 2001), as well as Foucault (2008b). We examined the 118 posts published in the first 30 days of existence of the blog "Fatos e Dados" (between 2 June and 1 July 2009) and analyzed the top ten in detail. A corporate blog aims to defend the points of view and the public image of the organization and therefore draws on elements of social representations to build its arguments. The blog advances, as its main news criterion, including in the posts we reviewed, the credibility of Petrobras as the source of the information; in the posts analyzed, the news values of novelty and relevance also appear.
The controversy between the blog and the press resulted from the press's inadequacy and lack of preparation to deal with a corporate blog that was able to exploit the liberation of the emission pole characteristic of cyberculture. The blog is a discursive manifestation in a concrete historical situation, whose understanding and attribution of meaning take place through the social relations between subjects who, most of the time, stand in discursive and ideological dispute with one another; this dispute also affects the movements of reading and the production of readings. We conclude that the intersubjective relationships occurring in blogs change, through the argumentative techniques used, the notions of news criteria, interfering with the news agenda and with the organization of information in digital media outlets. The influence that the discursive elements mobilized by argumentation exert in digital media is also clear, as they resize and reframe the frames of reality those media convey in relation to subject-readers. Blogs have become part of the information scenario with the emergence of the Internet and are able to interfere more effectively in the organization of the media agenda through the conscious use of argumentative elements in their posts.

Relevância:

20.00%

Publicador:

Resumo:

The aim of this study is to analyze the effect of migration on the income differential between migrants and non-migrants in the Brazilian Northeast and thereby to verify whether the immigrants constitute a positively selected group. The hypothesis tested is that the presence of these immigrants affects income inequality in the receiving region, which may explain part of the persistently high inequality in the Brazilian Northeast. The study is based on the migration-selectivity literature introduced by Roy (1951), Borjas (1987) and Chiswick (1999). A Mincer (1974) wage equation is estimated by OLS, using microdata from the 2010 Census sample of the Brazilian Institute of Geography and Statistics (IBGE). The comparison of socioeconomic profiles showed that immigrants are more qualified and, on average, better paid than non-migrants. The estimated model indicates that, holding all other variables constant, the income immigrants earn is 14.43% higher than that of non-migrants. There is thus evidence of positive selectivity in the migration directed to the Northeast.
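
A minimal sketch of the estimation strategy, on simulated data rather than the Census microdata: an OLS Mincer equation with a migrant dummy. The coefficients are illustrative; the true migrant coefficient is set to 0.135 so that exp(0.135) - 1 is close to the 14.4% differential reported above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated microdata (illustrative; the thesis uses the 2010 Census sample).
school = rng.integers(0, 16, n).astype(float)    # years of schooling
exper = rng.uniform(0.0, 40.0, n)                # years of experience
migrant = rng.integers(0, 2, n).astype(float)    # 1 = immigrant

# Log-wage process with the usual Mincer terms plus the migrant dummy.
ln_wage = (0.5 + 0.08 * school + 0.04 * exper - 0.0006 * exper ** 2
           + 0.135 * migrant + rng.normal(0.0, 0.3, n))

# OLS via least squares on the design matrix.
X = np.column_stack([np.ones(n), school, exper, exper ** 2, migrant])
beta, *_ = np.linalg.lstsq(X, ln_wage, rcond=None)
premium = np.exp(beta[4]) - 1.0  # estimated proportional income differential
```

The exponentiated dummy coefficient, minus one, is the proportional income differential between migrants and non-migrants, holding the other regressors constant.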

Relevância:

20.00%

Publicador:

Resumo:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Relevância:

20.00%

Publicador:

Resumo:

The objective of this work is to identify, map and explain the evolution of soil occupation and the environmental vulnerability of the areas of Canto do Amaro and Alto da Pedra, in the municipality of Mossoró-RN, based on multitemporal analysis of images from orbital remote sensors and on extensive field work integrated into a Geographic Information System (GIS). The use of spatial-analysis techniques within the GIS, combined with the interpretation and analysis of Remote Sensing (RS) products, made it possible to reach significant results toward the objectives of this work. To support the management of the information, the data set obtained from varied sources and stored in a digital environment constitutes the geographic database of this research. Prior knowledge of the spectral behavior of natural and artificial targets, together with Digital Image Processing (DIP) algorithms, greatly facilitates interpretation and the search for new information at the spectral level. From these data, a varied thematic cartography was generated: maps of geology, geomorphological units, soils, vegetation, and use and occupation of the soil. Crossing these maps in the GIS environment generated the maps of natural and environmental vulnerability of the petroliferous fields of Canto do Amaro and Alto da Pedra-RN, in an environment centered on the management of water and solid residues as well as on the analysis of spatial data, thus making possible a more complex analysis of the studied area.

Relevância:

20.00%

Publicador:

Resumo:

We present in this work two estimation methods for accelerated failure time models with random effects for grouped survival data. The first method, implemented in the SAS software through the NLMIXED procedure, uses an adaptive Gauss-Hermite quadrature to determine the marginalized likelihood. The second method, implemented in the free software R, is based on penalized likelihood to estimate the parameters of the model. For the first method we describe the main theoretical aspects; for the second, we briefly present the adopted approach together with a simulation study investigating the performance of the method. We also applied the models to real data on the operating time of oil wells from the Potiguar Basin (RN/CE).
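
The marginalization device can be illustrated with plain (non-adaptive) Gauss-Hermite quadrature on a one-dimensional normal random effect. The integrand below, exp(b), is a toy stand-in for the conditional likelihood contribution of a group; the closed form of its expectation lets us check the approximation.

```python
import numpy as np

def gauss_hermite_expectation(f, sigma, n_nodes=20):
    # Approximates E[f(b)] for b ~ N(0, sigma^2) by Gauss-Hermite quadrature:
    # E[f(b)] ≈ (1/sqrt(pi)) * sum_i w_i * f(sqrt(2)*sigma*x_i),
    # the same device NLMIXED uses to marginalize the random effect.
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    return np.sum(w * f(np.sqrt(2.0) * sigma * x)) / np.sqrt(np.pi)

sigma = 0.7
approx = gauss_hermite_expectation(np.exp, sigma)
exact = np.exp(sigma ** 2 / 2.0)  # E[exp(b)], the lognormal mean
```

In the real model, f(b) is the product of the group's conditional survival likelihood contributions, and the sum over nodes replaces the intractable integral.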

Relevância:

20.00%

Publicador:

Resumo:

In this work we study the asymptotic unbiasedness and the strong and uniform strong consistency of a class of kernel estimators fn as estimators of a density function f on a k-dimensional sphere.
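
The consistency notion can be illustrated on the real line (a toy stand-in; the work itself treats kernels on the k-dimensional sphere): as n grows and the bandwidth h shrinks, the sup-norm error of the kernel estimator decreases.

```python
import numpy as np

def kde(x_eval, data, h):
    # Gaussian kernel density estimator f_n on the real line; the spherical
    # case replaces this kernel with one defined on the sphere.
    u = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(1)
grid = np.linspace(-3.0, 3.0, 61)
true_f = np.exp(-0.5 * grid ** 2) / np.sqrt(2.0 * np.pi)  # N(0,1) density

# Sup-norm error over the grid for increasing sample sizes.
errors = []
for n in (100, 10000):
    sample = rng.normal(size=n)
    h = 1.06 * n ** (-0.2)  # Silverman-type bandwidth, h -> 0 as n -> inf
    errors.append(np.max(np.abs(kde(grid, sample, h) - true_f)))
```

The shrinking sup-norm error is the finite-sample face of the uniform strong consistency studied in the work.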

Relevância:

20.00%

Publicador:

Resumo:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Relevância:

20.00%

Publicador:

Resumo:

Among the various aspects of elderly health, oral health deserves special attention because, historically, dental services have not treated this population group as a priority. It is therefore necessary to produce a multidimensional indicator capable of measuring all the oral alterations found in an elderly person, facilitating the categorization of oral health as a whole. Such an indicator would be an important instrument for ranking priorities of care directed at the elderly population. This study therefore proposes the production and validation of an oral health indicator from the secondary data collected by the SB Brasil 2010 project for the 65-74 age group. The sample comprised the 7,619 individuals aged 65 to 74 who participated in the survey across the five regions of Brazil. These individuals underwent an epidemiological evaluation of their oral health conditions based on the CPO-d (DMFT), CPI and PIP indices; the use of and need for dental prostheses were also assessed, as well as social, economic and demographic characteristics. A factor analysis identified a relatively small number of common factors through principal component analysis. After naming the factors, the factor scores were summed for each individual, and the dichotomization of this sum provided the proposed oral health indicator. Twelve oral health variables from the SB Brasil 2010 database and three socioeconomic and demographic variables were included in the factor analysis. Based on the Kaiser criterion, five factors were retained, explaining 70.28% of the total variance of the variables included in the model: factor 1 alone explains 32.02% of this variance, factor 2 explains 14.78%, and factors 3, 4 and 5 explain 8.90%, 7.89% and 6.68%, respectively.
From the factor loadings, factor 1 was named "sound teeth and little use of prostheses", factor 2 "periodontal disease present" and factor 3 "need for rehabilitation", while the fourth and fifth factors were named "caries" and "favorable social condition", respectively. To guarantee the representativeness of the proposed indicator, a second factor analysis was carried out on a subsample of the investigated elderly population, and the applicability of the indicator was tested through its association with other variables of the study. Finally, it is worth emphasizing that the indicator produced here was able to aggregate diverse information about the oral health and social conditions of these individuals, translating several data into a single simple piece of information that helps health managers see the real needs for oral health interventions in a given population.
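
The extraction step can be sketched as a toy factor analysis by principal components with the Kaiser criterion. The data below are simulated (two latent factors driving six indicators) and merely stand in for the study's 15 variables; the retained-factor count and variance shares are therefore illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Two latent factors, each driving three observed indicators (illustrative).
latent = rng.normal(size=(n, 2))
loadings = np.array([[0.80, 0.00], [0.70, 0.10], [0.75, 0.00],
                     [0.00, 0.80], [0.10, 0.70], [0.00, 0.75]])
X = latent @ loadings.T + 0.4 * rng.normal(size=(n, 6))

# Principal component extraction on the correlation matrix.
corr = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

retained = eigvals[eigvals > 1.0]            # Kaiser criterion: eigenvalue > 1
explained = retained.sum() / eigvals.sum()   # share of total variance explained
```

In the study this is followed by naming the retained factors from their loadings, summing factor scores per individual and dichotomizing the sum into the final indicator.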

Relevância:

20.00%

Publicador:

Resumo:

In systems that combine the outputs of classification methods (combination systems), such as ensembles and multi-agent systems, one of the main constraints is that the base components (classifiers or agents) should be diverse among themselves; in other words, there is clearly no accuracy gain in a system composed of a set of identical base components. One way of increasing diversity is through the use of feature selection or data distribution methods in combination systems. In this work, we investigate the impact of using data distribution methods among the components of combination systems. Different methods of data distribution are used, and the combination systems are analyzed under several different configurations. From this analysis, we aim to detect which combination systems are more suitable for distributing data among their components.
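
One simple form of data distribution can be sketched as follows: each base component is trained on a disjoint partition of the data and the outputs are combined by majority vote. The nearest-centroid base classifier and the disjoint partitioning are illustrative choices, not the methods of the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative two-class data (not the thesis data sets).
n = 900
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X += 0.3 * rng.normal(size=X.shape)  # noise, so components can disagree

def nearest_centroid_fit(Xp, yp):
    return {c: Xp[yp == c].mean(axis=0) for c in (0, 1)}

def nearest_centroid_predict(model, Xq):
    d0 = np.linalg.norm(Xq - model[0], axis=1)
    d1 = np.linalg.norm(Xq - model[1], axis=1)
    return (d1 < d0).astype(int)

# Data distribution: each component sees a disjoint third of the training
# set, one simple way to induce diversity among the base components.
parts = np.array_split(rng.permutation(n), 3)
models = [nearest_centroid_fit(X[idx], y[idx]) for idx in parts]

votes = np.stack([nearest_centroid_predict(m, X) for m in models])
combined = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote
accuracy = (combined == y).mean()
```

Varying the distribution scheme (disjoint parts, bootstrap samples, overlapping subsets) and the base method is exactly the kind of configuration sweep the investigation performs.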

Relevância:

20.00%

Publicador:

Resumo:

In scientific computing, data must be as precise and exact as possible; however, the imprecision of the input data of this kind of computation may be associated with measurements obtained from equipment that provides truncated or rounded values, so that calculations with these data produce imprecise results. The most common errors in scientific computing are truncation errors, which arise when infinite data are "truncated", or interrupted, and rounding errors, which are responsible for the imprecision of calculations in finite sequences of arithmetic operations. Facing this kind of problem, Moore introduced interval mathematics in the 1960s, defining a data type that makes it possible to work with continuous data and even to bound the maximum size of the error. Interval mathematics is a way out of this problem, since it allows automatic control and analysis of errors. However, the algebraic properties of intervals are not the same as those of the real numbers, even though the real numbers can be seen as degenerate intervals and the algebraic properties of degenerate intervals are exactly those of the real numbers. Starting from this, and considering algebraic specification techniques, a language is needed that can implement an auxiliary notion of equivalence, introduced by Santiago [6], that "simulates" the algebraic properties of the real numbers on intervals. CASL, the Common Algebraic Specification Language [1], is an algebraic specification language for the description of functional requirements and modular software designs, which has been developed since 1996 by CoFI, the Common Framework Initiative [2]. The development of CASL is ongoing and represents a joint effort of leading figures in the algebraic specification area to create a standard for the field.
This dissertation presents a CASL specification of the interval type, equipped with Moore arithmetic, so that it may extend systems that manipulate continuous data, enabling not only the control and analysis of approximation errors but also the algebraic verification of properties of such systems. The interval specification presented here was built from the specification of the rational numbers proposed by Mossakowski in 2001 [3] and introduces the notion of local equality proposed by Santiago [6, 5, 4].
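
The algebraic mismatch mentioned above can be seen directly in a minimal Moore-arithmetic sketch (in Python rather than CASL): interval subtraction is not the inverse of interval addition, so x - x contains 0 but is not the degenerate interval [0, 0].

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Moore addition: endpoint-wise.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Moore subtraction: [a,b] - [c,d] = [a-d, b-c].
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # Moore multiplication: min/max over the four endpoint products.
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

x = Interval(1.0, 2.0)
diff = x - x  # [-1, 1], not the degenerate [0, 0]: additive inverses fail
```

This failure of the real-number group laws is precisely why an auxiliary notion of equivalence (local equality) is needed to "simulate" the real-number algebra on intervals.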

Relevância:

20.00%

Publicador:

Resumo:

The increasing complexity of applications has demanded hardware that is ever more flexible and able to achieve higher performance, and traditional hardware solutions have not been successful in meeting these constraints. General-purpose processors are inherently flexible, since they can perform several tasks, but they cannot reach high performance when compared with application-specific devices; application-specific devices, performing only a few tasks, achieve high performance but offer less flexibility. Reconfigurable architectures emerged as an alternative to the traditional approaches and have become an area of rising interest over the last decades. The purpose of this paradigm is to modify the device's behavior according to the application, making it possible to balance flexibility and performance and to meet the application's constraints. This work presents the design and implementation of a coarse-grained hybrid reconfigurable architecture for stream-based applications. The architecture, named RoSA, consists of reconfigurable logic attached to a processor; its goal is to exploit the instruction-level parallelism of data-flow-intensive applications to accelerate their execution on the reconfigurable logic. Since instruction-level parallelism is extracted at compile time, this work also presents an optimization phase for the RoSA architecture to be included in the GCC compiler. To design the architecture, this work further presents a methodology based on hardware reuse of datapaths, named RoSE, which views the reconfigurable units through reusability levels, providing area savings and datapath simplification. The architecture was implemented in a hardware description language (VHDL) and validated through simulation and prototyping. Benchmarks used to characterize performance demonstrated a speedup of 11x on the execution of some applications.

Relevância:

20.00%

Publicador:

Resumo:

The use of clustering methods for the discovery of cancer subtypes has drawn a great deal of attention in the scientific community. While bioinformaticians have proposed new clustering methods that take advantage of characteristics of the gene expression data, the medical community has a preference for using classic clustering methods. There have been no studies thus far performing a large-scale evaluation of different clustering methods in this context. This work presents the first large-scale analysis of seven different clustering methods and four proximity measures for the analysis of 35 cancer gene expression data sets. Results reveal that the finite mixture of Gaussians, followed closely by k-means, exhibited the best performance in terms of recovering the true structure of the data sets. These methods also exhibited, on average, the smallest difference between the actual number of classes in the data sets and the best number of clusters as indicated by our validation criteria. Furthermore, hierarchical methods, which have been widely used by the medical community, exhibited a poorer recovery performance than that of the other methods evaluated. Moreover, as a stable basis for the assessment and comparison of different clustering methods for cancer gene expression data, this study provides a common group of data sets (benchmark data sets) to be shared among researchers and used for comparisons with new methods
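
The recovery notion can be sketched with a plain k-means run on toy "expression" data, scored with a simple Rand-style pair-agreement index. This is illustrative only: the study evaluates seven methods and four proximity measures on 35 real data sets, with proper validation criteria.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for a gene-expression data set: 3 well-separated classes
# of 50 samples each in 10 dimensions.
centers = rng.normal(scale=4.0, size=(3, 10))
labels_true = np.repeat([0, 1, 2], 50)
X = centers[labels_true] + rng.normal(size=(150, 10))

def kmeans(X, k, iters=30, restarts=25):
    # Plain Lloyd k-means with random restarts, keeping the lowest-inertia run.
    best, best_inertia = None, np.inf
    for _ in range(restarts):
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
            assign = d.argmin(axis=1)
            centroids = np.array([X[assign == j].mean(axis=0) if np.any(assign == j)
                                  else centroids[j] for j in range(k)])
        inertia = (np.linalg.norm(X - centroids[assign], axis=1) ** 2).sum()
        if inertia < best_inertia:
            best, best_inertia = assign, inertia
    return best

def rand_index(a, b):
    # Fraction of sample pairs on which the two partitions agree; 1.0 means
    # the true class structure was fully recovered.
    same_a = a[:, None] == a[None, :]
    same_b = b[:, None] == b[None, :]
    return (same_a == same_b).mean()

assign = kmeans(X, 3)
recovery = rand_index(labels_true, assign)
```

Replacing k-means with a finite mixture of Gaussians, and the pair index with corrected criteria such as the adjusted Rand index, gives the kind of comparison the study performs at scale.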

Relevância:

20.00%

Publicador:

Resumo:

The use of middleware technology in various types of systems, to abstract low-level details related to the distribution of application logic, is increasingly common. Among the many systems that can benefit from these components we highlight distributed systems, in which communication between software components located on different physical machines must be supported. An important issue in the communication between distributed components is the provision of mechanisms for managing quality of service. This work presents a metamodel for modeling component-based middleware that provides applications with an abstraction of the communication between the components involved in a data stream, regardless of their location. Another feature of the metamodel is self-adaptation of the communication mechanism, either by updating the values of its configuration parameters or by replacing it with another mechanism when the specified quality-of-service restrictions are not being met. To this end, the communication state is monitored (applying techniques such as a feedback control loop) and the related performance metrics are analyzed. The Model-Driven Development (MDD) paradigm was used to generate the implementation of a middleware that serves as a proof of concept of the metamodel, together with the configuration and reconfiguration policies related to the dynamic adaptation processes. In this sense, the metamodel associated with the communication configuration process was defined, and the MDD application also comprises the following transformations: from the architectural model of the middleware to Java code, and from the configuration model to XML.
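
The monitoring-and-reconfiguration idea can be sketched as a toy feedback loop: measure a metric, compare it against the QoS restriction, and switch mechanisms when it is violated. All names below (Channel, measure_latency, the latency figures) are hypothetical illustrations, not the metamodel's actual elements.

```python
import random

class Channel:
    # Hypothetical communication mechanism with a measurable latency.
    def __init__(self, name, base_latency_ms):
        self.name = name
        self.base_latency_ms = base_latency_ms

    def measure_latency(self):
        # Simulated monitoring sample with +/-20% jitter.
        return self.base_latency_ms * random.uniform(0.8, 1.2)

def control_loop(channels, max_latency_ms, samples=20):
    # Feedback control loop: monitor the communication state, analyze it
    # against the QoS restriction, and reconfigure by switching to another
    # mechanism when the restriction is violated.
    current = channels[0]
    for _ in range(samples):
        latency = current.measure_latency()
        if latency > max_latency_ms:
            alternatives = [c for c in channels if c is not current]
            current = min(alternatives, key=lambda c: c.base_latency_ms)
    return current

random.seed(0)
slow = Channel("tcp-stream", 50.0)
fast = Channel("udp-stream", 10.0)
chosen = control_loop([slow, fast], max_latency_ms=30.0)
```

In the metamodel the same cycle is realized by the generated middleware: the monitored metrics drive either parameter updates or the replacement of the communication mechanism.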