958 results for Modeling process


Relevance: 60.00%

Abstract:

Introduction: Most interventions to promote leisure-time physical activity in populations have shown small or null effect sizes, or inconsistent results. Approaching the problem from a systems perspective may be one way to overcome this mismatch. Objective: To develop an agent-based model to investigate the formation and evolution of population-level patterns of leisure-time physical activity in adults, arising from the interaction between individuals' psychological attributes and attributes of the built and social environments in which they live. Methods: The modeling process comprised three stages: elaboration of a conceptual map, based on a literature review and consultation with experts; creation and verification of the model algorithm; and parameterization and analysis of consistency and sensitivity. The results of the literature review were consolidated and reported according to the search domains (psychological aspects, social environment, and built environment). The quantitative results of the expert consultation were described as frequencies, and the content of the answers to open-ended questions was analyzed and compiled by the author of this thesis. The model algorithm was implemented in NetLogo, version 5.2.1, following a verification protocol to ensure that the algorithm was implemented accurately. The consistency and sensitivity analyses used the Vargha-Delaney A test, partial rank correlation coefficients, boxplots, and line and scatter plots. Results: The elements defined for the conceptual map were the person's intention, the behavior of nearby people and of the community, and the perceived quality, access, and available activities of the places where leisure-time physical activity can be practiced. The model represents a hypothetical community containing two types of agents: people and places where leisure-time physical activity can be practiced. People interact with each other and with the built environment, generating population-level temporal trends of leisure-time physical activity and of intention. The sensitivity analyses indicated that the temporal trends of leisure-time physical activity and intention are highly sensitive to the influence of a person's current behavior on her future intention, to the size of the person's perception radius, and to the proportion of places where leisure-time physical activity can be practiced. Final considerations: The conceptual map and the agent-based model proved adequate for investigating the formation and evolution of population-level patterns of leisure-time physical activity in adults. The influence of a person's behavior on her intention, the size of the person's perception radius, and the proportion of places where leisure-time physical activity can be practiced are important determinants of the formation and evolution of population-level patterns of leisure-time physical activity among adults in the model.
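The interaction rules described above can be sketched as a toy agent update. All weights, the perception radius, and the one-dimensional geometry are illustrative assumptions, not the thesis's actual NetLogo parameterization:

```python
import random

def step(people, venues, w_self=0.5, w_social=0.3, w_env=0.2, radius=2):
    """One model tick: each person's future intention is a weighted mix of
    her current behavior, the behavior of nearby people, and the share of
    activity venues within her perception radius (all names illustrative)."""
    new = []
    for i, p in enumerate(people):
        neighbors = [q for j, q in enumerate(people)
                     if j != i and abs(q["x"] - p["x"]) <= radius]
        social = (sum(q["active"] for q in neighbors) / len(neighbors)) if neighbors else 0.0
        env = sum(1 for v in venues if abs(v - p["x"]) <= radius) / max(len(venues), 1)
        intention = w_self * p["active"] + w_social * social + w_env * env
        # Behavior is a stochastic function of intention, as in many ABMs.
        new.append({"x": p["x"], "intention": intention,
                    "active": 1 if random.random() < intention else 0})
    return new
```

Iterating `step` over many ticks produces exactly the kind of population-level time trends of behavior and intention whose sensitivity the thesis analyzes.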

Relevance: 60.00%

Abstract:

Given the depletion of non-renewable resources and growing concern over climate change, renewable fuel production from microalgae continues to attract much attention, owing to their potential for fast growth rates, high oil content, ability to grow in unconventional settings, and carbon neutrality, besides eliminating competition with food crops. It is therefore important to develop a process for converting microalgae into fuel gas, notably synthesis gas. With this in mind, the gasification reaction of the microalga Chlorella vulgaris was studied through thermogravimetric analysis experiments, to estimate the kinetic parameters of the reactions, and through the simulation of a dynamic thermochemical mathematical model of the process using mass and energy conservation equations coupled to the reaction kinetics. Isothermal and dynamic thermogravimetric analyses were carried out using two different types of kinetic models: isoconversional and independent parallel reactions (IPR). In both models, the estimated kinetic parameters showed good fits and remained within the ranges reported in the literature. The effects of the IPR model's kinetic parameters on microalga conversion were also analyzed, in order to identify which were most pronounced as their values varied. In the simulation stage of the solar-reactor-controlled system, the mathematical model was validated by comparison against temperature values and product concentrations measured experimentally in the literature, showing good agreement and demonstrating, together with the experimental thermogravimetry stage, the feasibility of producing synthesis gas through gasification of the microalga Chlorella vulgaris.
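As a minimal illustration of how kinetic parameters can be estimated from thermogravimetric data, the classic linearized Arrhenius fit below recovers an activation energy from rate constants measured at several temperatures. It is a sketch of the general idea only, not the isoconversional or IPR procedure used in the study:

```python
import math

def arrhenius_fit(temps_K, ks):
    """Estimate activation energy Ea (J/mol) and pre-exponential factor A
    from rate constants at several temperatures, by ordinary least squares
    on the linearized Arrhenius form: ln k = ln A - Ea / (R * T)."""
    R = 8.314  # gas constant, J/(mol K)
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in ks]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    Ea = -slope * R
    A = math.exp(ybar - slope * xbar)
    return Ea, A
```

Given synthetic data generated with a known Ea, the fit recovers it exactly, which is the standard sanity check before applying such a regression to real TGA curves.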

Relevance: 60.00%

Abstract:

The experimental acquisition of neuronal signals is one of the main advances in neuroscience. By observing the electric current and potential in a brain region, it is possible to understand the physiological processes involved in generating the action potential, and to produce mathematical models capable of simulating the behavior of a neuronal cell. A common practice in this type of experiment is to obtain readings from an electrode array positioned in a medium shared by several neurons, which results in a mixture of neuronal signals within the same time series. This work proposes a discrete-time linear model for the signal produced during neuronal firing. The coefficients of this model are computed using real samples of neuronal signals obtained in vivo. The modeling process devised employs system identification and signal processing techniques and is dissociated from considerations about the biophysical functioning of the cell, providing a low-complexity alternative for modeling neuronal firing. Moreover, the linear-system representation makes it possible to conceive an inverse system, whose function is to recover the original signal of each active neuron from an extracellular mixture. In this context, some solutions based on adaptive filters are discussed for simulating the inverse system, introducing a new approach to the neuronal spike-sorting problem.
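A minimal sketch of the system-identification idea, assuming a simple FIR structure (the actual model order and fitting procedure of the work are not specified here), fits discrete-time linear coefficients to a recorded signal by least squares:

```python
import numpy as np

def fit_fir(x, y, order):
    """Least-squares fit of a discrete-time FIR model
    y[n] ~ sum_k h[k] * x[n-k], a low-complexity stand-in for the linear
    spike model described above (the thesis fits its coefficients to
    in-vivo recordings; here x and y are synthetic)."""
    # Build the regression matrix of delayed copies of x (zero initial state).
    X = np.column_stack([np.concatenate([np.zeros(k), x[:len(x) - k]])
                         for k in range(order)])
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    return h
```

Once such a forward model is known, an inverse (deconvolving) filter can be designed, which is the role the adaptive filters play in the spike-separation discussion above.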

Relevance: 60.00%

Abstract:

Financial management emerged at the beginning of the 19th century, together with the consolidation of large companies and the formation of the American national markets, whereas in Brazil the first studies date from the second half of the 20th century. Since then, the country has managed to consolidate some centers of research excellence, form a significant group of senior researchers, and expand the research areas in the field; nevertheless, few studies attempt to portray the characteristics of scientific productivity in Finance. Seeking to contribute to a better understanding of the productive behavior of this area, the present research studies its scientific production, materialized in the form of digital articles published in 24 renowned national journals classified in the Qualis/CAPES strata A2, B1, and B2 of the Administration, Accounting and Tourism area. To this end, Bradford's Law, Price's Law of Elitism, and Lotka's Law are applied. Bradford's Law identifies three productivity zones, with the core formed by three journals, one of them classified in the Qualis/CAPES B2 stratum, which highlights the limitation of a sample whose sole criterion is the Qualis/CAPES classification.

For Price's Law of Elitism, under both straight and complete counting, we did not identify elite behavior similar to that predicted by the theory, and we found a large number of authors with only one publication. Applying the generalized inverse power model, estimated by ordinary least squares (OLS), we verified that researcher productivity under straight counting conforms to that defined by Lotka's Law at the α = 0.01 significance level; under complete counting, however, we cannot confirm the hypothesis of homogeneity of the distributions. Moreover, in both counts the productivity measured by the parameter n is greater than 2, and therefore the productivity of finance researchers is lower than that postulated by the theory.
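The OLS fit of Lotka's generalized inverse power law mentioned above can be sketched as follows; the hypothetical data and the simple log-log regression are illustrative only:

```python
import math

def lotka_fit(pubs_per_author):
    """Fit Lotka's generalized inverse power law y_x = C / x**n, where y_x
    is the fraction of authors with x publications, by OLS on the
    linearized form log10 y = log10 C - n * log10 x."""
    counts = {}
    for x in pubs_per_author:
        counts[x] = counts.get(x, 0) + 1
    total = len(pubs_per_author)
    xs = sorted(counts)
    lx = [math.log10(x) for x in xs]
    ly = [math.log10(counts[x] / total) for x in xs]
    m = len(xs)
    xbar, ybar = sum(lx) / m, sum(ly) / m
    slope = sum((a - xbar) * (b - ybar) for a, b in zip(lx, ly)) / \
            sum((a - xbar) ** 2 for a in lx)
    return -slope, 10 ** (ybar - slope * xbar)  # (n, C)
```

In Lotka's classic formulation n = 2; the study's finding of n > 2 under both counting schemes means productivity falls off faster than the classic law predicts.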

Relevance: 60.00%

Abstract:

Sentiment analysis, or opinion mining, aims to use automated tools to detect subjective information such as opinions, attitudes, and feelings expressed in text. This paper proposes a novel probabilistic modeling framework, the joint sentiment-topic (JST) model, based on latent Dirichlet allocation (LDA), which detects sentiment and topic simultaneously from text. A reparameterized version of the JST model called Reverse-JST, obtained by reversing the sequence of sentiment and topic generation in the modeling process, is also studied. Although JST is equivalent to Reverse-JST without a hierarchical prior, extensive experiments show that when sentiment priors are added, JST performs consistently better than Reverse-JST. Moreover, unlike supervised approaches to sentiment classification, which often fail to produce satisfactory performance when shifting to other domains, the weakly supervised nature of JST makes it highly portable across domains. This is verified by experimental results on data sets from five different domains, where the JST model even outperforms existing semi-supervised approaches on some of the data sets despite using no labeled documents. Moreover, the topics and topic sentiments detected by JST are coherent and informative. We hypothesize that the JST model can readily meet the demand of large-scale sentiment analysis from the web in an open-ended fashion.

Relevance: 60.00%

Abstract:

Risk and knowledge are two concepts and components of business management which have so far been studied almost independently. This is especially true where risk management is conceived mainly in financial terms, as, for example, in the banking sector. The banking sector has sophisticated methodologies for managing risk, such as mathematical risk modeling. However, the methodologies for analyzing risk do not explicitly include knowledge management for risk knowledge creation and risk knowledge transfer. Banks are affected by internal and external changes, with the consequent accommodation to new business models, new regulations, and the competition of big players around the world. Thus, banks differ in their levels of risk appetite and in their risk management policies. This paper takes into consideration that business models are changing and that management is looking across the organization to identify the influence of strategic planning, information systems theory, risk management, and knowledge management. These disciplines can handle the risks affecting banking that arise from different areas, but only if they work together, which creates a need to view them in an integrated way. This article sees enterprise risk management as a specific application of knowledge in order to control deviation from strategic objectives, shareholders' values, and stakeholder relationships. Before and after a modeling process, it is necessary to find insights into how the application of knowledge management processes can improve the understanding of risk and the implementation of enterprise risk management. The article presents a proposed methodology that contributes a guide for developing risk modeling knowledge and reducing knowledge silos, in order to improve the quality and quantity of solutions to risk inquiries across the organization.

Relevance: 60.00%

Abstract:

The Encyclopaedia Slavica Sanctorum project aims at building a repertoire of medieval and early modern Bulgarian texts about saints, in combination with ethnological data and some visual sources. A basic project task is to produce an accessible online digital repository of this valuable cultural heritage. The paper presents the Encyclopaedia Slavica Sanctorum environment: its architecture, functional specification, application modeling process, and software implementation. It also discusses the specifics of the Encyclopaedia Slavica Sanctorum project and its knowledge domain, and presents the integration between the Encyclopaedia Slavica Sanctorum and the Bulgarian Iconographical Digital Library, a digital library keeping rare specimens and private collections of Orthodox icons, wall paintings, and other iconographical objects, selected from difficult-to-access storages, distant churches, chapels, and monasteries, and from objects in risky environments or unstable conditions.

Relevance: 60.00%

Abstract:

Modern electric machine drives, particularly three-phase permanent magnet machine drive systems, represent an indispensable part of high-power-density products. Such products include hybrid electric vehicles, large propulsion systems, and automation products. The reliability and cost of these products are directly related to the reliability and cost of these systems. The compatibility of the electric machine and its drive system for optimal cost and operation has been a large challenge in industrial applications. The main objective of this dissertation is to find a design and control scheme for the best compromise between the reliability and optimality of the electric machine-drive system. The effort presented here is motivated by the need to find new techniques to connect the design and control of electric machines and drive systems.

A highly accurate and computationally efficient modeling process was developed to monitor the magnetic, thermal, and electrical aspects of the electric machine in its operational environments. The modeling process was also utilized in the design process, in the form of a finite-element-based optimization process, and in a hardware-in-the-loop finite-element-based optimization process. It was later employed in the design of very accurate and highly efficient physics-based customized observers, which are required for fault diagnosis as well as for sensorless rotor position estimation. Two test setups with different ratings and topologies were numerically and experimentally tested to verify the effectiveness of the proposed techniques.

The modeling process was also employed in the real-time demagnetization control of the machine. Various real-time scenarios were successfully verified. It was shown that this process gives the potential to optimally redefine the assumptions made in sizing the permanent magnets of the machine and the DC bus voltage of the drive for the worst operating conditions.

The mathematical development and stability criteria of the physics-based modeling of the machine, the design optimization, the physics-based fault diagnosis, and the physics-based sensorless technique are described in detail.

To investigate the performance of the developed design test-bed, software and hardware setups were constructed first. Several topologies of the permanent magnet machine were optimized inside the optimization test-bed. To investigate the performance of the developed sensorless control, a test-bed including a 0.25 kW surface-mounted permanent magnet synchronous machine was created. The verification of the proposed technique in a range from medium to very low speed effectively shows the intelligent design capability of the proposed system. Additionally, to investigate the performance of the developed fault diagnosis system, a test-bed including a 0.8 kW surface-mounted permanent magnet synchronous machine with trapezoidal back electromotive force was created. The results verify that use of the proposed technique under dynamic eccentricity, DC bus voltage variations, and harmonic loading conditions makes the system an ideal case for propulsion systems.

Relevance: 60.00%

Abstract:

As products and creative processes become increasingly digitally mediated, there has been recent reflection on the relationship between images and the tools used to produce them. The natural and close relationship between the conceptual and the physical dimensions opens the discussion at the level of the semantics and processes of designing and manipulating images, which naturally include CAD tools. Since drawing plays an unequivocal and fundamental role in design practice and in 3D modeling, it is pertinent to understand the relationship and articulation between these two tools. Recognizing drawing as a tool of the physical domain, capable of expressing the thinking that transforms abstract conceptions into concrete ones, it is not trivial to recognize it reflected in the virtual dimension through 3D CAD software, since the latter is generally operated through thinking whose context is distant from materiality. Methodologically, we address this question by seeking to verify the hypothesis through a proposed practical exercise that evaluates the effect analog images may have on the recognition and operability of the Blender tool in an academic setting. The aim is thus to understand how analog drawing can be integrated into the 3D modeling process and what relationship it maintains with those who operate with it. Articulating drawing with design production tools, specifically 3D CAD, will make it possible to understand in detail how tools of different natures work together, both in the design process and in the creation of visual artifacts. It may also open the discussion about pedagogical strategies for teaching drawing and 3D in a Design course.

Relevance: 60.00%

Abstract:

The present study provides a methodology that gives a predictive character to computer simulations based on detailed models of the geometry of a porous medium. We use the software FLUENT to investigate the flow of a viscous Newtonian fluid through a random fractal medium that simplifies a two-dimensional disordered porous medium representing a petroleum reservoir. This fractal model is formed by obstacles of various sizes, whose size distribution function follows a power law whose exponent is defined as the fractal dimension of fragmentation, Dff, of the model, characterizing the fragmentation process of these obstacles. The obstacles are randomly placed in a rectangular channel. The modeling process incorporates modern concepts, such as scaling laws, to analyze the influence of the heterogeneity found in the porosity and permeability fields, so as to characterize the medium in terms of its fractal properties. This procedure allows us to numerically analyze measurements of the permeability k and the drag coefficient Cd, and to propose power-law relationships for these properties under various modeling schemes. The purpose of this research is to study the variability produced by these heterogeneities: the velocity field and other details of the viscous fluid dynamics are obtained by numerically solving the continuity and Navier-Stokes equations at the pore level, and we observe how the fractal dimension of fragmentation of the model affects its hydrodynamic properties. Two classes of models were considered: models with constant porosity (MPC) and models with varying porosity (MPV). The results allowed us to find numerical relationships between the permeability, the drag coefficient, and the fractal dimension of fragmentation of the medium. Based on these numerical results, we propose scaling relations and algebraic expressions involving the relevant parameters of the phenomenon.

In this study, analytical equations were determined for Dff as a function of the geometrical parameters of the models. We also found that the permeability and the drag coefficient are inversely proportional to one another. The difference in behavior is most striking in the MPV class of models: the fact that the porosity varies in these models is an additional factor that plays a significant role in the flow analysis. Finally, the results proved satisfactory and consistent, which demonstrates the effectiveness of the methodology for all applications analyzed in this study.
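The power-law size distribution at the heart of such a fractal model can be sketched as follows: obstacle sizes are drawn by inverse transform sampling, and the exponent is recovered by a maximum-likelihood (Hill) estimator. Both routines are illustrative assumptions, not the study's actual model-construction procedure:

```python
import math
import random

def sample_power_law(n, alpha, xmin=1.0):
    """Draw n sizes from p(x) proportional to x**(-alpha), x >= xmin, by
    inverse transform sampling; the exponent plays the role of the
    fragmentation fractal dimension Dff in the text."""
    return [xmin * (1.0 - random.random()) ** (-1.0 / (alpha - 1.0))
            for _ in range(n)]

def estimate_alpha(xs, xmin=1.0):
    """Maximum-likelihood (Hill) estimate of the power-law exponent."""
    n = len(xs)
    return 1.0 + n / sum(math.log(x / xmin) for x in xs)
```

Generating sizes with a known exponent and recovering it from the sample is the usual consistency check before placing the obstacles in the simulated channel.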


Relevance: 60.00%

Abstract:

Knowledge of the geographical distribution of timber tree species in the Amazon is still scarce. This is especially true at the local level, thereby limiting natural resource management actions. Forest inventories are key sources of information on the occurrence of such species. However, areas with approved forest management plans are mostly located near access roads and the main industrial centers. The present study aimed to assess the spatial scale effects of forest inventories used as sources of occurrence data in the interpolation of potential species distribution models. The occurrence data of a group of six forest tree species were divided into four geographical areas during the modeling process. Several sampling schemes were then tested applying the maximum entropy algorithm, using the following predictor variables: elevation, slope, exposure, normalized difference vegetation index (NDVI) and height above the nearest drainage (HAND). The results revealed that using occurrence data from only one geographical area with unique environmental characteristics increased both model overfitting to input data and omission error rates. The use of a diagonal systematic sampling scheme and lower threshold values led to improved model performance. Forest inventories may be used to predict areas with a high probability of species occurrence, provided they are located in forest management plan regions representative of the environmental range of the model projection area.
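One plausible reading of the "diagonal systematic sampling scheme" is sketched below; the abstract does not spell out the exact rule, so the grid-diagonal criterion here is purely an assumption for illustration:

```python
def diagonal_sample(points, cell=1.0):
    """Hypothetical diagonal systematic sampling: overlay a square grid of
    the given cell size on the occurrence coordinates and keep only points
    whose cell lies on the main diagonal (equal row and column index).
    This thins spatially clustered records, such as inventories bunched
    along access roads, in a systematic rather than random way."""
    return [(x, y) for x, y in points
            if int(x // cell) == int(y // cell)]
```

However the scheme is actually defined, its purpose in the study is the same: to reduce the spatial bias of occurrence data drawn from forest inventories before fitting the maximum entropy models.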

Relevance: 40.00%

Abstract:

Confined flows in tubes with permeable surfaces are associated with tangential filtration processes (microfiltration or ultrafiltration). The complexity of the phenomena does not allow the development of exact analytical solutions; however, approximate solutions are of great interest for calculating the transmembrane outflow and estimating the concentration polarization phenomenon. In the present work, the generalized integral transform technique (GITT) was employed in solving the steady laminar flow of a Newtonian, incompressible fluid in permeable tubes. The mathematical formulation employed the parabolic differential equation of chemical species conservation (the convective-diffusive equation). The velocity profiles for the entrance-region flow, which appear in the convective terms of the equation, were assessed from solutions available in the literature. The velocity at the permeable wall was considered uniform, with the concentration at the tube wall regarded as varying with axial position. A computational methodology with global error control was applied to determine the wall concentration and the concentration boundary layer thickness. The results obtained for the local transmembrane flux and the concentration boundary layer thickness were compared against others in the literature.

Relevance: 40.00%

Abstract:

This work presents a mathematical model for the vinyl acetate and n-butyl acrylate emulsion copolymerization process in batch reactors. The model is able to explain the effects of simultaneous changes in emulsifier concentration, initiator concentration, monomer-to-water ratio, and monomer feed composition on monomer conversion, copolymer composition and, to a lesser extent, average particle size evolution histories. The main features of the system, such as the increase in the rate of polymerization as the temperature and the emulsifier and initiator concentrations increase, are correctly represented by the model. The model accounts for the basic features of the process and may be useful for practical applications, despite its simplicity and reduced number of adjustable parameters.
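As a toy illustration of the kind of mass balance such a model integrates, the sketch below solves a first-order monomer consumption ODE with explicit Euler. The single lumped rate constant and the neglect of the second monomer, particle number, and particle size are simplifying assumptions, not the paper's model:

```python
def batch_conversion(kp=1e-3, M0=1.0, dt=1.0, steps=3600):
    """Toy batch-reactor mass balance dM/dt = -kp * M (first-order monomer
    consumption), integrated with explicit Euler over `steps` time steps of
    size dt; returns the final monomer conversion (all values illustrative)."""
    M = M0
    for _ in range(steps):
        M += dt * (-kp * M)
    return 1.0 - M / M0
```

A full emulsion copolymerization model replaces the single rate expression with coupled balances for both monomers, radicals, and particles, but the integration structure is the same.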

Relevance: 40.00%

Abstract:

Oxidation processes can be used to treat industrial wastewater containing non-biodegradable organic compounds. However, the presence of dissolved salts may inhibit or retard the treatment process. In this study, wastewater desalination by electrodialysis (ED) associated with an advanced oxidation process (photo-Fenton) was applied to an aqueous NaCl solution containing phenol. The influence of the process variables on the demineralization factor was investigated for ED at pilot scale, and a correlation was obtained between the phenol, salt, and water fluxes and the driving force. The oxidation process was investigated in a laboratory batch reactor, and a model based on artificial neural networks was developed by fitting the experimental data, describing the reaction rate as a function of the input variables. With the experimental parameters of both processes, a dynamic model was developed for ED and a continuous model, using a plug-flow reactor approach, for the oxidation process. Finally, the hybrid model simulation could validate different scenarios of the integrated system and can be used for process optimization.