788 results for Recursive logit
Abstract:
OBJECTIVE: To estimate the prevalence of prehypertension and hypertension in adults and their associated factors. METHODS: Population-based cross-sectional study with 1,720 adults in Florianópolis, SC, conducted from September 2009 to January 2010. Demographic and socioeconomic information, health-related behaviours, anthropometric measurements, morbidities and self-rated health were collected through household interviews. Systolic and diastolic blood pressure levels were assessed. Participants were additionally asked about medication use and medical diagnosis of hypertension. The dependent variable was categorized as normal, prehypertension and hypertension. Multiple polytomous logistic regression was applied using a multinomial logit model. RESULTS: The prevalence of prehypertension and hypertension was 36.1% (95%CI 33.3;38.8) and 40.1% (95%CI 36.6;43.5), respectively. Multiple polytomous regression analysis showed that prehypertension was associated with men, black skin colour, age above 50 years, leisure-time physical inactivity and pre-obesity. Hypertension was associated with men, black skin colour, age above 40 years, the intermediate tercile of per capita income, fewer than 12 years of schooling, physical inactivity, pre-obesity, obesity, elevated waist circumference and negative self-rated health. CONCLUSIONS: To control hypertension in the adult population of Florianópolis, effective public policies to combat prehypertension are urgently needed.
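As a rough illustration of the modelling step named above, the sketch below fits a multinomial (polytomous) logit model with a three-level outcome (normal, prehypertension, hypertension) using statsmodels. The predictor names and the synthetic data are illustrative assumptions, not the study's dataset.

```python
# Minimal sketch of a multinomial (polytomous) logit model, assuming an
# outcome coded 0 = normal, 1 = prehypertension, 2 = hypertension.
# Predictors and data below are synthetic, for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1720  # sample size reported in the abstract
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),          # hypothetical covariates
    "age_50_plus": rng.integers(0, 2, n),
    "bmi": rng.normal(26, 4, n),
})
outcome = rng.integers(0, 3, n)             # hypothetical 3-level outcome

X = sm.add_constant(df[["male", "age_50_plus", "bmi"]])
model = sm.MNLogit(outcome, X).fit(disp=False)
print(model.summary())
# Exponentiated coefficients give odds ratios relative to the base category
print(np.exp(model.params))
```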
Abstract:
Dissertation presented to obtain the degree of Doctor in Mathematics, specialty of Statistics, from Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
IET Control Theory & Applications, Vol. 1, No. 1
Abstract:
Dissertation presented as a partial requirement for obtaining the Master's degree in Statistics and Information Management
Abstract:
Vishnu is a tool for XSLT visual programming in Eclipse, a popular and extensible integrated development environment. Rather than writing the XSLT transformations, the programmer loads or edits two document instances, a source document and its corresponding target document, and pairs text between them by drawing lines over the documents. This form of XSLT programming is intended for simple transformations between related document types, such as HTML formatting or conversion among similar formats. Complex XSLT programs involving, for instance, recursive templates or second-order transformations are outside the scope of Vishnu. We present the architecture of Vishnu, composed of a graphical editor and a programming engine. The editor is an Eclipse plug-in where the programmer loads and edits document examples and pairs their content using graphical primitives. The programming engine receives the data collected by the editor and produces an XSLT program. The design of the engine and the process of creating an XSLT program from examples are also detailed. It starts with the generation of an initial transformation that maps the source document to the target document. This transformation is fed to a rewrite process in which each step produces a refined version of the transformation. Finally, the transformation is simplified before being presented to the programmer for further editing.
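To make the kind of output Vishnu targets concrete, the sketch below applies a small hand-written XSLT stylesheet of the simple source-to-target mapping variety described above, using Python and lxml. The document structure and the stylesheet are illustrative assumptions, not artifacts produced by the Vishnu engine.

```python
# Illustrative sketch of a simple source-to-target XSLT mapping of the kind
# Vishnu aims to generate from paired examples. The stylesheet is hand-written
# for illustration, not engine output.
from lxml import etree

source = etree.XML("<person><name>Ada</name><mail>ada@example.org</mail></person>")

xslt = etree.XML("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Map each paired text node in the source to its place in the target -->
  <xsl:template match="/person">
    <html>
      <body>
        <h1><xsl:value-of select="name"/></h1>
        <p><xsl:value-of select="mail"/></p>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(xslt)
print(str(transform(source)))
```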
Abstract:
Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the Master's degree in Informatics Engineering
Abstract:
Dissertation presented as a partial requirement for obtaining the Master's degree in Statistics and Information Management
Abstract:
Volatile organic compounds are a common source of groundwater contamination that can be easily removed by air stripping in columns with random packing, using a counter-current flow between the phases. This work proposes a new column design methodology, valid for any type of packing and contaminant, which avoids the need for an arbitrarily chosen diameter. It also avoids the use of the usual graphical Eckert correlations for pressure drop. The hydraulic features are chosen beforehand as a design criterion. The design procedure was translated into a convenient algorithm in the C++ language. A column was built in order to test the design and the theoretical steady-state and dynamic behaviour. The experiments were conducted using a solution of chloroform in distilled water. The results allowed a correction of the theoretical global mass transfer coefficient previously estimated by the Onda correlations, which depend on several parameters that are not easy to control in experiments. To better describe the column behaviour under steady-state and dynamic conditions, an original mathematical model was developed. It consists of a system of two nonlinear partial differential equations (distributed parameters). Nevertheless, when the flows are steady, the system becomes linear, although it has no obvious analytical solution. In the steady state the resulting ODE can be solved analytically, and in the dynamic state discretization of the PDEs by finite differences overcomes this difficulty. A numerical algorithm was used to estimate the contaminant concentrations in both phases along the column. The large number of resulting algebraic equations and the impossibility of generating a recursive procedure did not allow the construction of a generalized program, but an iterative procedure developed in a spreadsheet allowed the simulation. The solution is stable only for similar discretization values; if different values are used for the time and space discretization parameters, the solution easily becomes unstable. The dynamic behaviour of the system was simulated for the common liquid-phase perturbations: step, impulse, rectangular pulse and sinusoidal. The final results do not show strange or unpredictable behaviour.
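The sketch below illustrates the dynamic-simulation step described above: an explicit finite-difference discretization of two coupled advection equations for the contaminant in the counter-current liquid and gas phases, with a linear interphase mass-transfer term. All symbols and numerical values (velocities, kla, Henry constant, grid sizes) are illustrative assumptions, not the parameters identified in the thesis.

```python
# Minimal finite-difference sketch of a counter-current stripping column:
# liquid flows toward increasing z, gas toward decreasing z, and a linear
# mass-transfer term couples the two phases. Parameters are illustrative.
import numpy as np

nz, nt = 50, 2000            # space / time grid points
L, t_end = 1.0, 50.0         # column height [m], simulated time [s]
dz, dt = L / (nz - 1), t_end / nt

u_l, u_g = 0.01, 0.05        # liquid (downward) and gas (upward) velocities [m/s]
kla, H = 0.02, 0.15          # overall mass-transfer coefficient [1/s], Henry constant [-]

c_l = np.zeros(nz)           # liquid-phase concentration profile
c_g = np.zeros(nz)           # gas-phase concentration profile
c_l_in = 1.0                 # step perturbation at the liquid inlet (top, z = 0)

for _ in range(nt):
    transfer = kla * (c_l - c_g / H)          # driving force toward equilibrium
    # Upwind differences: liquid moves toward larger z, gas toward smaller z
    dcl_dz = np.zeros(nz)
    dcl_dz[1:] = (c_l[1:] - c_l[:-1]) / dz
    dcg_dz = np.zeros(nz)
    dcg_dz[:-1] = (c_g[1:] - c_g[:-1]) / dz
    c_l = c_l + dt * (-u_l * dcl_dz - transfer)
    c_g = c_g + dt * (+u_g * dcg_dz + transfer)
    c_l[0] = c_l_in                           # contaminated liquid fed at the top
    c_g[-1] = 0.0                             # clean air fed at the bottom

print("liquid outlet concentration:", c_l[-1])
```

With the chosen grid the explicit scheme satisfies the CFL condition for both phases; as the abstract notes, mismatched time/space discretization choices readily destabilize this kind of scheme.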
Abstract:
Proceedings of the International Conference on Computational Cybernetics, Vienna University of Technology, August 30 - September 1, 2004
Abstract:
“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet still safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter providing a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP represent two special cases of BPC, obtained when the trade-off parameter is 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and provide some case studies to observe the impact of task parameters on the WCTT estimates.
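As a purely illustrative sketch of a branch-and-prune style search for a WCTT bound, the code below enumerates interference scenarios hop by hop, discards branches that violate a feasibility test derived from a hypothetical task model, and keeps the maximum feasible delay. The data structures, feasibility rule and delay model are assumptions made for this example; they do not reproduce the BP or BPC algorithms defined in the paper.

```python
# Heavily simplified, illustrative branch-and-prune search for a WCTT bound.
# The route/interferer model and the feasibility rule are assumptions.
from itertools import combinations

# Hypothetical model: each hop has a base link latency and candidate
# interfering flows (name, blocking delay, task group); flows sharing a
# group are assumed unable to interfere in the same scenario.
route = [
    {"base": 4, "interferers": [("f1", 3, "A"), ("f2", 5, "B")]},
    {"base": 4, "interferers": [("f3", 2, "A"), ("f4", 6, "C")]},
]

def feasible(scenario):
    """Prune scenarios where two flows of the same group interfere together."""
    groups = [g for (_, _, g) in scenario]
    return len(groups) == len(set(groups))

def branch_and_prune(route):
    best = 0
    def explore(hop_idx, delay, scenario):
        nonlocal best
        if hop_idx == len(route):
            best = max(best, delay)
            return
        hop = route[hop_idx]
        cands = hop["interferers"]
        # Branch over every subset of interferers at this hop
        for k in range(len(cands) + 1):
            for subset in combinations(cands, k):
                new_scenario = scenario + list(subset)
                if not feasible(new_scenario):
                    continue  # prune infeasible branches early
                extra = sum(d for (_, d, _) in subset)
                explore(hop_idx + 1, delay + hop["base"] + extra, new_scenario)
    explore(0, 0, [])
    return best

print("WCTT bound (illustrative):", branch_and_prune(route))
```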
Abstract:
Dissertation submitted to obtain the Master's degree in Informatics Engineering
Abstract:
Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the Master's degree in Informatics Engineering
Abstract:
Dissertation submitted to obtain the Master's degree in Informatics Engineering
Abstract:
Dissertation presented as a partial requirement for obtaining the Master's degree in Statistics and Information Management.
Abstract:
Dissertation presented as a partial requirement for obtaining the Master's degree in Statistics and Information Management