896 results for "Defect simulation"
Abstract:
This project developed a computational method for determining the best intensifying screen-film combination for mammographic examinations through the study of their sensitometric characteristics. The software, developed in the Delphi environment for Windows, displays on the computer screen the image that would be obtained with each type of screen-film combination, using images of phantoms and of real breasts. Given the wide range of factors that influence the final mammographic image, such as magnification, film and intensifying screen characteristics, and processor conditions, the proposed method can provide a broad assessment of the quality of mammographic imaging systems in a simple, fast, and automatic way, through computational simulation procedures. The simulation investigated the influence that a given recording system exerts on image quality, making it possible to know in advance the final image to be obtained with different equipment and recording systems. Among the systems investigated, three films (Kodak Min R 2000, Fuji UM MA-HC, and Fuji ADM) and two intensifying screens (Kodak Min R 2000 and Fuji AD Mammo Fine), the combination with the best results, offering the best image quality and the lowest patient exposure, was the Kodak Min R 2000 screen with the Kodak Min R 2000 film.
Abstract:
Commercial pet foods should be a balanced diet that meets all nutritional requirements at the different stages of an animal's life. Their formulation must contain quality ingredients in adequate proportions. Knowledge of the nutritional value of these foods is of fundamental importance to ensure that a dog ingests the correct daily amounts of nutrients. Imbalances of essential elements and the presence of toxic elements can cause nutritional disorders, diseases, and even fatal consequences in dogs. Copper is one of several elements of importance to be studied with regard to metabolic defects in dogs. Copper accumulation in the liver can cause progressive damage to hepatocyte organelles, resulting in chronic hepatitis and cirrhosis. In view of this, the objectives of this work were (I) quantification of the chemical elements with nutritional function and with toxic potential present in foods for adult dogs and puppies, (II) evaluation of the proximate composition of the sampled foods, (III) evaluation of the variation of chemical elements among food samples from the same production batch, (IV) study of the representativeness of small test portions, and (V) evaluation of the bioaccessibility of copper in dog foods through an in vitro experiment. The chemical elements Al, As, Br, Ca, Cl, Co, Cr, Cs, Cu, Fe, I, K, La, Mg, Mn, Na, P, Rb, Sb, Sc, Se, Ti, U, and Zn were determined by instrumental neutron activation analysis (INAA). The proximate composition was evaluated according to the methods recommended by the AOAC. The homogeneity of the distribution of chemical elements in the foods was evaluated by large-sample analysis (LS-NAA). The bioaccessibility of copper in the foods was estimated by simulating gastrointestinal digestion in vitro. It was possible to determine by INAA all the mineral nutrients, that is, Ca, P, K, Na, Cl, Mg, Fe, Cu, Mn, Zn, I, and Se, within the limits established by the Association of American Feed Control Officials.
High concentrations of Al, Sb, and U, elements with high toxic potential, were noted in some foods. About 16% of the food samples presented at least one non-conforming parameter with respect to proximate composition. The results obtained by LS-NAA and conventional NAA showed variation in composition among bags of food for Br, Ca, Na, and Zn, with good agreement between the two methods. The combined use of LS-NAA and conventional NAA showed that small test portions (350 mg) of food are representative compared with 1 kg portions for Br, Ca, K, Na, and Zn. In all the dog foods, 50% of the copper present was in bioaccessible form.
Abstract:
Model-Based Testing (MBT) has emerged as a promising strategy to mitigate the lack of time and resources in software testing, and aims to verify whether the implementation under test conforms to its specification. Test cases are automatically generated from behavioral models produced during the software development cycle. Among the existing modeling techniques, Input/Output Transition Systems (IOTSs) are widely used in MBT because they are more expressive than Finite State Machines (FSMs). Despite the existing methods for test generation from IOTSs, test case selection remains a difficult and important topic. The existing methods for IOTSs are non-deterministic, unlike the theory for FSMs, which guarantees complete coverage with respect to a fault model. This thesis investigates the application of fault models in deterministic test generation methods for IOTSs. A method for test suite generation based on the W method for FSMs was proposed. The method generates test suites deterministically and satisfies sufficiency conditions for coverage of the specification and of all faults in the defined fault domain. Empirical studies evaluated the applicability and effectiveness of the proposed method: experimental results analyzing the cost of test suite generation using randomly generated IOTSs, and a case study with industrial specifications, show the effectiveness of the generated suites compared with Tretmans' traditional method.
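The W method on which the proposed generation is based builds, for a deterministic FSM, a test suite by concatenating a transition cover P with a characterization set W. A minimal Python sketch for a plain Mealy machine (the machine, its outputs, and the single-sequence W are hypothetical toy examples; the thesis itself works with IOTSs, which are not modeled here):

```python
from itertools import product

# Illustrative W-method test suite for a toy deterministic Mealy machine.
# States are integers, inputs are characters; delta maps (state, input) -> next state.
delta = {(0, 'a'): 1, (0, 'b'): 0,
         (1, 'a'): 0, (1, 'b'): 1}
# Hypothetical outputs that make the two states distinguishable by input 'a':
out = {(0, 'a'): 1, (0, 'b'): 0,
       (1, 'a'): 0, (1, 'b'): 0}

def transition_cover(delta, init=0):
    """Empty sequence plus one sequence exercising every transition (BFS)."""
    cover, frontier, seen = [''], [('', init)], {init}
    while frontier:
        prefix, state = frontier.pop(0)
        for (s, x), t in sorted(delta.items()):
            if s == state:
                cover.append(prefix + x)
                if t not in seen:
                    seen.add(t)
                    frontier.append((prefix + x, t))
    return cover

# Characterization set W: sequences whose outputs separate every state pair.
W = ['a']  # sufficient for this toy machine (see `out` above)

P = transition_cover(delta)
test_suite = sorted({p + w for p, w in product(P, W)})
print(test_suite)
```

Under the usual W-method assumptions (minimal deterministic machine, known state bound), the concatenated suite P·W detects every fault in the corresponding fault domain.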
Abstract:
The network-on-chip (NoC) paradigm emerged to enable a high degree of integration among the many cores of systems-on-chip (SoCs), whose communication is traditionally bus-based. NoCs are defined as a structure of switches and point-to-point channels that interconnect the intellectual property (IP) cores of a SoC, providing a communication platform among them. Wireless networks-on-chip (WiNoCs) are an evolutionary approach to the NoC concept that applies NoC routing mechanisms over wireless technologies, aiming to optimize traffic flows, reduce wiring, and operate alongside traditional NoCs, thereby reducing the load on buses. Dynamic routing within wireless networks-on-chip allows parts of the hardware to be selectively switched off, reducing energy consumption. However, choosing where to place a wireless link in a NoC is a complex task, since the nodes are traffic bridges that cannot be switched off without potentially breaking a pre-established route. Besides providing an overview of NoC architectures and of the state of the art of the emerging WiNoC paradigm, this work also proposes an evaluation method based on the well-established ns-2 simulator, whose goal is to test hybrid NoC/WiNoC scenarios. With this approach it is possible to evaluate different WiNoC parameters related to routing, application, and the number of nodes involved in hierarchical networks. By analyzing such simulations it is also possible to investigate which routing strategy is most suitable for a given usage scenario, which is relevant when choosing the spatial arrangement of nodes in a NoC.
The experiments performed comprise a study of the dynamics of wireless ad hoc routing protocols on a hierarchical WiNoC topology, followed by an analysis of network size and traffic patterns in the WiNoC.
Abstract:
Formulations based on continuum mechanics, although accurate up to a point, sometimes cannot be used, or are not conceptually correct, for understanding phenomena at reduced scales. These limitations can appear in the study of tribological phenomena at the nanometer scale, which require new experimental, theoretical, and computational methods to explore them with the necessary resolution. Atomistic simulations can describe small-scale phenomena, but the required number of modeled atoms, and therefore the computational cost, usually becomes quite high. On the other hand, simulation methods associated with continuum mechanics are more attractive in terms of computational cost, but are not accurate at the atomic scale. Combining the two approaches can therefore allow a more realistic understanding of tribological phenomena. This work discusses the basic concepts and models of atomic-scale friction and presents numerical simulation studies for the analysis and understanding of friction and wear mechanisms in contacts between materials. The problem is addressed at different scales, and a joint approach combining continuum mechanics and molecular dynamics is proposed. To this end, numerical simulations of contact between surfaces were carried out with increasing complexity, starting from a first model that simulates the effect of crystalline defects on pure sliding, using molecular dynamics. Subsequently, considerations about adhesion were introduced into the continuum mechanics models. The results are validated by comparison between the two approaches and with the literature.
Abstract:
The present thesis is focused on the development of a thorough mathematical modelling and computational solution framework aimed at the numerical simulation of journal and sliding bearing systems operating under a wide range of lubrication regimes (mixed, elastohydrodynamic and full film lubrication) and working conditions (static, quasi-static and transient). The fluid flow effects have been considered in terms of the Isothermal Generalized Equation of the Mechanics of Viscous Thin Films (Reynolds equation), along with the mass-conserving p-θ Elrod-Adams cavitation model, which ensures the so-called JFO complementary boundary conditions for fluid film rupture. The variation of the lubricant rheological properties due to the viscosity-pressure (Barus and Roelands equations), shear-thinning (Eyring and Carreau-Yasuda equations) and density-pressure (Dowson-Higginson equation) relationships has also been taken into account in the overall modelling. Generic models have been derived for the aforementioned bearing components in order to enable their application in general multibody dynamic systems (MDS), including the effects of angular misalignment, superficial geometric defects (form/waviness deviations, EHL deformations, etc.) and axial motion. The bearing flexibility (conformal EHL) has been incorporated by means of FEM model reduction (or condensation) techniques. The macroscopic influence of mixed-lubrication phenomena has been included in the modelling through the stochastic Patir and Cheng average flow model and the Greenwood-Williamson/Greenwood-Tripp formulations for rough contacts. Furthermore, a deterministic mixed-lubrication model with inter-asperity cavitation has also been proposed for full-scale simulations at the microscopic (roughness) level. Based on the extensive mathematical modelling background established, three significant contributions have been accomplished.
Firstly, a general numerical solution for the Reynolds lubrication equation with the mass-conserving p-θ cavitation model has been developed based on the hybrid-type Element-Based Finite Volume Method (EbFVM). This new solution scheme allows lubrication problems with complex geometries to be discretized by unstructured grids. The numerical method was validated against several example cases from the literature, and further used in numerical experiments to explore its flexibility in coping with irregular meshes for reducing the number of nodes required in the solution of textured sliding bearings. Secondly, novel robust partitioned techniques, namely the Fixed Point Gauss-Seidel Method (PGMF), the Point Gauss-Seidel Method with Aitken Acceleration (PGMA) and the Interface Quasi-Newton Method with Inverse Jacobian from Least-Squares approximation (IQN-ILS), commonly adopted for solving fluid-structure interaction problems, have been introduced in the context of tribological simulations, particularly for the coupled calculation of dynamic conformal EHL contacts. The performance of these partitioned methods was evaluated in simulations of dynamically loaded connecting-rod big-end bearings of both heavy-duty and high-speed engines. Finally, the proposed deterministic mixed-lubrication model was applied to investigate the influence of cylinder liner wear after a 100 h dynamometer engine test on the hydrodynamic pressure generation and friction of Twin-Land Oil Control Rings.
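The lubrication problem at the core of this framework can be illustrated, in a drastically reduced form, by the 1D incompressible Reynolds equation for a linear slider without cavitation, discretized with a simple finite-volume scheme (not the EbFVM or the p-θ model of the thesis; all geometry and fluid parameters below are hypothetical):

```python
import numpy as np

# 1D steady Reynolds equation: d/dx (h^3 dp/dx) = 6*mu*U * dh/dx
# for a linear slider bearing, gauge pressure p = 0 at both ends.
n = 201                      # grid nodes (assumed)
L = 0.1                      # bearing length [m] (assumed)
mu = 0.05                    # lubricant viscosity [Pa s] (assumed)
U = 1.0                      # sliding speed [m/s] (assumed)
x = np.linspace(0.0, L, n)
h = 2e-5 - 1e-5 * x / L      # film thickness tapering 20 um -> 10 um

dx = x[1] - x[0]
# h^3 evaluated at the east/west faces of each interior control volume.
h3e = ((h[1:-1] + h[2:]) / 2) ** 3
h3w = ((h[1:-1] + h[:-2]) / 2) ** 3
A = np.zeros((n - 2, n - 2))
b = 6 * mu * U * (h[2:] - h[:-2]) / 2   # RHS: 6 mu U dh/dx, times dx
np.fill_diagonal(A, -(h3e + h3w) / dx)
np.fill_diagonal(A[1:], h3w[1:] / dx)   # sub-diagonal
np.fill_diagonal(A[:, 1:], h3e[:-1] / dx)  # super-diagonal
p = np.zeros(n)
p[1:-1] = np.linalg.solve(A, b)         # interior pressures
print(f"peak pressure: {p.max():.3e} Pa")
```

The converging wedge (h decreasing in the sliding direction) generates a positive pressure bump between the two ambient-pressure ends, the classic slider-bearing result.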
Abstract:
Dissertation submitted for the Master's degree at the Instituto Superior de Ciências da Saúde Egas Moniz.
Abstract:
The study aims to examine the methodology of realistic simulation as a facilitator of the teaching-learning process in nursing. It is justified by the possibility of proposing conditions that envisage improvements in the training process, with a view to assessing the impacts of new teaching and learning strategies in the formative areas of health and nursing. This is a descriptive study with a quantitative and qualitative approach, conducted as action research, focused on teaching Primary Care Nursing through realistic simulation at a public institution of higher education. The research was developed in the Comprehensive Health Care II discipline, offered in the third year of the course to prepare nursing students for the Primary Health Care internship. The study population comprised 40 subjects: 37 students and 3 professors of that discipline. Data collection was held from February to May 2014 and was performed using questionnaires and semi-structured interviews. The following sequence was adopted: identification of the use of simulation in the discipline targeted by the intervention; consultation with professors about the possibility of implementing the survey; investigation of the syllabus of the discipline, its objectives, skills, and abilities; preparation of the plan for executing the intervention; preparation of the checklist for skills training; construction and execution of simulation scenarios; and evaluation of the scenarios. Quantitative data were analyzed using simple descriptive statistics and percentages, and qualitative data through collective subject discourse. A high-fidelity simulation based on the use of standardized patients was inserted into the curriculum of the course under study. Three cases were created and executed. In the students' view, the simulation contributed to the synthesis of the contents covered in the Comprehensive Health Care II discipline (100%), with scores between 8 and 10 (100%) for the executed scenarios.
In addition, the simulation generated a considerable percentage of high expectations for the activities of the discipline (70.27%) and also proved to be a strategy that generates student satisfaction (97.30%). Of the 97.30% who claimed to be quite satisfied with the activities proposed by the Comprehensive Health Care II discipline, 94.59% of the sample indicated the simulation as a determining factor for that satisfaction. Regarding the students' perception of the simulation strategy, the most prominent category was the possibility of prior experience of practice (23.91%). Nervousness was one of the most cited negative aspects of the experience in simulated scenarios (50.0%). The most representative positive point (63.89%) was the approximation to the reality of Primary Care. In addition, the 3 professors of the discipline were trained in the simulation methodology. The study highlighted the contribution of realistic simulation to teaching and learning in nursing, and pointed to this strategy as a mechanism for generating expectation and satisfaction among undergraduate nursing students.
Abstract:
The distribution and mobilization of fluid in a porous medium depend on capillary, gravity, and viscous forces. In oil fields, enhanced oil recovery processes change the balance and relative importance of these forces to increase the oil recovery factor. In the gas-assisted gravity drainage (GAGD) process, it is important to understand the physical mechanisms that mobilize oil through the interaction of these forces. For this reason, several authors have developed laboratory physical models and core floods of GAGD to study the performance of these forces through dimensionless groups, and these models showed conclusive results. However, numerical simulation models have not been used for this type of study. Therefore, the objective of this work is to study the performance of capillary, viscous, and gravity forces in the GAGD process and their influence on the oil recovery factor through a 2D numerical simulation model. To analyze the interplay of these forces, dimensionless groups reported in the literature were used, namely the Capillary Number (Nc), Bond Number (Nb), and Gravity Number (Ng), in order to determine the effectiveness of each force relative to the others. The results obtained from the numerical simulation were also compared with those reported in the literature. The results showed that before breakthrough time, the lower the injection flow rate, the more oil recovery is increased by the capillary force; after breakthrough time, the higher the injection flow rate, the more oil recovery is increased by the gravity force. A good agreement was found between the results obtained in this research and those published in the literature. The simulation results indicated that before gas breakthrough, higher oil recoveries were obtained at lower Nc and Nb, and after gas breakthrough, higher oil recoveries were obtained at lower Ng. The numerical models are consistent with the results reported in the literature.
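The dimensionless groups mentioned above admit several conventions in the literature; one common set of definitions can be sketched as follows (these particular forms, and all the fluid and rock values, are assumptions for illustration, not the thesis' own):

```python
# One common set of definitions for the dimensionless groups:
#   Capillary number  Nc = mu*v / sigma        (viscous / capillary)
#   Bond number       Nb = drho*g*k / sigma    (gravity / capillary)
#   Gravity number    Ng = drho*g*k / (mu*v)   (gravity / viscous)
def capillary_number(mu, v, sigma):
    return mu * v / sigma

def bond_number(drho, g, k, sigma):
    return drho * g * k / sigma

def gravity_number(drho, g, k, mu, v):
    return drho * g * k / (mu * v)

# Hypothetical values: gas displacing oil in a ~1 darcy rock.
mu, v, sigma = 1e-3, 1e-5, 0.03        # Pa s, m/s, N/m
drho, g, k = 800.0, 9.81, 9.87e-13     # kg/m^3, m/s^2, m^2
Nc = capillary_number(mu, v, sigma)
Nb = bond_number(drho, g, k, sigma)
Ng = gravity_number(drho, g, k, mu, v)
print(Nc, Nb, Ng)
```

With these forms the groups are linked by Ng = Nb/Nc, so only two of the three ratios are independent.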
Abstract:
Primary processing on natural gas platforms such as the Mexilhão Field (PMXL-1) in the Santos Basin, where monoethylene glycol (MEG) is used to inhibit hydrate formation, presents operational problems caused by salt scaling in the MEG recovery unit. A bibliographic search and analysis of salt solubility data in mixed solvents, namely water and MEG, indicate that experimental reports are available for only a relatively restricted number of the ionic species present in produced water, such as NaCl and KCl. The aim of this study was to develop a method for calculating salt solubilities in mixed-solvent mixtures, namely NaCl or KCl in aqueous mixtures of MEG. The calculation method extends the Pitzer model, using the Lorimer approach, to aqueous systems containing a salt and another solvent (MEG). The Python language, in the Eclipse Integrated Development Environment (IDE), was used to create the computational applications. The results indicate the feasibility of the proposed calculation method for a systematic series of salt (NaCl or KCl) solubility data in aqueous mixtures of MEG at various temperatures. Moreover, the tool developed in Python proved to be suitable for parameter estimation and simulation purposes.
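A full Pitzer/Lorimer implementation is beyond the scope of an abstract, but the parameter-estimation step described above can be sketched in Python with a simple empirical correlation (the quadratic form and all solubility values below are hypothetical placeholders standing in for the thesis' thermodynamic model and data):

```python
import numpy as np

# Hypothetical NaCl solubility (mol/kg solvent) vs. MEG mass fraction at 25 C.
w_meg = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
solub = np.array([6.15, 5.10, 4.05, 3.10, 2.20])   # made-up data points

# Fit a quadratic correlation s(w) = a + b*w + c*w^2 by least squares,
# standing in for the Pitzer/Lorimer parameter estimation of the thesis.
coeffs = np.polyfit(w_meg, solub, 2)
model = np.poly1d(coeffs)

residuals = solub - model(w_meg)
print("fitted coefficients:", coeffs)
print("max abs residual:", np.abs(residuals).max())
```

In the real workflow, the fitted parameters would be the Pitzer/Lorimer interaction terms rather than polynomial coefficients, but the estimation loop (model, data, least-squares objective) has the same shape.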
Abstract:
This work analyzes the design process integrated with thermal-energy simulation during the early design stages, based on six practical cases. It aims to schematize the integration process, identifying the contributions of thermal-energy analyses at each design phase and identifying the parameters with the highest impact on building performance. The simulations were run in DesignBuilder, which uses the validated EnergyPlus engine. This tool was chosen for its flexible and user-friendly graphical interface for modeling and output assessment, including parametric simulation to compare design alternatives. The six case studies comprise three architectural and three retrofit projects, with the author running the simulations either as a consultant or as a designer. The case studies were selected based on the designers' commitment to achieving performance goals and their willingness to share the process from the early pre-design analyses, allowing the whole process to be schematized and design decisions to be supported with quantifications, including energy targets. The integration of thermal-energy performance analyses is feasible from the early stages, except when only a short time is available to run the simulations. The simulation contributions are most important during the sketch and detailing phases. The pre-design phase can be assisted by reliable bioclimatic guidelines. It was verified that every case study had two dominant design variables for overall performance. These variables differ according to the building characteristics and always coincide with the local bioclimatic strategies. The earlier an alternative is introduced, the easier its adaptation to the design. Simulation proved very useful: to demonstrate results and convince the architects; to quantify cost-benefit and payback period for the retrofit designer; and to allow the simulation consultant to confirm the desired result and report the performance to the client.
Abstract:
The city of Natal has significant daylight availability, although its use is not systematically explored in school architecture. In this context, this research aims to determine procedures for the analysis of daylight performance in school design in Natal-RN. The method of analysis comprises the Visible Sky Factor (VSF), simulation, and analysis of the results. The annual variation of daylight behavior requires the adoption of dynamic simulation as the data procedure. The classrooms were modelled in SketchUp, simulated in the Daysim program, and the results were assessed by means of spreadsheets in Microsoft Excel. The classroom dimensions are 7.20 m x 7.20 m, with window-to-wall ratios (WWR) of 20%, 40%, and 50%, and with different shading devices: standard horizontal overhang, sloped overhang, standard horizontal overhang with side view protection, standard horizontal overhang with a dropped edge, standard horizontal overhang with three horizontal louvers, double standard horizontal overhang, and double standard horizontal overhang with three horizontal louvers, plus the use of a light shelf in half the models with WWR of 40% and 50%. The data were organized in spreadsheets, with two UDI intervals: between 300 lux and 2000 lux, and between 300 lux and 3000 lux. The simulation was performed with the 2009 weather file for the city of Natal-RN. The graphical outputs are illuminance curves, isolines of UDI between 300 lux and 2000 lux, and tables with the occurrence of glare and of UDI between 300 lux and 3000 lux. The best UDI300-2000lux performance was found in Phase 1 for models with WWR of 20%, and in Phase 2 for models with WWR of 40% and 50% with a light shelf. The best UDI300-3000lux performance was found in Phase 1 for models with WWR of 20% and 40% with a light shelf, and in Phase 2 for models with WWR of 40% and 50% with a light shelf.
The outputs show that daylight quality depends mainly on the efficacy of the shading system in avoiding glare, which determines daylight discomfort. The bioclimatic recommendation of large openings with partial shading (with an opening receiving direct sunlight) resulted in illuminance levels higher than the acceptable upper threshold. Increasing the shading percentage (from 73% to 91%) in medium-size openings (WWR 40% and 50%) reduced or eliminated glare without compromising the daylight zone depth (7.20 m). The passive zone was determined for classrooms with satisfactory daylight performance, and the daylight zone depth rule of thumb was calculated as the ratio between the daylight zone depth and the window height for different opening sizes. The ratio ranged from 1.54 to 2.57 for WWR of 20%, 40%, and 50%, respectively. Glare in the passive area was reduced or eliminated with a light shelf or with awning window shading.
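The Useful Daylight Illuminance (UDI) metric used above reduces to the fraction of hours whose illuminance falls inside a band; a minimal sketch (the hourly illuminance values are hypothetical, and the 300-2000 lux band follows the dissertation's first interval):

```python
# Fraction of hours whose illuminance falls inside a UDI band.
def udi(illuminances_lux, low=300.0, high=2000.0):
    inside = [e for e in illuminances_lux if low <= e <= high]
    return len(inside) / len(illuminances_lux)

# Hypothetical hourly illuminances for one sensor point (lux).
hours = [120, 450, 900, 1500, 2600, 3100, 1800, 700, 250, 90]
print(f"UDI 300-2000 lux: {udi(hours):.0%}")          # 5 of 10 hours in band
print(f"UDI 300-3000 lux: {udi(hours, high=3000):.0%}")  # the 2600 lux hour now counts
```

In practice the series would be the annual occupied-hours output of a dynamic simulation tool such as Daysim, evaluated per sensor point.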
Abstract:
The hospital is a place of complex actions, where several activities serving the population are performed, such as medical appointments, exams, surgeries, emergency care, and admission to wards and ICUs. These activities are mixed with the anxiety, impatience, despair, and distress of patients and their families, issues involving the emotional balance both of the professionals who provide services and of the people cared for. The healthcare crisis in Brazil worsens every year and today constitutes a major problem for private hospitals. The number of patients arriving at emergency departments increases progressively, while the supply of hospital beds does not grow in the same proportion, causing overcrowding, declines in the quality of care delivered to patients, a drain of health professionals, and difficulty in managing beds. This work presents a study that seeks to create an alternative tool to support bed management in a private hospital. It also seeks to identify potential issues or deficiencies and, accordingly, make changes in patient flow to increase service capacity, reducing costs without compromising the quality of the services provided. The tool used was discrete-event computational simulation, which aims to identify the main parameters to be considered for proper modeling of this system. This study took as its reference the admissions of a private hospital in its current scenario, in which the rooms are at saturation occupancy levels. The planned reallocation of beds aims to meet the growing demand for surgeries and hospital admissions observed by the current administration.
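The discrete-event simulation of bed occupancy described above can be sketched with a simple event loop (the arrival rate, length of stay, and bed count below are hypothetical, and a real study would use a dedicated DES tool and empirical distributions):

```python
import heapq
import random

# Minimal discrete-event sketch of bed occupancy: patients arrive as a
# Poisson process, stay an exponentially distributed time, and are turned
# away when no bed is free. All parameters are hypothetical.
random.seed(42)
N_BEDS, ARRIVAL_RATE, MEAN_STAY = 20, 0.8, 24.0   # beds, patients/hour, hours
HORIZON = 24 * 30                                  # 30 simulated days

free_beds, admitted, refused = N_BEDS, 0, 0
discharges = []                                    # min-heap of discharge times
t = random.expovariate(ARRIVAL_RATE)
while t < HORIZON:
    while discharges and discharges[0] <= t:       # free beds whose stay ended
        heapq.heappop(discharges)
        free_beds += 1
    if free_beds > 0:                              # admit the arriving patient
        free_beds -= 1
        admitted += 1
        heapq.heappush(discharges, t + random.expovariate(1 / MEAN_STAY))
    else:                                          # no bed available: refusal
        refused += 1
    t += random.expovariate(ARRIVAL_RATE)          # schedule next arrival

print(f"admitted: {admitted}, refused: {refused}")
```

Varying N_BEDS while tracking the refusal count gives exactly the kind of capacity/quality trade-off the study's bed-reallocation question asks about.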