36 results for "Alimentação automática de lingotes" (automatic ingot feeding)
Abstract:
Nanometric powders have special features that usually result in new properties, giving rise to new applications or expanding existing ones in various fields of knowledge. Because of their high surface-area-to-volume ratio, phenomena such as surface adsorption forces become greater than the weight of the powder, which makes handling more difficult. The strong tendency of these powders to agglomerate requires the study and development of equipment to feed them into a plasma torch. The objective of this work is to develop a powder feeder that solves the main problems of powder insertion in the thermal spray processes developed in the plasma laboratory, which are carried out with a non-transferred arc plasma torch (plasma spray). An aluminum powder feeder was therefore built, and tests were performed to verify its operation and determine its feed rate by spraying powders of niobium pentoxide (Nb2O5) and titanium dioxide (TiO2) with particle sizes below 250 mesh (<0.063 mm). Masses of 0.5 g, 1.0 g, and 1.5 g of each powder were used in tests lasting 15, 20, and 25 seconds for each mass. The tests were performed in two ways: at atmospheric pressure, using argon at a flow of 9 L/min as carrier gas; and through a Venturi tube, also using argon at 9 L/min as carrier gas, with a flow of 20 L/min as feed gas passing through the Venturi tube. The powder feeder developed in this work is very easy to build and handle, yielding feed rates from 0.25 cm3/min to 1.37 cm3/min. TiO2 showed higher feed rates than Nb2O5 in all tests, and the best rates were obtained in the tests using a mass of 1.5 g and a time of 15 seconds, reaching a feed rate of 1.37 cm3/min. The feed gas flow had little influence on the feed rate during the tests.
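As a rough illustration of how such volumetric feed rates relate to the sprayed mass, the powder density, and the test duration, the sketch below computes rate = (m/ρ)/t for the masses and times quoted above. The densities are assumed nominal solid values for Nb2O5 and TiO2, not measured bulk densities from the work, so the numbers are only indicative; note that a lower density yields a higher volumetric rate for the same mass, consistent with TiO2 outperforming Nb2O5.

```python
# Minimal sketch: volumetric feed rate from sprayed mass, density, and time.
# The densities below are assumed nominal solid values, not the measured
# bulk densities of the powders, so results are only indicative.
DENSITY_G_PER_CM3 = {"Nb2O5": 4.60, "TiO2": 4.23}  # assumed nominal values

def feed_rate_cm3_per_min(mass_g, density_g_cm3, time_s):
    """(mass / density) gives the sprayed volume in cm3, scaled to minutes."""
    return (mass_g / density_g_cm3) / (time_s / 60.0)

for powder, rho in DENSITY_G_PER_CM3.items():
    for mass_g in (0.5, 1.0, 1.5):
        for time_s in (15, 20, 25):
            rate = feed_rate_cm3_per_min(mass_g, rho, time_s)
            print(f"{powder}: {mass_g} g in {time_s} s -> {rate:.2f} cm3/min")
```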
Abstract:
This work studies the development, implementation, and improvement of a macroscopic model to describe the behavior of a spouted bed dryer with continuous feeding for the drying of pastes and suspensions. The model is based on the CST model (Freire et al., 2009) and on the model of Fernandes (2005), whose theoretical foundation rests on macroscopic mass and heat balances for the three phases involved in the process: gas, liquid, and solid. Because this technique is quite relevant, studies on the modeling and simulation of spouted bed drying are essential for the analysis of the process as a whole: through them it is possible to predict and understand the behavior of the process, which contributes significantly to more efficient design and operation. The development and understanding of the phenomena involved in the drying process can be obtained by comparing experimental data with computer simulations. Such knowledge is critical for properly choosing the process conditions in order to obtain good drying efficiency. Over the past few years, research on the drying of pastes and suspensions in spouted beds has been gaining ground in Brazil. The Particulate Systems Laboratory at Universidade Federal do Rio Grande do Norte has been carrying out several studies and generating a large collection of experimental data on the drying of fruit pulps, vegetable pastes, goat milk, and suspensions of agro-industrial residues. From this collection, data on the drying of goat milk and of acerola (Malpighia glabra L.) residue were selected. For the first time, these data were used for the development and validation of a model that describes the behavior of the spouted bed dryer. It was thus possible to model the dryer and to evaluate the influence of process variables (paste feed rate, and the temperature and flow rate of the drying air) on the drying dynamics. Water evaporation experiments were also performed in order to understand and study the behavior of the dryer wall temperature and the evaporation rate. All these analyses will contribute to future work involving the implementation of control strategies for the drying of pastes and suspensions. The results obtained in the transient analysis were compared with experimental data, indicating that the model represents the process well.
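To make the macroscopic-balance idea concrete, the sketch below integrates a minimal lumped water balance for the gas phase of a continuously fed dryer. The equation form, parameter names, and values are illustrative assumptions only; they do not reproduce the CST or Fernandes models.

```python
# Minimal sketch of a lumped transient water balance for the gas phase:
#   dM_w/dt = (evaporation from the fed paste) - (water carried out by gas).
# All parameter names and values are illustrative assumptions.
from scipy.integrate import solve_ivp

W_gas = 0.030      # dry-gas mass flow through the bed (kg/s), assumed
W_paste = 2.0e-4   # paste feed rate (kg/s), assumed
x_w = 0.90         # water mass fraction of the paste, assumed
k_evap = 0.8       # fraction of the fed water that evaporates, assumed
M_dry = 0.05       # dry-gas holdup inside the dryer (kg), assumed

def water_balance(t, y):
    """dM_w/dt = evaporation source - water carried out by the gas."""
    M_w = y[0]                       # water mass held in the gas phase (kg)
    Y = M_w / M_dry                  # gas humidity (kg water / kg dry gas)
    evap = k_evap * W_paste * x_w    # evaporation rate (kg/s)
    return [evap - W_gas * Y]

sol = solve_ivp(water_balance, (0.0, 600.0), [0.0], max_step=1.0)
Y_out = sol.y[0, -1] / M_dry
print(f"gas humidity after 10 min: {Y_out:.4f} kg water / kg dry gas")
```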
Abstract:
A new technique for the automatic search of order parameters and critical properties is applied to several well-known physical systems, testing the efficiency of the procedure in order to apply it to complex systems in general. The automatic-search method is combined with Monte Carlo simulations, which make use of a given dynamical rule for the time evolution of the system. In the problems investigated, the Metropolis and Glauber dynamics produced essentially equivalent results. We present a brief introduction to critical phenomena and phase transitions. We describe the automatic-search method and discuss some previous works where the method has been applied successfully. We apply the method to the ferromagnetic Ising model, computing the critical frontiers and the magnetization exponent β for several geometric lattices. We also apply the method to the site-diluted ferromagnetic Ising model on a square lattice, computing its critical frontier, as well as the magnetization exponent β and the susceptibility exponent γ. We verify that the universality class of the system remains unchanged when site dilution is introduced. We study the problem of long-range bond percolation in a diluted linear chain and discuss the non-extensivity questions inherent to long-range-interaction systems. Finally, we present our conclusions and possible extensions of this work.
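A minimal sketch of the Monte Carlo ingredient described above: single-spin-flip Metropolis dynamics for the two-dimensional ferromagnetic Ising model, tracking the magnetization. Lattice size, temperature, and sweep count are illustrative choices, and the automatic-search layer itself is not reproduced.

```python
# Minimal sketch: Metropolis dynamics for the 2D ferromagnetic Ising model.
# Only the Monte Carlo time evolution is shown; the automatic search for
# critical frontiers described in the abstract is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
L, T, sweeps = 32, 2.0, 2000          # illustrative lattice size, temperature
spins = rng.choice([-1, 1], size=(L, L))

def metropolis_sweep(s, beta):
    for _ in range(s.size):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest neighbors (periodic boundaries).
        nn = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2.0 * s[i, j] * nn       # energy cost of flipping spin (i, j), J = 1
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = -s[i, j]

for _ in range(sweeps):
    metropolis_sweep(spins, 1.0 / T)

m = abs(spins.mean())
print(f"|m| at T = {T}: {m:.3f}")     # below Tc ~ 2.269 the system orders
```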
Abstract:
Objective: To evaluate data on breastfeeding and complementary feeding of children under one year of age in Rio Grande do Norte (RN), against what is recommended by food and nutrition policies and actions. Methods: The database of the RN Chamada Neonatal, a survey carried out by the Ministry of Health in 2010, was analyzed. The sample comprised 837 mother/child pairs who answered the survey questionnaire in the investigated municipalities. We analyzed the prevalence of exclusive breastfeeding (EBF), breastfeeding in the first hour of life, partial breastfeeding (PBF), and total breastfeeding (TBF), as well as the foods consumed by the children in the 24 hours preceding the interview. Frequencies and means were computed with the Complex Samples procedure in SPSS® 20.0, with 95% confidence intervals. The median durations of EBF and TBF, as well as the median age of introduction of the food groups consumed, were estimated as a function of the child's age by the probit method. Associations between the probabilities of EBF and TBF duration and sociodemographic and prenatal-care variables were tested (p < 0.05). Results: Mean ages were 5.28 ± 3.4 months for the children and 25.9 ± 6.4 years for the mothers. Breastfeeding in the first hour of life was rated good (66.6%) and the EBF percentage (20%) reasonable, according to the World Health Organization (2008). More than half of the children (55.1%) were partially breastfed. In total, 60% were still being breastfed (TBF) at the end of the first year of life. The median duration was 63 days for EBF and 358 days for TBF. These figures did not differ much between the capital and the inland municipalities. Most of the interviewed mothers (73.8%) reported having received guidance on breastfeeding during prenatal care, which was associated (p = 0.03) with the probability of EBF duration, although with little explanatory power (R² = 0.011). Water or tea, dairy foods, fruits, and vegetables were introduced early, with medians below 180 days. Breastfeeding tends to decrease and food consumption tends to increase with the child's age, with an exponential increase in the "empty calories" group. Conclusions: Even though most children were breastfed up to one year of age, few were exclusively breastfed and foods were introduced early, an unsatisfactory outcome with respect to what is recommended by public food and nutrition policies.
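The probit method used for the median durations can be sketched as follows: the proportion of children still exclusively breastfed at each age is fitted with a decreasing normal CDF, and the median is the age at which the fitted probability crosses 0.5. The data points below are invented for illustration; they are not the survey data.

```python
# Minimal sketch of probit estimation of a median breastfeeding duration:
# fit p(age) = 1 - Phi((age - mu) / sigma); the median duration is mu,
# where the fitted probability of still being breastfed crosses 0.5.
# The data points are invented for illustration, not the survey data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

age_days = np.array([15, 45, 75, 105, 135, 165])           # age at interview
prop_ebf = np.array([0.80, 0.55, 0.45, 0.30, 0.18, 0.10])  # fraction still in EBF

def probit_curve(x, mu, sigma):
    # Decreasing curve: probability of still being exclusively breastfed.
    return 1.0 - norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(probit_curve, age_days, prop_ebf, p0=(60.0, 40.0))
print(f"estimated median EBF duration: {mu:.0f} days")
```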
Abstract:
This work presents an extension of the haRVey prover aimed at verifying proof obligations arising from the B method. The B method for software development covers the specification, design, and implementation phases of the software life cycle. In the verification context, the proof tools Prioni, Z/EVES, and Atelier-B/Click'n'Prove stand out. They handle formalisms with support for checking the satisfiability of formulas of axiomatic set theory, and can therefore be applied to the B method. SMT checking consists of checking the satisfiability of quantifier-free first-order formulas with respect to a decidable theory. The SMT-checking approach implemented by the automatic theorem prover haRVey is presented; it adopts a theory of arrays, which does not allow all the constructions needed by set-based specifications to be expressed. To extend SMT checking to set theories, the Zermelo-Fraenkel (ZFC) and von Neumann-Bernays-Gödel (NBG) set theories stand out. Since the SMT-checking approach implemented in haRVey requires a finitely axiomatized theory and can be extended to undecidable theories, NBG is a suitable option for expanding haRVey's deductive power to set theory. Thus, by mapping the set operators provided by the B language to classes of the NBG theory, an alternative SMT-checking approach applied to the B method is obtained.
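A minimal sketch of what an SMT check looks like, using the Z3 Python API purely for illustration (haRVey itself is not shown): set membership is encoded as an uninterpreted predicate, mimicking how a B set operator could be mapped to first-order logic, and a proof obligation is validated by showing that its negation is unsatisfiable.

```python
# Minimal sketch of an SMT satisfiability check, using Z3's Python API
# purely to illustrate the idea (haRVey itself is not shown).  Set
# membership is encoded as an uninterpreted predicate over integers,
# mimicking how a B set operator could be mapped to first-order logic.
from z3 import Function, IntSort, BoolSort, Int, And, Not, Solver, sat

S = Function("S", IntSort(), BoolSort())   # membership predicate for set S
T = Function("T", IntSort(), BoolSort())   # membership predicate for set T
x = Int("x")

s = Solver()
# Quantifier-free check: "x in S and x in T" together with "x not in S"
# is unsatisfiable, so the obligation "x in S /\ T => x in S" is valid.
s.add(And(S(x), T(x)), Not(S(x)))
print("satisfiable" if s.check() == sat else "unsat: obligation is valid")
```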
Abstract:
Some programs may have their input data specified by formalized context-free grammars. This formalization facilitates the use of tools to systematize and raise the quality of the testing process. Among programs in this category, compilers were the first to use this kind of tool to automate their tests. In this work we present an approach for defining tests from the formal description of a program's inputs. Sentence generation is performed taking into account the syntactic aspects defined by the input specification, that is, the grammar. For optimization, coverage criteria are used to limit the number of tests without diminishing their quality. Our approach uses these criteria to drive generation, producing sentences that satisfy a specific coverage criterion. The approach is based on the Lua language, relying heavily on its coroutines and on the dynamic construction of functions. With these resources, we propose a simple and compact implementation that can be optimized and controlled in different ways in order to satisfy the different implemented coverage criteria. To make the tool simpler to use, the EBNF notation was adopted for the specification of inputs; its parser was specified in the Meta-Environment tool for rapid prototyping.
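A rough sketch of the idea, with Python generators standing in for the Lua coroutines of the original implementation: sentences are derived from a toy grammar until every production has been exercised, a simple production-coverage criterion. The grammar and all names are illustrative, not the tool's.

```python
# Minimal sketch: generate sentences from a toy CFG until every production
# has been used at least once (a simple production-coverage criterion).
# Python generators stand in for the Lua coroutines of the original tool.
import random

GRAMMAR = {  # toy grammar, purely illustrative
    "expr": [["term", "+", "expr"], ["term"]],
    "term": [["num"], ["(", "expr", ")"]],
    "num":  [["0"], ["1"]],
}

def derive(symbol, rng, used, depth=0):
    """Expand one symbol, recording which production was exercised."""
    if symbol not in GRAMMAR:
        return [symbol]                               # terminal symbol
    alts = GRAMMAR[symbol]
    if depth > 8:   # keep derivations finite: take the shortest alternative
        idx = min(range(len(alts)), key=lambda i: len(alts[i]))
    else:
        idx = rng.randrange(len(alts))
    used.add((symbol, idx))
    out = []
    for s in alts[idx]:
        out.extend(derive(s, rng, used, depth + 1))
    return out

def sentences(seed=0):
    """Coroutine-style generator yielding one sentence per derivation."""
    rng = random.Random(seed)
    while True:
        used = set()
        yield " ".join(derive("expr", rng, used)), used

all_prods = {(nt, i) for nt, alts in GRAMMAR.items() for i in range(len(alts))}
covered = set()
for sentence, used in sentences():
    print(sentence)
    covered |= used
    if covered == all_prods:                          # criterion satisfied
        break
```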
Abstract:
Removing inconsistencies from a design is less costly when done in the early stages of its conception. The use of formal methods improves the understanding of systems and offers several techniques, such as formal specification and verification, to identify such inconsistencies in the early stages of a project. However, transforming a formal specification into a programming language is a non-trivial task; when done manually, it is prone to the insertion of errors. The use of tools to support this step can bring great benefits to the final product. This work proposes the extension of a tool whose focus is the automatic translation of CSPm specifications into Handel-C. CSP is a formal description language well suited to concurrent systems. Handel-C is a programming language whose output can be compiled directly to FPGAs. The extension increases the number of CSPm operators accepted by the tool, allowing the user to define local processes, rename channels, and use Boolean guards on external choices. In addition, we propose the implementation of a communication protocol that removes some restrictions on the parallel composition of processes in the translation to Handel-C, allowing communication among multiple processes to be mapped consistently and to occur only when authorized.
Abstract:
Removing inconsistencies in a project is a less expensive activity when done in the early steps of design. The use of formal methods improves the understanding of systems; they offer various techniques, such as formal specification and verification, to identify these problems in the initial stages of a project. However, the transformation from a formal specification into a programming language is a non-trivial and error-prone task, especially when done manually. The aid of tools at this stage can bring great benefits to the final product. This work proposes the extension of a tool whose focus is the automatic translation of specifications written in CSPm into Handel-C. CSP is a formal description language suitable for concurrent systems, and CSPm is the notation used by its tool support. Handel-C is a programming language whose output can be compiled directly to FPGAs. Our extension increases the number of CSPm operators accepted by the tool, allowing the user to define local processes, to rename channels in a process, and to use Boolean guards on external choices. In addition, we propose the implementation of a communication protocol that eliminates some restrictions on the parallel composition of processes in the translation into Handel-C, allowing communication on the same channel among multiple processes to be mapped in a consistent manner, and ensuring that improper communication on a channel, i.e., communication not allowed by the system specification, does not occur in the generated code.
Abstract:
Typically, Web services contain only syntactic information describing their interfaces. Due to this lack of semantic descriptions, service composition becomes a difficult task. To solve this problem, Web services can exploit ontologies for the semantic definition of a service's interface, thus facilitating the automation of the discovery, publication, mediation, invocation, and composition of services. However, ontology languages such as OWL-S have constructs that are not easy to understand, even for Web developers, and the existing tools that support their use expose many details that make them difficult to manipulate. This work presents an MDD tool called AutoWebS (Automatic Generation of Semantic Web Services) for developing OWL-S semantic Web services. AutoWebS uses an approach based on UML profiles and model transformations for the automatic generation of Web services and their semantic descriptions. AutoWebS offers an environment that provides many of the features required to model, implement, compile, and deploy semantic Web services.
Abstract:
The widespread growth in the use of smart cards (by banks, transport services, cell phone companies, etc.) has brought to light an important fact that must be addressed: the need for tools that can be used to verify such cards, so as to guarantee the correctness of their software. As the vast majority of cards being developed nowadays use JavaCard technology as their software layer, the use of the Java Modeling Language (JML) to specify their programs appears as a natural solution. JML is a formal language tailored to Java. It was inspired by methodologies from Larch and Eiffel and has been widely adopted as the de facto language for the specification of Java-related programs. Various tools that make use of JML have already been developed, covering a wide range of functionalities, such as runtime and static checking. But the static-checking tools existing so far are not fully automated, and those that are do not offer an adequate level of soundness and completeness. Our objective is to contribute a set of techniques that can be used to accomplish fully automated and confident verification of JavaCard applets. In this work we present the first steps in that direction. Using a software platform comprising Krakatoa, Why, and haRVey, we developed a set of techniques to reduce the size of the theory necessary to verify the specifications. These techniques have yielded very good results, with gains of almost 100% in all tested cases, and have proved valuable not only here but in most real-world problems related to automatic verification.
Abstract:
Smart sensors are devices that differ from common sensors by having processing capacity over the monitored data. They are typically composed of a power supply, transducers (sensors and actuators), memory, a processor, and a transceiver. According to the IEEE 1451 standard, a smart sensor can be divided into TIM and NCAP modules, which must communicate through a standardized interface called TII. The NCAP module is the part of the smart sensor that hosts the processor; it is therefore responsible for giving the sensor its "smart" character. Several approaches can be used to develop this module, notably those based on low-cost microcontrollers and/or FPGAs. This work addresses the development of a hardware/software architecture for an NCAP module according to the IEEE 1451.1 standard. The hardware infrastructure is composed of an RS-232 interface driver, a 512 kB RAM, a TII interface, the NIOS II embedded processor, and a simulator of the TIM module. The SOPC Builder automatic integration tool is used to integrate the hardware components. The software infrastructure is composed of the IEEE 1451.1 standard and the NCAP-specific application, which simulates the monitoring of pressure and temperature in oil wells with the objective of detecting leaks. The proposed module is embedded in an FPGA and prototyped on the Altera DE2 board, which contains the Cyclone II EP2C35F672C6 FPGA. The NIOS II embedded processor supports the NCAP software infrastructure, which is developed in the C language and based on the IEEE 1451.1 standard. The behavior of the hardware infrastructure is described in VHDL.
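A conceptual sketch of the kind of threshold-based leak detection the NCAP application simulates (the actual implementation runs in C on the NIOS II): a sudden pressure drop between consecutive samples raises an alarm. Thresholds and readings are invented for illustration.

```python
# Conceptual sketch (the actual NCAP application is written in C on the
# NIOS II) of threshold-based leak detection from pressure/temperature
# samples.  Thresholds and readings are invented for illustration.
PRESSURE_DROP_LIMIT = 5.0   # max tolerated drop between samples (assumed units)

def detect_leak(samples):
    """Flag a leak when pressure falls abruptly between consecutive samples."""
    alarms = []
    for (p_prev, _), (p_cur, t_cur) in zip(samples, samples[1:]):
        if p_prev - p_cur > PRESSURE_DROP_LIMIT:
            alarms.append((p_cur, t_cur))
    return alarms

readings = [(102.0, 80.1), (101.5, 80.3), (93.2, 80.2), (92.8, 80.4)]
print(detect_leak(readings))   # the 101.5 -> 93.2 drop is flagged
```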
Abstract:
The northern portion of Rio Grande do Norte State is characterized by intense coastal dynamics affecting areas with ecosystems of moderate to high environmental sensitivity. The main socioeconomic activities of the state are installed in this region: the salt industry, shrimp farming, fruit growing, and the oil industry. The oil industry suffers the effects of coastal dynamics, with problems such as erosion and the exposure of wells and pipelines along the shore. Hence the need to monitor such changes, seeking to understand the processes that cause environmental impacts, in order to detect and assess the areas most vulnerable to these variations. Coastal areas under the influence of the oil industry are highly vulnerable and sensitive in the event of accidents involving oil spills in their vicinity. Geoenvironmental monitoring of the region was therefore established, with the aim of evaluating the evolution of the entire coastal area and checking the site's sensitivity to the presence of oil. The goal of this work was the implementation of a computer system that combines the insertion and visualization of thematic maps for the generation of environmental vulnerability maps, using Business Intelligence (BI) techniques over vector information previously stored in the database. The fundamental design interest was to implement a more scalable system that serves diverse fields of study and is suitable for generating vulnerability maps online, automating the methodology so as to facilitate data manipulation and deliver fast results for real-time operational decision-making. To develop the geographic database it was necessary to build the conceptual model of the selected data; the Web system was developed using the PostgreSQL database system, its spatial extension PostGIS, the GlassFish Web server, and GeoServer to display the maps on the Web.
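A minimal sketch of how vector layers stored in PostGIS might be queried to feed a vulnerability map. The connection parameters, table, and column names (vulnerability_zones, sensitivity, geom) are hypothetical; only ST_AsGeoJSON is an actual PostGIS function.

```python
# Minimal sketch of querying vector layers from PostGIS to feed a
# vulnerability map.  Connection parameters, table, and column names
# ("vulnerability_zones", "sensitivity", "geom") are hypothetical.
import psycopg2

conn = psycopg2.connect(dbname="geodb", user="gis", password="secret",
                        host="localhost")
with conn, conn.cursor() as cur:
    # Select zone geometries (as GeoJSON) above a sensitivity threshold.
    cur.execute("""
        SELECT name, sensitivity, ST_AsGeoJSON(geom)
        FROM vulnerability_zones
        WHERE sensitivity >= %s
        ORDER BY sensitivity DESC;
    """, (3,))
    for name, sens, geojson in cur.fetchall():
        print(name, sens, geojson[:60], "...")
conn.close()
```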
Abstract:
This research studies the application of syntagmatic analysis of texts written in Brazilian Portuguese as a methodology for the automatic creation of extractive summaries. Automatic summarization, a branch of natural language processing (NLP), studies ways in which a computer can autonomously construct summaries of texts. Our premise is that teaching the computer how a language is structured, in our case Brazilian Portuguese, helps in the discovery of the most relevant sentences and, consequently, in building extractive summaries with higher informativeness. In this study, we propose a summarization method that automatically performs the syntagmatic analysis of texts and uses it to build an automatic summary. The phrases (syntagmas) that make up the syntactic structures are used to analyze the sentences of the text, and the count of these elements determines whether or not a sentence will be included in the generated summary.
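A minimal sketch of the counting idea: score each sentence by the number of phrase-level elements it contains and keep the highest-scoring ones. A crude regex chunker over English stands in for the actual syntagmatic analysis of Brazilian Portuguese.

```python
# Minimal sketch of extractive summarization by counting phrase-level
# elements per sentence.  A crude regex chunker stands in for the real
# syntagmatic analysis of Brazilian Portuguese described in the abstract.
import re

def score(sentence):
    """Count determiner-led chunks as a crude stand-in for syntagmas."""
    chunks = re.findall(r"\b(?:the|a|an)\s+\w+(?:\s+\w+)?", sentence.lower())
    return len(chunks)

def summarize(text, k=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    ranked = sorted(sentences, key=score, reverse=True)[:k]
    # Preserve the original order of the selected sentences.
    return " ".join(s for s in sentences if s in ranked)

doc = ("The dryer uses a spouted bed. It works well. "
       "The model predicts the outlet humidity of the drying air. "
       "A macroscopic balance describes the three phases of the process.")
print(summarize(doc, k=2))
```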
Abstract:
Automatic detection of blood components is an important topic in the field of hematology. Segmentation is an important stage because it allows components to be grouped into common areas and processed separately, and differential classification of leukocytes enables them to be analyzed individually. With auto-segmentation and differential classification, this work contributes to the analysis of blood components by providing tools that reduce manual labor while increasing accuracy and efficiency. Using digital image processing techniques combined with a generic and automatic fuzzy approach, this work proposes two fuzzy inference systems, called I and II, for the auto-segmentation of blood components and the differential classification of leukocytes, respectively, in microscopic smear images. Using Fuzzy Inference System I, the proposed technique segments the image into four regions: leukocyte nucleus, leukocyte cytoplasm, erythrocyte, and plasma; using Fuzzy Inference System II, the segmented leukocytes (nucleus and cytoplasm) are differentially classified into five types: basophils, eosinophils, lymphocytes, monocytes, and neutrophils. A total of 530 images containing microscopic samples of blood smears prepared with different methods were used for testing. The images were processed, and accuracy indices against gold standards were calculated and compared with manual results and with other results found in the literature for the same problems. Regarding segmentation, the developed technique showed accuracies of 97.31% for leukocytes, 95.39% for erythrocytes, and 95.06% for blood plasma. As for the differential classification, the accuracy varied between 92.98% and 98.39% across the leukocyte types. In addition to providing auto-segmentation and differential classification, the proposed technique also contributes to the definition of new descriptors and to the construction of an image database covering several hematological staining processes.
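A minimal sketch of a fuzzy inference step of the kind described above: triangular membership functions over a single gray-level feature, with each pixel assigned to the region of highest activation. The membership breakpoints are invented; the actual systems use richer features and rule bases.

```python
# Minimal sketch of a fuzzy rule for pixel classification: triangular
# membership functions over one intensity feature, with the pixel assigned
# to the region of highest activation.  The membership breakpoints are
# invented; the actual systems use richer color features and rules.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

REGIONS = {  # assumed intensity ranges on a 0-255 gray scale
    "nucleus":     lambda x: tri(x, -1, 40, 100),
    "cytoplasm":   lambda x: tri(x, 60, 120, 180),
    "erythrocyte": lambda x: tri(x, 140, 190, 230),
    "plasma":      lambda x: tri(x, 200, 245, 256),
}

def classify(intensity):
    degrees = {region: mu(intensity) for region, mu in REGIONS.items()}
    return max(degrees, key=degrees.get), degrees

for px in (30, 110, 185, 240):
    region, _ = classify(px)
    print(f"intensity {px}: {region}")
```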
Abstract:
This research takes vegan groups in the city of Natal-RN as its interlocutors, although I also draw on other research contexts, such as those located in the cities of Recife (Pernambuco State) and Campina Grande (Paraíba State). Moved by ethical principles based on animal rights, vegans refuse to consume any product of animal origin. To the extent that consumption habits can be considered powerful elements of identification, the relationship between consumption, food, identity, and politics is an important analytical key in the development of this work. As my main theoretical question, I follow the ways by which the vegan discourse (of abolitionist character) takes shape and materializes into actions, demonstrations, and political mobilization. I therefore aim to present an ethnography of the activities performed collectively by these individuals, from those of a more ludic character (picnics, etc.) to those more politically oriented, especially protests and demonstrations in public places.