982 results for Second Step
Abstract:
This study concerns the production and characterization of composites of polylactic acid (PLA) and natural fibres (flax, wood flour). The foaming of PLA and of its composites was also investigated in order to assess the effects of the injection-moulding conditions and of the reinforcement on the final properties of these materials. In the first part, composites of PLA and flax fibres were produced by extrusion followed by injection moulding. The effect of varying the filler content (15, 25 and 40 wt.%) on the morphological, mechanical, thermal and rheological characteristics of the composites was evaluated. In the second step, wood flour (WF) was chosen to reinforce the PLA. The PLA/WF composites were prepared as in the first part, and a complete series of morphological, mechanical and thermal characterizations, together with dynamic mechanical analysis, was carried out to obtain a full assessment of the effect of the filler content (15, 25 and 40 wt.%) on the properties of PLA. Finally, the third part of this study deals with PLA composites with natural reinforcement used to produce foamed composites. These foams were obtained with an exothermic foaming agent (azodicarbonamide) via injection moulding, after blending the PLA with the natural fibres. In this case, the injection shot size (amount of material injected into the mould: 31, 33, 36, 38 and 43% of the capacity of the injection-moulding machine) and the wood-flour concentration (15, 25 and 40 wt.%) were varied. The mechanical and thermal properties were characterized, and the results showed that the natural reinforcements studied (flax and wood flour) improved the mechanical properties of the composites, in particular the flexural modulus and the impact strength of the polymer (PLA). In addition, foaming was also effective for both neat PLA and its composites, since the densities were significantly reduced.
Abstract:
Recent climate change has led to range expansion in several southern species, but has also caused local extinctions of species living at the limit of their environmental tolerance. Expanding populations may favour different life-history strategies in response to different limiting factors. In this thesis, I aim to determine and quantify the effect of climate and of extreme events on the complete life cycle of an expanding species (the wild turkey) in order to understand population-level changes as well as the mechanisms involved in the expansion of a species' distribution. I defined extreme events of rainfall, snow depth and temperature as events rarer than the 10th or 90th percentile. Using the "Measure-Understand-Predict" (MUP) approach, I first monitored three populations along a latitudinal gradient of winter severity to measure the effect of meteorological variables on population dynamics. Wild turkey survival dropped sharply when snow depth exceeded 30 cm for a period of 10 days, and it also decreased with temperature. In spring, snow persistence negatively affected the nest-initiation rate, and increased rainfall reduced nest survival. In a second step, I examined the impact of extreme climatic events and of the demographic processes involved in the turkey's expansion, in relation to life-history theory, to understand the relationship between the dynamics of these expanding populations and climate. I showed that the frequency of extreme winter events and, to a lesser extent, of extreme summer events limited the northward expansion of wild turkeys. Using empirical data and modelling, I supported the hypotheses of classical biological-invasion theory by showing that establishing populations prioritized reproductive parameters, whereas adult survival was the demographic parameter with the greatest effect on the dynamics of well-established populations. Moreover, the northernmost populations were composed of younger individuals with a lower life expectancy, but had a higher growth potential than established populations, as this theory suggests. Finally, I projected the impact of harvest on population dynamics, as well as the growth rate of this species, using the future climatic conditions projected by the IPCC models. Establishing populations had a higher potential harvest rate, but the proportion of adult males, which have the characteristics sought by hunters, declined more rapidly than in established populations. In the future, the frequency of extreme rainfall events is expected to increase, whereas the frequency of extreme winter temperature and snow-depth events should decrease after 2060, probably limiting the northward expansion of the wild turkey until 2100.
This thesis improves our understanding of the effects of weather and climate on species range expansion and of the demographic mechanisms involved, and allowed us to predict the probability of the northward expansion of the wild turkey's range in response to climate change.
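As a rough illustration of the percentile-based definition of extreme events used above (not the thesis's code; the weather series and distributions below are invented), a threshold rarer than the 10th or 90th percentile can be computed directly from a daily record:

```python
import numpy as np

# Hypothetical daily weather series; in the thesis these would come from
# station records along the latitudinal gradient of winter severity.
rng = np.random.default_rng(0)
rain = rng.gamma(shape=2.0, scale=3.0, size=3650)   # daily rainfall [mm], ~10 years
snow = rng.gamma(shape=1.5, scale=8.0, size=3650)   # daily snow depth on the ground [cm]
temp = rng.normal(loc=-5.0, scale=8.0, size=3650)   # daily winter temperature [deg C]

def extreme_mask(x: np.ndarray) -> np.ndarray:
    """Flag observations rarer than the 10th or 90th percentile of the series."""
    lo, hi = np.percentile(x, [10, 90])
    return (x < lo) | (x > hi)

for name, series in [("rain", rain), ("snow depth", snow), ("temperature", temp)]:
    print(f"{name}: fraction of days classified as extreme = {extreme_mask(series).mean():.2f}")
```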
Abstract:
The article uses a form of content-focused conversation analysis to explore processes of learning and attributing meaning when upper secondary students work with two primary source assignments in history. Empirical data were collected through audio recordings of the students' collaborative work on the assignments, which consisted of analysing two primary sources in small groups. The article addresses one primary research question: what is characteristic of the processes of learning and meaning-making when students work with two source analysis assignments? As a first step, the students' learning processes, understood as a change in participation in the learning activity, are described. As a second step, the article describes how the students construct meaning when working with the primary sources. The main results are descriptions of the students' learning and meaning-making processes. Based on the analysis of the students' conversations, it is suggested that the temporal aspect is discerned in a contrastive process between the present and the past in terms of values, ideas and societal conditions. In relation to the human aspect, the students experienced a difficult balancing act in contrasting their own perspective with the historical actor's perspective. However, a successful strategy was to take on the role of hypothetical historical agents. Finally, in relation to the contextual aspect, once the students were involved in a process of inquiry and reasoning they managed to discern subtexts of the sources in relation to the historical context. It is suggested that certain aspects of school culture might inhibit the students' learning of primary source analysis, as they occasionally strive to find the "right answers" rather than engaging in interpretative work. One interesting finding was the vital role of the students' life-world perspective in creating meaning while working with the primary sources, and it is suggested that this perspective should be considered in educational design.
Abstract:
Current Ambient Intelligence and Intelligent Environment research focuses on interpreting a subject's behaviour at the activity level by logging Activities of Daily Living (ADL) such as eating, cooking, etc. In general, the sensors employed (e.g. PIR sensors, contact sensors) provide low-resolution information. Meanwhile, the expansion of ubiquitous computing allows researchers to gather additional information from different types of sensor, which makes it possible to improve activity analysis. Building on previous research on sitting posture detection, this research attempts to further analyse human sitting activity. The aim is to use a non-intrusive, low-cost, pressure-sensor-embedded chair system to recognize a subject's activity from their detected postures. The research proceeds in three steps: the first is to find a hardware solution for low-cost sitting posture detection, the second is to find a suitable strategy for sitting posture detection, and the last is to correlate time-ordered sitting posture sequences with sitting activity. The author developed a prototype sensing system called IntelliChair for sitting posture detection. Two experiments were conducted to determine the hardware architecture of the IntelliChair system; they address sensor selection and the integration of various sensors, and indicate the best choice for a low-cost, non-intrusive system. Subsequently, the research applies signal-processing theory to explore the frequency characteristics of sitting postures, in order to determine a suitable sampling rate for the IntelliChair system. For the second and third steps, ten subjects were recruited for the collection of sitting posture data and sitting activity data. The former dataset was collected by asking the subjects to perform certain pre-defined sitting postures on IntelliChair and was used for the posture recognition experiment. The latter dataset was collected by asking the subjects to follow their normal sitting activity routine on IntelliChair for four hours and was used for the activity modelling and recognition experiment. For the posture recognition experiment, two Support Vector Machine (SVM) based classifiers were trained (one for spine postures and the other for leg postures) and their performance was evaluated. A Hidden Markov Model was used for sitting activity modelling and recognition, in order to infer the selected sitting activities from sitting posture sequences. After experimenting with possible sensors, the Force Sensing Resistor (FSR) was selected as the pressure sensing unit for IntelliChair. Eight FSRs were mounted on the seat and back of a chair to gather haptic (i.e., touch-based) posture information. Furthermore, the research explores the possibility of using an alternative non-intrusive sensing technology (the vision-based Kinect sensor from Microsoft) and finds that the Kinect sensor is not reliable for sitting posture detection because of the joint drifting problem. Based on the experimental results, a sampling rate of 6 Hz was determined to be suitable for IntelliChair. The posture classification performance shows that the SVM-based classifiers are robust to "familiar" subject data (accuracy is 99.8% for spine postures and 99.9% for leg postures). When dealing with "unfamiliar" subject data, the accuracy is 80.7% for spine posture classification and 42.3% for leg posture classification. Activity recognition achieves 41.27% accuracy among the four selected activities (i.e. relaxing, playing a game, working with a PC and watching video). The results of this thesis show that different individual body characteristics and sitting habits influence both sitting posture and sitting activity recognition. This suggests that IntelliChair is suitable for individual usage, but a training stage is required.
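As an illustrative sketch only (the thesis publishes no code), training one of the SVM posture classifiers on readings from the eight FSRs might look roughly as follows; the dataset shape, the number of spine-posture classes and the scikit-learn pipeline are assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical dataset: one row of 8 FSR readings per sample, one spine-posture label.
rng = np.random.default_rng(42)
X = rng.uniform(0.0, 1.0, size=(500, 8))   # normalized pressure from the 8 FSRs
y = rng.integers(0, 4, size=500)           # 4 assumed spine-posture classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# One SVM classifier per posture group (spine shown here; a second one would handle legs).
spine_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
spine_clf.fit(X_train, y_train)
print("spine-posture accuracy:", spine_clf.score(X_test, y_test))
```

The predicted posture labels, ordered in time, would then form the observation sequence fed to the Hidden Markov Model for activity recognition.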
Abstract:
Intestinal aspergillosis is an infection with a very high death rate, especially in leukemic patients. Here we describe the case of a 46-year-old woman with acute myeloid leukemia (LAM M5) who developed primary intestinal aspergillosis. This patient was diagnosed with LAM M5 through bone marrow aspiration and bone biopsy in March 2004. The symptoms of the disease were mild persistent fever, weight loss, asthenia, anemia, thrombocytopenia, and leukocytosis with a high number of blasts in the peripheral blood. After induction chemotherapy with ICE (Ifosfamide, Carboplatin, Etoposide), she developed neutropenia and high fever without an apparent infective focus. She was treated with empirical antibiotic therapy; nevertheless, she developed intense diarrhea and ileo-cecal distention. Diagnostic examinations did not show signs of a focal lesion. Despite a change in antibiotic treatment and transfusions of granulocytes and blood cells, the patient's condition became extremely critical, with persistent neutropenia and abdominal distention. Surgical treatment was decided at that point. We treated the patient with a two-step surgical procedure: the first step was a right abdominal ileostomy, followed by an improvement of her general condition, and the second step was a right colectomy. The histological morphology confirmed necrotizing colitis with Aspergillus hyphae. At that time, treatment with voriconazole was started. The general condition of the patient improved rapidly and we were able to treat her with further anti-leukemic therapies. The patient is now cured and in a healthy state. We obtained a good clinical result, as reported in only a few other cases described in the literature.
Abstract:
We present a dynamic distributed load balancing algorithm for parallel, adaptive finite element simulations using preconditioned conjugate gradient solvers based on domain decomposition. The load balancer is designed to maintain good partition aspect ratios. It can calculate a balancing flow using different versions of diffusion and a variant of breadth-first search. Elements to be migrated are chosen according to a cost function aiming at the optimization of subdomain shapes. We show how to use information from the second step to guide the first. Experimental results using Bramble's preconditioner and comparisons to existing state-of-the-art load balancers show the benefits of the construction.
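As a rough sketch of the diffusion idea mentioned above (not the paper's implementation; the processor graph, loads and coefficient are invented for illustration), first-order diffusion repeatedly exchanges a fraction of the load difference along each edge of the processor graph, and the accumulated exchanges form the balancing flow:

```python
import numpy as np

# Hypothetical 2x2 processor grid: unique edges of the processor graph and the
# number of mesh elements currently owned by each subdomain.
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
load = np.array([120.0, 60.0, 90.0, 30.0])

alpha = 0.25                     # diffusion coefficient (stable for this graph)
flow = np.zeros((4, 4))          # flow[i, j]: net number of elements to migrate from i to j

# First-order diffusion: in every sweep, exchange a fixed fraction of the load
# difference along each edge; the accumulated exchanges give the balancing flow.
for _ in range(200):
    deltas = [(i, j, alpha * (load[i] - load[j])) for i, j in edges]
    for i, j, d in deltas:
        load[i] -= d
        load[j] += d
        flow[i, j] += d
        flow[j, i] -= d

print("balanced loads:", np.round(load, 2))   # converges to ~75 elements per processor
print("net flow 0 -> 1:", round(flow[0, 1], 2))
```

In the actual scheme, the concrete elements realizing this flow would then be picked by the shape-aware cost function described above.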
Abstract:
Dissertation composed of two articles.
Abstract:
The interaction of ocean waves, currents and seabed roughness is a complicated phenomenon in fluid dynamics. This paper describes the governing equations of motion for this phenomenon under viscous and inviscid conditions, and studies and analyses the experimental results of a set of physical models of waves, currents and artificial roughness. It consists of three parts. First, by establishing some typical roughness patterns, the effect of seabed roughness on a uniform current is studied, and the Manning coefficient of each type is reviewed to find the critical situation for the different arrangements. Second, the effect of roughness on changes in wave parameters, such as wave height, wave length and the wave dispersion equations, is studied. Third, the waves + current + roughness patterns are superimposed in a flume equipped with a wave and current generator; at this stage, different analyses are carried out to find the governing dimensionless numbers and to present the numbers that define the conditions and formulations of this phenomenon. The first step of the model is verified by the so-called Chinese method, the second step by Kamphuis (1975), and the third step by van Rijn (1990) and Brevik and Aas (1980); in all cases reasonable agreement is obtained. Finally, new dimensionless parameters are presented for this complicated phenomenon.
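A small numerical aside (not taken from the paper): the linear dispersion relation omega^2 = g*k*tanh(k*h) that underlies the wave-length analysis can be solved for the wavenumber, for example by bisection; the period and depth below are arbitrary illustrative values.

```python
import math

def wavenumber(period_s: float, depth_m: float, g: float = 9.81) -> float:
    """Solve the linear dispersion relation omega^2 = g*k*tanh(k*h) for k by bisection."""
    omega = 2.0 * math.pi / period_s
    f = lambda k: g * k * math.tanh(k * depth_m) - omega ** 2
    lo, hi = 1e-8, 10.0 * omega ** 2 / g   # bracket wide enough for these illustrative values
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

T, h = 8.0, 5.0                            # assumed wave period [s] and water depth [m]
k = wavenumber(T, h)
print(f"wavelength L = {2 * math.pi / k:.1f} m")
```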
Abstract:
Rice (Oryza sativa, L.), like all cereals, can be contaminated by fungi responsible for technological, nutritional and toxicological damage, including the production of mycotoxins. Several fungal toxins produced by the genus Fusarium have been reported in rice; however, fumonisin B1 (FB1) has been little studied in this grain. The main characteristics of FB1 are its high solubility in polar solvents, its stability at high temperatures, and its neurotoxic and carcinogenic effects. The aim of this work was therefore to evaluate the effect of thermal and hydrothermal treatment on fumonisin B1 levels and on the chemical characteristics of commercial rice. In the first stage of the work, a method for the detection and quantification of FB1 in raw and cooked rice by HPLC-FL was adapted. The method was evaluated for efficiency indicators, notably the LOD (30 µg.kg-1) and recovery (90% for raw rice and 86% for cooked rice). In the second stage, the occurrence of FB1 was surveyed in 5 different commercial samples of brown, white and parboiled rice from the city of Rio Grande, RS, totalling 9 samples. FB1 was detected in 7 of the 9 samples, with the highest levels found in parboiled and brown rice samples, at contamination levels between 30 and 170 µg.kg-1. The third stage of the work consisted of studying the effect of thermal treatments on FB1 levels in samples after the application of heat. Hydrothermal treatment with evaporation, hydrothermal treatment with autoclaving and dry thermal treatment were tested. The greatest reduction of the initial FB1 levels was 82.8%, obtained with dry thermal treatment at 125 °C for 3 min. The effects of hydrothermal treatment with water evaporation on chemical composition and protein digestibility were also evaluated. This treatment increased the in vitro protein digestibility by up to 100% and reduced FB1 contamination by 73% on average.
Abstract:
The initial objective of this work was to study advanced oxidation processes as a way of remediating and treating water contaminated with pesticides. However, over the course of the experimental work, it became clear that the products resulting from pesticide degradation are often more toxic than the parent compounds, and that degrading a compound is therefore not always the best option for the environment. Thus, in this work, the degradation process was studied with the objective of minimizing the environmental impact of pesticides in water and in the environment in general. The experimental part of this work was divided into two stages; in both, square-wave voltammetry and UV/Vis spectrophotometry were the analytical methods used to follow the photodegradation process. In the first stage, the relationship between the chemical structure of the pesticides MCPA, MCPP, 2,4-D and dichlorprop and their photodegradation was studied. Aqueous solutions of these pesticides were subjected to UV/Vis irradiation with variable increments of irradiation time. The results obtained in this stage revealed differences in the degradation percentage of the different pesticides. Among the pesticides studied, the greatest photodegradation was observed for MCPA and MCPP, followed by dichlorprop and finally 2,4-D, which degraded the least. The data obtained suggest that the photodegradation of these pesticides is closely linked to the structure of the molecules. The larger number of chlorine groups attached to the aromatic ring in 2,4-D and dichlorprop makes them more stable, so they degrade less than MCPA and MCPP. On the other hand, the fact that 2,4-D has a higher oxidation potential than dichlorprop makes it more difficult to degrade, which explains the difference between the two. It was thus possible to conclude that the structure of the pesticides conditions the degradation process, as expected. In the second stage, the stabilization of the pesticides MCPA and MCPP after encapsulation with 2-hydroxypropyl-β-cyclodextrin (HP-β-CD) was studied in deionized water and in river water. To this end, aqueous solutions of the pesticides with and without cyclodextrin were subjected to UV/Vis irradiation, also with variable time increments. In the case of MCPA, it was found that, both in deionized water and in river water, the encapsulated herbicide degrades considerably less than free MCPA. Encapsulation reduced the photodegradation rate by almost half. It was thus confirmed that HP-β-CD stabilizes this pesticide, making it more resistant to photodegradation. In this way, fewer degradation products, which may be more toxic, are generated, and the environmental impact of this herbicide is reduced. It was also found that free MCPA degrades more in river water than in deionized water, probably because of the organic matter present in this water, which promotes the degradation process. As for MCPP, it was also observed that this herbicide degrades less when encapsulated than when free, both in deionized water and in river water. In this case, the reduction in the photodegradation rate was small, but a stabilization of this pesticide through encapsulation was still observed. The stabilization of MCPP after encapsulation was, however, more evident in river water, where it showed a lower photodegradation rate. This demonstrates that HP-β-CD also stabilizes this pesticide, making it more resistant to photodegradation and reducing its environmental impact.
Abstract:
Municipal management in any country of the globe requires planning and the even allocation of resources. In Brazil, the Law of Budgetary Guidelines (LDO) guides municipal managers toward that balance. This research develops a model that seeks to find a balanced allocation of public resources in Brazilian municipalities, considering the LDO as a parameter. To this end, statistical techniques and multicriteria analysis are used as a first step in order to define allocation strategies, based on the technical judgement of the municipal manager. In a second step, a linear-programming-based optimization is presented in which the objective function is derived from the preferences of the manager and his staff regarding the results. The statistical treatment supports the multicriteria development in the definition of replacement rates through time series. The multicriteria analysis was structured by defining the criteria and alternatives and by applying the UTASTAR method to calculate the replacement rates. After these initial settings, a linear-programming application was developed to find the optimal allocation of resources in the execution of the municipal budget. Data from the budget of a municipality in southwestern Paraná were used in the application of the model and in the analysis of the results.
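As an illustrative sketch only (not the thesis's actual model), a linear program of this kind can be set up with SciPy; the budget functions, preference weights and spending floors below are invented stand-ins for the values that the UTASTAR step would provide:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical example: allocate a municipal budget across four functions
# (health, education, infrastructure, administration). The objective weights
# stand in for the preferences elicited from the manager and staff via UTASTAR.
weights = np.array([0.35, 0.30, 0.20, 0.15])    # assumed preference weights
budget = 100.0                                  # total budget (arbitrary units)
minimums = np.array([15.0, 25.0, 5.0, 8.0])     # assumed LDO-style spending floors

# linprog minimizes, so negate the weights to maximize the weighted allocation.
res = linprog(
    c=-weights,
    A_ub=[np.ones(4)], b_ub=[budget],           # total spending cannot exceed the budget
    bounds=list(zip(minimums, [None] * 4)),     # each function receives at least its floor
    method="highs",
)
print("optimal allocation:", np.round(res.x, 2))
```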
Abstract:
Over the years, the Brazilian agricultural research agency has contributed to solving social problems and to generating new knowledge, incorporating new advances and pursuing the country's technological independence through the transfer of the knowledge and technology generated. However, the process of transferring knowledge and technology has been a major challenge for public institutions. Embrapa is the largest and main Brazilian agricultural research company, with a staff of 9,790 employees, 2,440 of whom are researchers, and an annual budget of R$ 2.52 billion. It operates through 46 decentralized research units and coordinates the National Agricultural Research System (SNPA). Considering that technology transfer is the culmination of the effort and resources spent on generating knowledge and validates the research, this work aims to assess the performance of Embrapa Swine and Poultry along the broiler production chain and to propose a technology transfer model for this chain that can be used by public research institutions (IPPs). This study is justified by the importance of agricultural research for the country and by the importance of the institution addressed. The methodology used was a case study with a qualitative approach, documentary and bibliographic research, and interviews using semi-structured questionnaires. The survey was conducted in three stages. In the first stage, a diagnosis was made of the technology transfer (TT) process and of the contribution of Embrapa Swine and Poultry to the broiler supply chain. At this stage, bibliographic and documentary research and semi-structured interviews were used, involving broiler agro-industry agents, researchers at Embrapa Swine and Poultry, technology transfer professionals from Embrapa and Embrapa Swine and Poultry, and technology transfer managers and researchers from the Agricultural Research Service (ARS). In the second step, a model was developed for Embrapa's poultry technology transfer process; in this phase, documentary and bibliographic research and an analysis of the information obtained in the interviews were carried out. The third phase validated the proposed model with the various sectors of the broiler production chain. The data show that, although Embrapa Swine and Poultry develops technologies for the broiler production chain, the rate of adoption of these technologies by the chain is very low. It was also found that there is a gap between the institution and the various links of the chain. An observatory mechanism was proposed to bring Embrapa Swine and Poultry and the agents of the broiler chain closer together in order to identify and discuss research priorities. The proposed model seeks to improve the interaction between the institution and the chain, so as to identify the chain's real research demands and to pursue the joint development of solutions to these demands. The proposed TT model was approved by a large majority (96.77%) of the interviewed agents working in the various links of the chain, as well as by representatives (92%) of the entities linked to this chain. The acceptance of the proposed model demonstrates the chain's willingness to approach Embrapa Swine and Poultry and to seek joint solutions to existing problems.
Abstract:
This work aims to develop a process for the production of biodiesel from high-acidity oils, applying a two-step homogeneous-catalysis process. The first step is the ethyl esterification of the free fatty acids, catalysed by H2SO4 and carried out in the triglyceride medium; the second is the transesterification of the remaining triglycerides, carried out in the medium of the alkyl esters from the first step and catalysed by an alkali (NaOH) with ethyl or methyl alcohol. The esterification reaction was studied with a model mixture consisting of neutral soybean oil artificially acidified with 15 wt.% of PA-grade oleic acid. This value was adopted as a reference because certain regional fats (castor oil from family farming, slaughterhouse tallow, rice bran oil, etc.) contain 10-20 wt.% of free fatty acids. In both steps, ethanol is the reagent and also the solvent, and the mixture:alcohol molar ratio was one of the parameters investigated, at 1:3, 1:6 and 1:9. The other parameters were the temperature, 60 and 80 °C, and the catalyst concentration, 0.5, 1.0 and 1.5 wt.% (relative to the oil mass). The combination of these parameters resulted in 18 reactions. Among the reaction conditions studied, eight reached an acceptable acidity below 1.5 wt.%, making it possible to define the conditions for the optimal application of the second step. The best condition in this step was obtained when the reaction was conducted at 60 °C with 1 wt.% of H2SO4 and a 1:6 molar ratio. At the end of the first step, appropriate treatments such as catalyst removal were performed and their influence on the final acidity was studied, using washes with and without the addition of hexane, followed by evaporation or the addition of a drying agent. In the second step, oil:alcohol molar ratios of 1:6 and 1:9 with methyl and ethyl alcohol were studied, with 0.5 and 1 wt.% of NaOH, as well as the treatment of the reaction (washing or neutralization of the catalyst) at 60 °C, resulting in 16 experiments. The best condition in this second step was obtained with 0.5 wt.% of NaOH and a 1:6 oil:ethanol molar ratio, and only the reactions in which washes were applied showed acidity indices (<1.0 wt.%) consistent with the ANP parameters.
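A small sketch (illustrative only, not the author's procedure) of how the full-factorial grids described above expand into the 18 esterification and 16 transesterification runs:

```python
from itertools import product

# First step (esterification): 3 molar ratios x 2 temperatures x 3 H2SO4 levels = 18 runs.
esterification = list(product(["1:3", "1:6", "1:9"], [60, 80], [0.5, 1.0, 1.5]))

# Second step (transesterification): 2 molar ratios x 2 alcohols x 2 NaOH levels
# x 2 work-up treatments = 16 runs.
transesterification = list(product(["1:6", "1:9"], ["methanol", "ethanol"],
                                   [0.5, 1.0], ["washing", "neutralization"]))

print(len(esterification), "esterification runs")            # 18
print(len(transesterification), "transesterification runs")  # 16
```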