947 results for Negative stiffness structure, snap through, elastomers, hyperelastic model, root cause analysis


Relevance:

100.00%

Publisher:

Abstract:

1. Genome-wide association studies (GWAS) enable detailed dissections of the genetic basis for organisms' ability to adapt to a changing environment. In long-term studies of natural populations, individuals are often marked at one point in their life and then repeatedly recaptured. It is therefore essential that a method for GWAS includes the process of repeated sampling. In a GWAS, the effects of thousands of single-nucleotide polymorphisms (SNPs) need to be fitted, and any model development is constrained by the computational requirements. A method is therefore required that can fit a highly hierarchical model while remaining computationally fast enough to be useful. 2. Our method fits fixed SNP effects in a linear mixed model that can include both random polygenic effects and permanent environmental effects. In this way, the model can correct for population structure and model repeated measures. The covariance structure of the linear mixed model is first estimated and subsequently used in a generalized least squares setting to fit the SNP effects. The method was evaluated in a simulation study based on observed genotypes from a long-term study of collared flycatchers in Sweden. 3. The method presented here successfully estimated permanent environmental effects from simulated repeated-measures data. Additionally, we found that, especially for phenotypes with large between-year variation, the repeated-measurements model has a substantial increase in power compared to a model using average phenotypes as a response. 4. The method is available in the R package RepeatABEL. It increases power in GWAS with repeated measures, especially for long-term studies of natural populations, and the R implementation is expected to facilitate modelling of longitudinal data for studies of both animal and human populations.
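In symbols, the two-stage approach described above can be sketched as follows (a minimal formulation with assumed notation; the abstract describes the model only in words): the phenotype vector carries fixed effects including the tested SNP, a polygenic effect with relationship matrix A, and a permanent environmental effect shared across an individual's repeated records; the covariance is estimated once and reused for every SNP in a generalized least squares fit.

```latex
% Repeated-measures linear mixed model (notation assumed, not from the source)
\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{a} + \mathbf{Z}\mathbf{p} + \mathbf{e},
\qquad
\mathbf{a} \sim N(\mathbf{0}, \sigma_a^{2}\mathbf{A}), \quad
\mathbf{p} \sim N(\mathbf{0}, \sigma_p^{2}\mathbf{I}), \quad
\mathbf{e} \sim N(\mathbf{0}, \sigma_e^{2}\mathbf{I})

% GLS fit of each SNP effect, reusing the estimated covariance
\hat{\mathbf{V}} = \hat{\sigma}_a^{2}\mathbf{Z}\mathbf{A}\mathbf{Z}^{\top}
                 + \hat{\sigma}_p^{2}\mathbf{Z}\mathbf{Z}^{\top}
                 + \hat{\sigma}_e^{2}\mathbf{I},
\qquad
\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\top}\hat{\mathbf{V}}^{-1}\mathbf{X})^{-1}
                           \mathbf{X}^{\top}\hat{\mathbf{V}}^{-1}\mathbf{y}
```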

Relevance:

100.00%

Publisher:

Abstract:

Developing a theoretical framework for pervasive information environments is an enormous goal. This paper aims to provide a small step towards that goal. The following pages report on our initial investigations to devise a framework that will continue to support locative, experiential and evaluative data from 'user feedback' in an increasingly pervasive information environment. We loosely outline this framework by developing a methodology capable of moving from rapid deployment of software and hardware technologies towards realistic immersive experience of pervasive information. We propose various technical solutions and address a range of problems, such as information capture through a novel model of sensing, processing, visualization and cognition.

Relevance:

100.00%

Publisher:

Abstract:

Paget's disease of bone (PDB) is a skeletal disorder characterized by a focal and disorganized increase in bone remodeling. PDB osteoclasts (OCs) are larger, more active and more numerous, as well as resistant to apoptosis. Although the precise cause of PDB remains unknown, mutations in the SQSTM1 gene, which encodes the p62 protein, have been described in a substantial proportion of PDB patients. Among these mutations, the P392L substitution is the most frequent, and overexpression of p62P392L in OCs generates a partial pagetic phenotype. The p62 protein is involved in multiple processes, from the control of NF-κB signaling to autophagy. In human OCs, a multiprotein complex composed of p62 and the kinases PKCζ and PDK1 forms in response to stimulation by Receptor Activator of Nuclear factor Kappa-B Ligand (RANKL), the main cytokine involved in OC formation and activation. We demonstrated that PKCζ is involved in RANKL-induced NF-κB activation in OCs, and in its constitutive activation in the presence of p62P392L. We also observed an increase in PKCζ-mediated phosphorylation of p65 at Ser536, which is independent of IκB and could represent an alternative NF-κB activation pathway in the presence of the p62 mutation. We showed that phosphorylation levels of the survival regulators ERK and Akt are increased in PDB OCs and reduced upon PDK1 inhibition. Phosphorylation of the mTOR substrate 4EBP1 and of the regulatory protein Raptor was evaluated; both were increased in pagetic OCs and are regulated by PDK1 inhibition. In addition, the elevated basal levels of LC3II (associated with autophagic structures) observed in pagetic OCs were associated with a defect in autophagosome degradation that is independent of the p62P392L mutation, along with a reduced, PDK1-dependent sensitivity to autophagy induction. Moreover, PDK1 inhibition induces apoptosis in both control and pagetic OCs and leads to a significant reduction in bone resorption. PDK1/Akt signaling could therefore represent an important checkpoint in the activation of pagetic OCs. These results demonstrate the importance of several p62-associated kinases in the over-activation of pagetic OCs, whose signaling converges on increased survival and resorption function, and also affects the autophagic process.

Relevance:

100.00%

Publisher:

Abstract:

PEDRO, Edilson da Silva Pedro. Gestão tecnológica: um estudo de caso no setor sucroalcooleiro. 2004. 145f. Dissertação (Mestrado em Engenharia de Produção) - Universidade Federal de São Carlos, São Carlos, 2004.

Relevance:

100.00%

Publisher:

Abstract:

Research on the strategic management of resources in small consulting firms that are international or planning to operate internationally is scarce or non-existent. It is, however, important to start generating the theories that will support managers in their decision making and planning. This thesis investigates how small management consulting firms manage their internal and external resources while operating in international markets. Throughout the study, aspects related to internal and external resource management, as well as management strategies in these firms, have been examined in relation to the firms' internationalisation activities. A qualitative analysis was carried out in which four small consulting firms, in export management consulting, integration services consulting, software development consulting and business solutions consulting, were interviewed. In order to generate a holistic understanding of the study, the author selected the Value Enhanced Collaborative Working (VECW) model as an analysis framework. The model focuses on people, processes and tools as key elements that small consulting firms consider when thinking about managing their international operations. The findings of the thesis reveal that internal and external resources in the interviewed consulting firms are viewed similarly but managed differently depending on the nature and size of the firm. The firms' management strategies concentrate on adequate utilisation of employees' motivation and experience, effective stakeholder management, various administrative evaluation processes and tools, the ability to build useful networks, constant improvement through employee training, feedback from employees and customers, and enhanced freedom to support employees' creativity. Further research examining functional administrative tools that small consulting firms could use to assess their resource capabilities when planning to become international would benefit smaller businesses in terms of resource management and certainty in planning.

Relevance:

100.00%

Publisher:

Abstract:

Sustainability in buildings reduces environmental impact while contributing to social welfare and to the health and productivity of occupants. The search for a way of building that meets the aspirations and development of humanity without degrading the environment has become the great challenge of contemporary architecture. The incorporation of sustainability principles, through careful choice of design solutions, contributes to better economic and thermal performance of the building, as well as functional and psychological comfort for its users. Based on this general understanding, this paper presents an architectural project for a health care facility whose design solutions carefully follow the relevant legislation and focus on sustainability. The methodology began with studies on death verification services, sustainability, and their application in construction, developed through research into academic studies and analysis of architectural projects, which served as references for the solutions adopted. As part of the project analysis, a visit was made to the death verification service in the city of Palmas, Tocantins, providing information that, together with the relevant legislation, led to the functional program and preliminary dimensioning of the building to be designed. The result of this programming was a set of individual records for each environment, with information on environmental restrictions, space required for the activities, desirable flows and sustainability strategies, which can be considered the first relevant product of the professional master's degree. Finally, the basic architectural design of a Death Verification Service (SVO/RN, from the Portuguese acronym) was outlined, with four guiding points in the design process: the use of bioclimatic architecture as the main design feature, the use of resources that cause minimal harm to the environment, the use of modulation and structure as a form of rationalization, and the search for solutions that ensure environmental and psychological comfort for users. Importantly, besides addressing a theme rare in the literature on architectural projects, the whole project was drawn up on design criteria that contribute to environmental sustainability, with emphasis on thermal performance, energy efficiency and the reuse of rainwater.

Relevance:

100.00%

Publisher:

Abstract:

In this study, the relationship between heterogeneous nucleate boiling surfaces and the deposition of suspended metallic colloidal particles, popularly known as crud or corrosion products in the process industries, on those heterogeneous sites is investigated. Various researchers have reported that hematite is a major constituent of crud, which makes it the primary material of interest; however, the models developed in this work are independent of material choice. Qualitative hypotheses on the deposition process under boiling proposed by previous researchers have been tested and fail to explain several of the physical mechanisms observed and analyzed. In this study a quantitative model of the deposition rate has been developed on the basis of bubble dynamics and the colloid-surface interaction potential. Boiling from a heating surface aids aggregation of the metallic particulates (nanoparticles, crud particulate, etc.) suspended in a liquid, which helps transport them to heating surfaces. Consequently, clusters of particles deposit onto the heating surfaces due to various interactive forces, resulting in the formation of porous or impervious layers. The deposit layer grows or recedes depending upon variations in interparticle and surface forces, fluid shear, fluid chemistry, etc. This deposit layer in turn affects the rate of bubble generation, the formation of porous chimneys, the critical heat flux (CHF) of surfaces, and the activation and deactivation of nucleation sites on the heating surfaces. The effect of boiling on colloidal deposition poses several problems, ranging from research initiatives involving nanofluids as a heat transfer medium to industrial applications such as light water nuclear reactors. This study attempts to integrate colloid and surface science with vapor bubble dynamics, boiling heat transfer and evaporation rate. Pool boiling experiments with dilute metallic colloids have been conducted to investigate several parameters impacting the system. The experimental data available in the literature were obtained by flow experiments, which do not help in correlating the boiling mechanism with the deposition amount or structure. With the help of experimental evidence and analysis, the previously proposed hypothesis of particle transport to the contact line due to hydrophobicity has been challenged. The experimental observations suggest that deposition occurs around the bubble contact line and extends underneath the area of the bubble microlayer as well. During evaporation, a concentration gradient of a non-volatile species is created, which induces osmotic pressure. The osmotic pressure developed inside the microlayer draws more particles into the microlayer region or towards the contact line. The colloidal escape time is longer than the evaporation time, which leads to the aggregation of particles in the evaporating microlayer. These aggregated particles deposit onto or are removed from the heating surface, depending upon their total interaction potential. The interaction potential has been computed from the surface charge and the van der Waals potential for the materials in aqueous solutions. Based upon the interaction-force boundary layer thickness, which is governed by the Debye length (i.e., ionic concentration and pH), a simplified quantitative model for the attachment kinetics is proposed. This attachment kinetics model gives reasonable results in predicting the attachment rate against data reported by previous researchers.
The attachment kinetics study has been carried out for different pH levels and particle sizes for hematite particles. Quantification of colloidal transport under boiling is done using overall average evaporation rates, because waiting times for bubbles at the same position are generally much larger than growth times. In other words, from a larger measurable-scale perspective, the frequency of bubbles dictates the rate of particle collection rather than the evaporation rate during the microlayer evaporation of one bubble. The combination of attachment kinetics and colloidal transport kinetics has been used to build a consolidated model for predicting the amount of deposition, validated against high-fidelity experimental data. To understand and explain boiling characteristics, high-speed visualization of bubble dynamics from a single artificial large cavity and from multiple naturally occurring cavities was conducted. A bubble growth and departure dynamics model was developed for artificial active sites and validated with the experimental data. The variation of bubble departure diameter with wall temperature was analyzed with experimental results and shows coherence with earlier studies. However, deposit traces after the boiling experiments show that the bubble contact diameter is essential to predict bubble departure dynamics, which has previously been ignored by various researchers. A relationship between the porosity of colloid deposits and bubbles under the influence of Jakob number, subcooling and particle size has been developed. This can be further utilized to tailor surface wettability. Designed porous surfaces have a vast range of applications, from high wettability, such as high-critical-heat-flux boilers, to low wettability, such as efficient condensers.
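As a minimal sketch of the colloid-surface interaction computation mentioned above (DLVO-style: van der Waals attraction plus an electrical double layer screened over the Debye length), assuming a sphere-plate geometry and the low-potential constant-potential (HHF) double-layer form; all parameter values are illustrative assumptions, not data from the thesis:

```python
# Hedged sketch: DLVO-style total interaction potential between a colloidal
# sphere (e.g., hematite) and a flat heating surface in aqueous solution.
import numpy as np

eps0 = 8.854e-12   # vacuum permittivity, F/m
kB = 1.381e-23     # Boltzmann constant, J/K
e = 1.602e-19      # elementary charge, C
NA = 6.022e23      # Avogadro's number, 1/mol

def debye_length(ionic_strength_M, eps_r=78.5, T=373.0):
    """Debye screening length (m); ionic strength in mol/L."""
    n = ionic_strength_M * 1e3 * NA                  # ions per m^3
    return np.sqrt(eps0 * eps_r * kB * T / (2.0 * n * e**2))

def dlvo_sphere_plate(h, a, psi_p, psi_s, hamaker, ionic_strength_M,
                      eps_r=78.5, T=373.0):
    """Total DLVO potential (J) vs. separation h (m) for a sphere of radius a.
    Van der Waals (Derjaguin limit, h << a) + constant-potential EDL (HHF)."""
    kappa = 1.0 / debye_length(ionic_strength_M, eps_r, T)
    v_vdw = -hamaker * a / (6.0 * h)
    pref = np.pi * eps0 * eps_r * a
    v_edl = pref * (2.0 * psi_p * psi_s
                    * np.log((1 + np.exp(-kappa * h)) / (1 - np.exp(-kappa * h)))
                    + (psi_p**2 + psi_s**2) * np.log(1 - np.exp(-2 * kappa * h)))
    return v_vdw + v_edl

# Illustrative use: 100 nm particle, -25 mV surfaces, 1e-3 M, Hamaker 1e-20 J.
h = np.linspace(0.5e-9, 50e-9, 200)
v = dlvo_sphere_plate(h, 50e-9, -0.025, -0.025, 1e-20, 1e-3)
print("max energy barrier (kT):", (v / (kB * 373.0)).max())
```

Whether the aggregated particles attach or detach then depends on the sign and height of this barrier, which shifts with pH and ionic concentration through the surface potentials and the Debye length.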

Relevance:

100.00%

Publisher:

Abstract:

This thesis examines the business potential of a reusable transport packaging. The study uses as an example the actual packaging process of a Finnish furniture company. The goal is to formulate a business model that can be used to start a business partnership around the use of the packaging. The thesis examines the current state of the packaging and the packing process, the potential benefits in general, and possible business models for adopting the packaging. At present, packing into single-use packaging is a cost-inefficient solution with usability problems. Adopting the reusable packaging enables a significant improvement in the packaging process. As a result of the work, three business models are presented. One of the business models is demonstrably the most workable, and it is proposed for adoption in the use of reusable packaging for transporting furniture. In this model, the transport packaging serves as a professional tool used internally between its owner, i.e., the manufacturing company, and the transport company. This narrow-scope solution is justified by, among other things, a clear cost structure and the lack of experience with the performance of the concept and the product. The model provides a test platform for the use of the packaging, and additional experience of its real-world performance may broaden the business model in the future.

Relevance:

100.00%

Publisher:

Abstract:

Network intrusion detection sensors are usually built around low-level models of network traffic. This means that their output is of a similarly low level and, as a consequence, is difficult to analyze. Intrusion alert correlation is the task of automating some of this analysis by grouping related alerts together. Attack graphs provide an intuitive model for such analysis. Unfortunately, alert flooding attacks can still cause a loss of service on sensors, and when performing attack-graph correlation there can be a large number of extraneous alerts included in the output graph. This obscures the fine structure of genuine attacks and makes them more difficult for human operators to discern. This paper explores modified correlation algorithms which attempt to minimize the impact of such attacks.
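A minimal sketch of what attack-graph style alert correlation can look like, assuming alerts carry (time, source, destination, signature) tuples: alerts are linked when one alert's destination later acts as a source, and a fan-out cap illustrates one simple way to bound the damage of alert flooding. This is a generic illustration, not the paper's specific algorithm:

```python
# Hedged sketch of attack-graph style alert correlation: alerts are linked
# when one alert's target becomes a later alert's source (a multi-stage
# attack pattern). The pruning cap is one way to limit alert flooding.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Alert:
    ts: float   # detection time
    src: str    # source address
    dst: str    # destination address
    sig: str    # signature / alert type

def correlate(alerts, max_edges_per_node=10):
    """Return directed edges (a, b) meaning alert a plausibly precedes b."""
    alerts = sorted(alerts, key=lambda a: a.ts)
    by_src = defaultdict(list)
    for a in alerts:
        by_src[a.src].append(a)
    edges = []
    for a in alerts:
        successors = [b for b in by_src[a.dst] if b.ts > a.ts]
        # Cap fan-out so a flood of spoofed alerts cannot blow up the graph.
        edges.extend((a, b) for b in successors[:max_edges_per_node])
    return edges

alerts = [
    Alert(1.0, "10.0.0.5", "10.0.0.9", "portscan"),
    Alert(2.5, "10.0.0.9", "10.0.0.12", "exploit"),
    Alert(3.1, "10.0.0.12", "10.0.0.13", "lateral-move"),
]
for a, b in correlate(alerts):
    print(f"{a.sig} -> {b.sig}")
```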

Relevance:

100.00%

Publisher:

Abstract:

The impact of end-customer quality complaints directly related to automotive components has shown a negative trend at the European level for the entire automotive industry. This research therefore concentrates on the most important items of the Pareto chart, seeking to understand the failure type and mechanism involved and the link and impact of the design and process parameters, and concludes with the development of one of the most desired tools of the company hosting this project, a European methodology for classifying terminal defects, together with a list of real improvement opportunities based on the measurement and analysis of actual data. Through the development of the terminal defect classification methodology, which is considered a valuable asset to the company, all other companies of the YAZAKI group will be able to characterize terminals as brittle or ductile and thus set in motion, more efficiently, the various existing internal procedures for safeguarding the components, improving manufacturing efficiency. From brief observation alone, nothing absolute can be said about the failure causes. The base materials, the design, handling during manufacture and storage, and the cold work performed by plastic deformation all play an important role. It was expected, however, that the failure was due to a combination of factors rather than a single cause. In order to acquire greater knowledge about this problem, unexplored by the company until the start of this study, a thorough review of the existing literature on the subject was conducted, real production sites were visited and the actual parts were tested in a laboratory environment. To answer many of the major issues raised throughout the investigation, theoretical concepts covered in the literature review were used extensively, with a view to understanding the relationships among the parameters concerned. It should be noted that technical studies on copper and its alloys are hard to find and do not provide all the desirable information. This investigation was performed as a YAZAKI Europe Limited project and as a Master's thesis for the Instituto Superior de Engenharia do Porto, conducted over nine months in 2012/2013.

Relevance:

100.00%

Publisher:

Abstract:

The recent advent of new technologies has led to huge amounts of genomic data. With these data come new opportunities to understand the biological cellular processes underlying hidden regulation mechanisms and to identify disease-related biomarkers for informative diagnostics. However, extracting biological insights from immense amounts of genomic data is a challenging task. Therefore, effective and efficient computational techniques are needed to analyze and interpret genomic data. In this thesis, novel computational methods are proposed to address such challenges: a Bayesian mixture model, an extended Bayesian mixture model, and an Eigen-brain approach. The Bayesian mixture framework integrates the Bayesian network with the Gaussian mixture model. Based on the proposed framework and its conjunction with K-means clustering and principal component analysis (PCA), biological insights are derived, such as context-specific/dependent relationships and nested structures within microarray data where biological replicates are encapsulated. The Bayesian mixture framework is then extended to explore posterior distributions of network space by incorporating a Markov chain Monte Carlo (MCMC) model. The extended Bayesian mixture model summarizes the sampled network structures by extracting biologically meaningful features. Finally, an Eigen-brain approach is proposed to analyze in situ hybridization data for the identification of cell-type-specific genes, which can be useful for informative blood diagnostics. Computational results with region-based clustering reveal critical evidence of consistency with brain anatomical structure.
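The thesis's Bayesian mixture framework is not reproduced here; the sketch below only illustrates the general pipeline shape the abstract describes (dimensionality reduction followed by mixture-model clustering of expression profiles), using scikit-learn on synthetic data:

```python
# Hedged illustration of a PCA + Gaussian-mixture clustering pipeline on a
# gene-expression-like matrix. Generic sklearn sketch, not the thesis's
# Bayesian mixture model; the data here are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "microarray": 300 genes x 20 samples, two latent groups of genes.
X = np.vstack([rng.normal(0.0, 1.0, (150, 20)),
               rng.normal(2.0, 1.0, (150, 20))])

Z = PCA(n_components=5).fit_transform(X)          # project genes onto 5 PCs
gmm = GaussianMixture(n_components=2, random_state=0).fit(Z)
labels = gmm.predict(Z)                           # cluster assignment per gene
print("cluster sizes:", np.bincount(labels))
```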

Relevance:

100.00%

Publisher:

Abstract:

The work focused on preparing the NP EN ISO/IEC 17025 accreditation of the Metrology Laboratory of the company Frilabo to provide services in the temperature field: the testing of thermal chambers and the calibration of industrial thermometers. Given the scope of the work, this thesis covers theoretical concepts of temperature and uncertainty, as well as technical considerations on temperature measurement and uncertainty calculation. Considerations on the different types of thermal chambers and thermometers are also presented. The text presents the documents prepared by the author on the thermal chamber test procedures and the corresponding uncertainty calculation procedure, as well as on the calibration procedures for industrial thermometers and the corresponding uncertainty calculation procedure. For the thermal chamber tests and thermometer calibrations, the author prepared flowcharts of the temperature measurement methodology in tests, the temperature measurement methodology in calibrations, and the corresponding uncertainty calculations. The annexes present several documents, such as the spreadsheet template for processing test data, the spreadsheet template for processing calibration data, the test report template, the calibration certificate template, and spreadsheets for client/equipment management and for the automatic numbering of test reports and calibration certificates that meet the laboratory's management requirements. The annexes also contain all figures on temperature monitoring in the thermal chambers and on the arrangement of the thermometers inside the chambers. All unreferenced figures throughout the document were adapted or produced by the author. The decision to extend the scope of Frilabo's Metrology Laboratory accreditation to thermometer calibration stems from the fact that, being accredited as a testing laboratory in the temperature field, establishing the traceability of the measurement standards in-house would allow optimized and cost-effective resource management. The methodology for preparing the entire accreditation process of Frilabo's Metrology Laboratory was developed by the author and is set out throughout the thesis, including data relevant to achieving the accreditation in both scopes. All the work will be assessed by IPAC (Instituto Português de Acreditação), the body that grants accreditation in Portugal. It will audit the company based on the procedures developed and the results obtained, the most important of which is the Best Measurement Uncertainty, also known as Best Measurement Capability (BMC), both for thermal chamber testing and for thermometer calibration, thereby complementing the services provided to Frilabo's loyal customers. Thermal chambers and industrial thermometers are widely used in several industrial segments, in engineering, medicine, teaching and research institutions, their purposes being, respectively, the simulation of specific controlled conditions and the measurement of temperature.
For accredited entities such as laboratories, it is essential that measurements made with and in these types of equipment exhibit metrological reliability, since inadequate measurement results can lead to mistaken conclusions about the tests performed. The results obtained in the thermal chamber tests and thermometer calibrations are considered good and acceptable, since the best uncertainties obtained can be compared, through public consultation of the IPAC Technical Annex, with the uncertainties of other accredited laboratories in Portugal. From a more experimental standpoint, in thermal chamber testing the achievement of lower or higher uncertainties depends mostly on the behavior, characteristics and state of repair of the chambers, which makes the temperature stabilization process inside them important. Most uncertainty sources in thermometer calibration come from the characteristics and specifications given by the equipment manufacturer, which contribute with the same weight to the expanded uncertainty calculation (manufacturer accuracy, uncertainties inherited from calibration certificates, and the stability and uniformity of the thermal medium in which the calibrations are performed). In thermometer calibration, the lowest uncertainties were obtained for thermometers with the lowest resolutions: thermometers with a resolution of 1 °C did not detect the variations of the thermal bath. For thermometers with finer resolutions, the weight of the reading-dispersion contribution in the uncertainty calculation can vary with the thermometer's characteristics; for example, thermometers with a 0.1 °C resolution showed the largest reading-dispersion contribution. It can be concluded that laboratory accreditation is by no means an easy process. Aspects that can compromise accreditation include the poor selection of the technician(s) and equipment that will perform the measurements (poorly trained technicians; equipment unsuited to the range, badly calibrated, etc.). If this selection is not done well, it will compromise the whole process in the following steps. All laboratory stakeholders must also be involved (the quality manager, the technical manager and the technicians); only then is it possible to achieve the intended quality and the continuous improvement of the laboratory's accreditation. Another important aspect in preparing a laboratory accreditation is researching the necessary and adequate documentation in order to make correct decisions when drafting the procedures leading to it. The laboratory must show and prove its competence through records. Finally, competence is the key word of an accreditation, since it manifests itself in the people, equipment, methods, facilities and other aspects of the institution to which the laboratory under accreditation belongs.
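As a minimal, hedged sketch of the GUM-style uncertainty budget implied above, combining the named contribution types (manufacturer accuracy, inherited certificate uncertainty, bath stability and uniformity, resolution, reading dispersion) by root-sum-of-squares and expanding with a coverage factor k = 2; all numeric values are illustrative assumptions, not Frilabo's accredited figures:

```python
# Hedged GUM-style sketch of a thermometer-calibration uncertainty budget.
import math

def standard_u(value, divisor):
    """Convert a raw contribution to a standard uncertainty."""
    return value / divisor

contributions = {
    # (raw value in deg C, divisor: sqrt(3) rectangular, 2 for k=2 certs, 1 type A)
    "manufacturer accuracy":       (0.10, math.sqrt(3)),
    "reference certificate (k=2)": (0.05, 2.0),
    "bath stability":              (0.02, math.sqrt(3)),
    "bath uniformity":             (0.03, math.sqrt(3)),
    "resolution (0.1 degC / 2)":   (0.05, math.sqrt(3)),
    "reading dispersion (type A)": (0.04, 1.0),
}

u_c = math.sqrt(sum(standard_u(v, d) ** 2 for v, d in contributions.values()))
U = 2.0 * u_c   # expanded uncertainty, coverage factor k = 2 (~95 %)
print(f"combined u_c = {u_c:.3f} degC, expanded U (k=2) = {U:.3f} degC")
```

This structure also shows why a coarse 1 °C resolution can yield a deceptively small type A term: when the instrument cannot resolve the bath's variations, the reading-dispersion contribution collapses while the resolution term dominates.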

Relevance:

100.00%

Publisher:

Abstract:

Objective: To investigate the impact of oral health on the quality of life of school-going adolescents, associating it with sociodemographic conditions. Methods: A cross-sectional, quantitative field study carried out in 2012 in the municipality of Sumé-PB with 184 adolescents aged 15 to 19. To assess impact, the Oral Health Impact Profile (OHIP-14) questionnaire was administered to the participants, while parents or guardians answered a second questionnaire to provide data on sociodemographic conditions. The chi-square test was used to associate the impact of oral health on quality of life with the sociodemographic variables studied, with p<0.05 considered significant. Results: Overall, impact was considered weak in 167 participants (90.8%). "Physical pain" was the quality-of-life dimension most affected by oral problems among those resulting in medium impact (22.8%; n=42). Only the variables "housing status" and "accommodation" were associated with overall impact (p<0.05). The small percentages of strong overall impact (1.1%; n=2) were related to adolescents whose mothers only studied up to elementary school, or to families living on one minimum wage or less (1.1%). Conclusion: Oral health conditions had a weak negative impact on the quality of life of the adolescents investigated. Analysis of the individuals' sociodemographic conditions in relation to the overall impact of oral-health-related quality of life showed associations with the variables "housing status" and "accommodation".

Relevance:

100.00%

Publisher:

Abstract:

C-phycocyanin (C-FC) is a phycobiliprotein, a natural blue pigment with several applications in the food, pharmaceutical and biomedical industries, depending on its specific purity grade, which can range from 0.7 to 4.0 with a corresponding increase in commercial value. This purity is achieved through several purification techniques, which can be applied in different sequences. One of these protein purification processes is ion-exchange chromatography, which uses exchangers that adsorb proteins as a result of ionic interactions between the protein surface and the exchanger. Expanded-bed resins and columns can be used to increase the productivity of this technique. It is essential to know the profile of the adsorption process in order to better apply it as a tool for the design and optimization of operating parameters. Another technology for processing biomolecules is ultrafiltration. This technique is applicable at large scale, has low application complexity and can be performed under mild conditions, minimizing damage to the product. To increase the stability of C-FC and facilitate its application, recent techniques not yet explored for this purpose can be evaluated, such as nanofibers obtained by electrospinning. These fibers have an extremely high specific surface area due to their small diameter. The objective of this work was to evaluate adsorption parameters and different techniques for the purification of C-phycocyanin from Spirulina platensis, and to obtain polymeric nanofibers loaded with C-phycocyanin. The work was divided into four articles. In the first article, the adsorption parameters and isotherms of C-phycocyanin on the expanded-bed ion-exchange resin Streamline® DEAE were evaluated. The highest partition coefficient was obtained at pH 7.5, at temperatures of 15 and 25 °C. The C-phycocyanin adsorption isotherms were well represented by the Langmuir, Freundlich and Langmuir-Freundlich models, with the Qm and Kd values estimated from the Langmuir isotherm being 33.92 mg·mL-1 and 0.123 mg·mL-1, respectively. In the second article, the purification of C-FC to food grade using ultrafiltration (UF) was evaluated. With the 50 kDa membrane, only temperature and the application of different diafiltration (DF) cycles significantly influenced the purification and recovery of C-phycocyanin. A gradual increase in the number of cycles and diafiltration prior to ultrafiltration (DF/UF) were then applied, yielding a C-phycocyanin extract with a purity of 0.95. In the third article, purification processes involving the different techniques were proposed to obtain C-FC of different purities: fixed-bed ion-exchange chromatography followed by DF/UF yields C-FC for cosmetic use; ammonium sulfate precipitation followed by DF/UF yields C-FC for use in biomarkers; and a sequence of ammonium sulfate precipitation, DF/UF and fixed-bed ion-exchange chromatography leads to analytical-grade C-FC. In the last article, C-FC was incorporated into polyethylene oxide (PEO) nanofibers by electrospinning. The conductivity of the C-FC/PEO solution and the structure and thermogravimetric behavior of the resulting nanofibers were determined. Polymer solutions at concentrations of 6 and 8% led to the formation of homogeneous nanofibers with an average diameter below 800 nm and no bead formation. Thermogravimetric analysis showed an increase in the thermal resistance of the C-FC incorporated into the fibers.
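For reference, the Langmuir model named in the first article relates the adsorbed amount at equilibrium to the liquid-phase equilibrium concentration; with the reported parameters (notation assumed) it reads:

```latex
Q^{*} = \frac{Q_m\, C^{*}}{K_d + C^{*}},
\qquad Q_m = 33.92\ \text{mg}\cdot\text{mL}^{-1},
\quad K_d = 0.123\ \text{mg}\cdot\text{mL}^{-1}
```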

Relevance:

100.00%

Publisher:

Abstract:

A radar scatterometer operates by transmitting a pulse of microwave energy toward the ocean's surface and measuring the normalized (per-unit-surface) radar backscatter coefficient (σ°). The primary application of scatterometry is the measurement of near-surface ocean winds. By combining σ° measurements from different azimuth angles, the 10 m vector wind can be determined through a Geophysical Model Function (GMF), which relates wind and backscatter. This paper proposes a mission concept for the measurement of both oceanic winds and surface currents, which makes full use of earlier C-band radar remote sensing experience. For the determination of ocean currents in particular, the novel idea of using two chirps of opposite slope is introduced. The fundamental processing steps required to retrieve surface currents are given together with their associated accuracies. A detailed description of the mission proposal and comparisons between real and retrieved surface currents are presented. The proposed ocean Doppler scatterometer can be used to generate global surface ocean current maps with accuracies better than 0.2 m/s at a spatial resolution better than 25 km (i.e., 12.5 km spatial sampling) on a daily basis. These maps will provide insight into upper-ocean mesoscale dynamics. The work lies at a frontier: the present inability to measure ocean currents from space in a consistent and synoptic manner represents one of the greatest weaknesses in ocean remote sensing.
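One standard way to see why two chirps of opposite slope can separate delay (range) and Doppler: for a linear FM pulse with chirp rate ±α, a Doppler shift f_D biases the measured delay in opposite directions, so averaging and differencing the two measurements recovers both, and the line-of-sight surface velocity then follows from the radar wavelength λ. This is a sketch with assumed notation and sign convention, not the mission's exact processing chain:

```latex
% Range-Doppler coupling for linear FM chirps of rate \pm\alpha
% (notation and sign convention assumed)
\tau_{\pm} \approx \tau \mp \frac{f_D}{\alpha}
\;\;\Longrightarrow\;\;
\tau = \frac{\tau_{+} + \tau_{-}}{2},
\qquad
f_D = \frac{\alpha\,(\tau_{-} - \tau_{+})}{2},
\qquad
v_r = \frac{\lambda f_D}{2}
```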