937 results for ISE and ITSE optimization


Relevance:

100.00%

Publisher:

Abstract:

The development of new, health-supporting foods of high quality and the optimization of food-technological processes today require the application of statistical methods of experimental design. The principles and steps of statistical planning and evaluation of experiments are explained. Using the development of a gluten-free rusk (zwieback) enriched with roughage compounds as an example, the application of a simplex-centroid mixture design is shown. The results are illustrated with different graphics.
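
For orientation, the sketch below enumerates the runs of a simplex-centroid mixture design for a hypothetical three-component blend (the ingredient names are placeholders, not the recipe studied in the abstract): every non-empty subset of components is blended in equal proportions.

```python
from itertools import combinations

def simplex_centroid(components):
    """Enumerate simplex-centroid design points: each non-empty subset
    of components is blended in equal proportions (fractions sum to 1)."""
    points = []
    for k in range(1, len(components) + 1):
        for subset in combinations(components, k):
            share = 1.0 / k
            points.append({c: (share if c in subset else 0.0) for c in components})
    return points

# Hypothetical ingredient names for a three-component blend (2^3 - 1 = 7 runs).
for run in simplex_centroid(["flour_A", "flour_B", "fiber"]):
    print(run)
```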

Relevance:

100.00%

Publisher:

Abstract:

Principal component analysis (PCA) is well recognized for dimensionality reduction, and kernel PCA (KPCA) has also been proposed for statistical data analysis. However, KPCA fails to detect the nonlinear structure of data well when outliers exist. To address this problem, this paper presents a novel algorithm, named iterative robust KPCA (IRKPCA). IRKPCA deals well with outliers and can be carried out in an iterative manner, which makes it suitable for processing incremental input data. As in traditional robust PCA (RPCA), a binary field is employed to characterize the outlier process, and the optimization problem is formulated as maximizing the marginal distribution of a Gibbs distribution. In this paper, this optimization problem is solved by stochastic gradient descent techniques. In IRKPCA, the outlier process lies in a high-dimensional feature space, and therefore the kernel trick is used. IRKPCA can be regarded as a kernelized version of RPCA and as a robust form of the kernel Hebbian algorithm. Experimental results on synthetic data demonstrate the effectiveness of IRKPCA. © 2010 Taylor & Francis.
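
As context for the abstract, a minimal plain (non-robust) kernel PCA is sketched below: build an RBF Gram matrix, double-center it, and project onto the leading eigenvectors. This is only the baseline that robust variants such as IRKPCA extend with an outlier field and iterative updates; it is not the authors' algorithm.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Plain (non-robust) kernel PCA with an RBF kernel: Gram matrix,
    double centering, eigendecomposition, projection. Baseline only."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF Gram matrix
    n = K.shape[0]
    one_n = np.ones((n, n)) / n
    K_c = K - one_n @ K - K @ one_n + one_n @ K @ one_n             # double centering
    eigvals, eigvecs = np.linalg.eigh(K_c)                          # ascending order
    idx = np.argsort(eigvals)[::-1][:n_components]                  # top components
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return K_c @ alphas                                             # projected samples

X = np.random.RandomState(0).randn(100, 3)
print(kernel_pca(X, n_components=2, gamma=0.5).shape)  # (100, 2)
```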

Relevance:

100.00%

Publisher:

Abstract:

3D geographic information systems (GIS) are data- and computation-intensive in nature. Internet users are usually equipped with low-end personal computers and network connections of limited bandwidth. Data reduction and performance optimization techniques are therefore of critical importance in quality of service (QoS) management for online 3D GIS. In this research, QoS management issues in distributed 3D GIS presentation were studied to develop 3D TerraFly, an interactive 3D GIS that supports high-quality online terrain visualization and navigation.

To tackle the QoS management challenges, a multi-resolution rendering model, adaptive level of detail (LOD) control, and mesh simplification algorithms were proposed to effectively reduce terrain model complexity. The rendering model is adaptively decomposed into sub-regions of up to three detail levels according to viewing distance and other dynamic quality measurements. The mesh simplification algorithm was designed as a hybrid algorithm that combines edge straightening and quad-tree compression to reduce mesh complexity by removing geometrically redundant vertices. The main advantage of this mesh simplification algorithm is that the grid mesh can be processed directly in parallel without triangulation overhead. Algorithms facilitating remote access and distributed processing of volumetric GIS data, such as data replication, directory service, request scheduling, predictive data retrieval, and caching, were also proposed.

A prototype of 3D TerraFly implemented in this research demonstrates the effectiveness of the proposed QoS management framework in handling interactive online 3D GIS. The system implementation details and future directions of this research are also addressed in this thesis.
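
As an illustration of the distance-driven decomposition into up-to-three detail levels described above, a sub-region might be assigned a level as in the sketch below; the thresholds are hypothetical placeholders, not TerraFly's actual parameters.

```python
def choose_lod(view_distance_m, near=500.0, far=2000.0):
    """Assign one of three detail levels to a terrain sub-region based on
    viewing distance. Thresholds are illustrative placeholders; a real
    system would also fold in other dynamic quality measurements."""
    if view_distance_m < near:
        return 0   # full-resolution mesh
    elif view_distance_m < far:
        return 1   # simplified mesh (e.g., quad-tree compressed)
    else:
        return 2   # coarsest mesh

print([choose_lod(d) for d in (100, 1200, 5000)])  # [0, 1, 2]
```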

Relevance:

100.00%

Publisher:

Abstract:

Characterizing engineered human lung tissue is an important step in developing a functional tissue replacement for lung tissue repair and in vitro analysis. Small tissue constructs were grown by seeding IMR-90 fetal lung fibroblasts and adult microvascular endothelial cells onto a polyglycolic acid (PGA) polymer template. Introducing the constructs to dynamic culture conditions inside a bioreactor facilitated three-dimensional growth, as seen in scanning electron microscopy (SEM) images. Characterization of the resultant tissue samples was done using SEM imagery, tensile tests, and biochemical assays to quantify extracellular matrix (ECM) composition. Tensile tests of the engineered samples indicated an increase in mechanical properties compared with blank constructs. Elastin and collagen content averaged 3.19% and 15.49%, respectively, relative to the total mass of the tissue samples. The presence of elastin and collagen within the constructs most likely explains the mechanical differences that we noted. These findings suggest that the necessary ECM can be established in engineered tissue constructs and that optimization of this procedure has the capacity to generate the load-bearing elements required for construction of a functional lung tissue equivalent.

Relevance:

100.00%

Publisher:

Abstract:

This work consists of the conception, development, and implementation of a computational CAE routine with algorithms suitable for stress and strain analysis. The system was integrated into an academic software package named OrtoCAD. The expansion algorithms for the CAE interface produced in this work were developed in FORTRAN with the objective of broadening the applications of two earlier works of PPGEM-UFRN: the design and fabrication of an electromechanical reader and the OrtoCAD software. OrtoCAD is an interface that originally included the visualization of prosthetic sockets from data obtained with the electromechanical reader (LEM). The LEM is essentially a three-dimensional scanner based on reverse engineering. First, the geometry of a residual limb (i.e., the remaining part of an amputated leg on which the prosthesis is fitted) is obtained from the data generated by the LEM using reverse engineering concepts. The proposed FEA core uses shell theory, in which a 2D surface is generated from a 3D part in OrtoCAD. The shell analysis program uses the well-known finite element method to describe the geometry and the behavior of the material. The program is based on square nine-node Lagrangian elements with a higher-order displacement field for a better description of the stress field through the thickness. As a result, the new FEA routine provides clear advantages by adding new features to OrtoCAD: independence from high-cost commercial software; new routines added to the OrtoCAD library to handle more realistic problems using failure criteria for composite materials; improved FEA performance through a specific grid element with a higher number of nodes; and, finally, the benefits of an open-source project, offering intrinsic versatility and wide possibilities for customization, editing, and optimization that may be needed in the future.
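
For reference, the nine-node Lagrangian quadrilateral element mentioned above has shape functions formed as tensor products of 1D quadratic Lagrange polynomials in the natural coordinates (xi, eta) in [-1, 1]. The sketch below is an independent Python illustration of that standard construction, not part of the FORTRAN routine described in the abstract.

```python
import numpy as np

def lagrange_quadratic_1d(t):
    """1D quadratic Lagrange polynomials at nodes t = -1, 0, +1."""
    return np.array([0.5 * t * (t - 1.0),   # node at -1
                     1.0 - t * t,           # node at  0
                     0.5 * t * (t + 1.0)])  # node at +1

def shape_functions_q9(xi, eta):
    """Shape functions of the 9-node Lagrangian quadrilateral as the tensor
    product of 1D quadratic Lagrange polynomials, returned in row-major
    node order over the (xi, eta) grid."""
    return np.outer(lagrange_quadratic_1d(eta), lagrange_quadratic_1d(xi)).ravel()

# Sanity checks: partition of unity, and N_i = 1 at its own node.
print(shape_functions_q9(0.3, -0.7).sum())     # ~1.0
print(shape_functions_q9(-1.0, -1.0))          # first entry 1, others 0
```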

Relevance:

100.00%

Publisher:

Abstract:

Product quality planning is a fundamental part of quality assurance in manufacturing. It comprises the distribution of quality aims over each phase of product development and the deployment of quality operations and resources to accomplish these aims. This paper proposes a quality planning methodology based on risk assessment, in which the planning tasks of product development are translated into the evaluation of risk priorities. Firstly, a comprehensive model for quality planning is developed to address the deficiencies of traditional quality function deployment (QFD) based quality planning. Secondly, a novel failure knowledge base (FKB) based method is discussed. Then a mathematical method and algorithm for risk assessment is presented for target decomposition, measure selection, and sequence optimization. Finally, the proposed methodology has been implemented in a web-based prototype software system, QQ-Planning, to solve the quality planning problem of distributing quality targets and deploying quality resources in such a way that the product requirements are satisfied and the enterprise resources are highly utilized. © Springer-Verlag Berlin Heidelberg 2010.
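
As a concrete, hypothetical illustration of translating planning tasks into risk priorities, a classic FMEA-style risk priority number (severity x occurrence x detection) can be used to rank candidate quality measures; the paper's own risk assessment model for target decomposition and sequence optimization is more elaborate than this sketch.

```python
from dataclasses import dataclass

@dataclass
class QualityMeasure:
    name: str
    severity: int    # 1 (negligible) .. 10 (critical)
    occurrence: int  # 1 (rare)       .. 10 (frequent)
    detection: int   # 1 (certain)    .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Classic FMEA risk priority number; illustrative only.
        return self.severity * self.occurrence * self.detection

measures = [
    QualityMeasure("tolerance audit", 7, 4, 3),
    QualityMeasure("supplier inspection", 5, 6, 6),
    QualityMeasure("in-line sensor check", 8, 3, 2),
]
for m in sorted(measures, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name}: RPN = {m.rpn}")
```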

Relevance:

100.00%

Publisher:

Abstract:

While it is well known that exposure to radiation can result in cataract formation, questions still remain about the presence of a dose threshold in radiation cataractogenesis. Since the exposure history from diagnostic CT exams is well documented in a patient’s medical record, the population of patients chronically exposed to radiation from head CT exams may be an interesting one to explore for further research in this area. However, there are some challenges in estimating lens dose from head CT exams. An accurate lens dosimetry model would have to account for differences in imaging protocols, differences in head size, and the use of any dose reduction methods.

The overall objective of this dissertation was to develop a comprehensive method to estimate radiation dose to the lens of the eye for patients receiving CT scans of the head. This research is comprised of a physics component, in which a lens dosimetry model was derived for head CT, and a clinical component, which involved the application of that dosimetry model to patient data.

The physics component includes experiments related to the physical measurement of the radiation dose to the lens by various types of dosimeters placed within anthropomorphic phantoms. These dosimeters include high-sensitivity MOSFETs, TLDs, and radiochromic film. The six anthropomorphic phantoms used in these experiments range in age from newborn to adult.

First, the lens dose from five clinically relevant head CT protocols was measured in the anthropomorphic phantoms with MOSFET dosimeters on two state-of-the-art CT scanners. The volume CT dose index (CTDIvol), which is a standard CT output index, was compared to the measured lens doses. Phantom age-specific CTDIvol-to-lens dose conversion factors were derived using linear regression analysis. Since head size can vary among individuals of the same age, a method was derived to estimate the CTDIvol-to-lens dose conversion factor using the effective head diameter. These conversion factors were derived for each scanner individually, but also were derived with the combined data from the two scanners as a means to investigate the feasibility of a scanner-independent method. Using the scanner-independent method to derive the CTDIvol-to-lens dose conversion factor from the effective head diameter, most of the fitted lens dose values fell within 10-15% of the measured values from the phantom study, suggesting that this is a fairly accurate method of estimating lens dose from the CTDIvol with knowledge of the patient’s head size.
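
In practice, this estimation step amounts to a size-dependent linear conversion of the scanner-reported CTDIvol. The sketch below illustrates the idea with hypothetical slope and intercept values; the dissertation's fitted coefficients are not reproduced here.

```python
def lens_dose_from_ctdivol(ctdi_vol_mgy, head_diameter_cm,
                           slope=-0.02, intercept=1.6):
    """Estimate lens dose (mGy) as conversion_factor * CTDIvol, where the
    conversion factor is modeled as a linear function of the effective
    head diameter. Slope and intercept are hypothetical placeholders,
    not the values fitted in the phantom study."""
    conversion_factor = slope * head_diameter_cm + intercept
    return conversion_factor * ctdi_vol_mgy

# Example: a 45 mGy CTDIvol head scan, effective head diameter 17 cm.
print(round(lens_dose_from_ctdivol(45.0, 17.0), 1))
```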

Second, the dose reduction potential of organ-based tube current modulation (OB-TCM) and its effect on the CTDIvol-to-lens dose estimation method was investigated. The lens dose was measured with MOSFET dosimeters placed within the same six anthropomorphic phantoms. The phantoms were scanned with the five clinical head CT protocols with OB-TCM enabled on the one scanner model at our institution equipped with this software. The average decrease in lens dose with OB-TCM ranged from 13.5 to 26.0%. Using the size-specific method to derive the CTDIvol-to-lens dose conversion factor from the effective head diameter for protocols with OB-TCM, the majority of the fitted lens dose values fell within 15-18% of the measured values from the phantom study.

Third, the effect of gantry angulation on lens dose was investigated by measuring the lens dose with TLDs placed within the six anthropomorphic phantoms. The 2-dimensional spatial distribution of dose within the areas of the phantoms containing the orbit was measured with radiochromic film. A method was derived to determine the CTDIvol-to-lens dose conversion factor based upon distance from the primary beam scan range to the lens. The average dose to the lens region decreased substantially for almost all the phantoms (ranging from 67 to 92%) when the orbit was exposed to scattered radiation compared to the primary beam. The effectiveness of this method to reduce lens dose is highly dependent upon the shape and size of the head, which influences whether or not the angled scan range coverage can include the entire brain volume and still avoid the orbit.

The clinical component of this dissertation involved performing retrospective patient studies in the pediatric and adult populations, and reconstructing the lens doses from head CT examinations with the methods derived in the physics component. The cumulative lens doses in the patients selected for the retrospective study ranged from 40 to 1020 mGy in the pediatric group, and 53 to 2900 mGy in the adult group.

This dissertation represents a comprehensive approach to lens of the eye dosimetry in CT imaging of the head. The collected data and derived formulas can be used in future studies on radiation-induced cataracts from repeated CT imaging of the head. Additionally, they can be used in the areas of personalized patient dose management, protocol optimization, and clinician training.

Relevance:

100.00%

Publisher:

Abstract:

The work presented in this dissertation is focused on applying engineering methods to develop and explore probabilistic survival models for the prediction of decompression sickness in US NAVY divers. Mathematical modeling, computational model development, and numerical optimization techniques were employed to formulate and evaluate the predictive quality of models fitted to empirical data. In Chapters 1 and 2 we present general background information relevant to the development of probabilistic models applied to predicting the incidence of decompression sickness. The remainder of the dissertation introduces techniques developed in an effort to improve the predictive quality of probabilistic decompression models and to reduce the difficulty of model parameter optimization.

The first project explored seventeen variations of the hazard function using a well-perfused parallel compartment model. Models were parametrically optimized using the maximum likelihood technique. Model performance was evaluated using both classical statistical methods and model selection techniques based on information theory. Optimized model parameters were overall similar to previously published values. The results favored a novel hazard function definition that included both ambient pressure scaling and individually fitted compartment exponent scaling terms.
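
The probabilistic framework underlying these models is the standard survival relation: the probability of decompression sickness over an exposure is one minus the exponential of the integrated instantaneous risk. A minimal numerical sketch follows, with a placeholder hazard rather than any of the seventeen definitions studied in the dissertation.

```python
import numpy as np

def p_dcs(hazard, t_end, n_steps=10_000):
    """Survival-model probability of DCS over [0, t_end]:
    P = 1 - exp(-integral of the instantaneous risk r(t) dt).
    The hazard passed in is a placeholder for illustration only."""
    dt = t_end / n_steps
    t = (np.arange(n_steps) + 0.5) * dt          # midpoint rule
    integrated_risk = np.sum(hazard(t)) * dt
    return 1.0 - np.exp(-integrated_risk)

# Placeholder hazard: an exponentially decaying risk signal (illustrative only).
example_hazard = lambda t: 1e-3 * np.exp(-t / 60.0)
print(p_dcs(example_hazard, t_end=240.0))
```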

We developed ten pharmacokinetic compartmental models that included explicit delay mechanics to determine whether predictive quality could be improved through the inclusion of material transfer lags. A fitted discrete delay parameter augmented the inflow to the compartment systems from the environment. Based on the observation that, for many of our models, symptoms are often reported after risk accumulation begins, we hypothesized that the inclusion of delays might improve the correlation between model predictions and observed data. Model selection techniques identified two models as having the best overall performance, but comparison to the best-performing model without delay, and model selection using our best identified no-delay pharmacokinetic model, both indicated that the delay mechanism was not statistically justified and did not substantially improve model predictions.

Our final investigation explored parameter bounding techniques to identify parameter regions in which statistical model failure will not occur. Statistical model failure occurs when a model predicts zero probability of a diver experiencing decompression sickness for an exposure that is known to produce symptoms. Using a metric related to the instantaneous risk, we successfully identify regions where model failure will not occur and locate the boundaries of those regions using a root bounding technique. Several models are used to demonstrate the techniques, which may be employed to reduce the difficulty of model optimization in future investigations.

Relevance:

100.00%

Publisher:

Abstract:

The problem of decentralized sequential detection is studied in this thesis, where local sensors are memoryless, receive independent observations, and get no feedback from the fusion center. In addition to the traditional criteria of detection delay and error probability, we introduce a new constraint: the number of communications between local sensors and the fusion center. This metric reflects both the cost of establishing communication links and the overall energy consumption over time. A new formulation for communication-efficient decentralized sequential detection is proposed in which the overall detection delay is minimized under constraints on both the error probabilities and the communication cost. Two types of problems are investigated based on this communication-efficient formulation: decentralized hypothesis testing and decentralized change detection. In the former case, an asymptotically person-by-person optimum detection framework is developed, where the fusion center performs a sequential probability ratio test based on dependent observations. The proposed algorithm utilizes not only the statistics reported by the local sensors but also the reporting times. The asymptotic relative efficiency of the proposed algorithm with respect to the centralized strategy is expressed in closed form. When the probabilities of false alarm and missed detection are close to one another, a reduced-complexity algorithm is proposed based on a Poisson arrival approximation. In addition, decentralized change detection with a communication cost constraint is also investigated. A person-by-person optimum change detection algorithm is proposed, where transmissions of sensing reports are modeled as a Poisson process. The optimum threshold value is obtained through dynamic programming. An alternative method with a simpler fusion rule is also proposed, where the threshold values in the algorithm are determined by a combination of sequential detection analysis and constrained optimization. In both the decentralized hypothesis testing and change detection problems, tradeoffs in parameter choices are investigated through Monte Carlo simulations.
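
The core primitive at the fusion center is Wald's sequential probability ratio test. A minimal centralized SPRT sketch is shown below for orientation; the thesis's person-by-person optimum decentralized scheme, which also exploits reporting times and dependent observations, is considerably more involved.

```python
import math
import random

def sprt(sample_stream, llr, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test. `llr(x)` returns the
    log-likelihood ratio log p1(x)/p0(x) of one observation. Stops when
    the accumulated LLR crosses the upper threshold (accept H1) or the
    lower threshold (accept H0)."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    s, n = 0.0, 0
    for n, x in enumerate(sample_stream, start=1):
        s += llr(x)
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return "undecided", n

# Toy example: Gaussian mean shift 0 vs 1, unit variance, data drawn under H1.
rng = random.Random(0)
stream = (rng.gauss(1.0, 1.0) for _ in range(10_000))
llr = lambda x: x - 0.5          # log N(x;1,1)/N(x;0,1) = x - 1/2
print(sprt(stream, llr))
```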

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance:

100.00%

Publisher:

Abstract:

The strategy of the mobile networks market (the focus of this work) is based on the consolidation of the installed structure and the optimization of existing resources. The increasing competitiveness and aggressiveness of this market require mobile operators to continuously maintain and update their networks in order to minimize failures and provide the best experience for their subscribers. In this context, this dissertation presents a study aimed at helping mobile operators improve future network modifications. In overview, it compares several forecasting methods (mostly based on time series analysis) capable of supporting mobile operators in their network planning. Moreover, it presents several network indicators for the most common bottlenecks.
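
A minimal example of the kind of time-series forecasting compared in such planning studies is simple exponential smoothing of a utilization indicator; the values below are placeholder figures for illustration, not indicators from the dissertation.

```python
def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: each new level is a weighted blend of
    the latest observation and the previous level; the final level is the
    one-step-ahead forecast. One of the simplest time-series methods."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Placeholder weekly utilization figures (percent) for one cell; illustrative only.
history = [62, 64, 63, 67, 70, 69, 73, 75]
print(f"next-week forecast: {exponential_smoothing(history):.1f}%")
```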

Relevance:

100.00%

Publisher:

Abstract:

Effective action by health workers in the immediate care of cardiorespiratory arrest enables the effective implementation of therapeutic hypothermia, reducing possible brain damage and providing a better prognosis for the patient. This study aimed to understand the process of implementing therapeutic hypothermia after cardiorespiratory arrest in hospitals in the extreme south of Brazil. It was a qualitative, descriptive study. The study setting comprised two intensive care units of two hospitals where therapeutic hypothermia after cardiorespiratory arrest is performed. The study subjects were physicians, nurses, and nursing technicians working in these units. Data collection consisted of two stages. First, a retrospective review of patient records was conducted; subsequently, semi-structured interviews guided by an interview script were carried out with the aforementioned professionals and recorded with a digital device. Data collection took place during October 2014. Discursive textual analysis was used to interpret the data, and three categories were constructed. In the first category, "process of implementation of therapeutic hypothermia", it was found that the hospital with a systematized and organized implementation uses a written protocol and that, regarding the phases of application of therapeutic hypothermia, both institutions use the traditional methods of induction, maintenance, and rewarming. The second category, "facilities and difficulties experienced by the health team during the application of therapeutic hypothermia", identifies the physical structure, team harmony, equipment for the constant monitoring of the patients' hemodynamic conditions, and the optimization of working time as facilitators. Regarding the difficulties, the following were found: the acquisition of materials, such as ice and the BIS; the availability of only a single esophageal thermometer; the lack of personal protective equipment; insufficient knowledge and technical inaptitude; the absence of continuing education; and inadequate nurse staffing. In the third category, "adverse effects and complications encountered by the health team during the application of therapeutic hypothermia and nursing care provided", the adverse effects observed were shivering, bradycardia, and hypotension, and the complications included excessive hypothermia and skin burns. Nursing care focused on care of the skin and extremities, the use of ice, sedation, hygiene, comfort, and the preparation of monitoring material. It was concluded that, in the reality of the institutions studied, therapeutic hypothermia can be applied in a safe, effective, and low-cost manner.

Relevance:

100.00%

Publisher:

Abstract:

Lipases and biosurfactants are compounds produced by microorganisms through solid-state fermentation (SSF) or submerged fermentation (SmF), and they are applicable in the food and pharmaceutical industries, in bioenergy, and in bioremediation, among other areas. The general objective of this work was to optimize lipase production through solid-state and submerged fermentation. Fungi were screened for their ability to produce lipases by SSF and SmF, and those showing the highest lipolytic activities were used in the selection of significant variables and in the optimization of lipase production in both cultivation modes. Sequential experimental design techniques were employed, including fractional and full factorial designs and response surface methodology, to optimize lipase production. The variables studied in SSF were pH, the type of bran used as the carbon source, the nitrogen source, the inducer, the nitrogen source concentration, the inducer concentration, and the fungal strain. In SmF, in addition to the variables studied in SSF, the initial inoculum concentration and agitation were investigated. The enzymes produced were characterized with respect to optimum temperature and pH and to temperature and pH stability. Under the optimized lipase production conditions, the correlation between lipase and bioemulsifier production was evaluated. Initially, 28 fungi were isolated. The fungi Aspergillus O-4 and Aspergillus E-6 were selected as good lipase producers in the solid-state fermentation process, and the fungi Penicillium E-3, Trichoderma E-19, and Aspergillus O-8 as good lipase producers through submerged fermentation. The optimized conditions for lipase production through solid-state fermentation were obtained using the fungus Aspergillus O-4, soybean bran, 2% sodium nitrate, 2% olive oil, and pH values below 5, yielding maximum lipolytic activities of 57 U. The optimized conditions for lipase production in submerged fermentation were obtained using the fungus Aspergillus O-8, wheat bran, 4.5% yeast extract, 2% soybean oil, and pH 7.15. The maximum activity obtained during the optimization step was 6 U. The lipases obtained by SSF showed maximum activity at 35°C and pH 6.0, whereas those obtained by SmF showed optima at 37°C and pH 7.2. The thermal stability of the lipases produced by SmF was higher than that of the lipases obtained by SSF, with residual activities of 72% and 26.8% after 1 h of exposure to 90°C and 60°C, respectively. The lipases obtained by SSF were more stable at alkaline pH values, with residual activities above 60% after 24 h of exposure, whereas the lipases produced by SmF were more stable at acidic pH values, with 80% residual activity in the pH range of 3.5 to 6.5. In submerged fermentation, the correlation between lipase production and the oil-in-water (O/W) and water-in-oil (W/O) emulsifying activities of the extracts was 95.4% and 86.8%, respectively, with maximum O/W and W/O emulsifying activities of 2.95 EU and 42.7 EU. Although the highest lipase production was obtained in solid-state fermentation, there was no concomitant production of biosurfactants. The extracts from submerged fermentation showed a reduction in surface tension from 50 mN/m to 28 mN/m and antimicrobial activity against the microorganism S. aureus ATCC 25923, with antimicrobial potentials of 36 to 43% in the first three days of fermentation. Submerged fermentation was the technique that showed the best results in optimizing lipase production as well as in the simultaneous production of biosurfactants.

Relevance:

100.00%

Publisher:

Abstract:

The production of proteins by microorganisms has become a very important technique for obtaining compounds of interest to the pharmaceutical and food industries. The crude extracts in which the proteins are obtained are usually complex, containing suspended solids and cells. For industrial use of these compounds, it is usually necessary to obtain them in pure form to guarantee their action without interference. A method that has received particular attention in the last 10 years is expanded-bed ion exchange chromatography, which combines the clarification, concentration, and purification of the target molecule in a single stage, thus reducing operating time and also the equipment costs of performing each step separately. In parallel, the last decade has also been marked by works dealing with the mathematical modeling of protein adsorption on resins. Besides providing important information about the adsorption process, this technique is also of great value in optimizing the adsorption step, since it allows simulations to be carried out without spending time and material on bench experiments, especially when scale-up is desired. Thus, the objective of this thesis was to model and simulate the adsorption of bioproducts from a crude broth in the presence of cells, using inulinase and C-phycocyanin as objects of study, and to purify C-phycocyanin using an expanded-bed ion exchange resin. The thesis was divided into four articles. The first article focused on the enzyme inulinase; the optimization of the adsorption of this enzyme on Streamline SP ion exchange resin in an expanded bed was performed through mathematical modeling and simulation of the breakthrough curves at three different degrees of expansion. The maximum efficiencies were observed when higher inulinase concentrations (120 to 170 U/mL) and bed heights between 20 and 30 cm were used. A degree of expansion of 3.0 was considered the best, since productivity was considerably higher. The second article presents the study of the C-phycocyanin adsorption conditions on the ion exchange resin, in which the effect of pH and temperature on adsorption was verified and the adsorption isotherm was subsequently constructed. The adsorption isotherm of C-phycocyanin on Streamline Q XL resin, obtained at pH 7.5 and 25°C (ambient), showed a good fit to the Langmuir model (R = 0.98), and the values of qm (maximum adsorption capacity) and Kd (equilibrium constant) estimated from the linearized isotherm equation were 26.7 mg/mL and 0.067 mg/mL. The third article addresses the modeling of the adsorption of unclarified C-phycocyanin extract on Streamline Q XL ion exchange resin in an expanded-bed column. Three breakthrough curves were obtained at different degrees of expansion (2.0, 2.5, and 3.0). The most advantageous adsorption condition for unclarified crude C-phycocyanin extract, owing to its higher adsorption capacity, is to feed the extract until 10% resin saturation is reached, at a degree of expansion of 2.0 and an initial bed height of 30 cm. The last article of this thesis concerned the purification of C-phycocyanin by expanded-bed ion exchange chromatography. Since adsorption had already been studied in the second article, the fourth article focuses on optimizing the elution conditions in order to obtain a product with maximum purity and recovery. Purity is given by the ratio of the absorbance at 620 nm to the absorbance at 280 nm, and C-phycocyanin with a purity above 0.7 can be used as a food colorant. The evaluation of the contour plots indicated that the working range should be around pH 6.5 and elution volumes close to 150 mL. These conditions, combined with a pre-elution step with 0.1 M NaCl, yielded C-phycocyanin with a purity of 2.9, a concentration of 3 mg/mL, and a recovery of around 70%.
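
The Langmuir fit reported in the second article can be used directly to predict the equilibrium bound amount. The sketch below uses the quoted parameters qm = 26.7 mg/mL and Kd = 0.067 mg/mL and assumes the common dissociation-constant form q = qm * C / (Kd + C); it is an illustration, not a reproduction of the thesis's simulation model.

```python
def langmuir_q(c_eq, qm=26.7, kd=0.067):
    """Langmuir isotherm q = qm * C / (Kd + C), with qm and Kd taken from
    the fitted values quoted in the abstract (mg of C-phycocyanin per mL
    of resin; C in mg/mL). Assumes the dissociation-constant form."""
    return qm * c_eq / (kd + c_eq)

for c in (0.05, 0.5, 2.0):
    print(f"C = {c} mg/mL -> q = {langmuir_q(c):.1f} mg/mL")
```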