893 results for Cell retention systems
Abstract:
The Dendritic Cell Algorithm (DCA) is inspired by recent work in innate immunity. In this paper, a formal description of the DCA is given. The DCA is described in detail, and its use as an anomaly detector is illustrated within the context of computer security. A port scan detection task is performed to substantiate the influence of signal selection on the behaviour of the algorithm. Experimental results provide a comparison of differing input signal mappings.
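To make the signal-processing step concrete, the sketch below shows one minimal, hypothetical weighted-sum fusion of PAMP, danger, and safe signals into a dendritic cell's costimulation and context outputs; the weights and names are illustrative assumptions, not the formal definitions given in the paper.

    # Minimal sketch of DCA-style signal fusion. The weight values below are
    # illustrative assumptions, not the paper's formal parameterisation.
    WEIGHTS = {
        "csm":         (2.0, 1.0, 2.0),   # costimulation: controls when a cell migrates
        "semi_mature": (0.0, 0.0, 3.0),   # context output favouring "normal"
        "mature":      (2.0, 1.0, -3.0),  # context output favouring "anomalous"
    }

    def fuse_signals(pamp: float, danger: float, safe: float) -> dict:
        """Combine one time step of input signals into the cell's interim outputs."""
        signals = (pamp, danger, safe)
        return {name: sum(w * s for w, s in zip(ws, signals))
                for name, ws in WEIGHTS.items()}

    # A sample dominated by PAMP/danger signal pushes 'mature' above 'semi_mature'.
    print(fuse_signals(pamp=0.8, danger=0.5, safe=0.1))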
Abstract:
The role of T-cells within the immune system is to confirm and assess anomalous situations and then either respond to or tolerate the source of the effect. To illustrate how these mechanisms can be harnessed to solve real-world problems, we present the blueprint of a T-cell-inspired algorithm for computer security worm detection. We show how the three central T-cell processes, namely T-cell maturation, differentiation and proliferation, naturally map into this domain, and further illustrate how such an algorithm fits into a complete immune-inspired computer security system and framework.
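Purely as a hedged illustration of how the three processes named above might map onto worm detection (the class, stage names and triggers below are assumptions for exposition, not the blueprint presented in the paper):

    # Hypothetical sketch: naive cells that match baseline ("self") behaviour are
    # removed during maturation; surviving cells become effectors only when a
    # second danger signal confirms the anomaly, and effectors are then cloned.
    from dataclasses import dataclass

    @dataclass
    class TCell:
        antigen: str              # e.g. a process or traffic signature under watch
        stage: str = "naive"

        def mature(self, matches_baseline: bool) -> None:
            self.stage = "deleted" if matches_baseline else "mature"

        def differentiate(self, danger_signal: bool) -> None:
            if self.stage == "mature" and danger_signal:
                self.stage = "effector"

        def proliferate(self, copies: int = 3) -> list:
            return [TCell(self.antigen, "effector") for _ in range(copies)] if self.stage == "effector" else []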
Abstract:
Cellular models are important tools in various research areas related to colorectal biology and associated diseases. Herein, we review the most widely used cell lines and the different techniques to grow them, either as cell monolayers, as polarized two-dimensional epithelia on membrane filters, or as three-dimensional spheres in scaffold-free or matrix-supported culture conditions. Moreover, recent developments, such as gut-on-chip devices or the ex vivo growth of biopsy-derived organoids, are also discussed. We provide an overview of the potential applications as well as the limitations of each of these techniques, while evaluating their contribution to providing more reliable cellular models for research, diagnostic testing, or pharmacological validation related to colon physiology and pathophysiology.
Abstract:
The dendritic cell algorithm is an immune-inspired technique for processing time-dependent data. Here we propose it as a possible solution for a robotic classification problem. The dendritic cell algorithm is implemented on a real robot, and an investigation is performed into the effects of varying the migration threshold median for the cell population. The algorithm performs well on a classification task with very little tuning. Ways of extending the implementation to allow it to be used as a classifier within the field of robotic security are suggested.
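As a rough sketch of the parameter being varied here (the sampling range and update rule are assumptions, not the robot implementation described in the paper), each cell can be given a migration threshold drawn around a chosen median; a cell presents its context and is reset once its accumulated costimulation exceeds that threshold:

    # Illustrative sketch: population of cells with thresholds spread around a median.
    import random

    def make_population(n_cells: int, threshold_median: float, spread: float = 0.5) -> list:
        lo, hi = threshold_median * (1 - spread), threshold_median * (1 + spread)
        return [{"threshold": random.uniform(lo, hi), "csm": 0.0} for _ in range(n_cells)]

    def step(population: list, csm_increment: float) -> list:
        migrated = []
        for cell in population:
            cell["csm"] += csm_increment
            if cell["csm"] >= cell["threshold"]:
                migrated.append(cell)      # cell leaves the sampling pool
                cell["csm"] = 0.0          # and is replaced by a fresh cell
        return migrated

    cells = make_population(n_cells=10, threshold_median=5.0)
    print(len(step(cells, csm_increment=6.0)), "cells migrated this step")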
Abstract:
Reducing the administered doses, or even completely stopping a chemotherapy treatment, is often the consequence of a reduction in the number of neutrophils, the most abundant white blood cells in the blood. This reduction in the absolute neutrophil count, also known as myelosuppression, is precipitated by the non-specific lethal effects of anti-cancer drugs, which, alongside their therapeutic effect, are also toxic to healthy cells. To mitigate this myelosuppressive impact, patients are given recombinant human granulocyte colony-stimulating factor (rhG-CSF), an exogenous form of G-CSF, the hormone responsible for stimulating the production of neutrophils and their release into the bloodstream. Although the benefits of prophylactic G-CSF treatment during chemotherapy are well established, administration protocols remain poorly defined and are frequently determined ad libitum by clinicians. With a view to improving therapeutic dosing and rationalizing the use of rhG-CSF during chemotherapy, we developed a physiological model of granulopoiesis that incorporates current state-of-the-art knowledge of neutrophil production from hematopoietic stem cells in the bone marrow. Into this physiological model we integrated pharmacokinetic/pharmacodynamic (PK/PD) models of two drugs: PM00104 (Zalypsis®), an anti-cancer drug, and rhG-CSF (filgrastim). Drawing on the fundamental principles underlying the physiology, we estimated the parameters exhaustively without resorting to data fitting, which allowed us to predict clinical data from 172 patients treated under the CHOP14 protocol (6 chemotherapy cycles of 14 days each, with rhG-CSF administered from day 4 to day 13 post-chemotherapy). Using this physio-PK/PD model, we showed that the number of rhG-CSF administrations could be reduced from ten (current practice) to four or even three, provided the start of prophylactic rhG-CSF treatment is delayed. To ensure the clinical applicability of our modeling approach, we investigated the impact of the PK variability present in a patient population on the model's predictions by integrating population PK (Pop-PK) models of both drugs. Considering in silico cohorts of 500 patients for each of five plausible variability scenarios, and using three clinical markers (the time to neutrophil nadir, the nadir value, and the area under the concentration-effect curve), we established that there was no significant difference in the model's predictions between the typical patient and the population. This demonstrates the robustness of the approach we developed, which is akin to a quantitative systems pharmacology (QSP) approach. Motivated by the use of rhG-CSF in the treatment of other diseases, such as periodic pathologies like cyclical neutropenia, we then studied the model in the context of dynamical diseases.
Having shown that the cytokine-feedback paradigm does not hold for exogenous administration of G-CSF mimetics, we developed a novel physiological PK/PD model comprising the free and bound concentrations of G-CSF. This new PK model also required changes to the PD model, since it allowed us to track the concentrations of G-CSF bound to neutrophils. We showed that the underlying assumption of equilibrium between the free and bound concentrations, according to the law of mass action, is no longer valid for G-CSF at endogenous concentrations and would in fact lead to an overestimation of the drug's renal clearance. In doing so, we were able to reproduce clinical data obtained under various conditions (exogenous G-CSF administration, PM00104 administration, CHOP14). We also provided a coherent explanation of the mechanisms responsible for the physiological response to both drugs. Finally, to highlight the integrative approach to pharmacology adopted in this thesis, we demonstrated its invaluable contribution to elucidating and reconstructing complex living systems, drawing a parallel with other scientific disciplines, such as paleontology and forensics, where a similar approach has largely proven itself. We also discussed the potential of quantitative systems pharmacology applied to drug development and translational medicine, using the physio-PK/PD model we developed.
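As a hedged illustration of the free/bound kinetics discussed in this abstract (the notation and rate constants are assumptions for exposition, not the thesis's equations), a minimal mass-action scheme for free G-CSF $G$ binding neutrophil receptors $R$ reads

    $\dfrac{d[G]}{dt} = k_{\mathrm{off}}[GR] - k_{\mathrm{on}}[G][R] - k_{\mathrm{ren}}[G] + u(t), \qquad \dfrac{d[GR]}{dt} = k_{\mathrm{on}}[G][R] - (k_{\mathrm{off}} + k_{\mathrm{int}})[GR],$

where $u(t)$ is the exogenous input, $k_{\mathrm{ren}}$ is renal elimination of the free drug and $k_{\mathrm{int}}$ is internalization of the bound complex. The quasi-equilibrium shortcut $[GR] \approx [G][R]/K_D$ with $K_D = k_{\mathrm{off}}/k_{\mathrm{on}}$ is the mass-action equilibrium assumption that the thesis reports breaks down at endogenous G-CSF concentrations, leading to an overestimated renal clearance.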
Abstract:
Proteases account for 60-65% of the global industrial enzyme market and are used in the food industry for meat tenderizing, peptide synthesis, infant formula preparation, baking and brewing, in pharmaceuticals and medical diagnostics, as additives in the detergent industry, and in the textile and leather industry for dehairing and hide processing. Specific proteases produced by keratinolytic microorganisms are called keratinases and are distinguished from other proteases by their greater capacity to degrade compact, insoluble substrates such as keratin. Processes that make full use of raw materials without negative environmental impacts have been gaining prominence. In this context, residual feather meal and residual yeast biomass, both with high protein contents, stand out as substrates for cultivating Bacillus sp. P45 to produce proteases. The objective of this work was to obtain purified keratinase in large quantities, characterize it, and apply it to enzymatic milk coagulation for the development of a cream cheese enriched with chia and quinoa flour, as well as to apply different co-products to the production of proteolytic and keratinolytic enzymes. This thesis was divided into four articles. The first covers the recovery of purified keratinase in larger quantities, the determination of its thermal stability parameters, and the influence of chemical compounds on enzyme activity. Purification factors of 2.6-, 6.7-, and 4.0-fold were achieved for the first aqueous two-phase system (ATPS), the second ATPS, and diafiltration, respectively. Enzyme recovery reached 75.3% in the first ATPS, 75.1% in the second system, and 84.3% in diafiltration. A temperature of 55 °C and pH 7.5 were determined as optimal for keratinase activity. The mean deactivation energy (Ed) was 118.0 kJ/mol, and the z- and D-values ranged from 13.6 to 18.8 °C and from 6.9 to 237.3 min, respectively. In addition, the salts CaCl2, CaO, C8H5KO4, and MgSO4 increased enzyme activity. The second article presents the application of the keratinase as a bovine milk coagulant and its use in producing a cream cheese enriched with chia and quinoa. At a concentration of 30 mg/mL, the enzyme showed coagulation activity similar to a commercial coagulant. The purified enzyme was used effectively in cream cheese manufacture; the cheese had a pH of 5.3 and an acidity of 0.06 to 0.1 mol/L, which increased over the 25 days of storage. The third article characterizes the cream cheese enriched with chia and quinoa flour, which showed high water retention (>99.0%) and low syneresis (<0.72%). High fiber contents (3.0 to 5.0%) were found, suggesting its consumption as a source of fiber. Microbiological analyses complied with current legislation. Sensory analysis showed high scores for smoothness on the palate, with higher consistency and spreadability scores in the samples with higher cream and quinoa contents. The fourth article addresses the ultrasound-assisted extraction of β-galactosidase from residual yeast biomass and the use of residual feather meal and residual yeast biomass as substrates for protease production.
Ultrasound was effective for cell disruption and β-galactosidase extraction, giving high activity (35.0 U/mL) and yield (876.0 U/g of biomass). The highest proteolytic activity (1300 U/mL at 32 h) and keratinolytic activity (89.2 U/mL) were obtained using residual yeast biomass and residual feather meal, respectively. The highest proteolytic productivity (40.8 U/mL/h) was observed in the medium using residual biomass as substrate, while the highest keratinolytic productivity (2.8 U/mL/h) was achieved using reused feather meal.
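For context on the thermal-stability parameters reported above, these follow standard first-order deactivation kinetics (the relations below are textbook definitions, not results of the thesis): with deactivation rate constant $k_d$, the decimal reduction time is $D = \ln 10 / k_d$; the z-value links D-values at two temperatures via $\log_{10}(D_1/D_2) = (T_2 - T_1)/z$; and the deactivation energy $E_d$ enters through the Arrhenius relation $k_d = A\,e^{-E_d/(RT)}$.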
Abstract:
Background: Non-small cell lung cancer (NSCLC) imposes a substantial burden on patients, health care systems and society due to increasing incidence and poor survival rates. In recent years, advances in the treatment of metastatic NSCLC have resulted from the introduction of targeted therapies. However, the application of these new agents increases treatment costs considerably. The objective of this article is to review the economic evidence on targeted therapies in metastatic NSCLC. Methods: A systematic literature review was conducted to identify cost-effectiveness (CE) as well as cost-utility studies. Medline, Embase, SciSearch, Cochrane, and 9 other databases were searched from 2000 through April 2013 (including update) for full-text publications. The quality of the studies was assessed via the validated Quality of Health Economic Studies (QHES) instrument. Results: Nineteen studies (including update) involving the monoclonal antibody bevacizumab and the tyrosine kinase inhibitors erlotinib and gefitinib met all inclusion criteria. The majority of studies analyzed the CE of first-line maintenance and second-line treatment with erlotinib. Five studies dealt with bevacizumab in first-line regimens. Gefitinib and pharmacogenomic profiling were each covered by only two studies. Furthermore, the available evidence was of only fair quality. Conclusion: First-line maintenance treatment with erlotinib compared to best supportive care (BSC) can be considered cost-effective. In comparison to docetaxel, erlotinib is likely to be cost-effective in subsequent treatment regimens as well. The findings for bevacizumab are mixed. There are findings that gefitinib is cost-effective in first- and second-line treatment; however, these are based on only two studies. The role of pharmacogenomic testing needs to be evaluated. Therefore, future research should improve the available evidence and consider pharmacogenomic profiling as specified by the European Medicines Agency. Upcoming agents such as crizotinib and afatinib need to be analyzed as well. © Lange et al.
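As brief background for the cost-effectiveness comparisons summarized above (a standard definition, not a formula taken from this review), the incremental cost-effectiveness ratio of a therapy $A$ over a comparator $B$ is

    $\mathrm{ICER} = \dfrac{C_A - C_B}{E_A - E_B},$

with effects $E$ expressed in life-years gained, or in quality-adjusted life-years (QALYs) for cost-utility studies.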
Abstract:
In the context of this work, we evaluated a multisensory, noninvasive prototype platform for shake flask cultivations by monitoring three basic parameters (pH, pO2 and biomass). The focus lies on the evaluation of the biomass sensor based on backward light scattering. The application spectrum was expanded to four new organisms in addition to E. coli K12 and S. cerevisiae [1]. It could be shown that the sensor is appropriate for a wide range of standard microorganisms, e.g., L. zeae, K. pastoris, A. niger and CHO-K1. The biomass sensor signal could successfully be correlated and calibrated with well-known measurement methods such as OD600, cell dry weight (CDW) and cell concentration. Logarithmic and Bleasdale-Nelder-derived functions were adequate for data fitting. Measurements at low cell concentrations proved to be critical in terms of signal-to-noise ratio, but the integration of a custom-made light shade in the shake flask improved these measurements significantly. This sensor-based measurement method has a high potential to initiate a new generation of online bioprocess monitoring. Metabolic studies in particular will benefit from the multisensory data acquisition. The sensor is already used in lab-scale experiments for shake flask cultivations.
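As a hedged sketch of the calibration step described above (the Bleasdale-Nelder parameterisation, parameter values and synthetic data are assumptions, not the study's actual calibration):

    # Fit a Bleasdale-Nelder-type curve, y = (a + b*x)**(-1/c), to paired
    # backscatter readings and offline OD600 values; the data here are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    def bleasdale_nelder(x, a, b, c):
        return np.clip(a + b * x, 1e-9, None) ** (-1.0 / c)   # clip guards the base

    rng = np.random.default_rng(0)
    signal = np.linspace(0.05, 3.2, 25)                        # sensor signal (a.u.)
    od600 = bleasdale_nelder(signal, 1.2, -0.25, 1.5) * (1 + 0.02 * rng.standard_normal(signal.size))

    params, _ = curve_fit(bleasdale_nelder, signal, od600, p0=(1.0, -0.1, 1.0))
    print("fitted (a, b, c):", params)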
Abstract:
The goal of this thesis is to gain a more in-depth understanding of employer branding and to offer suggestions on how this knowledge could be utilized in the case company. More specifically, the purpose of this research is to provide tools for improving Lindström’s organizational attractiveness and for boosting the recruitment and retention of the segment of high-performing sales professionals. A strategy for reaching this particular segment has not previously been drawn up, and HR managers believe strongly that it would be very beneficial for the company’s development and growth. The topic of this research is very current for Lindström, but it also contributes on a general level, as companies compete with each other in attracting, recruiting and retaining a skilled workforce in times of labor shortage. The research is conducted with qualitative methods, and the data collection includes primary data from interviews as well as secondary data in the form of analysis of previous research, websites, recruitment material and discussions with Lindström’s HR department. This research provides a good basis for a broader examination of the topic and presents development suggestions for the identified challenges. Based on the key findings, Lindström’s HR department was advised to increase the firm’s visibility, broaden recruitment channels, provide more hands-on knowledge about the sales positions and investigate possibilities for developing sales reward systems.
Abstract:
The Dendritic Cell Algorithm is an immune-inspired algorithm originally based on the function of natural dendritic cells. The original instantiation of the algorithm is highly stochastic. While the performance of the algorithm is good when applied to large real-time datasets, it is difficult to analyse due to the number of random-based elements. In this paper, a deterministic version of the algorithm is proposed, implemented and tested using a port scan dataset to provide a controllable system. This version has a controllable number of parameters, which are experimented with in this paper. In addition, the effects of using time windows and of varying the number of cells are examined, both of which are shown to influence the algorithm. Finally, a novel metric for assessing the algorithm's output is introduced and proves to be more sensitive than the metric used with the original Dendritic Cell Algorithm.
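A hedged sketch of what "deterministic" can mean here (the even threshold spacing and the simple per-cell score below are assumptions for illustration, not the definitions used in the paper): random sampling is replaced by fixed assignments, so repeated runs on the same data produce identical output.

    def deterministic_pool(n_cells: int, max_threshold: float) -> list:
        # Thresholds are spaced evenly instead of sampled, so runs are repeatable.
        return [{"threshold": max_threshold * (i + 1) / n_cells, "csm": 0.0, "k": 0.0}
                for i in range(n_cells)]

    def process(pool: list, csm: float, context: float) -> list:
        """Feed one time step of signals to every cell; return scores of migrated cells."""
        scores = []
        for cell in pool:
            cell["csm"] += csm
            cell["k"] += context          # signed context: > 0 anomalous, < 0 normal
            if cell["csm"] >= cell["threshold"]:
                scores.append(cell["k"])
                cell["csm"] = cell["k"] = 0.0
        return scores

    pool = deterministic_pool(n_cells=5, max_threshold=10.0)
    print(process(pool, csm=4.0, context=1.5))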
Abstract:
As an immune-inspired algorithm, the Dendritic Cell Algorithm (DCA) produces promising performance in the field of anomaly detection. This paper presents the application of the DCA to a standard data set, the KDD 99 data set. The results of different implementation versions of the DCA, including an antigen multiplier and moving time windows, are reported. The real-valued Negative Selection Algorithm (NSA) using constant-sized detectors and the C4.5 decision tree algorithm are used to provide a baseline comparison. The results suggest that the DCA is applicable to the KDD 99 data set, and that the antigen multiplier and moving time windows have the same effect on the DCA for this particular data set. The real-valued NSA with constant-sized detectors is not applicable to the data set, while the C4.5 decision tree algorithm provides a benchmark of classification performance for this data set.
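The two pre-processing variants named in this abstract can be pictured with the minimal sketch below (function names, window size and example values are illustrative assumptions, not the paper's implementation):

    # Antigen multiplier: each antigen is copied N times before sampling.
    # Moving time window: input signals are averaged over the last w instances.
    from collections import deque

    def multiply_antigens(antigens: list, multiplier: int) -> list:
        return [a for a in antigens for _ in range(multiplier)]

    def moving_window(stream, w: int):
        window = deque(maxlen=w)
        for value in stream:
            window.append(value)
            yield sum(window) / len(window)

    print(multiply_antigens(["conn_1", "conn_2"], 3))
    print(list(moving_window([1.0, 3.0, 5.0, 7.0], w=2)))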
Abstract:
The atomic-level structure and chemistry of materials ultimately dictate their observed macroscopic properties and behavior. As such, an intimate understanding of these characteristics allows for better materials engineering and improvements in the resulting devices. In our work, two material systems were investigated using advanced electron and ion microscopy techniques, relating the measured nanoscale traits to overall device performance. First, transmission electron microscopy and electron energy loss spectroscopy (TEM-EELS) were used to analyze interfacial states at the semiconductor/oxide interface in wide-bandgap SiC microelectronics. This interface contains defects that significantly diminish SiC device performance, and their fundamental nature remains generally unresolved. The impacts of various microfabrication techniques were explored, examining both current commercial and next-generation processing strategies. In further investigations, machine learning techniques were applied to the EELS data, revealing previously hidden Si, C, and O bonding states at the interface, which help explain the origins of mobility enhancement in SiC devices. Finally, the impacts of SiC bias temperature stressing on the interfacial region were explored. In the second system, focused ion beam/scanning electron microscopy (FIB/SEM) was used to reconstruct 3D models of solid oxide fuel cell (SOFC) cathodes. Since the specific degradation mechanisms of SOFC cathodes are poorly understood, FIB/SEM and TEM were used to analyze and quantify changes in the microstructure during performance degradation. Novel strategies for microstructure calculation from FIB-nanotomography data were developed and applied to LSM-YSZ and LSCF-GDC composite cathodes, aged with environmental contaminants to promote degradation. In LSM-YSZ, migration of both La and Mn cations to the grain boundaries of YSZ was observed using TEM-EELS. Few substantial changes, however, were observed in the overall microstructure of the cells, consistent with the lack of performance degradation induced by the H2O. Using similar strategies, a series of LSCF-GDC cathodes were analyzed after aging in H2O, CO2, and Cr-vapor environments. FIB/SEM observation revealed considerable formation of secondary phases within these cathodes, along with quantifiable modifications of the microstructure. In particular, Cr-poisoning was observed to cause substantial byproduct formation, which was correlated with drastic reductions in cell performance.
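The abstract does not say which machine learning method was applied to the EELS data; one plausible approach for spectrum-image decomposition is non-negative matrix factorization, sketched below on synthetic placeholder data (the component count and data shapes are assumptions, not the study's analysis):

    # Decompose an (n_pixels x n_channels) EELS spectrum image into a few
    # non-negative spectral components and their per-pixel abundance maps.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(1)
    spectra = np.abs(rng.normal(size=(200, 300)))        # placeholder spectrum image

    model = NMF(n_components=3, init="nndsvda", max_iter=500)
    abundances = model.fit_transform(spectra)            # per-pixel component weights
    components = model.components_                       # candidate bonding-state spectra
    print(abundances.shape, components.shape)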