949 results for Display designs
Abstract:
Antibody display technologies (ADT) such as phage display (PD) have substantially improved the production of monoclonal antibodies (mAbs) and Ab fragments by bypassing several limitations associated with the traditional hybridoma approach. In the current study, we capitalized on PD technology to produce a high-affinity single-chain variable fragment (scFv) against tumor necrosis factor-alpha (TNF-α), a potent pro-inflammatory cytokine that plays an important role in various inflammatory diseases and malignancies. To produce scFv antibody fragments against human TNF-α, we performed five rounds of biopanning using stepwise decreasing amounts of TNF-α (1 to 0.1 μg), a semi-synthetic phage antibody library (Tomlinson I + J) and TG1 cells. Antibody clones were isolated and selected through enzyme-linked immunosorbent assay (ELISA) screening. The selected scFv antibody fragments were further characterized by ELISA, PCR, restriction fragment length polymorphism (RFLP) and Western blot analyses, as well as fluorescence microscopy and flow cytometry. Based on binding affinity to TNF-α, 15 clones were selected out of 50 positive clones enriched by PD in vitro selection. The selected scFvs displayed high specificity and binding affinity, with Kd values in the nM range, to human TNF-α. Immunofluorescence analysis revealed significant binding of the selected scFv antibody fragments to Raji B lymphoblasts. The effectiveness of the selected scFv fragments was further validated by flow cytometry in lipopolysaccharide (LPS)-treated mouse fibroblast L929 cells. Based on these findings, we propose the selected fully human anti-TNF-α scFv antibody fragments as potential immunotherapy agents that may be translated into preclinical/clinical applications.
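As a sketch of how nanomolar Kd values like those reported above are estimated from a binding curve, the snippet below fits a one-site saturation model to hypothetical ELISA readings using a coarse grid search; the concentrations, signal values, and the fitting method are illustrative assumptions, not the study's protocol:

```python
# One-site saturation binding: signal = Bmax * [L] / (Kd + [L]).
def one_site(conc_nM, bmax, kd_nM):
    return bmax * conc_nM / (kd_nM + conc_nM)

# Hypothetical ELISA readout vs. TNF-alpha concentration (nM); illustrative only.
conc   = [0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0]
signal = [0.09, 0.23, 0.50, 0.74, 0.90, 0.97, 0.99]

def fit_kd(conc, signal):
    """Coarse grid search for (Bmax, Kd) minimizing squared error --
    a minimal stand-in for a proper nonlinear least-squares fit."""
    best = None
    for bmax in [b / 100 for b in range(50, 151)]:
        for kd in [k / 100 for k in range(10, 501)]:
            sse = sum((one_site(c, bmax, kd) - s) ** 2
                      for c, s in zip(conc, signal))
            if best is None or sse < best[0]:
                best = (sse, bmax, kd)
    return best[1], best[2]

bmax, kd = fit_kd(conc, signal)
print(f"Bmax ~ {bmax:.2f}, Kd ~ {kd:.2f} nM")
```

A Kd in the low-nanomolar range, as found here for the made-up data, would correspond to the affinity class the abstract describes.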
Abstract:
BACKGROUND: The past three decades have seen rapid improvements in the diagnosis and treatment of most cancers, and the most important contributor has been research. Progress in rare cancers has been slower, not least because of the challenges of undertaking research. SETTINGS: The International Rare Cancers Initiative (IRCI) is a partnership that aims to stimulate and facilitate the development of international clinical trials for patients with rare cancers. It is focused on interventional, usually randomized, clinical trials with the clear goal of improving outcomes for patients. The key challenges are organisational and methodological. A multi-disciplinary workshop to review the methods used in IRCI portfolio trials was held in Amsterdam in September 2013. Other as-yet unrealised methods were also discussed. RESULTS: The IRCI trials are each presented to exemplify possible approaches to designing credible trials in rare cancers. Researchers may consider these for use in future trials and understand the choices made for each design. INTERPRETATION: Trials can be designed using a wide array of possibilities; there is no 'one size fits all' solution. In order to make progress in rare diseases, decisions to change practice will have to be based on less direct evidence from clinical trials than in more common diseases.
Abstract:
This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. In order to obtain information about each possible data division we carried out a conditional Monte Carlo simulation with 100,000 samples for each systematically chosen triplet. Robustness and power are studied under several experimental conditions: different autocorrelation levels and different effect sizes, as well as different phase lengths determined by the points of change. Type I error rates were distorted by the presence of autocorrelation for the majority of data divisions. Satisfactory Type II error rates were obtained only for large treatment effects. The relationship between the lengths of the four phases appeared to be an important factor for the robustness and the power of the randomization test.
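As a rough sketch of the kind of randomization test described above (not the authors' implementation), the following enumerates all admissible change-point triplets for a short ABAB series and ranks the observed A-versus-B mean difference among them; the series values and the minimum phase length are illustrative assumptions:

```python
from itertools import combinations

def abab_stat(data, t1, t2, t3):
    # Mean difference between the B phases (t1..t2, t3..end) and the A phases.
    a = data[:t1] + data[t2:t3]
    b = data[t1:t2] + data[t3:]
    return sum(b) / len(b) - sum(a) / len(a)

def randomization_test(data, t1, t2, t3, min_len=2):
    """Rank the observed |statistic| among the statistics of all admissible
    change-point triplets (every phase at least min_len measurements long)."""
    n = len(data)
    obs = abs(abab_stat(data, t1, t2, t3))
    dist = [abs(abab_stat(data, u1, u2, u3))
            for u1, u2, u3 in combinations(range(1, n), 3)
            if u1 >= min_len and u2 - u1 >= min_len
            and u3 - u2 >= min_len and n - u3 >= min_len]
    return sum(s >= obs for s in dist) / len(dist)

# Illustrative ABAB series: A phases near 0, B phases shifted up by about 2.
series = [0.1, -0.2, 0.0, 2.1, 1.9, 2.2, 0.2, -0.1, 0.1, 2.0, 2.3, 1.8]
p = randomization_test(series, 3, 6, 9)
print(f"p = {p:.3f}")
```

The p value is simply the proportion of data divisions whose statistic is at least as extreme as the observed one, which is why the relationship between phase lengths (here enforced via min_len) matters for the test's power.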
Abstract:
N = 1 designs imply repeated registrations of the behaviour of the same experimental unit; the measurements obtained are often few due to time limitations and are also likely to be sequentially dependent. The analytical techniques needed to enhance statistical and clinical decision making have to deal with these problems. Different procedures for analysing data from single-case AB designs are discussed, presenting their main features and reviewing the results reported by previous studies. Randomization tests represent one of the statistical methods that seemed to perform well in terms of controlling false alarm rates. In the experimental part of the study a new simulation approach is used to test the performance of randomization tests, and the results suggest that the technique is not always robust against violation of the independence assumption. Moreover, sensitivity proved to be generally unacceptably low for series lengths of 30 and 40. Considering the available evidence, there does not seem to be an optimal technique for single-case data analysis.
Abstract:
The present study evaluates the performance of four methods for estimating regression coefficients used to make statistical decisions about intervention effectiveness in single-case designs. Ordinary least squares estimation is compared to two correction techniques dealing with general trend and one eliminating autocorrelation whenever it is present. Type I error rates and statistical power are studied for experimental conditions defined by the presence or absence of treatment effect (change in level or in slope), general trend, and serial dependence. The results show that empirical Type I error rates do not approximate the nominal ones in the presence of autocorrelation or general trend when ordinary and generalized least squares are applied. The techniques controlling trend show lower false alarm rates, but prove to be insufficiently sensitive to existing treatment effects. Consequently, the use of the statistical significance of the regression coefficients for detecting treatment effects is not recommended for short data series.
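The regression framework in question models a single-case series with an intercept, a general-trend term, and a level-change dummy. The sketch below fits such a model by ordinary least squares via the normal equations; the data series is hypothetical and the solver is a generic textbook implementation, not one of the four methods compared in the study:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved with Gaussian elimination -- fine for a handful of predictors."""
    k, n = len(X[0]), len(y)
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):                      # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, k))) / A[r][r]
    return coef

# Hypothetical AB series: baseline (phase 0), then a level change of about 3.
y = [1.0, 1.2, 0.9, 1.1, 1.0, 4.1, 3.9, 4.2, 4.0, 4.1]
X = [[1.0, t, 1.0 if t >= 5 else 0.0] for t in range(10)]  # intercept, trend, level
b0, b1, b2 = ols(X, y)
print(f"intercept={b0:.2f} trend={b1:.2f} level change={b2:.2f}")
```

A decision rule based on the significance of the level-change coefficient b2 is exactly the kind of procedure whose Type I error and power the abstract evaluates.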
Abstract:
Monte Carlo simulations were used to generate data for ABAB designs of different lengths. The points of change in phase are randomly determined before gathering behaviour measurements, which allows the use of a randomization test as an analytic technique. Data simulation and analysis can be based either on data-division-specific or on common distributions. Following one method or another affects the results obtained after the randomization test has been applied. Therefore, the goal of the study was to examine these effects in more detail. The discrepancies in these approaches are obvious when data with zero treatment effect are considered and such approaches have implications for statistical power studies. Data-division-specific distributions provide more detailed information about the performance of the statistical technique.
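Monte Carlo studies of this kind need simulated behaviour series with controlled serial dependence and treatment effect. A common generator, sketched below under our own illustrative parameter choices (a lag-1 autoregressive process with a constant shift in the B phases), looks like this:

```python
import random

def ar1_series(n, phi, effect_points=(), effect=0.0, seed=None):
    """Generate a lag-1 autoregressive series x_t = phi * x_{t-1} + e_t,
    adding `effect` at the indices in `effect_points` (treatment phases)."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for t in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x + (effect if t in effect_points else 0.0))
    return out

# ABAB pattern of length 20: B phases at indices 5-9 and 15-19 (illustrative).
b_idx = set(range(5, 10)) | set(range(15, 20))
series = ar1_series(20, phi=0.3, effect_points=b_idx, effect=2.0, seed=1)

def lag1_autocorr(xs):
    m = sum(xs) / len(xs)
    num = sum((xs[t] - m) * (xs[t - 1] - m) for t in range(1, len(xs)))
    den = sum((v - m) ** 2 for v in xs)
    return num / den

print(f"lag-1 autocorrelation: {lag1_autocorr(series):.2f}")
```

Varying phi and effect over many replications is what allows Type I error rates (effect = 0) and power (effect > 0) to be estimated for each data division.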
Abstract:
In the health domain, the field of rehabilitation suffers from a lack of specialized staff while hospital costs keep increasing. Worse, almost no tools are dedicated to motivating patients or helping personnel monitor therapeutic exercises. This paper demonstrates the high potential that virtual reality can bring through a platform of serious games (SG) for rehabilitation of the legs involving a head-mounted display (HMD) and haptic robot devices. We first introduce SG principles and the current context of rehabilitation interventions, followed by the description of an original haptic device called the Lambda Health System. The architecture of the model is then detailed, including communication specifications showing that lag is imperceptible to the user (60 Hz). Finally, four serious games for rehabilitation using haptic robots and/or the HMD were tested by 33 health specialists.
Abstract:
The objective of this thesis work is to describe the conceptual design process of an embedded electronic display device. The work presents the following sub-processes: definition of device specifications, introduction to the technological alternatives for system components and their comparison, comparative photometric measurements of selected display panels, and the design and building of a functional concept prototype. The work focuses mainly on electronics design, although mechanical issues and the parts of the software architecture that significantly affect the design decisions are also discussed where necessary. The VESA Flat Panel Display Measurement (FPDM) 2.0 Standard was applied, to the appropriate extent, to the photometric measurements. The results were analyzed against the requirement standards of a customer-specific display development project. An active-matrix LCD was selected as the display of the concept prototype, but the excellent visual characteristics of active-matrix OLED technology were also noted. Should the reliability of OLED products improve significantly in the future, utilizing such products in the described application should be reconsidered.
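One basic photometric figure behind such panel comparisons is the full-screen contrast ratio, luminance of full white over full black. The sketch below shows only that arithmetic with hypothetical luminance readings (not the FPDM measurement procedure or data from the thesis); the near-zero black level of emissive OLED panels is what drives their much higher ratio:

```python
def contrast_ratio(l_white_cd_m2, l_black_cd_m2):
    """Full-screen contrast ratio: full-white luminance over full-black luminance."""
    return l_white_cd_m2 / l_black_cd_m2

# Hypothetical (white, black) luminance readings in cd/m^2 for two panel types.
panels = {"AMLCD": (250.0, 0.5), "AMOLED": (200.0, 0.01)}
for name, (lw, lb) in panels.items():
    print(f"{name}: contrast ratio {contrast_ratio(lw, lb):.0f}:1")
```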
Abstract:
In the context of the evidence-based practices movement, the emphasis on computing effect sizes and combining them via meta-analysis does not preclude the demonstration of functional relations. For the latter aim, we propose augmenting visual analysis to add consistency to decisions about the existence of a functional relation, without losing sight of the need for a methodological evaluation of which stimuli and reinforcement or punishment are used to control the behavior. Four options for quantification are reviewed, illustrated, and tested with simulated data. These quantifications include comparing the projected baseline with the actual treatment measurements, on the basis of either parametric or nonparametric statistics. The simulated data used to test the quantifications include nine data patterns in terms of the presence and type of effect and comprise ABAB and multiple-baseline designs. Although none of the techniques is completely flawless in detecting a functional relation only when it is present and not when it is absent, an option based on projecting split-middle trend and considering data variability as in exploratory data analysis proves to be the best performer for most data patterns. We suggest that the information on whether a functional relation has been demonstrated should be included in meta-analyses. It is also possible to use as a weight the inverse of the data variability measure used in the quantification for assessing the functional relation. We offer easy-to-use code for open-source software implementing some of the quantifications.
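The split-middle projection mentioned above has a simple mechanical core: split the baseline in half, take the median time and median level of each half, draw the line through those two points, and project it into the treatment phase. The sketch below implements that core on made-up AB data (the numbers and the above/below count are our illustration, not the article's full procedure, which also incorporates data variability):

```python
import statistics

def split_middle_line(baseline):
    """Split-middle trend: the medians of each half of the baseline define
    two (time, level) points, which determine slope and intercept."""
    n = len(baseline)
    half1, half2 = baseline[: n // 2], baseline[(n + 1) // 2 :]
    x1 = (len(half1) - 1) / 2                      # median time of first half
    x2 = (n + 1) // 2 + (len(half2) - 1) / 2       # median time of second half
    y1, y2 = statistics.median(half1), statistics.median(half2)
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

baseline  = [2.0, 2.2, 2.1, 2.4, 2.3, 2.5]   # hypothetical A-phase data
treatment = [4.0, 4.3, 4.1, 4.5, 4.4]        # hypothetical B-phase data

slope, intercept = split_middle_line(baseline)
# Count treatment points lying above the projected baseline trend.
above = sum(y > slope * (len(baseline) + i) + intercept
            for i, y in enumerate(treatment))
print(f"{above} of {len(treatment)} treatment points above projected trend")
```

All treatment points exceeding the projection, as here, is the kind of pattern that supports claiming a functional relation.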
Abstract:
The present study builds on a previous proposal for assigning probabilities to the outcomes computed using different primary indicators in single-case studies. These probabilities are obtained by comparing the outcome to previously tabulated reference values and reflect the likelihood of the results if there were no intervention effect. The current study explores how well different metrics are translated into p values in the context of simulated data. Furthermore, two published multiple-baseline data sets are used to illustrate how well the probabilities reflect the intervention effectiveness as assessed by the original authors. Finally, the importance of which primary indicator is used in each data set to be integrated is explored; two ways of combining probabilities are used: a weighted average and a binomial test. The results indicate that the translation into p values works well for the two nonoverlap procedures, with the results for the regression-based procedure diverging due to some undesirable features of its performance. These p values, both individually and combined, were well aligned with the effectiveness for the real-life data. The results suggest that assigning probabilities can be useful for translating the primary measure into a common metric, using these probabilities as additional evidence on the importance of behavioral change, complementing visual analysis and professionals' judgments.
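The two combination schemes named above are both straightforward to state. A common form of the binomial test counts how many per-study p values fall below a threshold and asks how likely that many or more would be under the null; a weighted average simply pools the p values with study-specific weights. The sketch below shows those two calculations on made-up inputs (the p values, weights, and the specific binomial formulation are our assumptions, not the study's exact procedure):

```python
from math import comb

def binomial_combination(pvals, alpha=0.05):
    """Tail probability of observing at least the actual number of p values
    below alpha, when each is below alpha with probability alpha under the null."""
    n = len(pvals)
    k = sum(p < alpha for p in pvals)
    return sum(comb(n, i) * alpha**i * (1 - alpha)**(n - i)
               for i in range(k, n + 1))

def weighted_average(pvals, weights):
    """Weighted mean of per-study p values."""
    return sum(w * p for w, p in zip(weights, pvals)) / sum(weights)

pvals   = [0.01, 0.04, 0.20, 0.03]   # hypothetical per-tier p values
weights = [10, 8, 5, 12]             # e.g. inverse of each tier's data variability
print(f"binomial test: {binomial_combination(pvals):.5f}")
print(f"weighted average p: {weighted_average(pvals, weights):.3f}")
```

With three of four p values significant, the binomial tail is far below 0.05 even though the plain weighted average hovers near it, which illustrates why the choice of combination rule matters.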
Abstract:
The optimization of the anaerobic degradation of the azo dye Remazol golden yellow RNL was performed according to multivariate experimental designs: a 2² full-factorial design and a central composite design (CCD). The CCD revealed that the best incubation conditions (90% color removal) for the degradation of the azo dye (50 mg L⁻¹) were achieved with 350 mg L⁻¹ of yeast extract and 45 mL of anaerobic supernatant (cell-free extract) produced from the incubation of 650 mg L⁻¹ of anaerobic microorganisms and 250 mg L⁻¹ of glucose. A first-order kinetics model best fit the experimental data (k = 0.0837 h⁻¹, R² = 0.9263).
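The first-order model quoted above, C(t) = C0·exp(-kt), can be exercised in a few lines with the reported rate constant; the time-to-90%-removal value below is our own illustrative arithmetic, not a figure from the study:

```python
from math import exp, log

K = 0.0837   # first-order rate constant reported in the abstract, h^-1
C0 = 50.0    # initial dye concentration, mg L^-1

def concentration(t_h):
    """First-order decay: C(t) = C0 * exp(-K * t)."""
    return C0 * exp(-K * t_h)

# Time needed for 90% colour removal: C(t) = 0.1 * C0  =>  t = ln(10) / K.
t90 = log(10) / K
print(f"t(90% removal) ~ {t90:.1f} h, C(t90) = {concentration(t90):.2f} mg/L")
```

At the reported k, roughly 27.5 h would be needed to reach the 90% colour-removal level the optimization targeted.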
Abstract:
Antibodies are natural binding proteins produced in vertebrates in response to invading pathogens and foreign substances. Because of their capacity for tight and specific binding, antibodies have found use as binding reagents in research and diagnostics. The properties of cloned recombinant antibodies can be further improved by in vitro evolution, combining mutagenesis with subsequent phage display selection. It is also possible to isolate entirely new antibodies from vast naïve or synthetic antibody libraries by phage display. In this study, library techniques and phage display selection were applied in order to optimise the binding scaffolds and antigen recognition of antibodies, and to evolve new and improved bioaffinity reagents. Antibody libraries were generated by random and targeted mutagenesis. Expression and stability were mainly optimised by the random methods, whereas targeted randomisation of the binding site residues was used for optimising the binding properties. Trinucleotide mutagenesis allowed the design of defined randomisation patterns for a synthetic antibody library. Improved clones were selected by phage display. Capture by a specific anti-DHPS antibody was exploited in the selection for improved phage display of DHPS. Efficient selection for stability was established by combining phage display selection with denaturation under reducing conditions. The broad-specific binding of a generic anti-sulfonamide antibody was improved by selection with one of the weakest-binding sulfonamides. In addition, p9-based phage display was studied in affinity selection from the synthetic library. The TIM-barrel protein DHPS was engineered for efficient phage display by combining cysteine replacement with random mutagenesis. The resulting clone allows the use of phage display in further engineering of DHPS and possibly its use as an alternative binding scaffold.
An anti-TSH scFv fragment, cloned from a monoclonal antibody, was engineered for improved stability to better suit an immunoassay. The improved scFv tolerates an 8–9 °C higher temperature than the parental scFv and should have sufficient stability to be used in an immunoanalyser with incubation at 36 °C. The anti-TSH scFv fragment was compared with the corresponding Fab fragment and the parental monoclonal antibody as a capturing reagent in a rapid 5-min immunoassay for TSH. The scFv fragment provided some benefits over the conventionally used mAb in analyte-binding capacity and assay kinetics. However, the recombinant Fab fragment, which had kinetics similar to the scFv, provided a more sensitive and reliable assay than the scFv. Another cloned scFv fragment was engineered to improve broad-specific recognition of sulfonamides. The improved antibody detects different sulfonamides at concentrations below the maximum residue limit (100 μg/kg in the EU and USA) and allows simultaneous screening of different sulfonamide drug residues. Finally, a synthetic antibody library was constructed, and new antibodies were generated and affinity-matured entirely in vitro. These results illuminate the possibilities of phage display and antibody engineering for the generation and optimisation of binding reagents in vitro and indicate the potential of recombinant antibodies as affinity reagents in immunoassays.
Abstract:
Software systems are expanding and becoming increasingly present in everyday activities. A constantly evolving society demands that they deliver more functionality, are easy to use, and work as expected. All these challenges increase the size and complexity of a system. People may not be aware of the presence of a software system until it malfunctions or fails to perform. The concept of being able to depend on the software is particularly significant for critical systems. Here the quality of a system is regarded as an essential issue, since any deficiency may lead to considerable financial loss or endanger lives. Traditional development methods may not ensure a sufficiently high level of quality. Formal methods, on the other hand, allow us to achieve a high level of rigour and can be applied to develop a complete system or only a critical part of it. Such techniques, applied from the early design stages of system development, increase the likelihood of obtaining a system that works as required. However, formal methods are sometimes considered difficult to utilise in traditional developments. It is therefore important to make them more accessible and to reduce the gap between formal and traditional development methods. This thesis explores the usability of rigorous approaches by giving an insight into formal designs through graphical notation. The understandability of formal modelling is increased by a compact representation of the development and the related design decisions. The central objective of the thesis is to investigate the impact that rigorous approaches have on the quality of developments. This makes it necessary to establish techniques for evaluating rigorous developments. Since we study various development settings and methods, specific measurement plans and a set of metrics need to be created for each setting.
Our goal is to provide methods for collecting data and recording evidence of the applicability of rigorous approaches. This would support organisations in making decisions about integrating formal methods into their development processes. It is important to control software development, especially in its initial stages. Therefore, we focus on the specification and modelling phases, as well as related artefacts, e.g. models, which have a significant influence on the quality of the final system. Since the application of formal methods may increase the complexity of a system, it may affect its maintainability, and thus its quality. Our goal is to leverage the quality of a system via metrics and measurements, as well as generic refinement patterns applied to a model and a specification. We argue that these can facilitate the process of creating software systems, e.g. by controlling complexity and providing modelling guidelines. Moreover, we regard them as additional mechanisms for quality control and improvement, also for rigorous approaches. The main contribution of this thesis is to provide metrics and measurements that help in assessing the impact of rigorous approaches on developments. We establish techniques for the evaluation of certain aspects of quality, based on structural, syntactical, and process-related characteristics of early-stage development artefacts, i.e. specifications and models. The presented approaches are applied to various case studies, and the results of the investigation are juxtaposed with the perception of domain experts. It is our aspiration to promote measurement as an indispensable part of the quality control process and as a strategy towards quality improvement.
Abstract:
Protein engineering aims to improve the properties of enzymes and affinity reagents through genetic changes. Typical engineered properties are affinity, specificity, stability, expression, and solubility. Because proteins are complex biomolecules, the effects of specific genetic changes are seldom predictable. Consequently, a popular strategy in protein engineering is to create a library of genetic variants of the target molecule and subject the population to a selection process that sorts the variants by the desired property. This technique, called directed evolution, is a central tool for tailoring protein-based products used in a wide range of applications, from laundry detergents to anti-cancer drugs. New methods are continuously needed to generate larger gene repertoires and compatible selection platforms to shorten the development timeline for new biochemicals. In the first study of this thesis, primer extension mutagenesis was revisited to establish higher-quality gene variant libraries in Escherichia coli cells. In the second study, recombination was explored as a method to expand the number of screenable enzyme variants. A selection platform was developed to improve antigen-binding fragment (Fab) display on filamentous phages in the third article and, in the fourth study, novel design concepts were tested with two differentially randomized recombinant antibody libraries. Finally, in the last study, the performance of the same antibody repertoire was compared in phage display selections as a genetic fusion to different phage capsid proteins and in different antibody formats, Fab vs. single-chain variable fragment (scFv), in order to find the most suitable display platform for the library at hand. As a result of these studies, a novel gene library construction method, termed selective rolling circle amplification (sRCA), was developed.
The method increases the mutagenesis frequency to close to 100% in the final library and the number of transformants over 100-fold compared to traditional primer extension mutagenesis. In the second study, Cre/loxP recombination was found to be an appropriate tool for resolving the DNA concatemer resulting from error-prone RCA (epRCA) mutagenesis into monomeric circular DNA units for higher-efficiency transformation into E. coli. Library selections against antigens of various sizes in the fourth study demonstrated that diversity placed closer to the antigen-binding site of antibodies supports the generation of antibodies against haptens and peptides, whereas diversity at more peripheral locations is better suited for targeting proteins. The comparison of display formats showed that the truncated capsid protein three (p3Δ) of filamentous phage was superior to the full-length p3 and protein nine (p9) in obtaining a high number of uniquely specific clones. Especially for digoxigenin, a difficult hapten target, the antibody repertoire as scFv-p3Δ provided the clones with the highest binding affinity. This thesis on the construction, design, and selection of gene variant libraries contributes to practical know-how in directed evolution and contains useful information to support scientists in the field in their undertakings.