Abstract:
Protein scaffolds that support molecular recognition have multiple applications in biotechnology. Thus, protein frames with robust structural cores but adaptable surface loops are in continued demand. Recently, notable progress has been made in the characterization of Ig domains of intracellular origin, in particular the modular components of the titin myofilament. These Ig domains belong to the I (intermediate) type and are remarkably stable, highly soluble, and undemanding to produce in the cytoplasm of Escherichia coli. Using the Z1 domain from titin as a representative, we show that the I-Ig fold tolerates drastic diversification of its CD loop, constituting an effective peptide display system. We examine the stability of CD-loop-grafted Z1-peptide chimeras using differential scanning fluorimetry, Fourier transform infrared spectroscopy and nuclear magnetic resonance, and demonstrate that the introduction of bioreactive affinity binders at this position does not compromise the structural integrity of the domain. Further, the binding efficiency of the exogenous peptide sequences in Z1 is analyzed using pull-down assays and isothermal titration calorimetry. We show that an internally grafted FLAG affinity tag is functional within the context of the fold, interacting with the anti-FLAG M2 antibody both in solution and on affinity gel. Together, these data reveal the potential of the intracellular Ig scaffold for targeted functionalization.
Abstract:
Conventional liquid-liquid extraction (LLE) methods require large volumes of fluids to achieve the desired mass transfer of a solute, which is unsuitable for systems dealing with a low-volume or high-value product. An alternative to these methods is to scale down the process. Millifluidic devices share many of the benefits of microfluidic systems, including low fluid volumes, increased interfacial area-to-volume ratio, and predictability. A robust millifluidic device was created from acrylic, glass, and aluminum. The channel is lined with a hydrogel cured in the bottom half of the device channel. This hydrogel stabilizes co-current laminar flow of immiscible organic and aqueous phases. Mass transfer of the solute occurs across the interface of these contacting phases. At a Y-junction, an aqueous emulsion is created in an organic phase. The emulsion travels through a length of tubing and then enters the co-current laminar flow device, where the emulsion is broken and each phase can be collected separately. The inclusion of this emulsion formation and separation increases the contact area between the organic and aqueous phases, thereby increasing the area over which mass transfer can occur. Using this design, 95% extraction efficiency was obtained, where 100% corresponds to equilibrium. Continued study of this LLE process will allow it to be optimized and, with better understanding, more accurately modeled. This system has the potential to scale up to the industrial level and provide the efficient extraction required with low fluid volumes and a well-behaved system.
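The 95% figure above is defined relative to the transfer that would occur at thermodynamic equilibrium. As an illustration only (the abstract gives neither the exact formula nor the concentration data), the sketch below computes an equilibrium-referenced extraction efficiency from inlet, outlet, and equilibrium solute concentrations; all numerical values are placeholders.

    # Minimal sketch: extraction efficiency relative to equilibrium for a
    # single-stage liquid-liquid extraction. Concentrations are placeholder
    # values, not data from the study.
    def extraction_efficiency(c_in, c_out, c_eq):
        """Fraction of the equilibrium-limited solute transfer actually
        achieved; 1.0 means the raffinate leaves the device at equilibrium."""
        return (c_in - c_out) / (c_in - c_eq)

    c_in = 1.00   # solute concentration in the aqueous feed (arbitrary units)
    c_eq = 0.20   # concentration the aqueous phase would reach at equilibrium
    c_out = 0.24  # measured concentration in the aqueous outlet
    print(f"efficiency = {extraction_efficiency(c_in, c_out, c_eq):.0%}")  # -> 95%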
Abstract:
Mr. Kubon's project was inspired by the growing need for an automatic syntactic analyser (parser) of Czech that could be used in the syntactic processing of large amounts of text. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "tree bank" (a collection of syntactic representations of natural language sentences) is a very important step towards the investigation of the properties of a given language. The work involved in syntactically parsing a whole corpus in order to get a representative set of syntactic structures would be almost inconceivable without the help of some kind of robust (semi)automatic parser. The need for the automatic natural language parser to be robust increases with the size of the linguistic data in the corpus or in any other kind of text which is going to be parsed. Practical experience shows that, apart from syntactically correct sentences, there are many sentences which contain a "real" grammatical error. These sentences may be corrected in small-scale texts, but not generally in a whole corpus. In order to be able to complete the overall project, it was necessary to address a number of smaller problems. These were: 1. the adaptation of a suitable formalism able to describe the formal grammar of the system; 2. the definition of the structure of the system's dictionary containing all relevant lexico-syntactic information; 3. the development of a formal grammar able to robustly parse Czech sentences from the test suite; 4. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words); 5. the development of a set of sample sentences containing a reasonable amount of grammatical and ungrammatical phenomena covering some of the most typical syntactic constructions used in Czech. Task 3, building a formal grammar, was the main task of the project. The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language may ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of the syntactic structure of simple clauses and also the structure of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method uses an approach similar to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of localisation and identification of syntactic errors: without precise knowledge of the nature and location of syntactic errors, it is not possible to build a reliable estimate of a "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological issue. Experience from previous projects showed that building a grammar by creating a huge block of metarules is more complicated than the incremental method, which begins with the metarules covering the most common syntactic phenomena and adds less important ones later; this is especially valuable for testing and debugging the grammar. The sample of the syntactic dictionary containing lexico-syntactic information (task 4) now has slightly more than 1000 lexical items representing all classes of words.
During the creation of the dictionary it turned out that assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process which would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during the process of its development. The consistency of new and modified rules of the formal grammar with the rules already in place is one of the crucial problems of every project aiming at the development of a large-scale formal grammar of a natural language. This method allows for the detection of any discrepancy or inconsistency of the grammar with respect to a test-bed of sentences containing all syntactic phenomena covered by the grammar. This is not only the first robust parser of Czech, but also one of the first robust parsers of a Slavic language. Since Slavic languages display a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems in other languages. To transfer the system to another language, it is only necessary to revise the grammar and to change the data contained in the dictionary (but not necessarily the structure of the primary lexico-syntactic information). The formalism and methods used in this project can be applied to other Slavic languages without substantial changes.
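The consistency check described above lends itself to a simple regression-testing loop. The sketch below is only an illustration of that idea, not the project's actual tooling: `parse` is a hypothetical stand-in for the robust parser, and the test-bed entries are invented placeholders pairing a sentence with its expected analysis.

    # Illustrative sketch of checking a grammar against a test-bed of sentences.
    # `parse` is a hypothetical placeholder for the project's robust parser.
    def parse(sentence, grammar):
        """Return the analysis produced for `sentence` (placeholder stub)."""
        raise NotImplementedError("replace with the actual parser")

    def check_testbed(grammar, testbed):
        """Re-parse every test-bed sentence after a grammar change and collect
        any sentence whose analysis no longer matches the expected one."""
        failures = []
        for sentence, expected in testbed:
            actual = parse(sentence, grammar)
            if actual != expected:
                failures.append((sentence, expected, actual))
        return failures

    # Invented examples: one grammatical and one deliberately ill-formed sentence.
    testbed = [
        ("Correct simple clause.", "analysis-A"),
        ("Clause with an agreement error.", "analysis-B (error localized)"),
    ]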
Abstract:
Various inference procedures for linear regression models with censored failure times have been studied extensively. Recent developments in efficient algorithms to implement these procedures have enhanced the practical usage of such models in survival analysis. In this article, we present robust inferences for certain covariate effects on the failure time in the presence of "nuisance" confounders under a semiparametric, partial linear regression setting. Specifically, the estimation procedures for the regression coefficients of interest are derived from a working linear model and remain valid even when the function of the confounders in the model is not correctly specified. The new proposals are illustrated with two examples, and their validity for cases with practical sample sizes is demonstrated via a simulation study.
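The abstract does not spell out the estimator, and the article additionally handles censored failure times, so the following is only a generic sketch of the underlying idea for uncensored data: a Robinson-style partialling-out strategy that smooths the response and the covariate of interest against the confounder and then regresses residual on residual, so the confounder function never has to be specified correctly.

    # Generic illustration (not the article's estimator): partial linear model
    # y = beta * x + g(z) + noise, with g() left unspecified. Censoring is ignored.
    import numpy as np

    def partial_linear_beta(y, x, z, bandwidth=0.3):
        """Kernel-smooth y and x against the confounder z, then regress the
        residuals on each other to estimate beta (Robinson-style)."""
        def smooth(v):
            w = np.exp(-0.5 * ((z[:, None] - z[None, :]) / bandwidth) ** 2)
            return (w @ v) / w.sum(axis=1)
        y_res, x_res = y - smooth(y), x - smooth(x)
        return np.sum(x_res * y_res) / np.sum(x_res ** 2)

    rng = np.random.default_rng(0)
    z = rng.uniform(0.0, 1.0, 300)
    x = rng.normal(size=300)
    y = 2.0 * x + np.sin(2.0 * np.pi * z) + rng.normal(scale=0.5, size=300)
    print(partial_linear_beta(y, x, z))  # close to the true beta = 2.0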
Abstract:
Cu is an essential nutrient for man, but it can be toxic if intakes are too high. In sensitive populations, marginal over- or under-exposure can have detrimental effects. Malnourished children, the elderly, and pregnant or lactating females may be susceptible to Cu deficiency. Cu status and exposure in the population cannot currently be measured easily, as neither plasma Cu nor plasma cuproenzymes reflect Cu status precisely. Some blood markers (such as ceruloplasmin) indicate severe Cu depletion, but they do not respond inversely to Cu excess and are not suitable for indicating marginal states. A biomarker of Cu is needed that is sensitive to small changes in Cu status and that responds to Cu excess as well as deficiency. Such a marker will aid in monitoring Cu status in large populations and will help to avoid chronic health effects (for example, liver damage in chronic toxicity; osteoporosis, loss of collagen stability, or increased susceptibility to infections in deficiency). The advent of high-throughput technologies has enabled us to screen for potential biomarkers in the whole proteome of a cell, including markers that have no direct link to Cu. Further, such screening allows us to search for a whole group of proteins that, in combination, reflect Cu status. The present review emphasises the need to find sensitive biomarkers for Cu, examines potential markers of Cu status already available, and discusses methods to identify a novel suite of biomarkers.
Abstract:
Potential treatment strategies for neurodegenerative and other diseases based on stem cells derived from nonembryonic tissues are much less subject to ethical criticism than embryonic stem cell-based approaches. Here we report the isolation, after protracted postmortem intervals, of inner ear stem cells that may be useful in cell replacement therapies for hearing loss. We found that neonatal murine inner ear tissues, including vestibular and cochlear sensory epithelia, display remarkably robust cellular survival, even 10 days postmortem. Similarly, isolation of sphere-forming stem cells was possible up to 10 days postmortem. We detected no difference in proliferation and differentiation potential between stem cells isolated directly after death and those isolated up to 5 days postmortem. At longer postmortem intervals, we observed that the potency of sphere-derived cells to spontaneously differentiate into mature cell types diminishes before the cells lose their potential for self-renewal. Three-week-old mice also displayed sphere-forming stem cells in all inner ear tissues investigated, up to 5 days postmortem. In summary, our results demonstrate that postmortem murine inner ear tissue is suitable for the isolation of stem cells.
Abstract:
Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, iteratively estimates a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines the surface model. Outliers are handled by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis testing procedure to estimate it automatically. We present our validations using four experiments: (1) a leave-one-out experiment; (2) an experiment evaluating the present approach for handling pathology; (3) an experiment evaluating the present approach for handling outliers; and (4) an experiment on reconstructing surface models of seven dry cadaver femurs using clinically relevant data, without noise and with added noise. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95th-percentile error of 1.7-2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with added noise.
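The outlier handling above relies on least trimmed squares (LTS): at each estimation step, only the fraction of points with the smallest residuals contributes to the fit. The sketch below illustrates that trimming idea for a single rigid alignment step between corresponding 3D points; it is an illustration under stated assumptions, not the article's three-stage reconstruction pipeline.

    # Illustration of an LTS-style update: keep only the (1 - outlier_rate)
    # fraction of correspondences with the smallest residuals, then fit a rigid
    # transform to that trimmed subset (Kabsch/Procrustes; reflection fix omitted).
    import numpy as np

    def trimmed_rigid_step(src, dst, outlier_rate=0.2):
        """One rigid fit of src onto dst using only the best-matching points."""
        keep = int(round((1.0 - outlier_rate) * len(src)))
        residuals = np.linalg.norm(src - dst, axis=1)
        idx = np.argsort(residuals)[:keep]        # trimmed subset
        s, d = src[idx], dst[idx]
        s_c, d_c = s - s.mean(axis=0), d - d.mean(axis=0)
        u, _, vt = np.linalg.svd(s_c.T @ d_c)
        rotation = vt.T @ u.T
        translation = d.mean(axis=0) - rotation @ s.mean(axis=0)
        return rotation, translation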
Abstract:
The similarity measure is one of the main factors that affect the accuracy of intensity-based 2D/3D registration of X-ray fluoroscopy to CT images. Information theory has been used to derive similarity measures for image registration, leading to the introduction of mutual information, an accurate similarity measure for multi-modal and mono-modal image registration tasks. However, it is known that the standard mutual information measure only takes intensity values into account without considering spatial information, and its robustness is questionable. Previous attempts to incorporate spatial information into mutual information either require computing the entropy of higher-dimensional probability distributions or are not robust to outliers. In this paper, we show how to incorporate spatial information into mutual information without suffering from these problems. Using a variational approximation derived from the Kullback-Leibler bound, spatial information can be effectively incorporated into mutual information via energy minimization. The resulting similarity measure has a least-squares form and can be effectively minimized by a multi-resolution Levenberg-Marquardt optimizer. Experimental results are presented on datasets from two applications: (a) intra-operative patient pose estimation from a few (e.g., 2) calibrated fluoroscopic images, and (b) post-operative cup alignment estimation from a single X-ray radiograph with gonadal shielding.
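For reference, the baseline criticized above, standard intensity-only mutual information, can be computed from a joint intensity histogram as sketched below; the spatially informed, least-squares measure proposed in the paper is not reproduced here.

    # Standard intensity-only mutual information from a joint histogram.
    # This is the baseline measure discussed in the abstract, not the proposed
    # spatially informed similarity measure.
    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p_ab = joint / joint.sum()
        p_a = p_ab.sum(axis=1, keepdims=True)     # marginal of image A
        p_b = p_ab.sum(axis=0, keepdims=True)     # marginal of image B
        nz = p_ab > 0
        return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))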
Abstract:
The purpose of this study was to assess the performance of a new motion correction algorithm. Twenty-five dynamic MR mammography (MRM) data sets and 25 contrast-enhanced three-dimensional peripheral MR angiography (MRA) data sets affected by patient motion of varying severity were selected retrospectively from routine examinations. Anonymized data were registered by a new experimental elastic motion correction algorithm. The algorithm works by computing a similarity measure for the two volumes that takes into account expected signal changes due to the presence of a contrast agent while penalizing other signal changes caused by patient motion. A conjugate gradient method is used to find the set of motion parameters that maximizes the similarity measure across the entire volume. Images before and after correction were visually evaluated and scored by experienced radiologists with respect to reduction of motion, improvement of image quality, disappearance of existing lesions, and creation of artifactual lesions. It was found that the correction improved image quality (in 76% of MRM and 96% of MRA data sets) and diagnosability (in 60% of MRM and 96% of MRA data sets).
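The outer loop described above, a conjugate-gradient search over motion parameters that maximizes a volume-wide similarity measure, can be sketched as follows. The contrast-aware similarity measure of the algorithm is not specified in the abstract, so a plain mean-squared-difference objective stands in, and only a 3D translation is optimized; both are assumptions made for illustration.

    # Illustrative outer optimization loop: conjugate-gradient search over motion
    # parameters. The objective below (mean squared difference after shifting the
    # moving volume) is a placeholder for the contrast-aware similarity measure.
    import numpy as np
    from scipy.ndimage import shift
    from scipy.optimize import minimize

    def objective(params, moving, fixed):
        """Mean squared difference after translating the moving volume; minimizing
        it is equivalent to maximizing a negative-SSD similarity."""
        moved = shift(moving, params, order=1, mode="nearest")
        return float(np.mean((moved - fixed) ** 2))

    def correct_motion(moving, fixed):
        result = minimize(objective, x0=np.zeros(3), args=(moving, fixed), method="CG")
        return result.x  # estimated translation that best realigns the volumes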
Abstract:
Transcriptomics could contribute significantly to the early and specific diagnosis of rejection episodes by defining 'molecular Banff' signatures. Recently, the description of pathogenesis-based transcript sets has offered a new opportunity for objective and quantitative diagnosis. Generating high-quality transcript panels is thus critical to defining high-performance diagnostic classifiers. In this study, a comparative analysis was performed across four different microarray datasets of heterogeneous sample collections: two published clinical datasets and two in-house datasets comprising biopsies taken for clinical indication and samples from nonhuman primates. We characterized a common transcriptional profile of 70 genes, defined as the acute rejection transcript set (ARTS). ARTS expression is significantly up-regulated in all acute rejection (AR) samples compared with stable allografts or healthy kidneys, and it correlates strongly with the severity of Banff AR types. In addition, ARTS was tested as a classifier in a large collection of 143 independent biopsies recently published by the University of Alberta. The results demonstrate that the 'in silico' approach applied in this study is able to identify a robust and reliable molecular signature for AR, supporting a specific and sensitive molecular diagnostic approach for renal transplant monitoring.
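As an illustration of how a transcript-set signature such as ARTS can be turned into a per-sample score and a simple classifier, the sketch below averages the z-scored expression of a signature gene set and thresholds the result. The gene identifiers, matrix layout, and cutoff are placeholders; the actual 70-gene panel and the classifier used in the study are not reproduced here.

    # Placeholder sketch of signature-based scoring: mean z-scored expression of a
    # gene set per sample, followed by a simple threshold. Not the study's classifier.
    import numpy as np

    def signature_score(expr, gene_names, signature):
        """expr: genes x samples matrix; returns one score per sample column."""
        z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
        rows = [gene_names.index(g) for g in signature if g in gene_names]
        return z[rows].mean(axis=0)

    def classify(scores, cutoff=0.5):
        return ["AR" if s > cutoff else "stable" for s in scores]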