Abstract:
Modulation of protein binding specificity is important for basic biology and for applied science. Here we explore how binding specificity is conveyed in PDZ (postsynaptic density protein-95/discs large/zonula occludens-1) domains, small interaction modules that recognize various proteins by binding to an extended C terminus. Our goal was to engineer variants of the Erbin PDZ domain with altered specificity for the most C-terminal position (position 0) where a Val is strongly preferred by the wild-type domain. We constructed a library of PDZ domains by randomizing residues in direct contact with position 0 and in a loop that is close to but does not contact position 0. We used phage display to select for PDZ variants that bind to 19 peptide ligands differing only at position 0. To verify that each obtained PDZ domain exhibited the correct binding specificity, we selected peptide ligands for each domain. Despite intensive efforts, we were only able to evolve Erbin PDZ domain variants with selectivity for the aliphatic C-terminal side chains Val, Ile and Leu. Interestingly, many PDZ domains with these three distinct specificities contained identical amino acids at positions that directly contact position 0 but differed in the loop that does not contact position 0. Computational modeling of the selected PDZ domains shows how slight conformational changes in the loop region propagate to the binding site and result in different binding specificities. Our results demonstrate that second-sphere residues could be crucial in determining protein binding specificity.
Abstract:
Convex potential minimisation is the de facto approach to binary classification. However, Long and Servedio [2008] proved that under symmetric label noise (SLN), minimisation of any convex potential over a linear function class can result in classification performance equivalent to random guessing. This ostensibly shows that convex losses are not SLN-robust. In this paper, we propose a convex, classification-calibrated loss and prove that it is SLN-robust. The loss avoids the Long and Servedio [2008] result by virtue of being negatively unbounded. The loss is a modification of the hinge loss, where one does not clamp at zero; hence, we call it the unhinged loss. We show that the optimal unhinged solution is equivalent to that of a strongly regularised SVM, and is the limiting solution for any convex potential; this implies that strong ℓ2 regularisation makes most standard learners SLN-robust. Experiments confirm the unhinged loss's SLN-robustness.
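What makes the SLN-robustness easy to see is that the ℓ2-regularised unhinged risk has a closed-form minimiser: for the unhinged loss ℓ(v) = 1 − v over linear scorers, the minimiser is the scaled class-mean difference, whose direction is unchanged in expectation under symmetric noise (each flipped label merely rescales it by 1 − 2ρ). The following minimal numpy sketch illustrates this; it is not the authors' code, and the synthetic data, regularisation strength and noise rate are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Unhinged loss: the hinge loss without the clamp at zero.
def unhinged(margin):          # ell(v) = 1 - v  (negatively unbounded)
    return 1.0 - margin

def hinge(margin):             # ell(v) = max(0, 1 - v), for comparison
    return np.maximum(0.0, 1.0 - margin)

# For a linear scorer w.x with penalty lam*||w||^2, setting the gradient of
# (1/n) sum_i (1 - y_i w.x_i) + lam ||w||^2 to zero gives the closed form
# w = mean(y_i x_i) / (2 lam): a scaled mean-difference classifier.
def unhinged_minimiser(X, y, lam):
    return (y[:, None] * X).mean(axis=0) / (2.0 * lam)

# Two Gaussian classes (illustrative synthetic data).
n, d = 2000, 2
X = np.vstack([rng.normal(+1.0, 1.0, (n // 2, d)),
               rng.normal(-1.0, 1.0, (n // 2, d))])
y = np.concatenate([np.ones(n // 2), -np.ones(n // 2)])

# Symmetric label noise: flip each label independently with probability rho.
rho = 0.3
flips = rng.random(n) < rho
y_noisy = np.where(flips, -y, y)

w_clean = unhinged_minimiser(X, y, lam=1.0)
w_noisy = unhinged_minimiser(X, y_noisy, lam=1.0)

# In expectation the noisy minimiser is (1 - 2*rho) * w_clean, so the
# decision boundary's direction is essentially unchanged by the noise.
cos = w_clean @ w_noisy / (np.linalg.norm(w_clean) * np.linalg.norm(w_noisy))
acc = np.mean(np.sign(X @ w_noisy) == y)   # accuracy against the CLEAN labels
print(f"cosine(w_clean, w_noisy) = {cos:.4f}, clean accuracy = {acc:.3f}")

Running this sketch, the cosine similarity stays near 1 even at ρ = 0.3, illustrating the abstract's claim that the unhinged (equivalently, strongly ℓ2-regularised) solution is robust to symmetric label noise; a hinge- or logistic-loss minimiser on the same noisy data has no such closed-form guarantee.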