975 results for Exact Solutions
Removal of endotoxin from human serum albumin solutions by hydrophobic and cationic charged membrane
Abstract:
A novel macroporous cellulose membrane matrix was prepared by chemical grafting, and cationic charged groups were immobilized on it as affinity ligands. The prepared membrane can be used for the removal of endotoxin from human serum albumin (HSA) solutions. With a cartridge of 20 sheets of 47 mm diameter affinity membrane, the endotoxin level in HSA solution can be reduced to 0.027 EU/mL. Recovery of HSA was over 95%.
Abstract:
A unique matching is a stated objective of most computational theories of stereo vision. This report describes situations where humans perceive a small number of surfaces arising from non-unique matchings of random dot patterns, even though a unique solution exists and is observed unambiguously in the perception of isolated features. We find both cases where the non-unique matchings compete and suppress each other and cases where they are all perceived as transparent surfaces. The circumstances under which each behavior occurs are discussed and a possible explanation is sketched. It appears that matching reduces many false targets to a few, but may still yield multiple solutions in some cases through a (possibly different) process of surface interpolation.
Abstract:
Aqueous solutions of amphiphilic polymers usually comprise inter- and intramolecular associations of hydrophobic groups, often leading to the formation of a rheologically significant reversible network at low concentrations that can be identified using techniques such as static light scattering and rheometry. However, in most studies published to date comparing water-soluble polymers with their respective amphiphilic derivatives, it has been very difficult to distinguish between the effects of molecular mass and of hydrophobic associations on hydrodynamic (intrinsic viscosity [η]) and thermodynamic parameters (second virial coefficient A2), owing to the differences between their degrees of polymerization. This study focuses on dilute and semi-dilute solutions of hydroxyethyl cellulose (HEC) and its amphiphilic derivatives (hmHEC) of the same molecular mass, along with other samples having a different molecular mass, using capillary viscometry, rheometry and static light scattering. The weight-average molecular masses (MW) and their distributions for the non-associative HEC were determined using size exclusion chromatography. Various empirical approaches developed by past authors to determine [η] from dilute solution viscometry data are discussed. hmHEC with a sufficiently high degree of hydrophobic modification was found to form a rheologically significant network in dilute solutions at very low concentrations, as opposed to hmHEC with a much lower degree of hydrophobic modification, which instead enveloped the hydrophobic groups inside the supramolecular cluster, as shown by their [η] and A2. The ratio A2MW/[η], which takes into account hydrodynamic as well as thermodynamic parameters, was observed to be smaller for the associative polymers than for the non-associative polymers.
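One classical empirical approach of the kind the abstract alludes to is the Huggins extrapolation, in which the reduced viscosity ηsp/c is fitted linearly against concentration and [η] is read off as the intercept. The sketch below is illustrative only: the concentrations and relative viscosities are invented, and the least-squares fit is written out by hand.

```python
# Hypothetical dilute-solution viscometry data (illustrative, not measured):
# concentrations c in g/dL and relative viscosities eta_r = eta / eta_solvent.
c = [0.05, 0.10, 0.15, 0.20, 0.25]
eta_r = [1.1579, 1.3315, 1.5209, 1.7260, 1.9469]

# Huggins equation: eta_sp / c = [eta] + k_H * [eta]^2 * c,
# so a linear fit of eta_sp / c against c gives [eta] as the intercept.
x = c
y = [(er - 1.0) / ci for er, ci in zip(eta_r, c)]  # reduced viscosity eta_sp / c

n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intrinsic_viscosity = my - slope * mx                 # [eta], in dL/g
huggins_constant = slope / intrinsic_viscosity ** 2   # k_H, dimensionless

print(f"[eta] = {intrinsic_viscosity:.2f} dL/g, k_H = {huggins_constant:.2f}")
```

In practice [η] is obtained in the limit c → 0, which is why only dilute-regime points belong in the fit; at higher concentrations (or for strongly associating hmHEC) the linearity assumed here breaks down.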
Abstract:
Soldatova, L. N., Aubrey, W., King, R. D., Clare, A. J. (2008). The EXACT description of biomedical protocols. Bioinformatics, 24(13), i295-i303. Sponsorship: BBSRC / RAEng / EPSRC. Special issue: ISMB.
Abstract:
Plakhov, A. Y. (2004). 'Precise solutions of the one-dimensional Monge-Kantorovich problem', Sbornik: Mathematics, 195(9), pp. 1291-1307. RAE2008.
Abstract:
This thesis elaborates on the problem of preprocessing a large graph so that single-pair shortest-path queries can be answered quickly at runtime. Computing shortest paths is a well-studied problem, but exact algorithms do not scale well to the huge real-world graphs found in applications that require very short response times. The focus is on approximate methods for distance estimation, in particular on landmark-based distance indexing. This approach involves choosing some nodes as landmarks and computing (offline), for each node in the graph, its embedding, i.e., the vector of its distances from all the landmarks. At runtime, when the distance between a pair of nodes is queried, it can be quickly estimated by combining the embeddings of the two nodes. Choosing optimal landmarks is shown to be hard, and thus heuristic solutions are employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the techniques presented in this thesis is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach, which selects landmarks at random. Finally, they are applied to two important problems arising naturally in large-scale graphs, namely social search and community detection.
Abstract:
We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well-studied problem, but exact algorithms do not scale to the huge graphs encountered on the web, in social networks, and in other applications. In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach in the literature, which selects landmarks at random. Finally, we study applications of our method in two problems arising naturally in large-scale networks, namely, social search and community detection.
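The landmark scheme described in the two abstracts above can be sketched in a few lines. Everything here is an illustrative assumption rather than code from the thesis or paper: a toy path graph, BFS for the offline distance computation, and the standard triangle-inequality upper bound d(u,v) ≤ min over landmarks l of d(u,l) + d(l,v) as the runtime estimate.

```python
from collections import deque

def bfs_distances(graph, source):
    """Unweighted shortest-path distances from source to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Toy undirected graph (adjacency lists): the path 0-1-2-3-4-5.
graph = {
    0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4],
}

# Offline phase: pick landmarks and embed every node as its vector of
# distances to the landmarks (two landmarks chosen by hand here).
landmarks = [0, 5]
embedding = {l: bfs_distances(graph, l) for l in landmarks}

def estimate_distance(u, v):
    """Triangle-inequality upper bound: min over landmarks of d(u,l) + d(l,v)."""
    return min(embedding[l][u] + embedding[l][v] for l in landmarks)

print(estimate_distance(0, 3))  # exact here: landmark 0 lies on a shortest path
print(estimate_distance(1, 4))  # overestimate: no landmark lies between 1 and 4
```

The estimate is exact precisely when some landmark lies on a shortest path between the queried pair, which is why the selection strategies compared in the work above favor central, mutually distant landmarks over random ones.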
Abstract:
A learning-based framework is proposed for estimating human body pose from a single image. Given a differentiable function that maps from pose space to image feature space, the goal is to invert the process: estimate the pose given only image features. The inversion is an ill-posed problem because the inverse mapping is one-to-many. Hence multiple solutions exist, and it is desirable to restrict the solution space to a smaller subset of feasible solutions. For example, not all human body poses are feasible, due to anthropometric constraints. Since the space of feasible solutions may not admit a closed-form description, the proposed framework exploits machine learning techniques to learn an approximation that is smoothly parameterized over such a space; one such technique is Gaussian Process Latent Variable Modelling. Scaled conjugate gradient is then used to find the best matching pose in the space of feasible solutions for a given input image. The formulation allows easy incorporation of various constraints, e.g. temporal consistency and anthropometric constraints. The performance of the proposed approach is evaluated on the task of upper-body pose estimation from silhouettes and compared with the Specialized Mapping Architecture. In experiments with synthetic data, the estimation accuracy of the Specialized Mapping Architecture is at least one standard deviation worse than that of the proposed approach. In experiments with real video of humans performing gestures, the proposed approach produces qualitatively better estimation results.
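The core difficulty described above, inverting a one-to-many pose-to-feature map by searching only over feasible solutions, can be illustrated with a deliberately tiny stand-in. Here f(x) = x² plays the role of the differentiable pose-to-feature map (one-to-many to invert, since f(x) = f(-x)), the half-line x ≥ 0 stands in for the feasible pose space, and plain projected gradient descent stands in for scaled conjugate gradient over a GPLVM latent space; none of this is the paper's actual model.

```python
def f(x):
    """Toy differentiable 'pose -> feature' map; its inverse is one-to-many
    because f(x) == f(-x)."""
    return x * x

def invert(target, x0, steps=200, lr=0.05):
    """Minimise (f(x) - target)^2 by gradient descent, projecting each
    iterate onto the feasible set x >= 0 (the 'anthropometric constraint')."""
    x = x0
    for _ in range(steps):
        grad = 2.0 * (f(x) - target) * 2.0 * x  # chain rule on (f(x) - t)^2
        x = max(0.0, x - lr * grad)             # projected gradient step
    return x

# Both +2 and -2 map to the feature value 4; the projection keeps the
# search inside the feasible half-line, so the feasible solution +2 is
# recovered even from a poor starting point.
print(invert(4.0, x0=0.5))
```

The projection step is the simplest possible encoding of a feasibility constraint; the framework in the abstract instead optimizes in a learned latent space whose points are feasible by construction, which serves the same purpose.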