987 results for Aggregation process
Abstract:
Technical silica production usually requires high temperatures and extreme pH values. In nature, by contrast, siliceous sponges in particular have the extraordinary ability to synthesize their silica skeleton, which consists of individual so-called spicules, enzymatically by means of the protein silicatein.

Inside the spicules, in the central canal, lies the axial filament, which is built up mainly of silicatein-α. Antibody staining and electron microscopic analyses showed that silicatein can be detected in silica-filled cell organelles (silicasomes). Via these vacuoles the enzyme and the silicic acid are transported out of the cell to the spicules in the extracellular space, where the spicules reach their final length and thickness. For the first time it could be demonstrated that recombinantly produced silicatein-α acts both as a silica polymerase and as a silica esterase. The enzymatic polymerization of silicic acid was followed by mass spectrometry. By cleaving the ester bond of the artificial substrate bis(p-aminophenoxy)-dimethylsilane, it was possible to determine kinetic parameters of the silica-esterase activity of the recombinant silicatein.

The siliceous spicules of the sponge class Hexactinellida are among the largest biogenic silica structures on Earth. Spicule extracts from the sponge classes Demospongiae (S. domuncula) and Hexactinellida (M. chuni) were compared with each other in order to demonstrate the potential existence of silicatein or silicatein-like molecules and the associated proteolytic activity. Biochemical analyses showed that the isolated 27 kDa polypeptide from Monorhaphis shares several features with the silicateins of the demosponges, including its size and its proteinase activity.

To clarify whether the axial filament itself contributes to shaping the skeletal elements, a new mild extraction procedure was introduced. This procedure allowed the solubilization of native silicatein from the spicules. The isolated silicateins were present as monomers (24 kDa) that formed dimers through non-covalent bonds. In addition, tetramers (95 kDa) and hexamers (135 kDa) were detected by PAGE gel electrophoresis. The monomers showed considerable proteolytic activity, which increased further during the polymerization phase of the protein. Light microscopy and electron microscopy (TEM) revealed the assembly of the proteins into filament-like structures. The self-assembly of the silicatein-α monomers appears to provide a basis for shape and pattern formation of the growing spicules.

To clarify the role of the recently discovered protein silintaphin-1, a strong interaction partner of silicatein-α, during biosilicification, assembly experiments with the recombinant proteins were carried out in vitro, and their effect on biosilica synthesis was investigated. Electron microscopic analyses showed that recombinant silicatein-α forms randomly distributed aggregates, whereas co-incubation of both proteins (molar ratio 4:1) leads, via fractal-like structures, to filaments. The enzymatic activity of silicatein-α-mediated biosilica synthesis also increased 5.3-fold in the presence of silintaphin-1.
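The kinetic parameters mentioned for the silica-esterase activity are not specified in the abstract; for orientation only, a standard assumption for single-substrate enzyme assays of this kind is the Michaelis-Menten form (this is not stated in the source):

```latex
v = \frac{V_{\max}\,[S]}{K_{M} + [S]}
```

with [S] the concentration of the artificial substrate, V_max the maximum turnover rate, and K_M the Michaelis constant.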
Abstract:
Understanding the origins of the mechanical properties of gel systems and their correlation with the microstructure is of great scientific and industrial interest. In general, colloidal gels can be classified into chemical and physical gels, according to the lifetime of the network bonds. The characteristic differences in gelation dynamics can be observed with rheological measurements. As a model system, a mixture of sodium silicate and low-concentration sulfuric acid was used. Nano-sized silica particles grow and aggregate into a system-spanning gel network. The influence of the finite solubility of silica at high pH on the gelation was studied with classical and piezo rheometers. The storage modulus of the gel grew logarithmically with time, with two distinct growth laws. A relaxation at low frequency was observed in the frequency-dependent measurements. I interpret these two behaviors as signs of structural rearrangements due to the finite solubility of silica at high pH. The reaction equilibrium between formation and dissolution of bonds leads to a finite lifetime of the bonds and to behavior similar to that of a physical gel. The frequency dependence was more pronounced for lower water concentrations, higher temperatures, and shorter reaction times. With two relaxation models, I deduced characteristic relaxation times from the experimental data. Besides rheology, the evolution of silica gels at high pH on different length scales was studied by NMR and dynamic light scattering. The results revealed that the primary particles already existed in the sodium silicate and aggregated after the mixing of the reactants due to a chemical reaction. Throughout the aggregation process the system was in its chemical reaction equilibrium. Applying large oscillatory shear strain to the gel allowed the gel modulus to be modified. The effect of shear and shear history on the rheological properties of the gel was investigated. The storage modulus of the final gel increased with increasing strain. This behavior can be explained by (i) shear-induced aggregate compaction and (ii) a combination of bond breakage and formation of new bonds. In comparison with the physical-gel-like behavior of the silica gel at high pH, typical chemical gel features were exhibited by other gels formed from various chemical reactions. The influence of chemical structure modification on gelation was investigated with the piezo-rheometer. External stimuli can be applied to tune the mechanical properties of the gel systems.
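The two relaxation models used to extract characteristic times are not named above; as a minimal sketch of the underlying idea, a single-mode Maxwell element with bond lifetime τ reproduces the qualitative behaviour of a transient network (an illustrative assumption, not necessarily the models used in the thesis):

```latex
G'(\omega) = G_{0}\,\frac{\omega^{2}\tau^{2}}{1+\omega^{2}\tau^{2}},
\qquad
G''(\omega) = G_{0}\,\frac{\omega\tau}{1+\omega^{2}\tau^{2}}
```

For ωτ ≫ 1 the network responds elastically like a chemical gel with plateau modulus G₀; for ωτ ≪ 1 the bonds have time to rearrange and the system relaxes like a physical gel, matching the low-frequency relaxation described above.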
Abstract:
Certain fatty acid N-alkyl amides from the medicinal plant Echinacea activate cannabinoid type-2 (CB2) receptors. In this study we show that the CB2-binding Echinacea constituents dodeca-2E,4E-dienoic acid isobutylamide (1) and dodeca-2E,4E,8Z,10Z-tetraenoic acid isobutylamide (2) form micelles in aqueous medium. In contrast, micelle formation is not observed for undeca-2E-ene-8,10-diynoic acid isobutylamide (3), which does not bind to CB2, or structurally related endogenous cannabinoids, such as arachidonoyl ethanolamine (anandamide). The critical micelle concentration (CMC) range of 1 and 2 was determined by fluorescence spectroscopy as 200-300 and 7400-10000 nM, respectively. The size of premicelle aggregates, micelles, and supermicelles was studied by dynamic light scattering. Microscopy images show that compound 1, but not 2, forms globular and rod-like supermicelles with radii of approximately 75 nm. The self-assembling N-alkyl amides partition between themselves and the CB2 receptor, and aggregation of N-alkyl amides thus determines their in vitro pharmacological effects. Monte Carlo simulations of the aggregation process, based on molecular mechanics, support the experimental data, suggesting that both 1 and 2 can readily aggregate into premicelles, but only 1 spontaneously assembles into larger aggregates. These findings have important implications for biological studies with this class of compounds.
Abstract:
Forest fires are the main cause of tree mortality in Mediterranean Europe and constitute the most serious threat to Spanish forest ecosystems. In the Valencian Community, around one hundred surveillance vehicles are deployed daily, and their distribution relies mainly on a fire risk index calculated from meteorological conditions. The thesis focuses on the design and validation of a new integrated forest fire risk index, specially adapted to the Mediterranean region and intended to support decision making in the daily distribution of forest fire surveillance resources. The index adopts the integrated risk approach introduced in the last decade, which includes two risk components: ignition danger and vulnerability. The former represents the probability that a fire starts and the potential danger of its spread, while vulnerability takes into account the characteristics of the territory and the potential effects of fire on it. To calculate the potential danger, indicators have been identified relating to the natural and human agents that cause fires, historical occurrence, and the condition of fuels, which is closely related to the weather and the species present. Regarding vulnerability, indicators have been used that represent both the potential effects of the fire (fire behaviour, defence infrastructure) and the characteristics of the terrain (value, regeneration capacity…). All these indicators make up a hierarchical structure in which, following the recommendations of the European Commission for fire risk indices, indicators representing both short-term and long-term risk have been included. The final value of the index is calculated by progressively aggregating the components that form each level of the hierarchical structure of the index and integrating them at the end. Since multicriteria decision techniques are especially suited to problems based on hierarchical structures, the TOPSIS method has been applied to obtain the final integration of the model. Expert opinion has been introduced into the model through the weighting of each component of the index. The AHP method has been used to obtain the weights from each expert and to integrate them into a single weight for each indicator. For the validation of the index, Generalized Estimating Equations models have been used, which take into account possibly correlated responses. The validation used official data on fires that occurred during the period 1994 to 2003, referenced to a 10x10 km grid, with fire occurrence and burned area as dependent variables. The validation results show good performance of the occurrence danger subindex, with a high degree of correlation between the subindex and occurrence, a good fit of the logistic model, and good discriminating power. The vulnerability subindex, on the other hand, showed no significant correlation between its values and the burned area, which does not rule out its validity, since some of its components are subjective in nature and independent of the area burned.
Overall, the index performs well for distributing surveillance resources according to ignition danger. Nevertheless, new lines of research that could lead to an improvement of the overall fit of the index are identified and discussed. In particular, there is a need to study in greater depth the apparent correlation that exists in the province of Valencia between the forested area occupied by each 10 km grid cell and its fire risk, which suggests that the smaller the forested area, the higher the fire risk. Other aspects to investigate are the sensitivity of the weights of each component and the introduction of factors relating to potential suppression resources into the vulnerability subindex. Summary: Forest fires are the main cause of tree mortality in Mediterranean Europe and the most serious threat to Spanish forests. In the Spanish autonomous region of Valencia, the forest administration deploys a mobile fleet of 100 surveillance vehicles in forest land, whose allocation is based on a meteorological index of wildland fire risk. This thesis focuses on the design and validation of a new Integrated Wildland Fire Risk Index proposed for the efficient allocation of vehicles and specially adapted to Mediterranean conditions. Following the integrated risk approaches developed in the last decade, the index includes two risk components: Wildland Fire Danger and Vulnerability. The former represents the probability that a fire ignites and the potential hazard of fire propagation (spread danger), while vulnerability accounts for the characteristics of the land and the potential effects of fire. To calculate the Wildland Fire Danger, indicators of ignition and spread danger have been identified, including human and natural occurrence agents, fuel conditions, historical occurrence and spread rate. Regarding vulnerability, indicators have been used that represent both the potential effects of fire (fire behaviour, defence infrastructure) and the characteristics of the terrain (value, regeneration capacity…). These indicators make up the hierarchical structure of the index in which, following the criteria of the European Commission, both short-term and long-term indicators have been included. Integration consists of the progressive aggregation of the components that make up every level of the risk index and, after that, the integration of these levels to obtain a single value for the index. As multicriteria methods are designed to deal with hierarchically structured problems and with situations in which conflicting goals prevail, the TOPSIS method is used in the integration of components. Multicriteria methods were also used to incorporate expert opinion into the weighting of indicators and to carry out the aggregation into the final index. The Analytic Hierarchy Process method was used to aggregate experts' opinions on each component into a single value. Generalized Estimating Equations, which account for possibly correlated responses, were used to validate the index. Historical records of daily occurrence for the period from 1994 to 2003, referenced to a 10x10 km grid cell, as well as the extent of the fires, were the dependent variables. The validation results showed good performance of the Wildland Fire Danger component, with a high degree of correlation between Danger and occurrence, a good fit of the logistic model used, and good discrimination power.
The vulnerability component did not show a significant correlation between its values and burned area, which does not mean the index is invalid, given the subjective character of some of its components, which are independent of the area burned. Overall, the index could be used to optimize the allocation of prevention resources. Nevertheless, new lines of research are identified and discussed to improve the overall performance of the index. More specifically, the need to study the inverse relationship between the value of the Wildland Fire Danger component and the forested area of each 10 km cell is set out. Other points to be researched are the sensitivity of the index components' weights and the possibility of including indicators related to firefighting resources in the vulnerability component.
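The abstract names TOPSIS for the final integration but gives no computational detail. The sketch below is a generic TOPSIS aggregation; the three example grid cells, the indicator values and the weights are hypothetical, and the thesis implementation may differ:

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Minimal TOPSIS: rank alternatives (rows) on criteria (columns).

    scores  : (n_alternatives, n_criteria) raw indicator values
    weights : criterion weights summing to 1 (e.g. AHP-derived)
    benefit : True for 'larger is better' criteria, False for costs
    """
    # Vector-normalise each criterion column, then apply the weights.
    norm = scores / np.linalg.norm(scores, axis=0)
    v = norm * weights

    # Ideal (best) and anti-ideal (worst) points per criterion.
    best = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))

    # Closeness coefficient: 1 = at the ideal point, 0 = at the anti-ideal.
    d_best = np.linalg.norm(v - best, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)

# Hypothetical example: three 10x10 km cells scored on three danger indicators.
cells = np.array([[0.8, 0.3, 0.5],
                  [0.4, 0.9, 0.2],
                  [0.6, 0.6, 0.7]])
weights = np.array([0.5, 0.3, 0.2])
print(topsis(cells, weights, benefit=np.array([True, True, True])))
```

Higher closeness values would mark the cells to prioritise when allocating surveillance vehicles.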
Abstract:
The dynamics of the survival of recruiting fish are analyzed as evolving random processes of aggregation and mortality. The analyses draw on recent advances in the physics of complex networks and, in particular, the scale-free degree distribution arising from growing random networks with preferential attachment of links to nodes. In this study, simulations were conducted in which recruiting fish 1) were subjected to mortality by using alternative mortality encounter models and 2) aggregated according to random encounters (two schools randomly encountering one another join into a single school) or preferential attachment (the probability of a successful aggregation of two schools is proportional to the school sizes). The simulations started from either a “disaggregated” (all schools comprised a single fish) or an aggregated initial condition. Results showed that the school-size distribution with preferential attachment evolved toward a scale-free distribution, whereas with random attachment it evolved toward an exponential distribution. Preferential attachment strategies performed better than random attachment strategies in terms of recruitment survival over time when mortality encounters were weighted toward schools rather than toward individual fish. Mathematical models were developed whose solutions (either analytic or numerical) mimicked the simulation results. The resulting models include both Beverton-Holt and Ricker-like recruitment, which predict recruitment as a function of initial mean school size as well as initial stock size. Results suggest that school-size distributions during recruitment may provide information on recruitment processes. The models also provide a template for expanding both theoretical and empirical recruitment research.
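As a rough illustration of the two attachment rules compared above, the following toy coalescence sketch grows schools either by size-proportional ("preferential") or uniform ("random") encounters; mortality is omitted and all parameters are invented, so this is not the authors' simulation:

```python
import random

def aggregate(schools, steps, preferential=True):
    """Toy coalescence of fish schools (sizes in `schools`).

    preferential=True : pick schools with probability proportional to size
    preferential=False: pick schools uniformly at random
    """
    schools = list(schools)
    for _ in range(steps):
        if len(schools) < 2:
            break
        if preferential:
            a, b = random.choices(range(len(schools)), weights=schools, k=2)
        else:
            a, b = random.sample(range(len(schools)), 2)
        if a == b:
            continue
        # Two schools that encounter one another join into a single school.
        schools[a] += schools[b]
        schools.pop(b)
    return schools

# Start from the "disaggregated" condition: every school is a single fish.
sizes = aggregate([1] * 10_000, steps=9_000, preferential=True)
```

Tabulating the final `sizes` under the two rules shows the heavy-tailed versus exponential-like school-size distributions referred to in the abstract.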
Abstract:
Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore represent a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no one best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment are important in allocating scarce resources and making sound decisions. Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in the analysis, prioritisation and decision making process. This is achieved through the use of practical, relatively simple tools, integrated in a modular, flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that is an extension of methodologies for modelling data flows in information systems. This process divides the infrastructure management process over time into self-contained modules that are based on a particular set of activities, the information flows between which are defined by the interfaces and relationships between them. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, through using individual infrastructure assets and their related projects as the basis of the network analysis process. It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives, and therefore a number of high level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two-stage approach that rationalises then weights objectives, using a paired comparison process, ensures that the objectives required to be met are both kept to the minimum number required and are fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, utility functions being proposed where there is risk, or a trade-off situation applies. Variability is considered important in the infrastructure life cycle, the approach used being based on analytical principles but incorporating randomness in variables where required.
The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate way of analysis, provided boundary conditions and requirements for linkages to other modules are met. Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects, and consequences of change over the life cycle, as well as variability and the other matters discussed above. It has also highlighted the requirement to use judgment where required, and for organisations that own and manage infrastructure to retain intellectual knowledge regarding that infrastructure. It is considered that the methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, prioritisation of a number of projects, and identification of the principal issues in the infrastructure life cycle.
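The paired-comparison weighting step described above can be illustrated with a minimal sketch; the three outcome goals and preference values below are invented, and the thesis may use a different scoring or utility scheme:

```python
import numpy as np

def paired_comparison_weights(prefs):
    """Derive objective weights from a pairwise preference matrix.

    prefs[i, j] = 1 if objective i is preferred to j, 0.5 if equal, 0 otherwise.
    Weights are the normalised row sums (a simple scoring variant; an
    AHP-style eigenvector weighting would be an alternative).
    """
    scores = prefs.sum(axis=1)
    return scores / scores.sum()

# Hypothetical example with three outcome goals: cost, safety, service level.
prefs = np.array([[0.5, 0.0, 1.0],
                  [1.0, 0.5, 1.0],
                  [0.0, 0.0, 0.5]])
print(paired_comparison_weights(prefs))   # -> approximately [0.333, 0.556, 0.111]
```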
Abstract:
Extracting and aggregating the relevant event records relating to an identified security incident from the multitude of heterogeneous logs in an enterprise network is a difficult challenge. Presenting the information in a meaningful way is an additional challenge. This paper looks at solutions to this problem by first identifying three main transforms: log collection, correlation, and visual transformation. Having identified that the CEE project will address the first transform, this paper focuses on the second, while the third is left for future work. To aggregate by correlating event records we demonstrate the use of two correlation methods, simple and composite. These make use of a defined mapping schema and confidence values to dynamically query the normalised dataset and to constrain result events to within a time window. Doing so improves the quality of results, which is required for the iterative re-querying process being undertaken. Final results of the process are output as nodes and edges suitable for presentation as a network graph.
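As a rough sketch of the second transform, the snippet below links normalised event records to a seed event when they share an attribute and fall within a time window, and emits nodes and edges for a network graph; the field names stand in for the paper's mapping schema, which is not reproduced here:

```python
from datetime import timedelta

def correlate(events, seed, window=timedelta(minutes=5)):
    """Very rough sketch of time-windowed correlation of normalised events.

    Links an event to the seed when they share one of a few attributes
    (field names here are illustrative, not the paper's schema) and occur
    within `window` of the seed event.
    """
    nodes, edges = {seed["id"]: seed}, []
    for ev in events:
        close_in_time = abs(ev["time"] - seed["time"]) <= window
        shared = {k for k in ("src_ip", "user", "host")
                  if ev.get(k) and ev.get(k) == seed.get(k)}
        if close_in_time and shared:
            nodes[ev["id"]] = ev
            edges.append((seed["id"], ev["id"], ",".join(sorted(shared))))
    return nodes, edges   # suitable for drawing as a network graph
```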
Abstract:
Many cell types form clumps or aggregates when cultured in vitro through a variety of mechanisms including rapid cell proliferation, chemotaxis, or direct cell-to-cell contact. In this paper we develop an agent-based model to explore the formation of aggregates in cultures where cells are initially distributed uniformly, at random, on a two-dimensional substrate. Our model includes unbiased random cell motion, together with two mechanisms which can produce cell aggregates: (i) rapid cell proliferation, and (ii) a biased cell motility mechanism where cells can sense other cells within a finite range, and will tend to move towards areas with higher numbers of cells. We then introduce a pair-correlation function which allows us to quantify aspects of the spatial patterns produced by our agent-based model. In particular, these pair-correlation functions are able to detect differences between domains populated uniformly at random (i.e. at the exclusion complete spatial randomness (ECSR) state) and those where the proliferation and biased motion rules have been employed - even when such differences are not obvious to the naked eye. The pair-correlation function can also detect the emergence of a characteristic inter-aggregate distance which occurs when the biased motion mechanism is dominant, and is not observed when cell proliferation is the main mechanism of aggregate formation. This suggests that applying the pair-correlation function to experimental images of cell aggregates may provide information about the mechanism associated with observed aggregates. As a proof of concept, we perform such analysis for images of cancer cell aggregates, which are known to be associated with rapid proliferation. The results of our analysis are consistent with the predictions of the proliferation-based simulations, which supports the potential usefulness of pair correlation functions for providing insight into the mechanisms of aggregate formation.
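A pair-correlation function of the kind described above can be estimated directly from cell coordinates. The sketch below is a generic radial estimator for a rectangular two-dimensional domain, ignoring edge corrections; it illustrates the idea rather than the paper's exact implementation:

```python
import numpy as np

def pair_correlation(points, domain_area, bin_width, r_max):
    """Estimate a radial pair-correlation function g(r) for 2D point data.

    points      : (n, 2) array of cell centre coordinates
    domain_area : area of the (rectangular) domain
    bin_width   : width of the distance bins
    r_max       : largest separation considered
    Returns bin centres and g(r); g = 1 indicates complete spatial randomness,
    g > 1 at short range indicates aggregation. Edge effects are ignored.
    """
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                 # unique pairs only
    edges = np.arange(0.0, r_max + bin_width, bin_width)
    counts, _ = np.histogram(d, bins=edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    density = n / domain_area
    # Expected pair counts in each annulus for a random (CSR) pattern.
    expected = density * n / 2 * 2 * np.pi * centres * bin_width
    return centres, counts / expected
```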
Abstract:
With the extensive use of rating systems on the web, and their significance in users' decision-making processes, the need for more accurate aggregation methods has emerged. The Naïve aggregation method, using the simple mean, is no longer adequate for providing accurate reputation scores for items [6]; hence, several studies have been conducted in order to provide more accurate alternative aggregation methods. Most current reputation models do not consider the distribution of ratings across the different possible rating values. In this paper, we propose a novel reputation model, which generates more accurate reputation scores for items by deploying the normal distribution over ratings. Experiments show promising results for our proposed model over state-of-the-art ones on sparse and dense datasets.
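The paper's formula is not given in the abstract; the sketch below shows one plausible reading of "deploying the normal distribution over ratings", in which each rating level is down-weighted according to how far it lies from the bulk of the distribution. It is an illustration only, not the proposed model:

```python
import numpy as np

def weighted_reputation(ratings):
    """Illustrative only: weight each rating level by a normal density centred
    on the observed mean, so levels far from the bulk of the ratings
    contribute less than in a plain average. This is a plausible reading of
    the abstract, not the authors' published formula.
    """
    ratings = np.asarray(ratings, dtype=float)
    mu, sigma = ratings.mean(), ratings.std() or 1.0
    levels, counts = np.unique(ratings, return_counts=True)
    dens = np.exp(-0.5 * ((levels - mu) / sigma) ** 2)
    weights = counts * dens
    return float((levels * weights).sum() / weights.sum())

print(weighted_reputation([5, 5, 4, 5, 1]))   # about 4.55; the plain mean is 4.0
```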
Abstract:
Process view technology is attracting increasing attention in modern business process management, as it enables the customisation of business process representation. This capability helps improve privacy protection, authority control, flexible display, etc., in business process modelling. One approach to generating process views is to allow users to construct an aggregate over their underlying processes. However, most aggregation approaches rely on the strong assumption that business processes are always well-structured, which is overly strict for BPMN. Aiming to build process views for non-well-structured BPMN processes, this paper investigates the characteristics of BPMN structures, tasks, events, gateways, etc., and proposes a formal process view aggregation approach to facilitate BPMN process view creation. A set of consistency rules and construction rules is defined to regulate the aggregation and guarantee order preservation and structural and behavioural correctness, and a novel aggregation technique, called EP-Fragment, is developed to tackle non-well-structured BPMN processes.
Abstract:
The linear polypeptide antibiotic alamethicin is known to form channels in artificial lipid membranes. Synthetic 13- and 17-residue alamethicin fragments, labelled with a fluorescent dansyl group at the N-terminus, have been shown to translocate divalent cations across phospholipid membranes and to uncouple oxidative phosphorylation in rat liver mitochondria, in a manner analogous to the parent peptides. From studies of the aqueous phase aggregation behavior of the peptides, as well as their interaction with rat liver mitochondria, it is concluded that the interaction of the peptides with membranes is a complex process, probably involving both aqueous and membrane phase aggregation.
Abstract:
Aluminium exposure has been shown to result in aggregation of the microtubule-associated protein tau in vitro. In the light of recent observations that the native random structure of tau protein is maintained in its monomeric and dimeric states as well as in the paired helical filaments characteristic of Alzheimer's disease, it is likely that factors playing a causative role in neurofibrillary pathology would not drastically alter the native conformation of tau protein. We have studied the interaction of tau protein with aluminium using circular dichroism (CD) and 27Al NMR spectroscopy. The CD studies revealed a five-fold increase in the observed ellipticity of the tau-aluminium assembly. The increase in ellipticity was not associated with a change in the general conformation of the protein and was most likely due to an aggregation of the tau protein induced by aluminium. 27Al NMR spectroscopy confirmed the binding of aluminium to tau protein. Hyperphosphorylation of tau in Alzheimer's disease is known to be associated with defective microtubule assembly in this condition. Abnormally phosphorylated tau exists in a polymerized form in the paired helical filaments (PHF) which constitute the neurofibrillary tangles found in Alzheimer's disease. While it is hypothesized that its altered biophysical characteristics render abnormally phosphorylated tau resistant to proteolysis, causing the formation of stable deposits, the sequence of events resulting in the polymerization of tau is little understood, as are the additional factors or modifications required for this process. Based on the results of our spectroscopic studies, a model for the sequence of events occurring in neurofibrillary pathology is proposed.
Abstract:
Owing to widespread applications, the synthesis and characterization of silver nanoparticles is currently attracting considerable attention. Increasing environmental concerns over chemical synthesis routes have resulted in attempts to develop biomimetic approaches. One of them is synthesis using plant parts, which eliminates the elaborate process of maintaining microbial cultures and is often found to be kinetically more favourable than other bioprocesses. The present study investigates the effect of process variables such as reductant concentration, reaction pH, mixing ratio of the reactants, and interaction time on the morphology and size of silver nanoparticles synthesized using aqueous extract of Azadirachta indica (Neem) leaves. The formation of crystalline silver nanoparticles was confirmed using X-ray diffraction analysis. By means of UV spectroscopy and scanning and transmission electron microscopy, it was observed that the morphology and size of the nanoparticles were strongly dependent on the process parameters. Within a 4 h interaction period, nearly spherical nanoparticles below 20 nm in size were produced. On increasing the interaction time (ageing) to 66 days, both aggregation and shape anisotropy (ellipsoidal, polyhedral and capsular) of the particles increased. In the alkaline pH range, the stability of the cluster distribution increased, with a reduced tendency for aggregation of the particles. It can be inferred from the study that fine tuning the bioprocess parameters will enhance the possibility of obtaining nano-products tailor-made for particular applications.
Towards an Understanding of the Influence of Sedimentation on Colloidal Aggregation by Peclet Number
Abstract:
The Peclet number is a useful index for estimating the importance of sedimentation compared to Brownian motion. However, the choice of the characteristic length scale for evaluating the Peclet number is rather critical, because the diffusion length increases as the square root of time whereas the drift length grows linearly with time. Our Brownian dynamics simulations show that the influence of sedimentation on coagulation decreases as the dispersion volume fraction increases. Therefore, using a fixed length, such as the particle diameter, as the characteristic length scale for evaluating the Peclet number is not a good choice when dealing with the influence of sedimentation on coagulation. The simulations demonstrated that environmental factors in the coagulation process, such as the dispersion volume fraction and size distribution, should be taken into account for a more reasonable evaluation of the influence of sedimentation.
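For orientation, a common textbook form of the Peclet number for a sphere of radius a (not necessarily the exact definition adopted in this work) compares the Stokes settling velocity U with the Stokes-Einstein diffusivity D over a characteristic length L:

```latex
\mathrm{Pe} = \frac{U L}{D},
\qquad
U = \frac{2\,\Delta\rho\, g\, a^{2}}{9\,\eta},
\qquad
D = \frac{k_{B} T}{6\pi\eta a}
```

Because U and D are fixed by the particle and the fluid, the value of Pe hinges entirely on the choice of L, which is exactly the issue raised in the abstract.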
Abstract:
The supra-molecular self-assembly of peptides and proteins is a process which underlies a range of normal and aberrant biological pathways in nature, but one which remains challenging to monitor in a quantitative way. We discuss the experimental details of an approach to this problem which involves the direct measurement in vitro of mass changes of the aggregates as new molecules attach to them. The required mass sensitivity can be achieved by the use of a quartz crystal transducer-based microbalance. The technique should be broadly applicable to the study of protein aggregation, as well as to the identification and characterisation of inhibitors and modulators of this process.
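The mass sensitivity referred to above is, for a quartz crystal microbalance, conventionally described by the Sauerbrey relation (quoted here for context; the abstract itself does not state it):

```latex
\Delta f = -\frac{2 f_{0}^{2}}{A\sqrt{\rho_{q}\,\mu_{q}}}\;\Delta m
```

where f₀ is the fundamental resonance frequency of the crystal, A the active electrode area, and ρ_q, μ_q the density and shear modulus of quartz; rigid mass added as molecules attach to the surface-bound aggregates lowers the resonance frequency in proportion to Δm.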