968 results for decomposition of polymeric precursor method (DPP)
Abstract:
An asymptotic analysis of the Eberstein-Glassman kinetic mechanism for the thermal decomposition of hydrazine is carried out. It is shown that at temperatures near 800 K and near 1000 K, and for hydrazine molar fractions of the order of unity to 10⁻², the entire kinetics reduces to a single, overall reaction. Characteristic times for the chemical relaxation of all active, intermediate species produced in the decomposition, and for the overall reaction, are obtained. Explicit expressions for the overall reaction rate and stoichiometry are given as functions of temperature, total molar concentration (or pressure), and hydrazine molar fraction. Approximate, patched expressions can then be obtained for temperatures between 750 and 1000 K and hydrazine molar fractions between 1 and 10⁻³.
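For orientation, a minimal sketch of what a single overall description of this kind looks like. The two limiting stoichiometries below are the standard ammonia-producing and complete-decomposition channels of hydrazine (a textbook fact, not taken from this paper), and the rate law is a generic Arrhenius form in the quantities the abstract names; the constants and exponents are placeholders, not the paper's fitted expressions:

```latex
% Limiting overall stoichiometries for hydrazine decomposition; a
% temperature-dependent overall stoichiometry lies between such limits.
\begin{align}
  3\,\mathrm{N_2H_4} &\longrightarrow 4\,\mathrm{NH_3} + \mathrm{N_2},\\
  \mathrm{N_2H_4} &\longrightarrow \mathrm{N_2} + 2\,\mathrm{H_2}.
\end{align}
% Generic one-step overall rate in temperature T, total molar
% concentration c, and hydrazine molar fraction X
% (A, E_a, n, m are placeholders):
\begin{equation}
  w \;=\; A\, e^{-E_a/RT}\, c^{\,n} X^{\,m}.
\end{equation}
```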
Abstract:
The graphic parallel has been, and continues to be, an exceptional method for getting to know, learning about, investigating, and disseminating architectural and urban form. Here we attempt to outline the principles that govern its elaboration and to take a brief look at some of the milestones of its intense history, which would deserve more unhurried attention.
Abstract:
In this paper we define the notion of an axiom dependency hypergraph, which explicitly represents how axioms are included into a module by the algorithm for computing locality-based modules. A locality-based module of an ontology corresponds to a set of connected nodes in the hypergraph, and the atoms of an ontology correspond to strongly connected components. Collapsing the strongly connected components into single nodes yields a condensed hypergraph that comprises a representation of the atomic decomposition of the ontology. To speed up the condensation of the hypergraph, we first reduce its size by collapsing the strongly connected components of its graph fragment using a linear-time graph algorithm. This approach significantly reduces the time needed for computing the atomic decomposition of an ontology. We provide an experimental evaluation for computing the atomic decomposition of large biomedical ontologies. We also demonstrate a significant improvement in the time needed to extract locality-based modules from an axiom dependency hypergraph and its condensed version.
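Since the condensation step is central here, a minimal sketch may help (plain Python with a textbook Tarjan SCC pass; illustrative only, not the authors' implementation): collapse each strongly connected component of the graph fragment into a single node and keep only the edges between distinct components.

```python
# Sketch: condense a directed graph by collapsing its strongly connected
# components (SCCs) into single nodes, as in the step described above.
from collections import defaultdict

def tarjan_scc(graph):
    """graph: dict node -> iterable of successors. Returns a list of SCCs."""
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            scc = []
            while True:
                w = stack.pop(); on_stack.discard(w); scc.append(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in list(graph):
        if v not in index:
            visit(v)
    return sccs

def condense(graph):
    """Collapse each SCC into one node; return the condensed graph."""
    comp = {}
    for i, scc in enumerate(tarjan_scc(graph)):
        for v in scc:
            comp[v] = i
    condensed = defaultdict(set)
    for v, succs in graph.items():
        for w in succs:
            if comp[v] != comp[w]:      # drop intra-component edges
                condensed[comp[v]].add(comp[w])
    return dict(condensed)

# Tiny usage example: a, b, c form a cycle (one SCC); d stands alone.
g = {"a": ["b"], "b": ["c"], "c": ["a", "d"], "d": []}
print(condense(g))   # {1: {0}}: the 3-cycle is one node with an edge to d's
```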
Abstract:
Directed hypergraphs are an intuitive modelling formalism that has been used in problems related to propositional logic, relational databases, computational linguistics, and machine learning. They have also been presented as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot naturally be modelled as binary relations; in this context, they are known as hyper-networks. A directed hypergraph is a generalization of a directed graph suitable for representing many-to-many relationships: while an edge in a directed graph defines a relation between two nodes of the graph, a hyperedge in a directed hypergraph defines a relation between two sets of nodes.
Strong connectivity is an equivalence relation that induces a partition of the set of nodes of a directed hypergraph into strongly-connected components. These components can be collapsed into single nodes, so the size of the original hypergraph can be reduced significantly if the strongly-connected components contain many nodes. This approach can contribute to a better understanding of how the nodes of a hypergraph are connected, in particular when the hypergraph is large. In the case of directed graphs, there are efficient algorithms for computing the strongly-connected components of large graphs. For instance, it has been shown that the macroscopic structure of the World Wide Web can be represented as a "bow-tie" diagram in which more than 70% of the nodes are distributed into three large sets, one of them a large strongly-connected component. This particular structure has also been observed in complex networks in other fields such as biology. Similar studies could not be conducted for directed hypergraphs because no algorithm existed for computing the strongly-connected components of such hypergraphs.
In this thesis, we investigate how to compute the strongly-connected components of directed hypergraphs. We present two new algorithms and show their correctness and computational complexity. One algorithm is inspired by Tarjan's algorithm for directed graphs. The second algorithm follows a simpler approach, based on the fact that two nodes that are strongly connected reach exactly the same set of nodes; in other words, the connected component of each node is the same. Both algorithms are empirically evaluated to compare their performance. To this end, we have produced a selection of random directed hypergraphs inspired by well-known random graph models such as Erdős-Rényi, Newman-Watts-Strogatz, and Barabási-Albert. Several optimizations of both algorithms were implemented and analysed in the thesis; in particular, collapsing the strongly-connected components of the directed graph that can be built by removing certain complex hyperedges from the original directed hypergraph notably improves the running times on several of the hypergraphs used in the evaluation.
Besides the application examples mentioned earlier, directed hypergraphs have also been employed in the field of knowledge representation. In particular, they have been used to compute the modules of an ontology. An ontology is defined as a collection of axioms that provides a formal specification of a set of terms and their relationships, and a module is a subset of an ontology that completely captures the meaning of certain terms as defined in the ontology. We focus on the modules computed using the notion of syntactic locality. As ontologies can be very large, the computation of modules facilitates the reuse and maintenance of these ontologies. Analysing all modules of an ontology, however, is in general not feasible, as the number of modules grows exponentially in the number of terms and axioms of the ontology. Nevertheless, the modules can be represented succinctly using the atomic decomposition of an ontology. Using this representation, an ontology can be partitioned into atoms, which are maximal sets of axioms that co-occur in every module. The atomic decomposition is then defined as a directed graph in which each node corresponds to an atom and each edge represents a dependency relation between two atoms.
In this thesis, we introduce the notion of an axiom dependency hypergraph, which generalizes the atomic decomposition of an ontology. A module in the ontology corresponds to a connected component in the hypergraph, and the atoms of the ontology to its strongly-connected components. We apply our algorithms for directed hypergraphs to axiom dependency hypergraphs and in this manner compute the atoms of an ontology. To demonstrate the viability of this approach, we have implemented the algorithms in the application HyS, which computes the modules of ontologies and their atomic decomposition. In the thesis, we provide an experimental evaluation of HyS with a selection of large and prominent biomedical ontologies, most of which are available in the NCBO BioPortal. HyS outperforms state-of-the-art implementations in the tasks of extracting modules and computing the atomic decomposition of these ontologies.
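A minimal sketch of the second, reachability-based algorithm transplanted to directed hypergraphs (assumptions: B-hyperedges with a tail set and a single head, where a hyperedge fires once its whole tail is reachable; quadratic for clarity, not the thesis implementation). Because reachability is taken to be reflexive, two nodes have identical reachable sets exactly when each reaches the other, so grouping nodes by reachable set yields the strongly-connected components:

```python
# Sketch: SCCs of a directed B-hypergraph via forward reachability.
from collections import defaultdict

def b_reach(source, hyperedges):
    """Forward B-reachable set from `source` (Gallo-style tail counting)."""
    uses = defaultdict(list)            # node -> hyperedges whose tail has it
    for i, (tail, _head) in enumerate(hyperedges):
        for t in tail:
            uses[t].append(i)
    missing = [len(tail) for tail, _head in hyperedges]
    reached, frontier = {source}, [source]
    while frontier:
        u = frontier.pop()
        for i in uses[u]:
            missing[i] -= 1
            if missing[i] == 0:         # whole tail reached: fire hyperedge
                head = hyperedges[i][1]
                if head not in reached:
                    reached.add(head)
                    frontier.append(head)
    return frozenset(reached)

def hypergraph_sccs(nodes, hyperedges):
    """Group nodes with identical (reflexive) reachable sets."""
    groups = defaultdict(list)
    for v in nodes:
        groups[b_reach(v, hyperedges)].append(v)
    return list(groups.values())

# Usage: a and b reach each other (and c); c reaches only itself.
edges = [({"a"}, "b"), ({"b"}, "a"), ({"a", "b"}, "c")]
print(hypergraph_sccs(["a", "b", "c"], edges))   # [['a', 'b'], ['c']]
```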
Abstract:
The effect of type of fiber, site of fermentation, method for quantifying insoluble and soluble dietary fiber, and their correction for intestinal mucin on fiber digestibility was examined in rabbits. Three diets differing in soluble fiber were formulated (8.5% soluble fiber, on a DM basis, in the low soluble fiber [LSF] diet; 10.2% in the medium soluble fiber [MSF] diet; and 14.5% in the high soluble fiber [HSF] diet). They were obtained by replacing half of the dehydrated alfalfa in the MSF diet with a mixture of beet and apple pulp (HSF diet) or with a mix of oat hulls and soybean protein (LSF diet). Thirty rabbits with ileal T-cannulas were used to determine ileal and fecal digestibility. Cecal digestibility was determined as the difference between fecal and ileal digestibility. Insoluble fiber was measured as NDF, insoluble dietary fiber (IDF), and in vitro insoluble fiber, whereas soluble fiber was calculated as the difference between total dietary fiber (TDF) and NDF (TDF-NDF), IDF (TDF-IDF), and in vitro insoluble fiber (TDF-in vitro insoluble fiber). The intestinal mucin content was used to correct the TDF and soluble fiber digestibility. Ileal and fecal concentration of mucin increased from the LSF to the HSF diet group (P < 0.01). Once corrected for intestinal mucin, ileal and fecal digestibility of TDF and soluble fiber increased whereas cecal digestibility decreased (P < 0.01). Ileal digestibility of TDF increased from the LSF to the HSF diet group (12.0 vs. 28.1%; P < 0.01), with no difference in the cecum (26.4%), resulting in a higher fecal digestibility from the LSF to the HSF diet group (P < 0.01). Ileal digestibility of insoluble fiber increased from the LSF to the HSF diet group (11.3 vs. 21.0%; P < 0.01), with no difference in the cecum (13.9%) and no effect of fiber method, resulting in a higher fecal digestibility for rabbits fed the HSF diet compared with the MSF and LSF diet groups (P < 0.01). Fecal digestibility of NDF was higher compared with IDF or in vitro insoluble fiber (P < 0.01). Ileal soluble fiber digestibility was higher for the HSF than for the LSF diet group (43.6 vs. 14.5%; P < 0.01), and the fiber method did not affect it. Cecal soluble fiber digestibility decreased from the LSF to the HSF diet group (72.1 vs. 49.2%; P < 0.05). The lowest cecal and fecal soluble fiber digestibility was measured using TDF-NDF (P < 0.01). In conclusion, a correction for intestinal mucin is necessary for ileal TDF and soluble fiber digestibility, whereas the selection of the fiber method has minor relevance. The inclusion of sugar beet and apple pulp increased the amount of TDF fermented in the small intestine.
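The two "by difference" calculations used above can be made concrete with the reported TDF coefficients (a minimal sketch; the fecal figures it prints are implied by cecal = fecal − ileal rather than quoted directly in the abstract):

```python
# Digestibility bookkeeping as defined above: the cecal contribution is
# obtained by difference, cecal = fecal - ileal, so fecal = ileal + cecal.
# TDF digestibility coefficients reported in the abstract (%):
ileal_tdf = {"LSF": 12.0, "HSF": 28.1}    # small-intestine disappearance
cecal_tdf = 26.4                          # no diet effect reported

fecal_tdf = {diet: v + cecal_tdf for diet, v in ileal_tdf.items()}
print(fecal_tdf)   # {'LSF': 38.4, 'HSF': 54.5} (implied, %)

# Likewise, "soluble fiber" is an analytical difference of fiber assays,
# e.g. soluble = TDF - NDF (the TDF-NDF method named in the abstract).
```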
Application of the Boundary Method to the determination of the properties of beam cross-sections
Abstract:
Using the 3-D equations of linear elasticity and asymptotic expansion methods, with the beam cross-section area as the small parameter, different beam theories can be obtained according to the last term kept in the expansion. If only the first two terms of the asymptotic expansion are used, the classical beam theories can be recovered without resorting to any additional "a priori" hypotheses. Moreover, some small corrections and extensions of the classical beam theories can be found, and the asymptotic general beam theory can also be used as a basis for a straightforward derivation of the stiffness matrix and the equivalent nodal forces of the beam. In order to obtain the above results, a set of functions and constants depending only on the cross-section of the beam has to be computed as solutions of different 2-D Laplacian boundary value problems over the beam cross-section domain. In this paper two main numerical procedures for solving these boundary value problems are discussed, namely the Boundary Element Method (BEM) and the Finite Element Method (FEM). Results for some regular and geometrically simple cross-sections are presented and compared with those computed analytically. Extensions to other arbitrary cross-sections are illustrated.
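To make the auxiliary 2-D problems concrete, here is a minimal sketch solving one representative Laplacian boundary value problem over a cross-section, the Saint-Venant torsion (Prandtl stress function) problem ∇²φ = −2 with φ = 0 on the boundary of a square section; it uses plain finite differences with Jacobi iteration as an illustrative stand-in for the BEM and FEM discussed in the paper:

```python
import numpy as np

# Saint-Venant torsion of a square cross-section: solve the Prandtl stress
# function, laplacian(phi) = -2 with phi = 0 on the boundary, by a 5-point
# finite-difference stencil and Jacobi iteration (slow but simple).
a, b, n = 1.0, 1.0, 81                   # side lengths and grid resolution
hx, hy = a / (n - 1), b / (n - 1)
phi = np.zeros((n, n))                   # boundary rows/cols stay at 0
for _ in range(20000):                   # enough sweeps to converge here
    phi[1:-1, 1:-1] = (
        hy**2 * (phi[2:, 1:-1] + phi[:-2, 1:-1])
        + hx**2 * (phi[1:-1, 2:] + phi[1:-1, :-2])
        + 2.0 * hx**2 * hy**2
    ) / (2.0 * (hx**2 + hy**2))

# One cross-section property follows by quadrature: the torsional
# constant J = 2 * integral of phi over the section.
J = 2.0 * phi.sum() * hx * hy
print(J)   # ~0.1406 for the unit square (classical tabulated value)
```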
Abstract:
We have identified a novel β amyloid precursor protein (βAPP) mutation (V715M-βAPP770) that cosegregates with early-onset Alzheimer’s disease (AD) in a pedigree. Unlike other familial AD-linked βAPP mutations reported to date, overexpression of V715M-βAPP in human HEK293 cells and murine neurons reduces total Aβ production and increases the recovery of the physiologically secreted product, APPα. V715M-βAPP significantly reduces Aβ40 secretion without affecting Aβ42 production in HEK293 cells. However, a marked increase in N-terminally truncated Aβ ending at position 42 (x-42Aβ) is observed, whereas its counterpart x-40Aβ is not affected. These results suggest that, in some cases, familial AD may be associated with a reduction in the overall production of Aβ but may be caused by increased production of truncated forms of Aβ ending at the 42 position.
Abstract:
In epithelial cells, sorting of membrane proteins to the basolateral surface depends on the presence of a basolateral sorting signal (BaSS) in their cytoplasmic domain. Amyloid precursor protein (APP), a basolateral protein implicated in the pathogenesis of Alzheimer’s disease, contains a tyrosine-based BaSS, and mutation of the tyrosine residue results in nonpolarized transport of APP. Here we report identification of a protein, termed PAT1 (protein interacting with APP tail 1), that interacts with the APP-BaSS but binds poorly when the critical tyrosine is mutated and does not bind the tyrosine-based endocytic signal of APP. PAT1 shows homology to kinesin light chain, which is a component of the plus-end directed microtubule-based motor involved in transporting membrane proteins to the basolateral surface. PAT1, a cytoplasmic protein, associates with membranes, cofractionates with APP-containing vesicles, and binds microtubules in a nucleotide-sensitive manner. Cotransfection of PAT1 with a reporter protein shows that PAT1 is functionally linked with intracellular transport of APP. We propose that PAT1 is involved in the translocation of APP along microtubules toward the cell surface.
Stimulation of amyloid precursor protein synthesis by adrenergic receptors coupled to cAMP formation
Abstract:
Amyloid plaques in Alzheimer disease are primarily aggregates of Aβ peptides that are derived from the amyloid precursor protein (APP). Neurotransmitter agonists that activate phosphatidylinositol hydrolysis and protein kinase C stimulate APP processing and generate soluble, non-amyloidogenic APP (APPs). Elevations in cAMP oppose this stimulatory effect and lead to the accumulation of cell-associated APP holoprotein containing amyloidogenic Aβ peptides. We now report that cAMP signaling can also increase cellular levels of APP holoprotein by stimulating APP gene expression in astrocytes. Treatment of astrocytes with norepinephrine or isoproterenol for 24 h increased both APP mRNA and holoprotein levels, and these increases were blocked by the β-adrenergic antagonist propranolol. Treatment with 8-bromo-adenosine 3′,5′-cyclic monophosphate or forskolin for 24 h similarly increased APP holoprotein levels; astrocytes were also transformed into process-bearing cells expressing increased amounts of glial fibrillary acidic protein, suggesting that these cells resemble reactive astrocytes. The increases in APP mRNA and holoprotein in astrocytes caused by cAMP stimulation were inhibited by the immunosuppressant cyclosporin A. Our study suggests that APP overexpression by reactive astrocytes during neuronal injury may contribute to Alzheimer disease neuropathology, and that immunosuppressants can inhibit cAMP activation of APP gene transcription.
Abstract:
The present paper describes the total chemical synthesis of the precursor molecule of the Aequorea green fluorescent protein (GFP). The molecule is made up of 238 amino acid residues in a single polypeptide chain and is nonfluorescent. To carry out the synthesis, a procedure, first described in 1981 for the synthesis of complex peptides, was used. The procedure is based on performing segment condensation reactions in solution while providing maximum protection to the segment. The effectiveness of the procedure has been demonstrated by the synthesis of various biologically active peptides and small proteins, such as human angiogenin, a 123-residue protein analogue of ribonuclease A, human midkine, a 121-residue protein, and pleiotrophin, a 136-residue protein analogue of midkine. The GFP precursor molecule was synthesized from 26 fully protected segments in solution, and the final 238-residue peptide was treated with anhydrous hydrogen fluoride to obtain the precursor molecule of GFP containing two Cys(acetamidomethyl) residues. After removal of the acetamidomethyl groups, the product was dissolved in 0.1 M Tris⋅HCl buffer (pH 8.0) in the presence of DTT. After several hours at room temperature, the solution began to emit a green fluorescence (λmax = 509 nm) under near-UV light. Both fluorescence excitation and fluorescence emission spectra were measured and were found to have the same shape and maxima as those reported for native GFP. The present results demonstrate the utility of the segment condensation procedure in synthesizing large protein molecules such as GFP. The result also provides evidence that the formation of the chromophore in GFP is not dependent on any external cofactor.
Abstract:
The cDNAs of two new human membrane-associated aspartic proteases, memapsin 1 and memapsin 2, have been cloned and sequenced. The deduced amino acid sequences show that each contains the typical pre, pro, and aspartic protease regions, but each also has a C-terminal extension of over 80 residues, which includes a single transmembrane domain and a C-terminal cytosolic domain. Memapsin 2 mRNA is abundant in human brain. The protease domain of memapsin 2 cDNA was expressed in Escherichia coli and was purified. Recombinant memapsin 2 specifically hydrolyzed peptides derived from the β-secretase site of both the wild-type and Swedish mutant β-amyloid precursor protein (APP) with over 60-fold increase of catalytic efficiency for the latter. Expression of APP and memapsin 2 in HeLa cells showed that memapsin 2 cleaved the β-secretase site of APP intracellularly. These and other results suggest that memapsin 2 fits all of the criteria of β-secretase, which catalyzes the rate-limiting step of the in vivo production of the β-amyloid (Aβ) peptide leading to the progression of Alzheimer's disease. Recombinant memapsin 2 also cleaved a peptide derived from the processing site of presenilin 1, albeit with poor kinetic efficiency. Alignment of cleavage site sequences of peptides indicates that the specificity of memapsin 2 resides mainly at the S1′ subsite, which prefers small side chains such as Ala, Ser, and Asp.
Abstract:
In a recent article [Khan, A. U., Kovacic, D., Kolbanovsky, A., Desai, M., Frenkel, K. & Geacintov, N. E. (2000) Proc. Natl. Acad. Sci. USA 97, 2984–2989], the authors claimed that ONOO−, after protonation to ONOOH, decomposes into 1HNO and 1O2 according to a spin-conserved unimolecular mechanism. This claim was based partially on their observation that nitrosylhemoglobin is formed via the reaction of peroxynitrite with methemoglobin at neutral pH. However, thermochemical considerations show that the yields of 1O2 and 1HNO are about 23 orders of magnitude lower than those of ⋅NO2 and ⋅OH, which are formed via the homolysis of ONOOH. We also show that methemoglobin does not form any spectrally detectable product with peroxynitrite itself, but only with the nitrite and H2O2 contaminants present in the peroxynitrite sample. Thus, there is no need to modify the present view of the mechanism of ONOOH decomposition, according to which initial homolysis into a radical pair, [ONO⋅ ⋅OH]cage, is followed by the diffusion of about 30% of the radicals out of the cage, while the rest recombine to nitric acid in the solvent cage.
Abstract:
Objective: To evaluate the READER model for critical reading by comparing it with a free appraisal, and to explore what factors influence different components of the model.
Abstract:
The reduction in levels of the potentially toxic amyloid-β peptide (Aβ) has emerged as one of the most important therapeutic goals in Alzheimer's disease. Key targets for this goal are factors that affect the expression and processing of the Aβ precursor protein (βAPP). Earlier reports from our laboratory have shown that a novel cholinesterase inhibitor, phenserine, reduces βAPP levels in vivo. Herein, we studied the mechanism of phenserine's actions to define the regulatory elements in βAPP processing. Phenserine treatment resulted in decreased secretion of soluble βAPP and Aβ into the conditioned media of human neuroblastoma cells without cellular toxicity. The regulation of βAPP protein expression by phenserine was posttranscriptional as it suppressed βAPP protein expression without altering βAPP mRNA levels. However, phenserine's action was neither mediated through classical receptor signaling pathways, involving extracellular signal-regulated kinase or phosphatidylinositol 3-kinase activation, nor was it associated with the anticholinesterase activity of the drug. Furthermore, phenserine reduced expression of a chloramphenicol acetyltransferase reporter fused to the 5′-mRNA leader sequence of βAPP without altering expression of a control chloramphenicol acetyltransferase reporter. These studies suggest that phenserine reduces Aβ levels by regulating βAPP translation via the recently described iron regulatory element in the 5′-untranslated region of βAPP mRNA, which has been shown previously to be up-regulated in the presence of interleukin-1. This study identifies an approach for the regulation of βAPP expression that can result in a substantial reduction in the level of Aβ.
Abstract:
The existence of the RNA world, in which RNA acted as a catalyst as well as an informational macromolecule, assumes a large prebiotic source of ribose or the existence of pre-RNA molecules with backbones different from ribose-phosphate. The generally accepted prebiotic synthesis of ribose, the formose reaction, yields numerous sugars without any selectivity. Even if there were a selective synthesis of ribose, there is still the problem of stability. Sugars are known to be unstable in strong acid or base, but there are few data for neutral solutions. Therefore, we have measured the rate of decomposition of ribose between pH 4 and pH 8 from 40 degrees C to 120 degrees C. The ribose half-lives are very short (73 min at pH 7.0 and 100 degrees C and 44 years at pH 7.0 and 0 degrees C). The other aldopentoses and aldohexoses have half-lives within an order of magnitude of these values, as do 2-deoxyribose, ribose 5-phosphate, and ribose 2,4-bisphosphate. These results suggest that the backbone of the first genetic material could not have contained ribose or other sugars because of their instability.
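The quoted half-lives determine first-order rate constants, k = ln 2 / t½, and two temperatures suffice for a back-of-envelope Arrhenius estimate of the apparent activation energy (a sketch using only the pH 7.0 numbers stated above):

```python
import math

# First-order decomposition: k = ln 2 / t_half. Half-lives from the
# abstract (ribose, pH 7.0): 73 min at 100 C and 44 years at 0 C.
t_half_hot = 73.0                          # minutes, at 373.15 K
t_half_cold = 44.0 * 365.25 * 24 * 60      # minutes, at 273.15 K
k_hot = math.log(2) / t_half_hot           # ~9.5e-3 min^-1
k_cold = math.log(2) / t_half_cold         # ~3.0e-8 min^-1

# Two-point Arrhenius fit: ln(k_hot/k_cold) = (Ea/R)(1/T_cold - 1/T_hot)
R, T_hot, T_cold = 8.314, 373.15, 273.15
Ea = R * math.log(k_hot / k_cold) / (1 / T_cold - 1 / T_hot)
print(Ea / 1000)   # ~107 kJ/mol apparent activation energy
```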