983 results for ATOMIC DISPLACEMENTS
Abstract:
Twelve commercially available edible marine algae from France, Japan and Spain and the certified reference material (CRM) NIES No. 9 Sargassum fulvellum were analyzed for total arsenic and arsenic species. Total arsenic concentrations were determined by inductively coupled plasma atomic emission spectrometry (ICP-AES) after microwave digestion and ranged from 23 to 126 μg g⁻¹. Arsenic species were extracted from the algal samples with deionized water by microwave-assisted extraction, with extraction efficiencies of 49 to 98% in terms of total arsenic. The presence of eleven arsenic species was studied by high performance liquid chromatography–ultraviolet photo-oxidation–hydride generation atomic fluorescence spectrometry (HPLC–(UV)–HG–AFS) methods developed with both anion and cation exchange chromatography. Glycerol and phosphate sugars were found in all algal samples analyzed, at concentrations between 0.11 and 22 μg g⁻¹, whereas sulfonate and sulfate sugars were detected in only three of them (0.6–7.2 μg g⁻¹). Regarding toxic arsenic species, low concentrations of dimethylarsinic acid (DMA) (<0.9 μg g⁻¹) and generally high arsenate (As(V)) concentrations (up to 77 μg g⁻¹) were found in most of the algae studied. The results highlight the need to perform speciation analysis and to introduce appropriate legislation limiting the content of toxic arsenic species in these food products.
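Since the extraction efficiency is reported "in terms of total arsenic", a minimal sketch of how that figure is obtained may be helpful; the concentrations below are hypothetical illustrations, not values from the study.

```python
# Minimal sketch of the reported extraction-efficiency metric: the arsenic
# recovered in the deionized-water extract expressed as a percentage of the
# total arsenic measured by ICP-AES.  All concentrations here are made up.

def extraction_efficiency(extracted_ug_per_g: float, total_ug_per_g: float) -> float:
    """Extraction efficiency in % of total arsenic."""
    return 100.0 * extracted_ug_per_g / total_ug_per_g

total_as = 60.0      # ug/g dry mass, total As after microwave digestion (hypothetical)
extracted_as = 45.0  # ug/g dry mass, As recovered in the water extract (hypothetical)
print(f"Extraction efficiency: {extraction_efficiency(extracted_as, total_as):.0f}%")
```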
Abstract:
An efficient approach for the simulation of ion scattering from solids is proposed. For every encountered atom, we take multiple samples of its thermal displacement, chosen among those displacements that result, with high probability, in scattering that finally reaches the detector. As a result, the detector is illuminated by intense “showers,” where each detection event must be weighted according to the actual probability of the atom displacement. The computational cost of such a simulation is orders of magnitude lower than in the direct approach, and a comprehensive analysis of multiple and plural scattering effects becomes possible. We use this method for two purposes. First, the accuracy of the approximate approaches, developed mainly for ion-beam structural analysis, is verified. Second, the possibility of reproducing a wide class of experimental conditions is used to analyze some basic features of ion-solid collisions: the role of double violent collisions in low-energy ion scattering; the origin of the “surface peak” in scattering from amorphous samples; the low-energy tail in the energy spectra of scattered medium-energy ions due to plural scattering; and the degradation of blocking patterns in two-dimensional angular distributions with increasing depth of scattering. As an example of simulation for ions of MeV energies, we verify the time reversibility of channeling and blocking for 1-MeV protons in a W crystal. The possibilities of analysis that our approach offers may be very useful for various applications, in particular for structural analysis with atomic resolution.
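The central trick is an importance-sampling scheme: thermal displacements of each encountered atom are drawn from a distribution biased toward detector-reaching trajectories, and every detected event is weighted by the ratio of the true to the biased displacement probability. Below is a minimal one-dimensional sketch of that weighting; the Gaussian widths and the "reaches the detector" test are toy stand-ins for the real scattering kinematics, not the authors' code.

```python
# Importance-sampling sketch of the weighting idea described above.  A 1D
# Gaussian thermal displacement and a toy acceptance test stand in for the
# full trajectory calculation (all parameters are hypothetical).

import math
import random

SIGMA_TRUE = 0.05    # rms thermal displacement of the target atom (toy units)
SIGMA_BIASED = 0.15  # broader sampling distribution -> more "detector hits"

def gauss_pdf(x, sigma):
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def reaches_detector(displacement):
    # Toy criterion: only fairly large displacements deflect the ion into the
    # detector acceptance.  The real simulation follows the full trajectory.
    return abs(displacement) > 0.12

def weighted_yield(n_samples=100_000):
    total_weight = 0.0
    for _ in range(n_samples):
        x = random.gauss(0.0, SIGMA_BIASED)            # biased sample
        if reaches_detector(x):
            w = gauss_pdf(x, SIGMA_TRUE) / gauss_pdf(x, SIGMA_BIASED)
            total_weight += w                          # weighted detection event
    return total_weight / n_samples                    # unbiased detection probability

print(f"Estimated detection probability: {weighted_yield():.2e}")
```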
Abstract:
The Kariba dam is undergoing concrete expansion as a result of an alkali-aggregate reaction. The model adopted to simulate the process is explained in the paper; it is based on the model first proposed by Ulm et al., as later modified by Saouma and Perotti. It has been implemented in the commercial finite element code Abaqus and applied to solve the benchmark problem. The parameters of the model were calibrated using the data recorded up to 1995, and the calibrated model was then used to predict the evolution of the dam up to the present date. Apart from this prediction, the paper offers a number of conclusions, such as the fact that the stress level appears to have a major influence on the expansion process, and presents some suggestions to improve the formulation of the benchmark, such as providing temperature data and widening the locations and conditions of the data employed in the calibration.
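For reference, the commonly cited form of the Ulm et al. kinetics describes the reaction extent with a latency time and a characteristic time, both shifted with temperature through Arrhenius-type laws. The sketch below uses that published form with hypothetical parameter values, not the calibrated Kariba values, and does not include the stress dependence added by the Saouma–Perotti modification.

```python
# Sketch of Ulm-type AAR kinetics (hypothetical parameters, not the paper's
# calibration).  The reaction extent xi(t, T) grows with a latency time tau_l
# and a characteristic time tau_c, both temperature dependent via Arrhenius laws.

import math

U_C = 5400.0    # K, activation constant of the characteristic time (typical literature value)
U_L = 9400.0    # K, activation constant of the latency time (typical literature value)
T0 = 311.0      # K, reference temperature of the calibration (hypothetical)
TAU_C0 = 70.0   # days at T0 (hypothetical)
TAU_L0 = 120.0  # days at T0 (hypothetical)

def tau(tau0, U, T):
    """Arrhenius shift of a calibration time from T0 to temperature T (K)."""
    return tau0 * math.exp(U * (1.0 / T - 1.0 / T0))

def aar_extent(t_days, T):
    """Normalized AAR reaction extent xi in [0, 1] at time t and temperature T."""
    tc, tl = tau(TAU_C0, U_C, T), tau(TAU_L0, U_L, T)
    return (1.0 - math.exp(-t_days / tc)) / (1.0 + math.exp(-(t_days - tl) / tc))

# Free expansion is then eps(t) = eps_infinity * xi(t, T); the stress dependence
# introduced by the Saouma-Perotti modification is not sketched here.
for years in (5, 20, 50):
    print(years, "years:", round(aar_extent(365 * years, 287.0), 3))
```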
Abstract:
We present two concurrent semantics (i.e. semantics where concurrency is explicitly represented) for CC programs with atomic tells. One is based on simple partial orders of computation steps, while the other is based on contextual nets and is an extension of a previous semantics for eventual CC programs. Both semantics allow us to derive concurrency, dependency, and nondeterminism information for the considered languages. We prove some properties about the relation between the two semantics, and also about their relation to the operational semantics. Moreover, we discuss how to use the contextual net semantics in the context of CLP programs. More precisely, by interpreting concurrency as possible parallelism, our semantics can be useful for a safe parallelization of some CLP computation steps. Dually, the dependency information may also be interpreted as necessary sequentialization, and can thus be exploited for the task of scheduling CC programs. Moreover, our semantics is also suitable for CC programs with a new kind of atomic tell (called locally atomic tell), which checks for consistency only the constraints it depends on. Such a tell achieves a reasonable trade-off between efficiency and atomicity, since the checked constraints can be stored in a local memory and are thus easily accessible even in a distributed implementation.
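As an illustration of the kind of information such semantics expose, the sketch below reads concurrency and dependency off an explicit partial order of computation steps; the step names and dependencies are invented for the example and are not the paper's contextual-net construction.

```python
# Toy sketch: two steps are causally ordered if one (transitively) depends on
# the other, and concurrent otherwise.  Step names are illustrative only.

from itertools import combinations

# immediate dependencies: step -> steps it directly depends on
deps = {
    "tell_c1": [],
    "ask_c1":  ["tell_c1"],
    "tell_c2": [],
    "ask_c2":  ["tell_c2", "tell_c1"],
}

def depends_on(a, b, deps):
    """True if step a transitively depends on step b."""
    stack, seen = list(deps[a]), set()
    while stack:
        s = stack.pop()
        if s == b:
            return True
        if s not in seen:
            seen.add(s)
            stack.extend(deps[s])
    return False

for a, b in combinations(deps, 2):
    if depends_on(a, b, deps) or depends_on(b, a, deps):
        print(a, "and", b, "are causally ordered (must be sequentialized)")
    else:
        print(a, "and", b, "are concurrent (may run in parallel)")
```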
Abstract:
We present a concurrent semantics (i.e. a semantics where concurrency is explicitly represented) for CC programs with atomic tells. This allows us to derive concurrency, dependency, and nondeterminism information for such languages. The ability to treat failure information also brings CLP programs into the range of applicability of our semantics: although such programs are not concurrent, the concurrency information derived in the semantics may be interpreted as possible parallelism, thus allowing those computation steps which appear concurrent in the net to be safely parallelized. Dually, the dependency information may also be interpreted as necessary sequentialization, which can then be exploited to schedule CC programs. The fact that the semantic structure contains dependency information suggests a new tell operation, which checks for consistency only the constraints it depends on, achieving a reasonable trade-off between efficiency and atomicity.
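To make the scheduling reading concrete: interpreting dependencies as necessary sequentialization, any topological order of the dependency relation is a valid schedule. A minimal sketch with invented step names follows.

```python
# Toy sketch: turning dependency information into a schedule.  Any topological
# order of the dependency graph executes each step only after the steps it
# depends on; unrelated steps could equally run in parallel.  (Step names are
# illustrative, not taken from the paper.)  Requires Python 3.9+.

from graphlib import TopologicalSorter

# step -> set of steps that must run before it
dependencies = {
    "ask_c1":  {"tell_c1"},
    "ask_c2":  {"tell_c1", "tell_c2"},
    "tell_c1": set(),
    "tell_c2": set(),
}

ts = TopologicalSorter(dependencies)
print("one valid schedule:", list(ts.static_order()))
```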
Abstract:
The electronic structure of modified chalcopyrite CuInS2 has been analyzed from first principles within density functional theory. The host chalcopyrite has been modified by introducing atomic impurities M at substitutional sites in the host lattice, with M = C, Si, Ge, Sn, Ti, V, Cr, Fe, Co, Ni, Rh, and Ir. Both M-for-In and M-for-Cu substitutions have been analyzed. The gap and ionization energies are obtained as a function of the M-S displacements. These results are of interest for both spintronic and optoelectronic applications because they provide significant information on the pressure effect and on nonradiative recombination.
Abstract:
The use of GaAsSbN capping layers on InAs/GaAs quantum dots (QDs) has recently been proposed for micro- and optoelectronic applications because of their ability to independently tailor electron and hole confinement potentials. However, there is a lack of knowledge about the structural and compositional changes associated with the simultaneous incorporation of Sb and N. In the present work, we have characterized the effects of adding N to the GaAsSb/InAs/GaAs QD system using transmission electron microscopy techniques. First, strain maps of the regions away from the InAs QDs revealed a strong reduction of the strain fields with N incorporation but a higher inhomogeneity, which points to an enhancement of composition modulation, with Sb-rich and Sb-poor regions on the scale of a few nanometers. On the other hand, the average strain in the QDs and their surroundings is similar in both cases. This could be explained by the accumulation of Sb above the QDs, which compensates the tensile strain induced by the N incorporation, together with an inhibition of In-Ga intermixing. Indeed, column-resolution compositional maps from aberration-corrected Z-contrast images confirmed that the addition of N enhances the preferential deposition of Sb above the InAs QDs, giving rise to an undulation of the growth front. As an outcome, the strong redshift in the photoluminescence spectrum of the GaAsSbN sample cannot be attributed only to the N-related reduction of the conduction band offset, but also to an enhancement of the effect of Sb on the QD band structure.
Abstract:
The atomic environments of Fe and Co involved in the magnetostriction of FeCoB alloys have been identified by differential extended x-ray absorption fine structure (DiffEXAFS) spectroscopy. The study, carried out on amorphous and polycrystalline FeCoB films, demonstrates that the alloys are heterogeneous and that boron plays a crucial role in the origin of their magnetostrictive properties. The analysis of DiffEXAFS in the polycrystalline and amorphous alloys indicates that boron activates magnetostriction when entering as an impurity into octahedral interstitial sites of the Fe bcc lattice, causing its tetragonal distortion. Magnetostriction is then explained by the relative change in volume when the tetragonal axis of the site is reoriented under an externally applied magnetic field. The experiment demonstrates the extreme sensitivity of DiffEXAFS for characterizing magnetostrictive environments that are undetectable in the corresponding EXAFS spectra.
Abstract:
The Atomic Physics Group at the Institute of Nuclear Fusion (DENIM) in Spain has accumulated experience over the years in developing a collection of computational models and tools for determining relevant microscopic properties of, mainly, ICF and laser-produced plasmas under a variety of conditions. In this work, several applications of those models are presented.
Abstract:
Experimental diffusion data were critically assessed to develop the atomic mobility description for the bcc phase of the Ti–Al–Fe system using the DICTRA software. Good agreement was obtained in comprehensive comparisons between the calculated and experimental diffusion coefficients. The developed atomic mobility description was then validated by its accurate prediction of the interdiffusion behavior observed in diffusion-couple experiments reported in the literature.
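For context, DICTRA-type assessments store, for each diffusing species, a composition- and temperature-dependent mobility parameter Φ expanded in Redlich–Kister form, from which the mobility M = exp(Φ/RT)/(RT) and the tracer diffusivity D* = RT·M follow (the Andersson–Ågren formalism). The sketch below uses this standard formalism with hypothetical parameter values, not the assessed Ti–Al–Fe mobilities.

```python
# Sketch of the Andersson-Agren mobility formalism used by DICTRA.  All
# parameter values below are hypothetical placeholders, not the assessed
# Ti-Al-Fe bcc mobilities.

import math

R = 8.314  # J/(mol K)

# end-member parameters Phi^i for one diffusing species (J/mol), hypothetical
PHI_END = {"Ti": -370_000.0, "Al": -350_000.0, "Fe": -340_000.0}
# 0th-order binary Redlich-Kister interaction parameters (J/mol), hypothetical
PHI_BIN = {("Ti", "Al"): -20_000.0, ("Ti", "Fe"): -15_000.0, ("Al", "Fe"): -10_000.0}

def phi(x):
    """Composition-dependent mobility parameter for one diffusing species."""
    val = sum(x[i] * PHI_END[i] for i in x)
    for (i, j), L0 in PHI_BIN.items():
        val += x[i] * x[j] * L0      # higher-order (x_i - x_j)^r terms omitted
    return val

def tracer_diffusivity(x, T):
    """D* = R*T*M with M = exp(Phi/RT)/(RT)."""
    M = math.exp(phi(x) / (R * T)) / (R * T)
    return R * T * M

x = {"Ti": 0.90, "Al": 0.05, "Fe": 0.05}   # mole fractions of a hypothetical alloy
print(f"D* at 1473 K: {tracer_diffusivity(x, 1473.0):.3e} m^2/s")
```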
Abstract:
Directed hypergraphs have been used in problems related to propositional logic, relational databases, computational linguistics and machine learning. They have also been used as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot easily be modelled using binary relations alone. In this context, this type of representation is known as a hyper-network. A directed hypergraph is a generalization of a directed graph that is particularly well suited to representing many-to-many relationships. Whereas an edge in a directed graph defines a relation between two of its nodes, a hyperedge in a directed hypergraph defines a relation between two sets of its nodes. Strong connectivity is an equivalence relation that partitions the set of nodes of a directed hypergraph, and each partition defines an equivalence class known as a strongly-connected component. Studying the strongly-connected components of a directed hypergraph can lead to a better understanding of the structure of such hypergraphs when they are large. In the case of directed graphs, there are very efficient algorithms for computing the strongly-connected components of large graphs. Thanks to these algorithms, it has been shown that the structure of the WWW has the shape of a “bow-tie”, in which more than 70% of the nodes are distributed into three large sets, one of which is a strongly-connected component. This type of structure has also been observed in complex networks in other areas, such as biology. Studies of a similar nature have not been possible for directed hypergraphs because no algorithms existed for computing the strongly-connected components of such hypergraphs. In this doctoral thesis, we have investigated how to compute the strongly-connected components of a directed hypergraph. Specifically, we have developed two algorithms for this problem and have established their correctness and computational complexity. Both algorithms have been evaluated empirically to compare their running times. For the evaluation, we produced a selection of randomly generated directed hypergraphs inspired by well-known random graph models such as Erdős–Rényi, Newman–Watts–Strogatz and Barabási–Albert. Several optimizations for both algorithms have been implemented and analysed in the thesis. In particular, collapsing the strongly-connected components of the directed graph that can be constructed by removing certain complex hyperedges from the original directed hypergraph notably improves the running times of the algorithms for several of the hypergraphs used in the evaluation. Apart from the application examples mentioned above, directed hypergraphs have also been used in the area of knowledge representation. In particular, this type of hypergraph has been used to compute modules of ontologies. An ontology can be defined as a set of axioms that formally specifies a set of symbols and their relations, while a module can be understood as a subset of the axioms of the ontology that captures all the knowledge the ontology stores about a specific set of symbols and their relations.
In the thesis we have focused only on modules computed using the technique of syntactic locality. Since ontologies can be very large, computing modules can facilitate the reuse and maintenance of these ontologies. However, analysing all possible modules of an ontology is, in general, very costly because the number of modules grows exponentially with the number of symbols and axioms of the ontology. Fortunately, the axioms of an ontology can be divided into partitions known as atoms. Each atom represents a maximal set of axioms that always appear together in a module. The atomic decomposition of an ontology is defined as a directed graph such that each node of the graph corresponds to an atom and each edge defines a dependency between a pair of atoms. In this thesis we introduce the concept of an “axiom dependency hypergraph”, which generalizes the atomic decomposition of an ontology. A module in an ontology corresponds to a connected component in this type of hypergraph, and an atom of an ontology to a strongly-connected component. We have adapted the implementation of our algorithms so that they also work with axiom dependency hypergraphs, thereby allowing the atoms of an ontology to be computed. To demonstrate the viability of this idea, we have incorporated our algorithms into an application we developed for module extraction and the atomic decomposition of ontologies. We have called this application HyS, and we have studied its running times using a selection of well-known ontologies from the biomedical domain, most of them available in the NCBO BioPortal. The evaluation results show that the running times of HyS are much better than those of the fastest known applications.

ABSTRACT

Directed hypergraphs are an intuitive modelling formalism that has been used in problems related to propositional logic, relational databases, computational linguistics and machine learning. Directed hypergraphs are also presented as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot naturally be modelled as binary relations. In this context, they are known as hyper-networks. A directed hypergraph is a generalization of a directed graph suitable for representing many-to-many relationships. While an edge in a directed graph defines a relation between two nodes of the graph, a hyperedge in a directed hypergraph defines a relation between two sets of nodes. Strong-connectivity is an equivalence relation that induces a partition of the set of nodes of a directed hypergraph into strongly-connected components. These components can be collapsed into single nodes. As a result, the size of the original hypergraph can be significantly reduced if the strongly-connected components have many nodes. This approach might contribute to a better understanding of how the nodes of a hypergraph are connected, in particular when the hypergraphs are large. In the case of directed graphs, there are efficient algorithms that can be used to compute the strongly-connected components of large graphs. For instance, it has been shown that the macroscopic structure of the World Wide Web can be represented as a “bow-tie” diagram where more than 70% of the nodes are distributed into three large sets and one of these sets is a large strongly-connected component.
This particular structure has also been observed in complex networks in other fields such as biology. Similar studies have not been conducted for directed hypergraphs because no algorithms existed for computing the strongly-connected components of such hypergraphs. In this thesis, we investigate ways to compute the strongly-connected components of directed hypergraphs. We present two new algorithms and we show their correctness and computational complexity. One of these algorithms is inspired by Tarjan’s algorithm for directed graphs. The second algorithm follows a simple approach to compute the strongly-connected components. This approach is based on the fact that two nodes of a graph that are strongly-connected can also reach the same nodes; in other words, the connected component of each node is the same. Both algorithms are empirically evaluated to compare their performance. To this end, we have produced a selection of random directed hypergraphs inspired by existing, well-known random graph models such as Erdős–Rényi and Newman–Watts–Strogatz. Besides the application examples that we mentioned earlier, directed hypergraphs have also been employed in the field of knowledge representation. In particular, they have been used to compute the modules of an ontology. An ontology is defined as a collection of axioms that provides a formal specification of a set of terms and their relationships, and a module is a subset of an ontology that completely captures the meaning of certain terms as defined in the ontology. In particular, we focus on the modules computed using the notion of syntactic locality. As ontologies can be very large, the computation of modules facilitates the reuse and maintenance of these ontologies. Analysing all modules of an ontology, however, is in general not feasible as the number of modules grows exponentially in the number of terms and axioms of the ontology. Nevertheless, the modules can succinctly be represented using the Atomic Decomposition of an ontology. Using this representation, an ontology can be partitioned into atoms, which are maximal sets of axioms that co-occur in every module. The Atomic Decomposition is then defined as a directed graph such that each node corresponds to an atom and each edge represents a dependency relation between two atoms. In this thesis, we introduce the notion of an axiom dependency hypergraph, which is a generalization of the atomic decomposition of an ontology. A module in the ontology corresponds to a connected component in the hypergraph, and the atoms of the ontology to the strongly-connected components. We apply our algorithms for directed hypergraphs to axiom dependency hypergraphs and, in this manner, compute the atoms of an ontology. To demonstrate the viability of this approach, we have implemented the algorithms in the application HyS, which computes the modules of ontologies and calculates their atomic decomposition. In the thesis, we provide an experimental evaluation of HyS with a selection of large and prominent biomedical ontologies, most of which are available in the NCBO BioPortal. HyS outperforms state-of-the-art implementations in the tasks of extracting modules and computing the atomic decomposition of these ontologies.
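Below is a minimal sketch of the simple reachability-based approach described above, for hyperedges given as (tail set, head set) pairs. It assumes that a hyperedge becomes traversable once its entire tail set has been reached (an illustrative convention, not necessarily the exact definition used in the thesis); two nodes are then strongly connected exactly when each can reach the other.

```python
# Reachability-based computation of strongly-connected components of a
# directed hypergraph.  A hypergraph is a list of hyperedges (tail_set, head_set).
# Traversal convention assumed here for illustration: a hyperedge can be
# traversed, making all of its head nodes reachable, once every node in its
# tail set has been reached.

from itertools import chain

def reachable_from(node, hyperedges):
    """Fixpoint computation of the set of nodes reachable from `node`."""
    reached = {node}
    changed = True
    while changed:
        changed = False
        for tail, head in hyperedges:
            if tail <= reached and not head <= reached:
                reached |= head
                changed = True
    return reached

def strongly_connected_components(nodes, hyperedges):
    reach = {v: reachable_from(v, hyperedges) for v in nodes}
    components, assigned = [], set()
    for v in nodes:
        if v in assigned:
            continue
        comp = {u for u in reach[v] if v in reach[u]}   # mutual reachability
        components.append(comp)
        assigned |= comp
    return components

# Toy hypergraph: {a} -> {b}, {b} -> {a, c}, {a, c} -> {d}
H = [({"a"}, {"b"}), ({"b"}, {"a", "c"}), ({"a", "c"}, {"d"})]
N = set(chain.from_iterable(t | h for t, h in H))
print(strongly_connected_components(sorted(N), H))   # {a, b} form one component
```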
Abstract:
We have applied in situ atomic force microscopy to directly observe the aggregation of Alzheimer’s β-amyloid peptide (Aβ) in contact with two model solid surfaces: hydrophilic mica and hydrophobic graphite. The time course of aggregation was followed by continuous imaging of surfaces remaining in contact with 10–500 μM solutions of Aβ in PBS (pH 7.4). Visualization of fragile nanoscale aggregates of Aβ was made possible by the application of a tapping mode of imaging, which minimizes the lateral forces between the probe tip and the sample. The size and the shape of Aβ aggregates, as well as the kinetics of their formation, exhibited pronounced dependence on the physicochemical nature of the surface. On hydrophilic mica, Aβ formed particulate, pseudomicellar aggregates, which at higher Aβ concentration had the tendency to form linear assemblies, reminiscent of protofibrillar species described recently in the literature. In contrast, on hydrophobic graphite Aβ formed uniform, elongated sheets. The dimensions of those sheets were consistent with the dimensions of β-sheets with extended peptide chains perpendicular to the long axis of the aggregate. The sheets of Aβ were oriented along three directions at 120° to each other, resembling the crystallographic symmetry of a graphite surface. Such substrate-templated self-assembly may be the distinguishing feature of β-sheets in comparison with α-helices. These studies show that in situ atomic force microscopy enables direct assessment of amyloid aggregation in physiological fluids and suggest that Aβ fibril formation may be driven by interactions at the interface of aqueous solutions and hydrophobic substrates, as occurs in membranes and lipoprotein particles in vivo.
Abstract:
Yeast centromeric DNA (CEN DNA) binding factor 3 (CBF3) is a multisubunit protein complex that binds to the essential CDEIII element in CEN DNA. The four CBF3 proteins are required for accurate chromosome segregation and are considered to be core components of the yeast kinetochore. We have examined the structure of the CBF3–CEN DNA complex by atomic force microscopy. Assembly of CBF3–CEN DNA complexes was performed by combining purified CBF3 proteins with a DNA fragment that includes the CEN region from yeast chromosome III. Atomic force microscopy images showed DNA molecules with attached globular bodies. The contour length of the DNA containing the complex is ≈9% shorter than the DNA alone, suggesting some winding of DNA within the complex. The measured location of the single binding site indicates that the complex is located asymmetrically to the right of CDEIII extending away from CDEI and CDEII, which is consistent with previous data. The CEN DNA is bent ≈55° at the site of complex formation. A significant fraction of the complexes are linked in pairs, showing three to four DNA arms, with molecular volumes approximately three times the mean volumes of two-armed complexes. These multi-armed complexes indicate that CBF3 can bind two DNA molecules together in vitro and, thus, may be involved in holding together chromatid pairs during mitosis.
Abstract:
Leukocytes roll along the endothelium of postcapillary venules in response to inflammatory signals. Rolling under the hydrodynamic drag forces of blood flow is mediated by the interaction between selectins and their ligands across the leukocyte and endothelial cell surfaces. Here we present force-spectroscopy experiments on single complexes of P-selectin and P-selectin glycoprotein ligand-1 by atomic force microscopy to determine the intrinsic molecular properties of this dynamic adhesion process. By modeling intermolecular and intramolecular forces as well as the adhesion probability in atomic force microscopy experiments we gain information on rupture forces, elasticity, and kinetics of the P-selectin/P-selectin glycoprotein ligand-1 interaction. The complexes are able to withstand forces up to 165 pN and show a chain-like elasticity with a molecular spring constant of 5.3 pN nm−1 and a persistence length of 0.35 nm. The dissociation rate constant (off-rate) varies over three orders of magnitude from 0.02 s−1 under zero force up to 15 s−1 under external applied forces. Rupture force and lifetime of the complexes are not constant, but directly depend on the applied force per unit time, which is a product of the intrinsic molecular elasticity and the external pulling velocity. The high strength of binding combined with force-dependent rate constants and high molecular elasticity are tailored to support physiological leukocyte rolling.
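The force-dependent off-rate and the chain-like elasticity reported here are conventionally described by the Bell model and the Marko–Siggia worm-like-chain interpolation, respectively. The sketch below combines the abstract's zero-force off-rate and persistence length with hypothetical values for the bond width and contour length; it illustrates the standard relations, not the authors' fitting procedure.

```python
# Bell model for a force-dependent off-rate, k(F) = k0 * exp(F * x_beta / kT),
# and the Marko-Siggia worm-like-chain interpolation for chain-like elasticity.
# k0 and the persistence length are taken from the abstract; the bond width
# x_beta and the contour length are hypothetical placeholders.

import math

KT = 4.11e-21      # J, thermal energy at ~298 K
K0 = 0.02          # 1/s, zero-force off-rate (from the abstract)
X_BETA = 0.16e-9   # m, width of the binding potential (hypothetical)
LP = 0.35e-9       # m, persistence length (from the abstract)
LC = 80e-9         # m, contour length of the stretched complex (hypothetical)

def bell_off_rate(force_N):
    """Off-rate under a constant pulling force (Bell model)."""
    return K0 * math.exp(force_N * X_BETA / KT)

def wlc_force(extension_m):
    """Marko-Siggia worm-like-chain force at a given end-to-end extension."""
    z = extension_m / LC
    return (KT / LP) * (0.25 / (1.0 - z) ** 2 - 0.25 + z)

for f_pN in (0, 50, 100, 165):
    print(f"k_off at {f_pN:3d} pN: {bell_off_rate(f_pN * 1e-12):.2f} 1/s")
print(f"WLC force at 80% extension: {wlc_force(0.8 * LC) * 1e12:.0f} pN")
```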