954 results for "Spherical elastic shells"
Abstract:
Predicting failures in a distributed system from previous events through logistic regression is a standard approach in the literature. This technique is not reliable, though, in two situations: in the prediction of rare events, which do not appear in a large enough proportion for the algorithm to capture them, and in environments with too many variables, where logistic regression tends to overfit, while manually selecting a subset of variables to build the model is error-prone. In this paper, we solve an industrial research case that presented this situation with a combination of elastic net logistic regression, a method that automatically selects useful variables, a process of cross-validation on top of it, and the application of a rare-events prediction technique to reduce computation time. This process provides two layers of cross-validation that automatically obtain the optimal model complexity and the optimal model parameter values, while ensuring that even rare events will be correctly predicted with a low number of training instances. We tested this method against real industrial data, obtaining a total of 60 out of 80 possible models with a 90% average model accuracy.
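The elastic-net selection step can be illustrated with a toy implementation. The sketch below is not the authors' pipeline (which also layers nested cross-validation and a rare-events technique on top); it is a minimal pure-Python logistic regression with an L1+L2 penalty, showing how the L1 term keeps the weights of uninformative variables near zero. All data and names are illustrative.

```python
import math

def sigmoid(z):
    # clamp to avoid overflow in exp for extreme logits
    return 1.0 / (1.0 + math.exp(-max(min(z, 30.0), -30.0)))

def fit_elastic_net_logreg(X, y, l1=0.01, l2=0.01, lr=0.1, epochs=500):
    """Logistic regression trained by (sub)gradient descent with an
    elastic-net penalty l1*|w_j| + (l2/2)*w_j^2 on each weight."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        for j in range(d):
            sign = 1.0 if w[j] > 0 else -1.0 if w[j] < 0 else 0.0
            w[j] -= lr * (gw[j] / n + l2 * w[j] + l1 * sign)
        b -= lr * gb / n
    return w, b

# feature 0 determines the label, feature 1 is noise: the penalty keeps
# the noise weight small while the informative weight grows
X = [(-1.0, 0.5), (-0.8, -0.3), (-0.5, 0.9), (-0.2, -0.7),
     (0.2, 0.6), (0.5, -0.5), (0.8, 0.1), (1.0, -0.9)]
y = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = fit_elastic_net_logreg(X, y)
```

In practice a library implementation would be used (e.g. scikit-learn's `LogisticRegression` with `penalty='elasticnet'` and the `saga` solver), with the penalty mixture and model complexity chosen by the cross-validation layers the abstract describes.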
Abstract:
This work deals with the instability of structures made of various materials. It captures and models different types of instabilities using numerical analysis, in particular the modified Riks method. Firstly, we consider bifurcation in anisotropic cylindrical shells subject to axial loading and internal pressure. The analysis of bifurcation and post-bifurcation of an inflated hyperelastic thick-walled cylinder is formulated using a numerical procedure based on the modified Riks method for an incompressible material with two preferred directions that are mechanically equivalent and symmetrically disposed.
Secondly, bulging/necking motion in doubly fiber-reinforced incompressible nonlinearly elastic cylindrical shells is captured, and we consider two cases for the nature of the anisotropy: (i) reinforcing models that have a particular influence on the shear response of the material and (ii) reinforcing models that depend only on the stretch in the fiber direction. The different instability motions are considered. Axial propagation of the bulging instability mode in thin-walled cylinders under inflation is analyzed. We present the analytical solution for this particular motion as well as for radial expansion during bulging evolution. For illustration, cylinders made of either isotropic or doubly fiber-reinforced incompressible nonlinearly elastic materials are considered. Finally, strain-softening constitutive models are considered to analyze two concrete structures: a reinforced concrete beam and an unreinforced notched beam. The bifurcation point is captured using the same Riks method used previously to analyze the bifurcation of a pressurized cylinder.
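For reference, the modified Riks (arc-length) method mentioned above augments the equilibrium equations with a path-following constraint. In a standard formulation (notation ours, not necessarily that of the thesis):

```latex
\mathbf{r}(\mathbf{u},\lambda) = \mathbf{f}_{\mathrm{int}}(\mathbf{u})
  - \lambda\,\mathbf{f}_{\mathrm{ext}} = \mathbf{0},
\qquad
\Delta\mathbf{u}^{\mathsf T}\Delta\mathbf{u}
  + \psi^{2}\,\Delta\lambda^{2}\,
    \mathbf{f}_{\mathrm{ext}}^{\mathsf T}\mathbf{f}_{\mathrm{ext}}
  = \Delta \ell^{2}
```

Here the load factor \(\lambda\) is treated as an unknown alongside the displacements \(\mathbf{u}\), \(\Delta\ell\) is the prescribed arc-length increment, and \(\psi\) weights the load term. Solving both equations simultaneously lets the solver traverse limit points where pure load or displacement control would fail, which is why the method can track the post-bifurcation bulging path.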
Abstract:
The racemization of amino acids in inter- and intra-crystalline proteins in Patella shells is analyzed, together with its use as a geochronological tool, mainly applied to archaeological sites. The inter- and intra-crystalline fractions of Patella vulgata limpets recovered from archaeological sites in Northern Spain (covering Neolithic, Mesolithic, Magdalenian, Solutrean, and Aurignacian periods) were examined for amino acid composition and racemisation over time. The calcitic apex and rim areas of the shells are probably composed of similar proteins, as the D/L values and amino acids were comparable and varied in the same way with increasing age; however, the mineral structures present in these areas differed. The aragonitic intermediate part of the shell showed a distinctly different amino acid composition and mineral structure. The main protein leaching from the inter-crystalline fraction occurred within the first 6000 yr after the death of the organism. In contrast, the intra-crystalline fraction, which has a different protein composition than the inter-crystalline fraction, appeared to behave as a closed system for at least 34 ka, as reflected by the lack of a significant decrease in the amino acid content; however, changes in the amino acid percentages occurred during this period. The concentration of aspartic acid remained almost constant with age in both inter- and intra-crystalline proteins, and its contribution to the total amino acid content increased with age at the expense of other amino acids such as glutamic acid, serine, glycine and alanine. Temperature is thought to play a key role in the amino acid racemisation of P. vulgata and could explain why, in the localities belonging to the Gravettian and Solutrean periods, which formed under relatively cold conditions, D/L values were similar to those detected in shells from sites formed during the Magdalenian.
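The age information carried by the D/L values comes from reversible first-order racemisation kinetics. In the standard textbook formulation (not taken from this paper), for an amino acid with a single chiral centre (equilibrium D/L = 1):

```latex
\ln\!\left[\frac{1+\mathrm{D/L}}{1-\mathrm{D/L}}\right]_{t}
  - \ln\!\left[\frac{1+\mathrm{D/L}}{1-\mathrm{D/L}}\right]_{t=0}
  = 2kt,
\qquad
k = A\,e^{-E_a/RT}
```

The Arrhenius dependence of the rate constant \(k\) on temperature \(T\) is what makes shells from colder periods racemise more slowly, consistent with the Gravettian/Solutrean observation above.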
Abstract:
This paper presents a Levy-type solution for the natural frequencies of translational shells. A FORTRAN IV computer program implementing this solution is described. The direct solution is compared with some indirect solutions based on the Galerkin and Rayleigh methods. An extension to the study of forced vibrations is outlined.
Abstract:
The frequency of explosions on buildings, whether accidental or intentional, is low, but their effects can be catastrophic. It is desirable to be able to predict, with sufficient accuracy, the consequences of these dynamic actions on civil buildings, among which frame-type reinforced concrete structures are a common typology. This doctoral thesis explores different practical options for the modeling and numerical computation of reinforced concrete structures subjected to explosions. Numerical finite element models with explicit time integration are employed, demonstrating their effective capacity to simulate the fast-dynamic and highly nonlinear physical and structural phenomena that occur, making it possible to predict the damage caused both by the explosion itself and by the possible progressive collapse of the structure. The work has been carried out with the commercial finite element code LS-DYNA (Hallquist, 2006), developing several types of calculation models that can be classified into two main types: 1) models based on continuum finite elements, in which the continuous medium is discretized directly by means of nodal displacement degrees of freedom; 2) models based on structural finite elements (beams and shells), which include kinematic hypotheses for one-dimensional and surface elements.
These models are developed and discussed at three levels: 1) material behaviour, 2) response of structural elements such as columns, beams and slabs, and 3) response of complete buildings or significant parts of them. Very detailed 3D continuum finite element models are developed, modeling mass concrete and reinforcing steel in a segregated manner. Concrete is represented with the CSCM constitutive model (Murray et al., 2007), which has an inelastic behaviour, with different tension and compression responses, hardening, cracking and compression damage, and failure. Steel is represented with a bilinear elastic-plastic model with failure. The actual geometry of the concrete is modeled with 3D continuum finite elements, and each reinforcing bar with beam-type finite elements, in its exact position within the concrete mass. The mesh of the model is generated by the superposition of the concrete continuum elements and the beam-type elements of the segregated reinforcement, which are made to follow the deformation of the solid at each point by means of a penalty algorithm, thereby reproducing the behaviour of reinforced concrete. In this work these models will be referred to simply as continuum FE models. With these continuum FE models, the response of construction elements (columns, slabs and frames) under explosive actions is analysed. They have also been compared with experimental results from tests on beams and slabs with various explosive charges, verifying an acceptable agreement and allowing a calibration of the calculation parameters. These detailed models are, however, not advisable for the analysis of complete buildings, as the high number of finite elements required raises their computational cost beyond the reach of current calculation resources.
In addition, structural finite element models (beams and shells) are developed which, with a reduced computational cost, are able to reproduce the global behaviour of the structure with similar accuracy. Mass concrete and reinforcing steel are again modeled in a segregated manner. Concrete is represented with the EC2 concrete constitutive model (Hallquist et al., 2013), which also presents an inelastic behaviour, with different tension and compression responses, hardening, compression and cracking damage, and failure, and is used in shell-type finite elements. Steel is represented once again with a bilinear elastic-plastic constitutive model with failure, using beam-type finite elements. An equivalent geometry of the concrete and the steel is modeled, taking into account the relative position of the steel within the concrete mass. The meshes of both sets of elements are joined at common nodes, producing a joint response. These models will be referred to simply as structural FE models. With these structural FE models, the same construction elements as with the continuum FE models are simulated, and by comparing their responses under explosive actions the structural FE models are calibrated, resulting in a similar structural behaviour at a reduced computational cost. It is verified that both the continuum FE models and the structural FE models are also accurate for the analysis of progressive collapse of a structure, and that they can be employed for the simultaneous study of explosion damage and the subsequent collapse. To this end, formulations are included to account for self-weight, live loads, and contact between parts of the structure. Both models are validated against a full-scale test in which a six-column, two-storey module collapses after the removal of one of its columns.
The computational cost of the continuum FE model for the simulation of this test is much higher than that of the structural FE model, making its application to full buildings unfeasible, while the structural FE model provides a sufficiently accurate global response at an affordable cost. Finally, the structural FE models are used to analyze explosions on multi-storey buildings, and two explosive-charge scenarios are simulated for a complete building at a moderate computational cost.
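The penalty coupling that ties the segregated rebar beams to the surrounding concrete can be pictured with a one-dimensional toy model: a beam node embedded at a fixed parametric coordinate of a solid element is pulled toward the position interpolated from the solid's nodes by a stiff penalty spring. This is only a conceptual sketch of the idea (LS-DYNA's constraint algorithms are far more elaborate); all names and values are ours.

```python
def shape(xi):
    """Linear shape functions of a 1D two-node solid element, xi in [0, 1]."""
    return (1.0 - xi, xi)

def penalty_step(x_solid, x_beam, v_beam, xi, k_p=100.0, dt=0.01, m=1.0):
    """One damped explicit step: push the beam node toward its embedding
    point in the solid element via a penalty force k_p * gap."""
    n0, n1 = shape(xi)
    target = n0 * x_solid[0] + n1 * x_solid[1]   # solid motion interpolated
    f = k_p * (target - x_beam)                  # penalty force on beam node
    # (an equal and opposite force, weighted by n0/n1, would act on the
    # solid nodes; the solid is held fixed in this toy example)
    v_beam = 0.9 * (v_beam + dt * f / m)         # crude numerical damping
    return x_beam + dt * v_beam, v_beam

# stretch the solid: its nodes move from (0, 1) to (0, 2); the beam node
# embedded at xi = 0.25 should be dragged to 0.25 * 2 = 0.5
x_beam, v_beam = 0.25, 0.0
for _ in range(2000):
    x_beam, v_beam = penalty_step((0.0, 2.0), x_beam, v_beam, xi=0.25)
```

The same principle, applied in 3D at many coupling points along each bar, is what lets the rebar mesh deform with the concrete without requiring matching meshes.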
Abstract:
A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, which may model some properties of evolving cell communities at the syntactical level. A NEP defines theoretical computing devices able to solve NP-complete problems efficiently in terms of time. In this model, cells are represented by words which encode their DNA sequences. Informally, at any moment of time, the evolutionary system is described by a collection of words, where each word represents one cell. Cells belong to species, and their community evolves according to mutations and division, which are defined by operations on words. Only those cells represented by a word in a given set of words, called the genotype space of the species, are accepted as surviving (correct) ones; this feature is analogous to the natural process of selection. Formally, a NEP is based on an architecture for parallel and distributed processing, in other words, a network of language processors. Since NEP was proposed in 2001, several extensions and variants have appeared, engendering a new family of models named Networks of Bio-inspired Processors (NBP). During this time, several works have proved the computational power of NBP; specifically, their efficiency, universality, and computational completeness have been thoroughly investigated. Therefore, we can say that the NEP model has reached maturity.
The main motivation for this End of Grade project (EOG project for short) is to propose a practical approach that closes the gap between the theoretical NEP model and a real implementation on high-performance computing platforms, in order to solve some of the high-complexity problems society faces today. Until now, the tools developed to simulate NEPs, while correct and successful, have usually been tightly coupled to their execution environment, relying on specific software frameworks (Hadoop) or direct hardware usage (GPUs). In this context, the main purpose of this work is the development of Nepfix, a generic and extensible tool that executes algorithms based on the NEP model and compatible variants, either locally, like a traditional application, or distributed in a cloud environment. Nepfix was developed over a 7-month cycle and is undergoing its second iteration, the prototype phase having ended. Nepfix is designed as a modular, self-contained application written in Java 8: no external dependencies are required and it does not rely on a specific execution environment; any JVM is a valid container. Nepfix consists of two components or modules. The first module corresponds to NEP execution, i.e. the simulator. During its development, the current state of the theoretical model was used as a reference, including the most common filters and processors. Additionally, extensibility is provided through Python as a scripting language to run custom logic without modifying Nepfix. Along with the simulator, a definition language for NEPs based on JSON has been specified, as well as a mechanism to represent and encode words, which is necessary for communication between servers. The NEP simulator is isolated from the distribution layer, so different applications can include it as a dependency; the distribution of NEPs is an example of this. The second module corresponds to executing Nepfix in the cloud.
The development involved a substantial R&D process, since this front had not been explored by other research groups until now. It is important to point out that the development of this module is not focused on results at this point in time; instead, we focus on the feasibility and exploration of this new perspective for executing natural computing systems, and NEPs specifically. The main property of cloud applications is that they are managed by the platform and encapsulated in a container. For Nepfix, a Spring application becomes the container, and the HTTP or AMQP protocols are used for communication with the rest of the instances. Two execution perspectives were studied, developed in different iterations. The asynchronous model treats the network components (processors and filters) as elements that react to the need to process a word; it allows transparent scaling and load balancing with cloud tools, but it introduces indeterminacy in the order of the results and cannot efficiently distribute strongly interconnected networks. The synchronous model follows a start-compute-synchronize cycle until the problem is solved; it faithfully represents the theoretical NEP model, but synchronization is costly and requires additional infrastructure, namely a RabbitMQ message queue server. For sufficiently large problems, however, its benefits outweigh the drawbacks, since distribution is immediate and unrestricted, although scaling is not trivial. In conclusion, we can consider Nepfix successful as a computational framework: cloud technology is ready for the challenge, and the first results confirm that the properties the Nepfix project pursued were met. Many research branches remain open for future work. In this EOG project, implementation guidelines are proposed for some of them, such as error recovery and dynamic NEP splitting. Other interesting problems that were outside the scope of this project were identified during development, such as word representation standardization and NEP model optimizations. As confirmation that the results of this work can be useful to the scientific community, a preliminary version of this project was published in the International Work-Conference on Artificial Neural Networks (IWANN) in May 2015, in "Advances in Computational Intelligence", volume 9094 of Springer's "Lecture Notes in Computer Science".
Development has not stopped since then, and while Nepfix in its current state cannot be considered a final product, the most relevant ideas, possible problems, and solutions produced during the seven-month development cycle are worth gathering and presenting, giving meaning to this EOG work.
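The evolutionary step the abstract describes, in which words mutate inside a processor and filters control which words travel between processors, can be reduced to a few lines. This is a deliberately minimal sketch of the model's flavour, not Nepfix's implementation; the rule and filter shapes are invented for the example.

```python
def evolve(words, rules):
    """One evolutionary step: apply every substitution rule (a -> b) at
    every matching position of every word, keeping the originals too."""
    out = set()
    for w in words:
        out.add(w)                       # unchanged word survives (simplification)
        for a, b in rules:
            for i, c in enumerate(w):
                if c == a:
                    out.add(w[:i] + b + w[i + 1:])
    return out

def communicate(node_words, out_filter, in_filter):
    """Communication step: words passing the sender's output filter leave,
    and the receiver keeps only those passing its input filter."""
    leaving = {w for w in node_words if out_filter(w)}
    return {w for w in leaving if in_filter(w)}

# two evolution steps turn 'aa' into 'bb' one point mutation at a time
words = evolve({'aa'}, [('a', 'b')])
words = evolve(words, [('a', 'b')])
```

A full NEP alternates these two steps across all nodes of the network until a word lands in a designated output node, which is where the parallelism exploited by the distributed implementations comes from.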
Abstract:
The mechanical behavior of living murine T-lymphocytes was assessed by atomic force microscopy (AFM). A robust experimental procedure was developed to overcome some features of lymphocytes, in particular their spherical shape and non-adherent character. The procedure included the immobilization of the lymphocytes on amine-functionalized substrates, the use of hydrodynamic effects on the deflection of the AFM cantilever to monitor the approach, and the use of jumping mode to obtain the images. Indentation curves were analyzed according to Hertz's model of contact mechanics. The calculated values of the elastic modulus are consistent both within the results obtained from a single lymphocyte and when comparing curves recorded from cells of different specimens.
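Hertz's model relates force and indentation depth; for a spherical tip of radius R indenting an elastic half-space by a depth delta, F = (4/3) · E/(1 − ν²) · √R · δ^(3/2). The sketch below inverts this relation on synthetic data to recover an elastic modulus, roughly what an AFM indentation analysis does. The values and the simple per-point averaging fit are illustrative, not the paper's procedure.

```python
import math

def hertz_force(delta, E, nu, R):
    """Hertz contact force for a sphere of radius R indenting an elastic
    half-space (Young's modulus E, Poisson ratio nu) by depth delta."""
    return (4.0 / 3.0) * (E / (1.0 - nu ** 2)) * math.sqrt(R) * delta ** 1.5

def fit_modulus(deltas, forces, nu, R):
    """Recover E by averaging per-point estimates from the inverted Hertz
    relation (a least-squares fit in the delta^(3/2) coordinate is similar)."""
    ests = [f * 3.0 * (1.0 - nu ** 2) / (4.0 * math.sqrt(R) * d ** 1.5)
            for d, f in zip(deltas, forces) if d > 0]
    return sum(ests) / len(ests)
```

For cells, ν is commonly taken as 0.5 (incompressible), and real curves also require locating the contact point and subtracting the cantilever deflection before fitting.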
Resumo:
The structure of complexes made from DNA and suitable lipids (lipoplex, Lx) was examined by cryo-electron microscopy (cryoEM). We observed a distinct concentric ring-like pattern with striated shells when using plasmid DNA. These spherical multilamellar particles have a mean diameter of 254 nm, with a repeat spacing of 7.5 nm and striations 5.3 nm wide. Small angle x-ray scattering revealed a repeat spacing of 6.9 nm, suggesting a lamellar structure containing at least 12 layers. This concentric and lamellar structure with different packing regimes also was observed by cryoEM when using linear double-stranded DNA, single-stranded DNA, and oligodeoxynucleotides. DNA chains could be visualized in DNA/lipid complexes. Such specific supramolecular organization is the result of thermodynamic forces, which cause compaction to occur through concentric winding of DNA in a liquid crystalline phase. CryoEM examination of T4 phage DNA packed either in T4 capsids or in lipidic particles showed similar patterns. Small angle x-ray scattering suggested a hexagonal phase in Lx-T4 DNA. Our results indicate that both lamellar and hexagonal phases may coexist in the same Lx preparation or particle and that transition between both phases may depend on equilibrium influenced by type and length of the DNA used.
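The arithmetic behind the numbers in the abstract above is straightforward: a first-order SAXS Bragg peak at scattering vector q gives a lamellar repeat d = 2π/q, and dividing the particle radius by the cryoEM repeat gives a rough upper bound on the number of concentric shells. The sketch below is illustrative only; the q value is back-computed from the reported 6.9 nm spacing.

```python
# Illustrative arithmetic for lamellar lipoplex particles (not the
# authors' analysis code; q is derived from the reported spacing).
import math

def lamellar_spacing(q_peak_inv_nm):
    """Lamellar repeat distance d = 2*pi/q from a first-order Bragg peak (nm)."""
    return 2.0 * math.pi / q_peak_inv_nm

def n_layers(diameter_nm, spacing_nm):
    """Rough number of concentric lamellae fitting in the particle radius."""
    return int((diameter_nm / 2.0) / spacing_nm)
```

With the reported 254 nm diameter and 7.5 nm cryoEM repeat, this geometric count gives 16 shells, consistent with (and above) the lower bound of 12 layers inferred from SAXS.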
Resumo:
Elastic fibers consist of two morphologically distinct components: elastin and 10-nm fibrillin-containing microfibrils. During development, the microfibrils form bundles that appear to act as a scaffold for the deposition, orientation, and assembly of tropoelastin monomers into an insoluble elastic fiber. Although microfibrils can assemble independently of elastin, tropoelastin monomers do not assemble without the presence of microfibrils. In the present study, immortalized ciliary body pigmented epithelial (PE) cells were investigated for their potential to serve as a cell culture model for elastic fiber assembly. Northern analysis showed that the PE cells express microfibril proteins but do not express tropoelastin. Immunofluorescence staining and electron microscopy confirmed that the microfibril proteins produced by the PE cells assemble into intact microfibrils. When the PE cells were transfected with a mammalian expression vector containing a bovine tropoelastin cDNA, the cells were found to express and secrete tropoelastin. Immunofluorescence and electron microscopic examination of the transfected PE cells showed the presence of elastic fibers in the matrix. Biochemical analysis of this matrix showed the presence of cross-links that are unique to mature insoluble elastin. Together, these results indicate that the PE cells provide a unique, stable in vitro system in which to study elastic fiber assembly.
Resumo:
Plants change size by deforming reversibly (elastically) whenever turgor pressure changes, and by growing. The elastic deformation is independent of growth because it occurs in nongrowing cells. Its occurrence with growth has prevented growth from being observed alone. We investigated whether the two processes could be separated in internode cells of Chara corallina Klein ex Willd., em. R.D.W. by injecting or removing cell solution with a pressure probe to change turgor while the cell length was continuously measured. Cell size changed immediately when turgor changed, and growth rates appeared to be altered. Low temperature eliminated growth but did not alter the elastic effects. This allowed elastic deformation measured at low temperature to be subtracted from elongation at warm temperature in the same cell. After the subtraction, growth alone could be observed for the first time. Alterations in turgor caused growth to change rapidly to a new, steady rate with no evidence of rapid adjustments in wall properties. This turgor response, together with the marked sensitivity of growth to temperature, suggested that the growth rate was not controlled by inert polymer extension but rather by biochemical reactions that include a turgor-sensitive step.
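The subtraction procedure described above amounts to a point-by-point difference between two elongation records from the same cell at matched turgor changes. This is a schematic restatement with invented numbers, not the authors' data processing.

```python
# Schematic of the elastic-subtraction procedure: the elongation record at
# low temperature (growth suppressed, elastic response intact) is subtracted
# from the record at warm temperature, leaving the growth component alone.

def growth_component(total_warm, elastic_cold):
    """Point-by-point difference of two elongation series (same units),
    sampled at matched turgor steps in the same cell."""
    return [t - e for t, e in zip(total_warm, elastic_cold)]
```

For example, if a turgor step elongates the cell by 5.0 µm at warm temperature but only 2.0 µm at low temperature, the inferred growth contribution is 3.0 µm (hypothetical values).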
Resumo:
We report absolute experimental integral cross sections (ICSs) for electron impact excitation of bands of electronic states in furfural, for incident electron energies in the range 20-250 eV. Wherever possible, those results are compared to corresponding excitation cross sections in the structurally similar species furan, as previously reported by da Costa et al. [Phys. Rev. A 85, 062706 (2012)] and Regeta and Allan [Phys. Rev. A 91, 012707 (2015)]. Generally, very good agreement is found. In addition, ICSs calculated with our independent atom model (IAM) with screening corrected additivity rule (SCAR) formalism, extended to account for interference (I) terms that arise due to the multi-centre nature of the scattering problem, are also reported. The sum of those ICSs gives the IAM-SCAR+I total cross section for electron-furfural scattering. Where possible, those calculated IAM-SCAR+I ICS results are compared against corresponding results from the present measurements with an acceptable level of accord being obtained. Similarly, but only for the band I and band II excited electronic states, we also present results from our Schwinger multichannel method with pseudopotentials calculations. Those results are found to be in good qualitative accord with the present experimental ICSs. Finally, with a view to assembling a complete cross section database for furfural, some binary-encounter-Bethe-level total ionization cross sections for this collision system are presented. (C) 2016 AIP Publishing LLC.
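The binary-encounter-Bethe (BEB) model mentioned at the end of the abstract has a closed per-orbital form, σ = S/(t+u+1) · [ (ln t / 2)(1 − 1/t²) + 1 − 1/t − ln t/(t+1) ], with t = T/B, u = U/B and S = 4πa₀²N(R/B)². The sketch below implements that standard formula; the orbital parameters (B, U, N) for furfural are not given in the abstract, so any values fed to it here are placeholders.

```python
# Standard BEB ionization cross section per molecular orbital (Kim-Rudd
# form); orbital parameters for furfural are NOT reproduced here.
import math

A0 = 0.529177e-10   # Bohr radius, m
RYD = 13.6057       # Rydberg energy, eV

def beb_orbital(T, B, U, N):
    """BEB cross section (m^2) of one orbital: T incident energy (eV),
    B binding energy (eV), U orbital kinetic energy (eV), N occupation."""
    if T <= B:
        return 0.0                      # below ionization threshold
    t, u = T / B, U / B
    S = 4.0 * math.pi * A0**2 * N * (RYD / B) ** 2
    return (S / (t + u + 1.0)) * (
        (math.log(t) / 2.0) * (1.0 - 1.0 / t**2)
        + 1.0 - 1.0 / t
        - math.log(t) / (t + 1.0)
    )

def beb_total(T, orbitals):
    """Total ionization cross section: sum over (B, U, N) orbital tuples."""
    return sum(beb_orbital(T, B, U, N) for B, U, N in orbitals)
```

The cross section vanishes at and below each orbital's binding energy and the total is simply the orbital sum, which is what makes the BEB model attractive for assembling a complete database.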
Resumo:
We investigate both experimentally and theoretically the evolution of conductance in metallic one-atom contacts under elastic deformation. While simple metals like Au exhibit almost constant conductance plateaus, Al and Pb show inclined plateaus with positive and negative slopes. It is shown how these behaviors can be understood in terms of the orbital structure of the atoms forming the contact. This analysis provides further insight into the issue of conductance quantization in metallic contacts revealing important aspects of their atomic and electronic structures.
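The conductance scale underlying the plateaus described above is the conductance quantum G₀ = 2e²/h, with the contact conductance given by the Landauer formula G = G₀·Στᵢ over the transmission of each channel; a Au one-atom contact has essentially one fully open channel (G ≈ G₀), while Al and Pb carry several partially open channels, which is what makes their plateaus slope. The snippet below only illustrates this textbook relation, not the paper's calculations.

```python
# Conductance quantum and the Landauer formula behind one-atom-contact
# plateaus (illustrative; channel transmissions below are hypothetical).
E = 1.602176634e-19   # elementary charge, C (exact SI value)
H = 6.62607015e-34    # Planck constant, J*s (exact SI value)

G0 = 2.0 * E**2 / H   # conductance quantum, ~77.5 microsiemens

def landauer_conductance(transmissions):
    """Contact conductance (S) from its channel transmissions (each 0..1)."""
    return G0 * sum(transmissions)
```

For instance, three partially open channels with transmissions 0.4, 0.3 and 0.3 (hypothetical, loosely in the spirit of an Al contact) again sum to about one G₀, even though no single channel is fully open.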
Resumo:
The San Julián’s stone is the main material used to build the most important historical buildings in Alicante city (Spain). This paper describes the analysis developed to obtain the relationship between the static and the dynamic modulus of this sedimentary rock heated at different temperatures. The rock specimens have been subjected to heating processes at different temperatures to produce different levels of weathering on 24 specimens. The static and dynamic moduli have been measured for every specimen by means of the ISRM standard and ultrasonic tests, respectively. Finally, two analytic formulas are proposed for the relationship between the static and the dynamic modulus for this stone. The results have been compared with some relationships proposed by different researchers for other types of rock. The expressions presented in this paper can be useful for the analysis, using non-destructive techniques, of the integrity level of historical constructions built with San Julián’s stone affected by fires.
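The dynamic modulus obtained from ultrasonic tests conventionally follows from the P-wave velocity via E_dyn = ρ·v_p²·(1+ν)(1−2ν)/(1−ν). The sketch below shows that standard relation only; the paper's two fitted static-dynamic formulas are not reproduced here, and the numerical values in the example are invented.

```python
# Standard ultrasonic relation for the dynamic Young's modulus
# (the paper's fitted static-dynamic correlations are NOT reproduced).

def dynamic_modulus(rho, vp, nu):
    """Dynamic Young's modulus (Pa) from bulk density rho (kg/m^3),
    P-wave velocity vp (m/s) and Poisson's ratio nu."""
    return rho * vp**2 * (1.0 + nu) * (1.0 - 2.0 * nu) / (1.0 - nu)
```

With hypothetical values ρ = 2300 kg/m³, v_p = 3000 m/s and ν = 0.25, this gives E_dyn ≈ 17.25 GPa; heating lowers v_p, which is why the dynamic modulus tracks the thermal weathering level non-destructively.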
Resumo:
This paper complements a previous one [1] about toluene adsorption on a commercial spherical activated carbon and on samples obtained from it by CO2 or steam activation. The present paper deals with the activation of a commercial spherical carbon (SC) having low porosity and high bed density (0.85 g/cm3) using the same procedure. Our results show that SC can be well activated with CO2 or steam. The increase in the burn-off percentage leads to an increase in the gravimetric adsorption capacity (more intensively for CO2) and a decrease in bed density (more intensively for CO2). However, for similar porosity developments similar bed densities are achieved for CO2 and steam. Special attention is paid to differences between both activating agents, comparing samples having similar or different activation rates, showing that CO2 generates narrower porosity and penetrates deeper into the spherical particles than steam. Steam activates more from the outside to the interior of the spheres and hence produces larger reductions in sphere size. With both activation agents and with a suitable combination of porosity development and bed density, quite high volumetric adsorption values of toluene (up to 236 g toluene/L) can be obtained even using a low toluene concentration (200 ppmv).
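The trade-off the abstract describes, gravimetric capacity rising with burn-off while bed density falls, comes together in the volumetric capacity: uptake per litre of packed bed is the gravimetric uptake times the bed density. The numbers in the example are illustrative back-calculations, not values taken from the paper.

```python
# Volumetric adsorption capacity of a packed carbon bed (illustrative;
# the gravimetric uptake used in the example is a hypothetical value).

def volumetric_capacity(gravimetric_g_per_g, bed_density_g_per_cm3):
    """Uptake per litre of bed (g/L): gravimetric uptake (g adsorbate per g
    carbon) times bed density (g/cm^3) times 1000 cm^3/L."""
    return gravimetric_g_per_g * bed_density_g_per_cm3 * 1000.0
```

For instance, a hypothetical uptake of 0.278 g toluene per g carbon at the original 0.85 g/cm³ bed density yields about 236 g/L, the order of the best value reported; over-activating raises the gravimetric term but erodes the density term, which is why an intermediate burn-off maximizes the product.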
Resumo:
Titanium dioxide nanoparticles prepared in situ by the sol–gel method were supported on a spherical activated carbon to prepare TiO2/AC hybrid photocatalysts for the oxidation of gaseous organic compounds. Additionally, a granular activated carbon was studied for comparison purposes. In both types of TiO2/AC composites, the effect of different variables (i.e., the thermal treatment conditions used during the preparation of these materials) and of the UV-light wavelength used during photocatalytic oxidation was analyzed. The prepared materials were characterized in depth (by gas adsorption, TGA, XRD, SEM and photocatalytic propene oxidation). The obtained results show that the carbon support has an important effect on the properties of the deposited TiO2 and, therefore, on the photocatalytic activity of the resulting TiO2/AC composites. Thus, the hybrid materials prepared on the spherical activated carbon show better results than those prepared on the granular one: good TiO2 coverage with high crystallinity of the deposited titanium dioxide, which needs only an air oxidation treatment at low-moderate temperature (350–375 °C) to present high photoactivity, without the need of additional inert-atmosphere treatments. Additionally, these materials are more active at 365 nm than at 257.7 nm UV radiation, opening the possibility of using solar light for this application.