9 results for Biological applications
at Universidad Politécnica de Madrid
Abstract:
The present study investigates the potential use of a non-catalyzed, water-soluble blocked polyurethane prepolymer (PUP) as a bifunctional cross-linker for collagenous scaffolds. The effect of concentration (5, 10, 15 and 20%), time (4, 6, 12 and 24 h), medium volume (50, 100, 200 and 300%) and pH (7.4, 8.2, 9 and 10) on the stability, microstructure and tensile mechanical behavior of the acellular pericardial matrix was studied. The cross-linking index increased up to 81% and the denaturation temperature increased by up to 12 °C after PUP cross-linking. The PUP-treated scaffold resisted collagenase degradation (0.167 ± 0.14 mmol/g of liberated amine groups vs. 598 ± 60 mmol/g for the non-cross-linked matrix). The collagen fiber network was coated with PUP, while the viscoelastic properties were altered after cross-linking. Treating the pericardial scaffold with PUP allows (i) different cross-linking densities depending on the process parameters and (ii) tensile properties similar to those obtained with the glutaraldehyde method.
Abstract:
Regionalization of natural flow regime types in the Ebro river basin and biological validation of the natural regime types.
Abstract:
The training algorithm studied in this paper is inspired by the biological metaplasticity property of neurons. Tested on different multidisciplinary applications, it achieves more efficient training and improves artificial neural network performance. The algorithm has recently been proposed for artificial neural networks in general, although a multilayer perceptron is used here for the purpose of discussing its biological plausibility. During the training phase, the artificial metaplasticity multilayer perceptron could be considered a new probabilistic version of the presynaptic rule, since the algorithm assigns higher values for updating the weights on less probable activations than on those with higher probability.
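As a rough illustration of this rule, the sketch below (not the authors' implementation; the inverse-density weighting and the Gaussian estimate of input probability are assumptions) scales each backpropagation update of a multilayer perceptron by the inverse of the estimated probability of the current input, so rare patterns drive larger weight changes:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def metaplasticity_factor(x):
    # Hypothetical weighting: inputs with low estimated density
    # (less probable activations) get larger updates.
    density = np.exp(-0.5 * np.sum(x ** 2))  # unnormalized Gaussian estimate
    return 1.0 / (density + 1e-3)            # guard against division by zero

def train_step(W1, W2, x, t, lr=0.01):
    # Forward pass of a one-hidden-layer perceptron.
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)
    # Standard backpropagation deltas.
    delta2 = (y - t) * y * (1 - y)
    delta1 = (W2.T @ delta2) * h * (1 - h)
    # Metaplasticity: scale the learning step by the inverse
    # estimated probability of the input pattern.
    m = metaplasticity_factor(x)
    W2 -= lr * m * np.outer(delta2, h)
    W1 -= lr * m * np.outer(delta1, x)
    return W1, W2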
Abstract:
A high-resolution focused beam line has recently been installed on the AIFIRA (“Applications Interdisciplinaires des Faisceaux d’Ions en Région Aquitaine”) facility at CENBG. This nanobeam line, based on a doublet–triplet configuration of Oxford Microbeam Ltd. OM-50™ quadrupoles, offers the opportunity to focus protons, deuterons and alpha particles in the MeV energy range to a sub-micrometer beam spot. The beam optics design has been studied in detail and optimized using detailed ray-tracing simulations, and the full mechanical design of the beam line was reported at the ICNMTA conference in Debrecen in 2008. During the last two years, the lenses have been carefully aligned and the target chamber has been fully equipped with particle and X-ray detectors, microscopes and precise positioning stages. The beam line is now operational and has been used for its first applications to ion beam analysis. Interestingly, this set-up turned out to be a very versatile tool for a wide range of applications. Indeed, even if it was not intended during the design phase, the ion optics configuration offers the opportunity to work either with a high-current microbeam (using the triplet only) or with a lower-current beam presenting a sub-micrometer resolution (using the doublet–triplet configuration). The performances of the CENBG nanobeam line are presented for both configurations. Quantitative data concerning the beam lateral resolutions at different beam currents are provided. Finally, the first results obtained for different types of application are shown, including nuclear reaction analysis at the micrometer scale and the first results on biological samples.
Abstract:
Machine learning techniques are used for extracting valuable knowledge from data. Nowadays, these techniques are becoming even more important due to the evolution in data acquisition and storage, which is leading to data with different characteristics that must be exploited. Therefore, advances in data collection must be accompanied by advances in machine learning techniques to solve the new challenges that might arise, in both academic and real applications. There are several machine learning techniques depending on both the data characteristics and the purpose. Unsupervised classification or clustering is one of the best-known techniques when data lack supervision (unlabeled data) and the aim is to discover data groups (clusters) according to their similarity. On the other hand, supervised classification needs data with supervision (labeled data) and its aim is to make predictions about the labels of new data. The presence of data labels is a very important characteristic that guides not only the learning task but also other related tasks such as validation. When only some of the available data are labeled whereas the others remain unlabeled (partially labeled data), neither clustering nor supervised classification can be used. This scenario, which is becoming common nowadays because of ignorance of the labels or the cost of the labeling process, is tackled with semi-supervised learning techniques. This thesis focuses on the branch of semi-supervised learning closest to clustering, i.e., discovering clusters using the available labels as support to guide and improve the clustering process.

Another important data characteristic, different from the presence of data labels, is the relevance of data features. Data are characterized by features, but it is possible that not all of them are relevant, or equally relevant, for the learning process. A recent clustering tendency, related to data relevance and called subspace clustering, claims that different clusters might be described by different feature subsets. This differs from traditional solutions to the data relevance problem, where a single feature subset (usually the complete set of original features) is found and used to perform the clustering process.

The proximity of this work to clustering leads to the first goal of this thesis. As commented above, clustering validation is a difficult task due to the absence of data labels. Although there are many indices that can be used to assess the quality of clustering solutions, these validations depend on the clustering algorithms and the data characteristics. Hence, in the first goal, three well-known clustering algorithms are used to cluster data with outliers and noise, in order to critically study how some of the best-known validation indices behave.

The main goal of this work is, however, to combine semi-supervised clustering with subspace clustering to obtain clustering solutions that can be correctly validated by using either known indices or expert opinions. Two algorithms are proposed, from different points of view, to discover clusters characterized by different subspaces. For the first algorithm, the available data labels are used to search for subspaces first, before searching for clusters. This algorithm assigns each instance to only one cluster (hard clustering) and is based on mapping the known labels to subspaces using supervised classification techniques. Subspaces are then used to find clusters using traditional clustering techniques.
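A minimal sketch of this first, hard-clustering proposal (the abstract names neither the supervised classifier nor the clustering algorithm; random forests, k-means, and the helper names below are assumptions):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

def label_guided_subspaces(X, y_partial, top_k=5):
    # For each known class, a one-vs-rest classifier's feature
    # importances pick the subspace that best separates that class;
    # -1 marks unlabeled instances.
    labeled = y_partial != -1
    subspaces = {}
    for c in np.unique(y_partial[labeled]):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[labeled], (y_partial[labeled] == c).astype(int))
        subspaces[c] = np.argsort(clf.feature_importances_)[-top_k:]
    return subspaces

def cluster_in_subspaces(X, subspaces, n_clusters):
    # Simplification: hard clustering with k-means on the union of the
    # per-class subspaces (the thesis targets per-cluster subspaces).
    feats = np.unique(np.concatenate(list(subspaces.values())))
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return km.fit_predict(X[:, feats])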
The second algorithm uses the available data labels to search for subspaces and clusters at the same time, in an iterative process. This algorithm assigns each instance to each cluster with a membership probability (soft clustering) and is based on integrating the known labels and the subspace search into a model-based clustering approach. The different proposals are tested using several real and synthetic databases, and comparisons to other methods are included where appropriate.

Finally, as an example of a real and current application, different machine learning techniques, including one of the proposals of this work (the most sophisticated one), are applied to one of the most challenging biological problems today: modeling the human brain. Specifically, expert neuroscientists do not agree on a neuron classification for the cerebral cortex, which makes impossible not only any modeling attempt but also day-to-day work, since there is no common way to name neurons. Machine learning techniques may therefore help to reach an accepted solution to this problem, which could be an important milestone for future research in neuroscience.
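A minimal sketch of the second, soft-clustering proposal, assuming a Gaussian mixture fitted by EM in which the responsibilities of labeled instances are clamped to their known class (the iterative subspace search is omitted for brevity):

import numpy as np
from scipy.stats import multivariate_normal

def semisupervised_gmm(X, y_partial, n_clusters, n_iter=50):
    # y_partial holds a class index for labeled rows and -1 otherwise.
    n, d = X.shape
    rng = np.random.default_rng(0)
    means = X[rng.choice(n, n_clusters, replace=False)]
    covs = np.array([np.cov(X.T) + 1e-6 * np.eye(d)] * n_clusters)
    weights = np.full(n_clusters, 1.0 / n_clusters)
    for _ in range(n_iter):
        # E-step: soft membership probabilities for every instance.
        resp = np.array([w * multivariate_normal.pdf(X, m, c)
                         for w, m, c in zip(weights, means, covs)]).T
        resp /= resp.sum(axis=1, keepdims=True)
        # Known labels are integrated by clamping the responsibilities
        # of labeled instances to their class.
        for k in range(n_clusters):
            resp[y_partial == k] = np.eye(n_clusters)[k]
        # M-step: re-estimate the mixture parameters.
        Nk = resp.sum(axis=0)
        weights = Nk / n
        means = (resp.T @ X) / Nk[:, None]
        for k in range(n_clusters):
            diff = X - means[k]
            covs[k] = (resp[:, k, None] * diff).T @ diff / Nk[k] \
                      + 1e-6 * np.eye(d)
    return resp  # soft cluster memberships, one row per instance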
Abstract:
Nanoinformatics has recently emerged to address the need for computing applications at the nano level. In this regard, the authors have participated in various initiatives to identify its concepts, foundations and challenges. While nanomaterials open up the possibility of developing new devices in many industrial and scientific areas, they also offer breakthrough perspectives for the prevention, diagnosis and treatment of diseases. In this paper, we analyze the different aspects of nanoinformatics and suggest five research topics to help catalyze new research and development in the area, particularly focused on nanomedicine. We also encompass the use of informatics to further the biological and clinical applications of basic research in nanoscience and nanotechnology, and the related concept of an extended 'nanotype' to coalesce information related to nanoparticles. We suggest how nanoinformatics could accelerate developments in nanomedicine, similarly to what happened with the Human Genome and other -omics projects, on issues like exchanging modeling and simulation methods and tools, linking toxicity information to clinical and personal databases, or developing new approaches for scientific ontologies, among many others.
Abstract:
In this paper we propose the use of Networks of Bio-inspired Processors (NBP) to model some biological phenomena within a computational framework. In particular, we propose the use of an extension of NBP named Network Evolutionary Processors Transducers to simulate chemical transformations of substances. Within a biological process, chemical transformations of substances are basic operations in the change of the state of the cell. Previously, it has been proved that NBP are computationally complete, that is, they are able to solve NP-complete problems in linear time using massively parallel computations. In addition, we propose a multilayer architecture that will allow us to design models of biological processes related to cellular communication, as well as their implications in the metabolic pathways. Subsequently, these models can be applied not only to biological-cellular instances but, possibly, also to configure instances of interactive processes in many other fields like population interactions, ecological trophic networks, industrial ecosystems, etc.
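To make the flavor of the model concrete, the toy sketch below (not the formal definition of Network Evolutionary Processors Transducers; filters are simplified to sets of symbols a string must contain) moves strings, standing for substances, through a two-processor network whose substitution rules play the role of chemical transformations:

nodes = {
    "P1": {"strings": {"aab"}, "rules": [("a", "b")],
           "out": {"b"}, "inp": set()},
    "P2": {"strings": set(), "rules": [("b", "c")],
           "out": {"c"}, "inp": {"b"}},
}
edges = [("P1", "P2"), ("P2", "P1")]

def evolve(node):
    # Evolutionary step: apply each substitution rule once to each
    # string, keeping the originals (parallel rewriting semantics).
    new = set(node["strings"])
    for s in node["strings"]:
        for lhs, rhs in node["rules"]:
            if lhs in s:
                new.add(s.replace(lhs, rhs, 1))
    node["strings"] = new

def communicate(nodes, edges):
    # Communication step: strings that pass a node's output filter
    # move to neighbours whose input filter they also pass.
    for a, b in edges:
        moving = {s for s in nodes[a]["strings"]
                  if nodes[a]["out"] <= set(s) and nodes[b]["inp"] <= set(s)}
        nodes[a]["strings"] -= moving
        nodes[b]["strings"] |= moving

for _ in range(4):
    for node in nodes.values():
        evolve(node)
    communicate(nodes, edges)
print(nodes["P1"]["strings"], nodes["P2"]["strings"])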
Abstract:
Recently, a novel method to trap and pattern ensembles of nanoparticles has been proposed and tested. It relies on the photovoltaic (PV) properties of certain ferroelectric crystals such as LiNbO3 [1,2]. These crystals, when suitably doped, develop very high electric fields in response to illumination with light of a suitable wavelength. The PV effect lies in the asymmetrical excitation of electrons, giving rise to PV currents and associated space-charge fields (photorefractive effect). The field generated in the bulk of the sample propagates to the surrounding medium as evanescent fields. When dielectric or metal nanoparticles are deposited on the surface of the sample, the evanescent fields give rise to either electrophoretic or dielectrophoretic forces, depending on the charge state of the particles, which induce the trapping and patterning effects [3,4]. The purpose of this work has been to explore the effects of such PV fields in the biology and biomedical areas. A first work showed the necrotic effects induced by such fields on HeLa tumour cells grown on the surface of an illuminated iron-doped LiNbO3 crystal [5]. In principle, LiNbO3 nanoparticles may be advantageously used for such biomedical purposes, considering the possibility of such nanoparticles being incorporated into the cells. Previous experiments using microparticles have been performed [5], with results similar to those achieved with the substrate. Therefore, the purpose of this work has been to fabricate and characterize LiNbO3 nanoparticles and assess their necrotic effects when they are incorporated into a culture of tumour cells. Two different preparation methods have been used: 1) mechanical grinding of crystals, and 2) bottom-up sol-gel chemical synthesis from metal-ethoxide precursors. This latter method leads to a more uniform size distribution of smaller particles (down to around 50 nm). Figs. 1(a) and 1(b) show SEM images of the nanoparticles obtained with both methods. Ad hoc software taking into account the physical properties of the crystal, particularly the donor and acceptor concentrations, has been developed in order to estimate the electric field generated in nanoparticles. In a first stage, Monte Carlo simulations of the electric current of nanoparticles in a conductive medium due to the PV effect have been carried out using the Kukhtarev one-centre transport model equations [6]. Special attention has been paid to the dependence on particle size and on the [Fe2+]/[Fe3+] ratio. First results on cubic particles show a large dispersion for small sizes due to the random number of donors and their effective concentration (Fig. 2). The necrotic (toxicity) effect of nanoparticles incorporated into a tumour cell culture subjected to 30 min of illumination with a blue LED is shown in Fig. 3. For each type of nanoparticle, the percentage of cell survival under dark and illumination conditions has been plotted as a function of the particle dilution factor. Fig. 3a corresponds to mechanically ground particles, whereas Figs. 3b and 3c refer to chemically synthesized particles with two oxidation states. The light effect is larger with mechanically ground nanoparticles, but their dark toxicity is also higher. For chemically synthesized nanoparticles, dark toxicity is low, but the light effect is appreciable only in oxidized samples, where the PV effect is known to be larger.
These preliminary results demonstrate that Fe:LiNbO3 nanoparticles have a biologically damaging effect on cells, although there are many points that should be clarified and much room for PV nanoparticle optimization. In particular, it appears necessary to determine the fraction of nanoparticles that become incorporated into the cells and the possible existence of threshold size effects. This work has been supported by MINECO under grant MAT2011-28379-C03.
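The statistics behind the reported size dependence can be illustrated without the full one-centre model: in the toy sketch below (all numbers are assumptions, not values from the paper), the number of donors in a cubic particle of side L is Poisson-distributed, so the relative spread of the effective Fe2+ concentration grows roughly as 1/sqrt(N) as the particle shrinks:

import numpy as np

rng = np.random.default_rng(0)
c_donor = 1e25                            # assumed donor concentration, m^-3
for L in (50e-9, 100e-9, 500e-9):         # particle side lengths
    mean_N = c_donor * L ** 3             # expected donors per particle
    N = rng.poisson(mean_N, size=10_000)  # random donor counts
    c_eff = N / L ** 3                    # effective concentrations
    print(f"L = {L * 1e9:4.0f} nm: relative spread of [Fe2+] = "
          f"{c_eff.std() / c_eff.mean():.3f}")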
Abstract:
In a large number of physical, biological and environmental processes, interfaces with highly irregular geometry appear, separating media (phases) in which the heterogeneity of constituents is present. In this work the interplay between irregular structures and surrounding heterogeneous distributions in the plane is quantified. For a geometric set F and a mass distribution (measure) μ supported in Ω, with F ⊂ Ω, the mass μ(F(ε)) of the ε-neighborhood of F gives account of the interplay between the geometric structure and the surrounding distribution. A computation method is developed for the estimation and corresponding scaling analysis of μ(F(ε)), F being a fractal plane set of Minkowski dimension D and μ a multifractal measure produced by random multiplicative cascades. The method is applied to natural and mathematical fractal structures in order to study the influence of both the irregularity of the geometric structure and the heterogeneity of the distribution on the scaling of μ(F(ε)). Applications to the analysis and modeling of the interplay of phases in environmental scenarios are given.
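The symbols F, μ and ε above are reconstructions, since the original notation was lost in extraction. A minimal sketch of the kind of computation described, assuming a random multiplicative cascade for the measure μ and a self-similar plane set for F:

import numpy as np
from scipy.ndimage import binary_dilation

rng = np.random.default_rng(0)

def cascade(levels):
    # Random multiplicative cascade on a 2^levels grid: at each
    # refinement, a cell's mass is split among its four children
    # with random, normalized weights.
    mu = np.ones((1, 1))
    for _ in range(levels):
        w = rng.random((mu.shape[0], 2, mu.shape[1], 2))
        w /= w.sum(axis=(1, 3), keepdims=True)
        mu = (mu[:, None, :, None] * w).reshape(mu.shape[0] * 2,
                                                mu.shape[1] * 2)
    return mu

levels = 9
mu = cascade(levels)                       # heterogeneous distribution
i, j = np.indices((2 ** levels, 2 ** levels))
F = (i & j) == 0                           # a self-similar plane set
eps = np.array([1, 2, 4, 8, 16, 32])       # neighborhood radii (pixels)
mass = [mu[binary_dilation(F, iterations=int(e))].sum() for e in eps]
slope = np.polyfit(np.log(eps), np.log(mass), 1)[0]
print(f"scaling exponent of mu(F(eps)): {slope:.3f}")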