531 results for Discontinuity


Relevance:

10.00%

Publisher:

Abstract:

The Mohorovičić discontinuity, better known simply as the "Moho", is the surface separating the less dense rocks of the crust from the denser rocks of the mantle, these layers being assumed to have constant densities of about 2.67 and 3.27 g/cm³ respectively, and it is a basic contour for any geophysical study of the Earth's crust. Seismic and gravimetric studies show that the Moho depth is of the order of 30-40 km beneath the Iberian Peninsula and 5-15 km under the marine areas, and the different existing techniques correlate well in their results. Assuming that the gravity field of the Iberian Peninsula (as happens for about 90% of the Earth) is isostatically compensated by a variable Moho depth, assuming a constant density contrast between crust and mantle, and following the isostatic model of Vening Meinesz (1931), the inverse isostatic problem is formulated to obtain that depth from the Bouguer gravity anomaly computed from the gravity observed at the Earth's surface. The particular feature of this model is the regional isostatic compensation from which the theory starts, which matches reality more closely than other existing models, such as Airy-Heiskanen (pure local compensation), historically the one most used in similar work. In addition, its solution is related to the global gravity field of the whole Earth, so current gravitational models, most of them derived from satellite observations, should be important sources of information for the solution. The aim of this thesis is to study this method in detail; it was developed by Helmut Moritz in 1990, has seen little further development or adoption since then, and has never been put into practice in the Iberian Peninsula. After treating its theory, development and computational aspects, a digital Moho model can be obtained for this area and used to study the distribution of masses beneath the Earth's surface. A comparison is then made with Moho depths obtained by alternative methods. Neither method is extremely accurate (about ±5 km); nevertheless, zones where the data disagree significantly would indicate uncompensated areas, with possible tectonic movements or a high degree of seismic risk, which gives this study an added value that could also be useful in initially unrelated fields, such as density discrepancies or natural-disaster contingency plans.
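
As a rough orientation on the inverse isostatic idea (not the regional Vening Meinesz formulation used in the thesis, which smooths the compensation over a region), the hedged sketch below inverts a Bouguer anomaly into a first-order, purely local (Airy-type) Moho undulation using the infinite-slab formula and the crust-mantle density contrast quoted above (3.27 − 2.67 = 0.60 g/cm³). The "normal" crustal thickness of 30 km and the sample anomalies are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch: zeroth-order (local, Airy-type) Moho undulation from a
# Bouguer anomaly via the Bouguer slab formula dg = 2*pi*G*drho*t.
# The thesis applies Moritz's regional (Vening Meinesz) inversion instead.

import math

G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
DRHO = 3270.0 - 2670.0    # crust-mantle density contrast, kg/m^3
T0_KM = 30.0              # assumed "normal" crustal thickness, km (illustrative)

def moho_depth_km(bouguer_anomaly_mgal: float) -> float:
    """Local-compensation Moho depth from a Bouguer anomaly (mGal).

    A negative Bouguer anomaly (thick crust) deepens the Moho below T0;
    a positive one (thin crust) makes it shallower.
    """
    dg = bouguer_anomaly_mgal * 1e-5                  # mGal -> m/s^2
    undulation_m = -dg / (2.0 * math.pi * G * DRHO)
    return T0_KM + undulation_m / 1000.0

if __name__ == "__main__":
    for dg_mgal in (-120.0, -60.0, 0.0, +80.0):
        print(f"Bouguer anomaly {dg_mgal:+7.1f} mGal -> Moho ~ {moho_depth_km(dg_mgal):5.1f} km")
```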

Relevance:

10.00%

Publisher:

Abstract:

In this chapter, we are going to describe the main features as well as the basic steps of the Boundary Element Method (BEM) as applied to elastostatic problems and to compare them with other numerical procedures. As we shall show, it is easy to appreciate the advantages of the BEM, but it is also advisable to refrain from a possible unrestrained enthusiasm, as there are also limitations to its usefulness in certain types of problems. The number of these problems, nevertheless, is sufficient to justify the interest and activity that the new procedure has aroused among researchers all over the world. Briefly speaking, the most frequently used version of the BEM as applied to elastostatics works with the fundamental solution, i.e. the singular solution of the governing equations, as an influence function, and tries to satisfy the boundary conditions of the problem with the aid of a discretization scheme which consists exclusively of boundary elements. As in other numerical methods, the BEM was developed thanks to the computational possibilities offered by modern computers on a totally "classical" basis; that is, the theoretical grounds are based on linear elasticity theory, incorporated long ago into the curricula of most engineering schools. Its delay in gaining popularity is probably due to the enormous momentum with which the Finite Element Method (FEM) penetrated the professional and academic communities. Nevertheless, the fact that these methods were developed before the BEM has been beneficial, because the BEM successfully uses the results and techniques studied in past decades. Some authors even consider the BEM as a particular case of the FEM, while others view both methods as special cases of the general weighted residual technique. The first paper usually cited in connection with the BEM as applied to elastostatics is that of Rizzo, even though the works of Jaswon et al., Massonet and Oliveira were published at about the same time, the reason probably being the attractiveness of the "direct" approach over the "indirect" one. The work of Rizzo and the subsequent work of Cruse initiated a fruitful period with applications of the direct BEM to problems of elastostatics, elastodynamics, fracture, etc. The next key contribution was that of Lachat and Watson, incorporating all the FEM discretization philosophy in what is sometimes called the "second BEM generation". This has, no doubt, led directly to the current developments. Among the various researchers who worked on elastostatics by employing the direct BEM, one can additionally mention Rizzo and Shippy, Cruse et al., Lachat and Watson, Alarcón et al., Brebbia et al., Howell and Doyle, Kuhn and Möhrmann, and Patterson and Sheikh; among those who used the indirect BEM, one can additionally mention Benjumea and Sikarskie, Butterfield, Banerjee et al., Niwa et al., and Altiero and Gavazza. An interesting version of the indirect method, called the Displacement Discontinuity Method (DDM), has been developed by Crouch. A comprehensive study of various special aspects of the elastostatic BEM has been done by Heisse, while review-type articles on the subject have been reported by Watson and Hartmann. At the present time, the method is well established and is being used for the solution of a variety of problems in engineering mechanics. Numerous introductory and advanced books have been published, as well as research-orientated ones.
In this sense, it is worth noting the series of conferences promoted by Brebbia since 1978, which have stimulated a continuous research effort all over the world in relation to the BEM. In the following sections, we shall concentrate on developing the direct BEM as applied to elastostatics.
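
For reference, the boundary integral identity on which the direct formulation rests can be written in its standard textbook form (assuming zero body forces, with U_ij and T_ij the displacement and traction kernels of Kelvin's fundamental solution); this is a sketch of the identity, not the chapter's own derivation:

```latex
% Somigliana's identity for an interior point \xi, and its limit to the
% boundary, which is the equation discretized by the direct BEM:
u_i(\xi) = \int_{\Gamma} U_{ij}(\xi,x)\, t_j(x)\, d\Gamma(x)
         - \int_{\Gamma} T_{ij}(\xi,x)\, u_j(x)\, d\Gamma(x),
\qquad \xi \in \Omega,
\\[4pt]
c_{ij}(\xi)\, u_j(\xi) + \int_{\Gamma} T_{ij}(\xi,x)\, u_j(x)\, d\Gamma(x)
  = \int_{\Gamma} U_{ij}(\xi,x)\, t_j(x)\, d\Gamma(x),
\qquad \xi \in \Gamma,
```

where c_ij(ξ) = δ_ij/2 at a smooth boundary point and the second boundary integral is understood in the Cauchy principal value sense.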

Relevance:

10.00%

Publisher:

Abstract:

This paper presents solutions of the NURISP VVER lattice benchmark using APOLLO2, TRIPOLI4 and COBAYA3 pin-by-pin. The main objective is to validate MOC-based calculation schemes for pin-by-pin cross-section generation with APOLLO2 against TRIPOLI4 reference results. A specific objective is to test the APOLLO2-generated cross sections and interface discontinuity factors in COBAYA3 pin-by-pin calculations with unstructured mesh. The VVER-1000 core consists of large hexagonal assemblies with 2 mm inter-assembly water gaps, which require the use of unstructured meshes in the pin-by-pin core simulators. The 2D benchmark problems considered include 19-pin clusters, fuel assemblies and 7-assembly clusters. APOLLO2 calculation schemes with the step characteristic method (MOC) and the higher-order Linear Surface MOC have been tested. The comparison of APOLLO2 vs. TRIPOLI4 results shows very close agreement. The 3D lattice solver in COBAYA3 uses a transport-corrected multi-group diffusion approximation with interface discontinuity factors of GET or Black Box Homogenization type. The COBAYA3 pin-by-pin results in 2, 4 and 8 energy groups are close to the reference solutions when using side-dependent interface discontinuity factors.
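
As an illustration of side-dependent interface discontinuity factors in the GET sense, they are usually defined as the ratio of the heterogeneous (transport) surface-averaged flux to the homogeneous (diffusion) surface-averaged flux on each face of the homogenized node. The hedged sketch below just evaluates that ratio; the numbers are invented and none of the names belong to the APOLLO2 or COBAYA3 interfaces.

```python
# Hedged sketch: side-dependent interface discontinuity factors (IDFs),
# f_side = phi_het_side / phi_hom_side, in the Generalized Equivalence
# Theory sense. Illustrative values only; not APOLLO2/COBAYA3 API.

from typing import Dict

def interface_discontinuity_factors(phi_het: Dict[str, float],
                                    phi_hom: Dict[str, float]) -> Dict[str, float]:
    """Return one discontinuity factor per face of a homogenized node."""
    return {side: phi_het[side] / phi_hom[side] for side in phi_het}

if __name__ == "__main__":
    # Surface-averaged fluxes on the six faces of a hexagonal node (arbitrary units).
    phi_het = {"NE": 1.02, "E": 0.98, "SE": 0.95, "SW": 0.97, "W": 1.01, "NW": 1.04}
    phi_hom = {"NE": 1.00, "E": 1.00, "SE": 1.00, "SW": 1.00, "W": 1.00, "NW": 1.00}
    for side, f in interface_discontinuity_factors(phi_het, phi_hom).items():
        print(f"face {side}: f = {f:.3f}")
```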

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a numerical implementation of the cohesive crack model for the analysis of quasibrittle materials, based on the strong discontinuity approach in the framework of the finite element method. A simple central force model is used for the stress versus crack opening curve. The additional degrees of freedom defining the crack opening are determined at the crack level, thus avoiding the need to perform a static condensation at the element level. The need for a tracking algorithm is avoided by using a consistent procedure for the selection of the separated nodes. The model is implemented in a commercial program by means of a user subroutine and contrasted with experimental results; it also takes into account the anisotropy of the material. Numerical simulations of well-known experiments are presented to show the ability of the proposed model to simulate the fracture of quasibrittle materials such as mortar, concrete and masonry.
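
A hedged sketch of one common reading of a central-force cohesive law follows: the cohesive traction vector is taken parallel to the crack-opening vector, with its modulus given by a softening curve of the effective opening. The linear softening curve and the strength/fracture-energy values below are illustrative assumptions, not necessarily the curve or parameters used in the paper.

```python
# Hedged sketch of a central-force cohesive law: traction colinear with the
# crack-opening vector, modulus from a softening curve t(w). Linear softening
# and the material parameters are assumed here for illustration only.

import numpy as np

F_T = 3.0e6               # tensile strength, Pa (illustrative)
G_F = 100.0               # fracture energy, J/m^2 (illustrative)
W_C = 2.0 * G_F / F_T     # critical opening of the linear softening curve, m

def softening(w_eff: float) -> float:
    """Cohesive stress vs. effective opening (linear softening)."""
    return F_T * max(0.0, 1.0 - w_eff / W_C)

def cohesive_traction(w_vec: np.ndarray) -> np.ndarray:
    """Central-force model: traction parallel to the opening vector."""
    w_eff = float(np.linalg.norm(w_vec))
    if w_eff == 0.0:
        return np.zeros_like(w_vec)
    return softening(w_eff) * w_vec / w_eff

if __name__ == "__main__":
    for w in ([1e-6, 0.0], [2e-5, 1e-5], [1e-4, 0.0]):
        t = cohesive_traction(np.array(w))
        print(f"opening {w} m -> traction {t} Pa")
```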

Relevance:

10.00%

Publisher:

Abstract:

This paper studies the cracking of concrete induced by corrosion of the reinforcement in reinforced concrete. A comprehensive model for chloride ingress into concrete is presented, with special attention to non-linear diffusion coefficients, chloride binding isotherms and convection phenomena. Based on the results of chloride penetration, the onset of active corrosion of the reinforcement is established, together with the consequent radial expansion of the corroded bar. Concrete cracking is studied with an embedded crack model, applying the Strong Discontinuity Approach. Both models (the initiation and propagation stages of corrosion) are incorporated into the same finite element program and chained. Comparisons with experimental results are carried out, and reasonably good agreement is obtained, especially for the cracking patterns. The main limitations concern the difficulty of establishing precise values of basic data such as the chloride ion content at the concrete surface, the chloride threshold concentration that triggers active corrosion, the rate of oxide production, or the mechanical properties of the rust.
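
For orientation, the constant-diffusivity limit of chloride ingress (Fick's second law with constant surface concentration) has the classical erfc profile often used to estimate the corrosion initiation time; the paper's nonlinear model refines this. The hedged sketch below uses that textbook solution with purely illustrative values for surface concentration, threshold, diffusivity and cover depth.

```python
# Hedged sketch: textbook erfc solution of Fick's 2nd law with constant D and
# constant surface concentration, C(x,t) = Cs * erfc(x / (2*sqrt(D*t))),
# used only to illustrate the corrosion-initiation estimate. The paper's
# model is nonlinear (variable D, binding isotherms, convection).

import math

CS = 0.6          # surface chloride content, % by binder weight (illustrative)
CCRIT = 0.4       # assumed chloride threshold for depassivation (illustrative)
D = 5.0e-12       # apparent diffusion coefficient, m^2/s (illustrative)
COVER = 0.04      # concrete cover, m

def chloride(x_m: float, t_s: float) -> float:
    return CS * math.erfc(x_m / (2.0 * math.sqrt(D * t_s)))

def initiation_time_years() -> float:
    """Bisection for the time at which C(cover, t) reaches CCRIT."""
    lo, hi = 1.0, 3.2e9          # seconds (up to ~100 years)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if chloride(COVER, mid) < CCRIT:
            lo = mid
        else:
            hi = mid
    return hi / 3.15576e7        # seconds -> years

if __name__ == "__main__":
    print(f"estimated initiation time ~ {initiation_time_years():.1f} years")
```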

Relevance:

10.00%

Publisher:

Abstract:

Multigroup diffusion codes for three-dimensional LWR core analysis use as input data pre-generated homogenized few-group cross sections and discontinuity factors for certain combinations of state variables, such as temperatures or densities. The simplest way of compiling those data is in tabulated libraries, where a grid covering the domain of the state variables is defined and the homogenized cross sections are computed at the grid points. Then, during the core calculation, an interpolation algorithm is used to compute the cross sections from the table values. Since interpolation errors depend on the distance between grid points, a certain refinement of the mesh is required to reach a target accuracy, which can lead to a large data storage volume and a large number of lattice transport calculations. In this paper, a simple and effective procedure to optimize the distribution of grid points for tabulated libraries is presented. Optimality is considered in the sense of building a non-uniform point distribution with the minimum number of grid points for each state variable that satisfies a given target accuracy in k-effective. The procedure consists of determining the sensitivity coefficients of k-effective to the cross sections using perturbation theory, and estimating the interpolation errors committed with different mesh steps for each state variable. These results allow the influence of the interpolation errors of each cross section on k-effective to be evaluated for any combination of state variables, and the optimal distance between grid points to be estimated.
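
A hedged sketch of the two ingredients just described: (i) the standard bound for the linear-interpolation error of a cross section over a grid step h, roughly (h²/8)·max|∂²Σ/∂p²|, and (ii) first-order propagation of that error to k-effective through a sensitivity coefficient, inverted here to pick the largest step meeting a target reactivity accuracy. The quadratic error model, names and numbers are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: choose the largest grid step h for one state variable so that
# the linear-interpolation error in a cross section, |e_xs| ~ (h^2/8)*|d2XS/dp2|,
# propagated to k-eff through a first-order sensitivity S = (dk/k)/(dXS/XS),
# stays within a target accuracy. Illustrative values and error model.

import math

def max_step(sensitivity: float,        # S = (dk/k)/(dXS/XS), dimensionless
             xs_value: float,           # nominal cross section (1/cm)
             d2xs_dp2: float,           # curvature of XS w.r.t. the state variable
             target_dk_over_k: float) -> float:
    """Largest grid step h with |S| * (h^2/8)*|d2XS/dp2| / XS <= target."""
    allowed_dxs = target_dk_over_k * xs_value / abs(sensitivity)
    return math.sqrt(8.0 * allowed_dxs / abs(d2xs_dp2))

if __name__ == "__main__":
    # Example: fuel-temperature dependence of an absorption XS (made-up numbers).
    h = max_step(sensitivity=-0.15,          # 1% XS error -> about -15 pcm
                 xs_value=1.0e-2,            # 1/cm
                 d2xs_dp2=2.0e-9,            # (1/cm)/K^2
                 target_dk_over_k=50e-5)     # 50 pcm target
    print(f"maximum fuel-temperature grid step ~ {h:.0f} K")
```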

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a gravimetric study (based on 382 gravimetric stations in an area of about 32 km²) of a nearly flat basin: the Low Andarax valley. This alluvial basin, close to the river mouth, is located in the extreme south of the province of Almería and coincides with one of the existing depressions in the Betic Cordillera. The paper presents new methodological work to adapt a published inversion approach (the GROWTH method) to the case of an alluvial valley (sedimentary stratification, with density increasing downward). The adjusted 3D density model reveals several features in the topography of the discontinuity surfaces between the calcareous basement (2,700 kg/m³) and two sedimentary layers (2,400 and 2,250 kg/m³). We interpret several low-density alignments as corresponding to SE faults striking about N140–145°E. Some of the detected basement elevations (such as the one in Viator village, previously known from boreholes) are apparently connected with the fault pattern. The outcomes of this work are: (1) new gravimetric data, (2) new methodological options, and (3) the resulting structural conclusions.
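
As a minimal illustration of the forward gravity modelling that underlies this kind of inversion (GROWTH aggregates many prismatic cells; this is not its algorithm), the hedged sketch below evaluates the closed-form vertical attraction of a buried sphere of anomalous density, just to get a feel for anomaly amplitudes over light sediments in a denser basement. Depth, radius and the density values reused from the abstract are illustrative.

```python
# Hedged sketch: vertical gravity anomaly of a buried sphere of anomalous
# density, g_z(x) = G * dM * z / (x^2 + z^2)^(3/2), dM = (4/3)*pi*R^3*drho.
# Textbook forward model only, not the GROWTH inversion itself.

import math

G = 6.674e-11   # m^3 kg^-1 s^-2

def sphere_anomaly_mgal(x_m: float, depth_m: float, radius_m: float,
                        drho: float) -> float:
    dm = (4.0 / 3.0) * math.pi * radius_m**3 * drho
    gz = G * dm * depth_m / (x_m**2 + depth_m**2) ** 1.5
    return gz * 1e5   # m/s^2 -> mGal

if __name__ == "__main__":
    # A pocket of light sediments (2,250 kg/m^3) within a 2,700 kg/m^3 basement.
    drho = 2250.0 - 2700.0
    for x in (0.0, 500.0, 1000.0, 2000.0):
        print(f"x = {x:6.0f} m -> {sphere_anomaly_mgal(x, 600.0, 400.0, drho):+.2f} mGal")
```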

Relevance:

10.00%

Publisher:

Abstract:

We are witnessing an evolution in the relationship between the city and the rest of the territory, a change that ends in the removal of the boundary between them. The modern city dissolves into the territory: the traditional limit separating urban space from natural space has faded away. The discontinuity between city and countryside no longer occurs in a defined way (either as an abrupt edge or as a gradient) but through a fragmented, poorly functioning interface that does not allow a clear landscape identification, creating problems of efficiency and of urban identity. The first objective of this research is to determine whether the boundary of our medium-sized cities can be drawn graphically, in order to detect whether it really exists and how it is produced. From there, a catalogue of the boundary is compiled to determine whether any type predominates over the others. Comparing the cities shows to what extent the type of boundary is common to all of them or, on the contrary, particular to the characteristics of each city. The sample comprises six medium-sized Spanish cities, all provincial capitals with populations between 100,000 and 300,000 inhabitants: Vitoria-Gasteiz, Burgos, Pamplona, Valladolid, Lleida and Logroño. The first step of the methodology is to identify the boundary through its cartographic representation. The study then examines what borders what: which urban uses are located on the edge and which non-urban uses they adjoin. In this way the adjacencies become quantifiable and therefore measurable, and the numerical relationships and percentages of these land uses are established, as in the sketch that follows this abstract. Tracking the boundary confirms that it is a multifunctional space, and a Common Boundary is identified: a boundary similar in uses and typologies in all the cities studied. The identity of the boundary is built from a generic image (the common boundary), a closed image, a more or less rural image, a cultural image (the imprint of the historic boundary) and, finally, an image of degradation. The boundary acquires a spatial entity called the Interfase (interface), composed of urban pieces scattered along a fringe surrounding the city. This space adopts its own uses and location logics, which gives it a unique identity. The second part of the thesis categorises the boundary typologies: the boundary as barriers, the boundary according to countryside-city relations, and the boundary according to visual aspects. The data confirm that the medium-sized Spanish city shows the appearance of a dispersed city at different stages of development; it is a city without barriers that is nevertheless closed towards the countryside.
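
Where the methodology quantifies which urban uses adjoin which non-urban uses along the digitized boundary, the bookkeeping can be illustrated with the hedged sketch below: tally the length of each urban/non-urban adjacency and report percentages. The categories and segment lengths are invented for illustration and are not data from the thesis.

```python
# Hedged sketch: percentages of urban / non-urban land-use adjacencies along a
# digitized city boundary. Categories and segment lengths are invented.

from collections import defaultdict

# (urban use on the edge, adjacent non-urban use, segment length in metres)
SEGMENTS = [
    ("residential", "cropland", 1800.0),
    ("industrial", "cropland", 950.0),
    ("residential", "riverbank", 400.0),
    ("retail/commercial", "vacant land", 700.0),
    ("industrial", "infrastructure corridor", 650.0),
]

def adjacency_percentages(segments):
    totals = defaultdict(float)
    total_length = sum(length for _, _, length in segments)
    for urban, rural, length in segments:
        totals[(urban, rural)] += length
    return {pair: 100.0 * l / total_length for pair, l in totals.items()}

if __name__ == "__main__":
    for (urban, rural), pct in sorted(adjacency_percentages(SEGMENTS).items(),
                                      key=lambda kv: -kv[1]):
        print(f"{urban:20s} | {rural:25s} : {pct:5.1f} %")
```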

Relevance:

10.00%

Publisher:

Abstract:

Speciation involves the establishment of genetic barriers between closely related organisms. The extent of genetic recombination is a key determinant and a measure of genetic isolation. The results reported here reveal that genetic barriers can be established, eliminated, or modified by manipulating two systems which control genetic recombination, SOS and mismatch repair. The extent of genetic isolation between enterobacteria is a simple mathematical function of DNA sequence divergence. The function does not depend on hybrid DNA stability, but rather on the number of blocks of sequences identical in the two mating partners and sufficiently large to allow the initiation of recombination. Further, there is no obvious discontinuity in the function that could be used to define a level of divergence for distinguishing species.
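
The "simple mathematical function" referred to above can be illustrated with a hedged back-of-the-envelope model: if two sequences differ at a fraction d of their positions, the chance that a window of length L (a segment long enough to initiate recombination) is identical in both partners is roughly (1 − d)^L, so the expected number of potential initiation sites, and hence to first order the recombination frequency, falls off roughly exponentially (log-linearly) with divergence. The window and sequence lengths below are illustrative assumptions, not values inferred in the paper.

```python
# Hedged sketch: expected number of identical windows of length L (candidate
# recombination-initiation sites) between two sequences of length N differing
# at a fraction d of positions, assuming independent mismatches:
#   E[sites] ~ (N - L + 1) * (1 - d)**L
# which decays roughly exponentially with divergence d. Illustrative numbers.

def expected_initiation_sites(n_bases: int, window: int, divergence: float) -> float:
    return (n_bases - window + 1) * (1.0 - divergence) ** window

if __name__ == "__main__":
    N, L = 50_000, 25   # sequence length and minimal identical window (assumed)
    for d in (0.0, 0.05, 0.10, 0.16):
        print(f"divergence {d:4.0%}: ~{expected_initiation_sites(N, L, d):10.1f} candidate sites")
```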

Relevance:

10.00%

Publisher:

Abstract:

The Ising problem consists in finding the analytical solution of the partition function of a lattice once the interaction geometry among its elements is specified. No general analytical solution is available for this problem, except for the one-dimensional case. Using site-specific thermodynamics, it is shown that the partition function for ligand binding to a two-dimensional lattice can be obtained from those of one-dimensional lattices with known solution. The complexity of the lattice is reduced recursively by application of a contact transformation that involves a relatively small number of steps. The transformation, implemented in a computer code, solves the partition function of the lattice by operating on the connectivity matrix of the graph associated with it. This provides a powerful new approach to the Ising problem and enables a systematic analysis of two-dimensional lattices that model many biologically relevant phenomena. Application of this approach to finite two-dimensional lattices with positive cooperativity indicates that the binding capacity per site diverges as N^a (N = number of sites in the lattice) and experiences a phase-transition-like discontinuity in the thermodynamic limit N → ∞. The zeroes of the partition function tend to distribute on a slightly distorted unit circle in the complex plane and approach the positive real axis already for a 5×5 square lattice. When the lattice has negative cooperativity, its properties mimic those of a system composed of two classes of independent sites, with the apparent population of low-affinity binding sites increasing with the size of the lattice, thereby accounting for a phenomenon encountered in many ligand-receptor interactions.
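
To make the objects in this abstract concrete, the hedged sketch below brute-forces the binding polynomial (the partition function in the ligand activity x) of a small square lattice with nearest-neighbour cooperativity c, and then locates its zeros in the complex activity plane. Brute-force enumeration is of course not the recursive contact-transformation method of the paper, and the 3×3 size and the value of c are chosen only to keep the enumeration tiny.

```python
# Hedged sketch: binding polynomial Z(x) = sum over occupation patterns of
# x^(occupied sites) * c^(occupied nearest-neighbour pairs) on a small square
# lattice, and its zeros in the complex activity plane. Brute force only
# (not the paper's contact-transformation method); 3x3 kept small on purpose.

from itertools import product
import numpy as np

def binding_polynomial(n: int, c: float) -> np.ndarray:
    """Coefficients a_k of Z(x) = sum_k a_k x^k for an n x n square lattice
    with nearest-neighbour cooperativity factor c."""
    index = {(i, j): i * n + j for i in range(n) for j in range(n)}
    pairs = []
    for i in range(n):
        for j in range(n):
            if i + 1 < n:
                pairs.append((index[(i, j)], index[(i + 1, j)]))
            if j + 1 < n:
                pairs.append((index[(i, j)], index[(i, j + 1)]))
    coeffs = np.zeros(n * n + 1)
    for occ in product((0, 1), repeat=n * n):
        k = sum(occ)
        bonds = sum(1 for a, b in pairs if occ[a] and occ[b])
        coeffs[k] += c ** bonds
    return coeffs

if __name__ == "__main__":
    coeffs = binding_polynomial(3, c=4.0)     # positive cooperativity
    zeros = np.roots(coeffs[::-1])            # np.roots wants highest power first
    print("moduli of the zeros of Z(x):", np.round(sorted(np.abs(zeros)), 3))
```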

Relevance:

10.00%

Publisher:

Abstract:

The transmembrane subunit of the Glc transporter (IICBGlc), which mediates uptake and concomitant phosphorylation of glucose, spans the membrane eight times. Variants of IICBGlc with the native N and C termini joined and new N and C termini in the periplasmic and cytoplasmic surface loops were expressed in Escherichia coli. In vivo transport/in vitro phosphotransferase activities of the circularly permuted variants with the termini in the periplasmic loops 1 to 4 were 35/58, 32/37, 0/3, and 0/0% of wild type, respectively. The activities of the variants with the termini in the cytoplasmic loops 1 to 3 were 0/25, 0/4 and 24/70, respectively. Fusion of alkaline phosphatase to the periplasmic C termini stabilized membrane integration and increased uptake and/or phosphorylation activities. These results suggest that internal signal anchor and stop transfer sequences can function as N-terminal signal sequences in a circularly permuted α-helical bundle protein and that the orientation of transmembrane segments is determined by the amino acid sequence and not by the sequential appearance during translation. Of the four IICBGlc variants with new termini in periplasmic loops, only the one with the discontinuity in loop 4 is inactive. The sequences of loop 4 and of the adjacent TM7 and TM8 are conserved in all phosphoenolpyruvate-dependent carbohydrate:phosphotransferase system transporters of the glucose family.

Relevance:

10.00%

Publisher:

Abstract:

A recent criticism that the biological species concept (BSC) unduly neglects phylogeny is examined under a novel modification of coalescent theory that considers multiple, sex-defined genealogical pathways through sexual organismal pedigrees. A competing phylogenetic species concept (PSC) also is evaluated from this vantage. Two analytical approaches are employed to capture the composite phylogenetic information contained within the braided assemblages of hereditary pathways of a pedigree: (i) consensus phylogenetic trees across allelic transmission routes and (ii) composite phenograms from quantitative values of organismal coancestry. Outcomes from both approaches demonstrate that the supposed sharp distinction between biological and phylogenetic species concepts is illusory. Historical descent and reproductive ties are related aspects of phylogeny and jointly illuminate biotic discontinuity.
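
As a hedged illustration of "quantitative values of organismal coancestry" (approach ii), the sketch below computes kinship (coancestry) coefficients over a small invented pedigree using the standard recursion θ(i,j) = ½[θ(father(i), j) + θ(mother(i), j)] and θ(i,i) = ½[1 + θ(father(i), mother(i))]. The pedigree is hypothetical, and the clustering step that would turn the resulting matrix into a phenogram is not shown.

```python
# Hedged sketch: kinship (coancestry) coefficients over a toy pedigree with the
# standard recursion; founders are assumed unrelated and non-inbred.
# The pedigree is invented; building a phenogram from this matrix would be a
# separate clustering step.

from functools import lru_cache

# individual -> (father, mother); founders have (None, None);
# listed so that parents always precede their offspring.
PEDIGREE = {
    "A": (None, None), "B": (None, None), "C": (None, None),
    "D": ("A", "B"), "E": ("A", "C"),
    "F": ("D", "E"),
}
ORDER = {name: i for i, name in enumerate(PEDIGREE)}

@lru_cache(maxsize=None)
def kinship(i: str, j: str) -> float:
    if ORDER[i] < ORDER[j]:          # recurse on the younger individual's parents
        i, j = j, i
    father, mother = PEDIGREE[i]
    if i == j:
        if father is None:
            return 0.5               # non-inbred founder
        return 0.5 * (1.0 + kinship(father, mother))
    if father is None:               # two distinct founders: unrelated
        return 0.0
    return 0.5 * (kinship(father, j) + kinship(mother, j))

if __name__ == "__main__":
    names = list(PEDIGREE)
    for i in names:
        print(i, " ".join(f"{kinship(i, j):.3f}" for j in names))
```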

Relevance:

10.00%

Publisher:

Abstract:

It is now straightforward to assemble large samples of very high redshift (z ∼ 3) field galaxies selected by their pronounced spectral discontinuity at the rest frame Lyman limit of hydrogen (at 912 Å). This makes possible both statistical analyses of the properties of the galaxies and the first direct glimpse of the progression of the growth of their large-scale distribution at such an early epoch. Here I present a summary of the progress made in these areas to date and some preliminary results of and future plans for a targeted redshift survey at z = 2.7–3.4. Also discussed is how the same discovery method may be used to obtain a “census” of star formation in the high redshift Universe, and the current implications for the history of galaxy formation as a function of cosmic epoch.
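
A hedged numerical aside on why this selection works at z ≈ 3: the rest-frame 912 Å Lyman limit redshifts to λ_obs = 912 Å × (1 + z), i.e. into the near-ultraviolet/blue for z ≈ 2.7–3.4, so such galaxies drop out of the bluest band while remaining visible in redder ones. The approximate filter range quoted in the comment is a conventional value, not taken from this paper.

```python
# Hedged sketch: the rest-frame 912 A Lyman limit is observed at 912*(1+z) A.

LYMAN_LIMIT_REST_A = 912.0

def observed_break_angstrom(z: float) -> float:
    return LYMAN_LIMIT_REST_A * (1.0 + z)

if __name__ == "__main__":
    for z in (2.7, 3.0, 3.4):
        print(f"z = {z:.1f}: Lyman limit observed at {observed_break_angstrom(z):.0f} A")
    # Over z ~ 2.7-3.4 the break sweeps through roughly 3400-4000 A, i.e. across
    # the near-UV/U band, which is why such galaxies appear as "U-band dropouts".
```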

Relevance:

10.00%

Publisher:

Abstract:

Nanomedicine is a new branch of medicine based on the potential and intrinsic properties of nanomaterials. Indeed, nanomaterials (i.e. materials of nanometric and sub-micron size) are suitable for different applications in biomedicine. Nanostructures can be used by taking advantage of their properties (for example, superparamagnetic nanoparticles) or functionalized to deliver a drug to a specific target, thanks to their ability to cross biological barriers. The size and shape of 1D nanostructures (nanotubes and nanowires) have an important role in cell fate: their morphology plays a key role in the interaction between the nanostructure and the biological system. For this reason, 1D nanostructures are interesting for their ability to mimic biological systems. An implantable material or device must therefore integrate with the surrounding extracellular matrix (ECM), a complex network of proteins with structural and signalling properties. Innovative techniques allow the generation of complex surface patterns that can resemble the structure of the ECM, such as 1D nanostructures. NWs based on cubic silicon carbide (3C-SiC), either bare (3C-SiC NWs) or surrounded by an amorphous shell (3C-SiC/SiO2 core/shell NWs), and silicon oxycarbide nanowires (SiOxCy NWs) can meet the chemical, mechanical and electrical requirements for tissue engineering and have a strong potential to pave the way for the development of a novel generation of implantable nano-devices. Silicon oxycarbide shows promising physical and chemical properties, such as its elastic modulus, bending strength and hardness, chemical durability superior to that of conventional silicate glasses in aggressive environments, and high-temperature stability up to 1300 °C. Moreover, it can easily be engineered through functionalization and decoration with macromolecules and nanoparticles. Silicon carbide has been extensively studied for applications in harsh conditions, such as aggressive chemical environments, high electric fields and high and low temperatures, owing to its high hardness, high thermal conductivity, chemical inertness and high electron mobility. In addition, its cubic polytype (3C) is highly biocompatible and hemocompatible, and some prototypes of biomedical applications and devices have already been realized starting from 3C-SiC thin films. Cubic SiC-based NWs can therefore be used as a biomimetic biomaterial, providing a robust and novel biocompatible biological interface. We cultured in vitro A549 human lung adenocarcinoma epithelial cells and L929 murine fibroblast cells on core/shell SiC/SiO2, SiOxCy and bare 3C-SiC nanowire platforms, and analysed cytotoxicity (by indirect and direct contact tests), cell adhesion and cell proliferation. These studies showed that all the nanowires are biocompatible according to ISO 10993 standards. We evaluated blood compatibility through the interaction of the nanowires with platelet-rich plasma. The adhesion and activation of platelets on the nanowire bundles, assessed via SEM imaging and soluble P-selectin quantification, indicated that a higher platelet activation is induced by the core/shell structures compared with the bare ones. Further, platelet activation is higher with 3C-SiC/SiO2 NWs and SiOxCy NWs, which therefore appear suitable in view of possible tissue regeneration. On the contrary, bare 3C-SiC NWs show a lower platelet activation and are therefore promising in view of implantable bioelectronic devices, such as cardiovascular implantable devices.
The NW properties also allow the design of a novel subretinal Micro Device (MD). This device is based on Si NWs and PEDOT:PSS, through the well-known principle of the hybrid ordered bulk heterojunction (OBHJ). The aim is to develop a device based on a well-established photovoltaic technology and to adapt this know-how to the prosthetic field. The hybrid OBHJ allows a radial p-n junction to be formed on a nanowire/organic structure. In addition, the nanowires increase the light absorption by means of light-scattering effects: a nanowire-based p-n junction increases the light absorption up to 80%, as previously demonstrated, overcoming the Shockley-Queisser limit of 30% of a bulk p-n junction. Another interesting use of these NWs is the design of a SiC-based epicardial patch, based on Teflon, that includes SiC nanowires. Such a contact patch can bridge the electrical conduction across the cardiac infarct, as the nanowires can "sense" the direction of the wavefront propagation in the surviving cardiac tissue and transmit it to the downstream surviving regions without discontinuity. The SiC NWs are tested in terms of toxicology, biocompatibility and conductance among cardiomyocytes and myofibroblasts.