928 results for PRACTICAL APPLICATIONS


Relevance:

60.00%

Publisher:

Abstract:

The aim of the present work is to provide an in-depth analysis of the most representative mirroring techniques used in SPH to enforce boundary conditions (BC) along solid profiles. We specifically refer to dummy particles, ghost particles, and the boundary integrals of Takeda et al. [Prog. Theor. Phys. 92 (1994), 939]. The analysis has been carried out by studying the convergence of the first- and second-order differential operators as the smoothing length (that is, the characteristic length on which the SPH interpolation relies) decreases. These differential operators are of fundamental importance for the computation of the viscous drag and of the viscous/diffusive terms in the momentum and energy equations. It is proved that, close to the boundaries, some of the mirroring techniques lead to intrinsic inaccuracies in the convergence of the differential operators. A consistent formulation has been derived starting from the boundary integrals of Takeda et al. (see the above reference). This original formulation allows no-slip boundary conditions to be implemented consistently in many practical applications, such as viscous flows and diffusion problems.
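The abstract leaves the operators implicit; as a reminder of what is being tested, here is a minimal 1D sketch of the standard SPH first-derivative operator with a cubic-spline kernel. The uniform spacing, the choice h = 2Δx, and the test field are illustrative assumptions, not the mirroring formulations analysed in the paper:

```python
import numpy as np

def cubic_spline_grad(r, h):
    """dW/dr for the 1D cubic-spline kernel (normalization 2/(3h))."""
    q = abs(r) / h
    if q < 1.0:
        dwdq = -3.0 * q + 2.25 * q * q
    elif q < 2.0:
        dwdq = -0.75 * (2.0 - q) ** 2
    else:
        return 0.0
    return (2.0 / (3.0 * h)) * dwdq / h * np.sign(r)

def sph_first_derivative(f, x, i, h, dx):
    """Standard SPH estimate of df/dx at particle i:
    sum_j (f_j - f_i) * dW/dx(x_i - x_j) * V_j, with volume V_j = dx."""
    return sum((f[j] - f[i]) * cubic_spline_grad(x[i] - x[j], h) * dx
               for j in range(len(x)) if j != i)

# Interior accuracy check: derivative of sin(x) on a uniform particle set.
n = 200
dx = 1.0 / n
h = 2.0 * dx
x = np.arange(0.0, 1.0, dx)
est = sph_first_derivative(np.sin(x), x, n // 2, h, dx)
```

Near a solid boundary the sum above loses the particles that would lie outside the domain, which is exactly the deficiency that dummy particles, ghost particles, and boundary integrals try to repair.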

Relevance:

60.00%

Publisher:

Abstract:

Managing large medical image collections is an increasingly demanding issue in many hospitals and other medical settings. A huge amount of this information is generated daily, which requires robust and agile systems. In this paper we present a distributed multi-agent system capable of managing very large medical image datasets. In this approach, agents extract low-level information from images and store it in a data structure implemented in a relational database. The data structure can also store semantic information related to images and to particular regions. A distinctive aspect of our work is that a single image can be divided so that the resulting sub-images can be stored and managed separately by different agents, improving the performance of data access and processing. The system also offers the possibility of applying region-based operations and filters to images, facilitating image classification. These operations can be performed directly on the data structures in the database.
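The paper's agent framework and database schema are not reproduced here; the following is a minimal sketch of just the image-subdivision idea, where each tile could be handed to a different agent. The tile size and the in-memory array representation are assumptions for the example:

```python
import numpy as np

def split_into_tiles(image, tile_h, tile_w):
    """Split a 2D image into (row, col) -> sub-image tiles, so each tile
    can be stored and processed separately (e.g. by a different agent)."""
    tiles = {}
    height, width = image.shape
    for r in range(0, height, tile_h):
        for c in range(0, width, tile_w):
            tiles[(r // tile_h, c // tile_w)] = image[r:r + tile_h, c:c + tile_w]
    return tiles

def reassemble(tiles):
    """Stitch the tiles back into the full image (inverse of split_into_tiles)."""
    rows = max(r for r, _ in tiles) + 1
    cols = max(c for _, c in tiles) + 1
    return np.block([[tiles[(r, c)] for c in range(cols)] for r in range(rows)])
```

Region-based operations can then run per tile, while `reassemble` inverts the split so the full image is still recoverable.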

Relevance:

60.00%

Publisher:

Abstract:

This paper outlines the problems found in the parallelization of SPH (Smoothed Particle Hydrodynamics) algorithms using Graphics Processing Units. Results of several parallel GPU implementations are shown in terms of speed-up and scalability with respect to sequential CPU codes. The most problematic stage in GPU-SPH algorithms is the one responsible for locating neighboring particles and building the vectors where this information is stored, since these algorithms raise many difficulties for data-level parallelization. Because neighbor location using linked lists does not expose enough data-level parallelism, two new approaches have been proposed to minimize bank conflicts in the writing and subsequent reading of the neighbor lists. The first strategy proposes an efficient CPU-GPU coordination, using GPU algorithms for the stages that allow a straightforward parallelization and sequential CPU algorithms for the instructions that involve some kind of vector reduction. This coordination provides a relatively orderly reading of the neighbor lists in the interaction stage, achieving a speed-up factor of 47x in this stage; however, since the construction of the neighbor lists is quite expensive, the overall speed-up is 41x. The second strategy seeks to maximize the use of the GPU in the neighbor-location process by executing a specific vector-sorting algorithm that exposes some data-level parallelism. Although this strategy succeeds in improving the speed-up of the neighbor-location stage, the global speed-up falls because of inefficient reading of the neighbor vectors in the interaction stage. Some changes to these strategies are proposed, aimed at maximizing the computational load of the GPU and exploiting the GPU texture units, in order to reach the maximum speed-up for such codes. Different practical applications have been added to these GPU codes: first, the classical dam-break problem is studied; second, the wave impact of the sloshing fluid contained in LNG vessel tanks is simulated as a practical example of particle methods.
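As an illustration of the neighbour-location stage discussed above, here is a minimal sequential 2D cell-list sketch; the GPU strategies in the paper parallelize precisely these two steps. A cell size equal to the interaction radius is an assumption of the example:

```python
import math
from collections import defaultdict

def build_cells(positions, cell_size):
    """Hash each particle index into a uniform grid cell (the structure the
    linked-list neighbour search is built on)."""
    cells = defaultdict(list)
    for idx, (x, y) in enumerate(positions):
        cells[(int(x // cell_size), int(y // cell_size))].append(idx)
    return cells

def neighbours(i, positions, cells, radius):
    """Particles within `radius` of particle i; only the 3x3 block of cells
    around i is scanned (assumes cell_size == radius)."""
    cx, cy = int(positions[i][0] // radius), int(positions[i][1] // radius)
    near = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in cells.get((cx + dx, cy + dy), []):
                if j != i and math.dist(positions[i], positions[j]) <= radius:
                    near.append(j)
    return near
```

On a GPU, the scatter of indices in `build_cells` and the irregular reads in `neighbours` are the operations that cause the bank conflicts the two strategies try to minimize.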

Relevance:

60.00%

Publisher:

Abstract:

Faced with the question of whether the university must conform to its environmental reality or should promote new realities, this work aims to demonstrate the feasibility of achieving objectifiable approaches towards sustainable development through cross-border university cooperation in urban island and coastal environments of the Caribbean and Central American subregion. The study is developed in four stages. In the projective stage, the problem is delimited and its significance is contextualized within the theoretical framework of the sub-areas involved. A social problem was recognized which, beyond any intentional negative human action, reveals a failure to exploit a known strategic potential, with symptoms such as a generalized environmental crisis; a poor capacity of universities to respond to the demands of sustainable development; and incipient inter-institutional cooperation strategies, aggravated by the fragmentation of material and communicational efforts. Approaches were contrasted and a stance was adopted on a determining fact, the university as an articulator of sustainable development, concretized in a researchable object: university cooperation as a strategy not yet focused on approaches to sustainable development. In the research approach, management was treated as a non-positivist working hypothesis, which led to adapting recognized methodologies, contextualized within a particular view of the matter under investigation. The methodological stage describes the concrete design and the procedures for addressing the problem in all its phases. In the technical stage, instruments and techniques were applied to obtain diagnostic data and to begin designing a model of university cooperation for sustainable development. The diagnosis was based on qualitative and quantitative strategies that allowed the analysis of results from surveys, expert interviews, and prospective structural, situational and integrated analyses. The model was built on previous cooperation experiences, adopting management models of relevant scientific scope as application references. This is a socio-environmental, applied, non-experimental study, based on the descriptive analysis of qualitative and quantitative data and conducted as a field design that finally became a feasible project. The information was collected through field observation and the application of instruments and dynamics inspired by focus groups at local and national scales, with subjects from the university education system and from governmental and social bodies of the selected area. The study leads to the presentation of the MOP-GECUDES model, described in terms of its dimensions, variables and strategies, with 166 indicators classified into 49 categories and expressed as goals. It comprises 24 procedures supported by 47 specific instruments consisting of practical applications, methodological sheets and instruction manuals for the operationalization of the model. The design is complemented by a system of procedures arising from the experience itself, giving the model the particular attribute of having been designed under its own principles.

Relevance:

60.00%

Publisher:

Abstract:

This paper proposes a quiet-zone probing approach that deals with low-dynamic-range quiet-zone acquisitions. Lack of dynamic range is characteristic of millimeter- and submillimeter-wavelength technologies: it is a consequence of the progressively smaller power generated by the instrumentation, which decreases with frequency following an f^α law, where α ≥ 1 depends on the signal source's technology. The proposed approach is based on an optimal data-reduction scenario that yields a maximum signal-to-noise ratio increase for the signal pattern with minimum information loss. After the theoretical formulation, practical applications of the technique are proposed.
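The paper's optimal reduction scheme itself is not reproduced in the abstract, but the trade-off it exploits can be illustrated with plain coherent averaging of repeated low-SNR acquisitions, which raises the SNR by about 10·log10(N) dB. The signal shape, noise level, and N below are arbitrary assumptions for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def snr_db(reference, measured):
    """SNR in dB of a measured trace against the clean reference pattern."""
    noise = measured - reference
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

# Illustrative quiet-zone field pattern and N noisy acquisitions of it.
t = np.linspace(0.0, 1.0, 512)
clean = np.cos(2.0 * np.pi * 5.0 * t)
N = 64
acquisitions = [clean + rng.normal(0.0, 1.0, t.size) for _ in range(N)]

single_snr = snr_db(clean, acquisitions[0])
averaged_snr = snr_db(clean, np.mean(acquisitions, axis=0))
# Averaging N = 64 acquisitions should gain roughly 10*log10(64) ≈ 18 dB.
```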

Relevance:

60.00%

Publisher:

Abstract:

This thesis presents a methodological contribution to the problem of optimally managing a hydropower reservoir during flood events, under a stochastic and multiobjective approach. A methodology is proposed to assess flood control strategies in a probabilistic, multiobjective framework. In addition, a dynamic flood-control environment for real-time operation with forecasts is developed, combining an optimization model with simulation algorithms. These tools assist dam managers in deciding which reservoir operation is most appropriate. A detailed literature review showed that studies on optimal reservoir operation during floods generally use a small number of inflow series or hydrographs to characterize the possible scenarios, which limits the satisfactory performance of a given model to similar hydrological situations. Moreover, most available studies address the flood operation of multipurpose reservoirs over an entire flood season lasting several months; these characteristics differ from the reality of reservoir management in Spain. With computational advances in real-time data management, a trend was observed towards real-time operation tools with forecasts for determining short-term operation, including flood control. The strategy-assessment methodology proposed in this thesis is based on determining the behavior of the strategies over a spectrum of floods representative of the hydrological forcing. To that end, an indicator-based evaluation system is combined with a stochastic flood-generation environment, yielding an implicitly stochastic framework. The evaluation system consists of three stages (characterization, synthesis and comparison) in order to handle the complex structure of the resulting data. In the first stage, characterization variables are defined, linked to the aspects to be evaluated (dam safety, flood control, energy generation, etc.); each variable characterizes the behavior of a strategy for a given aspect and event. In the second stage, this information is synthesized into a set of indicators, as small as possible. Finally, the comparison is carried out on those indicators, either by aggregating the objectives into a single indicator or by applying the Pareto dominance criterion to obtain a set of suitable solutions. The methodology was applied to calibrate the parameters of a reservoir flood-control optimization model and to compare it with another operating rule using the aggregation approach. It was then extended to evaluate and compare existing operating rules for flood control in hydropower reservoirs using the dominance criterion. The versatility of the methodology allows other applications, such as determining safety levels or volumes, or selecting spillway dimensions among several alternatives. The dynamic flood-control environment, with its combined optimization-simulation approach, exploits the advantages of both types of models and facilitates interaction with dam operators. Results improve on those obtained with a reactive operating rule, even when forecasts deviate considerably from the actual hydrograph. This contributes to narrowing the oft-mentioned gap between theoretical development and the practical application of optimal reservoir-management models.
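The Pareto dominance comparison mentioned above can be sketched in a few lines; the indicator values in the example are hypothetical, and all indicators are assumed to be minimized:

```python
def dominates(a, b):
    """a Pareto-dominates b if it is no worse in every indicator and
    strictly better in at least one (all indicators minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(strategies):
    """Keep only the non-dominated operating strategies."""
    return [s for s in strategies
            if not any(dominates(o, s) for o in strategies if o is not s)]

# Hypothetical (peak outflow, energy shortfall) indicators for four rules.
front = pareto_front([(1.0, 2.0), (2.0, 1.0), (2.0, 2.0), (3.0, 3.0)])
```

With the aggregation approach a single optimum is obtained instead, by collapsing the indicator vector into one weighted objective before ranking.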

Relevance:

60.00%

Publisher:

Abstract:

In the middle of the twentieth century, Rafael Lorente de Nó (1902-1990) introduced the fundamental concept of the "elementary cortical unit of operation," proposing that the cerebral cortex is formed of small cylinders containing vertical chains of neurons (Lorente de Nó, 1933, 1938). On the basis of this idea, the hypothesis of the columnar organization of the cerebral cortex was later developed, primarily following the physiological and anatomical studies of Vernon Mountcastle, David Hubel, Torsten Wiesel, János Szentágothai, Ted Jones, and Pasko Rakic (for a review of these early studies, see Mountcastle, 1998). The columnar organization hypothesis is currently the most widely adopted to explain the cortical processing of information, making its study of potential interest to any researcher interested in this tissue, in both healthy and pathological states. However, it is frequently remarked that the nomenclature surrounding this hypothesis often generates problems, as the term "column" is used freely and promiscuously to refer to multiple distinguishable entities, such as cellular or dendritic minicolumns or afferent macrocolumns, with respective diameters of less than 50 and 200-500 μm. Another problem is the degree to which the classical criteria (shared response properties, shared input, and common output) may need to be modified and, if so, how. Moreover, similar problems arise when we consider the need to define area-specific and species-specific variations. Finally, and more an ultimate goal than a problem, it is still necessary to achieve a better fundamental understanding of what columns are and how they are used in cortical processes. Accordingly, it is now very important to translate recent technical advances and new findings in the neurosciences into practical applications for neuroscientists, clinicians, and those interested in comparative anatomy and brain evolution.

Relevance:

60.00%

Publisher:

Abstract:

In this paper, an innovative approach to performing distributed Bayesian inference using a multi-agent architecture is presented. The final goal is to deal with uncertainty in network diagnosis, but the solution can be applied in other fields. The validation testbed has been a P2P video streaming service. An assessment of the work is presented in order to show its advantages compared with traditional manual processes and previous systems.
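The abstract does not detail the distributed architecture; as a minimal, hypothetical illustration of the kind of Bayesian update a diagnosis agent performs, consider a single-evidence posterior over candidate fault causes. The cause names and probability tables below are invented for the example:

```python
def posterior(prior, likelihood, evidence):
    """Bayes rule over a discrete set of causes:
    P(cause | evidence) is proportional to P(cause) * P(evidence | cause)."""
    unnorm = {c: prior[c] * likelihood[c].get(evidence, 0.0) for c in prior}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

# Hypothetical causes of degraded quality in a P2P video streaming service.
prior = {"congestion": 0.3, "peer_churn": 0.2, "healthy": 0.5}
likelihood = {"congestion": {"high_jitter": 0.8},
              "peer_churn": {"high_jitter": 0.4},
              "healthy": {"high_jitter": 0.05}}
post = posterior(prior, likelihood, "high_jitter")
```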

Relevance:

60.00%

Publisher:

Abstract:

The severe accidents suffered by bridges during recent earthquakes show that more careful analyses are needed to guarantee their behaviour. In particular, simplified non-linear analyses could be useful to bridge the gap between theoretical research and practical applications. This paper presents one such simplified method, which can be applied to preliminary designs or to the retrofitting of groups of bridges.

Relevance:

60.00%

Publisher:

Abstract:

The millimeter- and submillimeter-wave bands are the regions of the spectrum between the microwaves and the infrared (IR). The millimeter-wave band covers the range of the spectrum from 30 to 300 GHz, usually considered the extremely high frequency (EHF) band. The range of frequencies between 300 and 3000 GHz is known as the submillimeter-wave or terahertz (THz) band. Nevertheless, the boundaries of the THz band are not accepted by the whole research community; in fact, 100 GHz and 10 THz are often considered the lower and upper limits of this band, respectively. Until recently, the THz band had not been exploited for practical applications, apart from minor uses in the fields of spectroscopy and radio astronomy. Advances in microwave electronics and optical technology left the well-known THz gap undeveloped. However, recent research has unveiled the advantages of working at these frequencies, which has increased the interest and effort devoted to THz technology. Even though the range of upcoming applications is wide, the most promising ones are in the field of security and surveillance. This Ph.D. thesis deals with the development of high-resolution continuous-wave linear-frequency-modulated (CW-LFM) radars in the millimeter-wave band, namely in the attenuation windows located at 100 and 300 GHz. Working at these wavelengths presents several benefits, such as the ability of the radiation to penetrate certain materials that are opaque at visible wavelengths, such as clothing or paper, and the large available bandwidth, which leads to high range resolution. The selected bandwidths of 9 and 27 GHz for the systems at 100 and 300 GHz, respectively, result in cm and sub-cm range resolution. The intended applications are in the field of short-range imaging. In particular, the design of the 300-GHz prototype is oriented to standoff detection in security and surveillance scenarios; the non-ionizing nature of this radiation alleviates safety concerns, in clear contrast to traditional alternatives such as X-ray systems. This thesis focuses on the design, implementation and characterization of both systems, as well as the experimental assessment of their performance. An electronic approach has been selected instead of an optical solution because of its high reliability, reduced volume and the wide availability of commercial components. Throughout the design and implementation process, guidelines such as low cost and hardware versatility have been kept in mind, so that different applications can be carried out with the same hardware concept. Both radar systems have been used in several experimental trials with satisfactory results. Although these are mere examples within a wide range of possible applications, ISAR imaging of scaled model targets for automatic target recognition and micro-Range/micro-Doppler analysis of human patterns have validated the system performance at 100 GHz, while 3D imaging examples at 300 GHz demonstrate the radar's capabilities for standoff detection and security tasks.
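The sub-centimeter resolutions quoted above follow from the standard LFM range-resolution formula delta_R = c/(2B); a quick check with the 9 and 27 GHz bandwidths mentioned in the text:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_resolution(bandwidth_hz):
    """Range resolution of a chirp (LFM) radar: delta_R = c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

res_100ghz = range_resolution(9e9)   # 9 GHz sweep  -> ~1.67 cm
res_300ghz = range_resolution(27e9)  # 27 GHz sweep -> ~0.56 cm
```

These values match the "around and below the cm" resolutions stated for the 100 and 300 GHz prototypes.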

Relevance:

60.00%

Publisher:

Abstract:

This paper deals with the theoretical method and the modelling problems in the analysis of pyrotechnic shock propagation in the Vehicle Equipment Bay structure of the ARIANE 5 during the separation of the upper stage. This work has been developed under a contract with the Spanish firm Construcciones Aeronáuticas S.A. From all the analyses and studies it can be concluded that: 1) the mathematical method used for the study of the pyrotechnic shock phenomena is very well suited for conducting parametric studies; 2) a careful model of the structure should be developed, taking into account realistic stiffness and dissipation properties at the junctions; 3) the load produced by the pyrotechnic device should be carefully calibrated to reach good agreement between theoretical and test results; and 4) in any case, with the acquired experience, it can be said that with the modelling of continuous elements the order of magnitude of the accelerations can be predicted with the accuracy needed in practical applications.

Relevance:

60.00%

Publisher:

Abstract:

Polymer-modified bitumens (PMBs) are usually prepared at high temperature and subsequently stored for a period of time, also at high temperature. The stability of PMBs under these conditions has a decisive influence on obtaining adequate performance in practical applications. In this article, attention is focused on the analysis of the factors that determine the stability of styrene-butadiene-styrene (SBS)/sulfur-modified bitumens when the mixtures are kept at high temperature. Bitumens from different crude-oil sources were used to prepare the SBS/sulfur-modified bitumens. Changes in viscosity, softening point and morphology of PMB samples stored at 160 °C were related to the chemical composition of the bitumen and to the amount of asphaltene micelles present in the neat bitumen used in their preparation. The work focuses on the influence of bitumen structure/composition on the compatibility of the bitumen/SBS system: four bitumens from two different crude oils were selected, and their blends were used to prepare modified bitumens with an SBS content of 3 wt %.

Relevance:

60.00%

Publisher:

Abstract:

Carbon fiber (CF)-reinforced high-temperature thermoplastics such as poly(phenylene sulphide) (PPS) are widely used in structural composites for aerospace and automotive applications. The porosity of CF-reinforced polymers is a very important topic for practical applications, since there is a direct correlation between void content and mechanical properties. In this study, inorganic fullerene-like tungsten disulphide (IF-WS2) lubricant nanoparticles were used to manufacture PPS/IF-WS2/CF laminates via melt-blending and hot-press processing, and the effect of IF-WS2 loading on the quality and the thermal and mechanical behaviour of the hybrid composites was investigated. The addition of IF-WS2 improved fiber impregnation, resulting in a lower degree of porosity and increased delamination resistance, compression and flexural properties; the reinforcement effect was greater at temperatures above the glass transition (Tg). IF-WS2 contents higher than 0.5 wt % increased Tg and the heat-deflection temperature while reducing the coefficient of thermal expansion. The multiscale laminates exhibited a higher ignition point and a notably reduced peak heat-release rate compared to PPS/CF. The coexistence of micro- and nano-scale fillers resulted in synergistic effects that enhanced the stiffness, strength, thermal conductivity and flame retardancy of the matrix. These results demonstrate that IF-WS2 nanoparticles are very promising nanofillers for improving the thermomechanical properties of conventional thermoplastic/CF composites.

Relevância:

60.00% 60.00%

Publicador:

Resumo:

This thesis proposes a procedure to evaluate the mechanical strength of crystalline silicon wafers and applies it to several cases of industrial relevance. The photovoltaic industry is dominated by technology based on crystalline silicon panels. These panels are composed of solar cells connected in series, and the cells are fabricated from silicon wafers. To reduce the cost of the panels, a clear trend towards thinner wafers has been observed in recent years. This reduction in thickness modifies the stiffness of the wafers, so the traditional handling techniques have had to be adapted in order to keep the breakage rate low. To this end, the mechanical strength of the wafers must be characterized correctly.
The first part of the work describes silicon wafers, from their production process to their mechanical properties. The influence of the crystallographic structure on the strength and the behaviour is shown, since the silicon crystal is anisotropic. A method for characterizing the strength is also proposed: a probabilistic criterion based on design methods for brittle materials, in which the strength is determined by the parameters of the three-parameter Weibull distribution. A procedure is proposed to obtain these parameters from test campaigns, finite-element modelling and an iterative algorithm that fits the results.
The second part of the thesis describes the different types of tests usually carried out on this material. For each of the tests described, a comparative study of different finite-element models simulating it is presented, comparing both the results provided by each model and the computation times.
Finally, three applications of the procedure are presented. The first compares the mechanical strength of silicon wafers as a function of the ingot growth method: the strength of the traditional monocrystalline wafers obtained by the Czochralski method and of multicrystalline wafers is compared with that of the novel quasi-monocrystalline wafers obtained by casting. The second application estimates the depth of the cracks generated when the ingot is sawn into wafers. This study is carried out indirectly, by characterizing the strength of groups of wafers subjected to chemical etchings of different duration; the etching reduces the thickness of the wafers by removing the most damaged layers. The strength of each group is analysed, and the comparison yields the depth of the cracks generated by the sawing process. Lastly, the procedure is applied to a group of wafers with very particular characteristics: wafers prepared for EWT back-contact cells, which contain thousands of holes that weaken them considerably. The strength obtained for these wafers is compared with that of a reference group, and a simplified approach based on a stress-intensification surface is also proposed.
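The thesis estimates the three Weibull parameters through test campaigns, finite-element simulation and an iterative fitting algorithm. As a much-simplified illustration of the underlying strength model only, the sketch below fits a three-parameter Weibull law (threshold sigma_u, scale sigma_0, shape m) to synthetic fracture-stress data by rank regression with a threshold scan; this is a textbook technique, not the thesis's procedure, and all numbers are invented.

```python
import numpy as np

# Simplified sketch: fit a three-parameter Weibull strength law
# F(s) = 1 - exp(-((s - sigma_u)/sigma_0)**m) by rank regression,
# scanning candidate thresholds and keeping the most linear fit.

def fit_weibull3(stresses):
    x = np.sort(np.asarray(stresses, dtype=float))
    n = x.size
    # Median-rank estimate of the empirical failure probability.
    f = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
    y = np.log(-np.log(1.0 - f))          # linearized cdf ordinate
    best = None
    for sigma_u in np.linspace(0.0, 0.99 * x[0], 100):
        t = np.log(x - sigma_u)
        slope, intercept = np.polyfit(t, y, 1)   # y = m*t - m*ln(sigma_0)
        r = np.corrcoef(t, y)[0, 1]
        if best is None or r > best[0]:
            best = (r, sigma_u, np.exp(-intercept / slope), slope)
    _, sigma_u, sigma_0, m = best
    return sigma_u, sigma_0, m

# Synthetic "fracture stresses" (MPa) from a known three-parameter law.
rng = np.random.default_rng(1)
true_u, true_0, true_m = 80.0, 120.0, 3.0
samples = true_u + true_0 * rng.weibull(true_m, size=2000)

su, s0, m = fit_weibull3(samples)
print(round(su, 1), round(s0, 1), round(m, 2))
```

The fitted parameters then give the failure probability of a wafer at any stress level via the Weibull cumulative distribution function, which is the quantity the probabilistic design criterion works with.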

Relevância:

60.00% 60.00%

Publicador:

Resumo:

The problem of approximately parameterizing algebraic curves and surfaces is an active research field, with many implications for practical applications. The problem can be treated locally or globally. We formally state the problem, in its global version, for the case of algebraic curves (planar or spatial), and we report on some algorithms that approach it, as well as on the associated error-distance analysis.
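To make the setting concrete, the sketch below shows a classical exact rational parameterization of the unit circle (stereographic projection) together with a simple sampled residual check, i.e. how far the parameterized points are from satisfying the implicit equation. The residual measure is our own illustrative choice, not one of the surveyed error-distance analyses.

```python
# Illustrative sketch: rational parameterization of the unit circle
# x^2 + y^2 - 1 = 0, plus a sampled algebraic residual as a crude
# stand-in for an error-distance measure.

def circle_param(t):
    """P(t) = ((1 - t^2)/(1 + t^2), 2t/(1 + t^2)) traces the unit circle."""
    d = 1.0 + t * t
    return (1.0 - t * t) / d, 2.0 * t / d

def residual_error(f, param, ts):
    """Max absolute residual |f(P(t))| over the sample parameters ts."""
    return max(abs(f(*param(t))) for t in ts)

# Implicit equation of the curve.
f = lambda x, y: x * x + y * y - 1.0

ts = [i / 10.0 for i in range(-50, 51)]
print(residual_error(f, circle_param, ts))  # exact parameterization: ~0
```

For an exact parameterization the residual vanishes up to floating-point rounding; an *approximate* parameterization of a nearby curve would leave a small nonzero residual, and bounding the corresponding geometric distance is precisely the error analysis the abstract refers to.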