30 results for Basophil Degranulation Test -- methods
Abstract:
Quasi-monocrystalline silicon wafers have emerged as a key innovation in the PV industry, combining the most favourable characteristics of the conventional substrates: the higher solar cell efficiencies of monocrystalline Czochralski-Si (Cz-Si) wafers and the lower cost and full square shape of multicrystalline ones. However, quasi-mono ingot growth can lead to a defect structure different from that of the typical Cz-Si process. Thus, the mechanical properties of the new quasi-mono wafers have been studied for the first time, comparing their strength with that of both Cz-Si mono and typical multicrystalline materials. The study was carried out using the four-line bending test and simulating the tests by means of FE models. For the analysis, failure stresses were fitted to a three-parameter Weibull distribution. High mechanical strength was found in all cases. Interestingly, even the low-quality quasi-mono wafers did not exhibit strength values critical for the PV industry, despite their noticeable density of extended defects.
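As an illustration of the fitting step just described, a minimal Python sketch using scipy (not the authors' code; the stress values are synthetic placeholders):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for measured failure stresses (MPa); real values would
# come from four-line bending tests as described in the abstract.
stresses = stats.weibull_min.rvs(5.0, loc=100.0, scale=150.0, size=80,
                                 random_state=rng)

# Maximum-likelihood fit of the three-parameter Weibull distribution:
# shape = Weibull modulus, loc = threshold stress, scale = characteristic stress.
shape, loc, scale = stats.weibull_min.fit(stresses)
print(f"Weibull modulus m = {shape:.2f}, threshold = {loc:.1f} MPa, "
      f"characteristic stress = {scale:.1f} MPa")

# Failure probability at a given stress level, from the fitted CDF.
print("P(failure at 250 MPa) =", stats.weibull_min.cdf(250.0, shape, loc, scale))
```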
Abstract:
This work is based on the prototype High Temperature Engineering Test Reactor (HTTR) of the Japan Atomic Energy Agency (JAEA). Its objective is to describe a deterministic model adequate for assessing the reactor's design safety margins via damage domains. The concept of damage domain is defined, and its relevance to the ongoing effort to apply dynamic risk assessment methods and tools based on the Theory of Stimulated Dynamics (TSD) is shown. To illustrate, we present results of an abnormal control rod (CR) withdrawal during subcritical conditions and compare them with results obtained by JAEA. No attempt is made yet to actually assess the detailed scenarios, but rather to show how the approach may handle events of this kind.
Abstract:
Photovoltaic modules based on thin-film technology are gaining importance in the photovoltaic market, and module installers and plant owners have increasingly begun to request methods of performing module quality control. These modules pose additional problems for measuring power under standard test conditions (STC), beyond those caused by module temperature and the ambient variables. The main difficulty is that the modules' power ratings may vary depending both on the amount of time they have been exposed to the sun during recent hours and on their history of sunlight exposure. In order to assess the current state of a module, it is necessary to know its sunlight exposure history. Thus, an easily performed testing method that ensures the repeatability of the power measurements is needed. This paper examines different tests performed on commercial thin-film PV modules of CIS, a-Si and CdTe technologies in order to find the best way to obtain such measurements. A method for obtaining indoor measurements of these technologies that takes into account periods of sunlight exposure is proposed. Special attention is paid to CdTe as a fast-growing technology in the market.
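For background on translating a measured power to STC, a minimal sketch of a generic irradiance/temperature correction (in the spirit of IEC 60891; this is not the indoor method the paper proposes, and gamma is an assumed module-specific coefficient):

```python
def power_at_stc(p_meas, g_meas, t_cell, gamma=-0.0035,
                 g_stc=1000.0, t_stc=25.0):
    """Translate a measured power to STC (1000 W/m2, 25 degC).

    Simplified, generic correction, not the paper's proposed method;
    gamma is the power temperature coefficient (1/degC), an assumed
    module-specific value.
    """
    return p_meas * (g_stc / g_meas) / (1.0 + gamma * (t_cell - t_stc))

# Example: 72 W measured at 800 W/m2 irradiance and 45 degC cell temperature.
print(round(power_at_stc(72.0, 800.0, 45.0), 1), "W at STC")
```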
Abstract:
Since ancient times, humanity has shown an innate wish to engrave and reproduce "snapshots with which to perpetuate oneself, or in which to see oneself". The appearance and development of photography as a means to capture and fix "the direct image of the surrounding reality" quickly became a new aesthetic and poetic language that allows the artist to interpret and reflect on what is observed. The photographer's eye is imprinted onto the image, establishing a conceptual dialogue with the play of light. This thesis proposes the creation of a new architectural skin by photographic printing on stony materials. The search for the expressiveness of materials as a support for artistic expression involves a change of scale, in transferring the photographic snapshot to architecture, and the use of a new support, in printing the photograph on architectural materials. The choice of the CO2 laser device as the photographic printing system on architectural stony materials is justified as the technique that allows the physical union of the image and the architectural project, generating added value through the art of photography. The choice of the investigated materials, Silestone® Blanco Zeus and GRC® with TX Active® Aria, is likewise justified: the investigation of this new architectural skin covers both the building envelope and its interior volume, closing the architectural "in & out" circle and endowing the architectural project with added value by introducing sustainable concepts of an aesthetic and environmental character.

Companies in the architecture sector directly involved in the production and distribution of Silestone® and GRC®, as well as companies specialized in photographic printing systems on materials, were consulted to establish the state of the art. The history of photography is reviewed from its origins to the development of the digital era, and its artistic condition is analysed. The photographic printing systems that have evolved in parallel with image-capture devices are compiled, and the CO2 laser printing system is described in depth. The manufacturing processes, technical characteristics, qualities and applications of the architectural stony materials Silestone® Blanco Zeus and GRC® with TX Active® Aria are described, together with the technique used to capture the photographic image, its artistic justification, and its CO2 laser printing process under different parameters on samples of the investigated materials.

The feasibility of developing the new architectural skin on Silestone® Blanco Zeus and GRC® with TX Active® Aria is verified by subjecting pieces printed under different parameters to three laboratory tests; for each test, the objective and procedure, the tested samples and their printing parameters, the analysis of the results, and the conclusions are given. Thermal amplitude test: the degree to which the printed images are affected by thermal contrasts is determined. Series of Silestone® Blanco Zeus and GRC® with TX Active® Aria samples printed with the CO2 laser are subjected to 12-hour cold-heat cycles spanning a total thermal amplitude of 102°C; microscopic photographs of each piece are taken systematically before and after the cycles, and the transformations the materials undergo under the action of the CO2 laser are observed. Ultraviolet (UV) radiation exposure test: the degree to which the printed images are affected when the self-cleaning capacity against organic particles is activated is determined. A series of GRC® with TX Active® Aria samples printed with the CO2 laser is subjected to 26-hour UV exposure cycles, after a procedure to activate the TX Active® additive. Organic contamination is simulated by the controlled application of Rhodamine B, an organic dye, and UV radiation is simulated with an ultraviolet bulb. Macroscopic photographs of the tested series are taken systematically before the application of Rhodamine B and at 00:00 h, 04:00 h and 26:00 h of the test, and the histograms of the photographs are downloaded and analysed as a record of the photocatalytic activity. Self-decontamination test of GRC® with TX Active® printed with the CO2 laser: it is verified whether the self-decontaminating capacity of GRC® with TX Active® is altered as a consequence of the CO2-laser-printed photographic image. A series of GRC® with TX Active® Aria samples printed with the CO2 laser is subjected to self-decontamination tests in a controlled atmosphere contaminated with nitrogen oxides, each tested piece being placed under a UV lamp; the photocatalytic activity is recorded from the variation in the nitrogen oxide concentration.

The analysis and interpretation of the laboratory test results are compiled and the general conclusions of the research are drawn. Future lines of research that could be developed in the field of photographic printing on architectural materials, building on the investigations carried out and their general conclusions, are outlined. Finally, the technological and artistic output generated by the previous research that gave rise to and developed this doctoral thesis is described.
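As an illustration of the histogram-based record of photocatalytic activity described above, a minimal Python sketch (not the thesis's actual analysis; the file names are hypothetical):

```python
# Track Rhodamine B discoloration through the brightness of macro photographs
# (illustrative only; file names are hypothetical placeholders).
import numpy as np
from PIL import Image

def mean_brightness(path):
    """Mean grey level of a photograph; rising values over the UV exposure
    suggest degradation of the dye by the photocatalytic additive."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    return img.mean()

for label in ("before", "00h", "04h", "26h"):   # hypothetical file naming
    print(label, mean_brightness(f"sample_{label}.jpg"))
```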
Abstract:
In this paper we investigate whether conventional text categorization methods suffice to infer different verbal intelligence levels. This research goal relies on the hypothesis that the vocabulary speakers use reflects their verbal intelligence. Automatic verbal intelligence estimation of users in a spoken language dialog system may be useful when defining an optimal dialog strategy, by improving its adaptation capabilities. The work is based on a corpus containing descriptions (i.e. monologs) of a short film by test persons with different educational backgrounds, together with the speakers' verbal intelligence scores. First, a one-way analysis of variance was performed to compare the monologs with the film transcription and to demonstrate that there are differences in the vocabulary used by test persons with different verbal intelligence levels. Then, for the classification task, the monologs were represented as feature vectors using the classical TF–IDF weighting scheme. The Naive Bayes, k-nearest neighbors and Rocchio classifiers were tested. We describe and compare these classification approaches, define the optimal classification parameters and discuss the classification results obtained.
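As an illustration of the classification setup named above, a minimal scikit-learn sketch (not the authors' code; the corpus is a toy placeholder, and NearestCentroid stands in for the Rocchio classifier):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
from sklearn.pipeline import make_pipeline

monologs = ["toy film description one", "toy film description two"]  # placeholders
levels = ["high", "low"]  # placeholder verbal intelligence labels

# TF-IDF vectors feed each of the three classifiers named in the abstract.
for clf in (MultinomialNB(), KNeighborsClassifier(n_neighbors=1), NearestCentroid()):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(monologs, levels)
    print(type(clf).__name__, model.predict(["toy film description one"]))
```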
Abstract:
"System identification deals with the problem of building mathematical models of dynamical systems based on observed data from the system" [1]. In the context of civil engineering, the system refers to a large-scale structure such as a building, bridge, or offshore structure, and identification mostly involves the determination of modal parameters (natural frequencies, damping ratios, and mode shapes). This paper presents modal identification results obtained by applying a state-of-the-art time-domain system identification method (data-driven stochastic subspace algorithms [2]) to output-only data measured on a steel arch bridge. First, a three-dimensional finite element model was developed for the numerical analysis of the structure using ANSYS. Modal analysis was carried out and modal parameters were extracted in the frequency range of interest, 0-10 Hz. The results of the finite element modal analysis were used to determine the locations of the sensors. After that, ambient vibration tests were conducted on April 23-24, 2009. The response of the structure was measured using eight accelerometers. Two stations of three sensors each were formed (triaxial stations); these sensors were held stationary for reference during the test. The two remaining sensors were placed at different measurement points along the bridge deck, where only vertical and transversal measurements were taken (biaxial stations). Point and interval estimates were obtained for the state-space model using these ambient vibration measurements. In parametric models (such as state-space models), the dynamic behaviour of a system is described by mathematical models, and mathematical relationships can then be established between the modal parameters and the estimated parameters (thus, experimental modal analysis is commonly used as a synonym for system identification). Stable modal parameters are found using a stabilization diagram. Furthermore, this paper proposes a method for assessing the precision of the estimates of the state-space model parameters (confidence intervals). The approach employs the nonparametric bootstrap procedure [3] and is applied to the subspace parameter estimation algorithm. Using the bootstrap results, a plot similar to a stabilization diagram is developed; these graphics differentiate system modes from spurious noise modes for a given model order. Additionally, using the modal assurance criterion, the experimental modes obtained were compared with those evaluated from the finite element analysis. Quite good agreement between the numerical and experimental results is observed.
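For reference, the modal assurance criterion used above can be computed as in the following minimal sketch (the textbook formula, not the paper's code; the two mode shapes are toy examples):

```python
import numpy as np

def mac(phi_a, phi_b):
    """MAC(a, b) = |phi_a^H phi_b|^2 / ((phi_a^H phi_a)(phi_b^H phi_b)):
    1 indicates perfectly correlated mode shapes, 0 orthogonal ones."""
    num = abs(np.vdot(phi_a, phi_b)) ** 2
    return num / (np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real)

phi_fem = np.array([0.0, 0.5, 0.9, 1.0, 0.9, 0.5, 0.0])      # toy FE mode shape
phi_exp = np.array([0.0, 0.45, 0.88, 1.0, 0.92, 0.48, 0.0])  # toy measured shape
print(f"MAC = {mac(phi_fem, phi_exp):.3f}")
```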
Abstract:
The purpose of this thesis is the implementation of efficient grid adaptation methods based on the adjoint equations within the framework of finite volume methods (FVM) for unstructured grid solvers. The adjoint-based methodology aims at adapting grids to improve the accuracy of a functional output of interest, such as the aerodynamic drag or lift. The methodology is based on a posteriori functional error estimation using the adjoint/dual-weighted residual (DWR) method. In this method, the error in a functional output can be directly related to local residual errors of the primal solution through the adjoint variables, which are obtained by solving the corresponding adjoint problem for the chosen functional. The common approach to introducing the DWR method within the FVM framework involves an auxiliary embedded grid obtained by uniform refinement of the initial grid. The storage of this mesh demands high computational resources; in 3D cases, the memory required can exceed that of the initial flow problem by over one order of magnitude. In this thesis, an alternative methodology for adapting the grid is proposed. Specifically, the DWR error estimation is reformulated on a coarser mesh level, using the τ-estimation method to approximate the truncation errors that enter the DWR method. An output-based adaptive algorithm is then designed in such a way that the basic ingredients of the standard adjoint method are retained while the associated computational cost is significantly reduced. Both the standard and the proposed adjoint-based adaptive methodologies have been incorporated into a finite volume flow solver commonly used in the European aeronautical industry, and the influence of the different numerical parameters involved in the algorithm has been investigated. Finally, the proposed method is compared with other grid adaptation approaches, and its computational efficiency is demonstrated on a series of representative test cases of aeronautical interest.
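The core relation behind the DWR method described above can be written, in a standard textbook form (the thesis's exact notation may differ), as

\[ \delta J \,=\, J(u) - J(u_h) \,\approx\, -\,\psi^{T} R(u_h), \]

where u_h is the discrete primal solution, R(u_h) the local residual, and ψ the adjoint solution for the output functional J; the τ-estimation step replaces the residual evaluated on the embedded fine grid with a truncation error estimate computed on a coarser mesh level.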
Abstract:
The calculation of the effective delayed neutron fraction, βeff, with Monte Carlo codes is a complex task due to the requirement of properly considering the adjoint weighting of delayed neutrons. Nevertheless, several techniques have been proposed to circumvent this difficulty and obtain accurate Monte Carlo results for βeff without the need to explicitly determine the adjoint flux. In this paper, we review some of these techniques; namely, we have analyzed two variants of what we call the k-eigenvalue technique and other techniques based on different interpretations of the physical meaning of the adjoint weighting. To test the validity of all these techniques we have implemented them with the MCNPX code and benchmarked them against a range of critical and subcritical systems for which either experimental or deterministic values of βeff are available. Furthermore, several nuclear data libraries have been used in order to assess the impact of nuclear data uncertainty on the calculated value of βeff.
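For reference, the simplest variant of the k-eigenvalue technique is commonly written as (a widely used approximation; the abstract does not spell out which variants are analyzed)

\[ \beta_{\mathrm{eff}} \,\approx\, 1 - \frac{k_p}{k}, \]

where k is the multiplication factor computed with both prompt and delayed neutrons and k_p is the one computed transporting prompt neutrons only.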
Abstract:
This document details the planning and development of a package that conforms to the S4 standard of the R programming language. The package consists of a set of methods and classes for generating multiple-choice exams and their solutions from an xls file, which plays the role of a database. The proposed design is object-oriented and develops a set of classes representing the contents of a multiple-choice assessment: statements, questions and answers. A simple prototype has been implemented with the basic functions needed to generate the tests. In addition, the documentation needed to create the package has been generated, meaning that every method has a help page, consultable from an R terminal, that includes execution examples for the method.
Abstract:
This paper presents an extensive and useful comparison of existing formulas for estimating wave forces on crown walls. The paper also provides valuable insights into crown wall behaviour, suggesting the use of the formulas for preliminary sizing and recommending, in any case, tests on a physical model to confirm the final design. The authors advise using more than one method to obtain results closer to reality, always taking into account the test conditions under which each formula was developed.
Abstract:
Background: One of the main challenges for biomedical research lies in the computer-assisted integrative study of large and increasingly complex combinations of data in order to understand molecular mechanisms. The preservation of the materials and methods of such computational experiments with clear annotations is essential for understanding an experiment, and this is increasingly recognized in the bioinformatics community. Our assumption is that offering means of digital, structured aggregation and annotation of the objects of an experiment will provide the necessary metadata for a scientist to understand and recreate the results of an experiment. To support this, we explored a model for the semantic description of a workflow-centric Research Object (RO), where an RO is defined as a resource that aggregates other resources, e.g., datasets, software, spreadsheets, text, etc. We applied this model to a case study in which we analysed human metabolite variation by workflows. Results: We present the application of the workflow-centric RO model to our bioinformatics case study. Three workflows were produced following recently defined best practices for workflow design. By modelling the experiment as an RO, we were able to automatically query the experiment and answer questions such as "which particular data was input to a particular workflow to test a particular hypothesis?" and "which particular conclusions were drawn from a particular workflow?". Conclusions: Applying a workflow-centric RO model to aggregate and annotate the resources used in a bioinformatics experiment allowed us to retrieve the conclusions of the experiment in the context of the driving hypothesis, the executed workflows and their input data. The RO model is an extendable reference model that can be used by other systems as well.
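As an illustration of how such an aggregation can be queried, a minimal rdflib sketch (the manifest file name is hypothetical; ore:aggregates is the OAI-ORE aggregation term on which the RO model builds, but this is not the paper's actual code or full vocabulary):

```python
import rdflib

g = rdflib.Graph()
g.parse("manifest.rdf")  # hypothetical RO manifest describing the aggregation

# List every resource (dataset, workflow, document, ...) the RO aggregates.
ORE = rdflib.Namespace("http://www.openarchives.org/ore/terms/")
for ro, resource in g.subject_objects(ORE.aggregates):
    print(ro, "aggregates", resource)
```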
Abstract:
Context. This thesis is framed in experimental software engineering. More concretely, it addresses the problems that arise when assessing process conformance in test-driven development experiments conducted by UPM's Experimental Software Engineering group. Process conformance was studied using the Eclipse plug-in tool Besouro. It has been observed that Besouro does not work correctly in some circumstances, which casts doubt on the correctness of the existing experimental data and renders it useless. Aim. The main objective of this work is the identification and correction of Besouro's faults. A secondary goal is fixing the datasets already obtained in past experiments to the maximum possible extent, so that existing experimental results can be used with confidence. Method. (1) Test Besouro using different sequences of events (creation methods, assertions, etc.) to identify the underlying faults; (2) fix the code; and (3) fix the datasets using code specially created for this purpose. Results. (1) We confirmed the existence of several faults in Besouro's code that affected Test-First and Test-Last episode identification; these faults caused the incorrect identification of 20% of the episodes. (2) We were able to fix Besouro's code. (3) The correction of the existing datasets was possible, subject to some restrictions (such as the impossibility of tracing code size increases to programming time). Conclusion. The results of past experiments that depend on Besouro's data cannot be trusted. We suspect that more faults remain in Besouro's code, whose identification requires further analysis.
Abstract:
The structural connectivity of the brain is considered to encode species-wise and subject-wise patterns that will unlock large areas of understanding of the human brain. Currently, diffusion MRI of the living brain makes it possible to map the microstructure of tissue, allowing the pathways of the fiber bundles connecting the cortical regions to be tracked across the brain. These bundles are summarized in a network representation called the connectome, which is analyzed using graph theory. The extraction of the connectome from diffusion MRI requires a large processing flow including image enhancement, reconstruction, segmentation, registration, diffusion tracking, etc. Although a concerted effort has been devoted to the definition of standard pipelines for connectome extraction, it is still crucial to define quality assessment protocols for these workflows. The definition of quality control protocols is hindered by the complexity of the pipelines under test and the absolute lack of gold standards for diffusion MRI data. Here we characterize the impact on structural connectivity workflows of the geometrical deformation typically shown by diffusion MRI data due to the inhomogeneity of magnetic susceptibility across the imaged object. We propose an evaluation framework, including whole-brain realistic phantoms, to compare the existing methodologies for correcting these artifacts. Additionally, we design and implement an image segmentation and registration method that avoids the correction task and enables processing in the native space of the diffusion data. We release PySDCev, an evaluation framework for the quality control of connectivity pipelines, specialized in the study of susceptibility-derived distortions. In this context, we propose Diffantom, a whole-brain phantom that provides a solution to the lack of gold-standard data. The three correction methodologies under comparison performed reasonably, and it is difficult to determine which method is more advisable. We demonstrate that susceptibility-derived correction is necessary to increase the sensitivity of connectivity pipelines, at the cost of specificity. Finally, with the registration and segmentation tool called regseg, we demonstrate how the problem of susceptibility-derived distortion can be overcome, allowing data to be used in their original coordinates. This is crucial to increase the sensitivity of the whole pipeline without any loss in specificity.
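As an illustration of the graph-theoretical analysis of a connectome mentioned above, a minimal networkx sketch (generic, unrelated to PySDCev or Diffantom; the connectivity matrix is a random placeholder for streamline counts between regions):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(42)
n_regions = 8
counts = rng.integers(0, 50, size=(n_regions, n_regions))
counts = np.triu(counts, 1)        # keep one triangle,
adjacency = counts + counts.T      # then symmetrize the matrix

G = nx.from_numpy_array(adjacency)  # weighted, undirected connectome graph
print("node degrees:", dict(G.degree()))
print("global efficiency:", round(nx.global_efficiency(G), 3))
```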
Abstract:
In this work a p-adaptation (modification of the polynomial order) strategy based on the minimization of the truncation error is developed for high order discontinuous Galerkin methods. The truncation error is approximated by means of a truncation error estimation procedure and enables the identification of mesh regions that require adaptation. Three truncation error estimation approaches are developed, termed a posteriori, quasi-a priori and quasi-a priori corrected. Fine solutions, obtained by enriching the polynomial order, are required to solve the numerical problem with adequate accuracy. Among the three estimation methods, the first needs time-converged fine solutions, while the last two rely on non-converged solutions, which leads to faster computations. Based on these truncation error estimation methods, algorithms for mesh adaptation were designed and tested. Firstly, an isotropic adaptation approach is presented, which leads to equally distributed polynomial orders in the different coordinate directions. This first implementation is improved by incorporating a method to extrapolate the truncation error, resulting in a significant reduction of computational cost. Secondly, the employed high order method permits the spatial decoupling of the estimated errors and enables anisotropic p-adaptation. The incorporation of anisotropic features leads to meshes with different polynomial orders in the different coordinate directions, such that flow features related to the geometry are better resolved. These adaptations result in a significant reduction of degrees of freedom and computational cost, with the amount of improvement depending on the test case. Finally, the anisotropic approach is extended by using error extrapolation, which leads to an even higher reduction in computational cost. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier-Stokes equations. The main result is that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain speedups by factors of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.
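For reference, the τ-estimation idea underlying the three approaches can be summarized, in a standard form (the thesis's exact notation may differ), as

\[ \tau^{p} \,\approx\, \hat{R}^{q}\!\left(I_{p}^{q}\, u^{p}\right), \qquad q > p, \]

where u^p is the solution of the order-p discretization, I_p^q its injection into the enriched order-q space, and \hat{R}^q the order-q discrete residual operator: evaluating the higher-order residual on the injected lower-order solution yields an estimate of the truncation error of the order-p discretization, and the quasi-a priori variants accept a non-converged u^p.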
Abstract:
Despite their ornamental value, some Melocactus species (Cactaceae) are threatened by several factors, and only sexual propagation is possible. Thus, artificial seed banks are an appropriate method for their ex situ conservation, and suitable methods to germinate the seeds of these species and to monitor their viability are necessary. This work explored the use of the tetrazolium test for monitoring seed viability in two Melocactus species. There was a correlation between the percentage of stained embryos, considering different stained areas or staining intensities, and the germination percentage. Unviable embryos did not stain. The curved shape of the embryos may explain the partial staining observed in most cases.
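As an illustration of the correlation analysis described above, a minimal scipy sketch (the percentages are made-up placeholders, not the study's data):

```python
from scipy import stats

stained_pct = [95, 80, 60, 40, 20, 5]      # % of stained embryos (placeholder)
germination_pct = [90, 78, 55, 42, 18, 3]  # % germination (placeholder)

# Pearson correlation between tetrazolium staining and germination.
r, p_value = stats.pearsonr(stained_pct, germination_pct)
print(f"Pearson r = {r:.2f}, p = {p_value:.3g}")
```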