976 results for FUNCTIONAL APPLICATIONS
Abstract:
Graphene, a two-dimensional carbon material, exhibits unique properties that make it promising for photovoltaic devices. The dye-sensitized solar cell (DSSC) is representative of third-generation photovoltaic devices. It is therefore important to synthesize graphene with special structures that provide excellent properties for dye-sensitized solar cells. This dissertation research focused on (1) the effect of oxygen content on the structure of graphite oxide, (2) the stability of graphene oxide solution, (3) the application of graphene precipitated from graphene oxide solution as a counter electrode for DSSCs, (4) the development of a novel synthesis method for three-dimensional graphene with a honeycomb-like structure, and (5) the exploration of honeycomb-structured graphene (HSG) as counter electrodes for DSSCs. Graphite oxide is a crucial precursor for synthesizing graphene sheets via the chemical exfoliation method. The relationship between the oxygen content and the structure of graphite oxide had not previously been explored. In this research, the oxygen content of graphite oxide was tuned by changing the oxidation time, and the effect of oxygen content on the structure of graphite oxide was evaluated. It was found that the saturated ratio of oxygen to carbon is 0.47. The types of functional groups in graphite oxides, which are epoxy, hydroxyl, and carboxyl groups, are independent of oxygen content. However, the interplanar spacing and BET surface area of graphite oxide increase linearly with increasing O/C ratio. Graphene oxide (GO) dissolves easily in water to form a stable homogeneous solution, which can be used to fabricate graphene films and graphene-based composites. This work is the first to evaluate the stability of graphene oxide solution. It was found that introducing strong electrolytes (HCl, LiOH, LiCl) into GO solution causes GO precipitation, indicating that electrostatic repulsion plays a critical role in stabilizing aqueous GO solution. Furthermore, HCl-induced GO precipitation is a feasible approach for depositing GO sheets on a substrate as a Pt-free counter electrode for a dye-sensitized solar cell (DSSC), which exhibited a power conversion efficiency of 1.65%. To enable broad and practical applications, large-scale synthesis with controllable integration of individual graphene sheets is essential. A novel strategy for synthesizing graphene sheets with a three-dimensional (3D) honeycomb-like structure was invented in this project, based on a simple and novel chemical reaction (Li2O and CO to graphene and Li2CO3). The simultaneous formation of Li2CO3 with graphene not only isolates the graphene sheets from each other, preventing graphite formation during the process, but also determines the locally curved shape of the graphene sheets. After removing the Li2CO3, 3D graphene sheets with a honeycomb-like structure were obtained. This would be the first approach to synthesize 3D graphene sheets with a controllable shape. Furthermore, it has been demonstrated that the 3D honeycomb-structured graphene (HSG) possesses excellent electrical conductivity and high catalytic activity. As a result, DSSCs with HSG counter electrodes exhibit an energy conversion efficiency as high as 7.8%, which is comparable to that of an expensive noble-metal Pt electrode.
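Balancing the reaction named above (an inference from the stated reactants and products, not a stoichiometry quoted from the dissertation), the formation step would read

$$\mathrm{Li_2O} + 2\,\mathrm{CO} \;\longrightarrow\; \mathrm{C}_{(\mathrm{graphene})} + \mathrm{Li_2CO_3},$$

so one carbon atom is deposited as graphene for every formula unit of Li2CO3 formed, consistent with Li2CO3 acting as the in situ template between the growing sheets.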
Abstract:
Cancer is caused by a complex pattern of molecular perturbations. To understand the biology of cancer, it is therefore important to look at the activation state of key proteins and signaling networks. The limited amount of sample material available from patients and the complexity of protein expression patterns make the use of traditional protein analysis methods particularly difficult. In addition, the only approach currently available for performing functional studies is the use of serial biopsies, which is limited by ethical constraints and patient acceptance. The goal of this work was to establish a 3-D ex vivo culture technique in combination with reverse-phase protein microarrays (RPPM) as a novel experimental tool for use in cancer research. The RPPM platform allows the parallel profiling of large numbers of protein analytes to determine their relative abundance and activation level. Cancer tissue and the corresponding normal tissue controls from patients with colorectal cancer were cultured ex vivo. At various time points, the cultured samples were processed into lysates and analyzed by RPPM to assess the expression of carcinoembryonic antigen (CEA) and 24 proteins involved in the regulation of apoptosis. The methodology displayed good robustness and low system noise. As a proof of concept, CEA expression was significantly higher in tumor than in normal tissue (p<0.0001). The caspase 9 expression signal was lower in tumor tissue than in normal tissue (p<0.001). Cleaved caspase 8 (p=0.014), Bad (p=0.007), Bim (p=0.007), p73 (p=0.005), PARP (p<0.001), and cleaved PARP (p=0.007) were differentially expressed between normal liver and normal colon tissue. We demonstrate here the feasibility of using RPPM technology with 3-D ex vivo cultured samples. This approach is useful for investigating complex patterns of protein expression and modification over time. It should allow functional proteomics in patient samples, with applications such as pharmacodynamic analyses in drug development.
Abstract:
Dynamic, unanticipated adaptation of running systems is of interest in a variety of situations, ranging from functional upgrades to on-the-fly debugging or monitoring of critical applications. In this paper we study a particular form of computational reflection, called unanticipated partial behavioral reflection, which is particularly well suited for unanticipated adaptation of real-world systems. Our proposal combines the dynamicity of unanticipated reflection, i.e. reflection that does not require any advance preparation of the code, with the selectivity and efficiency of partial behavioral reflection. First, we propose unanticipated partial behavioral reflection, which enables the developer to precisely select the required reifications, to flexibly engineer the metalevel, and to introduce the metabehavior dynamically. Second, we present a system supporting unanticipated partial behavioral reflection in Squeak Smalltalk, called Geppetto, and illustrate its use with a concrete example of a web application. Benchmarks validate the applicability of our proposal as an extension to the standard reflective abilities of Smalltalk.
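To give a flavour of what partial behavioral reflection means in practice, the following rough Python analogue (hypothetical names; not Geppetto's Smalltalk API) installs a link on one selected operation of one selected class at run time, reifies each invocation for a metaobject, and can be removed again without any prior preparation of the base code.

import functools

class Metaobject:
    def handle(self, receiver, selector, args):
        # Meta-level behavior: here we only log the reified invocation.
        print(f"reified call: {selector} on {receiver!r} with {args}")

def install_link(cls, selector, metaobject):
    """Wrap a single method so that each call is reified before proceeding."""
    original = getattr(cls, selector)

    @functools.wraps(original)
    def hook(self, *args, **kwargs):
        metaobject.handle(self, selector, args)      # reification of the message receive
        return original(self, *args, **kwargs)       # resume base-level behavior

    setattr(cls, selector, hook)
    return lambda: setattr(cls, selector, original)  # uninstall: restore the original method

class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

uninstall = install_link(Account, "deposit", Metaobject())
a = Account()
a.deposit(10)   # reified and forwarded to the metaobject
uninstall()
a.deposit(5)    # back to plain, unreflective behavior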
Abstract:
In rapidly evolving domains such as Computer Assisted Orthopaedic Surgery (CAOS), emphasis is often put first on innovation and new functionality rather than on developing the common infrastructure needed to support integration and reuse of these innovations. In fact, developing such an infrastructure is often considered a high-risk venture, given the volatility of such a domain. We present CompAS, a method that exploits the very evolution of innovations in the domain to carry out the necessary quantitative and qualitative commonality and variability analysis, especially in the case of scarce system documentation. We show how our technique applies to the CAOS domain by using conference proceedings as a key source of information about the evolution of features in CAOS systems over a period of several years. We detect and classify evolution patterns to determine functional commonality and variability. We also identify non-functional requirements to help capture domain variability. We have validated our approach by evaluating the degree to which representative test systems can be covered by the common and variable features produced by our analysis.
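A toy sketch of the kind of occurrence-based classification described above (hypothetical feature names and threshold; not the CompAS implementation):

years = range(2000, 2008)

feature_occurrences = {                       # feature -> years in which it is reported
    "bone registration": set(range(2000, 2008)),
    "fluoroscopy-based navigation": {2001, 2002, 2003, 2005, 2006},
    "robotic cutting guide": {2006, 2007},
}

def classify(occurrences, period, persistence=0.75):
    # A feature reported in most years of the period is treated as common (core);
    # intermittent or late-appearing features are treated as variable.
    coverage = len(occurrences & set(period)) / len(period)
    return "common" if coverage >= persistence else "variable"

for feature, occ in feature_occurrences.items():
    print(f"{feature}: {classify(occ, years)}")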
Abstract:
Mixed Reality (MR) aims to link virtual entities with the real world and has many applications, for example in the military and medical domains [JBL+00, NFB07]. In many MR systems, and more precisely in augmented scenes, the application needs to render the virtual part accurately and at the right time. To achieve this, such systems acquire data related to the real world from a set of sensors before rendering the virtual entities. A suitable system architecture should minimize these delays so as to keep the overall system delay (also called end-to-end latency) within the requirements for real-time performance. In this context, we propose a compositional modeling framework for MR software architectures that allows the time constraints of such systems to be specified, simulated and formally validated. Our approach is based, first, on a functional decomposition of such systems into generic components. The resulting elements, as well as their typical interactions, give rise to generic representations in terms of timed automata. A whole system is then obtained as a composition of such components. To write specifications, a textual language named MIRELA (MIxed REality LAnguage) is proposed, along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for simulation and verification of the time constraints. These automata may also be used to generate source code skeletons for an implementation on an MR platform. The approach is illustrated first on a small example. A realistic case study is also developed; it is modeled by several timed automata synchronizing through channels and including a large number of time constraints. Both systems have been simulated in UPPAAL and checked against the required behavioral properties.
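As a minimal illustration of the end-to-end latency budget such specifications must respect (hypothetical component names and bounds; not MIRELA syntax), one can compose the best- and worst-case delays of a generic MR pipeline and compare them with the real-time requirement:

pipeline = [
    ("sensor_acquisition", (2, 5)),    # (best-case ms, worst-case ms), assumed values
    ("tracking_fusion",    (1, 4)),
    ("virtual_rendering",  (8, 12)),
]
REQUIREMENT_MS = 25                    # assumed end-to-end budget

best = sum(lo for _, (lo, hi) in pipeline)
worst = sum(hi for _, (lo, hi) in pipeline)
print(f"end-to-end latency in [{best}, {worst}] ms")
print("requirement met in the worst case:", worst <= REQUIREMENT_MS)

The timed-automata model plays a similar role, but additionally captures synchronisation between components and lets UPPAAL check the bound over all behaviors rather than for a single composition.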
Abstract:
Tropical wetlands are estimated to account for about 50% of natural wetland methane (CH4) emissions and to explain a large fraction of the observed CH4 variability on timescales ranging from glacial–interglacial cycles to the currently observed year-to-year variability. Despite their importance, however, tropical wetlands are poorly represented in global models aiming to predict global CH4 emissions. This publication documents a first step in the development of a process-based model of CH4 emissions from tropical floodplains for global applications. For this purpose, the LPX-Bern Dynamic Global Vegetation Model (LPX hereafter) was slightly modified to represent floodplain hydrology, vegetation and the associated CH4 emissions. The extent of tropical floodplains was prescribed using output from the spatially explicit hydrology model PCR-GLOBWB. We introduced new plant functional types (PFTs) that explicitly represent floodplain vegetation. The PFT parameterizations were evaluated against available remote-sensing data sets (GLC2000 land cover and MODIS Net Primary Productivity). Simulated CH4 flux densities were evaluated against field observations and regional flux inventories. Simulated CH4 emissions at the Amazon Basin scale were compared to model simulations performed in the WETCHIMP intercomparison project. We found that LPX reproduces the average magnitude of observed net CH4 flux densities for the Amazon Basin. However, the model does not reproduce the variability between sites or between years within a site. Unfortunately, site information is too limited to confirm or refute some model features. At the Amazon Basin scale, our results underline the large uncertainty in the magnitude of wetland CH4 emissions. Sensitivity analyses gave insights into the main drivers of floodplain CH4 emissions and their associated uncertainties. In particular, uncertainties in floodplain extent (i.e., the difference between GLC2000 and PCR-GLOBWB output) modulate the simulated emissions by a factor of about 2. Our best estimate, using PCR-GLOBWB in combination with GLC2000, leads to simulated Amazon-integrated emissions of 44.4 ± 4.8 Tg yr−1. Additionally, the LPX emissions are highly sensitive to vegetation distribution: two simulations with the same mean PFT cover, but different spatial distributions of grasslands within the basin, differed in their emissions by about 20%. Correcting the LPX-simulated NPP using MODIS reduces the Amazon emissions by 11.3%. Finally, owing to an intrinsic limitation of LPX in accounting for seasonality in floodplain extent, the model failed to reproduce the full dynamics of CH4 emissions, but we propose solutions to this issue. The interannual variability (IAV) of the emissions increases by 90% if the IAV in floodplain extent is accounted for, but still remains lower than in most of the WETCHIMP models. While our model includes more mechanisms specific to tropical floodplains, we were unable to reduce the uncertainty in the magnitude of wetland CH4 emissions of the Amazon Basin. Our results helped identify and prioritize directions towards more accurate estimates of tropical CH4 emissions, and they stress the need for more research to constrain floodplain CH4 emissions and their temporal variability, even before other fundamental mechanisms such as floating macrophytes or lateral water fluxes are included.
Abstract:
Recent findings in the fields of biomaterials and tissue engineering provide evidence that surface-immobilised growth factors display enhanced stability and induce prolonged function. Cell response can thus be regulated by material properties directly at the site of interest. To this end, we developed scaffolds with covalently bound vascular endothelial growth factor (VEGF) and evaluated their mitogenic effect on endothelial cells in vitro. Nano-fibrous (254±133 nm) or micro-fibrous (4.0±0.4 μm) poly(ɛ-caprolactone) (PCL) non-wovens were produced by electrospinning and coated in a radio-frequency (RF) plasma process to deposit an oxygen-functional hydrocarbon layer. The implemented carboxylic acid groups were converted into amine-reactive esters and covalently coupled to VEGF through the formation of stable amide bonds (standard EDC/NHS chemistry). Substrates were analysed by X-ray photoelectron spectroscopy (XPS), enzyme-linked immunosorbent assays (ELISA) and immunohistochemistry (anti-VEGF antibody and VEGF-R2 binding). Depending on the reaction conditions, immobilised VEGF was present at 127±47 ng to 941±199 ng per substrate (6 mm diameter), corresponding to surface concentrations of 4.5 ng mm−2 and 33.3 ng mm−2, respectively. Immunohistochemistry provided evidence for the biological integrity of the immobilised VEGF. The numbers of primary and immortalised endothelial cells were significantly higher on VEGF-functionalised scaffolds than on native PCL scaffolds, indicating a sustained activity of the immobilised VEGF over a culture period of nine days. We present a versatile method for the fabrication of growth factor-loaded scaffolds at specific concentrations.
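For orientation, the quoted surface concentrations are consistent with the substrate geometry, assuming the 6 mm diameter refers to a circular substrate area:

$$A=\pi r^{2}=\pi\,(3\ \mathrm{mm})^{2}\approx 28.3\ \mathrm{mm^{2}},\qquad \frac{127\ \mathrm{ng}}{28.3\ \mathrm{mm^{2}}}\approx 4.5\ \mathrm{ng\,mm^{-2}},\qquad \frac{941\ \mathrm{ng}}{28.3\ \mathrm{mm^{2}}}\approx 33.3\ \mathrm{ng\,mm^{-2}}.$$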
Abstract:
Our research goals are focused on the preparation of novel molecule-based materials that possess specifically designed properties in solution or in the solid state, e.g. self-assembly, magnetism, conductivity and spin-crossover phenomena. Most of our systems incorporate paramagnetic transition metal ions, and the search for new molecule-based magnetic materials is a prominent theme. Specific areas of research include the preparation and study of oxalate-based 2D and 3D magnets, probing the versatility of octacyanometalate building blocks as precursors for new molecular magnets, and the preparation of new tetrathiafulvalene (TTF) derivatives for applications in molecular and supramolecular chemistry.
Abstract:
High precision in motor skill performance, in both sport and other domains (e.g. surgery and aviation), requires the efficient coupling of perceptual inputs (e.g. vision) and motor actions. A particular gaze strategy, which has received much attention within the literature, has been shown to predict both inter-individual (expert vs. novice) and intra-individual (successful vs. unsuccessful) differences in motor performance (see Vine et al., 2014). Vickers (1996) labelled this phenomenon the quiet eye (QE), defined as the final fixation before the initiation of the crucial phase of movement. While the positive influence of a long QE on accuracy has been revealed in a range of different motor skills, a growing number of studies suggest that the relationship between QE and motor performance is not entirely monotonic. This raises interesting questions regarding the QE's purview and the theoretical approaches explaining its functionality. This talk aims to present an overview of the issues described above and to discuss contemporary research and experimental approaches to examining the QE phenomenon. In the first part of the talk, Dr. Vine will provide a brief and critical review of the literature, highlighting recent empirical advancements and potential directions for future research. In the second part, Dr. Klostermann will present three different theoretical approaches to explaining the relationship between QE and motor performance. Drawing upon aspects of all three of these theoretical approaches, a functional inhibition role for the QE (related to movement parameterisation) will be proposed.
Abstract:
Here we report the first study of the electrochemical energy storage application of a surface-immobilized ruthenium complex multilayer thin film with anion storage capability. We employed a novel dinuclear ruthenium complex with tetrapodal anchoring groups to build well-ordered, redox-active multilayer coatings on an indium tin oxide (ITO) surface using a layer-by-layer self-assembly process. Cyclic voltammetry (CV), UV-visible (UV-Vis) and Raman spectroscopy showed a linear increase of the peak current, absorbance and Raman intensities, respectively, with the number of layers. These results indicate the formation of well-ordered multilayers of the ruthenium complex on ITO, which is further supported by X-ray photoelectron spectroscopy analysis. The thickness of the layers can be controlled with nanometer precision. In particular, the thickest film studied (65 molecular layers, approx. 120 nm thick) demonstrated fast electrochemical oxidation/reduction, indicating very low attenuation of charge transfer within the multilayer. In situ UV-Vis and resonance Raman spectroscopy demonstrated the reversible electrochromic/redox behavior of the ruthenium complex multilayer films on ITO as a function of the electrode potential, an ideal prerequisite for, e.g., smart electrochemical energy storage applications. Galvanostatic charge–discharge experiments demonstrated pseudocapacitive behavior of the multilayer film, with a good specific capacitance of 92.2 F g−1 at a current density of 10 μA cm−2 and excellent cycling stability. As demonstrated in our prototypical experiments, the fine control of physicochemical properties on the nanometer scale and the relatively good stability of the layers under ambient conditions make multilayer coatings of this type an excellent material for, e.g., electrochemical energy storage, as interlayers in inverted bulk-heterojunction solar cells, and as functional components in molecular electronics applications.
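The specific capacitance cited from the galvanostatic experiments is conventionally extracted from the discharge branch with the standard relation (a generic formula, not one quoted from this work):

$$C_{sp}=\frac{I\,\Delta t}{m\,\Delta V},$$

where $I$ is the applied current, $\Delta t$ the discharge time, $m$ the mass of active material and $\Delta V$ the potential window.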
Abstract:
Next-generation DNA sequencing platforms can effectively detect the entire spectrum of genomic variation and are emerging as a major tool for the systematic exploration of the universe of variants and interactions across the entire genome. However, the data produced by next-generation sequencing technologies suffer from three basic problems: sequence errors, assembly errors, and missing data. Current statistical methods for genetic analysis are well suited for detecting the association of common variants, but are less suited to rare variants. This poses a great challenge for sequence-based genetic studies of complex diseases. This dissertation research used the genome continuum model as a general principle, and stochastic calculus and functional data analysis as tools, to develop novel and powerful statistical methods for the next generation of association studies of both qualitative and quantitative traits in the context of sequencing data, ultimately shifting the paradigm of association analysis from the current locus-by-locus analysis to the collective analysis of genome regions. In this project, functional principal component (FPC) methods coupled with high-dimensional data reduction techniques were used to develop novel and powerful methods for testing the association of the entire spectrum of genetic variation within a segment of the genome or a gene, regardless of whether the variants are common or rare. Classical quantitative genetic methods suffer from high type I error rates and low power for rare variants. To overcome these limitations for resequencing data, this project used functional linear models with a scalar response to develop statistics for identifying quantitative trait loci (QTLs) for both common and rare variants. To illustrate their applications, the functional linear models were applied to five quantitative traits in the Framingham Heart Study. This project also proposed the novel concept of gene-gene co-association, in which a gene or a genomic region is taken as the unit of association analysis, and used stochastic calculus to develop a unified framework for testing the association of multiple genes or genomic regions for both common and rare alleles. The proposed methods were applied to gene-gene co-association analysis of psoriasis in two independent GWAS datasets, which led to the discovery of networks significantly associated with psoriasis.
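As a rough sketch of what a region-based functional association test looks like (a minimal numpy example on simulated genotypes; not the statistics developed in this dissertation), each individual's genotype profile over a genomic region is expanded in a smooth basis of position and the resulting scores are tested jointly against the trait:

import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(0)
n, m = 500, 40                                  # individuals, variants in the region
pos = np.sort(rng.uniform(0, 1, m))             # variant positions rescaled to [0, 1]
X = rng.binomial(2, 0.05, size=(n, m)).astype(float)   # mostly rare genotypes
y = 0.8 * X[:, m // 2] + rng.normal(size=n)     # trait with one causal variant

K = 5
B = legvander(2 * pos - 1, K - 1)               # m x K Legendre basis at variant positions
S = X @ B / m                                   # n x K functional scores per individual

S1 = np.column_stack([np.ones(n), S])           # scalar-response functional linear model: y ~ S
beta, *_ = np.linalg.lstsq(S1, y, rcond=None)
rss0 = np.sum((y - y.mean()) ** 2)
rss1 = np.sum((y - S1 @ beta) ** 2)
F = ((rss0 - rss1) / K) / (rss1 / (n - K - 1))  # joint F-test for the whole region
print(f"region-level F statistic: {F:.2f}")

The region is thus tested collectively through a handful of smooth scores rather than variant by variant, which is the shift away from locus-by-locus analysis described above.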
Abstract:
Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used in industry to model fluid flow phenomena. Several fluid flow model equations have been employed over the last decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy depend strongly on the fluid flow model equation and on the spatial dimension of the problem considered. While simple models based on perfect flows, such as panel methods or potential flow models, can be solved very quickly, they usually suffer from poor accuracy when simulating real flows (transonic, viscous). On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions, but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be found for engineering applications. A discretisation technique widely used in industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. With this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of the nodes forming the mesh. Therefore, a good compromise between accuracy and computational time may be obtained by carefully defining the mesh. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases it is impossible to simply guess which regions of the computational domain will affect the accuracy most. It is therefore desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than with their structured counterpart. However, the adaptive methods currently in use still leave an open question: how to drive the adaptation efficiently? Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has gone into developing sensors based on numerical error, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations and computing the sensor on an embedded mesh. It would therefore be desirable to develop a more affordable numerical error estimation method. The current work aims at estimating the truncation error, which arises when a partial differential equation is discretised; it consists of the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling. This work is organised as follows: Chap. 1 contains a short review of mesh adaptation techniques as well as numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented.
Sec. 1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predict them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, together with a complete verification procedure. Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted to and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted to and accepted in the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
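As background for the quantity being estimated (a standard textbook formulation, not one quoted from the thesis): for a flow model $\mathcal{L}(u)=0$ discretised as $\mathcal{L}_h(u_h)=0$, the truncation error is the residual left by the exact solution in the discrete operator,

$$\tau_h \;=\; \mathcal{L}_h\!\left(I_h u\right) - I_h\,\mathcal{L}(u) \;=\; \mathcal{L}_h\!\left(I_h u\right),$$

where $I_h$ restricts the continuous solution to the mesh. A common way to estimate it without knowing $u$ is to evaluate a coarser-mesh operator on the restricted numerical solution,

$$\tau_h^{\,2h}(u_h)\;\approx\;\mathcal{L}_{2h}\!\left(I_h^{2h}u_h\right)-I_h^{2h}\,\mathcal{L}_h(u_h),$$

and it is this type of estimate that can drive mesh adaptation or act as a source term in τ-extrapolation.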
Abstract:
A good understanding of land surface processes is considered a key subject in the environmental sciences. The spatio-temporal coverage of remote sensing data provides continuous observations with a high temporal frequency, allowing the assessment of ecosystem evolution at different temporal and spatial scales. Although the value of remote sensing time series has been firmly proved, only a few time series methods have been developed for analyzing these data in a quantitative and continuous manner. In the present dissertation, a working framework to exploit remote sensing time series is proposed, based on the combination of time series analysis and a phenometric approach. The main goal is to demonstrate the use of remote sensing time series to analyze the dynamics of environmental variables quantitatively.
The specific objectives are (1) to assess environmental variables based on remote sensing time series and (2) to develop empirical models to forecast environmental variables. These objectives have been achieved in four applications, whose specific objectives are: (1) assessing and mapping cotton crop phenological stages using spectral and phenometric analyses, (2) assessing and modeling fire seasonality in two different ecoregions with dynamic models, (3) forecasting forest fire risk on a pixel basis with dynamic models, and (4) assessing vegetation functioning based on temporal autocorrelation and phenometric analysis. The results of this dissertation show the usefulness of function-fitting procedures to model the spectral indices AS1 and AS2. Phenometrics derived from the function-fitting procedure make it possible to identify cotton crop phenological stages. Spectral analysis has demonstrated quantitatively the presence of one cycle in AS2 and two in AS1, as well as the unimodal and bimodal behaviour of fire seasonality in the Mediterranean and temperate ecoregions, respectively. Autoregressive models have been used to characterize the dynamics of fire seasonality in the two ecoregions and to forecast fire risk accurately on a pixel basis. The usefulness of temporal autocorrelation to define and characterize land surface functioning has been demonstrated. Finally, the "Optical Functional Types" concept has been proposed, in which pixels are considered as temporal units and analyzed according to their temporal dynamics or functioning.
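As an illustration of the per-pixel forecasting step (a minimal numpy sketch on simulated data; not the dissertation's implementation), an autoregressive model can be fitted to a pixel's monthly time series and used to predict the next observation:

import numpy as np

def fit_ar(series, p):
    """Ordinary least squares fit of an AR(p) model with intercept."""
    X = np.column_stack([series[p - k - 1:len(series) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    y = series[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef                                   # [intercept, phi_1, ..., phi_p]

def forecast_next(series, coef):
    p = len(coef) - 1
    lags = series[-1:-p - 1:-1]                   # the most recent p values, newest first
    return coef[0] + coef[1:] @ lags

rng = np.random.default_rng(1)
t = np.arange(240)                                # 20 years of monthly observations
pixel_series = 0.5 + 0.4 * np.sin(2 * np.pi * t / 12) + 0.05 * rng.normal(size=t.size)

coef = fit_ar(pixel_series, p=12)                 # 12 lags capture the annual cycle
print("forecast for next month:", forecast_next(pixel_series, coef))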
Abstract:
A new three-dimensional analytic optics design method is presented that enables the coupling of three ray sets with only two free-form lens surfaces. Closely related to the Simultaneous Multiple Surface method in three dimensions (SMS3D), it is derived directly from Fermat's principle, leading to multiple sets of functional differential equations. The general solution of these equations makes it possible to calculate more than 80 coefficients for each implicit surface function. Ray-tracing simulations of these free-form lenses demonstrate superior imaging performance for applications with a high aspect ratio, compared to conventional rotationally symmetric systems.
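For context, the design conditions that Fermat's principle imposes on each coupled ray set amount to requiring a constant optical path length between the corresponding input and output wavefronts (a generic statement with generic symbols, not the paper's own equations):

$$\sum_{i} n_i\,\ell_i \;=\; \mathrm{const},$$

where $n_i$ are the refractive indices and $\ell_i$ the geometric lengths of the successive ray segments; imposing this simultaneously for three ray sets on two free-form surfaces is what leads to the sets of functional differential equations mentioned above.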
Abstract:
Information integration is a very important topic. Reusing knowledge and having common, exchangeable representations have been active research topics in process systems engineering. In this paper we deal with information integration in two different ways: the first shares knowledge between heterogeneous applications, and the second integrates two different (but complementary) types of knowledge, functional and structural. A new architecture to integrate these representations and use them for several purposes is presented in this paper.