12 results for Semi-Analytic Solution
at Universidad Politécnica de Madrid
Abstract:
In this work, a new two-dimensional optics design method is proposed that enables the coupling of three ray sets with two lens surfaces. The method is especially important for optical systems designed for wide field of view and with clearly separated optical surfaces. Fermat’s principle is used to deduce a set of functional differential equations fully describing the entire optical system. The presented general analytic solution makes it possible to calculate the lens profiles. Ray tracing results for calculated 15th order Taylor polynomials describing the lens profiles demonstrate excellent imaging performance and the versatility of this new analytic design method.
Abstract:
The aim of this paper is to discuss the influence of the selection of the interpolation kernel on the accuracy of the modeling of the internal viscous dissipation in free surface flows. Simulations corresponding to a standing wave, for which an analytic solution is available, are presented. Wendland and renormalized Gaussian kernels are considered. The differences in the flow patterns and internal dissipation mechanisms are documented for a range of Reynolds numbers. It is shown that the simulations with Wendland kernels replicate the dissipation mechanisms more accurately than those with a renormalized Gaussian kernel. Although some explanations are hinted at, we have failed to clarify what the core structural reasons for such differences are.
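As a point of reference for the kernels the abstract compares, the sketch below implements the textbook 2D forms of a Wendland C2 kernel and a truncated Gaussian kernel. The support radii and the omission of the per-particle renormalization step are simplifying assumptions, not the paper's implementation.

```python
import math

def wendland_c2_2d(r, h):
    """Wendland C2 kernel in 2D with support radius 2h (a common convention):
    W(q) = 7/(4*pi*h^2) * (1 - q/2)^4 * (1 + 2q), q = r/h in [0, 2]."""
    q = r / h
    if q >= 2.0:
        return 0.0
    alpha = 7.0 / (4.0 * math.pi * h * h)  # 2D normalization constant
    return alpha * (1.0 - 0.5 * q) ** 4 * (1.0 + 2.0 * q)

def gaussian_2d(r, h):
    """Gaussian kernel in 2D, truncated at 3h. The 'renormalized' variant
    in the paper rescales over actual neighbours; that step is omitted here."""
    q = r / h
    if q >= 3.0:
        return 0.0
    return math.exp(-q * q) / (math.pi * h * h)
```

A quick numerical integration over the support confirms the Wendland normalization integrates to unity in 2D.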
Abstract:
We derive a semi-analytic formulation that permits the study of the long-term dynamics of fast-rotating inert tethers around planetary satellites. Since space tethers are extended bodies, they generate non-Keplerian gravitational forces that depend solely on their mass geometry and attitude, and that can be exploited for controlling science orbits. We conclude that rotating tethers modify the geometry of frozen orbits, allowing for lower-eccentricity frozen orbits over a wide range of orbital inclinations, where the length of the tether becomes a new parameter that the mission analyst may use to shape frozen orbits to tighter operational constraints.
Abstract:
We derive a semi-analytic formulation that enables the study of the long-term dynamics of fast-rotating inert tethers around planetary satellites. These equations take into account the coupling between the translational and rotational motion, which has a non-negligible impact on the dynamics, as the orbital motion of the tether center of mass strongly depends on the tether plane of rotation and its spin rate, and vice-versa. We use these governing equations to explore the effects of this coupling on the dynamics, the lifetime of frozen orbits and the precession of the plane of rotation of the tether.
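The orientation-dependent, non-Keplerian part of a tether's gravity can be illustrated with a minimal dumbbell model: two equal end masses on a rigid bar. The sketch below compares the exact two-mass potential with its quadrupole expansion, in which the perturbation depends only on tether length and attitude, as the abstracts note. The dumbbell idealization and the numbers are illustrative assumptions, not the papers' formulation.

```python
import math

MU = 3.986004418e14  # GM of the primary [m^3/s^2]; Earth's value, illustrative

def dumbbell_potential(r, L, theta, m=1.0, mu=MU):
    """Exact potential of two end masses m/2 separated by L, with the
    center of mass at distance r and the tether tilted theta from radial."""
    a = 0.5 * L
    d1 = math.sqrt(r * r - 2.0 * r * a * math.cos(theta) + a * a)
    d2 = math.sqrt(r * r + 2.0 * r * a * math.cos(theta) + a * a)
    return -0.5 * mu * m * (1.0 / d1 + 1.0 / d2)

def quadrupole_potential(r, L, theta, m=1.0, mu=MU):
    """Second-order expansion: U = -(mu*m/r)[1 + L^2/(8 r^2) (3 cos^2(theta) - 1)].
    The non-Keplerian term depends only on mass geometry (L) and attitude."""
    c = math.cos(theta)
    return -(mu * m / r) * (1.0 + (L * L / (8.0 * r * r)) * (3.0 * c * c - 1.0))
```

For a 100 km tether on a 7000 km orbit the quadrupole term already captures the exact potential to better than one part in 10^7.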
Abstract:
A numerical method to analyse the stability of transverse galloping based on experimental measurements is proposed in this paper, as an alternative to polynomial fitting of the transverse force coefficient Cz. The Glauert–Den Hartog criterion is used to determine the region of angles of attack (pitch angles) prone to galloping. An analytic solution (based on a polynomial curve of Cz) is used to validate the method and to evaluate the discretization errors. Several bodies (of biconvex, D-shape and rhomboidal cross sections) have been tested in a wind tunnel, and the stability of the galloping region has been analysed with the new method. An algorithm is presented to determine the pitch angle of the body that allows the maximum kinetic energy to be extracted from the flow.
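A minimal numerical version of the Glauert–Den Hartog check can be sketched as follows. It uses the classic lift/drag form of the criterion, H = dCL/dα + CD < 0, applied by central differences to hypothetical measured coefficient arrays; the paper itself works directly with the transverse coefficient Cz, which this sketch does not reproduce.

```python
import numpy as np

def den_hartog_unstable(alpha_deg, cl, cd):
    """Flag pitch angles prone to transverse galloping via the
    Glauert-Den Hartog criterion H = dCL/dalpha + CD < 0.
    Derivatives are per radian, computed with central differences,
    mirroring a purely numerical treatment of measured data."""
    alpha = np.radians(np.asarray(alpha_deg, dtype=float))
    cl = np.asarray(cl, dtype=float)
    cd = np.asarray(cd, dtype=float)
    dcl = np.gradient(cl, alpha)   # numerical dCL/dalpha [1/rad]
    return dcl + cd < 0.0
```

On synthetic data with a known lift slope, the flagged region matches the sign of H exactly.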
Abstract:
Machine learning techniques are used for extracting valuable knowledge from data. Nowadays, these techniques are becoming even more important due to the evolution in data acquisition and storage, which is leading to data with different characteristics that must be exploited. Therefore, advances in data collection must be accompanied by advances in machine learning techniques to solve the new challenges that arise, in both academic and real applications. There are several machine learning techniques, depending on both the data characteristics and the purpose. Unsupervised classification, or clustering, is one of the best-known techniques when data lack supervision (unlabeled data) and the aim is to discover data groups (clusters) according to their similarity. On the other hand, supervised classification needs data with supervision (labeled data), and its aim is to make predictions about the labels of new data. The presence of data labels is a very important characteristic that guides not only the learning task but also other related tasks such as validation. When only some of the available data are labeled while the others remain unlabeled (partially labeled data), neither clustering nor supervised classification can be used. This scenario, which is becoming common nowadays because of the cost or difficulty of the labeling process, is tackled with semi-supervised learning techniques. This thesis focuses on the branch of semi-supervised learning closest to clustering, i.e., discovering clusters using the available labels as support to guide and improve the clustering process. Another important data characteristic, different from the presence of data labels, is the relevance or not of data features. Data are characterized by features, but it is possible that not all of them are relevant, or equally relevant, for the learning process.
A recent clustering trend, related to data relevance and called subspace clustering, claims that different clusters might be described by different feature subsets. This differs from traditional solutions to the data relevance problem, where a single feature subset (usually the complete set of original features) is found and used to perform the clustering process. The proximity of this work to clustering leads to the first goal of this thesis. As noted above, clustering validation is a difficult task due to the absence of data labels. Although there are many indices that can be used to assess the quality of clustering solutions, these validations depend on the clustering algorithms and the data characteristics. Hence, in the first goal, three well-known clustering algorithms are used to cluster data with outliers and noise, in order to critically study how some of the best-known validation indices behave. The main goal of this work, however, is to combine semi-supervised clustering with subspace clustering to obtain clustering solutions that can be correctly validated using either known indices or expert opinions. Two algorithms are proposed, from different points of view, to discover clusters characterized by different subspaces. The first algorithm uses the available data labels to search for subspaces first, before searching for clusters. It assigns each instance to only one cluster (hard clustering) and is based on mapping known labels to subspaces using supervised classification techniques. The subspaces are then used to find clusters with traditional clustering techniques. The second algorithm uses the available data labels to search for subspaces and clusters at the same time in an iterative process. It assigns each instance to each cluster with a membership probability (soft clustering) and is based on integrating the known labels and the search for subspaces into a model-based clustering approach.
The different proposals are tested using different real and synthetic databases, and comparisons to other methods are included where appropriate. Finally, as an example of a real and current application, different machine learning techniques, including one of the proposals of this work (the most sophisticated one), are applied to one of the most challenging biological problems nowadays: human brain modeling. Specifically, expert neuroscientists do not agree on a neuron classification for the cerebral cortex, which makes impossible not only any modeling attempt but also the day-to-day work, given the lack of a common way to name neurons. Therefore, machine learning techniques may help to reach an accepted solution to this problem, which could be an important milestone for future research in neuroscience.
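A minimal illustration of label-guided clustering of the kind this thesis builds on is seeded k-means, where the few labeled instances seed the centroids and keep their cluster assignment during iteration. This is only a toy stand-in, not either of the thesis's subspace-based proposals.

```python
import numpy as np

def seeded_kmeans(X, labels, k, iters=50):
    """Seeded k-means: a minimal semi-supervised clustering sketch.
    `labels` holds a cluster id in {0..k-1} for labeled points, -1 otherwise.
    Assumes every cluster has at least one seed; labeled points never move."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    # seed the centroids from the labeled instances
    cent = np.stack([X[labels == j].mean(axis=0) for j in range(k)])
    assign = labels.copy()
    for _ in range(iters):
        # squared distances of every point to every centroid
        d = ((X[:, None, :] - cent[None, :, :]) ** 2).sum(axis=2)
        # unlabeled points follow the nearest centroid; labeled ones stay put
        assign = np.where(labels >= 0, labels, d.argmin(axis=1))
        cent = np.stack([X[assign == j].mean(axis=0) for j in range(k)])
    return assign, cent
```

With one seed per group, the unlabeled points are pulled into the cluster their seed defines.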
Abstract:
In this work, a new two-dimensional analytic optics design method is presented that enables the coupling of three ray sets with two lens profiles. This method is particularly promising for optical systems designed for a wide field of view and with clearly separated optical surfaces. However, this coupling can only be achieved if different ray sets use different portions of the second lens profile. Based on a very basic example of a single thick lens, the Simultaneous Multiple Surfaces design method in two dimensions (SMS2D) helps to provide a better understanding of the practical implications of increased lens thickness and a wider field of view on the design process. Fermat's principle is used to deduce a set of functional differential equations fully describing the entire optical system. The transformation of these functional differential equations into an algebraic linear system of equations allows the successive calculation of the Taylor series coefficients up to an arbitrary order. The evaluation of the solution space reveals the wide range of possible lens configurations covered by this analytic design method. Ray tracing analysis of the calculated 20th-order Taylor polynomials demonstrates excellent performance and the versatility of this new analytic optics design concept.
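Reproducing the functional-equation solver is beyond a short example, but the basic ray tracing step used to verify such polynomial lens profiles can be sketched: evaluate the profile's slope, build the surface normal, and refract with the vector form of Snell's law. The polynomial convention (coefficients in increasing order, profile z(x) hit by rays traveling toward +z) is an assumption made for illustration.

```python
import numpy as np

def surface_normal(coeffs, x):
    """Unit normal of a profile z(x) = sum_k coeffs[k] * x^k, oriented
    against a ray traveling in the +z direction."""
    dz = np.polyval(np.polyder(np.asarray(coeffs, dtype=float)[::-1]), x)
    n = np.array([dz, -1.0])
    return n / np.linalg.norm(n)

def refract(d, n, n1, n2):
    """Vector Snell's law: unit ray direction d, unit normal n pointing
    against the incident ray, refractive indices n1 -> n2."""
    r = n1 / n2
    c1 = -np.dot(n, d)
    s2 = r * r * (1.0 - c1 * c1)
    if s2 > 1.0:
        return None  # total internal reflection
    c2 = np.sqrt(1.0 - s2)
    return r * d + (r * c1 - c2) * n
```

The refracted direction stays a unit vector, and its tangential component obeys n1 sin θ1 = n2 sin θ2, which is easy to verify on a flat profile.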
Abstract:
A new three-dimensional analytic optics design method is presented that enables the coupling of three ray sets with only two free-form lens surfaces. Closely related to the Simultaneous Multiple Surfaces method in three dimensions (SMS3D), it is derived directly from Fermat's principle, leading to multiple sets of functional differential equations. The general solution of these equations makes it possible to calculate more than 80 coefficients for each implicit surface function. Ray tracing simulations of these free-form lenses demonstrate superior imaging performance for applications with high aspect ratio, compared to conventional rotationally symmetric systems.
Abstract:
When users face a problem that requires a product, service, or action to solve it, selecting the best alternative can be a difficult task due to the uncertainty about their quality. This is especially the case in domains where users lack expertise, such as Software Engineering. Multiple criteria decision making (MCDM) methods help make better decisions when facing the complex problem of selecting the best solution among a group of alternatives that can be compared according to different conflicting criteria. In MCDM problems, alternatives represent concrete products, services or actions that will help in achieving a goal, while criteria represent the characteristics of these alternatives that are important for making a decision.
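As a concrete instance of an MCDM method of the kind described, the sketch below implements TOPSIS, a classic ranking technique; the abstract discusses MCDM in general, so this particular method is an illustrative choice, not the paper's.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """TOPSIS: rank alternatives by closeness to the ideal solution.
    scores: (n_alternatives, n_criteria) matrix of raw criterion values;
    benefit[j] is True if higher is better for criterion j, False for costs.
    Returns a closeness coefficient in [0, 1]; higher is better."""
    X = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    V = w * X / np.linalg.norm(X, axis=0)       # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)   # distance to ideal
    d_worst = np.linalg.norm(V - worst, axis=1)  # distance to anti-ideal
    return d_worst / (d_best + d_worst)
```

An alternative that dominates on every criterion gets closeness 1, the dominated one gets 0, matching the intuition that criteria trade off against each other.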
Abstract:
We present a quasi-monotone semi-Lagrangian particle level set (QMSL-PLS) method for moving interfaces. The QMSL method is a blend of first-order monotone and second-order semi-Lagrangian methods. The QMSL-PLS method is easy to implement, efficient, and well adapted to unstructured meshes, either simplicial or hexahedral. We prove that it is unconditionally stable in the maximum discrete norm ‖·‖_{h,∞}, and the error analysis shows that when the level set solution u(t) is in the Sobolev space W^{r+1,∞}(D), r ≥ 0, the convergence in the maximum norm is of the form (KT/Δt) min(1, Δt‖v‖_{h,∞}/h) ((1 − α)h^p + h^q), with p = min(2, r + 1) and q = min(3, r + 1), where v is the velocity. This means that at high CFL numbers, that is, when Δt > h, the error is O(((1 − α)h^p + h^q)/Δt), whereas at CFL numbers less than 1, the error is O((1 − α)h^{p−1} + h^{q−1}). We have tested our method with satisfactory results on benchmark problems such as Zalesak's slotted disk, the single vortex flow, and the rising bubble.
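The quasi-monotone idea, combining a high-order semi-Lagrangian value with a monotone fallback, can be sketched in one spatial dimension: interpolate cubically at the departure point, then clip the result to the bounds of the enclosing cell (the classic Bermejo–Staniforth recipe). This is a 1D toy on a periodic grid, not the paper's particle level set code.

```python
import numpy as np

def qmsl_step(u, c):
    """One quasi-monotone semi-Lagrangian step for u_t + v u_x = 0 on a
    periodic grid; c = v*dt/dx is the CFL number (it may exceed 1)."""
    n = len(u)
    xd = np.arange(n) - c            # departure points in grid units
    i = np.floor(xd).astype(int)     # left node of the enclosing cell
    s = xd - i                       # fractional position in [0, 1)
    um1, u0, u1, u2 = (u[(i + k) % n] for k in (-1, 0, 1, 2))
    # cubic Lagrange weights on the stencil {-1, 0, 1, 2} at offset s
    w0 = -s * (s - 1) * (s - 2) / 6.0
    w1 = (s + 1) * (s - 1) * (s - 2) / 2.0
    w2 = -(s + 1) * s * (s - 2) / 2.0
    w3 = (s + 1) * s * (s - 1) / 6.0
    high = w0 * um1 + w1 * u0 + w2 * u1 + w3 * u2   # high-order value
    lo, hi = np.minimum(u0, u1), np.maximum(u0, u1)
    # quasi-monotone limiting: clip the cubic value to the cell bounds
    return np.clip(high, lo, hi)
```

Integer CFL numbers reproduce an exact grid shift, and the limiter keeps a step profile inside its initial bounds, which is the monotonicity property the blend is designed to preserve.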
Abstract:
The two-body problem subject to a constant radial thrust is analyzed as a planar motion. The description of the problem is performed in terms of three perturbation methods: DROMO and two others due to Deprit. All of them rely on Hansen's ideal frame concept. An explicit, analytic, closed-form solution is obtained for this problem when the initial orbit is circular (the Tsien problem), based on the DROMO special perturbation method, and expressed in terms of elliptic integral functions. The analytical solution to the Tsien problem is later used as a reference to test the numerical performance of various orbit propagation methods, including DROMO and the Deprit methods, as well as the Cowell and Kustaanheimo–Stiefel methods.
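The Cowell formulation benchmarked in the abstract can be sketched for the planar constant-radial-thrust problem; the DROMO and Deprit formulations are beyond a short example. Normalized units (μ = 1) and a fixed-step RK4 integrator are assumptions for illustration. A useful sanity check is that radial thrust exerts no torque, so angular momentum is conserved along the thrusted spiral.

```python
import numpy as np

MU = 1.0  # normalized gravitational parameter (illustrative choice)

def accel(state, ar):
    """Cowell formulation: point-mass gravity plus constant radial thrust ar."""
    x, y, vx, vy = state
    r = np.hypot(x, y)
    ax = -MU * x / r**3 + ar * x / r
    ay = -MU * y / r**3 + ar * y / r
    return np.array([vx, vy, ax, ay])

def rk4_propagate(state, ar, dt, steps):
    """Fixed-step classical RK4 propagation of [x, y, vx, vy]."""
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        k1 = accel(s, ar)
        k2 = accel(s + 0.5 * dt * k1, ar)
        k3 = accel(s + 0.5 * dt * k2, ar)
        k4 = accel(s + dt * k3, ar)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return s
```

Starting from the circular orbit of the Tsien problem, the unthrusted case conserves energy and the thrusted case conserves angular momentum to integrator accuracy.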
Integral energy behaviour of photovoltaic semi-transparent glazing elements for building integration
Abstract:
The general hypothesis this work seeks to demonstrate is that the architectural integration of Semi-Transparent Photovoltaic (STPV) systems can contribute to improving the energy efficiency of buildings. Accordingly, the research has focused on developing a methodology able to quantify the building energy demand reduction provided by these novel constructive solutions. At the same time, the design parameters of the STPV solution have been analysed to establish which of them have the greatest impact on the global energy balance of the building, and therefore which have to be carefully defined in order to optimize the building operation. In the light of these goals, the study methodology has focused on three main points: to characterise the global energy behaviour of STPV systems in realistic operating conditions, similar to those in which a real system will operate; to characterise the global energy behaviour of STPV systems in controlled conditions, in order to study how the performance varies depending on the design and operating parameters; and to assess the global energy saving potential of STPV systems in comparison with conventional glazing solutions by varying the boundary conditions, including design parameters (such as the degree of transparency), architectural characteristics (such as the Window-to-Wall Ratio) and climatic conditions (covering the European climates). In summary, this work has sought to contribute to the understanding of the interaction between STPV systems and the building, providing both component manufacturers and construction technicians with valuable information on the energy saving potential of these new construction systems, and defining the appropriate design parameters to achieve efficient solutions in both new-build and retrofitting projects.
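The trade-off the study quantifies, where the degree of transparency simultaneously affects PV generation, solar gains and daylighting, can be caricatured with a toy balance. Every coefficient below is invented for illustration and bears no relation to the thesis's measured results; the thesis derives such trade-offs from experiments and simulation, not from a closed formula.

```python
def stpv_net_energy(tau, wwr, irr=1000.0, area=10.0,
                    eta_pv=0.12, lighting_full=5.0, cooling_coef=0.4):
    """Toy net energy demand [arbitrary units] of an STPV glazing with
    degree of transparency tau (0..1) and Window-to-Wall Ratio wwr.
    All coefficients are hypothetical illustration values."""
    glazed = area * wwr                                  # glazed area
    e_pv = eta_pv * (1.0 - tau) * irr * glazed * 1e-3    # PV generation
    e_light = lighting_full * (1.0 - tau) * glazed       # lighting demand left
    e_cool = cooling_coef * tau * irr * glazed * 1e-3    # solar-gain cooling
    return e_light + e_cool - e_pv                       # net demand
```

Even this caricature shows why the degree of transparency and the Window-to-Wall Ratio must be analysed jointly: each term scales with the glazed area, but they pull in opposite directions as tau varies.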