949 results for LMS Structure, Ternary Filtering, Algorithm
Abstract:
A thermal evaporation method developed in the research group enables the growth and design of several morphologies of semiconducting oxide nanostructures, such as Ga_2O_3, GeO_2 or Sb_2O_3, among others, as well as some ternary oxide compounds (ZnGa_2O_4, Zn_2GeO_4). In order to tailor physical properties, successful doping of these nanostructures is required. However, for nanostructured materials, doping may affect not only their physical properties, but also their morphology during the thermal growth process. In this paper, we will show some examples of how the addition of impurities may result in the formation of complex structures, or in changes in the structural phase of the material. In particular, we will consider the addition of Sn and Cr impurities to the precursors used to grow Ga_2O_3, Zn_2GeO_4 and Sb_2O_3 nanowires, nanorods or complex nanostructures, such as crossing wires or hierarchical structures. Structural and optical properties were assessed by electron microscopy (SEM and TEM), confocal microscopy, spatially resolved cathodoluminescence (CL), photoluminescence, and Raman spectroscopy. The growth mechanisms, the luminescence bands and the optical confinement in the obtained oxide nanostructures will be discussed. In particular, some of these nanostructures have been found to be of interest as optical microcavities. These nanomaterials may have applications in optical sensing and energy devices.
A New Method for Modeling Free Surface Flows and Fluid-structure Interaction with Ocean Applications
Abstract:
The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these are the need to stably and accurately represent both the fluid-fluid interface between water and air as well as the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem, with the first focusing on the development of sophisticated fluid-fluid interface representations and the second focusing primarily on scalability and extensibility to higher-order methods.
We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments are tested extensively. The method demonstrates numerous advantages over the traditional level set method, among them improved conservation of fluid volume and the representation of subgrid structures.
Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: The air phase is substituted by a pressure boundary condition in order to greatly reduce the size of the computational domain, a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods, and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.
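To make the level set baseline concrete, the following minimal sketch advects a periodic one-dimensional level set function with a first-order upwind scheme. It illustrates the traditional level set method that the GALSM improves upon, not the narrow-band GALSM itself; the grid, velocity, and time step are illustrative assumptions.

```python
# Minimal 1D level set advection sketch (first-order upwind, periodic grid).
# This is NOT the narrow-band GALSM from the dissertation -- just the
# classical baseline it improves upon. Grid, velocity, and time step are
# illustrative assumptions.
import numpy as np

nx, L, u = 200, 1.0, 0.5            # grid points, domain length, velocity
dx = L / nx
dt = 0.5 * dx / abs(u)              # CFL number 0.5
x = np.arange(nx) * dx

# Periodic level set; the "interface" is the upward zero crossing at x = 0.3
phi = np.sin(2 * np.pi * (x - 0.3))

for _ in range(100):
    # First-order upwind difference, consistent with the sign of u
    dphi = (phi - np.roll(phi, 1)) / dx if u > 0 else (np.roll(phi, -1) - phi) / dx
    phi -= dt * u * dphi            # phi_t + u * phi_x = 0

s = np.sign(phi)
up = np.flatnonzero((s <= 0) & (np.roll(s, -1) > 0))   # upward zero crossings
print("interface near x =", x[up], "(exact:", 0.3 + u * 100 * dt, ")")
```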
Abstract:
This thesis concerns the modeling of fluid-structure interactions and the associated numerical methods. Accordingly, the thesis is divided into two parts. The first part concerns the study of fluid-structure interactions using the fictitious domain method. In this contribution, the fluid is incompressible and laminar, and the structure is considered rigid, whether stationary or in motion. The tools we developed include the implementation of a reliable solution algorithm that integrates both domains (fluid and solid) in a mixed formulation. The algorithm is based on adaptive local mesh refinement techniques that better separate the elements of the fluid medium from those of the solid, in both 2D and 3D. The second part is the study of the mechanical interactions between a flexible structure and an incompressible fluid. In this contribution, we propose and analyze partitioned numerical methods for the simulation of fluid-structure interaction (FSI) phenomena. To this end, we adopted the arbitrary Lagrangian-Eulerian (ALE) method. The fluid is solved iteratively using a projection-type scheme, and the structure is modeled with hyperelastic models in large deformations. We developed new mesh-motion methods to reach large deformations of the structure. Finally, a strategy for enriching the FSI problem was defined: turbulence modeling and free-surface flows were introduced and coupled with the solution of the Navier-Stokes equations. Various numerical simulations are presented to illustrate the efficiency and robustness of the algorithm. The numerical results presented attest to the validity and efficiency of the numerical methods developed.
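As an illustration of the projection-type fluid solve mentioned above, the sketch below performs one Chorin-style projection step on a periodic 2D grid, using an FFT-based Poisson solve. It is a generic textbook step under assumed parameters, not the thesis implementation.

```python
# One Chorin-type projection step on a periodic 2D grid. A generic
# illustration of a "projection-type scheme", not the thesis code; the
# FFT Poisson solve relies on periodicity, and all parameters are assumed.
import numpy as np

n, L, dt, nu = 64, 2 * np.pi, 0.01, 0.01
dx = L / n
k = np.fft.fftfreq(n, d=dx) * 2 * np.pi
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2p = K2.copy()
K2p[0, 0] = 1.0                     # avoid division by zero in Poisson solve

x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.cos(Y)           # Taylor-Green initial velocity
v = -np.cos(X) * np.sin(Y)

def ddx(f): return np.real(np.fft.ifft2(1j * KX * np.fft.fft2(f)))
def ddy(f): return np.real(np.fft.ifft2(1j * KY * np.fft.fft2(f)))
def lap(f): return np.real(np.fft.ifft2(-K2 * np.fft.fft2(f)))

# 1) Explicit predictor: advection + diffusion, ignoring pressure
us = u + dt * (-u * ddx(u) - v * ddy(u) + nu * lap(u))
vs = v + dt * (-u * ddx(v) - v * ddy(v) + nu * lap(v))

# 2) Pressure Poisson equation: lap(p) = div(u*) / dt
div = ddx(us) + ddy(vs)
p = np.real(np.fft.ifft2(np.fft.fft2(div / dt) / (-K2p)))

# 3) Projection: subtract the pressure gradient to enforce div-free velocity
u, v = us - dt * ddx(p), vs - dt * ddy(p)
print("max |div u| after projection:", np.abs(ddx(u) + ddy(v)).max())
```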
Abstract:
A three-dimensional finite volume, unstructured mesh (FV-UM) method for dynamic fluid–structure interaction (DFSI) is described. Fluid–structure interaction, as applied to flexible structures, has wide application in diverse areas such as flutter in aircraft, wind response of buildings, and flows in elastic pipes and blood vessels. It involves the coupling of fluid flow and structural mechanics, two fields that are conventionally modelled using two dissimilar methods, so a single comprehensive computational model of both phenomena is a considerable challenge. Until recently, work in this area focused on one phenomenon and represented the behaviour of the other more simply. More recently, strategies for solving the full coupling between the fluid and solid mechanics behaviour have been developed. A key contribution has been made by Farhat et al. [Int. J. Numer. Meth. Fluids 21 (1995) 807], employing FV-UM methods for solving the Euler flow equations, a conventional finite element method for the elastic solid mechanics, and the spring-based mesh procedure of Batina [AIAA paper 0115, 1989] for mesh movement. In this paper, we describe an approach which broadly exploits the three-field strategy described by Farhat for fluid flow, structural dynamics and mesh movement but, in the context of DFSI, contains a number of novel features: a single mesh covering the entire domain, a Navier–Stokes flow, a single FV-UM discretisation approach for both the flow and solid mechanics procedures, an implicit predictor–corrector version of the Newmark algorithm, and a single code embedding the whole strategy.
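For reference, the sketch below implements one implicit Newmark step in predictor-corrector form for a single-degree-of-freedom oscillator. It illustrates the textbook scheme the paper adapts, not the paper's FV-UM implementation; the mass, damping, stiffness and time step are assumed values.

```python
# One implicit Newmark-beta step in predictor-corrector form for a
# single-DOF oscillator m*a + c*v + k*u = f(t). A textbook illustration
# of the scheme the paper adapts; parameters are assumed.
import numpy as np

m, c, k = 1.0, 0.1, 10.0            # mass, damping, stiffness (assumed)
beta, gamma = 0.25, 0.5             # average-acceleration Newmark parameters
dt = 0.01

def newmark_step(u, v, a, f_next):
    # Predictor: displacement/velocity extrapolated with the old acceleration
    u_pred = u + dt * v + dt**2 * (0.5 - beta) * a
    v_pred = v + dt * (1.0 - gamma) * a
    # Corrector: solve the (linear) dynamic balance for the new acceleration
    a_new = (f_next - c * v_pred - k * u_pred) / (m + gamma * dt * c + beta * dt**2 * k)
    u_new = u_pred + beta * dt**2 * a_new
    v_new = v_pred + gamma * dt * a_new
    return u_new, v_new, a_new

# Free vibration from an initial displacement
u, v = 1.0, 0.0
a = (0.0 - c * v - k * u) / m
for n in range(1000):
    u, v, a = newmark_step(u, v, a, f_next=0.0)
print("displacement after 10 s:", u)   # decays due to damping
```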
Abstract:
A simple but efficient voice activity detector based on the Hilbert transform and a dynamic threshold is presented, to be used in the pre-processing of audio signals. The algorithm that defines the dynamic threshold is a modification of a convex combination found in the literature. This scheme allows the detection of prosodic and silence segments in speech in the presence of non-ideal conditions, such as spectrally overlapped noise. The present work shows preliminary results on a database built from political speeches. The tests were performed by adding artificial noise as well as natural noise to the audio signals, and several algorithms are compared. The results will be extrapolated to the field of adaptive filtering of monophonic signals and the analysis of speech pathologies in future work.
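A minimal sketch of this kind of detector is given below: the speech envelope is taken from the Hilbert transform (via the analytic signal) and compared against a dynamic threshold updated as a convex combination. The paper's threshold rule is a modification of this idea; the weights, frame length and toy signal here are illustrative assumptions.

```python
# Minimal voice-activity-detection sketch: Hilbert envelope + a dynamic
# threshold built as a convex combination of a noise-floor estimate and
# the short-term envelope. The paper's rule is a modification of this
# idea; alpha, the frame length and the toy signal are illustrative.
import numpy as np
from scipy.signal import hilbert

def vad(signal, fs, frame_ms=20, alpha=0.9):
    envelope = np.abs(hilbert(signal))            # analytic-signal magnitude
    frame = int(fs * frame_ms / 1000)
    n_frames = len(envelope) // frame
    e = envelope[:n_frames * frame].reshape(n_frames, frame).mean(axis=1)
    noise_floor = np.percentile(e, 10)            # rough noise estimate
    thr = np.empty(n_frames)
    t = noise_floor
    for i, ei in enumerate(e):
        t = alpha * t + (1 - alpha) * ei          # convex combination update
        thr[i] = t
    return e > thr                                # True = speech frame

# Toy test: noise with a louder "speech" burst in the middle
fs = 8000
rng = np.random.default_rng(0)
x = 0.05 * rng.standard_normal(fs)
x[3000:5000] += np.sin(2 * np.pi * 200 * np.arange(2000) / fs)
print(vad(x, fs).astype(int))
```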
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which avoid the memory storage and huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate the model state vector of dimension 30 171, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by the VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme: an external program is used to send and receive information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal, only a few lines which facilitate input and output. Apart from being simple to couple, the approach can be employed even if the two were written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to a multi-purpose hydrodynamic model, COHERENS, to assimilate Total Suspended Matter (TSM) in lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images for 7 days between May 16 and July 6, 2009, were available. The effect of the organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, they could not be well matched.
The use of multiple automatic stations with real-time data is important to avoid the time sparsity problem; with DA, this will help, for instance, in better understanding environmental hazard variables. We found that using a very large ensemble size does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF and the ensemble size limit point to the emerging area of Reduced Order Modeling (ROM). To save computational resources, running the full-blown model is avoided in ROM. When ROM is applied together with the non-intrusive DA approach, it may result in a cheaper algorithm that relaxes the computational challenges existing in the field of modelling and DA.
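A minimal sketch of the non-intrusive, file-based coupling described above might look as follows: an external control script alternates between the unmodified model and the DA procedure, exchanging state through files. The program names, file names and command-line conventions are hypothetical placeholders, not those of COHERENS or the thesis code.

```python
# Sketch of the non-intrusive, file-based coupling loop: an external
# controller alternates between the forecast model and the DA step,
# exchanging state through files. All program and file names below are
# hypothetical placeholders.
import subprocess
import numpy as np

N_CYCLES = 7            # e.g. one cycle per available satellite image

state = np.loadtxt("initial_state.txt")           # assumed initial state file
np.savetxt("model_in.txt", state)

for cycle in range(N_CYCLES):
    # 1) Run the (unmodified) model: reads model_in.txt, writes model_out.txt.
    #    With an ensemble, each member would be launched here and the
    #    controller would wait for all of them to finish.
    subprocess.run(["./run_model", "model_in.txt", "model_out.txt"], check=True)

    # 2) Run the DA procedure (e.g. VEnKF): combines the forecast with the
    #    observations for this cycle and writes the analysis state.
    subprocess.run(["./run_venkf", "model_out.txt",
                    f"obs_{cycle}.txt", "analysis.txt"], check=True)

    # 3) Feed the analysis back as the next model initial condition.
    analysis = np.loadtxt("analysis.txt")
    np.savetxt("model_in.txt", analysis)
```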
Abstract:
This dissertation investigates the connection between spectral analysis and frame theory. When considering the spectral properties of a frame, we present a few novel results relating to the spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames by first showing that there is an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented. For linear objectives, we can encourage sparse scalings, and with barrier objective functions, we force dense solutions. We further consider frames in high dimensions and derive various solution techniques. From here, we restrict ourselves to various frame classes to add more specificity to the results. Using frames generated from distributions allows for the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an orthonormal basis (ONB), and for continuous symmetric distributions (Uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning, and show the infeasibility of the problem in the general case. After a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès, and give some background on Electron Energy-Loss Spectroscopy (EELS). We design a novel scheme for the processing of EELS through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem and present an algorithm for its solution. There is also discussion of the differences from RPCA that make theoretical guarantees difficult.
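As a small numerical companion to the scalability results, the sketch below searches for non-negative squared weights that make the weighted frame operator equal the identity, using non-negative least squares as a simple stand-in for the optimization formulations discussed in the dissertation; the random frame is an assumed toy example.

```python
# Numerical sketch of frame scalability: find nonnegative squared weights
# w_i^2 making the weighted frame operator sum_i w_i^2 f_i f_i^T equal to
# the identity, then verify. NNLS is a simple stand-in for the
# dissertation's optimization formulations; the frame is a random toy.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
d, m = 3, 12
F = rng.standard_normal((d, m))                   # m frame vectors in R^d

# Stack vectorized outer products: A @ (w^2) = vec(I)
A = np.column_stack([np.outer(F[:, i], F[:, i]).ravel() for i in range(m)])
w2, residual = nnls(A, np.eye(d).ravel())

S = (F * w2) @ F.T                                # weighted frame operator
print("NNLS residual:", residual)
print("||S - I||_F  :", np.linalg.norm(S - np.eye(d)))
print("eigenvalues of S (all ~1 if the frame is scalable):",
      np.linalg.eigvalsh(S))
```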
Abstract:
An indirect genetic algorithm for the non-unicost set covering problem is presented. The algorithm is a two-stage meta-heuristic, which in the past was successfully applied to similar multiple-choice optimisation problems. The two stages of the algorithm are an ‘indirect’ genetic algorithm and a decoder routine. First, the solutions to the problem are encoded as permutations of the rows to be covered, which are subsequently ordered by the genetic algorithm. Fitness assignment is handled by the decoder, which transforms the permutations into actual solutions to the set covering problem. This is done by exploiting both problem structure and problem-specific information. However, flexibility is retained by a self-adjusting element within the decoder, which allows adjustment both to the data and to stages within the search process. Computational results are presented.
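The permutation encoding and decoder can be sketched as follows: a chromosome is a permutation of the rows to be covered, and a greedy decoder turns it into a feasible cover. The GA operators and the self-adjusting element of the decoder are omitted, and the instance data are toy values.

```python
# Sketch of the 'indirect' encoding: a chromosome is a permutation of the
# rows to cover, and a greedy decoder turns it into an actual set cover.
# The GA itself (selection, permutation crossover, mutation) and the
# self-adjusting decoder element are omitted; data here are toy values.
import numpy as np

rng = np.random.default_rng(2)
n_rows, n_cols = 8, 12
covers = rng.random((n_cols, n_rows)) < 0.35   # covers[j, i]: column j covers row i
costs = rng.integers(1, 10, size=n_cols)       # non-unicost column costs

def decode(permutation):
    """Greedy decoder: walk rows in chromosome order; for each uncovered
    row, pick the cheapest-per-new-row column that covers it."""
    covered = np.zeros(n_rows, dtype=bool)
    solution = []
    for row in permutation:
        if covered[row]:
            continue
        candidates = np.flatnonzero(covers[:, row])
        if candidates.size == 0:
            continue                           # row not coverable in toy data
        gain = covers[candidates] & ~covered   # new rows each column would add
        ratio = costs[candidates] / np.maximum(gain.sum(axis=1), 1)
        best = candidates[np.argmin(ratio)]
        solution.append(best)
        covered |= covers[best]
    return solution, costs[list(solution)].sum(), covered.all()

chromosome = rng.permutation(n_rows)           # one GA individual
print(decode(chromosome))                      # (columns, cost, is_full_cover)
```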
Abstract:
In the North Atlantic subtropical gyre, the oceanic vertical structure of density is characterized by a region of rapid increase with depth. This layer is called the permanent pycnocline. The permanent pycnocline is found below surface mode waters, which are ventilated every winter when penetrated locally by the mixed layer. Assessing the structure and variability of the permanent pycnocline is of major interest for understanding the climate system, because the pycnocline layer delimits important heat and anthropogenic carbon reservoirs. Moreover, changes in the heat content translate into changes in large-scale stratification features, such as the permanent pycnocline. We developed a new objective algorithm for the characterization of the large-scale structure of the permanent pycnocline (OAC-P). Argo data have been used with OAC-P to provide a detailed description of the mean structure of the North Atlantic subtropical pycnocline (e.g. depth, thickness, temperature, salinity, density, potential vorticity). Results reveal a surprisingly complex structure with inhomogeneous properties. While the classical bowl shape of the pycnocline depth is captured, a much more complex pycnocline structure emerges at the regional scale. In the southern recirculation gyre of the Gulf Stream Extension, the pycnocline is deep and thick, and the maximum of stratification is found in the middle of the layer and follows an isopycnal surface. But local processes influence and modify this textbook description, and the pycnocline is characterized by a vertically asymmetric structure and gradients in thermohaline properties. The T/S distribution along the permanent pycnocline depth is complex and reveals a diversity of water masses resulting from the mixing of different source waters. We will present the observed mean structure of the North Atlantic subtropical permanent pycnocline and relate it to the physical processes that constrain it.
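For intuition, the sketch below characterizes a pycnocline on a single synthetic density profile, locating the depth of maximum stratification and a half-maximum thickness estimate. OAC-P itself operates on the large scale and is considerably more elaborate; the profile and thresholds here are illustrative assumptions.

```python
# Naive single-profile illustration of pycnocline characterization:
# locate the depth of maximum stratification (density gradient) and a
# simple thickness estimate. The OAC-P algorithm is considerably more
# elaborate; this synthetic profile and the half-maximum rule are assumed.
import numpy as np

z = np.linspace(0, 2000, 401)                  # depth (m), positive down
# Synthetic profile: mixed layer, pycnocline near 800 m, deep ocean below
sigma = 26.0 + 1.5 * np.tanh((z - 800.0) / 200.0)

dsigma_dz = np.gradient(sigma, z)              # stratification proxy
i_max = np.argmax(dsigma_dz)
core_depth = z[i_max]

# Thickness: region where the gradient exceeds half its maximum
strong = dsigma_dz > 0.5 * dsigma_dz[i_max]
thickness = z[strong][-1] - z[strong][0]

print(f"pycnocline core depth ~ {core_depth:.0f} m, "
      f"thickness ~ {thickness:.0f} m, "
      f"core density ~ {sigma[i_max]:.2f} kg/m^3")
```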
Abstract:
When performing Particle Image Velocimetry (PIV) measurements in complex fluid flows with moving interfaces and a two-phase flow, it is necessary to develop a mask to remove non-physical measurements. This is the case when studying, for example, the complex bubble sweep-down phenomenon observed on oceanographic research vessels. Indeed, in such a configuration, the presence of an unsteady free surface, of a solid–liquid interface and of bubbles in the PIV frame generates numerous laser reflections and therefore spurious velocity vectors. In this note, an image masking process is developed to successively identify the boundaries of the ship and the free surface interface. As the presence of the solid hull surface induces laser reflections, the hull edge contours are simply detected in the first PIV frame and dynamically estimated for consecutive ones. The unsteady free surface is determined by a specific process: i) edge detection of the gradient magnitude in the PIV frame, ii) extraction of the particles by filtering out high-intensity large areas related to the bubbles and/or hull reflections, iii) extraction of the rough region containing these particles and their reflections, and iv) removal of these reflections. The unsteady surface is finally obtained with a fifth-order polynomial interpolation. The resulting free surface is successfully validated by Fourier analysis and by visual inspection of selected PIV images containing numerous spurious high-intensity areas. This paper demonstrates how this data analysis process leads to a PIV image database without reflections and to an automatic detection of both the free surface and the rigid body. An application of this new mask is finally detailed, allowing a preliminary analysis of the hydrodynamic flow.
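A schematic rendering of steps i)-iv) and the polynomial fit, on a synthetic PIV-like frame, is sketched below. The thresholds, structure sizes and synthetic image are illustrative assumptions rather than the paper's calibrated values.

```python
# Schematic sketch of the free-surface masking steps (i)-(iv) plus the
# fifth-order polynomial fit, on a synthetic PIV-like frame. Thresholds,
# size cut-offs and the synthetic image are illustrative assumptions.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
img = rng.random((200, 300)) * 0.2             # background noise
rows = (120 + 10 * np.sin(np.arange(300) / 30)).astype(int)
for col, r in enumerate(rows):                 # bright particles below surface
    img[r:, col] += (rng.random(200 - r) > 0.9) * 0.9
img[60:70, 140:170] = 1.0                      # a large, bright reflection

# (i) Edge detection on the gradient magnitude (locates the surface region)
grad = ndimage.gaussian_gradient_magnitude(img, sigma=1.0)
edges = grad > 0.3

# (ii) Filter out high-intensity LARGE areas (reflections), keep particles
bright = img > 0.6
labels, n = ndimage.label(bright)
sizes = np.bincount(labels.ravel())[1:]        # pixel count per labelled blob
large = np.isin(labels, 1 + np.flatnonzero(sizes > 50))
particles = bright & ~large

# (iii)-(iv) Rough particle region per column -> topmost particle ~ surface
cols, surf = [], []
for col in range(img.shape[1]):
    r = np.flatnonzero(particles[:, col])
    if r.size:
        cols.append(col)
        surf.append(r.min())

# Fifth-order polynomial interpolation of the free surface
coeffs = np.polyfit(cols, surf, deg=5)
surface = np.polyval(coeffs, np.arange(img.shape[1]))
print("fitted surface row range:", surface.min(), surface.max())
```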
Abstract:
Nearest neighbour collaborative filtering (NNCF) algorithms are commonly used in multimedia recommender systems to suggest media items based on the ratings of users with similar preferences. However, the prediction accuracy of NNCF algorithms is affected by the reduced number of items – the subset of items co-rated by both users – typically used to determine the similarity between pairs of users. In this paper, we propose a different approach, which substantially enhances the accuracy of the neighbour selection process – a user-based CF (UbCF) with semantic neighbour discovery (SND). Our neighbour discovery methodology, which assesses pairs of users by taking into account all the items rated by at least one of the users instead of just the set of co-rated items, semantically enriches this enlarged set of items using linked data and, finally, applies the Collinearity and Proximity Similarity metric (CPS), which combines cosine similarity with the Chebyshev distance dissimilarity metric. We tested the proposed SND against the Pearson correlation neighbour discovery algorithm off-line, using the HetRec data set, and the results show a clear improvement in terms of accuracy and execution time for the predicted recommendations.
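A sketch of a CPS-style metric is given below: the cosine similarity of two rating vectors is blended with a proximity term derived from the Chebyshev distance. The exact combination rule of the paper's CPS metric is not reproduced; the 50/50 blend and the toy ratings are assumptions.

```python
# Sketch of a CPS-style similarity: blend the cosine similarity of two
# users' rating vectors with a proximity term based on the Chebyshev
# distance. The paper's exact combination rule is not reproduced; the
# weight w and the toy ratings are illustrative assumptions.
import numpy as np

def cps_similarity(a, b, rating_max=5.0, w=0.5):
    a, b = np.asarray(a, float), np.asarray(b, float)
    cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    chebyshev = np.max(np.abs(a - b))          # worst-case disagreement
    proximity = 1.0 - chebyshev / rating_max   # 1 = identical, 0 = maximal gap
    return w * cosine + (1 - w) * proximity

# Ratings over the union of items rated by at least one of the two users
# (the enlarged item set used by the SND neighbour discovery).
u1 = [5, 4, 0, 3, 1]
u2 = [4, 4, 1, 3, 2]
print(cps_similarity(u1, u2))
```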
Abstract:
We propose a positive, accurate moment closure for linear kinetic transport equations based on a filtered spherical harmonic (FP_N) expansion in the angular variable. The FP_N moment equations are accurate approximations to linear kinetic equations, but they are known to suffer from the occurrence of unphysical, negative particle concentrations. The new positive filtered P_N (FP_N+) closure is developed to address this issue. The FP_N+ closure approximates the kinetic distribution by a spherical harmonic expansion that is non-negative on a finite, predetermined set of quadrature points. With an appropriate numerical PDE solver, the FP_N+ closure generates particle concentrations that are guaranteed to be non-negative. Under an additional, mild regularity assumption, we prove that as the moment order tends to infinity, the FP_N+ approximation converges, in the L2 sense, at the same rate as the FP_N approximation; numerical tests suggest that this assumption may not be necessary. By numerical experiments on the challenging line source benchmark problem, we confirm that the FP_N+ method indeed produces accurate and non-negative solutions. To apply the FP_N+ closure to problems at large temporal-spatial scales, we develop a positive asymptotic preserving (AP) numerical PDE solver. We prove that the proposed AP scheme maintains stability and accuracy with standard mesh sizes at large temporal-spatial scales, while, for generic numerical schemes, excessive refinement of the temporal-spatial meshes is required. We also show that the proposed scheme preserves positivity of the particle concentration under a time-step restriction. Numerical results confirm that the proposed AP scheme is capable of solving linear transport equations at large temporal-spatial scales, for which a generic scheme would fail. Constrained optimization problems are involved in the formulation of the FP_N+ closure to enforce non-negativity of the FP_N+ approximation on the set of quadrature points. These optimization problems can be written as strictly convex quadratic programs (CQPs) with a large number of inequality constraints. To efficiently solve the CQPs, we propose a constraint-reduced variant of a Mehrotra predictor-corrector algorithm, with a novel constraint selection rule. We prove that, under appropriate assumptions, the proposed optimization algorithm converges globally to the solution at a locally q-quadratic rate. We test the algorithm on randomly generated problems, and the numerical results indicate that the combination of the proposed algorithm and the constraint selection rule outperforms other compared constraint-reduced algorithms, especially for problems with many more inequality constraints than variables.
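The quadratic program at the heart of such a closure can be sketched in a one-dimensional (Legendre) analogue: find the coefficient vector closest to the truncated one, subject to non-negativity of the expansion at quadrature points. A generic SLSQP solve stands in for the paper's constraint-reduced Mehrotra predictor-corrector algorithm; the moment order and quadrature are assumed.

```python
# Sketch of the convex quadratic program behind an FP_N+-style closure,
# in a 1D Legendre analogue: project the truncated coefficients onto the
# set where the expansion is non-negative at quadrature points. SLSQP is
# a generic stand-in for the paper's constraint-reduced interior-point
# method; N and the quadrature order are assumed.
import numpy as np
from numpy.polynomial import legendre
from scipy.optimize import minimize

N = 7                                          # moment order (assumed)
mu, wq = legendre.leggauss(16)                 # Gauss points/weights on [-1, 1]
P = legendre.legvander(mu, N)                  # P[q, l] = P_l(mu_q)

# A distribution with negative lobes after truncation: a step function
c0 = np.array([(l + 0.5) * np.sum(wq * (mu > 0) * P[:, l]) for l in range(N + 1)])

res = minimize(
    fun=lambda c: 0.5 * np.sum((c - c0)**2),   # strictly convex objective
    x0=c0,
    jac=lambda c: c - c0,
    constraints={"type": "ineq", "fun": lambda c: P @ c},  # P c >= 0
    method="SLSQP",
)
c = res.x
print("min value at quadrature points before:", (P @ c0).min())  # negative
print("min value at quadrature points after :", (P @ c).min())   # ~0 or above
```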
Abstract:
The marine environment seems, at first sight, to be a homogeneous medium lacking barriers to species dispersal. Nevertheless, populations of marine species show varying levels of gene flow and population differentiation, so barriers to gene flow can often be detected. We aim to elucidate the role of oceanographic factors in generating connectivity among populations and shaping phylogeographical patterns in the marine realm, a topic of considerable interest not only for understanding the evolution of marine biodiversity but also for the management and conservation of marine life. To this end, we investigate the genetic structure and connectivity between continental and insular populations of white seabream in the North East Atlantic (NEA) and the Mediterranean Sea (MS), as well as the influence of historical and contemporary factors in this scenario, using mitochondrial (cytochrome b) and nuclear (a set of 9 microsatellites) molecular markers. The Azores population appeared genetically differentiated in a single cluster in the Structure analysis. This result was corroborated by Principal Component Analysis (PCA) and the Monmonier algorithm, which suggested a boundary to gene flow isolating this locality. The Azorean population also shows the highest significant values of FST and genetic distances for both molecular markers (microsatellites and mtDNA). We suggest that the breakdown of effective genetic exchange between the Azores and the other samples could be explained simultaneously by hydrographic (deep water) and hydrodynamic (isolating current regimes) factors acting as barriers to the free dispersal of white seabream (adults and larvae), and by historical factors which could have favoured the survival of the Azorean white seabream population during the last glaciation. Mediterranean islands show genetic diversity similar to that of the neighbouring continental samples and non-significant genetic differences. Proximity to continental coasts and the current system could promote an optimal larval dispersion between the Mediterranean islands (Mallorca and Castellamare) and the coasts, with high gene flow.
Abstract:
In the presented thesis work, the meshfree method with distance fields was coupled with the lattice Boltzmann method to obtain solutions of fluid-structure interaction problems. The thesis work involved the development and implementation of numerical algorithms, data structures, and software. Numerical and computational properties of the coupling algorithm combining the meshfree method with distance fields and the lattice Boltzmann method were investigated. Convergence and accuracy of the methodology were validated against analytical solutions. The research was focused on fluid-structure interaction solutions in complex, mesh-resistant domains, as both the lattice Boltzmann method and the meshfree method with distance fields are particularly adept in these situations. Furthermore, the fluid solution provided by the lattice Boltzmann method is massively scalable, allowing extensive use of cutting-edge parallel computing resources to accelerate this phase of the solution process. The meshfree method with distance fields allows for exact satisfaction of boundary conditions, making it possible to exactly capture the effects of the fluid field on the solid structure.
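To indicate the fluid half of this coupling, the sketch below runs a minimal D2Q9 lattice Boltzmann (BGK) stream-and-collide loop on a periodic box. The meshfree/distance-field structural side and the actual FSI boundary treatment from the thesis are not reproduced; the grid size and relaxation time are assumed.

```python
# Minimal D2Q9 lattice Boltzmann (BGK) sketch: stream-and-collide on a
# periodic box. Only the fluid half of the coupling is shown; the
# distance-field structural side is not reproduced. Parameters assumed.
import numpy as np

nx, ny, tau = 64, 64, 0.8                      # grid and relaxation time
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)       # D2Q9 lattice weights

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

# Initialize with a small sinusoidal shear (decays viscously)
ux = 0.05 * np.sin(2*np.pi*np.arange(ny)/ny)[None, :] * np.ones((nx, ny))
uy = np.zeros((nx, ny))
f = equilibrium(np.ones((nx, ny)), ux, uy)

for step in range(200):
    rho = f.sum(axis=0)                        # density moment
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau          # BGK collision
    for i in range(9):                                  # streaming
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))

print("kinetic energy after 200 steps:", 0.5 * np.mean(ux**2 + uy**2))
```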
Abstract:
Valproic acid (VPA) and trichostatin A (TSA) are known histone deacetylase inhibitors (HDACIs) with epigenetic activity that affect chromatin supra-organization, nuclear architecture, and cellular proliferation, particularly in tumor cells. In this study, chromatin remodeling with effects extending to heterochromatic areas was investigated by image analysis in non-transformed NIH 3T3 cells treated for different periods with different doses of VPA and TSA under conditions that indicated no loss of cell viability. Image analysis revealed chromatin decondensation that affected not only euchromatin but also heterochromatin, concomitant with a decreased activity of histone deacetylases and a general increase in histone H3 acetylation. Heterochromatin protein 1-α (HP1-α), identified immunocytochemically, was depleted from the pericentromeric heterochromatin following exposure to both HDACIs. Drastic changes affecting cell proliferation and micronucleation, but no alteration in CCND2 expression, in Bcl-2/Bax expression ratios, or in cell death, occurred following a 48-h exposure of the NIH 3T3 cells, particularly in response to higher doses of VPA. Our results demonstrated that even low doses of VPA (0.05 mM) and TSA (10 ng/ml), applied for 1 h, can affect chromatin structure, including that of heterochromatin areas, in non-transformed cells. HP1-α depletion, probably related to histone demethylation at H3K9me3, is induced in NIH 3T3 cells in addition to the effect of VPA and TSA on histone H3 acetylation. Nevertheless, alterations in cell proliferation and micronucleation, possibly depending on mitotic spindle defects, require longer exposure to higher doses of VPA and TSA.