927 results for Random finite set theory


Relevance: 30.00%

Publisher:

Abstract:

Topology optimization consists in finding the spatial distribution of a given total volume of material such that the resulting structure has some optimal property, for instance, maximum structural stiffness or maximum fundamental eigenfrequency. In this paper a Genetic Algorithm (GA) employing a representation method based on trees is developed to generate initial feasible individuals that remain feasible upon crossover and mutation, and as such do not require any repair operator to ensure feasibility. Several application examples are studied involving the topology optimization of structures, where the objective function is the maximization of the stiffness or the maximization of the first or second eigenfrequency of a plate, all cases having a prescribed material volume constraint.
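To illustrate the repair-free idea, here is a minimal sketch in Python with a hypothetical fixed-grid set encoding standing in for the paper's tree-based representation: a design is a set of exactly VOLUME occupied cells, so every individual produced by crossover or mutation satisfies the volume constraint by construction and no repair operator is needed. The fitness evaluation (a finite element analysis in the paper) is left abstract.

    import random

    N_CELLS, VOLUME = 100, 40   # grid size and prescribed material volume (cell count)

    def random_individual():
        # a design is a set of exactly VOLUME occupied cells: feasible by construction
        return set(random.sample(range(N_CELLS), VOLUME))

    def crossover(a, b):
        # keep the cells common to both parents, then fill up from their
        # symmetric difference: the child always has exactly VOLUME cells,
        # so the volume constraint can never be violated
        child = set(a & b)
        pool = list(a ^ b)
        random.shuffle(pool)
        while len(child) < VOLUME:
            child.add(pool.pop())
        return child

    def mutate(ind):
        # swap one occupied cell for one empty cell: the volume stays constant
        ind = set(ind)
        ind.remove(random.choice(sorted(ind)))
        ind.add(random.choice([c for c in range(N_CELLS) if c not in ind]))
        return ind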

Relevance: 30.00%

Publisher:

Abstract:

This Thesis describes the application of automatic learning methods to a) the classification of organic and metabolic reactions, and b) the mapping of Potential Energy Surfaces (PES). The classification of reactions was approached with two distinct methodologies: a representation of chemical reactions based on NMR data, and a representation of chemical reactions from the reaction equation based on the physico-chemical and topological features of chemical bonds.

NMR-based classification of photochemical and enzymatic reactions. Photochemical and metabolic reactions were classified by Kohonen Self-Organizing Maps (Kohonen SOMs) and Random Forests (RFs) taking as input the difference between the 1H NMR spectra of the products and the reactants. Such a representation can be applied to the automatic analysis of changes in the 1H NMR spectrum of a mixture and their interpretation in terms of the chemical reactions taking place. Possible applications include the monitoring of reaction processes, the evaluation of the stability of chemicals, and the interpretation of metabonomic data. A Kohonen SOM trained with a data set of metabolic reactions catalysed by transferases was able to correctly classify 75% of an independent test set in terms of the EC number subclass; Random Forests improved the correct predictions to 79%. With photochemical reactions classified into 7 groups, an independent test set was classified with 86-93% accuracy. The data set of photochemical reactions was also used to simulate mixtures with two reactions occurring simultaneously. Kohonen SOMs and Feed-Forward Neural Networks (FFNNs) were trained to classify the reactions occurring in a mixture based on the 1H NMR spectra of the products and reactants. Kohonen SOMs allowed the correct assignment of 53-63% of the mixtures (in a test set), and Counter-Propagation Neural Networks (CPNNs) gave similar results. Supervised learning techniques improved these results: to 77% of correct assignments when an ensemble of ten FFNNs was used, and to 80% when Random Forests were used. This study was performed with NMR data simulated from the molecular structure by the SPINUS program; in the design of one test set, simulated data was combined with experimental data. The results support the proposal of linking databases of chemical reactions to experimental or simulated NMR data for the automatic classification of reactions and mixtures of reactions.

Genome-scale classification of enzymatic reactions from their reaction equation. The MOLMAP descriptor relies on a Kohonen SOM that defines types of bonds on the basis of their physico-chemical and topological properties. The MOLMAP descriptor of a molecule represents the types of bonds available in that molecule. The MOLMAP descriptor of a reaction is defined as the difference between the MOLMAPs of the products and the reactants, and numerically encodes the pattern of bonds that are broken, changed, and made during a chemical reaction. The automatic perception of chemical similarities between metabolic reactions is required for a variety of applications, ranging from the computer validation of classification systems and the genome-scale reconstruction (or comparison) of metabolic pathways to the classification of enzymatic mechanisms. Catalytic functions of proteins are generally described by EC numbers, which are simultaneously employed as identifiers of reactions, enzymes, and enzyme genes, thus linking metabolic and genomic information. Methods should be available to automatically compare metabolic reactions and to assign EC numbers to reactions not yet officially classified. In this study, the genome-scale data set of enzymatic reactions available in the KEGG database was encoded by MOLMAP descriptors and submitted to Kohonen SOMs to compare the resulting map with the official EC number classification, to explore the possibility of predicting EC numbers from the reaction equation, and to assess the internal consistency of the EC classification at the class level. A general agreement with the EC classification was observed, i.e. a relationship between the similarity of MOLMAPs and the similarity of EC numbers. At the same time, MOLMAPs were able to discriminate between EC sub-subclasses. EC numbers could be assigned at the class, subclass, and sub-subclass levels with accuracies up to 92%, 80%, and 70% for independent test sets. The correspondence between the chemical similarity of metabolic reactions and their MOLMAP descriptors was applied to the identification of reactions mapped into the same neuron but belonging to different EC classes, which demonstrated the ability of the MOLMAP/SOM approach to verify the internal consistency of classifications in databases of metabolic reactions. RFs were also used to assign the four levels of the EC hierarchy from the reaction equation: EC numbers were correctly assigned in 95%, 90%, 85%, and 86% of the cases (for independent test sets) at the class, subclass, sub-subclass, and full EC number levels, respectively. Experiments on the classification of reactions from the main reactants and products were performed with RFs; EC numbers were assigned at the class, subclass, and sub-subclass levels with accuracies of 78%, 74%, and 63%, respectively. In the course of the experiments with metabolic reactions, we suggested that the MOLMAP/SOM concept could be extended to the representation of other levels of metabolic information, such as metabolic pathways. Following the MOLMAP idea, the pattern of neurons activated by the reactions of a metabolic pathway is a representation of the reactions involved in that pathway, i.e. a descriptor of the metabolic pathway. This reasoning enabled the comparison of different pathways, the automatic classification of pathways, and a classification of organisms based on their biochemical machinery. The three levels of classification (from bonds to metabolic pathways) made it possible to map and perceive chemical similarities between metabolic pathways, even for pathways of different types of metabolism and pathways that do not share similarities in terms of EC numbers.

Mapping of PES by neural networks (NNs). In a first series of experiments, Ensembles of Feed-Forward NNs (EnsFFNNs) and Associative Neural Networks (ASNNs) were trained to reproduce PES represented by the Lennard-Jones (LJ) analytical potential function. The accuracy of the method was assessed by comparing the results of molecular dynamics simulations (thermal, structural, and dynamic properties) obtained from the NN-PES and from the LJ function. The results indicated that, for LJ-type potentials, NNs can be trained to generate accurate PES for use in molecular simulations. EnsFFNNs and ASNNs gave better results than single FFNNs, and a remarkable ability of the NN models to interpolate between distant curves and accurately reproduce potentials for use in molecular simulations is shown. The purpose of the first study was to systematically analyse the accuracy of different NNs. Our main motivation, however, is reflected in the next study: the mapping of multidimensional PES by NNs to simulate, by Molecular Dynamics or Monte Carlo, the adsorption and self-assembly of solvated organic molecules on noble-metal electrodes. Indeed, for such complex and heterogeneous systems the development of suitable analytical functions that fit quantum mechanical interaction energies is a non-trivial or even impossible task. The data consisted of energy values, from Density Functional Theory (DFT) calculations, at different distances, for several molecular orientations and three electrode adsorption sites. The results indicate that NNs require a data set large enough to cover well the diversity of possible interaction sites, distances, and orientations. NNs trained with such data sets can perform as well as or even better than analytical functions; therefore, they can be used in molecular simulations, particularly for the ethanol/Au (111) interface, which is the case studied in the present Thesis. Once properly trained, the networks are able to produce, as output, any required number of energy points for accurate interpolations.
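As a concrete illustration of the descriptor-difference idea, the sketch below (Python, with randomly generated stand-in data; the function name, feature dimensions, and data are illustrative assumptions, not the thesis code) builds a MOLMAP-style reaction descriptor as products minus reactants and feeds it to a Random Forest of the kind used for EC number assignment.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def molmap_reaction_descriptor(reactant_maps, product_maps):
        # MOLMAP idea: each molecule is a fixed-length map of bond-type
        # frequencies; the reaction descriptor is the difference
        # products - reactants, encoding bonds broken, changed, and made
        return np.sum(product_maps, axis=0) - np.sum(reactant_maps, axis=0)

    # stand-in data: each row plays the role of a reaction descriptor
    # (e.g. a 7x7 SOM flattened to 49 features), each label an EC class 1-6
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 49))
    y = rng.integers(1, 7, size=500)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[:400], y[:400])
    print("held-out accuracy:", clf.score(X[400:], y[400:]))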

Relevance: 30.00%

Publisher:

Abstract:

This paper presents a biased random-key genetic algorithm for the resource constrained project scheduling problem. The chromosome representation of the problem is based on random keys. Active schedules are constructed using a priority-rule heuristic in which the priorities of the activities are defined by the genetic algorithm. A forward-backward improvement procedure is applied to all solutions. The chromosomes supplied by the genetic algorithm are adjusted to reflect the solutions obtained by the improvement procedure. The heuristic is tested on a set of standard problems taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed algorithm.
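A minimal sketch of the two ingredients the abstract names, random keys and bias, assuming one key in [0, 1] per activity (an illustration of the general scheme, not the authors' implementation):

    import random

    def random_key_chromosome(n_activities):
        # one random key in [0, 1] per activity; the GA evolves these keys
        # and the decoder reads them as activity priorities
        return [random.random() for _ in range(n_activities)]

    def biased_crossover(elite, non_elite, p_elite=0.7):
        # "biased": every gene is inherited from the elite parent with
        # probability p_elite > 0.5, so offspring lean toward good solutions
        return [e if random.random() < p_elite else n
                for e, n in zip(elite, non_elite)]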

Relevance: 30.00%

Publisher:

Abstract:

Dual-phase functionally graded materials are a particular type of composite material whose properties are tailored to vary continuously, depending on the composition distribution of their two constituents, and whose use is increasing in the most diverse application fields. These materials are known to provide superior thermal and mechanical performance when compared to traditional laminated composites, precisely because of this continuous variation of properties, which enables, among other advantages, a smoother stress distribution profile. In this paper we study the influence of different homogenization schemes, namely those due to Voigt, Hashin-Shtrikman, and Mori-Tanaka, which can be used to obtain bound estimates for the material properties of particulate composite structures. To achieve this goal we also use a set of finite element models based on higher-order shear deformation theories as well as on the first-order theory. From the studies carried out, on linear static analyses and on free vibration analyses, it is shown that the bound estimates are as important as the deformation kinematics basis assumed to analyse these types of multifunctional structures. Concerning the homogenization schemes studied, it is shown that the Mori-Tanaka and Hashin-Shtrikman estimates lead to less conservative results when compared to the Voigt rule of mixtures.
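For reference, a sketch of two of the schemes mentioned, in their standard textbook form for a two-phase particulate composite with spherical particles (these generic expressions are assumptions about the usual formulation, not the paper's finite element models):

    def voigt(p1, p2, v2):
        # Voigt rule of mixtures: volume-weighted average of the two
        # phase properties (an upper-bound estimate)
        return (1.0 - v2) * p1 + v2 * p2

    def mori_tanaka_bulk(K1, G1, K2, v2):
        # Mori-Tanaka effective bulk modulus, phase 1 = matrix (bulk K1,
        # shear G1), phase 2 = spherical particles with volume fraction v2
        return K1 + v2 * (K2 - K1) / (
            1.0 + (1.0 - v2) * (K2 - K1) / (K1 + 4.0 * G1 / 3.0))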

Relevance: 30.00%

Publisher:

Abstract:

This paper presents a genetic algorithm for the Resource Constrained Project Scheduling Problem (RCPSP). The chromosome representation of the problem is based on random keys. The schedule is constructed using a heuristic priority rule in which the priorities of the activities are defined by the genetic algorithm. The heuristic generates parameterized active schedules. The approach was tested on a set of standard problems taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed algorithm.
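The priority-rule construction can be sketched as a serial schedule-generation scheme; for brevity this illustrative version enforces only precedence constraints, whereas the paper's heuristic also handles resource limits when generating parameterized active schedules.

    def serial_sgs(durations, predecessors, priority):
        # serial schedule-generation scheme (resource constraints omitted
        # for brevity): repeatedly start the eligible activity with the
        # highest priority as early as precedence allows
        finish = {}
        unscheduled = set(durations)
        while unscheduled:
            eligible = [a for a in unscheduled
                        if all(p in finish for p in predecessors[a])]
            a = max(eligible, key=lambda x: priority[x])
            start = max((finish[p] for p in predecessors[a]), default=0)
            finish[a] = start + durations[a]
            unscheduled.remove(a)
        return finish

    # toy instance: activity -> duration, activity -> list of predecessors
    d = {1: 3, 2: 2, 3: 4}
    pred = {1: [], 2: [1], 3: [1]}
    print(serial_sgs(d, pred, priority={1: 0.9, 2: 0.4, 3: 0.7}))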

Relevance: 30.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]; in this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8-10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18-21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28-31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is attained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34-36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored, and a cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than the volume defined by any other combination of pixels; this algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data; the other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]; we note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined; the new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
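A toy sketch of the iterative orthogonal-projection step in Python/NumPy, where R is the bands-by-pixels data matrix and p the number of endmembers to extract (the published VCA additionally performs SNR-dependent dimensionality reduction and a specific choice of projection direction, both omitted here):

    import numpy as np

    def vca_like(R, p, rng=np.random.default_rng(0)):
        # endmember extraction in the spirit of VCA: iteratively project
        # the data onto a direction orthogonal to the subspace spanned by
        # the endmembers found so far and keep the extreme pixel
        n_bands, n_pixels = R.shape
        E = np.zeros((n_bands, p))          # endmember signatures (columns)
        for i in range(p):
            # random direction made orthogonal to span(E[:, :i])
            w = rng.normal(size=n_bands)
            if i > 0:
                Q, _ = np.linalg.qr(E[:, :i])
                w -= Q @ (Q.T @ w)
            proj = w @ R                    # scalar projection of every pixel
            E[:, i] = R[:, np.argmax(np.abs(proj))]
        return E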

Relevance: 30.00%

Publisher:

Abstract:

8th International Workshop on Multiple Access Communications (MACOM2015), Helsinki, Finland.

Relevance: 30.00%

Publisher:

Abstract:

The theme of this dissertation is the finite element method applied to mechanical structures. A new finite element program is developed that, besides executing different types of structural analysis, also allows the calculation of the derivatives of structural performances using the continuum method of design sensitivity analysis, with the purpose of allowing structural optimization problems to be solved in combination with the mathematical programming algorithms found in the commercial software MATLAB. The program is called EFFECT (Efficient Finite Element Code). The object-oriented programming paradigm, and specifically the C++ programming language, is used for program development. The main objective of this dissertation is to design EFFECT so that it can constitute, at this stage of development, the foundation for a program with analysis capabilities similar to other open-source finite element programs. In this first stage, six elements are implemented for linear analysis: 2-dimensional truss (Truss2D), 3-dimensional truss (Truss3D), 2-dimensional beam (Beam2D), 3-dimensional beam (Beam3D), triangular shell element (Shell3Node), and quadrilateral shell element (Shell4Node). The shell elements combine two distinct elements, one simulating the membrane behavior and the other the plate bending behavior. A nonlinear analysis capability is also developed, combining the corotational formulation with the Newton-Raphson iterative method, but at this stage it is only available for problems modeled with Beam2D elements subject to large displacements and rotations, known as geometrically nonlinear problems. The design sensitivity analysis capability is implemented for two elements, Truss2D and Beam2D, including the procedures and analytic expressions for calculating the derivatives of displacement, stress, and volume performances with respect to 5 different types of design variables. Finally, a set of test examples was created to validate the accuracy and consistency of the results obtained from EFFECT, by comparing them with results published in the literature or obtained with the ANSYS commercial finite element code.
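As an illustration of the kind of building block such a program rests on (the textbook element formulation, sketched in Python rather than EFFECT's actual C++ code), the global stiffness matrix of a 2-dimensional truss element is:

    import numpy as np

    def truss2d_stiffness(xi, yi, xj, yj, E, A):
        # global 4x4 stiffness matrix of a 2-node linear truss element:
        # k = (EA/L) * B, with B built from the direction cosines c, s
        L = np.hypot(xj - xi, yj - yi)
        c, s = (xj - xi) / L, (yj - yi) / L
        return (E * A / L) * np.array([[ c*c,  c*s, -c*c, -c*s],
                                       [ c*s,  s*s, -c*s, -s*s],
                                       [-c*c, -c*s,  c*c,  c*s],
                                       [-c*s, -s*s,  c*s,  s*s]])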

Relevance: 30.00%

Publisher:

Abstract:

The final objective of this study is to contribute to the discussion of the extent to which semantic and discourse concepts are syntactically encoded. More specifically, we investigate whether there is any consistent correlation between certain interpretive and syntactic aspects of four cleft constructions of European Portuguese, and how these potential correlations should be accounted for theoretically. The clefts considered are canonical clefts, pseudoclefts, é que clefts, and SER clefts. Syntactically, two types can be distinguished: biclausal clefts (canonical clefts and pseudoclefts) and monoclausal clefts (é que and SER clefts). All of these structures have a focused cleft constituent, which can be either an informational or a contrastive focus, and a cleft clause that introduces an existential presupposition. In addition, the cleft constituent exhaustively identifies an empty position in the cleft clause. We adopt alternative semantics for focus (Rooth 1985), according to which intonational focus uniformly contributes a set of alternatives at Logical Form. Pragmatic rules operating on this set give rise to two implicatures that can be suspended: existential presupposition and exhaustivity. Given that é que clefts and SER clefts have the same interpretation as non-cleft sentences, we conclude that their particular syntactic structure does not contribute to these interpretive properties. By contrast, biclausal clefts, which are specificational copular sentences, carry an existential presupposition and an exhaustive interpretation that cannot be suspended, just like non-cleft specificational sentences. We argue that this is due to the fact that the cleft constituent identifies a variable introduced by a definite description. We show that the cleft clause, a relative clause in the complement position of a definite determiner in canonical clefts and a free relative in pseudoclefts, has the same denotation as a definite DP, and therefore carries an inherent existential presupposition. The exhaustive interpretation is due to the identificational relation between the cleft constituent and the definite description. Furthermore, we argue that in European Portuguese a focus feature does not trigger A'-movement to a specialized FocP; focused constituents move for reasons independent of focus. This is confirmed by the fact that only the cleft constituent of é que clefts has A'-movement properties; the others appear to be in situ. We propose that the cleft constituent of é que clefts is a topic with a focus feature that moves to a TopP. This analysis accounts for the existence of similar discourse restrictions on non-focused topics and on the cleft constituent of é que clefts. The quantificational focus feature pied-piped by topicalization gives rise to intervention effects, causing the non-recursivity of focus in the left periphery and its incompatibility with movement of other constituents bearing quantificational features. The analysis predicts the embedding restrictions observed for é que clefts. Finally, we develop a syntactic analysis of SER clefts that brings these structures close to focus-particle structures. We propose that the copula is a focus-sensitive operator that is merged together with the cleft constituent. The distributional restrictions on the copula follow from selectional requirements of heads.

Relevance: 30.00%

Publisher:

Abstract:

We study the longitudinal and transverse spin dynamical structure factors of the spin-1/2 XXX chain at finite magnetic field h, focusing in particular on the singularities at excitation energies in the vicinity of the lower thresholds. While the static properties of the model can be studied within a Fermi-liquid-like description in terms of pseudoparticles, our derivation of the dynamical properties relies on the introduction of a form of the 'pseudofermion dynamical theory' (PDT) of the 1D Hubbard model suitably modified for the spin-only XXX chain and other models with two pseudoparticle Fermi points. Specifically, we derive the exact momentum and spin-density dependences of the exponents ζτ(k) controlling the singularities for both the longitudinal (τ = l) and transverse (τ = t) dynamical structure factors over the whole momentum range, in the thermodynamic limit. This requires the numerical solution of the integral equations that define the phase shifts appearing in the expressions of these exponents. We discuss the relation to neutron scattering and suggest new experiments on spin-chain compounds using a carefully oriented crystal to test our predictions.
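The numerical task involved can be illustrated with a generic Nyström solver for a Fredholm integral equation of the second kind; the actual PDT phase-shift equations have their own specific kernels and driving terms, so this is only a sketch of the technique.

    import numpy as np

    def solve_fredholm(kernel, f, a, b, n=200):
        # Nystrom method for a Fredholm equation of the second kind,
        #   phi(x) = f(x) + int_a^b K(x, y) phi(y) dy,
        # discretized on Gauss-Legendre nodes: (I - K W) phi = f
        x, w = np.polynomial.legendre.leggauss(n)
        x = 0.5 * (b - a) * x + 0.5 * (b + a)   # map [-1, 1] -> [a, b]
        w = 0.5 * (b - a) * w
        A = np.eye(n) - kernel(x[:, None], x[None, :]) * w
        return x, np.linalg.solve(A, f(x))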

Relevance: 30.00%

Publisher:

Abstract:

A modified version of the metallic-phase pseudofermion dynamical theory (PDT) of the 1D Hubbard model is introduced for the spin dynamical correlation functions of the half-filled 1D Hubbard model Mott–Hubbard phase. The Mott–Hubbard insulator phase PDT is applied to the study of the model's longitudinal and transverse spin dynamical structure factors at finite magnetic field h, focusing in particular on the singularities at excitation energies in the vicinity of the lower thresholds. The relation of our theoretical results to both condensed-matter and ultra-cold atom systems is discussed.

Relevance: 30.00%

Publisher:

Abstract:

We analyze the low-frequency absorption cross section of minimally coupled massless scalar fields by different kinds of charged static black holes in string theory, namely the D1-D5 system in d = 5 and a four-dimensional dyonic four-charged black hole. In each case we show that this cross section always has the form of some parameter of the solution divided by the black hole Hawking temperature. We also verify, in each case, that despite its explicit temperature dependence this quotient remains finite in the extremal limit, giving a well defined cross section. We show that this precise explicit temperature dependence also arises in the same cross section for black holes with string α′ corrections: it is actually induced by them.

Relevance: 30.00%

Publisher:

Abstract:

Here we describe the results of some computational explorations in Thompson's group F. We describe experiments to estimate the cogrowth of F with respect to its standard finite generating set, designed to address the subtle and difficult question of whether or not Thompson's group is amenable. We also describe experiments to estimate the exponential growth rate of F and the rate of escape of symmetric random walks with respect to the standard generating set.
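For illustration, the rate-of-escape experiment is easy to reproduce on a group where word length is cheap to compute. The sketch below uses the free group F2 as a stand-in, since free reduction gives word length directly; running it on Thompson's F itself would require a normal-form (word-length) routine for F.

    import random

    GENS = ["a", "A", "b", "B"]          # a/A and b/B are inverse pairs

    def inverse(g):
        return g.swapcase()

    def rate_of_escape(n_steps, n_walks=200):
        # Monte Carlo estimate of |w_n| / n for a symmetric random walk:
        # multiply n_steps uniformly chosen generators, tracking the
        # freely reduced word length
        total = 0
        for _ in range(n_walks):
            word = []
            for _ in range(n_steps):
                g = random.choice(GENS)
                if word and word[-1] == inverse(g):
                    word.pop()           # cancellation shortens the word
                else:
                    word.append(g)
            total += len(word)
        return total / (n_walks * n_steps)

    print(rate_of_escape(10_000))        # tends to 1/2 for F2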

Relevance: 30.00%

Publisher:

Abstract:

Ma (1996) studied the random order mechanism, a matching mechanism suggested by Roth and Vande Vate (1990) for marriage markets. By means of an example, he showed that the random order mechanism does not always reach all stable matchings. Although Ma's (1996) result is true, we show that the probability distribution he presented, and therefore the proof of his Claim 2, is not correct. The mistake in Ma's (1996) calculations is due to the fact that, even though the example looks very symmetric, some of the calculations are not as "symmetric."

Relevance: 30.00%

Publisher:

Abstract:

To a finite graph there corresponds a free partially commutative group, with the given graph as its commutation graph. In this paper we construct an orthogonality theory for graphs and their corresponding free partially commutative groups. The theory developed here provides tools for the study of the structure of partially commutative groups, their universal theory, and their automorphism groups. In particular, the theory is applied in this paper to the centraliser lattice of such groups.