16 results for matrix functions
at Universitat de Girona, Spain
Abstract:
The occurrence of negative values for Fukui functions was studied through the electronegativity equalization method (EEM). Using algebraic relations between Fukui functions and various other conceptual DFT quantities on the one hand and the hardness matrix on the other hand, expressions were obtained for the Fukui functions of several archetypal small molecules. Based on EEM calculations for large molecular sets, no negative Fukui functions were found.
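The abstract does not give the working equations, but one algebraic relation quoted in EEM-based analyses, that the condensed Fukui vector is proportional to the inverse hardness matrix applied to the ones vector, can be sketched as follows. Assuming numpy; the hardness matrix entries are hypothetical, purely illustrative numbers, not from any EEM parameterization.

```python
import numpy as np

# Hypothetical symmetric atom-atom hardness matrix for a 3-atom system
# (illustrative numbers only, not from any EEM parameterization)
eta = np.array([[1.00, 0.30, 0.20],
                [0.30, 0.80, 0.25],
                [0.20, 0.25, 0.90]])

# Condensed Fukui vector proportional to eta^{-1} 1, normalized so that
# its components sum to one (conservation of the transferred electron)
x = np.linalg.solve(eta, np.ones(3))
f = x / x.sum()
```

For this (diagonally dominant) hardness matrix all components of f come out positive, in line with the abstract's finding of no negative Fukui functions.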
Abstract:
Compositional data analysis motivated the introduction of a complete Euclidean structure in the simplex of D parts. This was based on the early work of J. Aitchison (1986) and completed recently when the Aitchison distance in the simplex was associated with an inner product and orthonormal bases were identified (Aitchison and others, 2002; Egozcue and others, 2003). A partition of the support of a random variable generates a composition by assigning the probability of each interval to a part of the composition. One can imagine that the partition can be refined and the probability density would represent a kind of continuous composition of probabilities in a simplex of infinitely many parts. This intuitive idea leads to a Hilbert space of probability densities by generalizing the Aitchison geometry for compositions in the simplex to the set of probability densities.
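The inner-product structure mentioned above can be sketched numerically: the Aitchison inner product of two D-part compositions is the ordinary Euclidean inner product of their centred log-ratio (clr) images. A minimal sketch in plain Python; the compositions are illustrative.

```python
import math

def clr(x):
    """Centred log-ratio transform of a composition (parts must be positive)."""
    g = math.exp(sum(math.log(p) for p in x) / len(x))  # geometric mean
    return [math.log(p / g) for p in x]

def aitchison_inner(x, y):
    """Aitchison inner product: the ordinary inner product of the clr images."""
    return sum(a * b for a, b in zip(clr(x), clr(y)))

def aitchison_dist(x, y):
    """Aitchison distance, induced by the inner product."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(clr(x), clr(y))))

# Two 3-part compositions (they need not sum to 1; clr is scale-invariant)
x = [0.2, 0.3, 0.5]
y = [0.1, 0.4, 0.5]
d = aitchison_dist(x, y)
```

Because the clr transform is invariant under rescaling, compositions that differ only by closure (e.g. percentages versus proportions) have zero Aitchison distance.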
Abstract:
Functional Data Analysis (FDA) deals with samples where a whole function is observed for each individual. A particular case of FDA arises when the observed functions are density functions, which are also an example of infinite-dimensional compositional data. In this work we compare several methods of dimensionality reduction for this particular type of data: functional principal component analysis (PCA), with or without a prior data transformation, and multidimensional scaling (MDS) for different inter-density distances, one of which takes into account the compositional nature of density functions. The different methods are applied to both artificial and real data (household income distributions).
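One of the transformation-based routes compared in such work, functional PCA after a centred log-ratio transform of discretized densities, can be sketched as follows. Assuming numpy; the sample of densities is synthetic, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sample: each row is a density evaluated on a common grid (n_obs x n_grid)
grid = np.linspace(-4, 4, 64)
means = rng.normal(0.0, 0.8, size=20)
dens = np.exp(-0.5 * (grid[None, :] - means[:, None]) ** 2)
dens /= dens.sum(axis=1, keepdims=True)         # normalise rows to compositions

# Centred log-ratio transform maps the simplex into ordinary Euclidean space
logd = np.log(dens)
clr = logd - logd.mean(axis=1, keepdims=True)   # rows sum to zero

# Functional PCA = SVD of the centred clr coefficients
centred = clr - clr.mean(axis=0, keepdims=True)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
scores = U * s                                  # principal component scores
explained = s ** 2 / np.sum(s ** 2)             # variance explained per component
```

Working in clr coordinates respects the compositional nature of the densities, which plain PCA on the raw density values would ignore.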
Abstract:
Epipolar geometry is a key topic in computer vision, and fundamental matrix estimation is the only way to compute it. This article surveys several methods of fundamental matrix estimation, classified into linear methods, iterative methods and robust methods. All of these methods have been implemented and their accuracy analysed using real images. A summary, accompanied by experimental results, is given.
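The simplest of the surveyed families, the linear (normalized 8-point) estimator, can be sketched as follows. Assuming numpy; the correspondences below are synthetic, generated from two hypothetical cameras, not from the article's images.

```python
import numpy as np

def eight_point(x1, x2):
    """Normalized 8-point estimate of the fundamental matrix.
    x1, x2: (N, 2) arrays of corresponding image points, N >= 8."""
    def normalize(pts):
        c = pts.mean(axis=0)
        scale = np.sqrt(2) / np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
        T = np.array([[scale, 0, -scale * c[0]],
                      [0, scale, -scale * c[1]],
                      [0, 0, 1.0]])
        return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence gives one row of the homogeneous system A f = 0
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2, the defining constraint of a fundamental matrix
    U, s, Vt = np.linalg.svd(F)
    F = U @ np.diag([s[0], s[1], 0.0]) @ Vt
    return T2.T @ F @ T1                        # undo the normalization

# Synthetic geometry: two cameras looking at random 3D points
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(20, 3)) + np.array([0.0, 0.0, 5.0])
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[0.5], [0.0], [0.0]])])

def project(P, X):
    h = (P @ np.column_stack([X, np.ones(len(X))]).T).T
    return h[:, :2] / h[:, 2:]

x1, x2 = project(P1, X), project(P2, X)
F = eight_point(x1, x2)

# Normalized algebraic epipolar residuals |x2h^T F x1h|
res = []
for a, b in zip(x1, x2):
    h1, h2 = np.append(a, 1.0), np.append(b, 1.0)
    res.append(abs(h2 @ F @ h1) /
               (np.linalg.norm(F) * np.linalg.norm(h1) * np.linalg.norm(h2)))
```

With noiseless correspondences the residuals are at machine-precision level; the iterative and robust methods in the survey address noise and outliers.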
Abstract:
A novel technique for estimating the rank of the trajectory matrix in the local subspace affinity (LSA) motion segmentation framework is presented. This new rank estimation is based on the relationship between the estimated rank of the trajectory matrix and the affinity matrix built with LSA. The result is an enhanced model selection technique for trajectory matrix rank estimation by which it is possible to automate LSA, without requiring any a priori knowledge, and to improve the final segmentation.
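The paper's estimator is tied to the LSA affinity matrix; a generic version of the underlying idea, picking the numerical rank at the largest relative gap in the singular value spectrum of the trajectory matrix, can be sketched as follows. Assuming numpy; the toy matrix is illustrative.

```python
import numpy as np

def estimate_rank(M, rel_floor=1e-12):
    """Estimate the numerical rank of a (trajectory-like) matrix from the
    largest relative gap between consecutive singular values."""
    s = np.linalg.svd(M, compute_uv=False)
    s = s[s > rel_floor * s[0]]          # drop numerically zero values
    if len(s) < 2:
        return len(s)
    gaps = s[:-1] / s[1:]                # ratios of consecutive singular values
    return int(np.argmax(gaps)) + 1

# Toy trajectory matrix: exact rank 4, perturbed by small noise
rng = np.random.default_rng(0)
M = rng.normal(size=(30, 4)) @ rng.normal(size=(4, 50))
M += 1e-6 * rng.normal(size=(30, 50))
r = estimate_rank(M)
```

The gap heuristic recovers the rank when the signal singular values stand clearly above the noise floor; the LSA-specific estimator refines this by exploiting the affinity matrix.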
Abstract:
The H∞ synchronization problem for the master-slave structure of second-order neutral systems with time-varying delays is presented in this paper. Delay-dependent sufficient conditions for the design of a delayed output-feedback control are given by the Lyapunov-Krasovskii method in terms of a linear matrix inequality (LMI). A controller that guarantees H∞ synchronization of the master and slave structure using some free-weighting matrices is then developed. A numerical example is given to show the effectiveness of the method.
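Solving the full LMI requires a semidefinite programming solver, but the Lyapunov certificate underlying such stability arguments, finding P positive definite with A^T P + P A = -Q, can be sketched with plain linear algebra via Kronecker vectorization. Assuming numpy; the system matrix is illustrative.

```python
import numpy as np

def lyapunov_certificate(A, Q=None):
    """Solve A^T P + P A = -Q for P via Kronecker vectorization.
    For a Hurwitz (stable) A, the solution P is symmetric positive definite."""
    n = A.shape[0]
    if Q is None:
        Q = np.eye(n)
    # vec(A^T P + P A) = (kron(I, A^T) + kron(A^T, I)) vec(P)
    Kmat = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(Kmat, -Q.reshape(-1)).reshape(n, n)
    return 0.5 * (P + P.T)               # symmetrize against round-off

# A Hurwitz (stable) test matrix
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
P = lyapunov_certificate(A)
eigs = np.linalg.eigvalsh(P)             # all positive for a stable A
```

The LMI formulations in the paper generalize this certificate to delayed dynamics by adding delay-dependent terms to the Lyapunov-Krasovskii functional.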
Abstract:
In this paper, we address the problem of mitigating seismically induced structural vibrations through the design of a semiactive controller based on mixed H2/H∞ control theory. The vibrations caused by the seismic motions are mitigated by a semiactive damper installed at the bottom of the structure. By a semiactive damper we mean a device that can absorb but cannot inject energy into the system. Sufficient conditions for the design of the desired control are given in terms of linear matrix inequalities (LMIs). A controller that guarantees asymptotic stability and mixed H2/H∞ performance is then developed. An algorithm is proposed to handle the semiactive nature of the actuator. The performance of the controller is experimentally evaluated in a real-time hybrid testing facility that consists of a physical specimen (a small-scale magnetorheological damper) and a numerical model (a large-scale three-story building).
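The abstract does not spell out the algorithm that handles the semiactive constraint; a common clipping rule for dissipative-only devices, applying the desired force only when it opposes the relative velocity and thus removes energy, can be sketched as follows. The function name and numbers are illustrative, not the paper's method.

```python
def clipped_semiactive(f_desired, velocity, f_max):
    """Clip a desired control force to what a semiactive (dissipative-only)
    damper can deliver: it may only oppose the relative velocity."""
    # A dissipative force does negative work: f * v < 0
    if f_desired * velocity < 0:
        return max(-f_max, min(f_max, f_desired))  # deliverable: just saturate
    return 0.0  # the force would inject energy: command the damper to minimum

# Desired force opposes motion -> deliverable; same sign -> clipped to zero
a = clipped_semiactive(-3.0, 2.0, 5.0)   # dissipative, within capacity -> -3.0
b = clipped_semiactive(7.0, 2.0, 5.0)    # would inject energy -> 0.0
c = clipped_semiactive(-9.0, 2.0, 5.0)   # dissipative but saturates -> -5.0
```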
Abstract:
We present algorithms for computing approximate distance functions and shortest paths from a generalized source (point, segment, polygonal chain or polygonal region) on a weighted non-convex polyhedral surface in which obstacles (represented by polygonal chains or polygons) are allowed. We also describe an algorithm that uses graphics hardware capabilities to discretize distance functions. Finally, we present algorithms for computing discrete k-order Voronoi diagrams.
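Once a surface is discretized into a weighted graph, the distance function from a generalized source reduces to a multi-source shortest-path computation. A minimal sketch with Dijkstra's algorithm; the graph and the two-node "segment" source are illustrative.

```python
import heapq

def dijkstra(graph, sources):
    """Distance function from a generalized source (any set of seed nodes)
    on a weighted graph, as obtained after discretizing a surface."""
    dist = {v: float("inf") for v in graph}
    heap = []
    for s in sources:                    # a generalized source: several seeds
        dist[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                     # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Small weighted graph; the "source" is the segment spanning nodes A and B
graph = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("A", 1.0), ("C", 2.0), ("D", 6.0)],
    "C": [("A", 4.0), ("B", 2.0), ("D", 3.0)],
    "D": [("B", 6.0), ("C", 3.0)],
}
dist = dijkstra(graph, ["A", "B"])
```

Running the same computation with k different seed sets and keeping, per node, the k smallest distances is the discrete analogue of the k-order Voronoi diagrams mentioned above.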
Abstract:
Different procedures to obtain atom-condensed Fukui functions are described. It is shown how the resulting values may differ depending on the exact approach to atom-condensed Fukui functions. The condensed Fukui function can be computed using either the fragment-of-molecular-response approach or the response-of-molecular-fragment approach. The two approaches are nonequivalent; only the latter corresponds in general to a population-difference expression. The Mulliken scheme yields the same result under either approach but has some computational drawbacks. The different resulting expressions are tested for a wide set of molecules. In practice one must make seemingly arbitrary choices about how to compute condensed Fukui functions, which suggests questioning the role of these indicators in conceptual density-functional theory.
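The population-difference expression mentioned above can be sketched directly: condensed Fukui functions follow from atom-condensed electron populations of the (N-1)-, N- and (N+1)-electron systems. The populations below are hypothetical, illustrative numbers, not from any calculation.

```python
def condensed_fukui(pop_N, pop_Nm1, pop_Np1):
    """Condensed Fukui functions from atom-condensed electron populations of
    the N-, (N-1)- and (N+1)-electron systems (population-difference form)."""
    f_minus = [pN - pm for pN, pm in zip(pop_N, pop_Nm1)]  # electrophilic attack
    f_plus = [pp - pN for pN, pp in zip(pop_N, pop_Np1)]   # nucleophilic attack
    return f_minus, f_plus

# Hypothetical atom-condensed populations for a 3-atom fragment; the totals
# sum to 8, 7 and 9 electrons respectively
pop_N   = [6.2, 1.1, 0.7]
pop_Nm1 = [5.6, 0.9, 0.5]
pop_Np1 = [6.9, 1.3, 0.8]
f_minus, f_plus = condensed_fukui(pop_N, pop_Nm1, pop_Np1)
```

By construction each condensed Fukui vector sums to one, distributing the one added or removed electron over the atoms.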
Abstract:
A select-divide-and-conquer variational method to approximate configuration interaction (CI) is presented. Given an orthonormal set made up of occupied orbitals (Hartree-Fock or similar) and suitable correlation orbitals (natural or localized orbitals), a large N-electron target space S is split into subspaces S0, S1, S2, ..., SR. S0, of dimension d0, contains all configurations K with attributes (energy contributions, etc.) above thresholds T0 = {T0^egy, T0^etc.}; the CI coefficients in S0 remain free to vary throughout. S1 accommodates configurations K with attributes above T1 ≤ T0. An eigenproblem of dimension d0 + d1 for S0 + S1 is solved first, after which the last d1 rows and columns are contracted into a single row and column, thus freezing the last d1 CI coefficients thereafter. The process is repeated with successive Sj (j ≥ 2) chosen so that the corresponding CI matrices fit into random access memory (RAM). Davidson's eigensolver is used R times. The final energy eigenvalue (lowest or excited) is always above the corresponding exact eigenvalue in S. Threshold values {Tj; j = 0, 1, 2, ..., R} regulate accuracy; for large-dimensional S, high accuracy requires S0 + S1 to be solved outside RAM. From there on, however, usually only a few Davidson iterations in RAM are needed for each step, so that Hamiltonian matrix-element evaluation becomes rate determining. One μhartree accuracy is achieved for an eigenproblem of order 24 × 10^6, involving 1.2 × 10^12 nonzero matrix elements and 8.4 × 10^9 Slater determinants.
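Davidson's eigensolver, used R times in the scheme above, can be sketched in its basic form for the lowest eigenpair of a symmetric, diagonally dominant matrix, the regime where the diagonal preconditioner works well. Assuming numpy; the test matrix is synthetic, far smaller than the CI matrices in the paper.

```python
import numpy as np

def davidson_lowest(A, tol=1e-9, max_iter=200):
    """Davidson iteration for the lowest eigenpair of a symmetric,
    diagonally dominant matrix."""
    n = A.shape[0]
    diag = np.diag(A)
    V = np.zeros((n, 1))
    V[np.argmin(diag), 0] = 1.0          # start from the smallest diagonal
    for _ in range(max_iter):
        V, _ = np.linalg.qr(V)           # keep the subspace orthonormal
        T = V.T @ A @ V                  # small projected (subspace) matrix
        vals, vecs = np.linalg.eigh(T)
        theta, s = vals[0], vecs[:, 0]
        u = V @ s                        # Ritz vector in the full space
        r = A @ u - theta * u            # residual
        if np.linalg.norm(r) < tol:
            break
        denom = diag - theta             # diagonal preconditioner
        denom[np.abs(denom) < 1e-12] = 1e-12
        V = np.hstack([V, (r / denom).reshape(-1, 1)])  # expand the subspace
    return theta, u

# Diagonally dominant symmetric test matrix
rng = np.random.default_rng(0)
n = 200
A = np.diag(np.arange(1.0, n + 1)) + 1e-2 * rng.normal(size=(n, n))
A = 0.5 * (A + A.T)                      # symmetrize
theta, u = davidson_lowest(A)
exact = np.linalg.eigvalsh(A)[0]
```

The paper's contraction step replaces frozen blocks of the subspace by a single vector, which is what keeps each successive eigenproblem small enough for RAM.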
Abstract:
Linear response functions are implemented for a vibrational configuration interaction state, allowing accurate analytical calculations of pure vibrational contributions to dynamical polarizabilities. Sample calculations are presented for the pure vibrational contributions to the polarizabilities of water and formaldehyde. We discuss the convergence of the results with respect to various details of the vibrational wave function description as well as the potential and property surfaces. We also analyze the frequency dependence of the linear response function and the effect of accounting phenomenologically for the finite lifetime of the excited vibrational states. Finally, we compare the analytical response approach to a sum-over-states approach.
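The sum-over-states approach used for comparison can be illustrated on a toy few-state system, with alpha(w) = sum_n 2 w_n |mu_0n|^2 / (w_n^2 - w^2) in atomic units. The transition data below are invented for illustration, not from the paper's water or formaldehyde calculations.

```python
def sos_polarizability(omega, transitions):
    """Sum-over-states dynamic polarizability for a toy system.
    transitions: list of (excitation energy w_n, transition moment mu_0n),
    all in atomic units."""
    return sum(2.0 * wn * mu * mu / (wn * wn - omega * omega)
               for wn, mu in transitions)

# Hypothetical two-state ladder (illustrative numbers, not a real molecule)
trans = [(0.02, 0.5), (0.05, 0.3)]
alpha_static = sos_polarizability(0.0, trans)    # static limit
alpha_dynamic = sos_polarizability(0.01, trans)  # below the first resonance
```

Below the first resonance the dynamic value exceeds the static one (normal dispersion); the analytical response approach in the paper avoids the explicit state summation entirely.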
Abstract:
Electronic coupling Vda is one of the key parameters that determine the rate of charge transfer through DNA. While there have been several computational studies of Vda for hole transfer, estimates of electronic couplings for excess electron transfer (ET) in DNA remain unavailable. In this paper, an efficient strategy is established for calculating the ET matrix elements between base pairs in a π stack. Two approaches are considered. First, we employ the diabatic-state (DS) method, in which donor and acceptor are represented by radical anions of the canonical base pairs adenine-thymine (AT) and guanine-cytosine (GC). In this approach, similar values of Vda are obtained with the standard 6-31G* and extended 6-31++G* basis sets. Second, the electronic couplings are derived from the lowest unoccupied molecular orbitals (LUMOs) of neutral systems by using the generalized Mulliken-Hush or fragment charge methods. Because the radical-anion states of AT and GC are well reproduced by LUMOs of the neutral base pairs calculated without diffuse functions, the estimated values of Vda are in good agreement with the couplings obtained for radical-anion states using the DS method. However, when the calculation of a neutral stack is carried out with diffuse functions, the LUMOs of the system exhibit dipole-bound character and cannot be used for estimating electronic couplings. Our calculations suggest that the ET matrix elements Vda for models containing intrastrand thymine and cytosine bases are substantially larger than the couplings in complexes with interstrand pyrimidine bases. The matrix elements for excess electron transfer are found to be considerably smaller than the corresponding values for hole transfer and to be very sensitive to structural changes in a DNA stack.
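In the two-state case, the generalized Mulliken-Hush estimate used in the second approach reduces to Vda = |mu12| dE12 / sqrt(dmu12^2 + 4 mu12^2). A minimal sketch with illustrative numbers, not the paper's data.

```python
import math

def gmh_coupling(dE12, mu12, dmu12):
    """Two-state generalized Mulliken-Hush electronic coupling.
    dE12: adiabatic energy gap; mu12: transition dipole along the charge-
    transfer direction; dmu12: difference of the adiabatic state dipoles."""
    return abs(mu12) * dE12 / math.sqrt(dmu12 ** 2 + 4.0 * mu12 ** 2)

# Illustrative values in atomic units (not from the paper's calculations)
V = gmh_coupling(dE12=0.010, mu12=1.5, dmu12=8.0)
```

In the symmetric limit (dmu12 = 0) the formula collapses to half the adiabatic gap, the familiar two-state resonance result.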
Abstract:
The problem of stability analysis for a class of neutral systems with mixed time-varying neutral, discrete and distributed delays and nonlinear parameter perturbations is addressed. By introducing a novel Lyapunov-Krasovskii functional and combining the descriptor model transformation, the Leibniz-Newton formula, some free-weighting matrices, and a suitable change of variables, new sufficient conditions are established for the stability of the considered system, which are neutral-delay-dependent, discrete-delay-range-dependent, and distributed-delay-dependent. The conditions are presented in terms of linear matrix inequalities (LMIs) and can be efficiently solved using convex programming techniques. Two numerical examples are given to illustrate the efficiency of the proposed method.
Abstract:
During the late 1990s the chance of surviving breast cancer increased. Changes in survival functions reflect a mixture of effects: both the introduction of adjuvant treatments and early screening with mammography played a role in the decline in mortality. Evaluating the contribution of these interventions using mathematical models requires survival functions before and after their introduction. Furthermore, the required survival functions may differ by age group and are related to disease stage at diagnosis. Sometimes detailed information is not available, as was the case for the region of Catalonia (Spain); one may then derive the functions using information from other geographical areas. This work presents the methodology used to estimate age- and stage-specific Catalan breast cancer survival functions from scarce Catalan survival data by adapting the age- and stage-specific US functions. Methods: Cubic splines were used to smooth the data and obtain continuous hazard rate functions. Afterwards, we fitted a Poisson model to derive hazard ratios; the model included time as a covariate. The hazard ratios were then applied to US survival functions detailed by age and stage to obtain Catalan estimates. Results: We first estimated the hazard ratios for Catalonia versus the USA before and after the introduction of screening. The hazard ratios were then multiplied by the age- and stage-specific breast cancer hazard rates from the USA to obtain the Catalan hazard rates. We also compared breast cancer survival in Catalonia and the USA in two time periods: before cancer control interventions (USA 1975-79, Catalonia 1980-89) and after (USA and Catalonia 1990-2001). Survival in Catalonia in the 1980-89 period was worse than in the USA during 1975-79, but the differences disappeared in 1990-2001. Conclusion: Our results suggest that access to better treatments and quality of care contributed to large improvements in survival in Catalonia. In addition, we obtained detailed breast cancer survival functions that will be used for modeling the effect of screening and adjuvant treatments in Catalonia.
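The adaptation step can be sketched under the proportional hazards assumption implicit in applying hazard ratios to reference hazard rates: since S(t) = exp(-integral of the hazard), scaling the hazard by HR gives S_target(t) = S_ref(t) ** HR. A minimal sketch with hypothetical survival values, not the study's data.

```python
def adapt_survival(surv_ref, hazard_ratio):
    """Adapt a reference survival curve with a hazard ratio: under
    proportional hazards, S_target(t) = S_ref(t) ** HR."""
    return [s ** hazard_ratio for s in surv_ref]

# Hypothetical reference (US-style) survival at years 0..5 and a HR > 1,
# meaning worse survival in the target population
surv_us = [1.0, 0.95, 0.90, 0.86, 0.83, 0.80]
surv_cat = adapt_survival(surv_us, hazard_ratio=1.3)
```

With HR > 1 the adapted curve lies everywhere at or below the reference, while remaining a valid (monotonically decreasing) survival function.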
Abstract:
Our research falls within the dynamic conception of intelligence, and specifically within the processes that make up cerebral processing in the information-integration model described by Das, Kirby and Jarman (1979). The two cerebral processes that constitute the basis of intelligent behaviour are simultaneous processing and sequential processing; they are the two main strategies of information processing. Any kind of stimulus can be processed either sequentially (serial, verbal, analysis) or simultaneously (global, visual, synthesis). Based on the literature review, and convinced that by approaching the peculiarities of information processing we gain insight into the process that leads to intelligent behaviour, and therefore to learning, we formulate the following working hypothesis: in preschool children (between 3 and 6 years of age), both types of processing will be present and will vary as a function of age, sex, attention, learning difficulties, language problems, bilingualism, sociocultural level, hand dominance, mental level and the presence of pathology. Any differences found will allow us to formulate criteria and guidelines for educational intervention. Our objectives are to measure processing in preschool children in the Girona counties, to verify the relation of each type of processing with the variables mentioned, to check whether, on the basis of our results, a parallel can be established between processing and the contributions of the localizationist conception of cerebral functions, and to derive guidelines for pedagogical intervention. As for the method, we selected a representative sample of the boys and girls enrolled in the public schools of the Girona counties during the 1992/93 school year, by means of stratified random cluster sampling.
The actual sample size is two hundred and sixty-one subjects. The instruments used were the following: the K-ABC test of Kaufman & Kaufman (1983) for the assessment of processing; a questionnaire addressed to the parents to collect the relevant information; interviews with the teachers; and the Goodenough Human Figure test. With regard to the results of our research, and in relation to the proposed objectives, we note the following facts. In preschool children aged between three and six, the existence of the two types of cerebral processing is confirmed, without a predominance of one over the other; both types of processing act in an interrelated way. Both types of processing improve with age, but differences derived from mental level are observed: a normal mental level is associated with an improvement in both types of processing, whereas with a deficient mental level essentially only sequential processing improves. Moreover, simultaneous processing is more closely related to complex cognitive functions and is more dependent on mental level than sequential processing. Both learning difficulties and language problems predominate in children with a significant imbalance between the two types of processing; learning difficulties are more related to a deficiency in simultaneous processing, whereas language problems are more related to a deficiency in sequential processing. Low sociocultural levels are related to lower results in both types of processing. On the other hand, significantly sequential processing is more frequent among bilingual children.
The Human Figure test behaves as a marker of simultaneous processing, and attentional level as a marker of the severity of the problem affecting processing, in the following order: deficient mental level, learning difficulties and language problems. Attentional deficiencies are linked to deficiencies in simultaneous processing and to the presence of pathology. As for hand dominance, no differences in processing are observed. Finally, with respect to sex, we can only report that when one of the two types of processing is deficient, and there is therefore a processing imbalance, the number of boys affected significantly exceeds that of girls.