190 results for decompositions
Abstract:
Thermal decompositions of hydrazinium hydrogen oxalate (HHOX) and dihydrazinium oxalate (DOX) have been studied. On heating, DOX is converted into HHOX, and thereafter both follow the same pattern of decomposition.
Abstract:
The objective was to measure productivity growth and its components in Finnish agriculture, especially in dairy farming. The objective was also to compare different methods and models - both parametric (stochastic frontier analysis) and non-parametric (data envelopment analysis) - in estimating the components of productivity growth and the sensitivity of results with respect to different approaches. The parametric approach was also applied in the investigation of various aspects of heterogeneity. A common feature of the first three of five articles is that they concentrate empirically on technical change, technical efficiency change and the scale effect, mainly on the basis of decompositions of the Malmquist productivity index. The last two articles explore an intermediate route between the Fisher and Malmquist productivity indices and develop a detailed but meaningful decomposition for the Fisher index, including empirical applications. Distance functions play a central role in the decomposition of the Malmquist and Fisher productivity indices. Three panel data sets from the 1990s were used in the study. The common feature of all data used is that they cover the periods before and after Finnish EU accession. Another common feature is that the analysis mainly concentrates on dairy farms or their roughage production systems. Productivity growth on Finnish dairy farms was relatively slow in the 1990s: approximately one percent per year, independent of the method used. Despite considerable annual variation, productivity growth seems to have accelerated towards the end of the period. There was a slowdown in the mid-1990s at the time of EU accession. No clear immediate effects of EU accession with respect to technical efficiency could be observed. Technical change has been the main contributor to productivity growth on dairy farms. However, average technical efficiency often showed a declining trend, meaning that deviations from the best-practice frontier are increasing over time. This suggests different paths of adjustment at the farm level. However, different methods to some extent provide different results, especially for the sub-components of productivity growth. In most analyses on dairy farms the scale effect on productivity growth was minor. A positive scale effect would be important for improving the competitiveness of Finnish agriculture through increasing farm size. This small effect may also be related to the structure of agriculture and to the allocation of investments to specific groups of farms during the research period. The result may also indicate that the utilization of scale economies faces special constraints in Finnish conditions. However, the analysis of a sample of all farm types suggested a more considerable scale effect than the analysis of dairy farms.
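For reference, one commonly used output-oriented decomposition of the Malmquist productivity index (following Färe, Grosskopf, Norris and Zhang) splits it into efficiency change (EC) and technical change (TC). It is stated here as background rather than as the exact formulation of the abstract above, whose decompositions additionally isolate a scale effect:

```latex
% Malmquist index between periods t and t+1, written with output distance
% functions D_o and split into efficiency change (EC) and technical change (TC).
M_o\!\left(x^{t},y^{t},x^{t+1},y^{t+1}\right)
  = \underbrace{\frac{D_o^{t+1}\!\left(x^{t+1},y^{t+1}\right)}{D_o^{t}\!\left(x^{t},y^{t}\right)}}_{\text{EC}}
    \times
    \underbrace{\left[
      \frac{D_o^{t}\!\left(x^{t+1},y^{t+1}\right)}{D_o^{t+1}\!\left(x^{t+1},y^{t+1}\right)}\,
      \frac{D_o^{t}\!\left(x^{t},y^{t}\right)}{D_o^{t+1}\!\left(x^{t},y^{t}\right)}
    \right]^{1/2}}_{\text{TC}}
```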
Abstract:
Matrix decompositions, where a given matrix is represented as a product of two other matrices, are regularly used in data mining. Most matrix decompositions have their roots in linear algebra, but the needs of data mining are not always those of linear algebra. In data mining one needs to have results that are interpretable -- and what is considered interpretable in data mining can be very different from what is considered interpretable in linear algebra. The purpose of this thesis is to study matrix decompositions that directly address the issue of interpretability. An example is a decomposition of binary matrices where the factor matrices are assumed to be binary and the matrix multiplication is Boolean. The restriction to binary factor matrices increases interpretability -- factor matrices are of the same type as the original matrix -- and allows the use of Boolean matrix multiplication, which is often more intuitive than normal matrix multiplication with binary matrices. Several other decomposition methods are also described, and the computational complexity of computing them is studied together with the hardness of approximating the related optimization problems. Based on these studies, algorithms for constructing the decompositions are proposed. Constructing the decompositions turns out to be computationally hard, and the proposed algorithms are mostly based on various heuristics. Nevertheless, the algorithms are shown to be capable of finding good results in empirical experiments conducted with both synthetic and real-world data.
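As an illustration of the decomposition model discussed above (not of the thesis's algorithms), the following sketch defines the Boolean matrix product and a reconstruction-error count; the matrix sizes and Boolean rank are arbitrary choices for the example:

```python
import numpy as np

def boolean_product(B, C):
    """Boolean matrix product: (B o C)[i, j] = OR over k of (B[i, k] AND C[k, j])."""
    return ((B.astype(int) @ C.astype(int)) > 0).astype(int)

def reconstruction_error(A, B, C):
    """Number of entries where the Boolean product B o C disagrees with A."""
    return int(np.sum(boolean_product(B, C) != A))

rng = np.random.default_rng(0)
B = rng.integers(0, 2, size=(6, 2))   # binary factor matrices of Boolean rank 2 (illustrative)
C = rng.integers(0, 2, size=(2, 5))
A = boolean_product(B, C)             # a binary data matrix that these factors reproduce exactly
print(reconstruction_error(A, B, C))  # 0
```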
Abstract:
Lead zirconyl oxalate hexahydrate (LZO) and lead titanyl zirconyl oxalate hydrate (LTZO) are prepared and characterized. Their thermal decompositions have been investigated by thermoanalytical and gas analysis techniques. The decomposition in air or oxygen has three steps: dehydration, decomposition of the oxalate to a carbonate, and decomposition of the carbonate to PbZrO3. In a non-oxidising atmosphere, partial reduction of Pb(II) to Pb(0) takes place at the oxalate decomposition step. The formation of free metallic lead affects the stoichiometry of the intermediate carbonate and yields a mixture of Pb(Ti,Zr)O3 and ZrO2 as the final products. By maintaining an oxidising atmosphere and a low heating rate, direct preparation of stoichiometric, crystalline Pb(Ti,Zr)O3 at 550°C is possible from the corresponding oxalate precursor.
Abstract:
Equations for solid-state decompositions which are controlled by phase-boundary movement and nucleation have been examined using ammonium perchlorate/polystyrene propellant decomposition at 503 K and 533 K. It was found that three different equations governed by the nucleation process show a good fit to the data at these temperatures. However, the best fit was obtained for the following Avrami-Erofeev equation: [-ln(1 - α)]^(1/4) = kt.
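As a hedged illustration of how the quoted equation is used (with synthetic conversion data, not the propellant measurements of the study), the rate constant k can be estimated by linearising the Avrami-Erofeev form:

```python
import numpy as np

# Minimal sketch (synthetic data, not the study's measurements): estimate the
# Avrami-Erofeev rate constant k from conversion-vs-time data by linearising
# [-ln(1 - a)]^(1/4) = k*t and fitting the slope through the origin.

t = np.array([100.0, 200.0, 300.0, 400.0, 500.0])   # time, s (illustrative)
alpha = 1.0 - np.exp(-(2.0e-3 * t) ** 4)             # synthetic conversion fractions

y = (-np.log(1.0 - alpha)) ** 0.25                   # transformed variable, linear in t
k = np.sum(y * t) / np.sum(t * t)                    # least-squares slope through the origin
print(f"estimated k = {k:.3e} s^-1")                 # ~2.0e-3 for this synthetic data
```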
Abstract:
A local algorithm with local horizon r is a distributed algorithm that runs in r synchronous communication rounds; here r is a constant that does not depend on the size of the network. As a consequence, the output of a node in a local algorithm only depends on the input within r hops from the node. We give tight bounds on the local horizon for a class of local algorithms for combinatorial problems on unit-disk graphs (UDGs). Most of our bounds are due to a refined analysis of existing approaches, while others are obtained by suggesting new algorithms. The algorithms we consider are based on network decompositions guided by a rectangular tiling of the plane. The algorithms are applied to matching, independent set, graph colouring, vertex cover, and dominating set. We also study local algorithms on quasi-UDGs, which are a popular generalisation of UDGs, aimed at more realistic modelling of communication between the network nodes. In the quasi-UDG setting, nodes are only assumed to know their coordinates approximately, up to an additive error. Despite the localisation error, the quality of the solution to problems on quasi-UDGs remains the same as for the case of UDGs with perfect location awareness. We analyse the increase in the local horizon that comes along with moving from UDGs to quasi-UDGs.
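The following sketch illustrates only the general tiling idea; the tile size, the colouring constant, and the clique argument are assumptions of this example, not the paper's exact constructions. Each node derives a tile and a tile colour from its (possibly approximate) coordinates, and same-coloured tiles are far enough apart to be processed independently within a constant local horizon:

```python
import math

# Illustrative tiling-guided decomposition for a unit-disk graph (radius 1).
# Assumptions of this sketch: tiles of side 1/sqrt(2), so any two nodes in one
# tile are within unit distance (a clique), and a 3x3 tile colouring, so tiles
# of the same colour are more than one unit apart and contain no adjacent nodes.

TILE = 1.0 / math.sqrt(2.0)
Q = 3

def tile_of(x: float, y: float) -> tuple[int, int]:
    """Integer tile coordinates of a node located at (x, y)."""
    return (math.floor(x / TILE), math.floor(y / TILE))

def tile_colour(x: float, y: float) -> int:
    """Colour class of the node's tile, computed from (possibly approximate) coordinates."""
    i, j = tile_of(x, y)
    return (i % Q) * Q + (j % Q)

print(tile_of(2.7, 0.4), tile_colour(2.7, 0.4))
```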
Abstract:
Diffuse optical tomographic image reconstruction uses advanced numerical models that are computationally too costly to be implemented in real time. Graphics processing units (GPUs) offer massive desktop parallelization that can accelerate these computations. An open-source GPU-accelerated linear algebra library package is used to compute the most intensive matrix-matrix calculations and matrix decompositions that are used in solving the system of linear equations. These open-source functions were integrated into the existing frequency-domain diffuse optical image reconstruction algorithms to evaluate the acceleration capability of the GPUs (NVIDIA Tesla C1060) with increasing reconstruction problem sizes. These studies indicate that single-precision computations are sufficient for diffuse optical tomographic image reconstruction. The acceleration per iteration can be up to 40-fold using GPUs compared to traditional CPUs in the case of three-dimensional reconstruction, where the reconstruction problem is more underdetermined, making the GPUs more attractive in clinical settings. The current limitation of these GPUs is the available onboard memory (4 GB), which restricts the reconstruction of large sets of optical parameters (more than 13,377). (C) 2010 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.3506216]
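The sketch below is not the paper's GPU code; it only illustrates, in single precision with NumPy, the kind of dense regularised solve that dominates such reconstructions, written here for the underdetermined case the abstract highlights. On a CUDA-capable machine the same array calls could, as an assumption about the setup rather than a statement about the paper's library, be routed through a GPU array package:

```python
import numpy as np

# Minimal sketch (not the paper's code): one regularised Gauss-Newton-style update
# of the kind used in diffuse optical reconstruction, carried out in single precision.
# J is the Jacobian (sensitivity) matrix, r the data residual, lam a regularisation weight.

rng = np.random.default_rng(1)
m, n = 512, 2048                          # measurements x optical parameters (illustrative)
J = rng.standard_normal((m, n)).astype(np.float32)
r = rng.standard_normal(m).astype(np.float32)
lam = np.float32(1e-2)

# Underdetermined case (n > m): solve the m x m dual system (J J^T + lam I) y = r,
# then map back with delta = J^T y, which keeps the dense factorisation small.
A = J @ J.T + lam * np.eye(m, dtype=np.float32)
y = np.linalg.solve(A, r)
delta = J.T @ y                           # parameter update
print(delta.dtype, delta.shape)           # float32, (2048,)
```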
Abstract:
An important tool in signal processing is the use of eigenvalue and singular value decompositions for extracting information from time-series/sensor array data. These tools are used in the so-called subspace methods that underlie solutions to the harmonic retrieval problem in time series and the directions-of-arrival (DOA) estimation problem in array processing. The subspace methods require the knowledge of eigenvectors of the underlying covariance matrix to estimate the parameters of interest. Eigenstructure estimation in signal processing has two important classes: (i) estimating the eigenstructure of the given covariance matrix and (ii) updating the eigenstructure estimates given the current estimate and new data. In this paper, we survey some algorithms for both these classes useful for harmonic retrieval and DOA estimation problems. We begin by surveying key results in the literature and then describe, in some detail, energy function minimization approaches that underlie a class of feedback neural networks. Our approaches estimate some or all of the eigenvectors corresponding to the repeated minimum eigenvalue and also multiple orthogonal eigenvectors corresponding to the ordered eigenvalues of the covariance matrix. Our presentation includes some supporting analysis and simulation results. We may point out here that eigensubspace estimation is a vast area and all aspects of this cannot be fully covered in a single paper. (C) 1995 Academic Press, Inc.
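As background for the subspace idea the survey builds on (and not as one of the paper's feedback neural-network algorithms), the following sketch estimates a single sinusoid's frequency from the noise subspace of a sample covariance matrix, MUSIC-style, with illustrative signal parameters:

```python
import numpy as np

# Minimal sketch of the subspace idea behind harmonic retrieval (illustrative
# parameters, not the paper's algorithms): locate one complex sinusoid from the
# noise subspace of a sample covariance matrix via a MUSIC-style pseudospectrum.

rng = np.random.default_rng(2)
f_true, N, M = 0.12, 400, 8                 # normalised frequency, samples, covariance order
n = np.arange(N)
x = np.exp(2j * np.pi * f_true * n) + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Sample covariance from overlapping length-M snapshots.
snaps = np.array([x[i:i + M] for i in range(N - M)])
R = snaps.T @ snaps.conj() / snaps.shape[0]

w, V = np.linalg.eigh(R)                    # eigenvalues in ascending order
En = V[:, :-1]                              # noise subspace (one signal assumed)

freqs = np.linspace(0, 0.5, 501)
a = np.exp(2j * np.pi * np.outer(np.arange(M), freqs))     # steering vectors
pseudo = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)
print("estimated frequency:", freqs[np.argmax(pseudo)])    # close to f_true = 0.12
```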
Abstract:
Experimental quantum simulation of a Hamiltonian H requires unitary operator decomposition (UOD) of its evolution unitary U = exp(-iHt) in terms of native unitary operators of the experimental system. Here, using a genetic algorithm, we numerically evaluate the most generic UOD (valid over a continuous range of Hamiltonian parameters) of the unitary operator U, termed fidelity-profile optimization. The optimization is obtained by systematically evaluating the functional dependence of experimental unitary operators (such as single-qubit rotations and time-evolution unitaries of the system interactions) to the Hamiltonian (H) parameters. Using this technique, we have solved the experimental unitary decomposition of a controlled-phase gate (for any phase value), the evolution unitary of the Heisenberg XY interaction, and simulation of the Dzyaloshinskii-Moriya (DM) interaction in the presence of the Heisenberg XY interaction. Using these decompositions, we studied the entanglement dynamics of a Bell state in the DM interaction and experimentally verified the entanglement preservation procedure of Hou et al. [Ann. Phys. (N.Y.) 327, 292 (2012)] in a nuclear magnetic resonance quantum information processor.
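The sketch below is not the paper's genetic-algorithm code; it only shows the kind of figure of merit such an optimization evaluates, namely the phase-insensitive gate fidelity between a target controlled-phase gate and a candidate built from commuting Z, Z and ZZ evolutions. The specific candidate decomposition is a standard identity offered as an example, verified numerically here:

```python
import numpy as np
from scipy.linalg import expm

# Minimal sketch (not the paper's optimisation code): verify a candidate
# decomposition of CZ(phi) = diag(1, 1, 1, e^{i phi}) into commuting Z, Z and ZZ
# evolutions, using the gate fidelity F = |Tr(U_target^dagger U_candidate)| / d.

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
ZZ = np.kron(Z, Z)
Z1, Z2 = np.kron(Z, I2), np.kron(I2, Z)

def cz_target(phi):
    return np.diag([1.0, 1.0, 1.0, np.exp(1j * phi)])

def cz_candidate(phi):
    # Equals exp(i*phi*P11), P11 = (I - Z1 - Z2 + Z1 Z2)/4, up to a global phase e^{-i phi/4}.
    return expm(-1j * phi / 4 * Z1) @ expm(-1j * phi / 4 * Z2) @ expm(1j * phi / 4 * ZZ)

def fidelity(U, V):
    d = U.shape[0]
    return abs(np.trace(U.conj().T @ V)) / d

for phi in np.linspace(0.0, np.pi, 5):
    print(f"phi = {phi:.3f}  F = {fidelity(cz_target(phi), cz_candidate(phi)):.6f}")
# Prints F = 1.000000 for every phi: the decomposition holds up to a global phase.
```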
Abstract:
A new representation of spatio-temporal random processes is proposed in this work. In practical applications, such processes are used to model velocity fields, temperature distributions, and the response of vibrating systems, to name a few. Finding an efficient representation for any random process leads to encapsulation of information, which makes it more convenient for practical implementations, for instance, in a computational mechanics problem. For a single-parameter process such as a spatial or temporal process, the eigenvalue decomposition of the covariance matrix leads to the well-known Karhunen-Loeve (KL) decomposition. However, for multiparameter processes such as a spatio-temporal process, the covariance function itself can be defined in multiple ways. Here the process is assumed to be measured at a finite set of spatial locations and a finite number of time instants. Then the spatial covariance matrices at different time instants are considered to define the covariance of the process. This set of square, symmetric, positive semi-definite matrices is then represented as a third-order tensor. A suitable decomposition of this tensor can identify the dominant components of the process, and these components are then used to define a closed-form representation of the process. The procedure is analogous to the KL decomposition for a single-parameter process; however, the decompositions and interpretations vary significantly. The tensor decompositions are successfully applied to (i) a heat conduction problem, (ii) a vibration problem, and (iii) a covariance function taken from the literature that was fitted to model measured wind velocity data. It is observed that the proposed representation provides an efficient approximation to some processes. Furthermore, a comparison with the KL decomposition showed that the proposed method is computationally cheaper than the KL, both in terms of computer memory and execution time.
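To illustrate the construction described above (with synthetic data and a simple stand-in decomposition, not the paper's specific tensor decomposition), the sketch below assembles the spatial covariance matrices at several time instants into a third-order tensor and extracts its dominant temporal components from an SVD of the time-mode unfolding:

```python
import numpy as np

# Minimal sketch (synthetic data; the unfolding SVD is a simple stand-in for the
# paper's tensor decomposition): build the tensor of per-time spatial covariance
# matrices of a spatio-temporal process and look at its dominant components.

rng = np.random.default_rng(3)
ns, nt, nsamples = 10, 20, 500              # spatial points, time instants, realisations
x = np.linspace(0.0, 1.0, ns)
base = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)   # assumed spatial correlation model

# Synthetic realisations with a time-varying variance, then per-time covariance matrices.
L = np.linalg.cholesky(base + 1e-10 * np.eye(ns))
scale = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(nt) / nt)
samples = np.einsum("t,ij,stj->sti", scale, L, rng.standard_normal((nsamples, nt, ns)))
C = np.einsum("sti,stj->tij", samples, samples) / nsamples   # covariance tensor, shape (nt, ns, ns)

# Time-mode unfolding: each row is one time instant's covariance matrix, flattened.
unfolded = C.reshape(nt, ns * ns)
U, s, Vt = np.linalg.svd(unfolded, full_matrices=False)
print("dominant singular values:", np.round(s[:4], 3))
# One dominant component is expected here, since only the variance changes with time.
```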
Abstract:
High wind poses a number of hazards in different areas such as structural safety, aviation, wind energy (where low wind speed is also a concern), and pollutant transport, to name a few. Therefore, a good prediction tool for wind speed is necessary in these areas. Like many other natural processes, the behavior of wind is associated with considerable uncertainties stemming from different sources. Therefore, to develop a reliable prediction tool for wind speed, these uncertainties should be taken into account. In this work, we propose a probabilistic framework for prediction of wind speed from measured spatio-temporal data. The framework is based on decompositions of the spatio-temporal covariance and simulation using these decompositions. A novel simulation method based on a tensor decomposition is used here in this context. The proposed framework is composed of a set of four modules, and the modules have the flexibility to accommodate further modifications. This framework is applied to measured wind speed data from Ireland. Both short- and long-term predictions are addressed.
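The sketch below is not the paper's four-module, tensor-based framework; it only illustrates the basic "decompose the covariance, then simulate" step with an assumed temporal covariance model: correlated wind-speed-like trajectories are drawn from the truncated eigendecomposition of the covariance (a Karhunen-Loeve-type simulation):

```python
import numpy as np

# Minimal sketch of simulation from a covariance decomposition (assumed Gaussian
# model and covariance, not the paper's method or the Irish data).

rng = np.random.default_rng(4)
t = np.arange(48)                                             # hours ahead (illustrative)
cov = 4.0 * np.exp(-np.abs(t[:, None] - t[None, :]) / 12.0)   # assumed temporal covariance

w, V = np.linalg.eigh(cov)
keep = w > 1e-8 * w.max()                                     # drop negligible modes
w, V = w[keep], V[:, keep]

mean = 8.0                                                    # assumed mean wind speed, m/s
xi = rng.standard_normal((1000, w.size))                      # independent standard normals
sims = mean + (xi * np.sqrt(w)) @ V.T                         # 1000 simulated trajectories

print(sims.shape, np.round(sims.std(axis=0)[:3], 2))          # std close to 2.0 m/s
```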
Abstract:
For a general tripartite system in some pure state, an observer possessing any two parts will see them in a mixed state. As a consequence of the Hughston-Jozsa-Wootters theorem, each basis set of local measurement on the third part will correspond to a particular decomposition of the bipartite mixed state into a weighted sum of pure states. It is possible to associate an average bipartite entanglement S̄ with each of these decompositions. The maximum value of S̄ is called the entanglement of assistance (E_A), while the minimum value is called the entanglement of formation (E_F). An appropriate choice of the basis set of local measurement will correspond to an optimal value of S̄; we find here a generic optimality condition for the choice of the basis set. In the present context, we analyze the tripartite states W and GHZ and show how they are fundamentally different. (C) 2014 Elsevier B.V. All rights reserved.
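As a hedged illustration of the quantity S̄ discussed above (the measurement bases used here are standard examples, not necessarily the optimal bases of the paper's condition), the sketch below computes the average bipartite entanglement induced by measuring the third qubit of a three-qubit pure state in a given basis:

```python
import numpy as np

# Minimal sketch (not the paper's optimisation): a projective measurement on the
# third qubit of a tripartite pure state induces a pure-state decomposition of the
# bipartite reduced state; compute the average bipartite entanglement S-bar.

def entanglement_entropy(psi_ab):
    """Von Neumann entropy (in bits) of qubit A for a normalised two-qubit pure state."""
    s = np.linalg.svd(psi_ab.reshape(2, 2), compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def average_entanglement(psi_abc, basis_c):
    """S-bar for measuring qubit C in the given orthonormal basis (columns)."""
    t = psi_abc.reshape(4, 2)                 # rows index AB, columns index C
    sbar = 0.0
    for k in range(2):
        unnorm = t @ basis_c[:, k].conj()     # unnormalised post-measurement AB state
        p = float(np.vdot(unnorm, unnorm).real)
        if p > 1e-12:
            sbar += p * entanglement_entropy(unnorm / np.sqrt(p))
    return sbar

Z_basis = np.eye(2)
X_basis = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
w = np.zeros(8); w[1] = w[2] = w[4] = 1 / np.sqrt(3)

print("GHZ:", average_entanglement(ghz, Z_basis), average_entanglement(ghz, X_basis))
# GHZ: S-bar is 0 in the Z basis and 1 in the X basis (up to rounding), matching E_F = 0 and E_A = 1.
print("W  :", average_entanglement(w, Z_basis), average_entanglement(w, X_basis))
# W: S-bar is 2/3 in the Z basis.
```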
Abstract:
In this paper we compute and analyze the poverty severity index, or squared poverty gap, for Greater Buenos Aires over the period 1995-2006. This index is one of the three best-known measures in the FGT class (Foster, Greer and Thorbecke 1984), although it is less widely used than the incidence or head count ratio (computed by INDEC) and the poverty gap. The severity index takes into account not only the distance separating the poor from the poverty line (as the poverty gap does) but also the inequality among the poor; that is, it gives greater weight to households that are farther from the poverty line. It therefore satisfies the transfer axiom, unlike the other two measures. We compute the severity index at both the household and the individual level. In addition, we decompose the index by groups (by employment status, education level, household size, and the age and sex of the household head) and compute the relative risk of each group. We also compare the incidence (INDEC) and severity indices. We conclude by presenting the incidence and severity indices for the whole country, together with their decomposition by region, for 2006.
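As a hedged illustration of the FGT family and its group decomposition (with made-up incomes and a hypothetical poverty line, not the paper's survey data), the sketch below computes P_0 (incidence), P_1 (poverty gap) and P_2 (severity) and checks that population-share-weighted group indices add up to the total:

```python
import numpy as np

# Minimal sketch with made-up incomes: the FGT family
# P_alpha = (1/n) * sum over the poor of ((z - y_i) / z) ** alpha,
# where z is the poverty line; alpha = 0 gives the incidence (head count ratio),
# alpha = 1 the poverty gap, and alpha = 2 the severity (squared poverty gap) index.

def fgt(incomes, z, alpha):
    """P_alpha = (1/n) * sum_{y_i < z} ((z - y_i) / z) ** alpha."""
    y = np.asarray(incomes, dtype=float)
    gap = (z - y[y < z]) / z
    return float(np.sum(gap ** alpha) / y.size)

z = 1000.0                                  # hypothetical poverty line
north = [400.0, 800.0, 1200.0, 300.0]       # hypothetical household incomes, group 1
south = [950.0, 1500.0, 2000.0, 700.0]      # hypothetical household incomes, group 2
full = north + south

for a in (0, 1, 2):
    total = fgt(full, z, a)
    # The FGT class is additively decomposable: weighted group indices sum to the total.
    parts = sum(len(g) / len(full) * fgt(g, z, a) for g in (north, south))
    print(f"P_{a}: total = {total:.4f}, sum of group contributions = {parts:.4f}")
```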
Abstract:
Although the recovery period that followed the crisis at the end of convertibility showed improvements in monetary poverty and inequality measures, the analysis of multidimensional measures reveals that these improvements had already stalled by 2007. This paper seeks to investigate the components of this change through a temporal and group-wise decomposition exercise of the Alkire-Foster (2007) measure applied to data from the Encuesta de la Deuda Social Argentina.
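As a hedged illustration of the measure being decomposed (with made-up deprivation indicators and weights, not the EDSA data or the paper's indicator set), the sketch below computes the Alkire-Foster adjusted headcount ratio M0 = H × A and checks its exact decomposition by population subgroup:

```python
import numpy as np

# Minimal sketch with made-up deprivation data: the Alkire-Foster adjusted
# headcount ratio M0 = H * A, where H is the share of people deprived in at least
# k (weighted) dimensions and A is their average deprivation share; M0 equals the
# mean censored deprivation score and decomposes additively by subgroup.

def adjusted_headcount(deprivations, weights, k):
    """deprivations: (n_people, n_dims) 0/1 matrix; weights sum to 1; k: poverty cutoff."""
    d = np.asarray(deprivations, dtype=float)
    w = np.asarray(weights, dtype=float)
    score = d @ w                              # weighted deprivation score per person
    censored = np.where(score >= k, score, 0.0)  # censor the scores of the non-poor
    return float(censored.mean())              # equals H * A

weights = np.array([0.25, 0.25, 0.25, 0.25])   # four equally weighted indicators (illustrative)
k = 0.5                                        # poor if deprived in half the weighted indicators
group_a = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 0, 0]])
group_b = np.array([[0, 1, 0, 0], [1, 1, 1, 1]])
both = np.vstack([group_a, group_b])

m0 = adjusted_headcount(both, weights, k)
by_group = sum(len(g) / len(both) * adjusted_headcount(g, weights, k) for g in (group_a, group_b))
print(round(m0, 4), round(by_group, 4))        # the two agree: M0 decomposes exactly by group
```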
Abstract:
The simplest multiplicative systems in which arithmetical ideas can be defined are semigroups. For such systems irreducible (prime) elements can be introduced and conditions under which the fundamental theorem of arithmetic holds have been investigated (Clifford (3)). After identifying associates, the elements of the semigroup form a partially ordered set with respect to the ordinary division relation. This suggests the possibility of an analogous arithmetical result for abstract partially ordered sets. Although nothing corresponding to product exists in a partially ordered set, there is a notion similar to g.c.d. This is the meet operation, defined as greatest lower bound. Thus irreducible elements, namely those elements not expressible as meets of proper divisors, can be introduced. The assumption of the ascending chain condition then implies that each element is representable as a reduced meet of irreducibles. The central problem of this thesis is to determine conditions on the structure of the partially ordered set in order that each element have a unique such representation.
Part I contains preliminary results and introduces the principal tools of the investigation. In the second part, basic properties of the lattice of ideals and the connection between its structure and the irreducible decompositions of elements are developed. The proofs of these results are identical with the corresponding ones for the lattice case (Dilworth (2)). The last part contains those results whose proofs are peculiar to partially ordered sets and also contains the proof of the main theorem.
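A standard lattice-theoretic example, offered here only as an illustration and not taken from the thesis, shows why such uniqueness requires additional structural hypotheses. In the five-element modular lattice M_3, whose three atoms a, b, c all cover the least element 0, each atom is meet-irreducible, and yet

```latex
0 \;=\; a \wedge b \;=\; a \wedge c \;=\; b \wedge c ,
```

so 0 admits three distinct reduced representations as a meet of irreducibles. In a distributive lattice satisfying the ascending chain condition, by contrast, such reduced representations are unique.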