9 results for Dimensionality
Abstract:
Synthetic metalloporphyrin complexes are often used as analogues of natural systems, and they can be used for the preparation of new Solid Coordination Frameworks (SCFs). In this work, a series of six metalloporphyrinic compounds constructed from different meso-substituted metalloporphyrins (phenyl, carboxyphenyl and sulfonatophenyl) has been structurally characterized by means of single-crystal X-ray diffraction, IR spectroscopy and elemental analysis. The compounds were classified as 0D, 1D or 2D according to the dimensionality of the crystal array, considering coordination bonds only. On this basis, the structural features and relationships of these crystal structures were analyzed in order to draw conclusions not only about the dimensionality of the networks but also about possible applications of the as-obtained compounds, focusing on the interactions of coordination and crystallization molecules. These interactions provide the coordination bonds and the cohesion forces that produce SCFs with different dimensionalities.
Abstract:
Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a nonlinear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state-reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, maximum likelihood and least-squares methods, which are the preferred choices in today's experiments. This high efficiency is achieved by greatly reducing the dimensionality of the problem, employing a particular representation of permutationally invariant states known from spin coupling combined with convex optimization, which has clear advantages regarding speed, control and accuracy in comparison to commonly employed numerical routines. First prototype implementations easily allow reconstruction of a state of 20 qubits in a few minutes on a standard computer.
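The reconstruction principles mentioned above (least squares with a physicality constraint) can be illustrated on a deliberately tiny case. The sketch below is not the paper's permutationally invariant scheme; it is a hypothetical single-qubit toy showing the generic least-squares idea: build an estimate from measured expectation values, then project it onto the physical (positive semidefinite, trace-one) set.

```python
import numpy as np

# Pauli basis for a single qubit (toy stand-in for a full measurement set).
PAULIS = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def least_squares_state(expectations):
    """Linear-inversion estimate from Pauli expectation values, followed by
    projection onto the physical set (eigenvalue clipping + renormalization)."""
    rho = sum(e * p for e, p in zip(expectations, PAULIS)) / 2
    w, v = np.linalg.eigh(rho)          # rho is Hermitian by construction
    w = np.clip(w, 0.0, None)           # remove unphysical negative eigenvalues
    w = w / w.sum()                     # restore unit trace
    return (v * w) @ v.conj().T

# Hypothetical noisy data: <I>=1, <X>=0.6, <Y>=0.0, <Z>=0.8.
rho = least_squares_state([1.0, 0.6, 0.0, 0.8])
```

The result is a valid density matrix even when the raw data are slightly unphysical, which is the role convex optimization plays (at far larger scale) in the scheme described above.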
Abstract:
Recent advances in technology involving magnetic materials require the development of novel advanced magnetic materials with improved magnetic and magneto-transport properties and with reduced dimensionality; such materials have therefore gained much attention recently. Among them, a family of thin wires with reduced geometrical dimensions (on the order of 1-30 μm in diameter) has gained importance within the last few years. These thin wires combine excellent soft magnetic properties (with coercivities up to 4 A/m) with attractive magneto-transport properties (Giant Magneto-Impedance effect, GMI; Giant Magneto-Resistance effect, GMR) and an unusual re-magnetization process in positive-magnetostriction compositions, exhibiting quite fast domain-wall propagation. In this paper we overview the magnetic and magneto-transport properties of these microwires that make them suitable for microsensor applications.
Abstract:
In this paper, reanalysis fields from the ECMWF have been statistically downscaled to predict surface moisture flux and daily precipitation from large-scale atmospheric fields at two observatories (Zaragoza and Tortosa, Ebro Valley, Spain) during the 1961-2001 period. Three types of downscaling models have been built: (i) analogues, (ii) analogues followed by random forests, and (iii) analogues followed by multiple linear regression. The inputs consist of data (predictor fields) taken from the ERA-40 reanalysis; the predicted fields are precipitation and surface moisture flux as measured at the two observatories. To reduce the dimensionality of the problem, the ERA-40 fields have been decomposed using empirical orthogonal functions. The available daily data have been divided into two parts: a training period (1961-1996), used to find a group of about 300 analogues to build the downscaling model, and a test period (1997-2001), in which the models' performance has been assessed using independent data. In the case of surface moisture flux, the models based on analogues followed by random forests do not clearly outperform those built on analogues plus multiple linear regression, while simple averages calculated from the nearest analogues found in the training period yielded only slightly worse results. In the case of precipitation, the three types of model performed equally. These results suggest that most of the models' downscaling capability can be attributed to the analogue-calculation stage.
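The two core stages of the approach above, EOF-based dimensionality reduction followed by an analogue search, can be sketched in a few lines. This is a minimal illustration with random stand-in data, not the ERA-40 pipeline; array sizes and the number of retained EOFs are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-ins for reanalysis predictor fields: days x grid points.
train = rng.normal(size=(300, 500))   # training-period fields
test_day = rng.normal(size=500)       # one day from the test period

# EOF decomposition is a PCA of the anomaly fields: keep the leading modes.
mean = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
k = 10                                # retained EOFs (dimensionality reduction)
pcs_train = (train - mean) @ Vt[:k].T
pc_test = (test_day - mean) @ Vt[:k].T

# Analogue stage: the nearest training days in the reduced EOF space.
dist = np.linalg.norm(pcs_train - pc_test, axis=1)
analogues = np.argsort(dist)[:30]     # indices of the closest analogues
```

The local predictand (e.g. precipitation) on the analogue days would then feed the second stage: a simple average, a random forest, or a multiple linear regression.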
Abstract:
Hyper-spectral data allow the construction of more robust statistical models of material properties than the standard tri-chromatic color representation. However, because of the large dimensionality and complexity of hyper-spectral data, the extraction of robust features (image descriptors) is not a trivial issue. Thus, to facilitate efficient feature extraction, decorrelation techniques are commonly applied to reduce the dimensionality of the hyper-spectral data, with the aim of generating compact and highly discriminative image descriptors. Current methodologies for data decorrelation, such as principal component analysis (PCA), linear discriminant analysis (LDA), wavelet decomposition (WD) or band-selection methods, require complex and subjective training procedures; in addition, the compressed spectral information is not directly related to the physical (spectral) characteristics of the analyzed materials. The major objective of this article is to introduce and evaluate a new data-decorrelation methodology using an approach that closely emulates human vision. The proposed data-decorrelation scheme has been employed to optimally minimize the amount of redundant information contained in the highly correlated hyper-spectral bands and has been comprehensively evaluated in the context of non-ferrous material classification.
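As a point of reference for the baseline methods the abstract criticizes, the snippet below shows PCA-style decorrelation of hyper-spectral bands: project pixel spectra onto the leading eigenvectors of the band covariance, yielding compact, mutually uncorrelated descriptors. The cube dimensions and noise model are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical hyper-spectral cube: 32 x 32 pixels, 64 highly correlated bands
# (a shared base signal plus small per-band noise).
base = rng.normal(size=(32, 32, 1))
cube = base + 0.1 * rng.normal(size=(32, 32, 64))

X = cube.reshape(-1, 64)                 # pixels x bands
X = X - X.mean(axis=0)                   # center each band
cov = X.T @ X / (X.shape[0] - 1)         # band covariance matrix
w, v = np.linalg.eigh(cov)               # eigen-decomposition (ascending order)
order = np.argsort(w)[::-1]
descriptors = X @ v[:, order[:3]]        # 3 decorrelated components per pixel
```

Note the drawback the article points out: the columns of `descriptors` are abstract linear mixtures of bands, not directly tied to the physical spectral characteristics of the materials.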
Abstract:
Singular Value Decomposition (SVD) is a key linear-algebra operation in many scientific and engineering applications. In particular, many computational intelligence systems rely on machine learning methods involving high-dimensionality datasets that have to be processed quickly for real-time adaptability. In this paper we describe a practical FPGA (Field Programmable Gate Array) implementation of an SVD processor for accelerating the solution of large LSE problems. The design approach has been comprehensive, from algorithmic refinement through numerical analysis to customization for an efficient hardware realization. The processing scheme rests on an adaptive vector-rotation evaluator for error regularization that enhances convergence speed with no penalty on solution accuracy. The proposed architecture, which follows a data-transfer scheme, is scalable and based on the interconnection of simple rotation units, which allows a trade-off between occupied area and processing acceleration in the final implementation. This permits the SVD processor to be implemented on both low-cost and high-end FPGAs, according to the final application requirements.
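The role of the SVD in solving such problems can be shown in software before any hardware mapping. The sketch below is a plain least-squares solve via the SVD pseudoinverse; the truncation of small singular values is a standard regularization device, not the paper's adaptive rotation evaluator.

```python
import numpy as np

def svd_solve(A, b, rcond=1e-10):
    """Solve min ||Ax - b|| via the SVD pseudoinverse. Singular values
    below rcond * s_max are truncated, regularizing ill-conditioned systems."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > rcond * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

# Small overdetermined system with an exact solution x = [2, 3].
A = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]])
b = np.array([4.0, 9.0, 5.0])
x = svd_solve(A, b)
```

A hardware SVD processor decomposes this same computation into a network of plane rotations, which is what makes the area/speed trade-off described above possible.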
Abstract:
Article, Polyhedron, 2011
Abstract:
Multi-Agent Reinforcement Learning (MARL) algorithms face two main difficulties: the curse of dimensionality, and environment non-stationarity due to the independent learning processes carried out concurrently by the agents. In this paper we formalize and prove the convergence of a Distributed Round Robin Q-learning (D-RR-QL) algorithm for cooperative systems. The computational complexity of this algorithm increases linearly with the number of agents. Moreover, it eliminates environment non-stationarity by carrying out round-robin scheduling of action selection and execution. This learning scheme allows the implementation of Modular State-Action Vetoes (MSAV) in cooperative multi-agent systems, which speeds up learning convergence in over-constrained systems by vetoing state-action pairs that lead to undesired termination states (UTS) in the relevant state-action subspace. Each agent's local state-action value function is learned in an independent process, including the MSAV policies. Coordination of locally optimal policies to obtain the globally optimal joint policy is achieved by a greedy selection procedure using message passing. We show that D-RR-QL improves over state-of-the-art approaches, such as Distributed Q-Learning, Team Q-Learning and Coordinated Reinforcement Learning, in a paradigmatic Linked Multi-Component Robotic System (L-MCRS) control problem: the hose transportation task. L-MCRS are over-constrained systems with many UTS induced by the interaction of the passive linking element and the active mobile robots.
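The round-robin idea that removes non-stationarity can be sketched in a toy setting: exactly one agent selects and executes an action per time step, and each agent updates only its own local Q-table. This is a heavily simplified illustration (a two-agent chain world invented here), not the paper's D-RR-QL with MSAV or its convergence construction.

```python
import numpy as np

rng = np.random.default_rng(2)
N_STATES, N_ACTIONS, N_AGENTS = 5, 2, 2
alpha, gamma, eps = 0.5, 0.9, 0.3

# One local Q-table per agent; the shared goal is reaching state 4.
Q = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(N_AGENTS)]

def step(s, a):
    """Toy chain environment: action 1 moves forward, action 0 backward."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

for episode in range(2000):
    s = 0
    for t in range(30):
        agent = t % N_AGENTS        # round-robin: one agent acts per turn
        if rng.random() < eps:      # epsilon-greedy exploration
            a = int(rng.integers(N_ACTIONS))
        else:
            a = int(Q[agent][s].argmax())
        s2, r = step(s, a)
        # Standard Q-learning update on the acting agent's local table only.
        Q[agent][s, a] += alpha * (r + gamma * Q[agent][s2].max() - Q[agent][s, a])
        s = s2
        if r > 0:
            break
```

Because only one agent acts at a time, each agent's environment (the world plus the other agents' fixed turn order) is stationary from its point of view, which is the property the convergence proof exploits.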
Abstract:
This study defines and proposes a measurement scale for social entrepreneurship (SE) in its broadest sense. The broad definition of SE covers for-profit firms that use social aims as a core component of their strategy. By pursuing social aims, these firms can boost the value of their products or services for consumers or exploit new business areas. Under this broad definition of SE, profit-seeking and the pursuit of social aims converge, revealing a form of SE that has received little attention in either theoretical or empirical research. To fill this research gap, the present study develops a scale capable of measuring broad SE in firms. The process used to build the scale draws upon research by Churchill (1979) and DeVellis (1991) and combines the Delphi technique, a pre-test questionnaire and structural equation modelling. The theoretical basis for the scale is supported by an empirical study in the hotel sector. The scale provides a valid, reliable instrument for measuring broad SE in firms and meets all sociometric properties required of measurement scales in the social sciences, namely dimensionality, reliability and validity.