884 results for CONSTRAINED LINEAR-SYSTEMS


Relevance:

40.00%

Publisher:

Abstract:

The seismic behaviour of one-storey asymmetric structures has been studied since the 1970s by a number of research studies, which identified the coupled translational-torsional nature of the response of this class of systems, leading to severe displacement magnifications at the perimeter frames and therefore to a significant increase in the local peak seismic demand on the structural elements with respect to that of equivalent non-eccentric systems (Kan and Chopra 1987). These studies identified the fundamental parameters governing the torsional behaviour of in-plan asymmetric structures (such as the fundamental period TL, the normalized eccentricity e and the torsional-to-lateral frequency ratio Ωϑ) and the corresponding trends of behaviour. It has been clearly recognized that asymmetric structures characterized by Ωϑ > 1, referred to as torsionally-stiff systems, behave quite differently from structures with Ωϑ < 1, referred to as torsionally-flexible systems. Previous research works by some of the authors proposed a simple closed-form estimation of the maximum torsional response of one-storey elastic systems (Trombetti et al. 2005 and Palermo et al. 2010), leading to the so-called “Alpha Method” for the evaluation of the displacement magnification factors at the corner sides. The present paper provides an upgrade of the “Alpha Method” that removes the assumption of linear elastic response of the system. The main objective is to evaluate how the excursion of the structural elements into the inelastic field (upon reaching the yield strength) affects the displacement demand of one-storey in-plan asymmetric structures. The system proposed by Chopra and Goel in 2007, which is claimed to capture the main features of the non-linear response of in-plan asymmetric systems, is used to perform a large parametric analysis varying all the fundamental parameters of the system, including the inelastic demand, by varying the force reduction factor from 2 to 5. Magnification factors for different force reduction factors are proposed, and comparisons with the results obtained from linear analysis are provided.
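As a rough illustration of how a force reduction factor translates into inelastic displacement demand, the sketch below integrates a single elastic-perfectly-plastic oscillator whose yield strength is set to the elastic demand divided by R. It deliberately replaces the three-degree-of-freedom Chopra-Goel model used in the paper with a single degree of freedom, and the ground motion, damping ratio and integration scheme are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def epp_peak(ag, dt, T=1.0, zeta=0.05, fy=np.inf):
    """Peak displacement of an elastic-perfectly-plastic SDOF oscillator
    (unit mass) under ground acceleration ag, via semi-implicit Euler."""
    w = 2.0 * np.pi / T
    k, c = w ** 2, 2.0 * zeta * w
    u = v = up = 0.0                  # displacement, velocity, plastic offset
    peak = 0.0
    for a_g in ag:
        f = k * (u - up)              # trial elastic restoring force
        if f > fy:                    # yielding in the positive direction
            up, f = u - fy / k, fy
        elif f < -fy:                 # yielding in the negative direction
            up, f = u + fy / k, -fy
        acc = -a_g - c * v - f        # equation of motion with m = 1
        v += acc * dt
        u += v * dt
        peak = max(peak, abs(u))
    return peak

# Synthetic ground motion: a decaying acceleration pulse (illustrative only)
dt = 0.001
t = np.arange(0.0, 10.0, dt)
ag = 0.4 * 9.81 * np.sin(2.0 * np.pi * t / 0.8) * np.exp(-0.5 * t)

u_el = epp_peak(ag, dt)                        # elastic peak demand
for R in (2, 3, 4, 5):                         # force reduction factors
    fy = (2.0 * np.pi / 1.0) ** 2 * u_el / R   # yield force = k * u_el / R
    u_in = epp_peak(ag, dt, fy=fy)
    print(f"R = {R}: inelastic/elastic displacement ratio = {u_in / u_el:.2f}")
```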

Relevance:

40.00%

Publisher:

Abstract:

The concept of an elementary vector is generalised to the case where the steady-state space of the metabolic network is not a flux cone but a general polyhedron, due to additional inhomogeneous constraints on the flows through some of the reactions. On the one hand, this makes it possible to selectively enumerate the elementary modes that satisfy certain optimality criteria, which can yield a large computational gain compared with full enumeration. On the other hand, in contrast to the single optimum found by executing a linear program, it enables a comprehensive description of the set of alternate optima often encountered in flux balance analysis. The concepts are illustrated on a metabolic network model of human cardiac mitochondria.
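To make the flux balance analysis setting concrete, here is a hedged Python sketch on a hypothetical toy network: it solves the FBA linear program (maximize an output flux subject to Sv = 0 and flux bounds) and then probes the alternate optima by fixing the objective at its optimal value and ranging another flux. This is the standard flux-variability probe, which only hints at the comprehensive description that the paper's elementary-vector enumeration provides.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A, two parallel routes A -> B (r2, r3), B -> out (r4)
S = np.array([[1, -1, -1,  0],    # metabolite A balance
              [0,  1,  1, -1]])   # metabolite B balance
bounds = [(0, 10)] * 4

# FBA: maximize the output flux v4 subject to S v = 0 (linprog minimizes)
res = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
v_opt = -res.fun
print(f"optimal output flux: {v_opt:.1f}")

# Alternate optima: fix the objective at its optimum, then minimize and
# maximize the flux through r2 (flux variability analysis)
S_fix = np.vstack([S, [0, 0, 0, 1]])
b_fix = [0, 0, v_opt]
lo = linprog(c=[0, 1, 0, 0], A_eq=S_fix, b_eq=b_fix, bounds=bounds).fun
hi = -linprog(c=[0, -1, 0, 0], A_eq=S_fix, b_eq=b_fix, bounds=bounds).fun
print(f"flux through r2 ranges over [{lo:.1f}, {hi:.1f}] at the optimum")
```

The nonzero width of that range is exactly the signature of alternate optima: every split of flow between r2 and r3 achieves the same objective value.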

Relevance:

40.00%

Publisher:

Abstract:

Most commercial project management software packages include planning methods to devise schedules for resource-constrained projects. Since the planning methods implemented are proprietary information of the software vendors, the question arises how the packages differ in quality with respect to their resource-allocation capabilities. We experimentally evaluate the resource-allocation capabilities of eight recent software packages using 1,560 instances with 30, 60, and 120 activities from the well-known PSPLIB library. In some of the analyzed packages, the user may influence the resource allocation by means of multi-level priority rules, whereas in other packages only a few options can be chosen. We study the impact of various complexity parameters and priority rules on the project duration obtained by the software packages. The results indicate that the resource-allocation capabilities of these packages differ significantly. In general, the relative gap between the packages grows with increasing resource scarcity and with an increasing number of activities. Moreover, the selection of the priority rule has a considerable impact on the project duration. Surprisingly, when a priority rule is selected in the packages where this is possible, both the mean and the variance of the project duration are in general worse than for the packages that do not offer the selection of a priority rule.
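Although the packages' planning methods are proprietary, priority-rule-based resource allocation is typically built on a schedule generation scheme. The following Python sketch shows a serial schedule generation scheme on a hypothetical five-activity instance with one renewable resource, comparing two simple priority rules; the instance and rules are illustrative assumptions, not taken from PSPLIB or from any of the evaluated packages.

```python
# Toy RCPSP instance: one renewable resource with capacity 4.
# activity id: (duration, resource demand, predecessors)
acts = {
    1: (3, 2, []),
    2: (4, 3, []),
    3: (2, 2, [1]),
    4: (3, 1, [1, 2]),
    5: (2, 2, [3, 4]),
}
CAP = 4
HORIZON = 2 * sum(d for d, _, _ in acts.values())

def serial_sgs(priority):
    """Serial schedule generation scheme: repeatedly pick the eligible
    activity with the best priority value and start it as early as the
    precedence and resource constraints allow."""
    usage = [0] * HORIZON            # resource usage per time unit
    start, finish = {}, {}
    unscheduled = set(acts)
    while unscheduled:
        eligible = [j for j in unscheduled
                    if all(p in finish for p in acts[j][2])]
        j = min(eligible, key=priority)
        dur, dem, preds = acts[j]
        t = max((finish[p] for p in preds), default=0)
        while any(usage[u] + dem > CAP for u in range(t, t + dur)):
            t += 1                   # shift right until the resource fits
        for u in range(t, t + dur):
            usage[u] += dem
        start[j], finish[j] = t, t + dur
        unscheduled.remove(j)
    return max(finish.values()), start

# Two simple priority rules: shortest duration first vs. lowest id first
for name, rule in [("SPT", lambda j: acts[j][0]), ("ID", lambda j: j)]:
    makespan, start = serial_sgs(rule)
    print(f"{name}: makespan = {makespan}, starts = {start}")
```

Even on this tiny instance the two rules produce different makespans, which mirrors the paper's finding that the choice of priority rule has a considerable impact on the project duration.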

Relevance:

40.00%

Publisher:

Abstract:

The general goal of this thesis is to correlate observable properties of organic and metal-organic materials with their ground-state electron density distribution. In the long term, we expect to develop empirical or semi-empirical approaches to predict materials properties from the electron density of their building blocks, thus making it possible to rationally engineer molecular materials from their constituent subunits, such as their functional groups. In particular, we have focused on the linear optical properties of naturally occurring amino acids and their organic and metal-organic derivatives, and on the magnetic properties of metal-organic frameworks. For analysing the optical properties and the magnetic behaviour of the molecular or sub-molecular building blocks in materials, we mostly used the traditional QTAIM partitioning scheme of the molecular or crystalline electron densities; however, we have also investigated a new approach, X-ray Constrained Extremely Localized Molecular Orbitals (XC-ELMO), which could be used in the future to extract the electron densities of crystal subunits. With the purpose of rationally engineering linear optical materials, we have calculated atomic and functional-group polarizabilities of amino acid molecules, their hydrogen-bonded aggregates and their metal-organic frameworks. This has enabled the identification of the most efficient functional groups, able to build up larger electric susceptibilities in crystals, as well as the quantification of the role played by intermolecular interactions and coordinative bonds in modifying the polarizability of the isolated building blocks. Furthermore, we analysed the dependence of the polarizabilities on the one-electron basis set and the many-electron Hamiltonian, which is useful for selecting the most efficient level of theory for estimating susceptibilities of molecular-based materials. With the purpose of rationally designing molecular magnetic materials, we have investigated the electron density distributions and the magnetism of two copper(II) pyrazine nitrate metal-organic polymers. High-resolution X-ray diffraction and DFT calculations were used to characterize the magnetic exchange pathways and to establish relationships between the electron densities and the exchange-coupling constants. Moreover, molecular orbital and spin-density analyses were employed to understand the role of different magnetic exchange mechanisms in determining the bulk magnetic behaviour of these materials. As anticipated, we have finally investigated a modified version of the X-ray constrained wavefunction technique, XC-ELMO, which is not only a useful tool for the determination and analysis of experimental electron densities, but also enables one to derive transferable molecular orbitals strictly localized on atoms, bonds or functional groups. In the future, we expect to use XC-ELMOs to predict the materials properties of large systems that are currently challenging to calculate from first principles, such as macromolecules or polymers. Here, we point out the advantages, requirements and pitfalls of the technique. This work fulfils, at least partially, the prerequisites for understanding the materials properties of organic and metal-organic materials from the perspective of the electron density distribution of their building blocks.
Empirical or semi-empirical evaluation of optical or magnetic properties from a preconceived assembly of building blocks could be extremely important for the rational design of new materials, a field where accurate but expensive first-principles calculations are generally not used. This research could impact the community in the fields of crystal engineering, supramolecular chemistry and, of course, electron density analysis.
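As a caricature of the building-block idea discussed above, the sketch below sums hypothetical functional-group polarizability volumes for one cell and converts the result into a linear electric susceptibility through the standard Clausius-Mossotti (Lorentz local-field) relation. The group values and cell volume are invented for illustration, and the scalar relation ignores anisotropy and the intermolecular effects that the thesis quantifies.

```python
import math

EPS0 = 8.8541878128e-12       # vacuum permittivity, F/m

def susceptibility(alpha_groups_A3, v_cell_A3):
    """Crude building-block estimate of the linear electric susceptibility:
    sum the functional-group polarizability volumes in one unit cell and
    apply the Clausius-Mossotti / Lorentz local-field relation."""
    # polarizability volume (Angstrom^3) -> SI polarizability (C m^2 / V)
    alpha_si = 4.0 * math.pi * EPS0 * 1e-30 * sum(alpha_groups_A3)
    n = 1.0 / (v_cell_A3 * 1e-30)          # one cell content per volume, m^-3
    x = n * alpha_si / (3.0 * EPS0)
    return 3.0 * x / (1.0 - x)             # chi = eps_r - 1

# Hypothetical group polarizability volumes for a glycine-like fragment
groups = {"NH3+": 1.8, "CH2": 1.8, "COO-": 2.3}   # Angstrom^3, illustrative
chi = susceptibility(groups.values(), v_cell_A3=310.0)
print(f"estimated susceptibility: {chi:.2f}")
```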

Relevance:

40.00%

Publisher:

Abstract:

The sparse differential resultant dres(P) of an overdetermined system P of generic nonhomogeneous ordinary differential polynomials was formally defined recently by Li, Gao and Yuan (2011). In this note, a differential resultant formula dfres(P) is defined and proved to be nonzero for linear "super essential" systems. In the linear case, dres(P) is proved to be equal, up to a nonzero constant, to dfres(P*) for the super essential subsystem P* of P.
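For orientation, the purely algebraic analogue of the linear case may help; this is the classical resultant, not the differential construction itself, which works with a suitably differentiated ("prolonged") system.

```latex
% Classical algebraic analogue: for n+1 generic affine linear polynomials
%   P_i = a_{i0} + a_{i1} x_1 + \dots + a_{in} x_n, \qquad i = 0, \dots, n,
% the resultant is the determinant of the full coefficient matrix,
\operatorname{res}(P_0, \dots, P_n) =
\det \begin{pmatrix}
a_{00} & a_{01} & \cdots & a_{0n} \\
a_{10} & a_{11} & \cdots & a_{1n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n0} & a_{n1} & \cdots & a_{nn}
\end{pmatrix},
% which vanishes exactly when the overdetermined system has a common
% solution; dfres(P) plays an analogous determinantal role for linear
% differential systems after sufficiently many prolongations.
```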

Relevance:

40.00%

Publisher:

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them.
Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% from the simulation-based reference values.
A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals in the system, introduces the noise sources for each group independently, and then combines the partial results. In this way, the number of noise sources in the system at any given time is kept under control and the combinatorial explosion is thereby minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible.
This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a given confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which in turn imply a considerably smaller number of samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems.
Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
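The statistics behind the incremental method can be sketched in a few lines: for a normally distributed estimator, the number of Monte-Carlo samples needed for a confidence interval of half-width eps grows with the square of the critical value z, so relaxing the confidence level in the early iterations cuts the per-evaluation simulation cost. The code below is a hedged illustration of this relation only; it is not HOPLITE's API.

```python
from scipy.stats import norm

def samples_needed(conf, sigma, eps):
    """Monte-Carlo sample size for a two-sided confidence interval of
    half-width eps on the mean, assuming a normal estimator spread sigma."""
    z = norm.ppf(0.5 + conf / 2.0)        # two-sided critical value
    return int((z * sigma / eps) ** 2) + 1

# Relaxed confidence in the exploratory stages -> far fewer samples per run
for conf in (0.99, 0.95, 0.80, 0.60):
    n = samples_needed(conf, sigma=1.0, eps=0.05)
    print(f"confidence {conf:.0%}: {n:>5} samples")
```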

Relevance:

40.00%

Publisher:

Abstract:

The theory and methods of linear algebra are a useful alternative to those of convex geometry in the framework of Voronoi cells and diagrams, which constitute basic tools of computational geometry. As shown by Voigt and Weis in 2010, the Voronoi cells of a given set of sites T, which provide a tessellation of the space called the Voronoi diagram when T is finite, are the solution sets of linear inequality systems indexed by T. This paper systematically exploits this fact in order to obtain geometrical information on Voronoi cells from sets associated with T (convex and conical hulls, tangent cones and the characteristic cones of their linear representations). The particular cases of T being a curve, a closed convex set and a discrete set are analyzed in detail. We also include conclusions on Voronoi diagrams of arbitrary sets.
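The linear-inequality representation of a Voronoi cell is easy to state concretely: x belongs to the cell of site s exactly when ||x - s||^2 <= ||x - t||^2 for every other site t, which expands to the linear inequalities 2(t - s)^T x <= ||t||^2 - ||s||^2. A minimal Python sketch for finitely many sites (the data are illustrative):

```python
import numpy as np

def voronoi_cell_system(sites, i):
    """Linear inequality system A x <= b whose solution set is the Voronoi
    cell of sites[i]: ||x - s||^2 <= ||x - t||^2 reduces to
    2 (t - s)^T x <= ||t||^2 - ||s||^2 for every other site t."""
    s = sites[i]
    A = np.array([2.0 * (t - s) for j, t in enumerate(sites) if j != i])
    b = np.array([t @ t - s @ s for j, t in enumerate(sites) if j != i])
    return A, b

sites = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
A, b = voronoi_cell_system(sites, 0)
x = np.array([0.4, 0.3])                      # a query point
print("in cell of site 0:", bool(np.all(A @ x <= b + 1e-12)))
```

For an infinite set of sites T the same construction yields a system indexed by T, which is exactly the setting the paper exploits.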

Relevance:

40.00%

Publisher:

Abstract:

In this paper we deal with parameterized linear inequality systems in the n-dimensional Euclidean space whose coefficients depend continuously on an index ranging over a compact Hausdorff space. The paper is developed in two different parametric settings: one in which only the right-hand side of the linear system is perturbed, and one in which both sides of the system can be perturbed. Appealing to the background on the calmness property, and exploiting the specifics of the present linear structure, we derive different characterizations of the calmness of the feasible set mapping, and provide an operative expression for the calmness modulus when confined to finite systems. The role played by the Abadie constraint qualification in relation to calmness is clarified and illustrated by different examples. We point out that this approach has the virtue of tackling the calmness property exclusively in terms of the system's data.
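For reference, calmness of the feasible set mapping can be stated as follows; this is the standard definition, written here in our own notation for the right-hand-side perturbation setting.

```latex
% Feasible set mapping under right-hand-side perturbations (notation ours):
%   \mathcal{F}(b) := \{x \in \mathbb{R}^n : a_t^\top x \le b_t,\ t \in T\}.
% \mathcal{F} is calm at (\bar b, \bar x) \in \operatorname{gph}\mathcal{F}
% if there exist \kappa \ge 0 and neighbourhoods U of \bar x and V of \bar b
% such that
d\bigl(x, \mathcal{F}(\bar b)\bigr) \;\le\; \kappa\, \|b - \bar b\|_\infty
\qquad \text{for all } b \in V \text{ and all } x \in \mathcal{F}(b) \cap U,
% and the calmness modulus is the infimum of all such constants \kappa.
```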

Relevance:

40.00%

Publisher:

Abstract:

Includes bibliographies (p. 27-29).

Relevance:

40.00%

Publisher:

Abstract:

"SFOSR 748."

Relevance:

40.00%

Publisher:

Abstract:

"EE 61-4. Research project PRF 30. Contract no. AF 29(600)-1933."