891 results for linear and nonlinear systems identification
Abstract:
Territory is seen today as an organized whole that cannot be understood by considering each of its component elements in isolation; each element is defined by its relationships with the others. A mode of thinking that integrates different disciplines and bodies of knowledge thus begins to engage with a reality that is far from offering immovable certainties, and begins to glimpse strategic horizons. Adapting to the nonlinearity of the relationships that unfold across the territory, and to the different speeds at which the various actors operate, requires us to make flexibility an essential feature of any strategic planning methodology. The multi-causality of the phenomena that structure the territory obliges us to construct qualitative criteria, accepting that it is impossible to measure these causal chains and reconstruct them completely over time, while still building a sound framework for action and transformation that responds to a true and verifiable reality. Phenomena produced across the territory never act in isolation, which entails a responsibility to understand the synergies and constraints that affect the outcomes of the processes they set in motion. This paper corresponds to the Second Phase of the strategic project-identification process of the Plan Estratégico Territorial (PET), begun in 2005. The Plan is carried out by the Subsecretaría de Planificación Territorial of the Ministerio de Planificación Federal and was approached with three aims: to institutionalize the exercise of strategic thinking, to strengthen a transdisciplinary and multisectoral working methodology, and to design a strongly qualitative system for weighting strategic infrastructure projects at both the provincial and national levels. The process resulted in a weighted portfolio of infrastructure projects, together with a methodology that consolidated the provincial planning teams, both in their relationship with political decision-makers and with actors from the many sectors of government, thereby consolidating and reinforcing a culture of strategic thinking about the territory.
Abstract:
Nowadays, computing platforms consist of a very large number of components that need to be supplied with different voltage levels and power requirements. Even a very small platform, like a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture that optimizes performance and meets electrical specifications as well as cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency and cost targets. The difficulty of selecting the right solution arises from the wide range of power conversion products provided by different manufacturers, ranging from discrete components (to build converters) to complete power conversion modules that employ different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built, and the designer has to select a limited number of converters in order to simplify the analysis. To overcome these difficulties, this thesis proposes a new design methodology for power supply systems. The methodology integrates evolutionary computation techniques to make it possible to analyze a large number of possibilities. This exhaustive analysis helps the designer to quickly define a set of feasible solutions and select the best performance trade-off for each application. The proposed approach consists of two key steps: one for the automatic generation of architectures and the other for the optimized selection of components. The thesis details the implementation of these two steps. The usefulness of the methodology is corroborated by contrasting results on real problems and on experiments designed to test the limits of the algorithms.
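The component-selection step lends itself to a compact illustration. Below is a minimal sketch, assuming a toy converter catalog and a simple weighted fitness, of the kind of evolutionary search the thesis integrates; the catalog entries, fitness weights and genetic-algorithm settings are illustrative assumptions, not values from the thesis.

```python
import random

# Hypothetical converter catalog: (name, efficiency, unit cost, area in cm^2).
CATALOG = [
    ("buck_A",   0.92, 1.10, 2.0),
    ("buck_B",   0.95, 1.80, 2.6),
    ("ldo_C",    0.80, 0.40, 0.8),
    ("module_D", 0.94, 3.50, 4.0),
]
N_LOADS = 8          # point-of-load converters to choose, one per load
POP, GENS = 40, 60   # population size and number of generations

def fitness(genome):
    """Higher is better: reward efficiency, penalize cost and size."""
    eff, cost, size = 1.0, 0.0, 0.0
    for gene in genome:
        _, e, c, a = CATALOG[gene]
        eff *= e
        cost += c
        size += a
    return eff - 0.05 * cost - 0.02 * size

def evolve():
    pop = [[random.randrange(len(CATALOG)) for _ in range(N_LOADS)]
           for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP // 2]               # truncation selection
        children = []
        while len(survivors) + len(children) < POP:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_LOADS)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:            # point mutation
                child[random.randrange(N_LOADS)] = random.randrange(len(CATALOG))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print([CATALOG[g][0] for g in best], round(fitness(best), 3))
```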
Abstract:
Runtime management of distributed information systems is a complex and costly activity. One of the main challenges that must be addressed is obtaining a complete and updated view of all the managed runtime resources. This article presents a monitoring architecture for heterogeneous and distributed information systems. It is composed of two elements: an information model and an agent infrastructure. The model tames the complexity and variability of these systems and enables abstraction over non-relevant details. The infrastructure uses this information model to monitor and manage the modeled environment, performing and detecting changes at runtime. The agent infrastructure is further detailed, and its components and the relationships between them are explained. Moreover, the proposal is validated through a set of agents that instrument the JEE Glassfish application server, paying special attention to the support of distributed configuration scenarios.
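As a rough illustration of the agent idea, the sketch below keeps a snapshot in the role of the information model and reports attribute changes on each poll; the Resource and MonitoringAgent names and the glassfish-1 attributes are hypothetical stand-ins, not the article's API.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """A managed runtime resource described by named attributes."""
    name: str
    attributes: dict = field(default_factory=dict)

class MonitoringAgent:
    def __init__(self, resources):
        # Initial snapshot plays the role of the information model.
        self.model = {r.name: dict(r.attributes) for r in resources}

    def poll(self, resources):
        """Detect attribute changes since the last snapshot."""
        changes = []
        for r in resources:
            old = self.model.get(r.name, {})
            for key, value in r.attributes.items():
                if old.get(key) != value:
                    changes.append((r.name, key, old.get(key), value))
            self.model[r.name] = dict(r.attributes)
        return changes

servers = [Resource("glassfish-1", {"state": "running", "heap_mb": 512})]
agent = MonitoringAgent(servers)
servers[0].attributes["heap_mb"] = 768
print(agent.poll(servers))  # [('glassfish-1', 'heap_mb', 512, 768)]
```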
Abstract:
Knowledge management is critical for the success of virtual communities, especially in the case of distributed working groups. A representative example of this scenario is distributed software development, where optimal coordination is necessary to avoid common problems such as duplicated work. This paper discusses the feasibility of using workflow technology as a knowledge management system and presents a practical use case: an information system deployed within a banking environment that combines common workflow technology with a new conception of the interaction among participants, through the extension of existing definition languages.
Abstract:
Public participation is increasingly advocated as a necessary feature of natural resources management. The EU Water Framework Directive (WFD) is such an example, as it prescribes participatory processes as necessary features in basin management plans (EC 2000). The rationale behind this mandate is that involving interest groups ideally yields higher-quality decisions, which are arguably more likely to meet public acceptance (Pahl-Wostl 2006). Furthermore, failing to involve stakeholders in policy-making might hamper the implementation of management initiatives, as controversial decisions can lead pressure lobbies to generate public opposition (Giordano et al. 2005; Mouratiadou and Moran 2007).
Abstract:
We present the design and implementation of the and-parallel component of ACE. ACE is a computational model for the full Prolog language that simultaneously exploits both or-parallelism and independent and-parallelism. A high-performance implementation of the ACE model has been realized, and its performance is reported in this paper. We discuss how some of the standard problems that appear when implementing and-parallel systems are solved in ACE. We then propose a number of optimizations aimed at reducing the overheads and the increased memory consumption that occur in such systems when using previously proposed solutions. Finally, we present results from an implementation of ACE that includes the proposed optimizations. The results show that ACE exploits and-parallelism with high efficiency and high speedups. Furthermore, they also show that the proposed optimizations, which are applicable to many other and-parallel systems, significantly decrease memory consumption and increase speedups and absolute performance, both in forward execution and during backtracking.
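For intuition only, the sketch below mimics independent and-parallelism in Python rather than Prolog: goals that share no variables are run concurrently and their results joined, which is where the reported speedups come from. This is a conceptual stand-in, not the ACE machinery.

```python
from concurrent.futures import ProcessPoolExecutor

def fib(n):
    """A stand-in for an independent Prolog goal with no shared variables."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def solve_conjunction(goal_args):
    """Run independent goals in parallel, as in fib(30,X) & fib(28,Y)."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(fib, goal_args))

if __name__ == "__main__":
    print(solve_conjunction([30, 28, 26]))
```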
Abstract:
We describe a simple, public-domain HTML package for LP/CLP systems. The package allows generating HTML documents, including HTML forms, easily from LP/CLP systems. It also provides facilities for parsing the input provided by HTML forms, as well as for creating standalone form handlers. The purpose of this document is to serve as a user's manual as well as a short description of the capabilities of the package. The package was originally developed for SICStus Prolog and the UPM &-Prolog/CIAO systems, but has been adapted to a number of popular LP/CLP systems. The document is also a WWW/HTML primer, containing sufficient information for developing medium-complexity WWW applications in Prolog and other LP and CLP languages.
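A rough Python analogue of the package's two halves, assuming hypothetical helper names: generate an HTML form from a list of fields, then parse the urlencoded input a standalone form handler would receive (the real package provides Prolog predicates for both).

```python
from urllib.parse import parse_qs

def html_form(action, fields):
    """Render a list of (name, label) pairs as a simple HTML form."""
    rows = "\n".join(
        f'  <label>{label}: <input name="{name}"></label><br>'
        for name, label in fields
    )
    return (f'<form method="post" action="{action}">\n'
            f'{rows}\n  <input type="submit">\n</form>')

print(html_form("/handler", [("user", "User"), ("query", "Query")]))

# A form handler would parse the submitted body like this:
print(parse_qs("user=alice&query=append%2F3"))  # {'user': ['alice'], 'query': ['append/3']}
```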
Abstract:
García et al. present a class of column generation (CG) algorithms for nonlinear programs. Its main motivation from a theoretical viewpoint is that, under some circumstances, finite convergence can be achieved, in much the same way as for the classic simplicial decomposition method; the main practical motivation is that within the class there are certain nonlinear column generation problems that can accelerate the convergence of a solution approach which generates a sequence of feasible points. This algorithm can, for example, accelerate simplicial decomposition schemes by making the subproblems nonlinear. This paper complements the theoretical study on the asymptotic and finite convergence of these methods given in [1] with an experimental study focused on their computational efficiency. Three types of numerical experiments are conducted. The first group of test problems is designed to study the parameters involved in these methods. The second group investigates the role and the computation of the prolongation of the generated columns to the relative boundary. The last one carries out a more complete investigation of the difference in computational efficiency between linear and nonlinear column generation approaches. For this investigation we consider two types of test problems: the first is the nonlinear, capacitated single-commodity network flow problem, of which several large-scale instances with varied degrees of nonlinearity and total capacity are constructed and investigated, and the second is a combined traffic assignment model.
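To make the linear baseline concrete, the sketch below runs a simplicial-decomposition loop on a toy convex program: an LP subproblem over the linearized objective generates columns (extreme points), and a restricted master optimizes over their convex hull. The problem data are illustrative; the paper's nonlinear variants replace the linear subproblem.

```python
import numpy as np
from scipy.optimize import linprog, minimize

A = np.array([[1.0, 2.0], [3.0, 1.0]])   # feasible set: A x <= b, x >= 0
b = np.array([4.0, 6.0])
f     = lambda x: (x[0] - 3) ** 2 + (x[1] - 2) ** 2     # convex objective
gradf = lambda x: np.array([2 * (x[0] - 3), 2 * (x[1] - 2)])

x = np.zeros(2)
columns = [x.copy()]
for _ in range(20):
    # Column generation: extreme point minimizing the linearized objective.
    lp = linprog(gradf(x), A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
    y = lp.x
    if gradf(x) @ (y - x) > -1e-9:        # no descent direction: optimal
        break
    columns.append(y)
    # Restricted master: optimize f over the convex hull of the columns.
    Y = np.array(columns)
    obj = lambda lam: f(lam @ Y)
    cons = {"type": "eq", "fun": lambda lam: lam.sum() - 1}
    res = minimize(obj, np.ones(len(Y)) / len(Y),
                   bounds=[(0, 1)] * len(Y), constraints=cons)
    x = res.x @ Y

print(np.round(x, 4), round(f(x), 6))
```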
Abstract:
In laser-plasma experiments, we observed that ion acceleration from the Coulomb explosion of the plasma channel bored by the laser is prevented when multiple plasma instabilities, such as filamentation and hosing, and nonlinear coherent structures (vortices/post-solitons) appear in the wake of an ultrashort laser pulse. Tailoring the longitudinal plasma density ramp allows us to control the onset of these instabilities. We deduced that, under our conditions, the laser pulse is depleted into these structures when a plasma at about 10% of the critical density exhibits a gradient on the order of 250 µm (Gaussian fit), thus hindering the acceleration. A promising experimental setup with a long pulse is demonstrated, enabling the excitation of an isolated coherent structure for polarimetric measurements and, in further perspective, parametric studies of ion plasma acceleration efficiency.
Abstract:
Sequential estimation of the success probability p in inverse binomial sampling is considered in this paper. For any estimator p̂, its quality is measured by the risk associated with normalized loss functions of linear-linear or inverse-linear form. These functions are possibly asymmetric, with arbitrary slope parameters a and b for p̂ < p and p̂ > p, respectively. Interest in these functions is motivated by their significance and potential uses, which are briefly discussed. Estimators are given for which the risk has an asymptotic value as p → 0, and which guarantee that, for any p ∈ (0,1), the risk is lower than its asymptotic value. This allows selecting the required number of successes, r, to meet a prescribed quality irrespective of the unknown p. In addition, the proposed estimators are shown to be approximately minimax when a/b does not deviate too much from 1, and asymptotically minimax as r → ∞ when a = b.
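A quick Monte-Carlo sketch of this setting: sample until r successes, estimate p, and average the normalized linear-linear loss. The Haldane estimator (r − 1)/(n − 1) used below is a standard choice for inverse binomial sampling, not necessarily the estimator proposed in the paper.

```python
import numpy as np

def risk(p, r, a=1.0, b=1.0, reps=200_000, seed=1):
    """Monte-Carlo risk under the normalized linear-linear loss."""
    rng = np.random.default_rng(seed)
    # Total trials n = r successes + failures before the r-th success.
    n = r + rng.negative_binomial(r, p, reps)
    p_hat = (r - 1) / (n - 1)              # Haldane estimator
    err = (p - p_hat) / p                  # normalized estimation error
    # Loss is a*(p - p_hat)/p when p_hat < p, and b*(p_hat - p)/p otherwise.
    loss = np.where(p_hat < p, a * err, -b * err)
    return loss.mean()

for p in (0.5, 0.1, 0.01):
    print(p, round(risk(p, r=10), 4))
```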
Abstract:
A review of existing studies on LCA of PV systems has been carried out. The data from this review have been completed with our own figures in order to calculate the Energy Payback Time (EPBT) of double-axis tracking, horizontal-axis tracking and fixed systems. The results of this metric span from 2 to 5 years for the latitude and global irradiation ranges of the geographical area comprised between −10° and 10° of longitude, and 30° and 45° of latitude. With the caution due to the uncertainty of the sources of information, these results mean that a grid-connected PV system (GCPVS) is able to pay back the energy required for its existence from 6 to 15 times during a life cycle of 30 years. When comparing tracking and fixed systems, the great importance of the PV generator makes it advisable to dedicate more energy to some components of the system in order to increase productivity and obtain a higher performance from the component with the highest energy requirement. Both double-axis and horizontal-axis trackers follow this path, requiring more energy in metallic structure, foundations and wiring, but this higher contribution is amply compensated by the improved productivity of the system.
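The arithmetic behind these metrics is straightforward: EPBT is embodied energy divided by annual yield, and the lifetime-to-EPBT ratio gives how many times the system returns its energy. The figures below are placeholders chosen to fall within the reported ranges, not the review's data.

```python
LIFETIME_YEARS = 30

def epbt(embodied_kwh_per_kwp, annual_yield_kwh_per_kwp):
    """Energy Payback Time in years."""
    return embodied_kwh_per_kwp / annual_yield_kwh_per_kwp

for name, embodied, yield_ in [
    ("fixed",           3500, 1400),   # kWh/kWp embodied, kWh/kWp/yr yield
    ("horizontal-axis", 4200, 1750),
    ("double-axis",     5000, 1950),
]:
    t = epbt(embodied, yield_)
    print(f"{name:15} EPBT = {t:.1f} yr; "
          f"energy returned {LIFETIME_YEARS / t:.1f}x over {LIFETIME_YEARS} yr")
```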
Abstract:
The solaR package allows for reproducible research on both photovoltaic (PV) system performance and solar radiation. It includes a set of classes, methods and functions to calculate sun geometry and the solar radiation incident on a photovoltaic generator, and to simulate the performance of several applications of photovoltaic energy. The package performs the whole calculation procedure, from daily or intradaily global horizontal irradiation to the final productivity of grid-connected PV systems and water-pumping PV systems. It is designed around a set of S4 classes whose core is a group of slots with multivariate time series. The classes share a variety of methods to access the information, along with several visualization methods. In addition, the package provides a tool for the visual statistical analysis of the performance of a large PV plant composed of several systems. Although solaR is primarily designed for time series associated with a location defined by its latitude/longitude values and its temperature and irradiation conditions, it can easily be combined with spatial packages for space-time analysis.
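As an illustration of the first link in that calculation chain, the sketch below computes sun geometry from latitude and day of year using the textbook Cooper declination formula; this is generic solar geometry written in Python, not solaR code.

```python
import math

def declination(day_of_year):
    """Solar declination in radians (Cooper 1969)."""
    return math.radians(23.45) * math.sin(2 * math.pi * (284 + day_of_year) / 365)

def sunset_hour_angle(lat_deg, day_of_year):
    """Sunset hour angle in radians for a horizontal surface."""
    lat = math.radians(lat_deg)
    d = declination(day_of_year)
    return math.acos(-math.tan(lat) * math.tan(d))

# Day length in Madrid (40.4 N) at the June solstice (~day 172):
ws = sunset_hour_angle(40.4, 172)
print(round(2 * math.degrees(ws) / 15, 2), "hours of daylight")  # ~14.9
```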
Abstract:
Dynamically reconfigurable systems are attracting growing interest, mainly due to the emergence of novel applications based on this technology. However, commercial tools do not provide enough flexibility to design such solutions while keeping an acceptable design productivity. In this paper, a novel design flow targeting dynamically reconfigurable systems is proposed. It is fully supported by a tool called Dreams, which is able to implement flexible systems starting from a set of netlists corresponding to the modules, together with a system description provided by the user. The tool automatically post-processes the netlists, implementing a solution for the communications between reconfigurable regions and handling routing conflicts by means of a custom router. Since the design process of every module and of the static system are independent, the proposed flow is compatible with system upgrades at run-time. A use case corresponding to the design of a highly regular, parallel, mesh-type architecture is described in order to show the architectural flexibility offered by the tool.