32 results for TOTAL ANALYSIS SYSTEMS
Abstract:
This paper addresses the issue of the practicality of global flow analysis in logic program compilation, in terms of speed of the analysis, precision, and usefulness of the information obtained. To this end, design and implementation aspects are discussed for two practical abstract interpretation-based flow analysis systems: MA3, the MCC And-parallel Analyzer and Annotator; and Ms, an experimental mode inference system developed for SB-Prolog. The paper also provides performance data obtained from these implementations and, as an example of an application, a study of the usefulness of the mode information obtained in reducing run-time checks in independent and-parallelism. Based on the results obtained, it is concluded that the overhead of global flow analysis is not prohibitive, while the results of analysis can be quite precise and useful.
Abstract:
This paper addresses the issue of the practicality of global flow analysis in logic program compilation, in terms of both speed and precision of analysis. It discusses design and implementation aspects of two practical abstract interpretation-based flow analysis systems: MA3, the MCC And-parallel Analyzer and Annotator; and Ms, an experimental mode inference system developed for SB-Prolog. The paper also provides performance data obtained from these implementations. Based on these results, it is concluded that the overhead of global flow analysis is not prohibitive, while the results of analysis can be quite precise and useful.
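Neither abstract spells out the analysis itself, but the core idea of abstract interpretation-based mode inference can be illustrated compactly. The following is a minimal, hypothetical sketch (not the MA3 or Ms implementation): a tiny {ground, free, any} mode domain propagated left to right through a clause body, with all predicate and variable names made up for illustration.

```python
# Minimal, hypothetical sketch of abstract-interpretation-style mode
# inference (NOT the MA3 or Ms algorithm): a tiny {ground, free, any}
# domain is propagated left to right through a clause body.

def lub(a, b):
    """Least upper bound on the three-element mode lattice."""
    return a if a == b else "any"

def abstract_call(goal, modes):
    """Apply the abstract success pattern of a built-in to the mode map."""
    pred, args = goal
    if pred == "is":                 # X is Expr: X ground iff Expr's vars are
        x, expr_vars = args
        modes[x] = ("ground" if all(modes.get(v) == "ground" for v in expr_vars)
                    else "any")
    elif pred == "=":                # unification joins the two operands' modes
        x, y = args
        modes[x] = modes[y] = lub(modes.get(x, "free"), modes.get(y, "free"))
    return modes

def analyze_clause(entry_modes, body):
    """Propagate entry modes through the body goals, left to right."""
    modes = dict(entry_modes)
    for goal in body:
        modes = abstract_call(goal, modes)
    return modes

# p(X, Y) :- Y is X + 1.   Called with X ground, Y free:
print(analyze_clause({"X": "ground", "Y": "free"}, [("is", ("Y", ["X"]))]))
# -> {'X': 'ground', 'Y': 'ground'}: Y is known ground on success, so
#    run-time groundness/independence checks on Y can be compiled away.
```

Groundness information of this kind is what allows an annotator to drop run-time independence checks when parallelizing goals, which is the application studied in the first abstract.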
Abstract:
Service-Oriented Computing (SOC) is a widely accepted paradigm for the development of flexible, distributed and adaptable software systems, in which service compositions perform more complex, higher-level, often cross-organizational tasks using atomic services or other service compositions.
In such systems, Quality of Service (QoS) properties, such as performance, cost, availability or security, are critical for the usability of services and their compositions in concrete applications. The analysis of these properties can be more precise and informative if it employs program analysis techniques, such as complexity and sharing analysis, which are able to take into account simultaneously both the control and data structures, dependencies, and operations in a composition. Computation cost analysis for service compositions can support predictive monitoring and proactive adaptation by automatically inferring computation cost as upper- and lower-bound functions of the value or size of input messages. These cost functions can be used for adaptation by selecting the service candidates that minimize the total cost of the composition, based on the actual data passed to them. The cost functions can also be combined with empirically collected infrastructure parameters to produce QoS bound functions over the input data, which can be used to predict, at the moment of invocation, potential or imminent Service Level Agreement (SLA) violations. In mission-critical applications, effective and accurate continuous QoS prediction can be achieved by constraint modeling of the composition QoS based on its structure, empirical run-time data, and (when available) the results of complexity analysis. This approach can be applied to service orchestrations with centralized flow control, as well as to choreographies with multiple participants engaging in complex stateful interactions. Sharing analysis can support adaptation actions, such as parallelization, fragmentation, and component selection, which are based on the functional dependencies and information content of the composition's messages, internal data, and activities, in the presence of complex control constructs, such as loops, branches, and sub-workflows. Both the functional dependencies and the information content (described using user-defined attributes) can be expressed using a first-order logic (Horn clause) representation, and the analysis results can be interpreted as lattice-based conceptual models.
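As a schematic illustration of the adaptation-by-selection idea (the service names and cost functions below are hypothetical placeholders, not results from the thesis), a composition engine could compare inferred upper-bound cost functions on the actual input size:

```python
# Hedged sketch: selecting the service candidate that minimizes the
# inferred upper-bound cost for the actual input. The candidate names
# and cost functions below are hypothetical placeholders.

candidates = {
    # service name -> upper-bound cost as a function of input message size n
    "svcA": lambda n: 5 * n * n,        # quadratic in input size
    "svcB": lambda n: 40 * n + 100,     # linear, with a higher constant
}

def select_candidate(input_size):
    """Pick the candidate with the lowest upper-bound cost for this input."""
    return min(candidates, key=lambda s: candidates[s](input_size))

print(select_candidate(3))    # small input  -> svcA (45 vs 220)
print(select_candidate(100))  # large input  -> svcB (4100 vs 50000)
```

The same bound functions, composed with infrastructure parameters, would feed the SLA-violation prediction described above.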
Abstract:
Optical filters are crucial elements in optical communication networks. Their effect on the optical signal can seriously degrade communication quality. In this paper we study and simulate the optical signal impairment and crosstalk penalty caused by different kinds of filters, including Butterworth, Bessel, Fiber Bragg Grating (FBG) and Fabry-Perot (F-P). Signal impairment from the filter concatenation effect, and crosstalk penalty from out-of-band and in-band crosstalk, are analyzed in terms of Q-penalty, eye opening penalty (EOP) and optical spectrum. The simulation results show that the signal impairment and crosstalk penalty induced by the Butterworth filter are the lowest among these four types of filters. Regarding the filter concatenation effect, when the center frequency of all filters is aligned perfectly with the laser's frequency, 12 50-GHz Butterworth filters can be cascaded with 1-dB EOP; this number is reduced to 9 when the center frequency is misaligned by 5 GHz. In 50-GHz channel spacing DWDM networks, the total Q-penalty induced by a Butterworth-filter-based demultiplexer and multiplexer pair is lower than 0.5 dB when the filter bandwidth is in the range of 42-46 GHz.
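The concatenation effect can be reproduced with a back-of-the-envelope model (the parameters below are illustrative, not the paper's simulation setup): the amplitude responses of cascaded filters multiply, so the effective passband narrows with each added stage.

```python
import numpy as np

# Illustrative sketch of filter concatenation: k identical Butterworth-shaped
# filters in series. Order and bandwidth are example values, not the paper's.
order, B = 3, 25.0                 # filter order; half-bandwidth in GHz (50-GHz filter)
f = np.linspace(0.0, 40.0, 4001)   # frequency offset from filter center, GHz

def cascade_3db_bandwidth(k):
    """3-dB bandwidth (full width, GHz) of k cascaded identical filters."""
    H = 1.0 / np.sqrt(1.0 + (f / B) ** (2 * order))   # single-stage magnitude
    Hk = H ** k                                        # responses multiply in cascade
    idx = np.argmax(Hk < 10 ** (-3 / 20))              # first point below -3 dB
    return 2 * f[idx]

for k in (1, 4, 12):
    print(k, round(cascade_3db_bandwidth(k), 1))
# 1 -> 50.0, 4 -> ~37.9, 12 -> ~31.2: the passband shrinks as k grows,
# which is why only a limited number of filters can be cascaded before
# the eye-opening penalty becomes unacceptable.
```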
Abstract:
This article analyses the long-term performance of collective off-grid photovoltaic (PV) systems in rural areas. The use of collective PV systems for the electrification of small and medium-size villages in developing countries has increased in recent years. They are basically set up as stand-alone installations (diesel hybrid or pure PV) with no connection to other electrical grids. Their isolated conditions and usual installation places, far from commercial and industrial centers, require an autonomous and reliable technology. Different but related factors affect their performance and energy supply; some of them are strictly technical, but others depend on external issues such as the solar energy resource and the users' energy and power consumption. The work presented is based on the field operation of twelve collective PV installations supplying electricity to off-grid villages located in the province of Jujuy, Argentina. Five of them have PV generators as their only power source, while the other seven include the support of diesel generator sets. Load demand evolution, energy productivity and fuel consumption are analyzed, and energy generation strategies (PV/diesel) are also discussed.
Abstract:
The main objective of this paper is to review the state of the art of residential PV systems in France and Belgium, by analyzing the operational data of 10650 PV systems (9657 located in France and 993 in Belgium). Three main questions are posed: How much energy do they produce? What level of performance is associated with their production? Which key parameters most influence their quality? During the year 2010, the PV systems produced a mean annual energy of 1163 kWh/kWp in France and 852 kWh/kWp in Belgium. As a whole, the orientation of the PV generators causes energy production to be some 7% lower than that of optimally oriented PV systems. The mean Performance Ratio is 76% in France and 78% in Belgium, and the mean Performance Index is 85% in both countries. On average, the real power of the PV modules falls 4.9% below the nominal power announced on the manufacturer's datasheet. A brief analysis by PV module technology has led to relevant observations about two technologies in particular. On the one hand, the PV systems equipped with Heterojunction with Intrinsic Thin layer (HIT) modules show performances higher than average. On the other hand, the systems equipped with Copper Indium (di)Selenide (CIS) modules show a real power that is 16% lower than the nominal value.
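For reference, the Performance Ratio mentioned above is conventionally computed as the final yield divided by the reference yield (IEC 61724 style). A minimal sketch follows; the in-plane irradiation figure is a hypothetical example chosen for illustration, not survey data.

```python
# Sketch of the standard Performance Ratio computation (IEC 61724 style).
# The irradiation figure below is a hypothetical example, not survey data.

G_STC = 1.0  # kW/m2, irradiance at Standard Test Conditions

def performance_ratio(energy_kwh, p_nom_kwp, in_plane_irradiation_kwh_m2):
    """PR = final yield (kWh/kWp) over reference yield (hours of G_STC)."""
    final_yield = energy_kwh / p_nom_kwp
    reference_yield = in_plane_irradiation_kwh_m2 / G_STC
    return final_yield / reference_yield

# A 3 kWp system at the French mean yield (1163 kWh/kWp -> 3489 kWh),
# assuming ~1530 kWh/m2 of in-plane irradiation over the year:
print(round(performance_ratio(3489, 3.0, 1530), 2))   # -> 0.76
```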
Abstract:
When an automobile passes over a bridge, dynamic effects are produced in both the vehicle and the structure. In addition, the bridge itself moves when exposed to wind, inducing dynamic effects on the vehicle that have to be considered. The main objective of this work is to understand the influence of the different parameters concerning the vehicle, the bridge, the road roughness and the wind on the comfort and safety of vehicles crossing bridges. Nonlinear finite element models are used for the structures, and multibody dynamic models are employed for the vehicles. The interaction between the vehicle and the bridge is handled with contact methods. Road roughness is described by the power spectral density (PSD) proposed in ISO 8608. To account for the fact that the profiles under the right and left wheels are different but not independent, the hypotheses of homogeneity and isotropy are assumed. The wind velocity history along the road is generated with the Sandia method. The global problem is solved by means of the finite element method. First, the methodology for modelling the interaction is verified against a benchmark. Next, the case of a vehicle running along a rigid road and subjected to the action of turbulent wind is analyzed, and road roughness is incorporated in a subsequent step. Finally, the flexibility of the bridge is added to the model by making the vehicle run over the structure. The application of this methodology makes it possible to understand the influence of the different parameters on the comfort and safety of road vehicles crossing wind-exposed bridges. These results will help to recommend measures to make traffic over bridges more reliable without affecting the structural integrity of the viaduct.
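The ISO 8608 description models road elevation through a displacement PSD of the form Gd(n) = Gd(n0) * (n/n0)^(-w), from which a sample profile can be synthesized as a sum of cosines with random phases. Below is a minimal sketch under illustrative parameter choices; the correlation between left and right wheel tracks implied by the homogeneity and isotropy hypotheses would require an additional coherence model, omitted here.

```python
import numpy as np

# Minimal sketch of road-profile synthesis from the ISO 8608 displacement
# PSD, Gd(n) = Gd(n0) * (n / n0)**(-w), via a sum of cosines with random
# phases. Parameter values are illustrative; left/right track correlation
# is not modelled here.
rng = np.random.default_rng(0)

n0, w = 0.1, 2.0          # reference spatial frequency (cycles/m), waviness
Gd_n0 = 16e-6             # m^3: illustrative, on the order of a good road class
L, dx = 500.0, 0.1        # road length (m), sampling step (m)

x = np.arange(0.0, L, dx)
n = np.arange(1, 1024) / L                 # spatial frequencies (cycles/m)
dn = 1.0 / L
Gd = Gd_n0 * (n / n0) ** (-w)              # one-sided displacement PSD
amp = np.sqrt(2.0 * Gd * dn)               # cosine amplitude per component
phi = rng.uniform(0.0, 2.0 * np.pi, n.size)

# elevation profile z(x) = sum_i amp_i * cos(2*pi*n_i*x + phi_i)
z = (amp * np.cos(2.0 * np.pi * np.outer(x, n) + phi)).sum(axis=1)
print(f"RMS elevation: {z.std():.4f} m")
```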
Abstract:
This paper presents an analysis of the fault tolerance achieved by an autonomous, fully embedded evolvable hardware system, which uses a combination of partial dynamic reconfiguration and an evolutionary algorithm (EA). It demonstrates that the system may self-recover from both transient and cumulative permanent faults. This self-adaptive system, based on a 2D array of 16 (4×4) Processing Elements (PEs), is tested with an image filtering application. Results show that it may properly recover from faults in up to 3 PEs, that is, more than 18% cumulative permanent faults. Two fault models are used for testing purposes, at the PE and CLB levels. Two self-healing strategies are also introduced, depending on whether fault diagnosis is available or not. They are based on scrubbing, fitness evaluation, dynamic partial reconfiguration and in-system evolutionary adaptation. Since most of these adaptability features are already available on the system for its normal operation, the resource cost of self-healing is very low (only some code additions in the internal microprocessor core).
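The self-healing loop can be caricatured with a toy (1+1) evolutionary algorithm: after a permanent fault, repeated mutation and fitness evaluation restore as much functionality as the remaining resources allow. This is purely illustrative of the EA principle, not the actual PE-array system.

```python
import random

# Toy sketch of EA-based self-healing, purely illustrative of the principle
# (the real system evolves a 4x4 array of processing elements on an FPGA).
# Here, outputs equal configuration bits, except faulty positions stuck at 0.
random.seed(1)

TARGET = [1, 0, 1, 1, 0, 1, 0, 0] * 4          # desired behaviour (32 outputs)
FAULTY = {3, 17}                               # stuck-at-0 output positions

def fitness(cfg):
    """Number of outputs matching TARGET, given the stuck-at-0 faults."""
    return sum((0 if i in FAULTY else g) == t
               for i, (g, t) in enumerate(zip(cfg, TARGET)))

def self_heal(generations=2000, p_mut=0.05):
    """(1+1) EA: mutate; keep the child if it is no worse (neutral drift)."""
    best = [random.randint(0, 1) for _ in TARGET]
    best_f = fitness(best)
    for _ in range(generations):
        child = [g ^ (random.random() < p_mut) for g in best]
        f = fitness(child)
        if f >= best_f:
            best, best_f = child, f
    return best_f

print(self_heal(), "/", len(TARGET))
# Typically converges to 31/32: every output is repaired except the
# stuck-at-0 position whose target value is 1, which no configuration fixes.
```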
Abstract:
This article proposes an agent-oriented methodology called MAS-CommonKADS and develops a case study. This methodology extends the knowledge engineering methodology CommonKADS with techniques from object-oriented and protocol engineering methodologies. The methodology consists of the development of seven models: the Agent Model, which describes the characteristics of each agent; the Task Model, which describes the tasks that the agents carry out; the Expertise Model, which describes the knowledge needed by the agents to achieve their goals; the Organisation Model, which describes the structural relationships between agents (software agents and/or human agents); the Coordination Model, which describes the dynamic relationships between software agents; the Communication Model, which describes the dynamic relationships between human agents and their respective personal assistant software agents; and the Design Model, which refines the previous models and determines the most suitable agent architecture for each agent, as well as the requirements of the agent network.
Abstract:
The aim of this study was to evaluate the sustainability of farm irrigation systems in the Cébalat district in northern Tunisia. It addressed the challenging topic of sustainable agriculture through a bio-economic approach linking a biophysical model to an economic optimisation model. A crop growth simulation model (CropSyst) was used to build a database to determine the relationships between agricultural practices, crop yields and environmental effects (salt accumulation in the soil and leaching of nitrates) in a context of high climatic variability. The database was then fed into a recursive stochastic model set for a 10-year plan, which allowed analysing the effects of cropping patterns on farm income, salt accumulation and nitrate leaching. We assumed that the long-term sustainability of soil productivity might be in conflict with short-term farm profitability. Assuming a discount rate of 10% (the base scenario), the model closely reproduced the current system and predicted the degradation of soil quality due to long-term salt accumulation. The results showed that there was more salt accumulation in the soil for the base scenario than for the alternative scenario (discount rate of 0%), because a higher quantity of water per hectare was applied in the alternative scenario than in the base scenario. The results also showed that nitrogen leaching is very low for both discount rates and all climate scenarios. In conclusion, the results show that the difference in farm income between the alternative and base scenarios increases over time, reaching 45% after 10 years.
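The role of the discount rate can be seen in a toy comparison (all figures below are hypothetical, not from the study): discounting at 10% favours income that is high now but declines as salt accumulates, while a 0% rate favours the stable, soil-preserving alternative.

```python
# Toy illustration of how the discount rate trades short-term profit
# against long-term soil degradation; all figures are hypothetical.

def npv(income_stream, rate):
    """Net present value of a yearly income stream at the given discount rate."""
    return sum(inc / (1 + rate) ** t for t, inc in enumerate(income_stream))

# Strategy A: less leaching water, high income now, salinity erodes yields.
income_a = [120 - 10 * t for t in range(10)]
# Strategy B: more water applied to leach salts, lower but stable income.
income_b = [80] * 10

for rate in (0.10, 0.0):
    print(rate, round(npv(income_a, rate)), round(npv(income_b, rate)))
# 0.1 -> A (~559) beats B (~541); 0.0 -> B (800) beats A (750):
# discounting makes the soil-degrading strategy look more profitable.
```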
Abstract:
This study analyzes users' intentions regarding the use of LMS (Learning Management System) e-learning platforms, based on a model that integrates the Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB) and the Unified Theory of Acceptance and Use of Technology (UTAUT), with age as a moderating variable. The article thus studies the influence of behavioral intention, attitude toward use, perceived ease of use, perceived usefulness, subjective norm and social influence on the intention to use LMS e-learning systems. System and user characteristics are considered as antecedents of these influence factors. The result of the theoretical review is a unified model that has been validated with data collected from 94 students through an online questionnaire. These data were analyzed using the partial least squares technique, and the main results confirm the predictive relevance of the model for users aged between 26 and 35 and between 36 and 45.
Abstract:
This paper analyses and assesses three experiences with telematic and electronic voting, dealing with aspects such as security and the fulfilment of social requirements. These experiences were chosen taking into account the depth of their public documentation and the technological challenge they face.
Abstract:
Idea Management Systems are an implementation of the open innovation notion in the Web environment, making use of crowdsourcing techniques. In this area, one of the popular methods for coping with large amounts of data is duplicate detection. With our research, we answer the question of whether there is room to introduce more relationship types, and to what degree such a change would affect the amount of idea metadata and its diversity. Furthermore, based on hierarchical dependencies between idea relationships and relationship transitivity, we propose a number of methods for dataset summarization. To evaluate our hypotheses we annotate idea datasets with new relationships, using the contemporary methods of Idea Management Systems to detect idea similarity. Having datasets with relationship annotations at our disposal, we determine whether idea features not related to the idea topic (e.g. innovation size) have any relation to how annotators perceive types of idea similarity or dissimilarity.
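The similarity detection underlying such annotation can be sketched minimally (the thresholds and relationship names below are illustrative, not the systems' actual taxonomy): cosine similarity over bag-of-words vectors, thresholded into coarse relationship types.

```python
from collections import Counter
from math import sqrt

# Minimal sketch of text-similarity scoring for duplicate detection in
# Idea Management Systems: cosine similarity over bag-of-words vectors,
# thresholded into coarse relationship types. Thresholds are illustrative.

def cosine(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (sqrt(sum(c * c for c in va.values()))
            * sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def relationship(a: str, b: str) -> str:
    s = cosine(a, b)
    if s > 0.8:
        return "duplicate"
    if s > 0.4:
        return "related"          # a finer-grained type than plain duplicates
    return "unrelated"

i1 = "add dark mode to the mobile app"
i2 = "mobile app should support a dark mode theme"
print(relationship(i1, i2))        # -> related
```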
Abstract:
Critical infrastructures support everyday activities in modern societies, facilitating the exchange of services and quantities of various kinds. Their functioning is the result of the integration of diverse technologies, systems and organizations into a complex network of interconnections. The benefits of networking are accompanied by new threats and risks. In particular, because of the increased interdependency, disturbances and failures may propagate and render the whole infrastructure network unstable. This paper presents a methodology for the resilience analysis of networked systems of systems. Resilience generalizes the concept of the stability of a system around a state of equilibrium with respect to a disturbance, together with its ability to prevent, resist and recover. The methodology provides a tool for the analysis of off-equilibrium conditions that may occur in a single system and propagate through the network of dependencies. The analysis is conducted in two stages. The first stage is qualitative: it identifies the resilience scenarios, i.e. the sequences of events, triggered by an initial disturbance, which include failures and the system response. The second stage is quantitative: the most critical scenarios can be simulated, for the desired parameter settings, in order to check whether they are successfully handled, i.e. recovered to nominal conditions, or end in the failure of the network. The proposed methodology aims at providing effective support for resilience-informed design.
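The quantitative stage can be caricatured with a small simulation (the dependency graph, probabilities and recovery rule are illustrative placeholders, not the paper's model): a disturbance propagates along dependencies, and a run either returns to nominal conditions or ends in network failure.

```python
import random

# Toy sketch of the quantitative stage: simulate how an initial disturbance
# propagates through system dependencies and whether the network recovers.
# Graph, probabilities and recovery rule are illustrative placeholders.
random.seed(42)

DEPENDS_ON = {           # system -> systems it depends on
    "telecom": [],
    "power": ["telecom"],
    "water": ["power"],
    "transport": ["power", "telecom"],
}
P_PROPAGATE, P_RECOVER = 0.5, 0.3

def simulate(initial_failure, steps=20):
    failed = {initial_failure}
    for _ in range(steps):
        for sys, deps in DEPENDS_ON.items():
            if sys not in failed and any(d in failed for d in deps):
                if random.random() < P_PROPAGATE:   # disturbance propagates
                    failed.add(sys)
        failed -= {s for s in failed if random.random() < P_RECOVER}
        if not failed:
            return "recovered to nominal conditions"
    return f"off-equilibrium: {sorted(failed)} still failed"

print(simulate("telecom"))
```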