971 results for Control-flow


Relevance: 60.00%

Abstract:

The verification and validation activity plays a fundamental role in improving software quality. Determining which techniques are most effective for carrying out this activity has been an aspiration of experimental software engineering researchers for years. This paper reports a controlled experiment evaluating the effectiveness of two unit testing techniques: the functional testing technique known as equivalence partitioning (EP) and the control-flow structural testing technique known as branch testing (BT). This experiment is a literal replication of Juristo et al. (2013). Both experiments serve the purpose of determining whether the effectiveness of BT and EP varies depending on whether or not the faults are visible to the technique (InScope or OutScope, respectively). We have used the materials, design and procedures of the original experiment, but in order to adapt the experiment to the context we have: (1) reduced the number of studied techniques from 3 to 2; (2) assigned subjects to experimental groups by means of stratified randomization to balance the influence of programming experience; (3) localized the experimental materials; and (4) adapted the training duration. We ran the replication at the Escuela Politécnica del Ejército Sede Latacunga (ESPEL) as part of a software verification & validation course. The experimental subjects were 23 master's degree students. EP is more effective than BT at detecting InScope faults. The session/program and group variables are found to have significant effects. BT is more effective than EP at detecting OutScope faults. The session/program and group variables have no effect in this case. The results of the replication and the original experiment are similar with respect to testing techniques. There are some inconsistencies with respect to the group factor, which can be explained by small sample effects. The results for the session/program factor are inconsistent for InScope faults.
We believe that these differences are due to a combination of the fatigue effect and a technique x program interaction. Although we were able to reproduce the main effects, the changes to the design of the original experiment make it impossible to identify the causes of the discrepancies with certainty. We believe that further replications closely resembling the original experiment should be conducted to improve our understanding of the phenomena under study.
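To make the contrast between the two techniques concrete, here is a minimal sketch in which the function under test, the partitions and the branch targets are all invented for illustration; they are not taken from the experiment's materials. EP derives one test per input class, while BT derives tests so that every branch outcome is exercised.

```python
def grade(score):
    """Toy function under test: classify an exam score."""
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    if score >= 60:
        return "pass"
    return "fail"

# Equivalence partitioning (EP): one representative per input class
# (invalid-low, failing, passing, invalid-high).
ep_cases = [(-5, ValueError), (30, "fail"), (75, "pass"), (120, ValueError)]

# Branch testing (BT): inputs chosen so every branch outcome is exercised,
# including both disjuncts of the range check and the 60-point boundary.
bt_cases = [(-1, ValueError), (101, ValueError), (60, "pass"), (59, "fail")]

def run(cases):
    """Return True when every case behaves as expected."""
    for inp, expected in cases:
        try:
            if grade(inp) != expected:
                return False
        except ValueError:
            if expected is not ValueError:
                return False
    return True
```

Note how the two suites overlap but are derived from different sources of information: EP looks only at the specification's input classes, BT at the code's decision points.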

Relevance: 60.00%

Abstract:

Concurrent data types are concurrent implementations of classical data abstractions, specifically designed to exploit the great deal of parallelism available in modern multiprocessor and multi-core architectures. The correct manipulation of concurrent data types is essential for the overall correctness of the software systems built using them. A major difficulty in designing and verifying concurrent data types arises from the need to reason about any number of threads invoking the data type simultaneously, which requires considering parametrized systems. In this work we study the formal verification of temporal properties of parametrized concurrent systems, with a special focus on programs that manipulate concurrent data structures. The main difficulty in reasoning about concurrent parametrized systems comes from the combination of their inherently high concurrency and the manipulation of dynamic memory. This parametrized verification problem is very challenging, because it requires reasoning about complex concurrent data structures being accessed and modified by threads which simultaneously manipulate the heap using unstructured synchronization methods. In this work we present a formal framework based on deductive methods which is capable of dealing with the verification of safety and liveness properties of concurrent parametrized systems that manipulate complex data structures. Our framework includes special proof rules and techniques adapted for parametrized systems, which work in collaboration with specialized decision procedures for complex data structures. A novel aspect of our framework is that it cleanly separates the analysis of the program control flow from the analysis of the data being manipulated.
The program control flow is analyzed using deductive proof rules and verification techniques specifically designed for coping with parametrized systems. Starting from a concurrent program and a temporal specification, our techniques generate a finite collection of verification conditions whose validity entails the satisfaction of the temporal specification by any client system, regardless of the number of threads. The verification conditions correspond to the data manipulation. We study the design of specialized decision procedures to deal with these verification conditions fully automatically. We investigate decidable theories capable of describing rich properties of complex pointer-based data types such as stacks, queues, lists and skiplists. For each of these theories we present a decision procedure and its practical implementation on top of existing SMT solvers. These decision procedures are ultimately used to automatically verify the verification conditions generated by our specialized parametrized verification techniques. Finally, we show how, using our framework, it is possible to prove not only safety but also liveness properties of concurrent versions of some mutual exclusion protocols and programs that manipulate concurrent data structures. The approach we present in this work is very general and can be applied to verify a wide range of similar concurrent data types.
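The key idea that a finite set of verification conditions can cover any number of threads can be illustrated with a toy sketch. The protocol, its invariant, and the brute-force discharge below are all invented for illustration (a real framework would discharge such conditions with an SMT solver, and would also check the symmetric condition for the second process):

```python
from itertools import product

STATES = ["idle", "waiting", "critical"]

def step(s, others_in_critical):
    # Transition relation of a single process: it may enter the critical
    # section only when no other process is already inside.
    if s == "idle":
        return ["idle", "waiting"]
    if s == "waiting":
        return ["waiting"] + ([] if others_in_critical else ["critical"])
    return ["idle"]  # leaving the critical section

def invariant(s1, s2):
    # Candidate invariant over an arbitrary pair of processes: mutual exclusion.
    return not (s1 == "critical" and s2 == "critical")

# A finite verification condition: stepping the first process of any pair
# that satisfies the invariant cannot violate it. Because the condition
# quantifies over a symbolic pair, its validity covers systems of any size.
vc_holds = all(
    invariant(s1_next, s2)
    for s1, s2 in product(STATES, repeat=2)
    if invariant(s1, s2)
    for s1_next in step(s1, others_in_critical=(s2 == "critical"))
)
```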

Relevance: 60.00%

Abstract:

Software-based techniques offer several advantages for increasing the reliability of processor-based systems at very low cost, but they cause performance degradation and an increase in code size. To meet performance and memory constraints, we propose SETA, a new control-flow software-only technique that uses assertions to detect errors affecting the program flow. SETA is an independent technique, but it was conceived to work together with previously proposed data-flow techniques that aim at reducing performance and memory overheads. Thus, SETA is combined with such data-flow techniques and submitted to a fault injection campaign. Simulation and neutron-induced single-event-effect (SEE) tests show high fault coverage with performance and memory overheads lower than the state of the art.
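The general idea behind assertion-based control-flow checking can be sketched as follows. This is not the SETA implementation; the control-flow graph, the checker class and the fault scenario are invented, and a real technique would operate on compiled basic blocks with signature registers rather than Python objects:

```python
# Allowed edges of a hypothetical control-flow graph (compile-time knowledge).
VALID_EDGES = {("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")}

class ControlFlowChecker:
    """Runtime checker: an assertion at the top of each basic block verifies
    that the transition just taken is a legal CFG edge."""
    def __init__(self, entry):
        self.current = entry
        self.errors = 0

    def transition(self, nxt):
        if (self.current, nxt) not in VALID_EDGES:
            self.errors += 1  # control-flow error detected
        self.current = nxt

# Legal path A -> B -> D: no assertion fires.
cfc = ControlFlowChecker("A")
for block in ["B", "D"]:
    cfc.transition(block)
legal_errors = cfc.errors

# Illegal jump A -> D (e.g. a bit flip corrupting the program counter).
cfc2 = ControlFlowChecker("A")
cfc2.transition("D")
faulty_errors = cfc2.errors
```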

Relevance: 60.00%

Abstract:

Interconnecting business processes across systems and organisations is considered to provide significant benefits, such as greater process transparency, higher degrees of integration, facilitation of communication, and consequently higher throughput in a given time interval. Achieving these benefits, however, requires tackling constraints: in the context of this paper, the privacy requirements of the involved workflows and their mutual dependencies. Workflow views are a promising conceptual approach to addressing the issue of privacy; however, this approach requires addressing the interdependencies between a workflow view and the adjacent private workflow. In this paper we focus on three aspects concerning support for the execution of cross-organisational workflows that have been modelled with a workflow view approach: (i) communication between the entities of a view-based workflow model, (ii) their impact on an extended workflow engine, and (iii) the design of a cross-organisational workflow architecture (CWA). We consider communication aspects in terms of state dependencies and control-flow dependencies. We propose to tightly couple a private workflow and its workflow view with state dependencies, whilst loosely coupling workflow views with control-flow dependencies. We introduce a Petri-Net-based state transition approach that binds the states of private workflow tasks to their adjacent workflow view task. On the basis of these communication aspects we develop a CWA for view-based cross-organisational workflow execution. Its concepts are valid for mediated and unmediated interactions and do not prescribe a particular technology. The concepts are demonstrated by a scenario run by two extended workflow management systems. (C) 2004 Elsevier B.V. All rights reserved.
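The tight state coupling between a view task and its adjacent private tasks can be sketched roughly as below. The state names, task names and propagation rules are invented for illustration and are far simpler than a Petri-Net-based transition system:

```python
class ViewTask:
    """A workflow-view task whose state is derived from the states of the
    private tasks bound to it: 'running' once any bound task runs, and
    'completed' only when all bound tasks have completed."""
    def __init__(self, private_tasks):
        self.private = {t: "initial" for t in private_tasks}
        self.state = "initial"

    def notify(self, task, state):
        # State-dependency propagation from the private workflow to the view.
        self.private[task] = state
        if any(s == "running" for s in self.private.values()):
            self.state = "running"
        if all(s == "completed" for s in self.private.values()):
            self.state = "completed"

view = ViewTask(["check_stock", "reserve_items"])
view.notify("check_stock", "running")
running_state = view.state            # view now reports 'running'
view.notify("check_stock", "completed")
view.notify("reserve_items", "completed")
```

The partner organisation only ever observes `view.state`, never the private task states, which is the privacy property the view approach is after.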

Relevance: 60.00%

Abstract:

We provide an abstract command language for real-time programs and outline how a partial correctness semantics can be used to compute execution times. The notions of a timed command, refinement of a timed command, the command traversal condition, and the worst-case and best-case execution time of a command are formally introduced and investigated with the help of an underlying weakest liberal precondition semantics. The central result is a theory for the computation of worst-case and best-case execution times from the underlying semantics based on supremum and infimum calculations. The framework is applied to the analysis of a message transmitter program and its implementation. (c) 2005 Elsevier B.V. All rights reserved.
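The structural supremum/infimum computation can be sketched over a toy command representation. The tuple encoding, the command constructors and the timings below are invented, not the paper's abstract command language: sequencing adds execution times, while choice takes the supremum for the worst case and the infimum for the best case.

```python
def wcet(cmd):
    """Worst-case execution time, computed structurally."""
    kind = cmd[0]
    if kind == "basic":               # ("basic", time)
        return cmd[1]
    if kind == "seq":                 # ("seq", c1, c2)
        return wcet(cmd[1]) + wcet(cmd[2])
    if kind == "choice":              # ("choice", c1, c2)
        return max(wcet(cmd[1]), wcet(cmd[2]))  # supremum over branches
    raise ValueError(kind)

def bcet(cmd):
    """Best-case execution time, computed structurally."""
    kind = cmd[0]
    if kind == "basic":
        return cmd[1]
    if kind == "seq":
        return bcet(cmd[1]) + bcet(cmd[2])
    if kind == "choice":
        return min(bcet(cmd[1]), bcet(cmd[2]))  # infimum over branches
    raise ValueError(kind)

# A basic command of 2 time units followed by a two-way branch (5 vs 1 units).
prog = ("seq", ("basic", 2), ("choice", ("basic", 5), ("basic", 1)))
```

For `prog`, the worst case is 2 + 5 = 7 and the best case is 2 + 1 = 3; the real framework derives the traversal conditions semantically rather than from syntax alone.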

Relevance: 60.00%

Abstract:

Software product line modeling aims at capturing a set of software products in an economic yet meaningful way. We introduce a class of variability models that capture the sharing between the software artifacts forming the products of a software product line (SPL) in a hierarchical fashion, in terms of commonalities and orthogonalities. Such models are useful when analyzing and verifying all products of an SPL, since they provide a scheme for divide-and-conquer-style decomposition of the analysis or verification problem at hand. We define an abstract class of SPLs for which variability models can be constructed that are optimal w.r.t. the chosen representation of sharing. We show how the constructed models can be fed into a previously developed algorithmic technique for compositional verification of control-flow temporal safety properties, so that the properties to be verified are iteratively decomposed into simpler ones over orthogonal parts of the SPL, and are not re-verified over the shared parts. We provide tool support for our technique, and evaluate our tool on a small but realistic SPL of cash desks.
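The divide-and-conquer effect of such a variability model can be sketched as follows. The artifact names, the flat shared/variant split and the stand-in `verify` function are invented; the actual technique decomposes temporal safety properties compositionally rather than counting calls:

```python
verifications = []

def verify(artifact):
    """Stand-in for a real (expensive) verification of one artifact."""
    verifications.append(artifact)
    return True

def verify_spl(shared, variants):
    """Verify the shared artifacts once, then only the orthogonal
    variant-specific parts per product."""
    ok = all(verify(a) for a in shared)       # never re-verified
    for variant in variants:
        for a in variant:
            ok = ok and verify(a)             # per-variant work only
    return ok

ok = verify_spl(shared=["payment", "logging"],
                variants=[["loyalty_card"], ["express_checkout"]])
```

Although both products contain `payment`, it is verified exactly once; without the sharing structure, the naive product-by-product approach would verify it once per product.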

Relevance: 60.00%

Abstract:

Modern software systems are large, complex and critical. The need for quality demands a great deal of testing, which consumes large amounts of resources during the development and maintenance of these systems. Various techniques make it possible to reduce the costs associated with testing activities. Our work falls within this context and aims to direct the testing effort toward the software components most at risk, using certain source-code attributes. Through several empirical studies conducted on large open-source systems developed with object-oriented technology, we identified and studied the metrics that characterize unit-testing effort from several angles. We also studied the links between this testing effort and class-level software metrics, including quality indicators. Quality indicators are a synthetic metric, introduced in our previous work, that captures control flow as well as various characteristics of the software. We explored several techniques for directing the testing effort toward at-risk components based on these source-code attributes, using machine learning algorithms. By grouping software metrics into families, we proposed an approach based on risk analysis of software classes. The results we obtained show the links between unit-testing effort and source-code attributes, including quality indicators, and suggest the possibility of directing the testing effort using metrics.
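A minimal sketch of metric-based test-effort targeting is shown below. The class names, metric values, weights and score formula are all invented; in the approach described above, the weights would be learned by a machine learning algorithm rather than fixed by hand:

```python
# Hypothetical per-class source-code metrics (lines of code, cyclomatic
# complexity, coupling between objects).
classes = {
    "OrderManager":   {"loc": 820, "cyclomatic": 45, "coupling": 12},
    "StringUtils":    {"loc": 150, "cyclomatic": 8,  "coupling": 2},
    "PaymentGateway": {"loc": 610, "cyclomatic": 30, "coupling": 9},
}

def risk_score(m):
    """Toy weighted risk score over the metric families."""
    return 0.4 * m["cyclomatic"] + 0.3 * (m["loc"] / 100) + 0.3 * m["coupling"]

# Direct unit-testing effort at the riskiest classes first.
ranked = sorted(classes, key=lambda c: risk_score(classes[c]), reverse=True)
```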

Relevance: 60.00%

Abstract:

Embedded systems are increasingly integral to daily life, improving and facilitating the efficiency of modern Cyber-Physical Systems, which provide access to sensor data and actuators. As modern architectures become increasingly complex and heterogeneous, their optimization becomes a challenging task. Additionally, ensuring platform security is important to avoid harm to individuals and assets. This study primarily addresses challenges in contemporary Embedded Systems, focusing on platform optimization and security enforcement. The initial section of this study delves into the application of machine learning methods to efficiently determine the optimal number of cores for a parallel RISC-V cluster to minimize energy consumption using static source code analysis. Results demonstrate that automated platform configuration is viable, with a moderate performance trade-off when relying solely on static features. The second part addresses the problem of heterogeneous device mapping, which involves assigning tasks to the most suitable computational device in a heterogeneous platform for optimal runtime. The contribution of this section lies in the introduction of novel pre-processing techniques, along with a training framework based on Siamese Networks, that enhances the classification performance of DeepLLVM, an advanced approach for task mapping. Importantly, these proposed approaches are independent of the specific deep-learning model used. Finally, this research work addresses issues concerning the binary exploitation of software running on modern Embedded Systems. It proposes an architecture to implement Control-Flow Integrity in embedded platforms with a Root-of-Trust, aiming to enhance security guarantees with limited hardware modifications.
The approach involves enhancing the architecture of a modern RISC-V platform for autonomous vehicles by implementing a side-channel communication mechanism that relays control-flow changes executed by the process running on the host core to the Root-of-Trust. This approach has limited impact on performance and is effective in enhancing the security of embedded platforms.
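One common way a Root-of-Trust can validate relayed control-flow events is a shadow stack of return addresses; the sketch below illustrates that idea only conceptually. The class, the addresses and the attack scenario are invented, and the study's actual mechanism (a hardware side channel on a RISC-V platform) is not modelled here:

```python
class RootOfTrust:
    """Conceptual monitor: keeps a shadow stack of return addresses relayed
    from the host core and flags any return that does not match."""
    def __init__(self):
        self.shadow = []
        self.violations = 0

    def on_call(self, ret_addr):
        self.shadow.append(ret_addr)      # relayed on every call

    def on_return(self, ret_addr):
        expected = self.shadow.pop() if self.shadow else None
        if ret_addr != expected:
            self.violations += 1          # control-flow integrity violation

rot = RootOfTrust()
rot.on_call(0x4000)
rot.on_return(0x4000)                     # benign call/return pair
rot.on_call(0x4100)
rot.on_return(0x6666)                     # hijacked return address
```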

Relevance: 40.00%

Abstract:

This research employs solid-state actuators to delay the flow separation seen on airfoils at low Reynolds numbers. The flow control technique investigated here is aimed at a variable-camber airfoil that employs two active surfaces and a single four-bar (box) mechanism as the internal structure. To reduce separation, periodic excitation of the flow around the leading edge of the airfoil is induced by a total of nine piezocomposite-actuated clamped-free unimorph benders distributed in the spanwise direction. An electromechanical model is employed to design an actuator capable of high deformations at the desired frequency for lift improvement at post-stall angles. The optimum spanwise distribution of excitation for increasing the lift coefficient is identified experimentally in the wind tunnel. A 3D (non-uniform) excitation distribution achieved higher lift enhancement in the post-stall region with lower power consumption than the 2D (uniform) excitation distribution. A lift coefficient increase of 18.4% is achieved with the identified non-uniform excitation mode at the bender resonance frequency of 125 Hz, a flow velocity of 5 m/s and a reduced frequency of 3.78. The maximum lift coefficient (Clmax) is increased 5.2% from the baseline. The total power consumption of the flow control technique is 639 mW (RMS).
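As a worked check of the reported operating point, the reduced frequency ties the excitation frequency, chord and flow velocity together. Assuming the common definition k = pi * f * c / U (the paper may use a different convention), the reported values of 125 Hz, 5 m/s and k = 3.78 imply a chord of roughly 48 mm:

```python
import math

# Values reported in the abstract.
f = 125.0   # excitation frequency, Hz
U = 5.0     # freestream velocity, m/s
k = 3.78    # reduced frequency

# Implied chord length under the assumed definition k = pi * f * c / U.
c = k * U / (math.pi * f)   # metres
```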

Relevance: 40.00%

Abstract:

The research presented here employs solid-state actuators for flow separation delay, or for forced reattachment of the separated flow seen on airfoils at low Reynolds numbers. To reduce separation, periodic excitation of the flow around the leading edge of the airfoil is induced by Macro-Fiber Composite actuated clamped-free unimorph benders. An electromechanical model of the unimorph is briefly presented, and a parametric study is conducted to aid the design of a unimorph that outputs high deformation at a desired frequency. The optimum frequency and amplitude for lift improvement at post-stall angles are identified experimentally. Along with aerodynamic force and structural displacement measurements, helium-bubble flow visualization is used to verify the existing separated flow and the attached flow induced by flow control. The lift enhancement induced by several flow control techniques is compared. A symmetric and non-uniform (3D) flow excitation results in the maximum lift enhancement in the post-stall region at the lowest power consumption level. A maximum lift coefficient increase of 27.5% (in the post-stall region) is achieved at 125 Hz periodic excitation, with the 3D symmetric actuation mode, at 5 m/s and a reduced frequency of 3.78. C(l,max) is increased 7.6% from the baseline.

Relevance: 40.00%

Abstract:

Peptidergic mechanisms influencing the resistance of the gastrointestinal vascular bed of the estuarine crocodile, Crocodylus porosus, were investigated. The gut was perfused in situ via the mesenteric and the celiac arteries, and the effects of different neuropeptides were tested using bolus injections. Effects on vascular resistance were recorded as changes in inflow pressures. Peptides found in sensory neurons [substance P, neurokinin A, and calcitonin gene-related peptide (CGRP)] all caused significant relaxation of the celiac vascular bed, as did vasoactive intestinal polypeptide (VIP), another well-known vasodilator. Except for VIP, the peptides also induced transitory gut contractions. Somatostatin and neuropeptide Y (NPY), which coexist in adrenergic neurons of the C. porosus, induced vasoconstriction in the celiac vascular bed without affecting the gut motility. Galanin caused vasoconstriction and occasionally activated the gut wall. To elucidate direct effects on individual vessels, the different peptides were tested on isolated ring preparations of the mesenteric and celiac arteries. Only CGRP and VIP relaxed the epinephrine-precontracted celiac artery, whereas the effects on the mesenteric artery were variable. Somatostatin and NPY did not affect the resting tonus of these vessels, but somatostatin potentiated the epinephrine-induced contraction of the celiac artery. Immunohistochemistry revealed the existence and localization of the above-mentioned peptides in nerve fibers innervating vessels of different sizes in the gut region. These data support the hypothesis of an important role for neuropeptides in the control of the vascular bed of the gastrointestinal tract in C. porosus.

Relevance: 40.00%

Abstract:

The control of the nitrate recirculation flow in a predenitrification system is addressed. An elementary mass balance analysis on the utilisation efficiency of the influent biodegradable COD (bCOD) for nitrate removal indicates that the control problem can be broken down into two parts: maintaining the anoxic zone anoxic (i.e. nitrate is present throughout the anoxic zone) and maximising the usage of influent soluble bCOD for denitrification. Simulation studies using the Simulation Benchmark developed in the European COST program show that both objectives can be achieved by maintaining the nitrate concentration at the outlet of the anoxic zone at around 2 mgN/L. This setpoint appears to be robust towards variations in the influent characteristics and sludge kinetics.
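A feedback loop holding the anoxic-zone outlet nitrate at the 2 mgN/L setpoint can be sketched as a simple PI controller. The static plant model, gains and time step below are all invented toy values; the study itself uses the COST Simulation Benchmark, not this model:

```python
SETPOINT = 2.0                 # mgN/L at the anoxic-zone outlet
Kp, Ki, dt = 5.0, 2.0, 0.5     # invented controller gains and time step

def plant(recirc_flow):
    """Toy static plant: more recirculation carries more nitrate to the
    anoxic zone (gain invented for illustration)."""
    return 0.02 * recirc_flow

flow, integral = 50.0, 0.0     # initial recirculation flow and integral state
for _ in range(500):
    nitrate = plant(flow)
    error = SETPOINT - nitrate
    integral += error * dt
    # PI law around a nominal flow of 50; recirculation cannot go negative.
    flow = max(0.0, 50.0 + Kp * error + Ki * integral)
```

With these toy numbers the loop settles with the outlet nitrate at the setpoint, illustrating why a fixed 2 mgN/L target can be tracked robustly despite influent variations.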

Relevance: 40.00%

Abstract:

This paper presents the Direct Power Control of Three-Phase Matrix Converters (DPC-MC) operating as Unified Power Flow Controllers (UPFC). Since matrix converters allow direct AC/AC power conversion without an intermediate energy storage link, the resulting UPFC has reduced volume and cost, together with higher reliability. The theoretical principles of the DPC-MC method are established based on a UPFC model, together with a new direct power control approach based on sliding mode control techniques. As a result, active and reactive power can be directly controlled by selecting an appropriate switching state of the matrix converter. This new direct power control approach, associated with matrix converter technology, guarantees decoupled active and reactive power control, zero-error tracking, fast response times and timely control actions. Simulation results show good performance of the proposed system.

Relevance: 40.00%

Abstract:

This paper presents a direct power control (DPC) method for three-phase matrix converters operating as unified power flow controllers (UPFCs). Matrix converters (MCs) allow direct ac/ac power conversion without dc energy storage links; therefore, the MC-based UPFC (MC-UPFC) has reduced volume and cost and reduced capacitor power losses, together with higher reliability. Theoretical principles of DPC based on sliding mode control techniques are established for an MC-UPFC dynamic model including the input filter. As a result, line active and reactive power, together with ac supply reactive power, can be directly controlled by selecting an appropriate matrix converter switching state, guaranteeing good steady-state and dynamic responses. Experimental results of the DPC controllers for the MC-UPFC show decoupled active and reactive power control, zero steady-state tracking error, and fast response times. Compared to an MC-UPFC using active and reactive power linear controllers based on a modified Venturini high-frequency PWM modulator, the experimental results of the advanced DPC-MC guarantee faster responses without overshoot and no steady-state error, presenting no cross-coupling in dynamic and steady-state responses.
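The selection step at the heart of sliding-mode DPC can be sketched abstractly: the signs of the active- and reactive-power errors index into a switching table. The table contents, hysteresis band and state numbering below are invented placeholders, not the converter's actual vector table:

```python
def sign(x, hysteresis=0.01):
    """Three-level sign with a small hysteresis band around the surface."""
    if x > hysteresis:
        return +1
    if x < -hysteresis:
        return -1
    return 0

# Hypothetical switching table: (sign of P error, sign of Q error) -> state.
SWITCHING_TABLE = {
    (+1, +1): 3, (+1, 0): 2, (+1, -1): 1,
    (0, +1): 5,  (0, 0): 0,  (0, -1): 6,
    (-1, +1): 7, (-1, 0): 8, (-1, -1): 4,
}

def select_state(p_ref, p, q_ref, q):
    """Pick the switching state that drives both power errors toward zero."""
    return SWITCHING_TABLE[(sign(p_ref - p), sign(q_ref - q))]

# Active power too low, reactive power too high: pick the corrective state.
state = select_state(p_ref=1.0, p=0.8, q_ref=0.0, q=0.1)
```

Because the state is chosen directly from the error signs at every sampling instant, there is no intermediate modulator, which is what gives DPC its fast, decoupled response.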

Relevance: 40.00%

Abstract:

Aerodynamic drag is known to be one of the factors contributing most to increased aircraft fuel consumption. The primary source of skin friction drag during flight is boundary layer separation; the boundary layer is the layer of air moving smoothly in the immediate vicinity of the aircraft. In this paper we discuss a cyber-physical system approach capable of performing efficient suppression of the turbulent flow by using a dense sensing deployment to detect the low-pressure region and a similarly dense deployment of actuators to manage the turbulent flow. With this concept, only the actuators in the vicinity of a separation layer are activated, minimizing power consumption and also the induced drag.
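The local activation policy can be sketched as follows. The sensor layout, pressure values, threshold and neighbourhood rule are invented for illustration; the paper's system operates on a dense physical deployment, not a Python list:

```python
# Normalized pressure below which separation is assumed (invented threshold).
PRESSURE_THRESHOLD = 0.4

# Readings from a hypothetical spanwise row of eight pressure sensors,
# with a low-pressure (separated) region around indices 2..4.
sensor_pressure = [0.9, 0.8, 0.35, 0.30, 0.38, 0.85, 0.9, 0.88]

def actuators_to_activate(pressures, threshold):
    """Activate only the actuators co-located with, or adjacent to, a
    detected low-pressure region; all others stay off to save power."""
    active = set()
    for i, p in enumerate(pressures):
        if p < threshold:
            for j in (i - 1, i, i + 1):
                if 0 <= j < len(pressures):
                    active.add(j)
    return sorted(active)

active = actuators_to_activate(sensor_pressure, PRESSURE_THRESHOLD)
```

Only five of the eight actuators switch on in this example, which is the power-saving behaviour the concept relies on.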