959 results for automated software testing
Abstract:
Standardization facilitates communication and allows information to be exchanged with any national or international institution. This goal is achieved through communication formats for automated information exchange such as CEPAL, MARC, and FCC. The Escuela de Bibliotecología, Documentación e Información of the Universidad Nacional uses the MICROISIS software on a network for teaching. The databases designed there use the MARC format and, for bibliographic description, the RCAA2 rules. The experience with the "I&D" database on rural development is presented, including the Field Definition Table, the worksheet, the display format, and the Field Selection Table.
Abstract:
The document begins by describing the budget problems of information units and the high cost of commercial software specializing in library automation. It describes the origins of free software and its meaning, and identifies three levels of library automation: catalog automation, repository generation, and full automation. It reviews the various free software applications available at each level and lists a number of advantages and disadvantages of using these products. It concludes that an automation project is hard but full of satisfaction, emphasizing that there is no such thing as a cost-free project: while free software itself is free of charge, there are other costs related to implementation, training, and keeping the project running.
Abstract:
From their early days, Electrical Submersible Pumping (ESP) units have excelled at lifting much greater liquid rates than most other types of artificial lift, and have performed well in wells with high BSW, in both onshore and offshore environments. For any artificial lift system, the lifetime and frequency of interventions are of paramount importance, given the high costs of rigs and equipment, plus the losses caused by a halt in production. The search for a longer system life brings the need to operate efficiently and safely within the limits of the equipment, which implies periodic adjustments, monitoring, and control. As the drive to minimize direct human intervention grows, these adjustments should increasingly be made through automation. An automated system not only provides a longer life, but also greater control over the production of the well. The controller is the brain of most automation systems: it embodies the logic and strategies that make the work process operate efficiently. So great is the importance of the controller for any automation system that, with a better understanding of the ESP system and the development of research, many controllers are expected to be proposed for this artificial lift method. Once a controller is proposed, it must be tested and validated before it can be considered efficient and functional. Using a producing well or a test well could support such testing, but with the serious risk that flaws in the controller design might damage well equipment, much of it expensive. Given this reality, the main objective of the present work is to present an environment for evaluating fuzzy controllers for wells equipped with the ESP system, using a computer simulator representing a virtual oil well, fuzzy controller design software, and a PLC.
The proposed environment will reduce the time required for testing and adjusting the controller and allow a rapid diagnosis of its efficiency and effectiveness. The control algorithms are implemented both in a high-level language, through the controller design software, and in a language specific to PLC programming, the Ladder Diagram language.
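As an illustration of the kind of logic such an environment would exercise, here is a minimal single-input fuzzy rule sketch, not the thesis's controller: the pressure ranges, rule labels, and speed outputs are all hypothetical, chosen only to show triangular membership functions and centroid defuzzification.

```python
# Illustrative fuzzy-control sketch (hypothetical set points, not the thesis's design).
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed_adjust(intake_pressure):
    """Map a (hypothetical) pump intake-pressure reading to a speed change in Hz."""
    # Rule base: LOW pressure -> slow the pump down, OK -> hold, HIGH -> speed up.
    rules = [
        (tri(intake_pressure, 0.0, 20.0, 50.0), -10.0),    # LOW  -> -10 Hz
        (tri(intake_pressure, 30.0, 60.0, 90.0), 0.0),     # OK   ->   0 Hz
        (tri(intake_pressure, 70.0, 100.0, 130.0), 10.0),  # HIGH -> +10 Hz
    ]
    # Centroid (weighted-average) defuzzification over the fired rules.
    num = sum(weight * output for weight, output in rules)
    den = sum(weight for weight, _ in rules)
    return num / den if den else 0.0

print(fuzzy_speed_adjust(60.0))   # squarely "OK": no speed change
print(fuzzy_speed_adjust(20.0))   # squarely "LOW": full slow-down
```

In the environment described above, logic of this shape would run against the virtual well in the simulator, or be translated to Ladder Diagram for the PLC, before ever touching real equipment.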
Abstract:
Despite the development of improved performance test protocols by renowned researchers, there are still road networks that experience premature cracking and failure. One area of major concern in asphalt science and technology, especially in cold regions of Canada, is thermal (low-temperature) cracking: usually right after winter, severe cracks appear on poorly designed road networks. Quality assurance tests based on improved asphalt performance protocols have been implemented by government agencies to ensure that the roads being constructed meet the required standard, yet asphalt binders that pass these quality assurance tests still crack prematurely. While it would be easy to question the competence of the quality assurance test protocols, it should be noted that the performance tests used and repeated in this study, namely the extended bending beam rheometer (EBBR) test, the double-edge-notched tension (DENT) test, the dynamic shear rheometer (DSR) test, and X-ray fluorescence (XRF) analysis, have all been verified and shown to successfully predict asphalt pavement behaviour in the field. Hence this study set out to probe the quality and authenticity of the asphalt binders being used for road paving. It covered thermal cracking and the physical hardening phenomenon by comparing results from asphalt binder samples obtained from the storage tank prior to paving (tank samples) with recovered samples for the same contracts, with the aim of explaining why asphalt binders that have passed quality assurance tests are still prone to premature failure. The study also attempted to find out whether the short testing time and automated procedure of torsion bar experiments can replace the established but tedious EBBR procedure. In the end, it was discovered that significant differences in performance and composition exist between tank and recovered samples for the same contracts.
Torsion bar experimental data also indicated some promise in predicting physical hardening.
Abstract:
Security defects are common in large software systems because of their size and complexity. Even when efficient development processes, testing, and maintenance policies are applied, a large number of vulnerabilities can remain. Some vulnerabilities stay in a system from one release to the next because they cannot be easily reproduced through testing, and these vulnerabilities endanger the security of the systems. We propose vulnerability classification and prediction frameworks based on vulnerability reproducibility. The frameworks are effective in identifying the types and locations of vulnerabilities at an early stage and in improving the security of the software in subsequent versions (referred to as releases). We extend an existing concept of software bug classification to vulnerability classification (easily reproducible versus hard to reproduce), developing a classification framework that differentiates between these vulnerabilities based on code fixes and textual reports. We then investigate the potential correlations between the vulnerability categories and classical software metrics, as well as other runtime environmental factors of reproducibility, to develop a vulnerability prediction framework. The classification and prediction frameworks help developers adopt the corresponding mitigation or elimination actions and develop appropriate test cases. The vulnerability prediction framework also helps security experts focus their effort on the top-ranked vulnerability-prone files. As a result, the frameworks decrease the number of attacks that exploit security vulnerabilities in the next versions of the software. To build the classification and prediction frameworks, different machine learning techniques (C4.5 Decision Tree, Random Forest, Logistic Regression, and Naive Bayes) are employed.
The effectiveness of the proposed frameworks is assessed based on collected software security defects of Mozilla Firefox.
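A minimal sketch of the final modeling step, training several of the model families named in the abstract to separate easily-reproducible from hard-to-reproduce vulnerabilities. Everything here is invented for illustration: the three features (standing in for classical software metrics such as size, complexity, and churn) and the labels are synthetic, not the study's Mozilla Firefox data.

```python
# Hedged sketch: binary vulnerability-reproducibility classification on
# synthetic "software metric" features (hypothetical, not the study's data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Hypothetical per-file metrics: e.g. lines of code, cyclomatic complexity, churn.
X = rng.normal(size=(n, 3))
# Synthetic label: 1 = easily reproducible, 0 = hard to reproduce.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
models = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "logistic_regression": LogisticRegression(),
    "naive_bayes": GaussianNB(),
}
# Fit each model and report held-out accuracy.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
print(scores)
```

In the study's setting the features would come from real code metrics and runtime environmental factors, and per-file predicted probabilities would rank the vulnerability-prone files for review.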
Abstract:
Introduction: Rapid bacterial detection and identification is essential for the management of critically ill patients with infectious disease, and requires fast methods so that correct treatment can be started promptly. In Colombia, conventional microbiology tests are used, and there are no studies of mass spectrometry applied to samples from critically ill patients. General objective: To describe the experience of microbiological analysis using MALDI-TOF MS technology on samples taken at the Fundación Santa Fe de Bogotá. Materials and methods: Between June and July 2013, 147 bacterial isolates from clinical samples were analyzed, all previously processed with the VITEK II system. The isolates came from 88 blood cultures (60%), 28 urine cultures (19%), and 31 other cultures (21%). Results: Of the 147 isolates, adequate identification at the genus and/or species level was obtained; 88.4% (130 samples) were identified at the genus and species level, with 100% concordance with the VITEK II system. The identification rate was 66% for non-fermenting gram-negative bacilli, 96% for Enterobacteriaceae, 100% for fastidious organisms, 92% for gram-positive cocci, 100% for motile gram-negative bacilli, and 100% for yeasts. No concordance was found for gram-positive bacilli or organisms of the genus Aggregatibacter. Conclusions: MALDI-TOF is a rapid test for microbiological identification at the genus and species level whose results agree with those obtained conventionally. Further studies are needed to make MALDI-TOF MS the gold standard for the identification of organisms.
Abstract:
Through modelling activity, experimental campaigns, test-bench work, and on-field validation, a complete powertrain for a BEV has been designed, assembled, and used in a motorsport competition. The activity can be split into three main subjects, representing the three key components of a BEV. First of all, a model of the entire powertrain was developed in order to understand how the various design choices would influence the race lap time. The data obtained were then used to design, build, and test a first battery pack. After bench tests and track tests, it was understood that by exploiting all the cell characteristics, without breaking the rules' limitations, higher energy and power densities could be achieved. An updated battery pack was then designed, produced, and raced at Motostudent 2018, resulting in a third place at its debut. The second topic of this PhD was the design of novel inverter topologies. Three inverters were designed, two of them using Gallium Nitride devices, a promising semiconductor technology that can achieve high switching speeds while maintaining low switching losses. High switching frequency is crucial to reduce the DC-bus capacitor and thus increase the power density of three-phase inverters. The third inverter uses classic silicon devices but employs a ZVS (Zero Voltage Switching) topology. Despite the increased complexity of both the hardware and the control software, it can offer reduced switching losses while using conventional and established silicon MOSFET technology. Finally, the mechanical parts of a three-phase permanent magnet motor have been designed with the aim of employing it in UniBo Motorsport's 2020 Formula Student car.
Abstract:
Modern networks are undergoing a fast and drastic evolution, with software taking a more predominant role. Virtualization and cloud-like approaches are replacing physical network appliances, reducing the management burden on operators. Furthermore, networks now expose programmable interfaces for fast and dynamic control over traffic forwarding. This evolution is backed by standards organizations such as ETSI, 3GPP, and IETF. This thesis describes the main trends in this evolution and then presents solutions developed during the three years of the Ph.D. to exploit the capabilities these new technologies offer and to study their possible limitations, pushing the state of the art further. Namely, it deals with programmable network infrastructure, introducing the concept of Service Function Chaining (SFC) and presenting two possible solutions, one with OpenStack and OpenFlow and the other using Segment Routing and IPv6. It then continues with network service provisioning, presenting concepts from Network Function Virtualization (NFV) and Multi-access Edge Computing (MEC); these concepts are applied to network slicing for mission-critical communications and the Industrial IoT (IIoT). Finally, it deals with network abstraction, with a focus on Intent-Based Networking (IBN). To summarize, the thesis includes solutions for data-plane programming evaluated on well-known platforms, performance metrics on virtual resource allocation, a novel practical application of network slicing to mission-critical communications, an architectural proposal and its implementation for edge technologies in Industrial IoT scenarios, and a formal definition of intent using a category-theory approach.
Abstract:
This work deals with the development of calibration procedures and control systems to improve the performance and efficiency of modern spark-ignition turbocharged engines. The algorithms developed are used to optimize and manage the spark advance and the air-to-fuel ratio in order to control knock and the exhaust gas temperature at the turbine inlet. The work described falls within the activity that the research group started in previous years with the industrial partner Ferrari S.p.A. The first chapter deals with the development of a control-oriented engine simulator based on a neural-network approach, with which the main combustion indexes can be simulated. The second chapter deals with the development of a procedure to calibrate the spark advance and the air-to-fuel ratio offline so that the engine runs under knock-limited conditions and with the maximum admissible exhaust gas temperature at the turbine inlet. This procedure is then converted into a model-based control system and validated with a Software-in-the-Loop approach using the engine simulator developed in the first chapter. Finally, it is implemented in rapid-control-prototyping hardware to manage combustion in steady-state and transient operating conditions at the test bench. The third chapter deals with the study of an innovative, inexpensive sensor for in-cylinder pressure measurement: a piezoelectric washer that can be installed between the spark plug and the engine head. The signal generated by this kind of sensor is studied, and a specific algorithm is developed to adjust the value of the knock index in real time. Finally, with the engine simulator developed in the first chapter, it is demonstrated that the innovative sensor can be coupled with the control system described in the second chapter, and that the performance obtained can match that reachable with standard in-cylinder pressure sensors.
Abstract:
Knowledge graphs and ontologies are closely related concepts in the field of knowledge representation. In recent years, knowledge graphs have gained increasing popularity and serve as essential components in many knowledge engineering projects that view them as crucial to their success. The conceptual foundation of a knowledge graph is provided by ontologies. Ontology modeling is an iterative engineering process consisting of steps such as the elicitation and formalization of requirements and the development, testing, refactoring, and release of the ontology. Testing the ontology is a crucial and occasionally overlooked step of the process, owing to the lack of integrated tools to support it. As a result of this gap in the state of the art, ontology testing is carried out manually, which requires a considerable amount of time and effort from ontology engineers. The lack of tool support is noticeable in the requirement elicitation process as well. Here, the rising adoption and accessibility of knowledge graphs allow for the development of automated tools that assist with eliciting requirements from such a complementary source of data. This doctoral research therefore focuses on developing methods and tools that support the requirement elicitation and testing steps of an ontology engineering process. To support ontology testing, we have developed XDTesting, a web application integrated with the GitHub platform that serves as an ontology testing manager. Concurrently, to support the elicitation and documentation of competency questions, we have defined and implemented RevOnt, a method for extracting competency questions from knowledge graphs. Both methods are evaluated through their implementation, and the results are promising.
Abstract:
Linear cascade testing plays a fundamental role in the research, development, and design of turbomachines, as it is a simple yet very effective way to measure the performance of a generic blade geometry. These experiments are usually carried out in specialized wind tunnel facilities. This thesis deals with the numerical characterization and subsequent partial redesign of the S-1/C Continuous High Speed Wind Tunnel of the Von Karman Institute for Fluid Dynamics. The current facility is powered by a 13-stage axial compressor that is not powerful enough to balance the energy loss experienced when testing low-turning airfoils. To address this issue, a performance assessment of the wind tunnel was carried out under several flow regimes via numerical simulations. A redesign proposal aimed at reducing the pressure loss was then investigated. It consists of a linear cascade of turning blades, placed downstream of the test section and designed specifically for the type of linear cascade being tested. An automatic design procedure was created, taking as input the parameters measured at the outlet of the cascade. The parametrization method employs Bézier curves to produce an airfoil geometry that can be imported into CAD software so that a cascade can be designed. The proposal was simulated via CFD analysis and proved effective in reducing pressure losses by up to 41%. The tool developed in this thesis could be adopted to design similar apparatuses and could also be optimized and specialized for the design of turbomachinery components.
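The Bézier parametrization step can be sketched with the classical De Casteljau evaluation algorithm, which computes a point on the curve by repeated linear interpolation of the control polygon. The control points below are hypothetical, not the thesis's airfoil data.

```python
# De Casteljau evaluation of a Bézier curve (hypothetical control polygon).
def de_casteljau(points, t):
    """Evaluate a 2-D Bézier curve defined by `points` at parameter t in [0, 1]."""
    pts = [tuple(p) for p in points]
    # Repeatedly interpolate adjacent points until a single point remains.
    while len(pts) > 1:
        pts = [
            ((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])
            for p, q in zip(pts, pts[1:])
        ]
    return pts[0]

# Quadratic example: the curve starts and ends at the first and last control points.
ctrl = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]
start, midpoint, end = (de_casteljau(ctrl, t) for t in (0.0, 0.5, 1.0))
print(start, midpoint, end)  # midpoint of this symmetric arc is (0.5, 0.5)
```

In an airfoil parametrization, the control-point coordinates become the design variables: an optimizer or design procedure perturbs them, and the sampled curve points are exported to CAD.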
Abstract:
The quantity of electric energy used by a home, a business, or an electrically powered device is measured by an electricity meter, also known as an electric meter, electrical meter, or energy meter. Electric utilities use the meters installed at customers' premises for billing. Meters are usually calibrated in billing units, the most common being the kilowatt-hour (kWh), and are typically read once per billing cycle. Where energy savings are sought during specific periods, some meters may also record demand, the highest amount of electricity used during a given interval, and some feature relays for load shedding during periods of peak load. A watt-hour meter thus measures the electrical energy consumed by users; utilities install these devices everywhere, in households, businesses, and organizations, to charge for the electricity used by loads such as lights, fans, and other appliances. The watt is the fundamental unit of power, and a kilowatt equals one thousand watts; one kilowatt drawn for one hour is counted as one unit of consumed energy. These meters take the product of instantaneous voltage and current readings to obtain instantaneous power, which is then accumulated over a period of time to give the energy used in that period. Depending on the supply feeding a domestic or commercial installation, the meters may be single-phase or three-phase. For small services such as residential consumers they can be connected directly between line and load, but for larger loads step-down current transformers must be installed to handle the higher currents.
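The metering arithmetic described above, instantaneous power as voltage times current, accumulated over time and billed in kilowatt-hours, can be sketched as follows. The voltage, current, and sampling interval are hypothetical example values.

```python
# Sketch of watt-hour metering: power = V * I, energy = power accumulated over time.
def energy_kwh(samples, interval_h):
    """samples: (voltage_V, current_A) pairs taken every interval_h hours.

    Returns accumulated energy in kilowatt-hours (1 kWh = 1000 Wh)."""
    watt_hours = sum(v * i * interval_h for v, i in samples)
    return watt_hours / 1000.0

# A 1 kW load on a 230 V supply (so ~4.35 A), sampled hourly for 3 hours:
readings = [(230.0, 1000.0 / 230.0)] * 3
units = energy_kwh(readings, interval_h=1.0)
print(units)  # ~3 kWh, i.e. 3 billing "units"
```

A real meter samples voltage and current many times per AC cycle and multiplies the instantaneous values, but the billing quantity it accumulates is exactly this product-over-time.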
Abstract:
Conventional tilted implants are used in oral rehabilitation of the severely resorbed maxilla to avoid bone grafts; however, few studies evaluate the biomechanical behavior when different implant angulations are used. The aim of this study was to evaluate, through the photoelastic method, two different angulations and cantilever lengths in fixed implant-supported maxillary complete dentures. Two groups were evaluated: G15 (distal implants tilted 15°) and G35 (distal implants tilted 35°), n = 6. In each model, 2 distal tilted implants (3.5 x 15 mm cylindrical cone) and 2 parallel implants in the anterior region (3.5 x 10 mm) were installed. Photoelastic models were submitted to three vertical load tests: at the end of the cantilever, on the last abutment, and on all abutments at the same time. Shear stresses were obtained with the Fringes software, yielding values for total, cervical, and apical stress. Quantitative analysis was performed with Student's t-test and the Mann-Whitney test at a significance level of 0.05. There was no difference between G15 and G35 in total stress regardless of load type. In the apical region, G35 reduced strain values for the distal loads (p = 0.03 at the cantilever and p = 0.02 at the last abutment), without increasing the stress level in the cervical region. For the load on all abutments, G35 showed a higher stress concentration in the cervical region (p = 0.04). For distal loads, G15 showed increased tension in the apical region, while for the load on all abutments the G35 inclination increased stress values in the cervical region.
Abstract:
This article aimed to compare the accuracy of the linear measurement tools of different commercial software packages. Eight fully edentulous dry mandibles were selected for this study, and the incisor, canine, premolar, first molar, and second molar regions were assessed. Cone beam computed tomography (CBCT) images were obtained with the i-CAT Next Generation scanner. Linear bone measurements were performed by one observer on the cross-sectional images using three software packages able to read DICOM images: XoranCat®, OnDemand3D®, and KDIS3D®. In addition, 25% of the sample was re-evaluated to assess reproducibility. The mandibles were then sectioned to obtain the gold standard for each region. Intraclass correlation coefficients (ICC) were calculated to examine the agreement between the two evaluation periods, and one-way analysis of variance with the post-hoc Dunnett test was used to compare each of the software-derived measurements with the gold standard. The ICC values were excellent for all software packages. The smallest differences between the software-derived measurements and the gold standard were obtained with OnDemand3D and KDIS3D (-0.11 and -0.14 mm, respectively), and the greatest with XoranCAT (+0.25 mm). However, there was no statistically significant difference between the measurements obtained with the different software packages and the gold standard (p > 0.05). In conclusion, linear bone measurements were not influenced by the software package used to reconstruct the image from CBCT DICOM data.
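The accuracy figures quoted above (e.g. -0.11 mm for OnDemand3D) are mean signed differences between each package's measurements and the anatomical gold standard. A minimal sketch of that computation, using invented measurements rather than the study's data:

```python
# Mean signed difference (bias) of each package vs. a gold standard.
# All measurement values below are made up for illustration (in mm).
gold = [10.2, 8.7, 12.5, 9.9]
packages = {
    "OnDemand3D": [10.1, 8.6, 12.4, 9.8],
    "KDIS3D":     [10.0, 8.6, 12.4, 9.7],
    "XoranCAT":   [10.5, 8.9, 12.8, 10.1],
}

bias = {
    name: sum(m - g for m, g in zip(vals, gold)) / len(gold)
    for name, vals in packages.items()
}
for name, b in bias.items():
    print(f"{name}: {b:+.2f} mm")
```

The study then tests whether these biases differ significantly from zero with a one-way ANOVA and Dunnett's post-hoc test against the gold-standard group.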