953 results for SOFTWARE APPLICATIONS
Abstract:
In 2010, the American Association of State Highway and Transportation Officials (AASHTO) released a safety analysis software system known as SafetyAnalyst. SafetyAnalyst implements the empirical Bayes (EB) method, which requires the use of Safety Performance Functions (SPFs). The system is equipped with a set of national default SPFs, and the software calibrates the default SPFs to represent the agency's safety performance. However, it is recommended that agencies generate agency-specific SPFs whenever possible. Many investigators support the view that agency-specific SPFs represent the agency data better than the national default SPFs calibrated to agency data. Furthermore, it is believed that the crash trends in Florida differ from those in the states whose data were used to develop the national default SPFs. In this dissertation, Florida-specific SPFs were developed using the 2008 Roadway Characteristics Inventory (RCI) data and crash and traffic data from 2007-2010 for both total and fatal-and-injury (FI) crashes. The data were randomly divided into two sets, one for calibration (70% of the data) and another for validation (30% of the data). The negative binomial (NB) model was used to develop the Florida-specific SPFs for each of the subtypes of roadway segments, intersections and ramps, using the calibration data. Statistical goodness-of-fit tests were performed on the calibrated models, which were then validated using the validation data set. The results were compared in order to assess the transferability of the Florida-specific SPF models. The default SafetyAnalyst SPFs were calibrated to Florida data by adjusting the national default SPFs with local calibration factors. The performance of the Florida-specific SPFs and the SafetyAnalyst default SPFs calibrated to Florida data was then compared using a number of methods, including visual plots and statistical goodness-of-fit tests. The plots of SPFs against the observed crash data were used to compare the prediction performance of the two models. Three goodness-of-fit measures, the mean absolute deviance (MAD), the mean square prediction error (MSPE), and the Freeman-Tukey R2 (R2FT), were also used for comparison in order to identify the better-fitting model. The results showed that the Florida-specific SPFs yielded better prediction performance than the national default SPFs calibrated to Florida data. The performance of the Florida-specific SPFs was further compared with that of the full SPFs, which include both traffic and geometric variables, in two major applications of SPFs, i.e., crash prediction and identification of high crash locations. The results showed that both SPF models yielded very similar performance in both applications. These empirical results support the use of the flow-only SPF models adopted in SafetyAnalyst, which require much less effort to develop compared to full SPFs.
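As an illustration of the modelling step described above, the sketch below fits a flow-only negative binomial SPF of the common form N = exp(b0) * AADT^b1 * L to invented segment data and computes the local calibration factor, MAD and MSPE used for model comparison. Variable names and data are illustrative; the dissertation's actual model specifications are not reproduced here.

```python
# Minimal sketch: fit a flow-only negative binomial SPF and compute the
# calibration factor, MAD and MSPE (illustrative data and parameter values).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
aadt = rng.uniform(2_000, 60_000, n)          # annual average daily traffic
length = rng.uniform(0.1, 2.0, n)             # segment length (miles)
mu_true = np.exp(-6.0) * aadt**0.85 * length  # hypothetical "true" SPF
crashes = rng.poisson(mu_true)                # observed crash counts

# ln(mu) = b0 + b1*ln(AADT) + ln(L), with ln(L) entering as an offset
X = sm.add_constant(np.log(aadt))
nb = sm.GLM(crashes, X,
            family=sm.families.NegativeBinomial(alpha=0.5),
            offset=np.log(length)).fit()
predicted = nb.fittedvalues

calibration_factor = crashes.sum() / predicted.sum()    # local calibration factor C
mad = np.mean(np.abs(crashes - predicted))               # mean absolute deviance
mspe = np.mean((crashes - predicted) ** 2)               # mean square prediction error
print(nb.params, calibration_factor, mad, mspe)
```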
Abstract:
Some organizations end up reimplementing the same class of business process over and over: an "administrative process", which consists of managing a form through several states and involving various roles in the organization. This results in wasted time that could be dedicated to better understanding the process or dealing with the fine details that are specific to the process. Existing virtual office solutions require specific training and infrastructure and may result in vendor lock-in. In this paper, we propose using a high-level domain-specific language (AdminDSL) to describe the administrative process and a separate code generator targeting a standard web framework. We have implemented the approach using Xtext, EGL and the Django web framework, and we illustrate it through two case studies: a synthetic examination process which illustrates the architecture of the generated code, and a real-world workplace survey process which identified several avenues for future improvement.
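To make the idea concrete, the following is a minimal sketch of the kind of Django code such a generator might emit for an administrative process: a form model with a state field and a role-checked transition. Model names, states and roles are hypothetical and do not reproduce AdminDSL's actual output.

```python
# Hypothetical sketch of generator output for an administrative process:
# a form whose lifecycle is a small state machine with role-checked transitions.
from django.conf import settings
from django.db import models


class ExaminationRequest(models.Model):
    STATES = [
        ("draft", "Draft"),
        ("submitted", "Submitted"),
        ("approved", "Approved"),
        ("rejected", "Rejected"),
    ]
    applicant = models.ForeignKey(settings.AUTH_USER_MODEL,
                                  on_delete=models.CASCADE,
                                  related_name="examination_requests")
    subject = models.CharField(max_length=200)
    state = models.CharField(max_length=20, choices=STATES, default="draft")

    def transition(self, user, new_state):
        """Apply a state transition only if the user's role allows it."""
        allowed = {
            ("draft", "submitted"): "applicant",
            ("submitted", "approved"): "examiner",
            ("submitted", "rejected"): "examiner",
        }
        role = allowed.get((self.state, new_state))
        if role is None or not user.groups.filter(name=role).exists():
            raise PermissionError(f"{user} may not move {self.state} -> {new_state}")
        self.state = new_state
        self.save()
```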
Abstract:
A comprehensive user model, built by monitoring a user's current use of applications, can be an excellent starting point for building adaptive user-centred applications. The BaranC framework monitors all user interaction with a digital device (e.g. smartphone), and also collects all available context data (such as from sensors in the digital device itself, in a smart watch, or in smart appliances) in order to build a full model of user application behaviour. The model built from the collected data, called the UDI (User Digital Imprint), is further augmented by analysis services, for example, a service to produce activity profiles from smartphone sensor data. The enhanced UDI model can then be the basis for building an appropriate adaptive application that is user-centred, as it is based on an individual user model. As BaranC supports continuous user monitoring, an application can be dynamically adaptive in real time to the current context (e.g. time, location or activity). Furthermore, since BaranC continuously augments the user model with newly monitored data, the user model changes over time, and the adaptive application can adapt gradually to changing user behaviour patterns. BaranC has been implemented as a service-oriented framework where the collection of data for the UDI and all sharing of the UDI data are kept strictly under the user's control. In addition, being service-oriented allows (with the user's permission) its monitoring and analysis services to be easily used by third parties in order to provide third-party adaptive assistant services. An example third-party service demonstrator, built on top of BaranC, proactively assists a user by dynamically predicting, based on the current context, which apps and contacts the user is likely to need. BaranC introduces an innovative user-controlled unified service model for monitoring and using personal digital activity data in order to provide adaptive user-centred applications. This aims to improve on the current situation, where the diversity of adaptive applications results in a proliferation of applications monitoring and using personal data, leading to a lack of clarity, a dispersal of data, and a diminution of user control.
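The proactive assistant described above is, at its core, a context-conditioned prediction over past usage. The sketch below illustrates one very simple form of it, a frequency table keyed on hour of day, assuming usage events are available as (timestamp, app) pairs; the event format is assumed and this is not the BaranC service itself.

```python
# Minimal sketch: predict the apps a user is likely to need next, conditioned
# on a simple context feature (hour of day). The event format is assumed,
# not BaranC's actual UDI schema.
from collections import Counter, defaultdict
from datetime import datetime


class AppPredictor:
    def __init__(self):
        self.by_hour = defaultdict(Counter)

    def observe(self, timestamp: datetime, app: str) -> None:
        """Record one app-launch event from the monitored usage stream."""
        self.by_hour[timestamp.hour][app] += 1

    def predict(self, now: datetime, k: int = 3) -> list[str]:
        """Return the k apps most often used at this hour in the past."""
        return [app for app, _ in self.by_hour[now.hour].most_common(k)]


predictor = AppPredictor()
predictor.observe(datetime(2023, 5, 2, 8, 15), "mail")
predictor.observe(datetime(2023, 5, 3, 8, 20), "mail")
predictor.observe(datetime(2023, 5, 3, 8, 40), "calendar")
print(predictor.predict(datetime(2023, 5, 4, 8, 5)))  # ['mail', 'calendar']
```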
Abstract:
Semantic relations are an important element in the construction of ontology-based linguistic resources and models of problem domains. Nevertheless, they remain under-specified. This is a pervasive problem in both Software Engineering and Artificial Intelligence. Thus, we find semantic links that can have multiple interpretations, abstractions that are insufficient to represent the relational richness of problem domains, and even poorly structured taxonomies. However, if provided with precise semantics, some of these problems can be avoided, and meaningful operations can be performed on the relations that can aid the ontology construction process. In this paper we present some key issues concerning the representation of relations. Moreover, the initiatives aiming to provide relations with clear semantics are explained, and the inclusion of their core ideas as part of a methodology for the development of ontology-based linguistic resources is proposed.
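As a small illustration of why precise relation semantics matter, the sketch below treats a part-of relation as explicitly transitive and computes its closure, the kind of meaningful operation that becomes mechanical once a relation's semantics are pinned down. The toy facts are invented for the example.

```python
# Minimal sketch: once "part-of" is declared transitive, its closure can be
# computed mechanically. The toy facts below are purely illustrative.
def transitive_closure(pairs):
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure


part_of = {("piston", "engine"), ("engine", "car")}
print(transitive_closure(part_of))
# {('piston', 'engine'), ('engine', 'car'), ('piston', 'car')}
```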
Abstract:
The core of this thesis is the modeling of complex web applications with the Story-Driven Modeling approach. The goal is to develop the complete application solely through the specification of models; writing source code by hand is not necessary. This thesis presents both the research path that enables the intended modeling of web applications and the resulting outcomes. To support the development process, a model-driven software development process is also introduced, covering the modeling of a web application from requirements elicitation to the final generation of source code from the specified models. Tool support for the defined process is provided within the Fujaba Toolsuite; as part of this work, the existing toolsuite was extended with all tools required to support the process. In addition, the existing Fujaba tools were extended so that, besides the classical modeling of complex Java applications, web applications can also be generated. Alongside the detailed description of the development process, this thesis precisely describes the resulting web applications and their specific properties. To generate these applications, the workflow diagram type is introduced and described in addition to the development process. These diagrams capture the intended user workflow of the application during requirements analysis and serve as a dedicated development artifact throughout the rest of the development. Based on the workflow diagrams, the graphical user interface of the web application is described and a runtime system is initialized that controls the application according to the flows captured in the workflow diagram. This runtime system was developed as part of this thesis and anchored in the process support. All necessary changes, adaptations and extensions to existing parts of the Fujaba Toolsuite are described in detail with respect to the creation of client-side data models of a web application, together with the prerequisites that must be fulfilled. In this context, it is also described how graph transformations can be used to implement business logic on the client side of a web application and how data model changes can be synchronized between different clients. Overall, this thesis shows a way to apply the existing Story-Driven Modeling approach to the generation of web applications. Through the approach described in this work, web browsers are at the same time turned into a new class of graph rewriting engines, as graph transformations are delivered to and executed within the browser's Ajax engine.
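To illustrate the central mechanism, a graph transformation applied to a client-side data model, the sketch below implements one rewrite rule (match a pattern in an object graph, then modify it) in plain Python. In the thesis such rules are generated from story diagrams and executed inside the browser's Ajax engine; the data model here is invented for the example.

```python
# Minimal sketch of a graph transformation rule on an object-graph data model:
# "for every open order of a blocked customer, mark the order as suspended".
# The node/edge structure is illustrative, not Fujaba's generated model code.
graph = {
    "nodes": {
        "c1": {"type": "Customer", "blocked": True},
        "o1": {"type": "Order", "state": "open"},
        "o2": {"type": "Order", "state": "open"},
    },
    "edges": [("c1", "places", "o1"), ("c1", "places", "o2")],
}


def suspend_orders_of_blocked_customers(g):
    """Left-hand side: blocked Customer --places--> open Order.
    Right-hand side: the matched Order's state becomes 'suspended'."""
    for src, label, dst in g["edges"]:
        customer, order = g["nodes"][src], g["nodes"][dst]
        if (label == "places" and customer.get("blocked")
                and order["type"] == "Order" and order["state"] == "open"):
            order["state"] = "suspended"   # apply the rewrite


suspend_orders_of_blocked_customers(graph)
print(graph["nodes"]["o1"]["state"], graph["nodes"]["o2"]["state"])  # suspended suspended
```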
Stochastic individual growth models and development of estimation and prediction software
Abstract:
Individual growth models are usually adaptations of population growth models. Initially these models were only deterministic, that is, they did not incorporate the random fluctuations of the environment. With the development of the theory of stochastic calculus, we can add a stochastic term that represents the random environmental influences on the process under study. Currently, the study of individual growth in a random environment is increasingly important, not only for its financial relevance but also because of its applications in health care and livestock production, among others. Problems such as the fitting of individual growth models, parameter estimation and prediction of future sizes are treated in this work. New applications of the generalized stochastic monomolecular model are presented, together with new software implementing this and other models.
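As a worked illustration of the kind of stochastic growth model discussed, the sketch below simulates one common SDE form of the monomolecular (Mitscherlich) model, dY_t = beta*(alpha - Y_t) dt + sigma dW_t, with the Euler-Maruyama scheme. Parameter values are illustrative and the generalized model actually used in the thesis is not reproduced.

```python
# Minimal sketch: Euler-Maruyama simulation of a stochastic monomolecular
# growth model dY = beta*(alpha - Y) dt + sigma dW (illustrative parameters).
import numpy as np

alpha, beta, sigma = 400.0, 0.8, 15.0   # asymptotic size, growth rate, noise intensity
y0, T, n = 40.0, 10.0, 1000             # initial size, horizon (years), time steps
dt = T / n

rng = np.random.default_rng(1)
y = np.empty(n + 1)
y[0] = y0
for k in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))                     # Brownian increment
    y[k + 1] = y[k] + beta * (alpha - y[k]) * dt + sigma * dW

print(f"size at T={T}: {y[-1]:.1f} (deterministic asymptote {alpha})")
```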
Abstract:
The primary aim of the research activity presented in this PhD thesis was the development of an innovative hardware and software solution for creating a unique tool for kinematic and electromyographic analysis of the human body in an ecological setting. For this purpose, innovative algorithms have been proposed regarding different aspects of inertial and magnetic data processing: magnetometer calibration and magnetic field mapping (Chapter 2), data calibration (Chapter 3) and the sensor-fusion algorithm. Topics that may conflict with the confidentiality agreement between the University of Bologna and NCS Lab are not covered in this thesis. After developing and testing the wireless platform, research activities focused on its clinical validation. The first clinical study aimed to evaluate the intra- and inter-observer reproducibility of three-dimensional humero-scapulo-thoracic kinematics in an outpatient setting (Chapter 4). A second study aimed to evaluate the effect of Latissimus Dorsi tendon transfer on shoulder kinematics and Latissimus Dorsi activation during internal-external rotations of the humerus (Chapter 5). Results from both clinical studies have demonstrated the ability of the developed platform to enter daily clinical practice, providing useful information for patients' rehabilitation.
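The thesis's own calibration and sensor-fusion algorithms are covered by a confidentiality agreement and are not described; purely as an illustration of inertial sensor fusion, the sketch below shows a textbook single-axis complementary filter that blends gyroscope integration with an accelerometer tilt estimate. Sample data and the blending gain are invented.

```python
# Minimal sketch of a one-axis complementary filter (textbook example, not the
# proprietary fusion algorithm developed in the thesis).
import math

def complementary_filter(gyro_rates, accels, dt=0.01, gain=0.98):
    """gyro_rates: angular rates about x (rad/s); accels: (ay, az) pairs (m/s^2)."""
    angle = 0.0
    estimates = []
    for rate, (ay, az) in zip(gyro_rates, accels):
        gyro_angle = angle + rate * dt          # integrate the gyroscope
        accel_angle = math.atan2(ay, az)        # tilt seen by the accelerometer
        angle = gain * gyro_angle + (1.0 - gain) * accel_angle
        estimates.append(angle)
    return estimates

# Toy usage: constant 0.1 rad/s rotation with a gravity vector consistent with it.
rates = [0.1] * 100
accs = [(math.sin(0.1 * k * 0.01) * 9.81, math.cos(0.1 * k * 0.01) * 9.81) for k in range(100)]
print(f"final angle estimate: {complementary_filter(rates, accs)[-1]:.3f} rad")
```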
Abstract:
A High-Performance Computing (HPC) job dispatcher is a critical software component that assigns the finite computing resources to submitted jobs. This resource assignment over time is known as the on-line job dispatching problem in HPC systems. Because the problem is on-line, solutions must be computed in real time, and the time they take cannot exceed a threshold without affecting normal system functioning. In addition, a job dispatcher must deal with a great deal of uncertainty: submission times, the number of requested resources, and the duration of jobs. Heuristic-based techniques have been broadly used in HPC systems; they produce solutions in a short time, at the cost of (sub-)optimality. Moreover, their scheduling and resource allocation components are separated, which produces decoupled decisions that may cause a performance loss. Optimization-based techniques are less used for this problem, although they can significantly improve the performance of HPC systems at the expense of higher computation time. Nowadays, HPC systems are being used for modern applications, such as big data analytics and predictive model building, that in general employ many short jobs. However, this information is unknown at dispatching time, and job dispatchers need to process large numbers of such jobs quickly while ensuring high Quality-of-Service (QoS) levels. Constraint Programming (CP) has been shown to be an effective approach to tackle job dispatching problems. However, state-of-the-art CP-based job dispatchers are unable to meet the challenges of on-line dispatching, such as generating dispatching decisions within a short time and integrating current and past information about the hosting system. For these reasons, we propose CP-based dispatchers that are more suitable for HPC systems running modern applications, generating on-line dispatching decisions within an appropriate time and making effective use of job duration predictions to improve QoS levels, especially for workloads dominated by short jobs.
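As an illustration of casting dispatching as a constraint model, the sketch below builds a tiny CP model with Google OR-Tools CP-SAT: jobs with durations and core demands are packed onto a machine with a cumulative capacity, minimizing total start time as a proxy for waiting. This is a generic example under assumed data, not the CP dispatchers developed in the thesis.

```python
# Minimal sketch: on-line-style dispatching as a cumulative scheduling model
# (generic CP-SAT example; not the thesis's dispatcher, data is invented).
from ortools.sat.python import cp_model

jobs = [  # (requested cores, expected duration)
    (16, 3), (8, 2), (32, 5), (4, 1), (16, 2),
]
capacity, horizon = 48, 20

model = cp_model.CpModel()
starts, intervals, demands = [], [], []
for i, (cores, dur) in enumerate(jobs):
    s = model.NewIntVar(0, horizon, f"start_{i}")
    e = model.NewIntVar(0, horizon, f"end_{i}")
    intervals.append(model.NewIntervalVar(s, dur, e, f"job_{i}"))
    starts.append(s)
    demands.append(cores)

model.AddCumulative(intervals, demands, capacity)   # never exceed the available cores
model.Minimize(sum(starts))                          # proxy for total waiting time

solver = cp_model.CpSolver()
solver.parameters.max_time_in_seconds = 1.0          # on-line budget: answer quickly
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(s) for s in starts])
```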
Abstract:
The convergence between recent developments in sensing technologies, data science, signal processing and advanced modelling has fostered a new paradigm for the Structural Health Monitoring (SHM) of engineered structures, one based on intelligent sensors, i.e., embedded devices capable of processing data streams and/or performing structural inference in a self-contained and near-sensor manner. To efficiently exploit these intelligent sensor units for full-scale structural assessment, a joint effort is required to deal with instrumental aspects related to signal acquisition, conditioning and digitalization, and with those pertaining to data management, data analytics and information sharing. In this framework, the main goal of this Thesis is to tackle the multi-faceted nature of the monitoring process via a full-scale optimization of the hardware and software resources involved in the SHM system. The pursuit of this objective has required the investigation of both: i) transversal aspects common to multiple application domains at different abstraction levels (such as knowledge distillation, networking solutions, microsystem HW architectures), and ii) the specificities of the monitoring methodologies (vibration, guided-wave and acoustic emission monitoring). The key tools adopted in the proposed monitoring frameworks belong to the embedded signal processing field: namely, graph signal processing, compressed sensing, ARMA system identification, digital data communication and TinyML.
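As a small example of the embedded signal-processing toolbox mentioned above, the sketch below fits an autoregressive (AR) model to a vibration-like signal by least squares, the kind of lightweight system identification a sensor node can run near the data. The signal and model order are illustrative; this is not the thesis's monitoring pipeline.

```python
# Minimal sketch: least-squares AR(p) identification of a vibration-like signal,
# a lightweight near-sensor computation (illustrative data, not the thesis code).
import numpy as np

rng = np.random.default_rng(2)
fs, f0 = 200.0, 12.0                         # sampling rate and modal frequency (Hz)
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)

p = 4                                        # AR model order
# Build the regression x[n] = a1*x[n-1] + ... + ap*x[n-p] + e[n]
X = np.column_stack([x[p - k - 1:-k - 1] for k in range(p)])
y = x[p:]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

residual = y - X @ coeffs
print("AR coefficients:", np.round(coeffs, 3))
print("residual RMS:", float(np.sqrt(np.mean(residual**2))))
```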
Abstract:
This thesis is based on two studies, related to floating wave energy conversion (WEC) devices and to turbulent fountains. The ability of the open-source CFD software OpenFOAM® to simulate these phenomena has been studied, and the CFD models have been compared with physical experimental results. The first study presents a model of a WEC device, called MoonWEC, which is patented by the University of Bologna. The CFD model of the MoonWEC under the action of waves has been simulated using OpenFOAM and the results are promising. The reliability of the CFD model is confirmed by laboratory experiments, conducted at the University of Bologna, for which a small-scale prototype of the MoonWEC was made from wood and brass. The second part of the thesis concerns turbulent fountains, which are formed when a heavier source fluid is injected upward into a lighter ambient fluid, or a lighter source fluid is injected downward into a heavier ambient fluid. For this study, the first case was considered for the laboratory experiments and the corresponding CFD model. Vertical releases of source fluids of different densities into a quiescent, uniform ambient fluid, from a circular source, were studied in laboratory experiments conducted at the University of Parma, and the CFD model was set up to reproduce these experiments. Favourable results have been obtained from the OpenFOAM simulations for the turbulent fountains as well, indicating that OpenFOAM can be a reliable tool for the simulation of such phenomena.
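A turbulent fountain of the kind studied here is conventionally characterized by the densimetric source Froude number; the small sketch below computes it for invented source conditions, just to fix the quantity that experiments and CFD runs typically vary. This is standard textbook material rather than anything specific to the thesis.

```python
# Minimal sketch: densimetric Froude number of a fountain source,
# Fr = U / sqrt(g' * D), with g' = g * (rho_source - rho_ambient) / rho_ambient.
# Source conditions below are invented for illustration.
import math

g = 9.81                 # m/s^2
rho_ambient = 998.0      # kg/m^3 (fresh water)
rho_source = 1025.0      # kg/m^3 (salty, heavier source fluid)
D = 0.01                 # source diameter (m)
Q = 2.0e-5               # source flow rate (m^3/s)

U = Q / (math.pi * D**2 / 4)                              # mean exit velocity
g_prime = g * (rho_source - rho_ambient) / rho_ambient    # reduced gravity
Fr = U / math.sqrt(g_prime * D)
print(f"exit velocity {U:.3f} m/s, reduced gravity {g_prime:.3f} m/s^2, Fr = {Fr:.1f}")
```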
Abstract:
Isolated DC-DC converters play a significant role in fast charging and in maintaining the variable output voltage required for EV applications. This study aims to investigate different isolated DC-DC converters for onboard and offboard chargers; then, once the topology is selected, to study the control techniques and, finally, to achieve a real-time converter model to accomplish Hardware-In-The-Loop (HIL) results. Among the different isolated DC-DC topologies, the Dual Active Bridge (DAB) converter has the advantage of allowing bidirectional power flow, which enables operation in both Grid-to-Vehicle (G2V) and Vehicle-to-Grid (V2G) modes. Recently, the DAB has been used in offboard chargers for high-voltage applications thanks to SiC and GaN MOSFETs; this new technology also allows the use of higher switching frequencies. By exploiting soft-switching techniques to reduce switching losses, higher switching-frequency operation is possible in the DAB. There are four phase-shift control techniques for the DAB converter: Single Phase Shift, Extended Phase Shift, Dual Phase Shift and Triple Phase Shift control. This thesis considers two control strategies, Single Phase Shift and Dual Phase Shift, to understand the circulating currents, power losses and output capacitor size reduction in the DAB. Hardware-In-The-Loop experiments are carried out for both controls at high switching frequencies using the PLECS software tool and the RT Box, which supports PLECS. The Root Mean Square Error of the steady-state output voltage is also calculated for different sampling frequencies in both controls, to identify the sampling frequency achievable in real time. A DSP implementation is also carried out to emulate the optimized DAB converter design, and the final real-time simulation results are discussed for both the Single Phase Shift and Dual Phase Shift controls.
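For reference, the sketch below evaluates the widely used ideal single-phase-shift power-transfer expression for a DAB, P = n*V1*V2*phi*(1 - |phi|/pi) / (2*pi*fs*L), over a range of phase shifts. Component values are invented; the thesis's converter design and HIL parameters are not reproduced.

```python
# Minimal sketch: ideal single-phase-shift (SPS) power transfer of a DAB,
# P = n*V1*V2*phi*(1 - |phi|/pi) / (2*pi*fs*L). Component values are invented.
import math

def dab_sps_power(phi, v1=400.0, v2=350.0, n=1.0, fs=100e3, L=30e-6):
    """Transferred power (W) for a phase shift phi in radians (-pi/2..pi/2)."""
    return n * v1 * v2 * phi * (1 - abs(phi) / math.pi) / (2 * math.pi * fs * L)

for deg in (15, 30, 45, 60, 90):
    phi = math.radians(deg)
    print(f"phase shift {deg:3d} deg -> {dab_sps_power(phi) / 1e3:6.2f} kW")
```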
Abstract:
The availability of a huge amount of source code from code archives and open-source projects opens up the possibility of merging the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code, where programming languages are treated much like natural languages and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among all the possible applications which can be developed within the area of Big Code, the work presented in this research thesis focuses mainly on two particular tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension and is usually performed directly by developers. However, at large scale, such as in widely used archives (GitHub, Software Heritage), automation of this task is desirable. To accomplish this aim, the problem is analyzed from different points of view (text- and image-based learning approaches) and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were searched for by manual inspection or using automatic static and dynamic analyzers. Now, the automation of this task can be tackled using learning approaches that can speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the commonest bugs and errors at different code granularity levels (file and method levels). The exploited data and the models' architectures are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, while differences and similarities with respect to other related works are discussed.
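As a toy illustration of text-based programming language identification, the sketch below trains a character n-gram Naive Bayes classifier with scikit-learn on a handful of snippets. The training data and model choice are illustrative only; the thesis's PLI models and their scalability work go far beyond this.

```python
# Minimal sketch: programming language identification from source text with
# character n-grams + Naive Bayes (toy data; not the thesis's PLI models).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

snippets = [
    ("def add(a, b):\n    return a + b", "Python"),
    ("printf(\"%d\\n\", x);", "C"),
    ("public static void main(String[] args) {}", "Java"),
    ("import numpy as np\nx = np.zeros(3)", "Python"),
    ("#include <stdio.h>\nint main(void) { return 0; }", "C"),
    ("System.out.println(\"hi\");", "Java"),
]
texts, labels = zip(*snippets)

clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 4)),
    MultinomialNB(),
)
clf.fit(texts, labels)
print(clf.predict(["for i in range(10):\n    print(i)"]))  # likely ['Python']
```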
Abstract:
Agricultural techniques have been improved over the centuries to keep pace with the demand of a growing global population. Farming applications face new challenges in satisfying global needs, and recent technological advancements in robotic platforms can be exploited to meet them. Orchard management is one of the most challenging applications because of the tree structure and the required interaction with the environment; it was therefore targeted by the University of Bologna research group to provide a customized solution embodying a new concept for agricultural vehicles. This research has blossomed into a new lightweight tracked vehicle capable of performing autonomous navigation both in the open-field scenario and while travelling inside orchards, in what has been called in-row navigation. The mechanical design concept, together with the customized software implementation, is detailed to highlight the strengths of the platform, along with some further improvements envisioned to increase the overall performance. Static stability testing has proved that the vehicle can withstand steep-slope scenarios. Improvements have also been investigated to refine the estimation of the slippage that occurs during turning maneuvers and that is typical of skid-steering tracked vehicles. The software architecture has been implemented using the Robot Operating System (ROS) framework, so as to exploit community-available packages for common and basic functions, such as sensor interfaces, while allowing a dedicated custom implementation of the navigation algorithm developed. Real-world testing inside the university's experimental orchards has proven the robustness and stability of the solution, with more than 800 hours of fieldwork. The vehicle has also enabled a wide range of autonomous tasks such as spraying, mowing, and on-the-field data collection. The latter can be exploited to automatically estimate relevant orchard properties such as fruit counting and sizing, canopy property estimation, and autonomous fruit harvesting with post-harvest estimations.
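To make the slippage remark concrete, the sketch below shows a standard skid-steering odometry update in which the commanded angular rate is scaled by an estimated slip coefficient before integrating the pose. The kinematic form and coefficient are generic assumptions, not the vehicle's actual ROS navigation code.

```python
# Minimal sketch: skid-steering odometry with a slip correction on the yaw rate
# (generic kinematics with assumed parameters, not the vehicle's ROS code).
import math

def update_pose(x, y, theta, v_left, v_right, track_width, dt, slip=0.3):
    """Integrate planar pose from track speeds; slip reduces the effective yaw rate."""
    v = (v_right + v_left) / 2.0                       # forward speed
    omega = (v_right - v_left) / track_width           # ideal yaw rate
    omega *= (1.0 - slip)                              # slip-corrected yaw rate
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Toy usage: a gentle left turn for 5 s at 50 Hz.
pose = (0.0, 0.0, 0.0)
for _ in range(250):
    pose = update_pose(*pose, v_left=0.4, v_right=0.5, track_width=0.8, dt=0.02)
print(tuple(round(p, 2) for p in pose))
```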
Abstract:
In this thesis, the study and simulation of two advanced sensorless speed control techniques for a surface PMSM are presented. The aim is to implement a sensorless control algorithm for a submarine auxiliary propulsion system. This experimental activity is the result of a project collaboration with L3Harris Calzoni, a leading company in A&D naval handling systems for the military field. A Simulink model of the whole electric drive has been developed. Given the satisfactory results of the simulations, the sensorless control system has been implemented in C code for the STM32 environment. Finally, several tests on a real brushless machine have been carried out while the motor was connected to a mechanical load, to simulate the real scenario of the final application. All the experimental results have been recorded through a graphical interface software developed at Calzoni.
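The specific sensorless techniques studied are not named in the abstract; purely for orientation, the sketch below shows the skeleton of a phase-locked loop that tracks electrical angle and speed from back-EMF-like components, a building block common to many sensorless schemes. Gains and signals are invented, and this is neither the thesis's algorithm nor the Calzoni C implementation.

```python
# Minimal sketch: a PLL that tracks electrical angle/speed from back-EMF-like
# alpha/beta components (generic building block, not the thesis's algorithm).
import math

def pll_track(true_speed=100.0, dt=1e-4, steps=5000, kp=200.0, ki=20000.0):
    theta_hat, omega_hat, integ = 0.0, 0.0, 0.0
    theta = 0.0
    for _ in range(steps):
        theta += true_speed * dt                       # source of the back-EMF angle
        e_alpha, e_beta = -math.sin(theta), math.cos(theta)   # normalized back-EMF
        # Phase detector: error ~ sin(theta - theta_hat)
        err = -e_alpha * math.cos(theta_hat) - e_beta * math.sin(theta_hat)
        integ += ki * err * dt                         # PI loop filter (integral part)
        omega_hat = kp * err + integ
        theta_hat += omega_hat * dt
    return omega_hat

print(f"estimated speed: {pll_track():.1f} rad/s (true 100.0)")
```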
Abstract:
The BP (Bundle Protocol) version 7 has been recently standardized by IETF in RFC 9171, but it is the whole DTN (Delay-/Disruption-Tolerant Networking) architecture, of which BP is the core, that is gaining a renewed interest, thanks to its planned adoption in future space missions. This is obviously positive, but at the same time it seems to make space agencies more interested in deployment than in research, with new BP implementations that may challenge the central role played until now by the historical BP reference implementations, such as ION and DTNME. To make Unibo research on DTN independent of space agency decisions, the development of an internal BP implementation was in order. This is the goal of this thesis, which deals with the design and implementation of Unibo-BP: a novel, research-driven BP implementation, to be released as Free Software. Unibo-BP is fully compliant with RFC 9171, as demonstrated by a series of interoperability tests with ION and DTNME, and presents a few innovations, such as the ability to manage remote DTN nodes by means of the BP itself. Unibo-BP is compatible with pre-existing Unibo implementations of CGR (Contact Graph Routing) and LTP (Licklider Transmission Protocol) thanks to interfaces designed during the thesis. The thesis project also includes an implementation of TCPCLv3 (TCP Convergence Layer version 3, RFC 7242), which can be used as an alternative to LTPCL to connect with proximate nodes, especially in terrestrial networks. Summarizing, Unibo-BP is at the heart of a larger project, Unibo-DTN, which aims to implement the main components of a complete DTN stack (BP, TCPCL, LTP, CGR). Moreover, Unibo-BP is compatible with all DTNsuite applications, thanks to an extension of the Unified API library on which DTNsuite applications are based. The hope is that Unibo-BP and all the ancillary programs developed during this thesis will contribute to the growth of DTN popularity in academia and among space agencies.
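For a flavour of what a BPv7 implementation handles, the sketch below encodes a simplified primary-block-like structure as CBOR with the cbor2 library. The field layout follows one reading of RFC 9171, omits CRCs and extension blocks, and is illustrative only; it is not Unibo-BP code nor a compliant bundle.

```python
# Illustrative sketch: CBOR-encode a simplified BPv7 primary-block-like array
# (field order per a reading of RFC 9171, no CRC; not a compliant bundle and
# not Unibo-BP code).
import cbor2

DTN_NONE = [1, 0]                 # dtn:none endpoint
destination = [2, [10, 1]]        # ipn:10.1 -> [scheme 2 (ipn), [node, service]]
source = [2, [20, 1]]             # ipn:20.1

primary_block = [
    7,                            # protocol version
    0,                            # bundle processing control flags
    0,                            # CRC type: none (for this illustration)
    destination,
    source,
    DTN_NONE,                     # report-to endpoint
    [0, 42],                      # creation timestamp: [DTN time, sequence number]
    3600 * 1000,                  # lifetime in milliseconds
]

encoded = cbor2.dumps(primary_block)
print(len(encoded), "bytes:", encoded.hex())
print(cbor2.loads(encoded))       # round-trip back to the Python structure
```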