28 results for automated full waveform logging system
Abstract:
The purpose of this bachelor's thesis is the development of an online community. Nowadays the Internet lets users collaborate and share information online. The Internet is also full of communities, and the number of community users is continuously rising. Companies have noticed this as well and want to make use of it. The result of the work was an online community for the use of the PROFCOM research project. At the same time, information was gathered about what kinds of platforms are available as a backbone for an online community. Designing and developing the online community provided experience with the Drupal environment and revealed the pros and cons of Drupal's features. Drupal is multifunctional software that can handle large online communities, yet its installation and maintenance are reasonably simple.
Abstract:
During the past decades, testing has matured from an ad-hoc activity into an integral part of the development process. The benefits of testing are obvious for modern communication systems, which operate in heterogeneous environments among devices from various manufacturers. The increased demand for testing also creates demand for tools and technologies that support and automate testing activities. This thesis discusses the applicability of visualization techniques in the result analysis part of the testing process. In particular, the primary focus of this work is the visualization of test execution logs produced by a TTCN-3 test system. TTCN-3 is an internationally standardized test specification and implementation language. The TTCN-3 standard suite includes a specification of a test logging interface and a graphical presentation format, but no immediate relationship between them. This thesis presents a technique for mapping the log events to the graphical presentation format, along with a concrete implementation integrated with the Eclipse Platform and the OpenTTCN Tester toolchain. The results of this work indicate that for the majority of the log events, a visual representation may be derived from the TTCN-3 standard suite. The remaining events were analysed, and three categories relevant to either log analysis or the implementation of the visualization tool were identified: events indicating insertion of something into the incoming queue of a port, events indicating a mismatch, and events describing the control flow during the execution. The applicability of the results is limited to the domain of TTCN-3, but the developed mapping and the implementation may be utilized with any TTCN-3 tool that can produce the execution log in the standardized XML format.
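The event-to-category mapping this abstract describes can be illustrated with a minimal sketch. The XML element names, attributes, and classification rules below are invented placeholders, not the actual standardized log schema or the thesis's mapping; only the three special event categories are taken from the abstract.

```python
# Hypothetical sketch: classifying TTCN-3 XML log events into
# presentation categories. Element/attribute names are assumptions.
import xml.etree.ElementTree as ET

SAMPLE_LOG = """
<log>
  <event kind="portEnqueue" port="p1"/>
  <event kind="templateMismatch" port="p1"/>
  <event kind="altStep" name="main"/>
  <event kind="send" port="p2"/>
</log>
"""

# The three special categories identified in the analysis: insertion
# into a port's incoming queue, mismatches, and control-flow events.
CATEGORY_RULES = {
    "portEnqueue": "incoming-queue",
    "templateMismatch": "mismatch",
    "altStep": "control-flow",
}

def map_events(xml_text):
    """Yield (event kind, presentation category) pairs."""
    root = ET.fromstring(xml_text)
    for event in root.iter("event"):
        kind = event.get("kind")
        # Events without a special rule get a direct graphical symbol
        # derived from the graphical presentation format.
        yield kind, CATEGORY_RULES.get(kind, "direct-symbol")

if __name__ == "__main__":
    for kind, category in map_events(SAMPLE_LOG):
        print(f"{kind:18} -> {category}")
```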
Abstract:
Forest inventories are used to estimate forest characteristics and the condition of forests for many different applications: operational tree logging for the forest industry, forest health estimation, carbon balance estimation, land-cover and land-use analysis to avoid forest degradation, and so on. Recent inventory methods are strongly based on remote sensing data combined with field sample measurements, which are used to derive estimates covering the whole area of interest. Remote sensing data from satellites, aerial photographs, or aerial laser scanning are used, depending on the scale of the inventory. To be applicable in operational use, forest inventory methods need to be easily adjusted to the local conditions of the study area at hand. All data handling and parameter tuning should be objective and automated as much as possible. The methods also need to be robust when applied to different forest types. Since there are generally no extensive direct physical models connecting the remote sensing data from different sources to the forest parameters being estimated, the mathematical estimation models are of the "black-box" type, connecting the independent auxiliary data to the dependent response data with arbitrary linear or nonlinear models. To avoid redundant complexity and over-fitting of the model, which is based on up to hundreds of possibly collinear variables extracted from the auxiliary data, variable selection is needed. To connect the auxiliary data to the inventory parameters being estimated, field work must be performed. In larger study areas with dense forests, field work is expensive and should therefore be minimized. To obtain cost-efficient inventories, field work could be partly replaced with information from previously measured sites stored in databases. The work in this thesis is devoted to the development of automated, adaptive computation methods for aerial forest inventory. The mathematical model parameter definition steps are automated, and cost-efficiency is improved by setting up a procedure that utilizes databases in the estimation of new area characteristics.
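A minimal sketch of the variable-selection problem the abstract motivates is given below: greedy forward selection with cross-validation on synthetic, partly collinear features. The selection strategy and the data are illustrative assumptions; the abstract does not specify which selection procedure the thesis adopts.

```python
# Sketch of automated variable selection for a "black-box" regression
# model with collinear candidate features. Data and method assumed.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 200, 40                                  # plots x candidate features
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)   # deliberately collinear
y = 2.0 * X[:, 0] - 1.0 * X[:, 5] + rng.normal(scale=0.5, size=n)

def forward_select(X, y, max_vars=5):
    """Greedily add the variable that most improves the CV score."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = -np.inf
    while remaining and len(selected) < max_vars:
        scores = {
            j: cross_val_score(LinearRegression(),
                               X[:, selected + [j]], y, cv=5).mean()
            for j in remaining
        }
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best_score:
            break                # no improvement: stop to avoid over-fitting
        best_score = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    return selected, best_score

vars_, score = forward_select(X, y)
print("selected variables:", vars_, "CV R^2: %.3f" % score)
```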
Abstract:
This master's thesis focuses on the commissioning of an active magnetic bearing (AMB) system. The scope of the work is to test the existing procedures with old and new prototypes of an AMB system and, in addition, to automate the necessary steps instead of tuning them by hand, because determining rotor clearances and finding effective rotor origins is time consuming and error-prone. The final goal is a documented and largely automated step-by-step methodology for efficient commissioning of the system.
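One of the steps named above, determining rotor clearances and the effective rotor origin, can be sketched as follows. The procedure assumed here (push the rotor against the touchdown bearing in each direction and record the position sensor) and the numbers are illustrative; the thesis's actual method is not given in the abstract.

```python
# Illustrative sketch of automating rotor-clearance and rotor-origin
# determination from touchdown-extreme sensor readings (assumed procedure).

def clearance_and_origin(readings):
    """readings: {axis: (reading at -touchdown, reading at +touchdown)}.
    Returns per-axis clearance (half the travel) and the effective
    rotor origin (midpoint between the two touchdown positions)."""
    result = {}
    for axis, (lo, hi) in readings.items():
        result[axis] = {
            "clearance": (hi - lo) / 2.0,
            "origin": (hi + lo) / 2.0,
        }
    return result

# Example: position-sensor readings in micrometers at both extremes.
measured = {"x": (-212.0, 188.0), "y": (-195.0, 205.0)}
for axis, vals in clearance_and_origin(measured).items():
    print(axis, vals)
```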
Abstract:
The problem of software (SW) defects is becoming more and more topical because of the increasing amount of software and its growing complexity. The majority of these defects are found during testing, which consumes about 40-50% of the development effort. Test automation reduces the cost of this process and increases testing effectiveness. In the mid-1980s the first tools for automated testing appeared, and automation was applied to different kinds of software testing. In a short time it became obvious that automated testing can cause many problems, such as increased product cost, decreased reliability, and even project failure. This thesis describes the automated testing process and its concepts, lists the main problems, and gives an algorithm for selecting automated test tools. The work also presents an overview of the main automated test tools for embedded systems.
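A tool-selection algorithm of the kind the abstract mentions could take the shape of a weighted-criteria ranking, sketched below. The criteria, weights, and tool scores are invented placeholders, not the thesis's actual algorithm.

```python
# Hypothetical weighted-criteria ranking for automated test tool
# selection. All names and numbers below are illustrative assumptions.

CRITERIA_WEIGHTS = {          # relative importance, summing to 1.0
    "embedded_support": 0.35,
    "cost": 0.25,
    "reliability": 0.25,
    "learning_curve": 0.15,
}

TOOLS = {                     # per-criterion scores on a 0..10 scale
    "ToolA": {"embedded_support": 9, "cost": 4,
              "reliability": 8, "learning_curve": 5},
    "ToolB": {"embedded_support": 6, "cost": 8,
              "reliability": 7, "learning_curve": 8},
}

def rank_tools(tools, weights):
    """Score each tool as the weighted sum of its criterion scores."""
    scored = {
        name: sum(weights[c] * s for c, s in scores.items())
        for name, scores in tools.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank_tools(TOOLS, CRITERIA_WEIGHTS):
    print(f"{name}: {score:.2f}")
```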
Abstract:
The general trend towards increasing efficiency and energy density drives the industry to high-speed technologies. Active Magnetic Bearings (AMBs) are one of the technologies that allow contactless support of a rotating body. Theoretically, there are no limitations on the rotational speed. The absence of friction, low maintenance cost, micrometer precision, and programmable stiffness have made AMBs a viable choice for demanding applications. Along with the advances in power electronics, such as significantly improved reliability and cost, AMB systems have gained wide adoption in the industry. The AMB system is a complex, open-loop unstable system with multiple inputs and outputs. For normal operation, such a system requires feedback control. To meet the high demands for performance and robustness, model-based control techniques should be applied. These techniques require an accurate plant model description and uncertainty estimations. The advanced control methods require more effort at the commissioning stage. In this work, a methodology is developed for the automatic commissioning of a subcritical, rigid gas blower machine. The commissioning process includes open-loop tuning of separate parts such as sensors and actuators. The next step is to apply a system identification procedure to obtain a model for the controller synthesis. Finally, a robust model-based controller is synthesized and experimentally evaluated over the full operating range of the system. The commissioning procedure is developed using only the available system components and a priori knowledge, without any additional hardware. Thus, the work provides an intelligent system with a self-diagnostics feature and automatic commissioning.
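The system-identification step in such a commissioning sequence can be sketched as a nonparametric frequency-response estimate from an injected excitation. The simulated second-order plant, the broadband excitation, and the Welch/CSD estimator are illustrative assumptions, not the thesis's actual identification procedure.

```python
# Sketch: estimate a plant frequency response from excitation u and
# measured output y, G(f) = S_uy(f) / S_uu(f). Plant and signals assumed.
import numpy as np
from scipy import signal

fs = 10_000                                        # sample rate [Hz]
t = np.arange(0, 5.0, 1 / fs)
u = np.random.default_rng(1).normal(size=t.size)   # broadband excitation

# Simulated "unknown" plant: a lightly damped second-order system.
plant = signal.TransferFunction([1.0], [1.0, 20.0, (2 * np.pi * 150) ** 2])
_, y, _ = signal.lsim(plant, u, t)
y += 1e-6 * np.random.default_rng(2).normal(size=t.size)  # sensor noise

# Nonparametric frequency-response estimate via averaged spectra.
f, Puu = signal.welch(u, fs=fs, nperseg=4096)
_, Puy = signal.csd(u, y, fs=fs, nperseg=4096)
G = Puy / Puu

peak = f[np.argmax(np.abs(G))]
print(f"estimated resonance near {peak:.0f} Hz")   # ~150 Hz expected
```

The fitted model would then feed the controller synthesis; the sketch stops at the nonparametric estimate.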
Abstract:
In this work, the implementation of an active magnetic bearing control system in a single FPGA is studied. Requirements for the full magnetic bearing control system are reviewed. Different control methods for active magnetic bearings are described briefly. Flux-based and current-based controllers are implemented in an FPGA, and their suitability for a low-cost magnetic bearing application is studied. Floating-point arithmetic is used in the controllers to ease the design burden and improve calculation precision. The performance of the flux controller is verified with simulations.
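As a rough illustration, one sample of a current-based position controller might look like the Python sketch below (the real implementation runs as floating-point logic in the FPGA). The PID structure and the gains are assumptions; the abstract does not give the controllers' exact form.

```python
# Illustrative one-step update of a current-based AMB position
# controller (PID form assumed; gains and sample time invented).

class CurrentController:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, position_ref, position_meas):
        """One control step: position error in -> coil current reference out."""
        error = position_ref - position_meas
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

ctrl = CurrentController(kp=8000.0, ki=2000.0, kd=20.0, dt=1e-4)
print(ctrl.update(position_ref=0.0, position_meas=5e-6))  # 5 um offset
```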
Abstract:
The importance of efficient supply chain management has increased due to globalization and the blurring of organizational boundaries. Various supply chain management technologies have been identified as drivers of organizational profitability and financial performance. Organizations have historically concentrated heavily on the flow of goods and services, while less attention has been dedicated to the flow of money. As supply chains become more transparent and automated, new opportunities for financial supply chain management have emerged through information technology solutions and comprehensive financial supply chain management strategies. This research concentrates on the end part of the purchasing process: the handling of invoices. Efficient invoice processing can have an impact on an organization's working capital management and thus give companies better readiness to face the challenges related to cash management. Leveraging a process mining solution, the aim of this research was to examine the automated invoice handling process of four different organizations. The invoice data was collected from each organization's invoice processing system. The sample included all the invoices the organizations had processed during the year 2012. The main objective was to find out whether e-invoices are faster to process in an automated invoice processing solution than scanned invoices (after entry into the invoice processing solution). Other objectives included examining the longest lead times between process steps and the impact of manual process steps on cycle time. The processing of invoices from maverick purchases was also examined. Based on the results of the research and previous literature on the subject, suggestions for improving the process were proposed. The results indicate that scanned invoices were processed faster than e-invoices, mostly due to the more complex processing of e-invoices. It should be noted, however, that the manual tasks related to turning a paper invoice into electronic format through scanning are ignored in this research. The transitions with the longest lead times in the invoice handling process included both pre-automated steps and manual steps performed by humans. When the most common manual steps were examined in more detail, it was clear that these steps prolonged the process. Regarding invoices from maverick purchases, the evidence shows that these invoices were slower to process than invoices from purchases conducted through e-procurement systems and from preferred suppliers. Suggestions for improving the process included increasing invoice matching, reducing manual steps, and leveraging different value-added services such as an invoice validation service, mobile solutions, and supply chain financing services. For companies that have already reaped all the process efficiencies, the next step is to engage in collaborative financial supply chain management strategies that can benefit the whole supply chain.
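The lead-time analysis described here can be illustrated with a small event-log sketch. The column names, step names, and sample events are invented; the actual process-mining solution and the organizations' data are not specified in the abstract.

```python
# Minimal sketch of transition lead-time analysis on an invoice event
# log, as a process-mining tool would compute it. Data is invented.
import pandas as pd

events = pd.DataFrame(
    [
        ("INV-1", "Received", "2012-03-01 09:00"),
        ("INV-1", "Matched",  "2012-03-01 09:05"),
        ("INV-1", "Approved", "2012-03-06 14:30"),   # manual step: slow
        ("INV-2", "Received", "2012-03-02 11:00"),
        ("INV-2", "Matched",  "2012-03-02 11:02"),
        ("INV-2", "Approved", "2012-03-03 08:45"),
    ],
    columns=["invoice", "step", "timestamp"],
)
events["timestamp"] = pd.to_datetime(events["timestamp"])
events = events.sort_values(["invoice", "timestamp"])

# Lead time of each transition = gap to the previous step of the invoice.
events["prev_step"] = events.groupby("invoice")["step"].shift()
events["lead_time"] = events.groupby("invoice")["timestamp"].diff()

transitions = (
    events.dropna(subset=["prev_step"])
          .groupby(["prev_step", "step"])["lead_time"]
          .mean()
          .sort_values(ascending=False)
)
print(transitions)   # slowest transitions first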
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Biomedical natural language processing (BioNLP) is a subfield of natural language processing, an area of computational linguistics concerned with developing programs that work with natural language: written texts and speech. Biomedical relation extraction concerns the detection of semantic relations, such as protein-protein interactions (PPI), from scientific texts. The aim is to enhance information retrieval by detecting relations between concepts, not just individual concepts as with a keyword search. In recent years, events have been proposed as a more detailed alternative to simple pairwise PPI relations. Events provide a systematic, structural representation for annotating the content of natural language texts. Events are characterized by annotated trigger words, directed and typed arguments, and the ability to nest other events. For example, the sentence “Protein A causes protein B to bind protein C” can be annotated with the nested event structure CAUSE(A, BIND(B, C)). Converted to such formal representations, the information in natural language texts can be used by computational applications. Biomedical event annotations were introduced by the BioInfer and GENIA corpora, and event extraction was popularized by the BioNLP'09 Shared Task on Event Extraction. In this thesis we present a method for automated event extraction, implemented as the Turku Event Extraction System (TEES). A unified graph format is defined for representing event annotations, and the problem of extracting complex event structures is decomposed into a number of independent classification tasks. These classification tasks are solved using SVM and RLS classifiers, utilizing rich feature representations built from full dependency parsing. Building on earlier work on pairwise relation extraction and using a generalized graph representation, the resulting TEES system is capable of detecting binary relations as well as complex event structures. We show that this event extraction system performs well, reaching first place in the BioNLP'09 Shared Task on Event Extraction. Subsequently, TEES has achieved several first ranks in the BioNLP'11 and BioNLP'13 Shared Tasks, as well as showing competitive performance in the binary-relation Drug-Drug Interaction Extraction 2011 and 2013 shared tasks. The Turku Event Extraction System is published as a freely available open-source project, documenting the research in detail as well as making the method available for practical applications. In particular, in this thesis we describe the application of the event extraction method to PubMed-scale text mining, showing that the developed approach not only performs well but is also generalizable and applicable to large-scale real-world text mining projects. Finally, we discuss related literature, summarize the contributions of the work, and present some thoughts on future directions for biomedical event extraction. This thesis includes and builds on six original research publications. The first of these introduces the analysis of dependency parses that led to the development of TEES. The entries in the three BioNLP Shared Tasks, as well as in the DDIExtraction 2011 task, are covered in four publications, and the sixth demonstrates the application of the system to PubMed-scale text mining.
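The decomposition described above, splitting complex event extraction into independent classification tasks, can be sketched in toy form: trigger detection followed by argument-edge classification, both with linear SVMs. The features and training examples below are placeholders; TEES builds its rich feature representations from full dependency parses.

```python
# Toy sketch of two-stage event extraction: (1) trigger classification,
# (2) (trigger, argument) edge classification. Data is invented.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# Step 1: classify each token as an event trigger type (or none).
trigger_X = [{"token": "causes"}, {"token": "bind"}, {"token": "protein"}]
trigger_y = ["CAUSE", "BIND", "NONE"]

# Step 2: classify candidate (trigger, argument) edges of the graph.
edge_X = [
    {"trigger": "CAUSE", "arg": "BIND",    "dep_path": "dobj>xcomp"},
    {"trigger": "BIND",  "arg": "PROTEIN", "dep_path": "dobj"},
    {"trigger": "BIND",  "arg": "PROTEIN", "dep_path": "nsubj"},
]
edge_y = ["Cause", "Theme", "Theme2"]

def train(X_dicts, y):
    """Vectorize sparse feature dicts and fit a linear SVM."""
    vec = DictVectorizer()
    clf = LinearSVC()
    clf.fit(vec.fit_transform(X_dicts), y)
    return vec, clf

trig_vec, trig_clf = train(trigger_X, trigger_y)
edge_vec, edge_clf = train(edge_X, edge_y)
print(trig_clf.predict(trig_vec.transform([{"token": "bind"}])))
```

Chaining the two stages and assembling the predicted edges back into nested structures such as CAUSE(A, BIND(B, C)) is what yields the final event graph.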
Abstract:
In this study, an infrared-thermography-based sensor was evaluated with regard to usability and the accuracy of its data as a weld penetration signal in gas metal arc welding. The object of the study was a specific sensor type that measures thermography from the solidified weld surface. The purpose of the study was to provide expert data for developing a sensor system for adaptive metal active gas (MAG) welding. Welding experiments with the considered process variables, together with the recorded thermal profiles, were saved to a database for further analysis. To perform the analysis with a reasonable number of experiments, the process parameter variables were gradually altered by at least 10 %. The effects of the process variables on weld penetration and on the thermography itself were then considered. The SFS-EN ISO 5817 (2014) standard was applied to classify the quality of the experiments. As a final step, a neural network was trained on the experiments. The experiments show that the studied thermography sensor and the neural network can be used to control full penetration, though they have minor limitations, which are presented in the results and discussion. The results are consistent with previous studies and experiments found in the literature.
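The final step, teaching a neural network on the experiment database, can be sketched as below. The synthetic features (peak temperature, cooling rate), the labels, and the network size are assumptions standing in for the recorded thermal profiles and quality classifications.

```python
# Hedged sketch: small neural network classifying full penetration from
# thermal-profile features. Data and feature choice are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 300
peak_temp = rng.normal(1300, 120, n)        # [deg C], illustrative
cooling_rate = rng.normal(35, 8, n)         # [deg C / s], illustrative
X = np.column_stack([peak_temp, cooling_rate])
# Assume hotter, slower-cooling welds tend to be fully penetrated.
y = (peak_temp + 5 * cooling_rate + rng.normal(0, 60, n) > 1450).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = make_pipeline(
    StandardScaler(),                       # scale features for the MLP
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
net.fit(X_tr, y_tr)
print("held-out accuracy: %.2f" % net.score(X_te, y_te))
```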
Abstract:
At present, in large precast concrete enterprises, the management of precast concrete components has been chaotic. Most enterprises use a labor-intensive manual input method, which is time consuming and error-prone. Slightly better enterprises manage components with bar codes or manually printed serial numbers. However, this is also labor-intensive, and the method is limited by the external environment, which can blur or even erase the serial number, causing major problems for production traceability and quality accountability. Therefore, achieving automated production management is a key challenge for a modern enterprise that wants to develop rapidly and meet the needs of the times. To solve the problems of production inefficiency and product traceability, this thesis introduces RFID technology into the production of PHC tubular piles. By designing a production management system for precast concrete components, the enterprise can control the entire production process and computerize its production management. RFID technology has been widely used in many fields, such as access control, fee collection, and logistics. The system adopts passive RFID tags, which are waterproof, shockproof, and interference-resistant, and therefore suitable for the actual working environment. Each tag is bound to the steel cage of a precast component (the structure of the PHC tubular pile before concrete placement), so each pile has a unique ID number. The component then passes through a series of production steps: placing the steel cage into the mold, mold clamping, concrete pouring, stretching, centrifugalizing, maintenance, mold removal, and welding of splices. At every step of the procedure, the information of the precast component can be read with an RFID reader. Using a portable smart device connected to the database, users can conveniently check, query, and manage the production information. The system can also trace the production parameters and the person in charge, realizing full traceability of the information. The system overcomes the drawbacks found in precast component manufacturing, such as inefficiency, error-proneness, long processing times, high labor intensity, and poor information linkage. It improves production management efficiency and can produce good economic and social benefits, so it has real practical value.
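The traceability core of such a system can be sketched as a log of RFID reads keyed by the tag's unique ID. The database schema and operator identifiers are assumptions; the step names follow the procedure listed in the abstract.

```python
# Minimal sketch: every RFID read at a production step is stored
# against the tag ID, so a pile's full history can be queried later.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE production_log (
    tag_id TEXT, step TEXT, operator TEXT, read_at TEXT)""")

def record_read(tag_id, step, operator):
    """Called whenever the RFID reader sees a tag at a workstation."""
    conn.execute(
        "INSERT INTO production_log VALUES (?, ?, ?, ?)",
        (tag_id, step, operator, datetime.now(timezone.utc).isoformat()),
    )

STEPS = ["cage into mold", "mold clamping", "concrete pouring",
         "stretching", "centrifugalizing", "maintenance",
         "mold removal", "welding splice"]
for step in STEPS:
    record_read("PHC-000123", step, operator="worker-07")

# Trace one pile's history, e.g. for quality accountability.
for row in conn.execute(
        "SELECT step, operator, read_at FROM production_log "
        "WHERE tag_id = ? ORDER BY read_at", ("PHC-000123",)):
    print(row)
```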
Abstract:
Various environmental management systems, standards, and tools are being created to assist companies in becoming more environmentally friendly. However, not all enterprises have adopted environmental policies on the same scale and range. Additionally, no existing guide helps them determine their level of environmental responsibility and then supports them in moving towards environmental responsibility excellence. This research proposes the use of a Belief Rule-Based (BRB) approach to assess an enterprise's level of commitment to environmental issues. The Environmental Responsibility BRB assessment system was developed for this research. Participating companies complete a structured questionnaire, and an automated analysis of their responses (using the Belief Rule-Based approach) determines their environmental responsibility level. This is followed by a recommendation on how to progress to the next level. The recommended best practices help promote understanding, increase awareness, and make the organization greener. BRB systems consist of two parts: a knowledge base and an inference engine. The knowledge base in this research was constructed after an in-depth literature review and critical analysis of existing environmental performance assessment models, guided primarily by the EU Draft Background Report on "Best Environmental Management Practice in the Telecommunications and ICT Services Sector". The reasoning algorithm of the selected JBoss Drools inference engine is forward chaining, where inference iteratively searches for pattern matches between the input and the if-then clauses. However, the forward-chaining mechanism is not equipped to handle uncertainty. Therefore, a decision was made to combine evidential reasoning and forward chaining in a hybrid knowledge-representation inference scheme to accommodate imprecise, ambiguous, and fuzzy types of uncertainty. It is believed that such a system generates well-balanced, sensible results adapted to Green ICT readiness, helping enterprises focus on making their business operations more sustainable.
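A much-simplified sketch of belief-rule inference is given below: each rule carries a belief distribution over responsibility levels, rules are activated by how well the questionnaire answers match their antecedents, and the activated beliefs are combined. The rules, weights, and the weighted-sum combination are illustrative simplifications of the evidential reasoning approach, not the system's actual rule base.

```python
# Simplified belief-rule inference sketch. Rules and levels invented.

LEVELS = ["beginner", "intermediate", "advanced"]

RULES = [
    # (antecedent answers, rule weight, belief distribution over LEVELS)
    ({"recycling": "yes", "energy_policy": "no"},  1.0, [0.6, 0.4, 0.0]),
    ({"recycling": "yes", "energy_policy": "yes"}, 1.0, [0.0, 0.5, 0.5]),
]

def match_degree(antecedent, answers):
    """Fraction of antecedent conditions satisfied by the answers."""
    hits = sum(answers.get(k) == v for k, v in antecedent.items())
    return hits / len(antecedent)

def infer(answers):
    """Combine rule beliefs weighted by their activation degrees."""
    combined = [0.0] * len(LEVELS)
    total = 0.0
    for antecedent, weight, belief in RULES:
        w = weight * match_degree(antecedent, answers)
        total += w
        combined = [c + w * b for c, b in zip(combined, belief)]
    if not total:
        return None
    return {lvl: c / total for lvl, c in zip(LEVELS, combined)}

print(infer({"recycling": "yes", "energy_policy": "yes"}))
```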