675 results for "real world mathematics"


Relevance:

90.00%

Publisher:

Abstract:

Due to the ever-increasing transportation of people and goods, automatic traffic surveillance is becoming a key issue both for providing safety to road users and for improving traffic control in an efficient way. In this paper, we propose a new system that, exploiting the capabilities of both computer vision and machine learning, is able to detect and track different types of real incidents on a highway. Specifically, it accurately detects not only stopped vehicles, but also drivers and passengers leaving a stopped vehicle, and other pedestrians present in the roadway. Additionally, a theoretical approach for detecting vehicles that may leave the road in an unexpected way is also presented. The system works in real time and has been optimized for outdoor operation, making it appropriate for deployment in a real-world environment such as a highway. First experimental results on a dataset created from videos provided by two Spanish highway operators demonstrate the effectiveness of the proposed system and its robustness against noise and low-quality videos.

Relevance:

90.00%

Publisher:

Abstract:

Mathematical models of complex reality are texts belonging to a certain literature, written in a semi-formal language, denoted L(MT) by the authors, whose linguistic-mathematical laws have been previously defined. Such a text possesses a linguistic entropy that reflects the physical entropy of the real-world processes the text describes. Through the temperature of information defined by Mandelbrot, the authors begin a text-reality thermodynamic theory that leads to the existence of information attractors, or highly structured points, establishing a heterogeneity of the text space analogous to that of the ontological space. This completes the well-known law of Saint Matthew, from the General Theory of Systems and formulated by Margalef, which says: "To the one that has, more will be given; and from the one that does not have, even the little that he possesses will be taken away."

Relevance:

90.00%

Publisher:

Abstract:

This paper describes an application of decoupled probabilistic world modeling to achieve team planning. The research is based on the principle that the action selection mechanism of a member in a robot team can select an effective action if a global world model is available to all team members. In the real world, the sensors are imprecise and individual to each robot, hence providing each robot a partial and unique view of the environment. We address this problem by creating a probabilistic global view on each agent by combining the perceptual information from each robot. This probabilistic view forms the basis for selecting actions to achieve the team goal in a dynamic environment. Experiments have been carried out to investigate the effectiveness of this principle using custom-built robots for real-world performance, in addition to extensive simulation results. The results show an improvement in team effectiveness when using probabilistic world modeling based on perception sharing for team planning.
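The perception-sharing idea above can be illustrated with a standard Gaussian-fusion rule. The sketch below is hypothetical (the abstract does not specify the paper's probabilistic model): each robot reports an independent, noisy 1-D position estimate, and the estimates are combined by inverse-variance weighting, so more certain robots pull the global estimate harder.

```python
# Hypothetical sketch: fusing independent, noisy 1-D position estimates
# from several robots into one global estimate by inverse-variance
# weighting (a standard Gaussian-fusion rule, not the paper's exact model).

def fuse_estimates(estimates):
    """estimates: list of (mean, variance) pairs, one per robot.
    Returns the fused (mean, variance) of the global estimate."""
    inv_vars = [1.0 / var for _, var in estimates]
    fused_var = 1.0 / sum(inv_vars)
    fused_mean = fused_var * sum(m / v for m, v in estimates)
    return fused_mean, fused_var

# Two robots see the same object at 4.0 m and 5.0 m, with the second
# robot four times more certain; the fused estimate leans toward it.
mean, var = fuse_estimates([(4.0, 1.0), (5.0, 0.25)])
```

Note that the fused variance is always smaller than any individual robot's variance, which is the quantitative payoff of sharing perception.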

Relevance:

90.00%

Publisher:

Abstract:

A major application of computers has been to control physical processes, in which the computer is embedded within some large physical process and is required to control concurrent physical processes. The main difficulty with these systems is their event-driven characteristics, which complicate their modelling and analysis. Although a number of researchers in the process-system community have approached the problems of modelling and analysis of such systems, there is still a lack of standardised software development formalisms for system (controller) development, particularly at the early stages of the system design cycle. This research forms part of a larger research programme concerned with the development of real-time process-control systems in which software is used to control concurrent physical processes. The general objective of the research in this thesis is to investigate the use of formal techniques in the analysis of such systems at their early stages of development, with a particular bias towards application to high-speed machinery. Specifically, the research aims to generate a standardised software development formalism for real-time process-control systems, particularly for software controller synthesis. In this research, a graphical modelling formalism called Sequential Function Chart (SFC), a variant of Grafcet, is examined. SFC, which is defined in the international standard IEC1131 as a graphical description language, has been used widely in industry and has achieved an acceptable level of maturity and acceptance. A comparative study between SFC and Petri nets is presented in this thesis. To overcome identified inaccuracies in SFC, a formal definition of the firing rules for SFC is given. To provide a framework in which SFC models can be analysed formally, an extended time-related Petri net model for SFC is proposed and the transformation method is defined.
The SFC notation lacks a systematic way of synthesising system models from real-world systems. Thus a standardised approach to the development of real-time process-control systems is required, such that the system (software) functional requirements can be identified, captured, and analysed. A rule-based approach and a method called the system behaviour driven method (SBDM) are proposed as a development formalism for real-time process-control systems.
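The Petri-net semantics underlying the SFC-to-Petri-net transformation can be sketched in a few lines. The code below is only an illustration of the basic firing rule (a transition fires when every input place holds a token, consuming and producing tokens), not the thesis's extended time-related model:

```python
# Minimal Petri-net firing-rule sketch (illustrative only; the thesis
# defines an extended time-related model for SFC that this does not cover).
# A transition is a pair (input_places, output_places).

def enabled(marking, transition):
    """A transition is enabled when every input place holds >= 1 token."""
    pre, _post = transition
    return all(marking.get(p, 0) >= 1 for p in pre)

def fire(marking, transition):
    """Fire an enabled transition: consume from inputs, produce to outputs."""
    pre, post = transition
    if not enabled(marking, transition):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p in pre:
        m[p] -= 1
    for p in post:
        m[p] = m.get(p, 0) + 1
    return m

# An SFC-like step sequence: the token moves from step1 to step2.
t = (["step1"], ["step2"])
m0 = {"step1": 1}
m1 = fire(m0, t)
```

In an SFC reading, places play the role of steps and the marking records which steps are active, which is what makes the Petri-net framework a natural analysis target for SFC models.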

Relevance:

90.00%

Publisher:

Abstract:

This work is supported by the Hungarian Scientific Research Fund (OTKA), grant T042706.

Relevance:

90.00%

Publisher:

Abstract:

One of the ultimate aims of Natural Language Processing is to automate the analysis of the meaning of text. A fundamental step in that direction consists in enabling effective ways to automatically link textual references to their referents, that is, real world objects. The work presented in this paper addresses the problem of attributing a sense to proper names in a given text, i.e., automatically associating words representing Named Entities with their referents. The method for Named Entity Disambiguation proposed here is based on the concept of semantic relatedness, which in this work is obtained via a graph-based model over Wikipedia. We show that, without building the traditional bag of words representation of the text, but instead only considering named entities within the text, the proposed method achieves results competitive with the state-of-the-art on two different datasets.
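The abstract does not spell out its graph-based relatedness model, so the following is a hedged sketch of one common Wikipedia link-based measure in the spirit of Milne and Witten's inlink overlap score: two entities are related when many articles link to both. The toy article sets below are invented for illustration.

```python
import math

# Hypothetical sketch of a Wikipedia link-based semantic relatedness score
# (Milne-Witten style); the paper's actual graph model may differ.
# Entities are considered related when many articles link to both of them.

def relatedness(inlinks_a, inlinks_b, total_articles):
    """Score in [0, 1] from the two entities' sets of in-linking articles."""
    a, b = set(inlinks_a), set(inlinks_b)
    shared = a & b
    if not shared:
        return 0.0
    num = math.log(max(len(a), len(b))) - math.log(len(shared))
    den = math.log(total_articles) - math.log(min(len(a), len(b)))
    return max(0.0, 1.0 - num / den)

# Toy data: two entities whose inlink sets overlap in 2 of 4 articles,
# in a miniature "Wikipedia" of 100 articles.
score = relatedness({"art1", "art2", "art3", "art4"},
                    {"art3", "art4", "art5", "art6"},
                    total_articles=100)
```

Pairwise scores like this one can then feed a graph over the named entities in a text, so that each mention is resolved to the referent most related to the other entities present.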

Relevance:

90.00%

Publisher:

Abstract:

Computer software plays an important role in business, government, society and the sciences. To solve real-world problems, it is very important to measure quality and reliability across the software development life cycle (SDLC). Software Engineering (SE) is the computing field concerned with designing, developing, implementing, maintaining and modifying software. The present paper gives an overview of the Data Mining (DM) techniques that can be applied to various types of SE data in order to solve the challenges posed by SE tasks such as programming, bug detection, debugging and maintenance. A specific piece of DM software is discussed, namely an analytical tool for analyzing data and summarizing the relationships that have been identified. The paper concludes that the proposed DM techniques within the domain of SE could be well applied in fields such as Customer Relationship Management (CRM), eCommerce and eGovernment. ACM Computing Classification System (1998): H.2.8.

Relevance:

90.00%

Publisher:

Abstract:

Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2016

Relevance:

90.00%

Publisher:

Abstract:

'Takes the challenging and makes it understandable. The book contains useful advice on the application of statistics to a variety of contexts and shows how statistics can be used by managers in their work.' - Dr Terri Byers, Assistant Professor, University of New Brunswick, Canada. A book about introductory quantitative analysis for business students, designed to be read by first- and second-year students on a business studies degree course, assuming little or no background in mathematics or statistics. Based on extensive knowledge and experience of how people learn, and in particular how people learn mathematics, the authors show both how and why quantitative analysis is useful in the context of business and management studies, encouraging readers not only to memorise the content but to apply their learning to typical problems. Fully up to date, with comprehensive coverage of IBM SPSS and Microsoft Excel software, the tailored examples illustrate how the programs can be used, and include step-by-step figures and tables throughout. A range of 'real world' and fictional examples, including "The Ballad of Eddie the Easily Distracted" and "Esha's Story", help bring the study of statistics alive. A number of in-text boxouts can be found throughout the book, aimed at readers at varying levels of study and understanding:
• Back to Basics: for those struggling to understand, explains concepts in the most basic way possible, often relating to interesting or humorous examples
• Above and Beyond: for those racing ahead, who want to be introduced to more interesting or advanced concepts that are a little outside what they may need to know
• Think it over: gets students to stop, engage and reflect upon the different connections between topics
A range of online resources, including a set of data files and templates for the reader following in-text examples, downloadable worksheets and instructor materials, answers to in-text exercises and video content, complement the book.

Relevance:

90.00%

Publisher:

Abstract:

The aim of the paper is to analyze a practical real-world problem. A publishing house is given. The publishing firm has contacts with a number of wholesaler/retailer enterprises and direct contact with customers to satisfy the market demand. Book publishers work in a project industry. The publisher faces the problem of how to allocate the stock of a given, newly published book among the wholesalers and retailers, and how many copies to hold to satisfy customers directly from the publisher. The publisher has a buyback option. The distribution of the demand is unknown, but it can be estimated. The wholesalers/retailers maximize their profits. The problem can be modeled as a one-warehouse, N-retailer supply chain with non-identical demand distributions. The model can be transformed into a game-theoretic problem. It is assumed that the demand follows a Poisson distribution.

Relevance:

90.00%

Publisher:

Abstract:

The aim of the paper is to analyze a practical real-world problem. A publishing house is given. The publishing firm has contacts with a number of wholesaler/retailer enterprises and direct contact with customers to satisfy the market demand. Book publishers work in a project industry. The publisher faces the problem of how to allocate the stock of a given, newly published book among the wholesalers and retailers, and how many copies to hold to satisfy customers directly from the publisher. The distribution of the demand is unknown, but it can be estimated. The costs consist of inventory holding, shortage and backorder costs. The decision maker wants to minimize these relevant costs. The problem can be modeled as a one-warehouse, N-retailer supply chain with non-identical demand distributions. The problem structure is similar to that of a newsvendor model. It is assumed that the demand follows a Poisson distribution.
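The newsvendor structure the abstract mentions admits a compact numeric sketch. Assuming Poisson demand and invented unit costs (none of the figures below come from the paper), the cost-minimising stock level is the smallest Q whose Poisson CDF reaches the critical ratio cu / (cu + co), where cu is the unit underage (shortage) cost and co the unit overage (holding) cost:

```python
import math

# Toy newsvendor sketch under the Poisson-demand assumption named in the
# abstract; the mean demand and cost figures below are invented for
# illustration, not taken from the paper.

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i)
               for i in range(k + 1))

def optimal_stock(lam, cu, co):
    """Smallest Q with F(Q) >= cu / (cu + co), the critical fractile."""
    target = cu / (cu + co)
    q = 0
    while poisson_cdf(q, lam) < target:
        q += 1
    return q

# Mean demand of 20 copies; a lost sale costs 6, an unsold copy costs 2,
# so the critical ratio is 6 / (6 + 2) = 0.75.
q_star = optimal_stock(lam=20, cu=6.0, co=2.0)
```

Because the critical ratio exceeds one half here, the optimal print run sits above the mean demand: shortages are costlier than leftovers, so the publisher over-prints slightly.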

Relevance:

90.00%

Publisher:

Abstract:

In recent years both operational research and quantitative finance have paid much attention to cash management issues. In this paper we present a cash management study which is based on real-world data and uses a mixed integer linear programming (MILP) model as the main tool. In the paper we compare deterministic and stochastic approaches. The classical cash management problem is extended in two ways: we consider the possibility of bank offices keeping more than one currency, and we also investigate the opportunity of cash transports between bank offices. The MILP problem was solved with glpk (GNU Linear Programming Kit), a free software package. The reader can also get a feel for how to use this solver and for its limitations.
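The inter-office cash-transport decision the paper adds to the classical problem can be shown in miniature. The brute-force toy below is only a deterministic illustration with invented numbers; the actual study formulates a MILP and solves it with glpk:

```python
# Toy deterministic version of the branch-to-branch cash-transport decision
# the paper models as a MILP (the cost figures are invented; the real model
# was solved with glpk over real-world data).

def best_transfer(surplus, deficit, unit_transport, unit_holding, unit_shortage):
    """Search over the amount x moved from the surplus office to the
    deficit office, minimising transport + holding + shortage cost."""
    best = None
    for x in range(0, surplus + 1):
        cost = (unit_transport * x                       # moving the cash
                + unit_holding * (surplus - x)           # idle cash left behind
                + unit_shortage * max(0, deficit - x))   # unmet demand
        if best is None or cost < best[1]:
            best = (x, cost)
    return best

# One office holds 50 units too much, another is 30 units short.
amount, cost = best_transfer(surplus=50, deficit=30,
                             unit_transport=1.0, unit_holding=0.5,
                             unit_shortage=3.0)
```

With shortage far costlier than transport and holding, the search moves exactly the deficit and no more; in the full MILP this trade-off is made jointly across all offices, currencies and periods.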

Relevance:

90.00%

Publisher:

Abstract:

This dissertation develops a new mathematical approach that overcomes the effect of a data processing phenomenon known as "histogram binning", inherent to flow cytometry data. A real-time procedure is introduced to prove the effectiveness and fast implementation of such an approach on real-world data. The histogram binning effect is a dilemma posed by two seemingly antagonistic developments: (1) flow cytometry data in its histogram form is extended in its dynamic range to improve its analysis and interpretation, and (2) the inevitable dynamic range extension introduces an unwelcome side effect, the binning effect, which skews the statistics of the data, undermining as a consequence the accuracy of the analysis and the eventual interpretation of the data. Researchers in the field contended with this dilemma for many years, resorting either to hardware approaches, which are rather costly and carry inherent calibration and noise effects, or to software techniques based on filtering the binning effect, without successfully preserving the statistical content of the original data. The mathematical approach introduced in this dissertation is so appealing that a patent application has been filed. The contribution of this dissertation is an incremental scientific innovation based on a mathematical framework that will allow researchers in the field of flow cytometry to improve the interpretation of data, knowing that its statistical meaning has been faithfully preserved for optimized analysis. Furthermore, with the same mathematical foundation, proof of the origin of such an inherent artifact is provided. These results are unique in that new mathematical derivations are established to define and solve the critical problem of the binning effect faced at the experimental assessment level, providing a data platform that preserves its statistical content.
In addition, a novel method for accumulating the log-transformed data was developed. This new method uses the properties of the transformation of statistical distributions to accumulate the output histogram in a non-integer and multi-channel fashion. Although the mathematics of this new mapping technique seem intricate, the concise nature of the derivations allows for an implementation procedure that lends itself to a real-time implementation using lookup tables, a task that is also introduced in this dissertation.
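The "non-integer, multi-channel" accumulation idea can be pictured with a simple fractional-weight histogram: instead of dumping each log-transformed sample whole into one channel, its unit weight is split between the two neighbouring channels by linear interpolation. This is only an illustration of the fractional idea, not the dissertation's actual mapping or its lookup-table implementation:

```python
import math

# Illustrative sketch of non-integer, multi-channel accumulation: each
# log-transformed sample's unit weight is split between the two nearest
# channels by linear interpolation, rather than being binned whole.
# (The dissertation's actual mapping is its own derivation.)

def accumulate(samples, n_channels, max_log):
    """Accumulate positive samples (assumed >= 1) on a log10 axis
    spanning [0, max_log] across n_channels channels."""
    hist = [0.0] * n_channels
    scale = (n_channels - 1) / max_log
    for x in samples:
        pos = math.log10(x) * scale      # fractional channel position
        lo = int(pos)
        frac = pos - lo
        hist[lo] += 1.0 - frac           # share for the lower channel
        if lo + 1 < n_channels:
            hist[lo + 1] += frac         # share for the upper channel
    return hist

# 10.0 lands exactly on a channel; 31.62 (~10^1.5) lands halfway
# between two channels and is split between them.
hist = accumulate([10.0, 31.62], n_channels=5, max_log=4.0)
```

The total accumulated weight still equals the number of samples, which is the sense in which a fractional scheme can avoid the statistical skew that whole-bin accumulation introduces.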

Relevance:

90.00%

Publisher:

Abstract:

This dissertation develops a new figure of merit to measure the similarity (or dissimilarity) of Gaussian distributions through a novel concept that relates the Fisher distance to the percentage of data overlap. The derivations are expanded to provide a generalized mathematical platform for determining an optimal separating boundary of Gaussian distributions in multiple dimensions. Real-world data used for implementation and for carrying out feasibility studies were provided by Beckman-Coulter. It is noted that although the data used is flow cytometric in nature, the mathematics are general in their derivation and include other types of data as long as their statistical behavior approximates Gaussian distributions. Because this new figure of merit is heavily based on the statistical nature of the data, a new filtering technique is introduced to accommodate the accumulation process involved with histogram data. When data is accumulated into a frequency histogram, it is inherently smoothed in a linear fashion, since an averaging effect takes place as the histogram is generated. This new filtering scheme addresses data that is accumulated in the uneven resolution of the channels of the frequency histogram. The qualitative interpretation of flow cytometric data is currently a time-consuming and imprecise method for evaluating histogram data. This method offers a broader spectrum of capabilities in the analysis of histograms, since the figure of merit derived in this dissertation integrates within its mathematics both a measure of similarity and the percentage of overlap between the distributions under analysis.
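A 1-D special case makes the boundary-and-overlap theme concrete: for two Gaussians with equal variance, the optimal separating boundary is the point where the two densities are equal (the midpoint of the means), and the overlap area can be written with the error function. This is only a worked special case; the dissertation treats the general multi-dimensional problem:

```python
import math

# Worked 1-D illustration of the separating-boundary/overlap theme: for
# two equal-variance Gaussians the equal-density boundary is the midpoint
# of the means, and the overlap area has a closed form via erf.
# (The dissertation's figure of merit covers the general multi-dim case.)

def gaussian_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def overlap_equal_variance(mu1, mu2, sigma):
    boundary = (mu1 + mu2) / 2.0               # equal-density point
    lo, hi = sorted((mu1, mu2))
    # overlap = right tail of the left Gaussian past the boundary,
    # plus left tail of the right Gaussian before it
    area = ((1.0 - gaussian_cdf(boundary, lo, sigma))
            + gaussian_cdf(boundary, hi, sigma))
    return boundary, area

# Means two standard deviations apart: boundary at 1.0, overlap ~31.7%.
b, a = overlap_equal_variance(0.0, 2.0, 1.0)
```

As the means move apart (or the variance shrinks), the overlap area falls toward zero, which is why an overlap percentage can serve as a similarity figure of merit for two populations.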
