869 results for Machine to Machine
Abstract:
Ghost Machine is an encounter between a person and a machine in a suburban shed. The machine reveals the story of a man haunted by an image that seems to shift and change as it sits on the wall of his study. In an attempt to locate the origins of the image, he builds a viewing machine to finally confront it. Ghost Machine is based on The Mezzotint (1904) by M. R. James, retold as a suburban ghost story. It was part of SENSE at Mayfest in Bristol from 24 to 26 May 2013.
Abstract:
This paper introduces the stochastic version of the Geometric Machine Model for the modelling of sequential, alternative, parallel (synchronous) and nondeterministic computations with stochastic numbers stored in a (possibly infinite) shared memory. The programming language L(D∞), induced by the Coherence Space of Processes D∞, can be applied to sequential and parallel products in order to provide recursive definitions for such processes, together with a domain-theoretic semantics of the Stochastic Arithmetic. We analyze both the spatial (ordinal) recursion, related to the spatial modelling of the stochastic memory, and the temporal (structural) recursion, given by the inclusion relation modelling partial objects in the ordered structure of process construction.
Abstract:
Computers employing some degree of data flow organisation are now well established as providing a possible vehicle for concurrent computation. Although data-driven computation frees the architecture from the constraints of the single program counter, processor and global memory, inherent in the classic von Neumann computer, there can still be problems with the unconstrained generation of fresh result tokens if a pure data flow approach is adopted. The advantages of allowing serial processing for those parts of a program which are inherently serial, and of permitting a demand-driven, as well as data-driven, mode of operation are identified and described. The MUSE machine described here is a structured architecture supporting both serial and parallel processing which allows the abstract structure of a program to be mapped onto the machine in a logical way.
Abstract:
In previous work we showed how to verify a compiler for a small language with exceptions. In this article we show how to calculate, as opposed to verify, an abstract machine for this language. The key step is the use of Reynolds's defunctionalization, an old program transformation technique that has recently been rejuvenated by the work of Danvy et al.
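To make the key step concrete, here is a minimal sketch of defunctionalization, assuming a toy expression language with only literals and addition and without the exceptions treated in the article; the names Lit, Add, eval_cps, eval_machine and apply_cont are illustrative, not taken from the paper. The higher-order continuations of a CPS evaluator are replaced by a first-order data type together with an apply function:

    # Reynolds-style defunctionalization of a CPS evaluator (illustrative sketch).
    from dataclasses import dataclass

    @dataclass
    class Lit:            # integer literal
        value: int

    @dataclass
    class Add:            # addition of two sub-expressions
        left: object
        right: object

    # Step 1: a CPS evaluator whose continuations are higher-order closures.
    def eval_cps(expr, k):
        if isinstance(expr, Lit):
            return k(expr.value)
        return eval_cps(expr.left,
                        lambda l: eval_cps(expr.right,
                                           lambda r: k(l + r)))

    # Step 2: defunctionalize -- one constructor per lambda, plus apply_cont.
    @dataclass
    class Halt:           # top-level continuation
        pass

    @dataclass
    class AddRight:       # "evaluate the right operand next"
        right: object
        k: object

    @dataclass
    class AddLeft:        # "add the saved left value to this result"
        left_value: int
        k: object

    def eval_machine(expr, k):
        if isinstance(expr, Lit):
            return apply_cont(k, expr.value)
        return eval_machine(expr.left, AddRight(expr.right, k))

    def apply_cont(k, value):
        if isinstance(k, Halt):
            return value
        if isinstance(k, AddRight):
            return eval_machine(k.right, AddLeft(value, k.k))
        return apply_cont(k.k, k.left_value + value)    # k is AddLeft

    e = Add(Lit(1), Add(Lit(2), Lit(3)))
    assert eval_cps(e, lambda v: v) == eval_machine(e, Halt()) == 6

After the transformation, eval_machine and apply_cont act as the transition functions of a first-order abstract machine in which the continuation values play the role of an explicit control stack.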
Abstract:
The project was carried out during the Erasmus+ Programme at the Instituto Superior de Engenharia do Porto, Portugal; I had the pleasure of doing it at Gislotica Mechanical Solution, Lda. This document presents the design process of a vertical inspection station for truck tires. The first part is an introduction, with information about the Gislotica company and an initial analysis of the problem. The next part presents the approach taken to the task and describes all issues connected with the designed machine. The last part draws conclusions about the problems encountered and the results; it sums up not only the design process but also my own development during the project, repeatedly pointing out which issues were new to me and the experience and knowledge of the design process that I gained.
Abstract:
The aim of the project was to design in SolidWorks and improve an existing tire inspection machine. The project was developed at Gislotica - Mechanical Solutions, supervised by Eng. Rui Manuel Fazenda Silva, who is a professor at ISEP. The designed device relates to the inspection of automobile tires for holes and weak spots caused by punctures and usage. Such inspection requires careful examination of the inside surface of the tire, which is difficult because of its cylindrical shape and the stiff, resistant nature of the material from which the tire is made. The idea is to provide a machine by which the walls of the tire may be spread and held apart, presenting the inner surface for the worker to inspect. The device must also perform rotational and vertical movement of the tire. It is meant to allow inspection without the operator having to use force, making the work easier and more efficient.
Abstract:
Computational intelligence support for decision making is becoming increasingly popular and essential among medical professionals. Moreover, with modern medical devices able to communicate with ICT systems, the resulting models can easily be translated into practical software. Machine learning solutions for medicine range from the robust but opaque paradigms of support vector machines and neural networks to the similarly performant, yet more comprehensible, decision trees and rule-based models. How, then, can such different techniques be combined so that the professional obtains the whole spectrum of their particular advantages? The approaches presented here were conceived for various medical problems, while constantly bearing in mind the balance between good accuracy and an understandable interpretation of the decision, in order to truly establish a trustworthy ‘artificial’ second opinion for the medical expert.
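The abstract does not say how the opaque and comprehensible paradigms are combined, so the sketch below is only one generic way to pair them, not the authors' method: an accurate but opaque classifier is accompanied by an interpretable surrogate tree trained to mimic its decisions, so the expert sees both a prediction and a readable rationale. The public breast-cancer dataset, the SVM and the depth-3 tree are illustrative assumptions.

    # Pairing an opaque model with an interpretable surrogate (generic sketch).
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()                      # stand-in clinical data set
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)

    svm = SVC().fit(X_train, y_train)                # accurate but opaque model

    # Interpretable surrogate trained to mimic the SVM's decisions.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_train, svm.predict(X_train))

    print("SVM accuracy on test set: ", round(svm.score(X_test, y_test), 3))
    print("Surrogate fidelity to SVM:", round(surrogate.score(X_test, svm.predict(X_test)), 3))
    print(export_text(surrogate, feature_names=list(data.feature_names)))

The fidelity score tells the expert how faithfully the readable tree reproduces the opaque model's behaviour on unseen cases.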
Abstract:
Evolutionary algorithms alone cannot solve optimization problems very efficiently, since these algorithms make many random (not very rational) decisions. Combining evolutionary algorithms with other techniques has proven to be an efficient optimization methodology. In this talk, I will explain the basic ideas of three of our algorithms along this line: (1) the orthogonal genetic algorithm, which treats crossover/mutation as an experimental design problem; (2) the multiobjective evolutionary algorithm based on decomposition (MOEA/D), which uses decomposition techniques from traditional mathematical programming within a multiobjective evolutionary algorithm; and (3) the regularity-model-based multiobjective estimation of distribution algorithm (RM-MEDA), which uses the regularity property and machine learning methods to improve multiobjective evolutionary algorithms.
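As a concrete illustration of the decomposition idea behind MOEA/D, the sketch below splits a bi-objective problem into scalar subproblems via weighted Tchebycheff aggregation and improves each subproblem using only its neighbours; the Schaffer test problem, population size and variation operator are my own choices, not settings from the talk.

    # Minimal MOEA/D-style decomposition loop (illustrative sketch).
    import random

    def objectives(x):                    # Schaffer's bi-objective test problem
        return (x * x, (x - 2.0) ** 2)

    def tchebycheff(f, weight, ideal):    # scalarising function per subproblem
        return max(w * abs(fi - zi) for w, fi, zi in zip(weight, f, ideal))

    N, T, GENS = 30, 5, 200               # subproblems, neighbourhood size, generations
    weights = [(i / (N - 1), 1 - i / (N - 1)) for i in range(N)]
    neighbours = [sorted(range(N), key=lambda j: abs(i - j))[:T] for i in range(N)]

    pop = [random.uniform(-2.0, 4.0) for _ in range(N)]
    fvals = [objectives(x) for x in pop]
    ideal = [min(f[k] for f in fvals) for k in (0, 1)]

    for _ in range(GENS):
        for i in range(N):
            a, b = random.sample(neighbours[i], 2)          # mate within the neighbourhood
            child = pop[a] + 0.5 * (pop[b] - pop[a]) + random.gauss(0.0, 0.1)
            fc = objectives(child)
            ideal = [min(ideal[k], fc[k]) for k in (0, 1)]  # update the ideal point
            for j in neighbours[i]:                         # replace worse neighbours
                if tchebycheff(fc, weights[j], ideal) < tchebycheff(fvals[j], weights[j], ideal):
                    pop[j], fvals[j] = child, fc

    print(sorted(fvals)[:5])              # objective vectors approximating the Pareto front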
Abstract:
Strawberries harvested for processing as frozen fruit are currently de-calyxed manually in the field. This process requires the removal of the stem cap with green leaves (i.e. the calyx) and incurs many disadvantages when performed by hand. Not only does it require maintaining cutting-tool sanitation, it also increases labor time and the exposure of the de-capped strawberries before in-plant processing. This leads to labor inefficiency and decreased harvest yield. By moving the calyx removal process from the fields to the processing plants, this new practice would reduce field labor and improve management and logistics, while increasing annual yield. As labor prices continue to increase, the strawberry industry has shown great interest in the development and implementation of an automated calyx removal system. In response, this dissertation describes the design, operation, and performance of a full-scale automatic vision-guided intelligent de-calyxing (AVID) prototype machine. The AVID machine utilizes commercially available equipment to produce a relatively low-cost automated de-calyxing system that can be retrofitted into existing food processing facilities. This dissertation is broken up into five sections. The first two sections comprise a machine overview and a 12-week processing plant pilot study. Results of the pilot study indicate the AVID machine is able to de-calyx grade-1-with-cap conical strawberries at roughly 66 percent output weight yield at a throughput of 10,000 pounds per hour. The remaining three sections describe in detail the three main components of the machine: a strawberry loading and orientation conveyor, a machine vision system for calyx identification, and a synchronized multi-waterjet knife calyx removal system. In short, the loading system utilizes rotational energy to orient conical strawberries. The machine vision system determines cut locations through real-time RGB feature extraction. The high-speed multi-waterjet knife system uses direct-drive actuation to direct 30,000 psi cutting streams to precise coordinates for calyx removal. Based on the observations and studies performed within this dissertation, the AVID machine appears to be a viable option for automated high-throughput strawberry calyx removal. A summary of future tasks and further improvements is discussed at the end.
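The vision stage is only summarised above, so the following is a hedged illustration, not the AVID pipeline, of how RGB feature extraction can yield a cut location: green-dominant pixels are treated as calyx, red-dominant pixels as fruit, and the cut line is placed between them, assuming a single berry imaged calyx-up; the thresholds and the synthetic frame are invented.

    # Toy RGB-based cut-line estimation for one calyx-up berry (not the AVID code).
    import numpy as np

    def calyx_cut_row(rgb):
        """Row index separating green (calyx) pixels from red (fruit) pixels."""
        r = rgb[:, :, 0].astype(int)
        g = rgb[:, :, 1].astype(int)
        green = (g - r > 20) & (g > 60)            # predominantly green pixels
        red = (r - g > 20) & (r > 60)              # predominantly red pixels
        green_rows = np.where(green.any(axis=1))[0]
        red_rows = np.where(red.any(axis=1))[0]
        if green_rows.size == 0 or red_rows.size == 0:
            return None                            # calyx or fruit not visible
        # cut midway between the lowest calyx pixel and the highest fruit pixel
        return int((green_rows.max() + red_rows.min()) // 2)

    # Synthetic test frame: a green band (calyx) above a red band (fruit body).
    frame = np.zeros((200, 100, 3), dtype=np.uint8)
    frame[20:60, 30:70, 1] = 200                   # green region
    frame[60:180, 25:75, 0] = 200                  # red region
    print("cut at row:", calyx_cut_row(frame))     # expected near row 59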
Abstract:
Although electrical neurostimulation has been proposed as an alternative treatment for drug-resistant cases of epilepsy, current procedures such as deep brain stimulation, vagus, and trigeminal nerve stimulation are effective only in a fraction of the patients. Here we demonstrate a closed-loop brain-machine interface that delivers dorsal column stimulation (DCS) to the spinal cord to suppress epileptic seizures. Rats were implanted with cortical recording microelectrodes and spinal cord stimulating electrodes, and then injected with pentylenetetrazole to induce seizures. Seizures were detected in real time from cortical local field potentials, after which DCS was applied. This method decreased seizure episode frequency by 44% and seizure duration by 38%. We argue that the therapeutic effect of DCS is related to the modulation of cortical theta waves, and propose that this closed-loop interface has the potential to become an effective and semi-invasive treatment for refractory epilepsy and other neurological disorders.
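The abstract does not describe the seizure-detection algorithm, so the sketch below only illustrates the closed-loop structure: a simple line-length feature is computed over one-second LFP windows and stimulation is triggered when the feature exceeds an adaptive baseline. The sampling rate, window length, threshold and the trigger_dcs() stub are assumptions, not the paper's method.

    # Schematic closed-loop detect-and-stimulate loop (illustrative only).
    import numpy as np

    FS = 1000                  # sampling rate in Hz (assumed)
    WINDOW = FS                # 1-second analysis windows
    THRESHOLD = 5.0            # trigger when the feature exceeds 5x baseline

    def line_length(x):
        """Simple seizure marker: mean absolute sample-to-sample difference."""
        return np.mean(np.abs(np.diff(x)))

    def trigger_dcs():         # placeholder for the stimulator interface
        print("seizure detected -> deliver dorsal column stimulation")

    def closed_loop(lfp):
        baseline = None
        for start in range(0, len(lfp) - WINDOW, WINDOW):
            feature = line_length(lfp[start:start + WINDOW])
            if baseline is None:
                baseline = feature                              # first window as baseline
            elif feature > THRESHOLD * baseline:
                trigger_dcs()
            else:
                baseline = 0.95 * baseline + 0.05 * feature     # slow adaptation

    # Synthetic LFP with an artificial high-amplitude "seizure" segment.
    lfp = np.random.randn(30 * FS)
    lfp[20 * FS:22 * FS] *= 10
    closed_loop(lfp)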
Abstract:
Master's dissertation in Language Sciences, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2010
Abstract:
This thesis presents a study of the Grid data access patterns of distributed analysis in the CMS experiment at the LHC accelerator. The study ranges from a deep analysis of the historical patterns of access to the most relevant data types in CMS, to the exploitation of a supervised Machine Learning classification system to set up machinery able eventually to predict future data access patterns - i.e. the so-called dataset “popularity” of the CMS datasets on the Grid - with a focus on specific data types. All the CMS workflows run on the Worldwide LHC Computing Grid (WLCG) computing centers (Tiers), and in particular the distributed analysis system sustains hundreds of users and applications submitted every day. These applications (or “jobs”) access different data types hosted on disk storage systems at a large set of WLCG Tiers. The detailed study of how these data are accessed, in terms of data types, hosting Tiers, and different time periods, makes it possible to gain precious insight into storage occupancy over time and into different access patterns, and ultimately to extract suggested actions based on this information (e.g. targeted disk clean-up and/or data replication). In this sense, the application of Machine Learning techniques makes it possible to learn from past data and to gain predictive power over future CMS data access patterns. Chapter 1 provides an introduction to High Energy Physics at the LHC. Chapter 2 describes the CMS Computing Model, with special focus on the data management sector, also discussing the concept of dataset popularity. Chapter 3 describes the study of CMS data access patterns at different levels of depth. Chapter 4 offers a brief introduction to basic machine learning concepts, introduces their application in CMS, and discusses the results obtained by using this approach in the context of this thesis.
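As a minimal sketch of the supervised-classification idea, the code below trains a classifier to predict whether a dataset will be “popular” in the next period from simple past-access aggregates; the features, the toy label and the random-forest choice are invented for illustration and are not the actual CMS popularity variables or the classifiers evaluated in the thesis.

    # Toy dataset-popularity classifier (illustrative, not the CMS setup).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    X = np.column_stack([
        rng.poisson(20, n),        # accesses last week
        rng.poisson(5, n),         # distinct users
        rng.poisson(3, n),         # hosting Tiers
        rng.uniform(0.1, 50, n),   # dataset size in TB
    ])
    # Toy label: "popular next week" if recent accesses and users are both high.
    y = ((X[:, 0] > 25) & (X[:, 1] > 5)).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))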
Design Optimization of Modern Machine-drive Systems for Maximum Fault Tolerant and Optimal Operation
Abstract:
Modern electric machine drives, particularly three-phase permanent magnet machine drive systems, represent an indispensable part of high-power-density products. Such products include hybrid electric vehicles, large propulsion systems, and automation products. The reliability and cost of these products are directly related to the reliability and cost of these systems. The compatibility of the electric machine and its drive system for optimal cost and operation has been a major challenge in industrial applications. The main objective of this dissertation is to find a design and control scheme for the best compromise between the reliability and optimality of the electric machine-drive system. The effort presented here is motivated by the need to find new techniques to connect the design and control of electric machines and drive systems. A highly accurate and computationally efficient modeling process was developed to monitor the magnetic, thermal, and electrical aspects of the electric machine in its operational environments. The modeling process was also utilized in the design process, in the form of a finite-element-based optimization process, as well as in a hardware-in-the-loop finite-element-based optimization process. The modeling process was later employed in the design of very accurate and highly efficient physics-based customized observers that are required for fault diagnosis as well as sensorless rotor position estimation. Two test setups with different ratings and topologies were numerically and experimentally tested to verify the effectiveness of the proposed techniques. The modeling process was also employed in the real-time demagnetization control of the machine, and various real-time scenarios were successfully verified. It was shown that this process offers the potential to optimally redefine the assumptions made in sizing the permanent magnets of the machine and the DC bus voltage of the drive for the worst operating conditions. The mathematical development and stability criteria of the physics-based modeling of the machine, the design optimization, the physics-based fault diagnosis, and the physics-based sensorless technique are described in detail. To investigate the performance of the developed design test-bed, software and hardware setups were constructed first, and several topologies of the permanent magnet machine were optimized inside the optimization test-bed. To investigate the performance of the developed sensorless control, a test-bed including a 0.25 kW surface-mounted permanent magnet synchronous machine was created; the verification of the proposed technique over a range from medium to very low speed effectively shows the intelligent design capability of the proposed system. Additionally, to investigate the performance of the developed fault diagnosis system, a test-bed including a 0.8 kW surface-mounted permanent magnet synchronous machine with trapezoidal back electromotive force was created. The results verify the proposed technique under dynamic eccentricity, DC bus voltage variations, and harmonic loading conditions, making the system an ideal candidate for propulsion systems.
Abstract:
In this thesis, a machine learning approach was used to develop a predictive model for the residual methanol concentration in industrial formalin produced at the Akzo Nobel factory in Kristinehamn, Sweden. The MATLAB™ computational environment, supplemented with the Statistics and Machine Learning™ Toolbox from MathWorks, was used to test various machine learning algorithms on the formalin production data from Akzo Nobel. As a result, the Gaussian Process Regression algorithm was found to provide the best results and was used to create the predictive model. The model was compiled into a stand-alone application with a graphical user interface using the MATLAB Compiler™.
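The thesis builds its model with MATLAB's Gaussian Process Regression; the sketch below is only a rough scikit-learn analogue of that fitting step, with synthetic process variables standing in for the Akzo Nobel data, which are not available here.

    # Rough scikit-learn analogue of fitting a GPR model to process data.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in: two scaled process variables and a noisy response.
    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(300, 2))
    y = 1.5 * X[:, 0] - 0.8 * X[:, 1] ** 2 + rng.normal(0.0, 0.05, 300)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel()    # smooth trend plus noise term
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

    pred, std = gpr.predict(X_test, return_std=True)        # predictive mean and uncertainty
    print("R^2 on held-out data:", round(gpr.score(X_test, y_test), 3))

A practical advantage of GPR in this setting is that the predictive standard deviation gives the operator an uncertainty estimate alongside each predicted methanol concentration.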
Abstract:
Developers strive to create innovative Artificial Intelligence (AI) behaviour in their games as a key selling point. Machine learning is an area of AI that looks at how applications and agents can be programmed to learn their own behaviour, without the need to manually design and implement each aspect of it. Machine learning methods have been used only infrequently within games, and are usually trained to learn offline before the game is released to the players. In order to investigate new ways AI could be applied innovatively to games, it is worth exploring how machine learning methods could be utilised in real time as the game is played, so as to allow AI agents to learn directly from the player or their environment. Two machine learning methods were implemented in a simple 2D fighter test game to allow the agents to fully showcase their learned behaviour as the game is played. The methods chosen were Q-Learning and an N-Gram-based system. It was found that N-Grams and Q-Learning could significantly benefit game developers, as they facilitate fast, realistic learning at run-time.
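For the Q-Learning half, a bare-bones sketch of the tabular update such an agent could run each frame is shown below; the state encoding, action set, reward and hyper-parameters are illustrative assumptions rather than the values used in the test game.

    # Tabular Q-Learning step for a fighter-game agent (illustrative sketch).
    import random
    from collections import defaultdict

    ACTIONS = ["punch", "kick", "block", "move_back"]
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1        # learning rate, discount, exploration

    Q = defaultdict(float)                       # Q[(state, action)] -> estimated value

    def choose_action(state):
        if random.random() < EPSILON:            # explore occasionally
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])   # otherwise exploit

    def update(state, action, reward, next_state):
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

    # One learning step per game frame: the state might encode the opponent's
    # distance bucket and last action; the reward, damage dealt minus damage taken.
    state = ("close", "punch")
    action = choose_action(state)
    update(state, action, reward=1.0, next_state=("close", "block"))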