920 results for Planing-machines.


Relevance:

10.00%

Publisher:

Abstract:

Business practices vary from one company to another, and they often need to change in response to shifts in the business environment. To satisfy different business practices, enterprise systems need to be customized; to keep up with ongoing changes, they need to be adapted. Because of their rigidity and complexity, the customization and adaptation of enterprise systems often take excessive time, with potential failures and budget overruns. Moreover, enterprise systems often hold business back because they cannot be rapidly adapted to support changing business practices. An extensive literature has addressed this issue by identifying success or failure factors, implementation approaches, and project management strategies; those efforts aimed to learn lessons from post-implementation experience to help future projects. This research looks at the issue from a different angle: it attempts to deliver a systematic method for developing flexible enterprise systems which can be easily tailored to different business practices or rapidly adapted when business practices change. First, this research examines the role of system models in the context of enterprise system development, and the relationship of system models to software programs in the contexts of computer-aided software engineering (CASE), model-driven architecture (MDA) and workflow management systems (WfMS). Then, by applying analogical reasoning, this research introduces the concept of model-driven enterprise systems. The novelty of model-driven enterprise systems is that system models are extracted from software programs and kept independent of them. In this paradigm, system models act as instructors that guide and control the behavior of software programs, and software programs function by interpreting the instructions in system models. This mechanism creates the opportunity to tailor such a system by changing its models. To make this possible, system models should be represented in a language that can be easily understood by human beings and effectively interpreted by computers; this research therefore investigates various semantic representations to support model-driven enterprise systems. The significance of this research is 1) the transplantation of the structure that gives modern machines and WfMS their flexibility to enterprise systems; and 2) the advancement of MDA by extending the role of system models from guiding system development to controlling system behavior. This research contributes to the enterprise systems area from three perspectives: 1) a new paradigm of enterprise systems, in which an enterprise system consists of two essential, loosely coupled elements that can exist independently: system models and software programs; 2) semantic representations, which can effectively represent business entities, entity relationships, business logic and information-processing logic in a semantic manner, and which are the key enabling techniques of model-driven enterprise systems; and 3) a new role for system models: traditionally, system models guide developers in writing system source code; this research promotes them to controlling the behavior of enterprise systems.
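To make the mechanism concrete, the sketch below shows a toy interpreter in which the system model is plain data held outside the program, and the program behaves by reading it; the model schema and process steps are hypothetical illustrations, not the semantic representations developed in the research.

```python
# Minimal sketch of a model-driven engine: the system model is data, and the
# software program functions by interpreting it. Changing the model (e.g.
# adding a step) re-tailors the system without recoding. Schema is hypothetical.
APPROVAL_MODEL = {
    "process": "purchase_order",
    "steps": [
        {"name": "submit",  "role": "clerk"},
        {"name": "approve", "role": "manager",
         "condition": lambda po: po["amount"] > 1000},
        {"name": "archive", "role": "system"},
    ],
}

def run_process(model, purchase_order):
    """Interpret the system model step by step against a business entity."""
    for step in model["steps"]:
        condition = step.get("condition")
        if condition and not condition(purchase_order):
            continue  # the model says this step does not apply
        print(f"{step['role']} performs '{step['name']}'")

run_process(APPROVAL_MODEL, {"amount": 2500})  # manager approval is triggered
```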

Relevance:

10.00%

Publisher:

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space: classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm: using the labeled part of the data one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
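A minimal sketch of the idea under assumptions, using the cvxpy solver library and one convex formulation from this line of work (maximising kernel-target alignment over nonnegative combinations of fixed candidate kernels, which keeps the combined matrix positive semidefinite); it is not the paper's exact semidefinite program.

```python
import numpy as np
import cvxpy as cp

def learn_kernel_weights(kernels, y):
    """kernels: list of (n, n) PSD Gram matrices over training + test points,
    with the first len(y) rows/columns corresponding to the labeled data."""
    m = len(y)
    Y = np.outer(y, y)                        # target Gram matrix y y^T
    mu = cp.Variable(len(kernels), nonneg=True)
    K_train = sum(mu[i] * Ki[:m, :m] for i, Ki in enumerate(kernels))
    trace_K = sum(mu[i] * np.trace(Ki) for i, Ki in enumerate(kernels))
    problem = cp.Problem(cp.Maximize(cp.trace(Y @ K_train)),  # alignment
                         [trace_K == 1.0])                    # normalisation
    problem.solve()
    return mu.value
```

Because the weights are nonnegative and the candidate kernels are PSD, the learned combination is automatically a valid kernel over both the labeled and unlabeled points, which is what makes the transductive use possible.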

Relevance:

10.00%

Publisher:

Abstract:

We investigate the use of certain data-dependent estimates of the complexity of a function class, called Rademacher and Gaussian complexities. In a decision theoretic setting, we prove general risk bounds in terms of these complexities. We consider function classes that can be expressed as combinations of functions from basis classes and show how the Rademacher and Gaussian complexities of such a function class can be bounded in terms of the complexity of the basis classes. We give examples of the application of these techniques in finding data-dependent risk bounds for decision trees, neural networks and support vector machines.
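For a finite function class evaluated on a fixed sample, the empirical Rademacher complexity can be estimated by straightforward Monte Carlo; a minimal sketch follows (scaling conventions, such as an extra factor of 2, vary across the literature).

```python
import numpy as np

def empirical_rademacher(preds, n_draws=1000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of a
    finite function class: preds[k, i] holds f_k(x_i) on a fixed sample."""
    rng = np.random.default_rng(seed)
    n = preds.shape[1]
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))  # random sign vectors
    # for each sign draw, take the sup over the class of the signed
    # empirical average, then average over the draws
    return np.mean(np.max(sigma @ preds.T, axis=1) / n)

# three "functions" evaluated on five sample points
preds = np.array([[1.0, -1.0, 1.0, 1.0, -1.0],
                  [1.0,  1.0, 1.0, 1.0,  1.0],
                  [-1.0, -1.0, 1.0, -1.0, 1.0]])
print(empirical_rademacher(preds))
```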

Relevance:

10.00%

Publisher:

Abstract:

In semisupervised learning (SSL), a predictive model is learned from a collection of labeled data and a typically much larger collection of unlabeled data. This paper presents a framework called multi-view point cloud regularization (MVPCR), which unifies and generalizes several semisupervised kernel methods that are based on data-dependent regularization in reproducing kernel Hilbert spaces (RKHSs). Special cases of MVPCR include coregularized least squares (CoRLS), manifold regularization (MR), and graph-based SSL. An accompanying theorem shows how to reduce any MVPCR problem to standard supervised learning with a new multi-view kernel.
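One special case named above can be sketched directly: manifold regularization via Laplacian-regularized least squares has a closed-form solution in the representer coefficients. The sketch below simplifies the usual scaling constants and is not the paper's multi-view construction.

```python
import numpy as np

def laprls(K, L, y_labeled, gamma_a=1e-2, gamma_i=1e-1):
    """Laplacian-regularized least squares. K: (n, n) Gram matrix over the
    labeled + unlabeled points; L: (n, n) graph Laplacian of the point
    cloud; the first len(y_labeled) points carry labels."""
    n, m = K.shape[0], len(y_labeled)
    J = np.diag([1.0] * m + [0.0] * (n - m))        # selects labeled points
    y = np.concatenate([y_labeled, np.zeros(n - m)])
    # stationarity of the regularized empirical risk in the expansion
    # f = sum_i alpha_i k(x_i, .) gives a linear system for alpha
    alpha = np.linalg.solve(J @ K + gamma_a * np.eye(n) + gamma_i * L @ K, y)
    return alpha
```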

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation. Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs). Its convexity ensures that the sample average of this surrogate loss can be efficiently minimized. We study its statistical properties. We show that minimizing the expected surrogate loss—the φ-risk—also minimizes the risk. We also study the rate at which the φ-risk approaches its minimum value. We show that fast rates are possible when the conditional probability P(Y=1|X) is unlikely to be close to certain critical values.
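For orientation: with reject cost d, the Bayes-optimal rule for this 0-1-d loss (Chow's rule) abstains exactly when the conditional probability η(x) = P(Y=1|X=x) satisfies max(η, 1-η) < 1-d, which is the behaviour the convex surrogate targets. A minimal sketch, with η assumed known:

```python
import numpy as np

def chow_rule(eta, d):
    """Bayes rule for binary classification with reject cost d (sensible for
    d <= 1/2): predict the likelier label, but abstain (return 0) whenever
    max(eta, 1 - eta) < 1 - d, i.e. when both classes are too uncertain."""
    eta = np.asarray(eta, dtype=float)
    labels = np.where(eta >= 0.5, 1, -1)
    return np.where(np.maximum(eta, 1.0 - eta) < 1.0 - d, 0, labels)

print(chow_rule([0.9, 0.55, 0.2], d=0.3))  # [ 1  0 -1]
```

The critical values mentioned in the abstract are exactly the thresholds d and 1-d at which this rule switches between predicting and abstaining.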

Relevance:

10.00%

Publisher:

Abstract:

One of the nice properties of kernel classifiers such as SVMs is that they often produce sparse solutions. However, the decision functions of these classifiers cannot always be used to estimate the conditional probability of the class label. We investigate the relationship between these two properties and show that these are intimately related: sparseness does not occur when the conditional probabilities can be unambiguously estimated. We consider a family of convex loss functions and derive sharp asymptotic results for the fraction of data that becomes support vectors. This enables us to characterize the exact trade-off between sparseness and the ability to estimate conditional probabilities for these loss functions.
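A hedged illustration of the trade-off on a toy dataset, assuming scikit-learn: the hinge-loss SVM keeps only a fraction of the data as support vectors (sparseness) but its decision values are not conditional probabilities, while logistic loss yields probability estimates but gives every training point nonzero weight.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

svm = SVC(kernel="rbf", C=1.0).fit(X, y)
print("support-vector fraction:", len(svm.support_) / len(X))  # sparse

logit = LogisticRegression(max_iter=1000).fit(X, y)
print("P(Y=1|x) for first point:", logit.predict_proba(X[:1])[0, 1])
```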

Relevance:

10.00%

Publisher:

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semi-definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space -- classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm -- using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.

Relevance:

10.00%

Publisher:

Abstract:

As the graphics race subsides and gamers grow weary of predictable and deterministic game characters, game developers must put aside their “old faithful” finite state machines and look to more advanced techniques that give users the gaming experience they crave. The next industry breakthrough will come from characters that behave realistically and can learn and adapt, rather than from more polygons, higher-resolution textures and more frames per second. This paper explores the various artificial intelligence techniques currently used by game developers, as well as techniques that are new to the industry. The techniques covered are finite state machines, scripting, agents, flocking, fuzzy logic and fuzzy state machines, decision trees, neural networks, genetic algorithms and extensible AI. This paper introduces each of these techniques, explains how they can be applied to games, and describes how commercial games currently make use of them. Finally, the effectiveness of these techniques and their future role in the industry are evaluated.
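As a point of reference for the “old faithful” technique, a finite state machine for a game character can be as small as a transition table; the states and events below are illustrative assumptions, not drawn from a specific game.

```python
# A transition-table finite state machine for a hypothetical guard character.
class GuardFSM:
    TRANSITIONS = {
        ("patrol", "player_spotted"): "chase",
        ("chase",  "player_lost"):    "patrol",
        ("chase",  "low_health"):     "flee",
        ("flee",   "healed"):         "patrol",
    }

    def __init__(self):
        self.state = "patrol"

    def handle(self, event):
        # look up (state, event); remain in the current state if no rule matches
        self.state = self.TRANSITIONS.get((self.state, event), self.state)

guard = GuardFSM()
guard.handle("player_spotted")
print(guard.state)  # "chase"
```

The determinism that the abstract criticises is visible here: the same (state, event) pair always produces the same behaviour, which is exactly what learning and adaptive techniques aim to move beyond.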

Relevance:

10.00%

Publisher:

Abstract:

We have used microarray gene expression profiling and machine learning to predict the presence of BRAF mutations in a panel of 61 melanoma cell lines. The BRAF gene was found to be mutated in 42 samples (69%) and intragenic mutations of the NRAS gene were detected in seven samples (11%). No cell line carried mutations of both genes. Using support vector machines, we have built a classifier that differentiates between melanoma cell lines based on BRAF mutation status. As few as 83 genes are able to discriminate between BRAF mutant and BRAF wild-type samples with clear separation observed using hierarchical clustering. Multidimensional scaling was used to visualize the relationship between a BRAF mutation signature and that of a generalized mitogen-activated protein kinase (MAPK) activation (either BRAF or NRAS mutation) in the context of the discriminating gene list. We observed that samples carrying NRAS mutations lie somewhere between those with or without BRAF mutations. These observations suggest that there are gene-specific mutation signals in addition to a common MAPK activation that result from the pleiotropic effects of either BRAF or NRAS on other signaling pathways, leading to measurably different transcriptional changes.
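A hedged sketch of the kind of pipeline described: select a small panel of discriminating genes with a univariate test, then classify BRAF mutation status with a linear support vector machine. The data below are random placeholders with the study's dimensions, not the actual expression data, and the feature-selection test is an assumption.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((61, 5000))          # 61 cell lines x gene expression values
y = rng.integers(0, 2, 61)          # 1 = BRAF mutant, 0 = wild type (placeholder)

# keep the 83 most discriminating genes, then fit a linear SVM
clf = make_pipeline(SelectKBest(f_classif, k=83), SVC(kernel="linear"))
print(cross_val_score(clf, X, y, cv=5).mean())
```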

Relevance:

10.00%

Publisher:

Abstract:

RÉSUMÉ. Communication disorders are generally taken into account in information retrieval systems, such as those found on the Web, through interfaces using modalities that do not involve reading or writing. Few applications exist to help users who struggle with the textual modality. We propose taking phonological awareness into account to assist users who have difficulty writing queries (dysorthographia) or reading documents (dyslexia). First, a system for rewriting and interpreting the queries a user types is proposed: drawing on the causes of dysorthographia and on the examples at our disposal, a system combining an editorial approach (akin to a spell-checker) with an oral approach (an automatic transcription system) proved the most appropriate. Second, a machine learning method uses specific criteria, such as grapho-phonemic cohesion, to estimate the readability of a sentence, and then of a text. ABSTRACT. Most applications intend to help disabled users in the information retrieval process by proposing non-textual modalities. This paper introduces specific parameters linked to phonological awareness in the textual modality. These will enhance the ability of systems to deal with orthographic issues and to adapt results to the reader, for example when the reader is dyslexic. We propose a phonology-based sentence-level rewriting system that combines spelling correction, speech synthesis and automatic speech recognition, evaluated on a corpus of questions collected from dyslexic children. We also propose a sentence readability measure involving phonetic parameters such as grapho-phonemic cohesion, learned on a corpus of reading times of sentences read by dyslexic children.
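As a rough illustration of the learned readability measure, the sketch below fits a linear model over hand-crafted phonetic features; the feature set, values and target reading times are invented assumptions, not the paper's data or model.

```python
import numpy as np
from sklearn.linear_model import Ridge

# each row: [grapho-phonemic cohesion, sentence length, rare-word ratio]
X = np.array([[0.9,  8, 0.10],
              [0.5, 15, 0.40],
              [0.7, 12, 0.20]])
y = np.array([2.1, 6.3, 3.8])            # reading times in seconds (placeholder)

model = Ridge(alpha=1.0).fit(X, y)
print(model.predict([[0.8, 10, 0.15]]))  # estimated reading time for a sentence
```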

Relevance:

10.00%

Publisher:

Abstract:

This research explores music in space, as experienced through performing and music-making with interactive systems. It explores how musical parameters may be presented spatially and displayed visually with a view to their exploration by a musician during performance. Spatial arrangements of musical components, especially pitches and harmonies, have been widely studied in the literature, but the current capabilities of interactive systems allow the improvisational exploration of these musical spaces as part of a performance practice. This research focuses on quantised spatial organisations of musical parameters, which can be categorised as grid music systems (GMSs), and on interactive music systems based on them. The research explores and surveys existing and historical uses of GMSs, and develops and demonstrates the use of a novel grid music system designed for whole-body interaction. Grid music systems allow spatialised input to be plotted on a two-dimensional grid layout to construct patterned music. GMSs are navigated to construct a sequence of parametric steps, for example a series of pitches, rhythmic values, a chord sequence, or terraced dynamic steps. While they are conceptually simple when only controlling one musical dimension, grid systems may be layered to enable complex and satisfying musical results. These systems have proved a viable, effective, accessible and engaging means of music-making for the general user as well as the musician. GMSs have been widely used in electronic and digital music technologies, where they have generally been applied to small portable devices and software systems such as step sequencers and drum machines. This research shows that by scaling up a grid music system, music-making and musical improvisation are enhanced, gaining several advantages: (1) Full-body location becomes the spatial input to the grid. The system becomes a partially immersive one in four related ways: spatially, graphically, sonically and musically. (2) Detection of body location by tracking enables hands-free operation, thereby allowing the playing of a musical instrument in addition to “playing” the grid system. (3) Visual information regarding musical parameters may be enhanced so that the performer may fully engage with existing spatial knowledge of musical materials. The result is that existing spatial knowledge is overlaid on, and combined with, music-space. Music-space is a new concept produced by the research, and is similar to notions of other musical spaces including soundscape, acoustic space, Smalley's “circumspace” and “immersive space” (2007, 48-52), and Lotis's “ambiophony” (2003), but is rather more textural and “alive”, and therefore very conducive to interaction. Music-space is that space occupied by music, set within normal space, which may be perceived by a person located within, or moving around in, that space. Music-space has a perceivable “texture” made of tensions and relaxations, and contains spatial patterns of these formed by musical elements such as notes, harmonies, and sounds, changing over time. The music may be performed by live musicians, created electronically, or be prerecorded. Large-scale GMSs have the capability not only to interactively display musical information as music-representative space, but to allow music-space to co-exist with it. Moving around the grid, the performer may interact in real time with musical materials in music-space, as they form over squares or move in paths.
Additionally, he/she may sense the textural matrix of the music-space while being immersed in surround sound covering the grid. The HarmonyGrid is a new computer-based interactive performance system developed during this research that provides a generative music-making system intended to accompany, or play along with, an improvising musician. This large-scale GMS employs full-body motion tracking over a projected grid. Playing with the system creates an enhanced performance employing live interactive music, along with graphical and spatial activity. Although one other experimental system provides certain aspects of immersive music-making, currently only the HarmonyGrid provides an environment to explore and experience music-space in a GMS.
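A minimal sketch of the grid-music-system idea described above: a two-dimensional grid in which one axis steps through time and the other selects a pitch, so that a plotted cell (toggled programmatically here; by body location in the HarmonyGrid) contributes one note per step. The scale mapping is an illustrative assumption.

```python
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major scale as MIDI note numbers

class GridSequencer:
    def __init__(self, steps=8, rows=8):
        self.grid = [[False] * rows for _ in range(steps)]

    def toggle(self, step, row):
        self.grid[step][row] = not self.grid[step][row]  # plot/remove a cell

    def notes_at(self, step):
        # all active rows at this time step, mapped to scale pitches
        return [SCALE[r] for r, on in enumerate(self.grid[step]) if on]

seq = GridSequencer()
seq.toggle(0, 0)
seq.toggle(0, 4)
print(seq.notes_at(0))  # [60, 67]: a C-G dyad on the first step
```

Layering several such grids, one per musical dimension (pitch, rhythm, dynamics), gives the richer behaviour the abstract describes.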

Relevance:

10.00%

Publisher:

Abstract:

Due to their large surface area, complex chemical composition and high alveolar deposition rate, ultrafine particles (UFPs; < 0.1 µm) pose a significant risk to human health, and their toxicological effects have been acknowledged by the World Health Organisation. Since people spend most of their time indoors, there is growing concern about the UFPs present in some indoor environments. Recent studies have shown that office machines, in particular laser printers, are a significant indoor source of UFPs. The majority of printer-generated UFPs are organic carbon, and it is unlikely that these particles are emitted directly from the printer or its supplies (such as paper and toner powder); it was therefore hypothesised that these UFPs are secondary organic aerosols (SOAs). Considering the widespread use of printers and human exposure to these particles, understanding the processes involved in particle formation is of critical importance. However, few studies have investigated the nature (e.g. volatility, hygroscopicity, composition, size distribution and mixing state) and formation mechanisms of these particles. In order to address this gap in scientific knowledge, a comprehensive study including state-of-the-art instrumental methods was conducted to characterise the real-time emissions from modern commercial laser printers, including particles, volatile organic compounds (VOCs) and ozone (O3). The morphology, elemental composition, volatility and hygroscopicity of the generated particles were also examined. The large set of experimental results was analysed and interpreted to provide insight into:

(1) Emission profiles of laser printers: UFPs dominated the number concentrations of generated particles, with a quasi-unimodal size distribution observed in all tests. These particles were volatile, non-hygroscopic and mixed both externally and internally. Particle microanalysis indicated that semi-volatile organic compounds made up the dominant fraction of these particles, with only trace quantities of particles containing Ca and Fe. Furthermore, almost all laser printers tested in this study emitted measurable concentrations of VOCs and O3. During printing, submicron particle concentrations correlated positively with O3 concentrations and, in contrast, negatively with total VOC concentrations in all tests. These results showed that UFPs generated by laser printers are mainly SOAs.

(2) Sources and precursors of generated particles: To identify possible particle sources, the particle formation potentials of both printer components (e.g. fuser roller and lubricant oil) and supplies (e.g. paper and toner powder) were investigated using furnace tests. The VOCs emitted during the experiments were sampled and identified to provide information about particle precursors. The results suggested that all of the tested materials can generate particles upon heating. Nine unsaturated VOCs were identified in the emissions from paper and toner, which may contribute to the formation of UFPs through oxidation reactions with ozone.

(3) Factors influencing particle emission: These factors were investigated by comparing two popular laser printers, one showing particle emissions three orders of magnitude higher than the other. The effects of toner coverage, printing history, type of paper and toner, and the working temperature of the fuser roller on particle number emissions were examined. The results showed that the temperature of the fuser roller is a key factor driving particle emission. Across 30 different types of laser printers, a systematic positive correlation was observed between temperature and particle number emissions for printers using the same heating technology with a similar structure and fuser material. Temperature fluctuations were associated with intense bursts of particles and may therefore affect particle emissions. Furthermore, the type of paper and toner powder contributed to particle emissions, while no apparent relationship was observed between toner coverage and levels of submicron particles.

(4) Mechanisms of SOA formation, growth and ageing: The overall hypothesis that UFPs are formed by reactions between the VOCs and O3 emitted from laser printers was examined. The results supported this hypothesis and suggested that O3 may also play a role in particle ageing. In addition, knowledge about the mixing state of the generated particles was used to explore the detailed processes of particle formation under different printing scenarios, including warm-up, normal printing, and printing without toner. The results indicated that polymerisation may have occurred on the surface of the generated particles to produce thermoplastic polymers, which may account for the expandable character of some particles. Furthermore, toner and other particle residues left on the idling belt from previous print jobs were a clear contributing factor in the formation of laser-printer-emitted particles.

In summary, this study not only improves scientific understanding of the nature of printer-generated particles, but also provides significant insight into the formation and ageing mechanisms of SOAs in the indoor environment. The outcomes will be beneficial to governments, industry and individuals.

Relevance:

10.00%

Publisher:

Abstract:

Software forms an important part of the interface between citizens and their government. An increasing number of government functions are being performed, controlled, or delivered electronically. This software, like all language, is never value-neutral; to some extent it must reflect the values of its coder and proprietor. The move that many governments are making towards e-governance, and the increasing reliance being placed upon software in government, necessitate a rethinking of the relationships of power and control embodied in software.

Relevance:

10.00%

Publisher:

Abstract:

The main aim of this thesis is to analyse and optimise a public hospital Emergency Department. The Emergency Department (ED) is a complex system with limited resources and a high demand for them. Adding to the complexity is the stochastic nature of almost every element and characteristic in the ED. Interaction with other functional areas complicates the system further, as these areas have a huge impact on the ED while the ED is powerless to change them. It is therefore imperative that operations research (OR) be applied to the ED to improve its performance within the constraints of the system. The main characteristics of the system to optimise included tardiness, adherence to waiting-time targets, access block and length of stay. A validated and verified simulation model was built to model the real-life system. This enabled detailed analysis of resources and flow without disruption to the actual ED, and allowed a wide range of policies for the ED and a variety of resources to be investigated. Of particular interest were the number and type of beds in the ED and the shift times of physicians. Notably, neither of these resources works in isolation, and to optimise the system both need to be investigated in tandem. The ED was likened to a flow shop scheduling problem, with patients and beds playing the roles of the jobs and machines typically found in manufacturing problems. This enabled an analytic scheduling approach. Constructive heuristics were developed to reactively schedule the system in real time, and these improved the performance of the system. Metaheuristics that optimise the system were also developed and analysed. An innovative hybrid simulated annealing and tabu search algorithm was developed that outperformed both simulated annealing and tabu search by combining some of their features; the new algorithm reaches a better solution, and does so in a shorter time.
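A minimal sketch of the hybrid idea under assumptions: simulated annealing whose candidate moves are filtered through a tabu list, so recently visited solutions are not revisited while the temperature schedule still permits uphill moves. The neighbourhood and cost functions are placeholders, not the thesis's actual scheduling model.

```python
import math
import random
from collections import deque

def hybrid_sa_tabu(initial, neighbours, cost, t0=100.0, alpha=0.95,
                   tabu_len=50, iters=10_000):
    """Simulated annealing with a tabu list filtering the neighbourhood."""
    current, best = initial, initial
    tabu = deque(maxlen=tabu_len)        # short-term memory of visited moves
    t = t0
    for _ in range(iters):
        allowed = [n for n in neighbours(current) if n not in tabu]
        candidate = random.choice(allowed or [current])
        delta = cost(candidate) - cost(current)
        # SA acceptance: always take improvements; take worse moves with
        # probability exp(-delta / t), which shrinks as t cools
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
            tabu.append(candidate)       # forbid revisiting it for a while
            if cost(current) < cost(best):
                best = current
        t *= alpha                       # geometric cooling schedule
    return best
```

Filtering the neighbourhood through the tabu list is one natural way to combine the two metaheuristics; the thesis's actual hybrid may combine their features differently.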

Relevance:

10.00%

Publisher:

Abstract:

It is generally understood that the patent system exists to encourage the conception and disclosure of new and useful inventions embodied in machines and other physical devices, along with new methods that physically transform matter from one state to another. What is not well understood is whether, and to what extent, the patent system is to encourage and protect the conception and disclosure of inventions that are non-physical methods – namely those that do not result in a physical transformation of matter. This issue was considered in Grant v Commissioner of Patents. In that case the Full Court of the Federal Court of Australia held that an invention must involve a physical effect or transformation to be patentable subject matter. In doing so, it introduced a physicality requirement into Australian law. What this article seeks to establish is whether the court’s decision is consistent with the case law on point. It does so by examining the key common law cases that followed the High Court’s watershed decision in National Research Development Corporation v Commissioner of Patents, the undisputed authoritative statement of principle in regard to the patentable subject matter standard in Australia. This is done with a view to determining whether there is anything in those cases that supports the view that the Australian patentable subject matter test contains a physicality requirement.