892 results for Adding machine
Abstract:
With the emergence of support for multiple languages on the Internet, machine translation (MT) technologies are indispensable for communication between speakers of different languages. Recent research has begun to explore tree-based machine translation systems that use syntactic and morphological information. This work aims at the development of syntax-based machine translation from English to Malayalam by adding case information during translation. The system identifies general rules for various sentence patterns in English. These rules are generated using the part-of-speech (POS) tag information of the texts. Word reordering based on the syntax tree is used to improve the translation quality of the system. The system uses a bilingual English–Malayalam dictionary for translation.
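As a rough illustration of POS-based reordering, the sketch below applies a single simplified rule: English is broadly subject-verb-object (SVO) while Malayalam is subject-object-verb (SOV), so the verb group is moved to clause-final position. The tag set and the rule are assumptions for illustration, not the rule inventory developed in the paper.

```python
# Minimal sketch of POS-pattern-based reordering (illustrative only).
# The Penn-style tags and the single SVO-to-SOV rule are assumptions.

def reorder_svo_to_sov(tagged):
    """tagged: list of (word, POS) pairs for one English clause."""
    verbs = [(w, t) for w, t in tagged if t.startswith("VB")]
    rest = [(w, t) for w, t in tagged if not t.startswith("VB")]
    return rest + verbs  # push the verb group to clause-final position

sentence = [("John", "NNP"), ("reads", "VBZ"), ("books", "NNS")]
print(reorder_svo_to_sov(sentence))
# [('John', 'NNP'), ('books', 'NNS'), ('reads', 'VBZ')]
```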
Abstract:
Statistical Machine Translation (SMT) is one of the promising applications in the field of Natural Language Processing. The translation process in SMT is carried out by acquiring translation rules automatically from parallel corpora. However, for many language pairs (e.g. Malayalam–English), such corpora are available only in very limited quantities, so a large portion of the phrases encountered at run time will be unknown. This paper focuses on methods for handling such out-of-vocabulary (OOV) words in Malayalam that cannot be translated to English using conventional phrase-based statistical machine translation systems. Each OOV word in the source sentence is pre-processed to obtain its root word and suffix. Different inflected forms of the OOV root are generated, and the phrase translation table of the translation model is searched for matches to these word variants. A vocabulary filter chooses the best among the translations of these variants by comparing unigram counts. The phrase entries are also searched for a match to the OOV suffix, and the target translations are filtered accordingly. The filtered phrases are then structured, and the SMT translation model is extended by adding the OOV word with its new phrase translations. Manual evaluation shows that the number of OOV words in the input is reduced considerably.
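A minimal sketch of this OOV-handling pipeline is given below, assuming stand-in components for the morphological splitter, the inflection generator, the phrase table and the unigram counts; the suffix-matching step is omitted for brevity, and the real system operates on a full SMT phrase table rather than these toy structures.

```python
# Sketch of the OOV pipeline with hypothetical stand-in components.

def handle_oov(oov_word, split_root_suffix, inflect, phrase_table, unigram_count):
    root, suffix = split_root_suffix(oov_word)   # pre-process into root + suffix
    candidates = []
    for variant in inflect(root):                # generate inflected forms of the root
        candidates.extend(phrase_table.get(variant, []))  # look up known phrases
    if not candidates:
        return None                              # still untranslatable
    return max(candidates, key=unigram_count)    # vocabulary filter: best unigram count

# Toy usage (all names and counts hypothetical):
best = handle_oov(
    "padichu",
    split_root_suffix=lambda w: (w[:3], w[3:]),
    inflect=lambda root: [root + "i", root + "ikk"],
    phrase_table={"padi": ["study"], "padikk": ["learn", "study"]},
    unigram_count={"study": 120, "learn": 80}.get,
)
print(best)  # 'study'
```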
Abstract:
Establishing metrics to assess machine translation (MT) systems automatically is now crucial owing to the widespread use of MT over the web. In this study we show that such evaluation can be done by modeling text as complex networks. Specifically, we extend our previous work by employing additional complex-network metrics, whose results were used as input for machine learning methods and allowed MT texts of distinct qualities to be distinguished. We also show that the node-to-node mapping between source and target texts (English-Portuguese and Spanish-Portuguese pairs) can be improved by adding further hierarchical levels for the metrics out-degree, in-degree, hierarchical common degree, cluster coefficient, inter-ring degree, intra-ring degree and convergence ratio. The results presented here amount to a proof of principle that capturing a wider context through hierarchical levels can be combined with machine learning methods to yield an approach for assessing the quality of MT systems.
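To make the representation concrete, here is a minimal sketch of a text modeled as a word co-occurrence network, with first-level in- and out-degree extracted per node; the hierarchical metrics named above (inter-ring degree, intra-ring degree, convergence ratio, etc.) are specific to the paper and are not reproduced here.

```python
# Sketch: model a text as a directed word co-occurrence network and
# extract simple degree features per node.
import networkx as nx

def cooccurrence_network(text):
    words = text.lower().split()
    g = nx.DiGraph()
    for a, b in zip(words, words[1:]):   # one edge per adjacent word pair
        g.add_edge(a, b)
    return g

g = cooccurrence_network("the cat sat on the mat and the cat slept")
features = {w: (g.out_degree(w), g.in_degree(w)) for w in g}
print(features["the"])   # (out-degree, in-degree) of node 'the' -> (2, 2)
```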
Abstract:
This paper discusses a novel hybrid approach for text categorization that combines a machine learning algorithm, which provides a base model trained with a labeled corpus, with a rule-based expert system, which improves the results of the base classifier by filtering false positives and dealing with false negatives. The main advantage is that the system can be easily fine-tuned by adding specific rules for noisy or conflicting categories that have not been successfully trained. We also describe an implementation based on k-Nearest Neighbor and a simple rule language for expressing lists of positive, negative and relevant (multiword) terms appearing in the input text. The system is evaluated in several scenarios, including the popular Reuters-21578 news corpus for comparison with other approaches, and categorization using IPTC metadata, the EUROVOC thesaurus and others. Results show that this approach achieves a precision comparable to top-ranked methods, with the added value that it does not require a demanding human-expert workload for training.
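A minimal sketch of the hybrid scheme follows, assuming a k-NN base model over TF-IDF features and an invented negative-term rule; the paper defines its own, richer rule language with positive, negative and relevant terms.

```python
# Sketch of the hybrid idea: a trained base classifier whose output is
# post-filtered by hand-written term rules. Categories, texts and the rule
# are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

train_texts = ["stocks fell sharply", "rain expected tomorrow",
               "shares rallied today", "sunny skies ahead"]
train_labels = ["finance", "weather", "finance", "weather"]

vec = TfidfVectorizer()
knn = KNeighborsClassifier(n_neighbors=1).fit(vec.fit_transform(train_texts),
                                              train_labels)

# rule: per category, terms that must NOT appear (negative terms)
negative_terms = {"finance": ["forecast"], "weather": ["shares"]}

def classify(text):
    label = knn.predict(vec.transform([text]))[0]          # base model decision
    if any(t in text for t in negative_terms.get(label, [])):
        return None                                        # rule filters a false positive
    return label

print(classify("shares fell sharply"))   # 'finance'
```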
Abstract:
Traditional high-speed machinery actuators are powered and coordinated by mechanical linkages driven from a central drive, but these linkages may be replaced by independently synchronised electric drives. Problems associated with utilising such electric drives for this form of machinery were investigated. The research concentrated on a high-speed rod-making machine, which required control of high inertias (0.01-0.5 kgm^2) at continuous high speed (2500 r/min) with low relative phase errors between two drives (0.0025 radians). Traditional minimum-energy drive selection techniques for incremental motions were not applicable to continuous applications, which require negligible energy dissipation, so new selection techniques were developed. A brushless configuration constant enabled the comparison of seven different servo systems; the rare-earth brushless drives had the best power rates, a key performance measure. Simulation was used to review control strategies, leading to the design of a microprocessor controller with a proportional velocity loop inside a proportional position loop with velocity feedforward. Local control schemes were investigated as a means of reducing relative errors between drives: the slave of a master/slave scheme compensates for the master's errors; the matched scheme uses drives with similar absolute errors so that the relative error is minimised; and the feedforward scheme minimises error by adding compensation from previous knowledge. Simulation gave the approximate velocity loop bandwidth and position loop gain required to meet the specification. Theoretical limits for these parameters were defined in terms of digital sampling delays, quantisation, and system phase shifts. Performance degradation due to mechanical backlash was evaluated. Thus any drive could be checked to ensure that the performance specification could be realised. A two-drive demonstrator was commissioned with 0.01 kgm^2 loads. Using simulation, the performance of one drive was improved by increasing the velocity loop bandwidth fourfold. With the master/slave scheme, relative errors were within 0.0024 radians at a constant 2500 r/min for two 0.01 kgm^2 loads.
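The controller structure named above can be pictured as a discrete-time simulation: a proportional velocity loop nested inside a proportional position loop, with velocity feedforward. The gains and time step below are illustrative values, not the thesis's tuned parameters.

```python
# Sketch of a P position loop around a P velocity loop with velocity
# feedforward, driving a pure-inertia load. All numeric values are assumed.
import math

J, dt = 0.01, 1e-4                 # inertia [kgm^2], time step [s]
Kp_pos, Kp_vel = 50.0, 5.0         # proportional gains (illustrative)

pos = vel = 0.0
w_ref = 2500 * 2 * math.pi / 60    # 2500 r/min reference speed [rad/s]

for step in range(10000):          # simulate 1 s
    pos_ref = w_ref * step * dt                    # ramp position reference
    vel_ref = Kp_pos * (pos_ref - pos) + w_ref     # position loop + feedforward
    torque = Kp_vel * (vel_ref - vel)              # inner velocity loop
    vel += (torque / J) * dt                       # integrate load dynamics
    pos += vel * dt

print(f"steady-state speed error: {w_ref - vel:.4f} rad/s")
```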
Abstract:
In Model-Driven Engineering (MDE), the developer creates a model using a language such as the Unified Modeling Language (UML) or UML for Real-Time (UML-RT), and tools such as Papyrus or Papyrus-RT generate code from that model. Tracing gives developers insight into their application as it runs, such as which events occur and their timing. We add monitoring capabilities, using the Linux Trace Toolkit: next generation (LTTng), to models created in UML-RT with Papyrus-RT. The implementation requires changing the code generator to add tracing statements to the generated code for the events the user wants to monitor. We also change the makefile to automate the build process, and we create an Extensible Markup Language (XML) file that allows developers to view their traces visually in Trace Compass, an Eclipse-based trace viewing tool. Finally, we validate our results using three models that we create and trace.
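The generator-side change can be pictured as a pass that injects a tracepoint call at the entry of each generated handler for a monitored event. The sketch below is a stand-alone approximation: the handler pattern and the LTTng provider/event names are hypothetical, and the real change lives inside the Papyrus-RT code generator itself.

```python
# Sketch of a post-processing pass that injects an LTTng tracepoint call
# into generated C++ handlers. Pattern and provider name are assumptions.
import re

TRACE_CALL = '    tracepoint(model_events, {event});\n'   # hypothetical provider/event

def inject_tracing(cpp_source, monitored_events):
    out = []
    for line in cpp_source.splitlines(keepends=True):
        out.append(line)
        m = re.match(r'void Capsule::on_(\w+)\(\) {', line.strip())
        if m and m.group(1) in monitored_events:          # entry of a monitored handler
            out.append(TRACE_CALL.format(event=m.group(1)))
    return "".join(out)

generated = "void Capsule::on_start() {\n    init();\n}\n"
print(inject_tracing(generated, {"start"}))
```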
Abstract:
Historically, domestic tasks such as preparing food and washing and drying clothes and dishes were done by hand. In a modern home many of these chores are taken care of by machines such as washing machines, dishwashers and tumble dryers. When the first such machines came on the market, customers were happy that they worked at all! Today, the costs of electricity and customers' environmental awareness are high, so features such as low electricity, water and detergent use strongly influence which household machine the customer will buy. One way to achieve lower electricity usage for the tumble dryer and the dishwasher is to add a heat pump system. The function of a heat pump system is to extract heat from a source at a lower temperature (heat source) and reject it to a sink at a higher temperature (heat sink). Heat pump systems have been used for a long time in refrigerators and freezers, and that industry has driven the development of small, high-quality, low-price heat pump components. The low price of good-quality heat pump components, along with an increased willingness to pay extra for lower electricity usage and environmental impact, makes it possible to introduce heat pump systems in other household products. However, there is a high risk of failure with new features: a number of household manufacturers no longer exist because they introduced poorly implemented new features, which resulted in low quality and product performance. A manufacturer must predict whether the future value of a feature is high enough for the customer chain to pay for it. The challenge for the manufacturer is to develop and produce a high-performance heat pump feature in a household product with high quality, predict the future willingness to pay for it, and launch it at the right moment in order to succeed. Tumble dryers with heat pump systems have been on the market since 2000. Paper I reports on the development of a transient simulation model of a commercial heat pump tumble dryer. The measured and simulated results were compared and showed good agreement. The influence of the size of the compressor and the condenser was investigated using the validated simulation model. The results show that increasing the cylinder volume of the compressor by 50% decreases the drying time by 14% without using more electricity. Paper II is a concept study of adding a heat pump system to a dishwasher in order to decrease total electricity usage. The dishwasher, dishware and water are heated by the condenser, and the evaporator absorbs heat from a water tank; the majority of the heat transfer to the evaporator occurs when ice is generated in the water tank. An experimental setup and a transient simulation model of a heat pump dishwasher were developed. The simulation results show a 24% reduction in electricity use compared to a conventional dishwasher heated with an electric element. The simulation model was based on an experimental setup that was not optimised; during the study it became apparent that electricity usage can be decreased even further with the next experimental setup.
Abstract:
The scientific success of the LHC experiments at CERN depends strongly on the availability of computing resources that efficiently store, process, and analyse the data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world through high-performance networks. The LHC has an ambitious experimental programme for the coming years, which includes large investments and improvements both in the detector hardware and in the software and computing systems, in order to cope with the huge increase in event rate expected from the High-Luminosity LHC (HL-LHC) phase and, consequently, with the huge amount of data that will be produced. In recent years, Artificial Intelligence has taken on a relevant role in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been used successfully in many areas of HEP, such as online and offline reconstruction, detector simulation, object reconstruction and identification, and Monte Carlo generation, and they will surely be crucial in the HL-LHC phase. This thesis contributes to a CMS R&D project on an ML "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. The framework has been updated with new features in the data preprocessing phase, giving the user more flexibility. Since the MLaaS4HEP framework is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
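A toy version of the kind of pipeline the service automates might look like the sketch below. The file name, tree name and branch names are hypothetical, and MLaaS4HEP itself streams ROOT files of arbitrary size in a model-agnostic way, which this fragment does not attempt.

```python
# Sketch of a read-ROOT / preprocess / train pipeline. All names assumed.
import uproot
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

with uproot.open("events.root") as f:                 # hypothetical file
    tree = f["Events"]                                # hypothetical tree
    X = np.stack([tree[b].array(library="np")
                  for b in ("pt", "eta", "phi")], axis=1)   # assumed branches
    y = tree["label"].array(library="np")             # assumed target branch

model = GradientBoostingClassifier().fit(X, y)        # any model would do here
print("training accuracy:", model.score(X, y))
```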
Abstract:
PURPOSE: To evaluate the sensitivity and specificity of machine learning classifiers (MLCs) for glaucoma diagnosis using spectral-domain OCT (SD-OCT) and standard automated perimetry (SAP). METHODS: Observational cross-sectional study. Sixty-two glaucoma patients and 48 healthy individuals were included. All patients underwent a complete ophthalmologic examination, achromatic standard automated perimetry (SAP) and retinal nerve fiber layer (RNFL) imaging with SD-OCT (Cirrus HD-OCT; Carl Zeiss Meditec Inc., Dublin, California). Receiver operating characteristic (ROC) curves were obtained for all SD-OCT parameters and global indices of SAP. Subsequently, the following MLCs were tested using parameters from the SD-OCT and SAP: Bagging (BAG), Naive Bayes (NB), Multilayer Perceptron (MLP), Radial Basis Function (RBF), Random Forest (RAN), Ensemble Selection (ENS), Classification Tree (CTREE), AdaBoost M1 (ADA), Support Vector Machine Linear (SVML) and Support Vector Machine Gaussian (SVMG). Areas under the receiver operating characteristic curves (aROC) obtained for isolated SAP and OCT parameters were compared with those of MLCs using OCT+SAP data. RESULTS: Combining OCT and SAP data, the MLCs' aROCs varied from 0.777 (CTREE) to 0.946 (RAN). The best OCT+SAP aROC, obtained with RAN (0.946), was significantly larger than that of the best single OCT parameter (p<0.05), but was not significantly different from the aROC obtained with the best single SAP parameter (p=0.19). CONCLUSION: Machine learning classifiers trained on OCT and SAP data can successfully discriminate between healthy and glaucomatous eyes. The combination of OCT and SAP measurements improved diagnostic accuracy compared with OCT data alone.
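The comparison protocol can be sketched with a few off-the-shelf classifiers scored by area under the ROC curve; synthetic data stands in for the OCT+SAP measurements, which are not public.

```python
# Sketch: cross-validated ROC AUC comparison across several classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# 110 synthetic "patients" with 20 stand-in OCT+SAP features
X, y = make_classification(n_samples=110, n_features=20, random_state=0)

for name, clf in [("RAN", RandomForestClassifier(random_state=0)),
                  ("BAG", BaggingClassifier(random_state=0)),
                  ("NB", GaussianNB())]:
    proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    print(name, "aROC = %.3f" % roc_auc_score(y, proba))
```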
Abstract:
The aim of this study was to evaluate the viability of using spent laying hens' meat in the manufacture of mortadella-type sausages with healthy appeal, by using vegetable oil instead of animal fat. A total of 120 Hy-Line® layer hens were distributed in a completely randomized design into two treatments of six replicates with ten birds each. The treatments were birds from the light Hy-Line® W36 and semi-heavy Hy-Line® Brown lines. Cold carcass, wing, breast and leg fillet yields were determined. Dry matter, protein, and lipid contents were determined in breast and leg fillets. The breast and leg fillets of three replicates per treatment were used to manufacture mortadella. After processing, the sausages were evaluated for proximate composition, objective color, microbiological parameters, fatty acid profile and sensory acceptance. The meat of light and semi-heavy spent hens presented good yield and composition, allowing it to be used as raw material for the manufacture of processed products. The mortadellas were safe from a microbiological point of view, and those made with semi-heavy hen fillets were redder and better accepted by consumers. All sensory attributes scored above 5 (neither liked nor disliked). Both products presented high polyunsaturated fatty acid contents and a good polyunsaturated-to-saturated fatty acid ratio. The excellent potential of meat from spent layer hens of both varieties for the manufacture of healthier mortadella-type sausage was demonstrated.
Abstract:
This work proposes a new approach using a committee machine of artificial neural networks to classify masses found in mammograms as benign or malignant. Three shape factors, three edge-sharpness measures, and 14 texture measures are used for the classification of 20 regions of interest (ROIs) related to malignant tumors and 37 ROIs related to benign masses. A group of multilayer perceptrons (MLPs) is employed as a committee machine of neural network classifiers, and the classification results are reached by combining the responses of the individual classifiers. Experiments involving changes in the learning algorithm of the committee machine are conducted. The classification accuracy is evaluated using the area Az under the receiver operating characteristic (ROC) curve. The Az result for the committee machine is compared with the Az results obtained using MLPs and single-layer perceptrons (SLPs), as well as a linear discriminant analysis (LDA) classifier. Tests are carried out using Student's t-distribution. The committee machine classifier outperforms the MLP, SLP, and LDA classifiers in the following cases: with the shape measure of spiculation index, the Az values of the four methods are, in order, 0.93, 0.84, 0.75, and 0.76; and with the edge-sharpness measure of acutance, the values are 0.79, 0.70, 0.69, and 0.74. Although the features with which improvement is obtained with the committee machine are not the same as those that provided the maximal value of Az (Az = 0.99 with some shape features, with or without the committee machine), they correspond to features that are not critically dependent on the accuracy of the boundaries of the masses, which is an important result.
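The committee idea can be sketched as several MLPs trained on the same features, with their responses combined by averaging. Synthetic feature values stand in for the paper's shape, edge-sharpness and texture measures, and simple probability averaging stands in for whatever combination rule the paper uses.

```python
# Sketch of a committee machine of MLP classifiers with averaged responses.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# 57 synthetic "ROIs" with 20 stand-in features (20 malignant, 37 benign)
X, y = make_classification(n_samples=57, n_features=20, random_state=1)

committee = [MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                           random_state=seed).fit(X, y) for seed in range(5)]

# combine the individual responses: average the predicted probabilities
avg = np.mean([m.predict_proba(X)[:, 1] for m in committee], axis=0)
labels = (avg > 0.5).astype(int)
print("committee agreement with training labels:", (labels == y).mean())
```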
Abstract:
Objective: To examine the ability of the criteria proposed by the WHO to identify pneumonia among cases presenting with wheezing, and the extent to which adding fever to the criteria alters their performance. Design: Prospective classification of 390 children aged 2-59 months with lower respiratory tract disease into five diagnostic categories, including pneumonia. The WHO criteria for the identification of pneumonia, and a set of such criteria modified by adding fever, were compared against radiographically diagnosed pneumonia as the gold standard. Results: The sensitivity of the WHO criteria was 94% for children aged <24 months and 62% for those aged >=24 months; the corresponding specificities were 20% and 16%. Adding fever to the WHO criteria improved specificity substantially (to 44% and 50%, respectively). The specificity of the WHO criteria was poor for children with wheezing (12%); adding fever improved this substantially (to 42%). The addition of fever reduced sensitivity only marginally (to 92% and 57%, respectively, in the two age groups). Conclusion: The results reaffirm that the current WHO criteria can detect pneumonia with high sensitivity, particularly among younger children, and provide evidence that the ability of these criteria to distinguish between children with pneumonia and those with wheezing diseases might be greatly enhanced by the addition of fever.
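The arithmetic behind the comparison is the usual sensitivity/specificity computation against a gold standard. The 2x2 counts below are made up, chosen only so that they reproduce the rates reported for the younger age group; the paper reports rates, not raw tables.

```python
# Sketch of sensitivity/specificity from a 2x2 table vs. a gold standard.
def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical counts mirroring the <24-month results in the abstract
sens, spec = sens_spec(tp=94, fn=6, tn=20, fp=80)       # WHO criteria
sens_f, spec_f = sens_spec(tp=92, fn=8, tn=44, fp=56)   # WHO criteria + fever
print(f"WHO:       sensitivity {sens:.0%}, specificity {spec:.0%}")
print(f"WHO+fever: sensitivity {sens_f:.0%}, specificity {spec_f:.0%}")
```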
Abstract:
With the relentless quest for improved performance driving ever tighter manufacturing tolerances, machine tools are sometimes unable to meet the desired requirements. One option for improving the tolerances of machine tools is to compensate for their errors. Among all possible sources of machine tool error, thermally induced errors are in general the most important for newer machines. The present work demonstrates the evaluation and modelling of the thermal error behaviour of a CNC cylindrical grinding machine during its warm-up period.
Abstract:
Paper products show dimensional changes when subjected to modification of their moisture content. Hygroexpansivity was investigated in a commercial paper machine operating at 1256 m/min through a set of measurements on 75 g/m^2 reprographic bleached eucalyptus pulp paper samples. The present work shows the development of hygroexpansivity in different sections of the paper machine along the manufacturing direction. The measurement results demonstrate the effects of papermaking process operations on paper hygroexpansivity and confirm fiber orientation degree, drying restraint and shrinkage, and paper tension as significant influencing factors. Structural, strength and elastic properties of the paper were also measured as a function of machine-direction position and are presented for discussion purposes.
Abstract:
This paper addresses the minimization of the mean absolute deviation from a common due date in a two-machine flowshop scheduling problem. We present heuristics that use an algorithm, based on proposed properties, which obtains an optimal schedule for a given job sequence. A new set of benchmark problems is presented for evaluating the heuristics. Computational experiments show that the developed heuristics outperform results found in the literature for problems with up to 500 jobs.
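For reference, the objective being minimized can be sketched as below: the mean absolute deviation (MAD) of job completion times from a common due date in a two-machine permutation flowshop. The processing times and due date are illustrative, and this no-idle recursion only evaluates the objective; the paper's algorithm additionally computes an optimal schedule (e.g. start times) for a given sequence.

```python
# Sketch of the MAD-from-common-due-date objective for a 2-machine flowshop.
def flowshop_mad(sequence, p1, p2, due_date):
    """sequence: job order; p1, p2: processing times on machines 1 and 2."""
    c1 = c2 = 0
    deviations = []
    for j in sequence:
        c1 += p1[j]                  # machine 1 finishes job j
        c2 = max(c1, c2) + p2[j]     # machine 2 starts when both are ready
        deviations.append(abs(c2 - due_date))
    return sum(deviations) / len(sequence)

p1 = [3, 1, 4]; p2 = [2, 5, 1]       # illustrative processing times
print(flowshop_mad([1, 0, 2], p1, p2, due_date=8))   # 1.0
```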