996 results for standard batch algorithms


Relevance:

30.00%

Publisher:

Abstract:

Food drying is widely used to preserve a product for longer periods. Owing to their high water content, fruit and vegetables spoil easily through biochemical processes within the product, improper storage, and inadequate transport options. To avoid such losses, direct drying is used, the oldest method of long-term preservation. This method, however, is outdated and cannot meet today's challenges. In the present work, a new batch dryer was developed with a diagonal airflow channel along the length of the drying chamber and without baffles. Despite the undeniable benefit of baffles, they increase construction costs and also raise the pressure drop, so the drying process consumes more energy. To achieve spatially uniform drying without baffles, the food trays were placed diagonally along the length of the dryer. The primary purpose of the diagonal channel was to direct the incoming warm air uniformly over the entire product. The airflow was simulated with ANSYS Fluent on the ANSYS Workbench platform. Two different drying-chamber geometries, diagonal and non-diagonal, were modelled, and the results showed a uniform air distribution for the diagonal airflow design. A series of experiments was carried out to evaluate the design, using potato slices as the drying material. The statistical results show a good correlation coefficient for the airflow distribution (87.09%) between the average predicted and the average measured flow velocity. To assess the effect of the uniform air distribution on quality changes, the colour of the product was measured contact-free and on-line along the entire length of the drying chamber. For this purpose, an imaging box consisting of a camera and lighting was developed. Spatial differences in this quality parameter were chosen as the criterion for evaluating the uniformity of drying in the chamber. A decisive factor for a food batch dryer is its energy consumption, so thermodynamic analyses of the dryer were performed. The energy efficiency of the system under the chosen drying conditions was calculated as 50.16%. The average energy used, in the form of electricity, was calculated as less than 16.24 MJ per kg of dried potatoes produced and less than 4.78 MJ per kg of water evaporated, at the highest drying temperature of 65 °C and a slice thickness of 5 mm, respectively. The energy and exergy analyses of the diagonal batch dryer were also compared with those of other batch dryers. Drying temperature, mass flow rate of the drying air, dryer capacity, and heater type are the important parameters for assessing the energy use of batch dryers. The diagonal batch dryer is a useful and effective way to increase drying homogeneity: the design exposes the entire product in the drying chamber to uniform air conditions instead of routing the air from one tray to the next.
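
As a quick sanity check of the reported figures, the sketch below uses hypothetical meter readings (only the 16.24 MJ/kg and 4.78 MJ/kg targets come from the abstract) to show how the two specific-energy values follow from electricity use, dried-product mass, and evaporated water mass.

```python
# Worked check of the dryer's specific energy figures. The meter readings
# below are hypothetical; only the target figures come from the abstract.

electricity_kwh = 13.5          # hypothetical electricity consumed per batch
dried_mass_kg = 3.0             # hypothetical mass of dried potatoes produced
water_evaporated_kg = 10.2      # hypothetical mass of water removed

energy_mj = electricity_kwh * 3.6              # 1 kWh = 3.6 MJ

sec_product = energy_mj / dried_mass_kg        # MJ per kg dried product
sec_water = energy_mj / water_evaporated_kg    # MJ per kg water evaporated

print(f"{sec_product:.2f} MJ/kg dried product (abstract: < 16.24)")
print(f"{sec_water:.2f} MJ/kg water evaporated (abstract: < 4.78)")
```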

Relevance:

30.00%

Publisher:

Abstract:

Natural language processing has achieved great success in a wide range of applications, producing both commercial language services and open-source language tools. However, most methods take a static or batch approach, assuming that the model has all the information it needs and makes a one-time prediction. In this dissertation, we study dynamic problems where the input comes in a sequence instead of all at once, and the output must be produced while the input is arriving. In these problems, predictions are often made based only on partial information. We see this dynamic setting in many real-time, interactive applications. These problems usually involve a trade-off between the amount of input received (cost) and the quality of the output prediction (accuracy). Therefore, the evaluation considers both objectives (e.g., by plotting a Pareto curve). Our goal is to develop a formal understanding of sequential prediction and decision-making problems in natural language processing and to propose efficient solutions. Toward this end, we present meta-algorithms that take an existing batch model and produce a dynamic model to handle sequential inputs and outputs. We build our framework upon the theory of Markov Decision Processes (MDPs), which allows learning to trade off competing objectives in a principled way. The main machine learning techniques we use are from imitation learning and reinforcement learning, and we advance current techniques to tackle problems arising in our settings. We evaluate our algorithms on a variety of applications, including dependency parsing, machine translation, and question answering. We show that our approach achieves a better cost-accuracy trade-off than both the batch approach and heuristic-based decision-making approaches. We first propose a general framework for cost-sensitive prediction, where different parts of the input come at different costs. We formulate a decision-making process that selects pieces of the input sequentially, with the selection adaptive to each instance. Our approach is evaluated on both standard classification tasks and a structured prediction task (dependency parsing). We show that it achieves prediction quality similar to methods that use all of the input, while incurring a much smaller cost. Next, we extend the framework to problems where the input is revealed incrementally in a fixed order. We study two applications: simultaneous machine translation and quiz bowl (incremental text classification). We discuss challenges in this setting and show that adding domain knowledge eases the decision-making problem. A central theme throughout the chapters is an MDP formulation of a challenging problem with sequential input/output and trade-off decisions, accompanied by a learning algorithm that solves the MDP.
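
The minimal Python sketch below illustrates the MDP framing described above, under assumptions of my own: a stand-in policy decides after each token whether to wait for more input (paying a cost) or to commit to a prediction. It illustrates the cost-accuracy trade-off, not the dissertation's learned policy.

```python
# A policy consumes tokens one at a time and commits as soon as it is
# confident. The predictor and confidence rule here are toy stand-ins.

from typing import Callable, List, Tuple

def run_episode(tokens: List[str],
                predict: Callable[[List[str]], str],
                confident: Callable[[List[str]], bool],
                cost_per_token: float = 1.0) -> Tuple[str, float]:
    """Read input incrementally; stop as soon as the policy is confident."""
    seen: List[str] = []
    for tok in tokens:
        seen.append(tok)        # action = WAIT (read one more token)
        if confident(seen):     # action = COMMIT (predict now)
            break
    return predict(seen), cost_per_token * len(seen)

prediction, cost = run_episode(
    ["the", "reactor", "converts", "waste", "to", "biogas"],
    predict=lambda s: "energy" if "waste" in s else "unknown",
    confident=lambda s: "waste" in s,
)
print(prediction, cost)   # -> energy 4.0: committed after 4 of 6 tokens
```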

Relevance:

30.00%

Publisher:

Abstract:

The dendritic cell algorithm (DCA) is an immune-inspired algorithm developed for the purpose of anomaly detection. The algorithm performs multi-sensor data fusion and correlation, which results in a ‘context-aware’ detection system. Previous applications of the DCA have included the detection of potentially malicious port-scanning activity, where it has produced high rates of true positives and low rates of false positives. In this work we compare, through experimental analysis, the performance of the DCA with that of a self-organizing map (SOM) when applied to the detection of SYN port scans. A SOM is an ideal candidate for comparison, as it shares similarities with the DCA in terms of the data fusion method employed. It is shown that the results of the two systems are comparable, and both produce false positives for the same processes. This shows that the DCA can produce anomaly detection results to the same standard as an established technique.
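
A minimal sketch of the weighted-sum signal fusion at the core of the DCA is given below; the weight values and the input signals are illustrative assumptions, not those used in the paper's experiments.

```python
# Each artificial dendritic cell fuses PAMP, danger, and safe signals into
# interim outputs via a weighted sum. Weights are illustrative only.

import numpy as np

# Rows: output signals (CSM, semi-mature, mature); columns: inputs
# (PAMP, danger, safe). Safe signals suppress the "mature" (anomalous) output.
W = np.array([[2.0,  1.0,  2.0],    # costimulation (CSM)
              [0.0,  0.0,  3.0],    # semi-mature (normal context)
              [2.0,  1.0, -3.0]])   # mature (anomalous context)

def fuse(pamp: float, danger: float, safe: float) -> np.ndarray:
    return W @ np.array([pamp, danger, safe])

csm, semi, mature = fuse(pamp=0.8, danger=0.6, safe=0.1)
# Over many cells, a process is flagged anomalous when mature > semi-mature.
print(f"CSM={csm:.2f}, semi-mature={semi:.2f}, mature={mature:.2f}")
```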

Relevance:

30.00%

Publisher:

Abstract:

Image and video compression play a major role in the world today, allowing the storage and transmission of large multimedia content volumes. However, the processing of this information requires high computational resources, so improving the computational performance of these compression algorithms is very important. The Multidimensional Multiscale Parser (MMP) is a pattern-matching-based compression algorithm for multimedia contents, namely images, achieving high compression ratios while maintaining good image quality (Rodrigues et al. [2008]). However, in comparison with other existing algorithms, this algorithm takes considerable time to execute. Therefore, two parallel implementations for GPUs were proposed by Ribeiro [2016] and Silva [2015], in CUDA and OpenCL-GPU, respectively. In this dissertation, to complement that work, we propose two parallel versions that run the MMP algorithm on the CPU: one resorting to OpenMP and another that converts the existing OpenCL-GPU code into OpenCL-CPU. The proposed solutions improve the computational performance of MMP by factors of 3 and 2.7, respectively. High Efficiency Video Coding (HEVC/H.265) is the most recent standard for image and video compression. Its impressive compression performance makes it a target for many adaptations, particularly for holoscopic (light field) image/video processing. Some of the proposed modifications to encode this new multimedia content are based on geometry-based disparity compensation (SS), developed by Conti et al. [2014], and a Geometric Transformations (GT) module, proposed by Monteiro et al. [2015]. These compression algorithms for holoscopic images based on HEVC implement a specific search for similar micro-images that is more efficient than the one performed by HEVC, but their implementation is considerably slower than HEVC. To achieve better execution times, we chose the OpenCL API as the GPU-enabling language to increase the module's performance. With its most costly setting, we are able to reduce the GT module's execution time from 6.9 days to less than 4 hours, effectively attaining a speedup of 45×.
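
The CPU and GPU ports exploit the fact that candidate-block matching in a pattern-matching coder is independent per candidate. The Python sketch below illustrates that data-parallel structure with a generic block-matching search; it is not the actual MMP implementation.

```python
# Generic parallel block matching: the cost of each dictionary candidate is
# independent, so the search parallelizes trivially across workers.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def sse(pair):
    """Sum of squared errors between the target block and one candidate."""
    target, candidate = pair
    return float(np.sum((target - candidate) ** 2))

def best_match(target, dictionary):
    with ProcessPoolExecutor() as pool:
        costs = list(pool.map(sse, [(target, c) for c in dictionary]))
    return int(np.argmin(costs))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.random((8, 8))
    dictionary = [rng.random((8, 8)) for _ in range(256)]
    print("best candidate:", best_match(block, dictionary))
```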

Relevance:

30.00%

Publisher:

Abstract:

Anaerobic digestion (AD) of wastewater is a very interesting option for waste valorization, energy production, and environmental protection. It is a complex, naturally occurring process that can take place inside bioreactors. The capability of predicting the operation of such bioreactors is important for optimizing the design and the operating conditions of the reactors, which in part justifies the numerous AD models presently available. The existing AD models are not universal; they have to be inferred from prior knowledge and rely on existing experimental data. Among the tasks involved in developing a dynamical model for AD, the estimation of parameters is one of the most challenging. This paper presents the identifiability analysis of a nonlinear dynamical model for a batch reactor. Particular attention is given to the structural identifiability of the model, which concerns the uniqueness of the estimated parameters. To perform this analysis, the GenSSI toolbox was used. The estimation of the model parameters is achieved with genetic algorithms (GAs), which have already been used in the context of AD modelling, although not commonly. The paper discusses their advantages and disadvantages.
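
A minimal sketch of GA-based parameter estimation for a batch-reactor model is shown below. The one-substrate Monod model, the synthetic data, and the GA settings are all illustrative assumptions; the paper's actual AD model and GA configuration are not reproduced here.

```python
# Fit (mu_max, Ks) of a toy batch-growth model to synthetic data with a
# tiny genetic algorithm (truncation selection + Gaussian mutation).

import numpy as np
from scipy.integrate import solve_ivp

def simulate(mu_max, Ks, t, x0=0.5, s0=10.0, Y=0.4):
    """Batch growth: dX/dt = mu(S) X, dS/dt = -mu(S) X / Y."""
    def rhs(_, y):
        x, s = y
        mu = mu_max * s / (Ks + s)
        return [mu * x, -mu * x / Y]
    return solve_ivp(rhs, (t[0], t[-1]), [x0, s0], t_eval=t).y[1]  # substrate

rng = np.random.default_rng(1)
t = np.linspace(0, 24, 25)
data = simulate(0.3, 2.0, t) + rng.normal(0, 0.05, t.size)  # synthetic data

def fitness(p):                     # negative sum of squared residuals
    return -np.sum((simulate(p[0], p[1], t) - data) ** 2)

pop = rng.uniform([0.05, 0.1], [1.0, 10.0], size=(40, 2))
for _ in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]               # keep the best 10
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.05, (30, 2))
    pop = np.vstack([parents, np.abs(children)])
print("estimated (mu_max, Ks):", pop[np.argmax([fitness(p) for p in pop])])
```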

Relevance:

30.00%

Publisher:

Abstract:

This work studies the application of genetic algorithms to anaerobic digestion modelling, in particular when dynamical models are used. Along the way, different types of bioreactors are presented, such as batch, semi-batch, and continuous, together with their mathematical modelling. The work intends to estimate the parameter values of a two-reaction biological model. To that end, simulated data, in which only one output variable (the produced biogas) is known, are fitted to the model results. The problems associated with inverse optimization are therefore studied, using plots that provide clues to the sensitivity and identifiability associated with the problem. Particular solutions obtained by the identifiability analysis using the GENSSI and DAISY software packages are also presented. Finally, the optimization is performed using genetic algorithms. During this optimization it became necessary to improve the convergence of the genetic algorithms, which led to the development of an adaptation that we call Neighbored Genetic Algorithms (NGA1 and NGA2). To understand whether this new approach outperforms the basic genetic algorithms (BGA) and achieves the proposed goals, a study of 100 full optimization runs for each method was carried out. Results show that NGA1 and NGA2 are statistically better than BGA. However, because it was not possible to obtain consistent results, the Nelder-Mead method was used, with the estimates from the GA as initial guesses.
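
The final refinement step mentioned above, Nelder-Mead started from the GA estimate, might look like the following sketch; the quadratic objective is a stand-in for the real model-vs-data misfit.

```python
# Polish a GA estimate with Nelder-Mead (scipy). The objective below is a
# stand-in for the sum of squared residuals between simulated and measured
# biogas production; its minimum sits at p = (0.3, 2.0).

import numpy as np
from scipy.optimize import minimize

def misfit(p):
    return (p[0] - 0.3) ** 2 + 0.1 * (p[1] - 2.0) ** 2

ga_estimate = np.array([0.28, 2.31])          # hypothetical GA output
result = minimize(misfit, ga_estimate, method="Nelder-Mead")
print("refined parameters:", result.x)        # converges near (0.3, 2.0)
```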

Relevance:

30.00%

Publisher:

Abstract:

In the field of vibration qualification testing, with the popular Random Control mode of shakers, the specimen is excited by random vibrations typically specified in the form of a Power Spectral Density (PSD). The corresponding signals are stationary and Gaussian, i.e. they follow a normal distribution. Conversely, real-life excitations are frequently non-Gaussian, exhibiting high peaks and/or bursts and/or deterministic harmonic components. The so-called kurtosis is a parameter often used to statistically describe the occurrence and significance of high peak values in a random process. Since the similarity between test input profiles and real-life excitations is fundamental for qualification test reliability, methods of kurtosis control can be implemented to synthesize realistic (non-Gaussian) input signals. Durability tests are performed to check the resistance of a component to vibration-based fatigue damage. A procedure that synthesizes test excitations starting from measured data, while preserving both the damage potential and the characteristics of the reference signals, is therefore desirable. The Fatigue Damage Spectrum (FDS) is generally used to quantify the fatigue damage potential associated with the excitation. A signal synthesized for accelerated durability tests (i.e. with a limited duration) must feature the same FDS as the reference vibration computed for the component's expected lifetime. Current standard procedures are efficient at synthesizing signals in the form of a PSD, but prove inaccurate if the reference data are non-Gaussian. This work presents novel algorithms for the synthesis of accelerated durability test profiles with a prescribed FDS and a non-Gaussian distribution. An experimental campaign is conducted to validate the algorithms by testing their accuracy, robustness, and practical effectiveness. Moreover, an original procedure is proposed for the estimation of the fatigue damage potential, aiming to minimize the computational time. The research thus aims to improve both the effectiveness and the efficiency of excitation-profile synthesis for accelerated durability tests.
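
For reference, the sketch below computes the kurtosis statistic mentioned above for a Gaussian signal and for a burst-contaminated one (synthetic data; a kurtosis near 3 indicates Gaussian behaviour).

```python
# Kurtosis = E[(x - mu)^4] / E[(x - mu)^2]^2; values well above 3 flag the
# high peaks that kurtosis-control tries to reproduce in test profiles.

import numpy as np

def kurtosis(x: np.ndarray) -> float:
    x = x - x.mean()
    return float(np.mean(x ** 4) / np.mean(x ** 2) ** 2)

rng = np.random.default_rng(0)
gaussian = rng.normal(0, 1, 100_000)
bursty = gaussian.copy()
bursty[::1000] *= 8                            # inject occasional high peaks

print(f"Gaussian: {kurtosis(gaussian):.2f}")   # ~3.0
print(f"Bursty:   {kurtosis(bursty):.2f}")     # noticeably > 3
```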

Relevance:

20.00%

Publisher:

Abstract:

High dose rate (HDR) brachytherapy using 192Ir sources is well accepted as an important treatment option and thus requires an accurate dosimetry standard. However, a dosimetry standard for the direct measurement of the absolute dose to water for this particular source type is currently not available. An improved standard for the absorbed dose to water, based on Fricke dosimetry of HDR 192Ir brachytherapy sources, is presented in this study. The main goal of this paper is to demonstrate the potential usefulness of the Fricke dosimetry technique for the standardization of the quantity absorbed dose to water for 192Ir sources. A molded, double-walled, spherical vessel for water, containing the Fricke solution, was constructed based on the Fricke system. The authors measured the absorbed dose to water and compared it with the doses calculated using the AAPM TG-43 report. The overall combined uncertainty associated with the measurements using Fricke dosimetry was 1.4% for k = 1, which is better than the uncertainties reported in previous studies. These results are promising; hence, the use of Fricke dosimetry to measure the absorbed dose to water as a standard for HDR 192Ir may be possible in the future.
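
For context, the textbook Fricke relation links absorbed dose to the radiation-induced change in the solution's absorbance. The sketch below uses commonly quoted constants and a hypothetical absorbance reading; it is not the paper's exact formalism or calibration.

```python
# Textbook Fricke dosimetry: dose follows from the change in absorbance of
# the ferrous sulphate solution. All numerical values are illustrative.

def fricke_dose_gy(delta_A, *, eps=2187.0, length_cm=1.0, rho=1.024, G=1.607e-6):
    """
    delta_A   : measured change in absorbance at 304 nm
    eps       : molar extinction coefficient of Fe3+ [L mol^-1 cm^-1]
    length_cm : optical path length of the cuvette [cm]
    rho       : density of the Fricke solution [kg/L]
    G         : radiation chemical yield of Fe3+ [mol/J]
    Returns the absorbed dose in Gy (J/kg).
    """
    return delta_A / (eps * length_cm * rho * G)

print(f"{fricke_dose_gy(0.036):.2f} Gy")   # hypothetical absorbance change
```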

Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: To evaluate the sensitivity and specificity of machine learning classifiers (MLCs) for glaucoma diagnosis using spectral-domain OCT (SD-OCT) and standard automated perimetry (SAP). METHODS: Observational cross-sectional study. Sixty-two glaucoma patients and 48 healthy individuals were included. All patients underwent a complete ophthalmologic examination, achromatic standard automated perimetry (SAP), and retinal nerve fiber layer (RNFL) imaging with SD-OCT (Cirrus HD-OCT; Carl Zeiss Meditec Inc., Dublin, California). Receiver operating characteristic (ROC) curves were obtained for all SD-OCT parameters and global indices of SAP. Subsequently, the following MLCs were tested using parameters from the SD-OCT and SAP: Bagging (BAG), Naive Bayes (NB), Multilayer Perceptron (MLP), Radial Basis Function (RBF), Random Forest (RAN), Ensemble Selection (ENS), Classification Tree (CTREE), AdaBoost M1 (ADA), Support Vector Machine Linear (SVML), and Support Vector Machine Gaussian (SVMG). Areas under the receiver operating characteristic curves (aROC) obtained for isolated SAP and OCT parameters were compared with MLCs using OCT+SAP data. RESULTS: Combining OCT and SAP data, the MLCs' aROCs varied from 0.777 (CTREE) to 0.946 (RAN). The best OCT+SAP aROC, obtained with RAN (0.946), was significantly larger than that of the best single OCT parameter (p<0.05), but was not significantly different from the aROC obtained with the best single SAP parameter (p=0.19). CONCLUSION: Machine learning classifiers trained on OCT and SAP data can successfully discriminate between healthy and glaucomatous eyes. The combination of OCT and SAP measurements improved diagnostic accuracy compared with OCT data alone.
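
A sketch of the kind of comparison reported above, using scikit-learn with synthetic data in place of the patient cohort: a random forest trained on combined OCT+SAP features versus a single-parameter baseline, both scored by aROC.

```python
# Compare the aROC of one isolated parameter against a classifier trained on
# combined features. Synthetic data stands in for the 110-subject cohort.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 110                                   # 62 glaucoma + 48 healthy in the study
y = rng.integers(0, 2, n)
oct_feats = rng.normal(y[:, None] * 0.8, 1.0, (n, 5))   # synthetic RNFL params
sap_feats = rng.normal(y[:, None] * 0.6, 1.0, (n, 2))   # synthetic SAP indices
X = np.hstack([oct_feats, sap_feats])

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)

print("single OCT parameter aROC:", roc_auc_score(yte, Xte[:, 0]))
print("OCT+SAP random forest aROC:",
      roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]))
```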

Relevance:

20.00%

Publisher:

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance:

20.00%

Publisher:

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance:

20.00%

Publisher:

Abstract:

A compact frequency standard based on an expanding cold (133)Cs cloud is under development in our laboratory. In a first experiment, cold Cs atoms were prepared by a magneto-optical trap in a vapor cell, and a microwave antenna was used to transmit the radiation for the clock transition. The signal obtained from the fluorescence of the expanding cold-atom cloud is used to lock a microwave chain, and in this way the overall system stability is evaluated. A theoretical model based on a two-level system interacting with the two microwave pulses enables interpretation of the observed features, especially the poor contrast of the Ramsey fringes. (C) 2008 Optical Society of America.
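
For a two-level atom probed by two ideal, resonant π/2 pulses separated by a free-evolution time T, the Ramsey transition probability is P(δ) = ½(1 + cos δT); reduced fringe contrast, as reported here, can be mimicked with a visibility factor V < 1. The sketch below uses illustrative values, not the experiment's.

```python
# Idealized Ramsey fringes for a two-level system, with a visibility factor
# modelling the reduced contrast. All numbers are illustrative.

import numpy as np

def ramsey_probability(delta, T, visibility=1.0):
    """P(delta) = 1/2 * (1 + V * cos(delta * T)) for ideal pi/2 pulses."""
    return 0.5 * (1.0 + visibility * np.cos(delta * T))

T = 10e-3                                    # assumed 10 ms free evolution
delta = np.linspace(-500, 500, 5)            # detuning [rad/s]
print(ramsey_probability(delta, T, visibility=0.4))  # low-contrast fringes
```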

Relevance:

20.00%

Publisher:

Abstract:

This study proposes a simplified mathematical model to describe the processes occurring in an anaerobic sequencing batch biofilm reactor (ASBBR) treating lipid-rich wastewater. The reactor, subjected to rising organic loading rates, contained biomass immobilized on cubic polyurethane foam matrices and was operated at 32 ± 2 °C using 24-h batch cycles. In the adaptation period, the reactor was fed a synthetic substrate for 46 days and operated without agitation. As agitation was raised to 500 rpm, the organic loading rate (OLR) rose from 0.3 to 1.2 g chemical oxygen demand (COD) L⁻¹ day⁻¹. The ASBBR was then fed fat-rich (dairy) wastewater over an operational period of 116 days, during which four operational conditions (OCs) were tested: 1.1 ± 0.2 g COD L⁻¹ day⁻¹ (OC1), 4.5 ± 0.4 g COD L⁻¹ day⁻¹ (OC2), 8.0 ± 0.8 g COD L⁻¹ day⁻¹ (OC3), and 12.1 ± 2.4 g COD L⁻¹ day⁻¹ (OC4). The bicarbonate alkalinity (BA)/COD supplementation ratio was 1:1 at OC1, 1:2 at OC2, and 1:3 at OC3 and OC4. Total COD removal efficiencies were higher than 90%, with a constant production of bicarbonate alkalinity, in all OCs tested. After the process reached stability, temporal profiles of substrate consumption were obtained. A simplified first-order model was fitted to these experimental data, allowing the kinetic parameters to be inferred. A simplified mathematical model correlating soluble COD with volatile fatty acids (VFA) was also proposed, and from it the consumption rates of intermediate products such as propionic and acetic acid were inferred. Results showed that the microbial consortium worked properly and high efficiencies were obtained, even with high initial substrate concentrations, which led to the accumulation of intermediate metabolites and caused low specific consumption rates.
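
The simplified first-order kinetics referred to above can be written as S(t) = S_res + (S0 − S_res)e^(−kt). The sketch below fits that form to a synthetic temporal profile; only the model form, not the data, follows the paper.

```python
# Fit a first-order substrate-decay model to a synthetic within-cycle
# COD profile; the numbers are illustrative.

import numpy as np
from scipy.optimize import curve_fit

def first_order(t, k, s_res, s0=4.0):
    """Soluble COD decaying toward a residual concentration s_res."""
    return s_res + (s0 - s_res) * np.exp(-k * t)

t = np.linspace(0, 24, 13)                         # hours within one cycle
rng = np.random.default_rng(2)
cod = first_order(t, 0.25, 0.3) + rng.normal(0, 0.05, t.size)

(k, s_res), _ = curve_fit(first_order, t, cod, p0=[0.1, 0.5])
print(f"k = {k:.3f} 1/h, residual COD = {s_res:.2f} g/L")
```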

Relevance:

20.00%

Publisher:

Abstract:

We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with that of the already known Baldi-Chauvin algorithm. Using the Kullback-Leibler divergence as a measure of generalization, we draw learning curves for these algorithms in simplified situations and compare their performances.
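
For reference, the sketch below computes the Kullback-Leibler divergence used as the generalization measure, here between a "true" and a "learned" discrete emission distribution (illustrative numbers).

```python
# D(p || q) = sum_i p_i log(p_i / q_i); smaller values mean the learned
# distribution is closer to the true one.

import numpy as np

def kl_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                      # 0 * log(0) contributes nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

true_emission = np.array([0.7, 0.2, 0.1])
learned_emission = np.array([0.6, 0.25, 0.15])
print(f"D(true || learned) = {kl_divergence(true_emission, learned_emission):.4f}")
```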