955 results for Prediction techniques


Relevance: 30.00%

Abstract:

The concentration of organic acids in anaerobic digesters is one of the most critical parameters for monitoring and advanced control of anaerobic digestion processes, so a reliable online measurement system is essential. This paper presents a novel approach to obtaining these measurements indirectly and online using UV/vis spectroscopic probes in conjunction with powerful pattern recognition methods. A UV/vis spectroscopic probe from S::CAN is used in combination with a custom-built dilution system to monitor the absorption of fully fermented sludge across the spectrum from 200 to 750 nm. Advanced pattern recognition methods are then used to map the non-linear relationship between the measured absorption spectra and laboratory measurements of organic acid concentrations. Linear discriminant analysis, generalized discriminant analysis (GerDA), support vector machines (SVM), relevance vector machines, random forests and neural networks are investigated for this purpose and their performance compared. To validate the approach, online measurements were taken at a full-scale 1.3-MW industrial biogas plant. Results show that, whereas some of the methods considered do not yield satisfactory results, accurate prediction of organic acid concentration ranges can be obtained with both GerDA- and SVM-based classifiers, with classification rates in excess of 87% achieved on test data.
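The classification step described above can be sketched as follows. This is a minimal illustration using simulated spectra and an RBF-kernel SVM from scikit-learn, not the authors' trained model: the peak positions, number of concentration classes and noise level are all invented.

```python
# Hedged sketch: classify simulated absorption spectra into organic-acid
# concentration ranges with an SVM, loosely mirroring the paper's approach.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 300, 120       # e.g. 200-750 nm, discretised

# Three hypothetical concentration ranges (low/medium/high); each class
# shifts a Gaussian absorption peak along the wavelength axis, plus noise.
y = rng.integers(0, 3, n_samples)
grid = np.linspace(0.0, 1.0, n_wavelengths)
X = np.exp(-((grid[None, :] - (0.3 + 0.2 * y[:, None])) ** 2) / 0.01)
X += 0.05 * rng.standard_normal(X.shape)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
rate = clf.score(X_te, y_te)              # classification rate on held-out spectra
print(f"test classification rate: {rate:.2f}")
```

On such well-separated synthetic classes the rate is near 1.0; the paper's 87%+ figure refers to real plant spectra, which are far harder.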

Relevance: 30.00%

Abstract:

A study was undertaken to examine a range of sample preparation and near infrared reflectance spectroscopy (NIRS) methodologies, using undried samples, for predicting the organic matter digestibility (OMD, g/kg) and ad libitum intake (g/kg W^0.75) of grass silages. A total of eight sample preparation/NIRS scanning methods were examined, involving three extents of silage comminution, two liquid extracts and scanning via either an external probe (1100-2200 nm) or an internal cell (1100-2500 nm). The spectral data (log 1/R) for each of the eight methods were examined by three regression techniques, each with a range of data transformations. The 136 silages used in the study were obtained from farms across Northern Ireland over a two-year period, and had in vivo OMD (sheep) and ad libitum intake (cattle) determined under uniform conditions. In the comparisons of the eight sample preparation/scanning methods, and of the differing mathematical treatments of the spectral data, the sample population was divided into calibration (n = 91) and validation (n = 45) sets. The standard error of prediction (SEP) on the validation set was used in comparisons of prediction accuracy. Across all eight sample preparation/scanning methods, the modified partial least squares (MPLS) technique generally minimized SEPs for both OMD and intake. The accuracy of prediction also increased with the degree of comminution of the forage and with scanning by internal cell rather than external probe. The system providing the lowest SEP used the MPLS regression technique on spectra from finely milled material scanned through the internal cell. This resulted in SEP and R² (variance accounted for in the validation set) values of 24 g/kg OM and 0.88 for OMD, and 5.37 g/kg W^0.75 and 0.77 for intake, respectively.
These data indicate that, with appropriate techniques, NIRS scanning of undried samples of grass silage can produce predictions of intake and digestibility with accuracies similar to those achieved previously using NIRS with dried samples. (C) 1998 Elsevier Science B.V.

Relevance: 30.00%

Abstract:

The enzyme UDP-galactose 4'-epimerase (GALE) catalyses the reversible epimerisation of both UDP-galactose and UDP-N-acetyl-galactosamine. Deficiency of the human enzyme (hGALE) is associated with type III galactosemia. The majority of known mutations in hGALE are missense and private, making clinical guidance difficult. In this study, a bioinformatics approach was employed to analyse the structural effects of each mutation using both the UDP-glucose- and UDP-N-acetylglucosamine-bound structures of the wild-type protein. Changes to the enzyme's overall stability, substrate/cofactor binding and propensity to aggregate were also predicted. These predictions were found to be in good agreement with previous in vitro and in vivo studies where data were available, and allowed for the differentiation of those mutants that severely impair the enzyme's activity against UDP-galactose. Next, this combination of techniques was applied to a further twenty-six variants reported in the NCBI dbSNP database that have yet to be studied, to predict their effects. This identified p.I14T, p.R184H and p.G302R as likely severely impairing mutations. Although severely impaired mutants were predicted to decrease the protein's stability, overall predicted stability changes only weakly correlated with residual activity against UDP-galactose, suggesting that changes in other protein functions, such as cofactor and substrate binding, may also contribute to the mechanism of impairment. Finally, this investigation shows that this combination of different in silico approaches is useful in predicting the effects of mutations, and that it could form the basis of an initial prediction of likely clinical severity when new hGALE mutants are discovered.

Relevance: 30.00%

Abstract:

Digital manufacturing techniques can simulate complex assembly sequences using computer-aided design-based, 'as-designed' part forms, and their utility has been proven across several manufacturing sectors, including the shipbuilding, automotive and aerospace industries. However, the reality of working with actual parts, and composite components in particular, is that geometric variability arising from part forming or processing conditions can cause problems during assembly, as the 'as-manufactured' form differs from the geometry used for any simulated build validation. In this work, a simulation strategy is presented for the study of the process-induced deformation behaviour of a 90-degree, V-shaped angle. Test samples were thermoformed using pre-consolidated carbon fibre-reinforced polyphenylene sulphide, and the processing conditions were re-created in a virtual environment using the finite element method to determine finished component angles. A procedure was then developed for transferring predicted part forms from the finite element outputs to a digital manufacturing platform for the purpose of virtual assembly validation using more realistic part geometry. Ultimately, the outcomes from this work can be used to inform process condition choices, material configuration and tool design, so that the dimensional gap between 'as-designed' and 'as-manufactured' part forms can be reduced in the virtual environment.

Relevance: 30.00%

Abstract:

Chili powder is a globally traded commodity which has been found to be adulterated with Sudan dyes from 2003 onwards. In this study, chili powders were adulterated with varying quantities of Sudan I dye (0.1-5%) and spectra were generated using near infrared reflectance spectroscopy (NIRS) and Raman spectroscopy (on a spectrometer with a sample compartment modified as part of the study). Chemometrics were applied to the spectral data to produce quantitative and qualitative calibration models and prediction statistics. For the quantitative models, coefficients of determination (R²) were found to be 0.891-0.994, depending on which spectral data (NIRS/Raman) were processed, the mathematical algorithm used and the data pre-processing applied. The corresponding values for the root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP) were found to be 0.208-0.851% and 0.141-0.831% respectively, once again depending on the spectral data and the chemometric treatment applied. A comparison of these chemometric parameters indicates that the NIR-spectroscopy-based models are superior to the models produced from the Raman spectral data. The limit of detection (LOD), based on analysis of 20 blank chili powders against each calibration model, was 0.25% and 0.88% for the NIR and Raman data, respectively. In addition, adopting a qualitative approach with the spectral data and applying PCA or PLS-DA, it was possible to discriminate adulterated from non-adulterated chili powders.
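The qualitative discrimination step can be sketched as PCA followed by a linear classifier. This is a simulated stand-in, not the study's data or exact chemometric treatment: the "dye band" location, class sizes and noise level are invented, and logistic regression replaces the discriminant-analysis step of PLS-DA.

```python
# Hedged sketch: discriminate adulterated from non-adulterated "chili powder"
# spectra with PCA scores fed to a linear classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n_per_class, n_bands = 60, 150
base = rng.standard_normal(n_bands)           # common matrix signal
dye = np.zeros(n_bands)
dye[40:60] = 1.0                              # hypothetical Sudan I band

X_clean = base + 0.2 * rng.standard_normal((n_per_class, n_bands))
X_adult = base + 0.5 * dye + 0.2 * rng.standard_normal((n_per_class, n_bands))
X = np.vstack([X_clean, X_adult])
y = np.r_[np.zeros(n_per_class), np.ones(n_per_class)]

model = make_pipeline(PCA(n_components=5), LogisticRegression())
acc = cross_val_score(model, X, y, cv=5).mean()
print(f"cross-validated discrimination accuracy: {acc:.2f}")
```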

Relevance: 30.00%

Abstract:

Smart management of maintenance has become fundamental in manufacturing environments in order to decrease downtime and the costs associated with failures. Predictive Maintenance (PdM) systems based on Machine Learning (ML) techniques can, at low added cost, drastically decrease failure-related expenses; given the increasing availability of data and the capabilities of ML tools, PdM systems are becoming very popular, especially in semiconductor manufacturing. A PdM module based on classification methods is presented here for the prediction of integral-type faults, i.e. faults related to machine usage and the stress of equipment parts. The module has been applied to an important class of semiconductor processes, ion implantation, for the prediction of ion-source tungsten filament breaks, and has been tested on a real production dataset. © 2013 IEEE.
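A classification-based PdM module of this kind can be sketched as follows. The paper does not specify its classifier or features, so everything here is an illustrative assumption: a random forest trained on invented usage/stress counters to flag imminent filament breaks.

```python
# Minimal sketch of a classification-based PdM module: predict whether an
# ion-source filament is at risk of breaking from usage/stress features.
# Feature names, the failure law and thresholds are all hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 1000
usage_hours = rng.uniform(0, 500, n)          # cumulative filament usage
beam_current = rng.uniform(1.0, 5.0, n)       # stress proxy
# Invented ground truth: breaks become likely at high usage * stress.
p_break = 1.0 / (1.0 + np.exp(-(usage_hours * beam_current - 1200) / 150))
y = (rng.uniform(size=n) < p_break).astype(int)
X = np.column_stack([usage_hours, beam_current])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

In a real deployment the labels would come from maintenance logs, and the module would raise an alert when the predicted break probability crosses a tuned threshold.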

Relevance: 30.00%

Abstract:

In this study, 39 sets of hard turning (HT) experimental trials were performed on a Mori-Seiki SL-25Y (4-axis) computer numerical controlled (CNC) lathe to study the effect of cutting parameters on the machined surface roughness. In all the trials, an AISI 4340 steel workpiece (hardened up to 69 HRC) was machined with a commercially available CBN insert (Warren Tooling Limited, UK) under dry conditions. The surface topography of the machined samples was examined using a white light interferometer, and the measurements were reconfirmed using a Form Talysurf. The machining outcomes were used as input to develop various regression models to predict the average machined surface roughness of this material. Three regression techniques (multiple regression, random forest and quantile regression) were applied to the experimental outcomes. To the best of the authors' knowledge, this paper is the first to apply random forest or quantile regression techniques to the machining domain. The performance of these models was compared to ascertain how feed, depth of cut and spindle speed affect surface roughness, and finally to obtain a mathematical equation correlating these variables. It was concluded that the random forest regression model is a superior choice to multiple regression for the prediction of surface roughness during machining of AISI 4340 steel (69 HRC).
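The model comparison can be sketched as below. The response surface is synthetic, not the paper's measured data: the roughness law (with an invented regime change at feed = 0.2 mm/rev) is chosen purely so that a nonlinear learner has something a linear fit cannot capture.

```python
# Sketch comparing multiple (linear) regression with random forest regression
# for surface roughness from feed, depth of cut and speed (simulated data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 400
feed = rng.uniform(0.05, 0.3, n)              # mm/rev
doc = rng.uniform(0.1, 0.5, n)                # depth of cut, mm
speed = rng.uniform(100, 300, n)              # m/min
# Hypothetical nonlinear roughness law with a step at feed = 0.2 mm/rev.
ra = (10 * feed ** 2 + 0.5 * doc - 0.001 * speed
      + 0.3 * (feed > 0.2) + 0.02 * rng.standard_normal(n))
X = np.column_stack([feed, doc, speed])

X_tr, X_te, y_tr, y_te = train_test_split(X, ra, random_state=0)
lin = LinearRegression().fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
r2_lin = r2_score(y_te, lin.predict(X_te))
r2_rf = r2_score(y_te, rf.predict(X_te))
print(f"linear R2 = {r2_lin:.3f}, random forest R2 = {r2_rf:.3f}")
```

On this toy surface the forest captures the curvature and step that the linear model misses, echoing the paper's conclusion in spirit only.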

Relevance: 30.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-03

Relevance: 30.00%

Abstract:

The structural integrity of multi-component structures is usually determined by the strength and durability of their unions. Adhesive bonding is often chosen over welding, riveting and bolting due to the reduction of stress concentrations, reduced weight penalty and easy manufacturing, among other advantages. In the past decades, the Finite Element Method (FEM) has been used for the simulation and strength prediction of bonded structures, through strength-of-materials or fracture-mechanics-based criteria. Cohesive-zone models (CZMs) have already proved to be an effective tool for modelling damage growth, overcoming some limitations of the aforementioned techniques; despite this, they remain restricted to damage growth along predefined paths. The eXtended Finite Element Method (XFEM) is a recent improvement of the FEM, developed to allow the growth of discontinuities within bulk solids along an arbitrary path by enriching degrees of freedom with special displacement functions, thus overcoming the main restriction of CZMs. These two techniques were tested in the simulation of adhesively bonded single- and double-lap joints. The comparative evaluation of the two methods showed their capabilities and limitations for this specific purpose.
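CZMs are commonly built on a bilinear traction-separation law. A minimal sketch follows; the stiffness, strength and fracture-energy values are arbitrary illustrations, not taken from the paper or from any adhesive datasheet.

```python
# Sketch of a bilinear cohesive (traction-separation) law of the kind used
# in cohesive-zone models: linear elastic up to the peak traction, then
# linear softening until the dissipated energy equals the fracture toughness.
import numpy as np

def bilinear_traction(delta, k0=1e6, t_max=30.0, g_c=0.5):
    """Traction (MPa) for an opening displacement delta (mm).

    k0    initial stiffness (MPa/mm), illustrative value
    t_max peak traction (MPa), illustrative value
    g_c   fracture toughness (N/mm); the area under the curve equals g_c
    """
    d0 = t_max / k0                  # displacement at damage onset
    df = 2.0 * g_c / t_max           # displacement at complete failure
    delta = np.asarray(delta, dtype=float)
    t = np.where(delta <= d0,
                 k0 * delta,                          # elastic branch
                 t_max * (df - delta) / (df - d0))    # softening branch
    return np.clip(t, 0.0, None)     # no traction after full separation

d = np.linspace(0.0, 0.05, 6)
print(bilinear_traction(d))
```

In an FEM setting this law would supply the constitutive response of interface elements along the (predefined) bond line, which is exactly the restriction XFEM removes.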

Relevance: 30.00%

Abstract:

In the last few years we have seen an exponential increase in information systems, and parking information is one more example. Reliable, up-to-date information on parking slot availability is very important to the goal of traffic reduction, and parking slot prediction is a new topic that has already begun to be applied; San Francisco in the United States and Santander in Spain are examples of projects carried out to obtain this kind of information. The aim of this thesis is the study and evaluation of methodologies for parking slot prediction and their integration in a web application, where all kinds of users can see the current parking status as well as future status according to the model's predictions. The source of the data is ancillary in this work, but it still needs to be understood in order to understand parking behaviour. Many modelling techniques are used for this purpose, such as time series analysis, decision trees, neural networks and clustering. In this work the author describes the techniques best suited to the task, analyzes the results and points out the advantages and disadvantages of each one. The model learns the periodic and seasonal patterns of the parking status, and with this knowledge it can predict future status values for a given date. The data come from the Smart Park Ontinyent project and consist of parking occupancy status together with timestamps, stored in a database. After data acquisition, data analysis and pre-processing were needed before the models could be implemented. The first test used a boosting ensemble classifier over a set of decision trees, created with the C5.0 algorithm from a set of training samples, to assign a prediction value to each object. In addition to the predictions, this work provides error measurements that indicate how reliable the predictions are.
The second test used the TBATS seasonal exponential smoothing model. Finally, a third model combining the previous two was tried, to see the result of the combination. The results were quite good for all three, with average errors of 6.2, 6.6 and 5.4 vacancies respectively; for a car park of 47 places this means roughly a 10% average error in parking slot predictions, and the result could be even better with more data available. In order to make this kind of information visible and reachable by anyone with an internet-connected device, a web application was built. Besides displaying the data, the application offers further functions to ease the task of searching for parking. The new functions, apart from parking prediction, are:
- Distances from the user's current location to the different car parks in the city.
- Geocoding: matching a literal description or an address to a concrete location.
- Geolocation: positioning the user.
- Parking list panel: not a service or a function as such, just a better visualization and handling of the information.
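The tree-based test can be sketched as below. The thesis used a C5.0 boosting classifier, which has no scikit-learn implementation, so a gradient-boosted tree regressor on calendar features stands in; the occupancy series, the daily/weekly pattern and the train/test split are all simulated, not the Ontinyent data.

```python
# Illustrative sketch: learn daily/weekly parking-occupancy patterns with a
# boosted tree ensemble on calendar features (simulated data).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)
hours = np.arange(24 * 7 * 8)                 # eight weeks of hourly data
hour_of_day = hours % 24
day_of_week = (hours // 24) % 7
# Hypothetical pattern: busy weekday working hours, quieter weekends.
occupancy = (30
             + 12 * np.sin((hour_of_day - 8) * np.pi / 12)
             - 8 * (day_of_week >= 5)
             + rng.normal(0, 2, hours.size))
X = np.column_stack([hour_of_day, day_of_week])

split = 24 * 7 * 6                            # six weeks train, two weeks test
model = GradientBoostingRegressor(random_state=0)
model.fit(X[:split], occupancy[:split])
mae = np.abs(model.predict(X[split:]) - occupancy[split:]).mean()
print(f"mean absolute error (places): {mae:.1f}")
```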

Relevance: 30.00%

Abstract:

Understanding complex biological processes requires sophisticated experimental and computational approaches. Recent advances in functional genomics strategies now provide powerful tools for collecting data on the interconnectivity of genes, proteins and small molecules, with the aim of studying the organizational principles of their cellular networks. Integrating this knowledge within a systems-biology framework would allow the prediction of new functions for genes that remain uncharacterized to date. To make such predictions at the genomic scale in the yeast Saccharomyces cerevisiae, we developed an innovative strategy that combines high-throughput interactome screening of protein-protein interactions, in silico prediction of gene function, and validation of these predictions by high-throughput lipidomics. First, we carried out a large-scale screen of protein-protein interactions using protein-fragment complementation. This method detects in vivo interactions between proteins expressed from their natural promoters. Moreover, unlike other existing techniques for detecting protein-protein interactions, it showed no bias against membrane interactions. As a result, we discovered several new interactions and increased the coverage of a lipid-homeostasis interactome whose understanding remains incomplete to date. We then applied a machine-learning algorithm to identify eight uncharacterized genes with a potential role in lipid metabolism.
Finally, we investigated whether these genes, and a distinct group of transcriptional regulators not previously implicated with lipids, play a role in lipid homeostasis. To this end, we analysed the lipidomes of deletion mutants of the selected genes. To examine a large number of strains, we developed a high-throughput platform for high-content lipidomic screening of yeast mutant libraries. The platform consists of high-resolution Orbitrap mass spectrometry and a dedicated data-processing framework supporting lipid phenotyping of hundreds of Saccharomyces cerevisiae mutants. The lipidomics experiments confirmed the functional predictions by revealing differences in the lipid metabolic phenotypes of deletion mutants lacking the genes YBR141C and YJR015W, known for their involvement in lipid metabolism. An altered lipid phenotype was also observed for a deletion mutant of the transcription factor KAR4, which had not previously been linked to lipid metabolism. Together, these results demonstrate that a process integrating the acquisition of new molecular interactions, computational prediction of gene function and an innovative high-throughput lipidomics platform is an important addition to existing methodologies in systems biology. Developments in functional genomics methodologies and lipidomics technologies thus provide new means to study the biological networks of higher eukaryotes, including mammals. The strategy presented here therefore has the potential to be applied to more complex organisms.

Relevance: 30.00%

Abstract:

One of the major concerns of scoliosis patients undergoing surgical treatment is the aesthetic aspect of the surgery outcome. It would be useful to predict the postoperative appearance of the patient's trunk during surgery planning, in order to take the expectations of the patient into account. In this paper, we propose to use least-squares support vector regression for the prediction of the postoperative trunk 3D shape after spine surgery for adolescent idiopathic scoliosis. Five dimensionality reduction techniques used in conjunction with the support vector machine are compared. The methods are evaluated in terms of their accuracy, based on leave-one-out cross-validation performed on a database of 141 cases. The results indicate that the 3D shape predictions using a dimensionality reduction obtained by simultaneous decomposition of the predictors and response variables have the best accuracy.

Relevance: 30.00%

Abstract:

We propose a novel, simple, efficient and distribution-free re-sampling technique for developing prediction intervals for returns and volatilities following ARCH/GARCH models. In particular, our key idea is to employ a Box-Jenkins linear representation of an ARCH/GARCH equation and then to adapt a sieve bootstrap procedure to the nonlinear GARCH framework. Our simulation studies indicate that the new re-sampling method provides sharp and well-calibrated prediction intervals for both returns and volatilities, while reducing computational costs by up to 100 times compared to other available re-sampling techniques for ARCH/GARCH models. The proposed procedure is illustrated by an application to Yen/U.S. dollar daily exchange rate data.
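The key idea can be illustrated in a toy form: an ARCH(1) process has an AR(1) representation in the squared returns, so one can fit that linear form by least squares, resample its centred residuals, and bootstrap a one-step-ahead prediction interval. This is a simplified sketch of the linear-representation idea only, not the authors' full sieve procedure; the parameters and sample size are invented.

```python
# Toy sketch: bootstrap a one-step-ahead prediction interval for an ARCH(1)
# process via the AR(1) representation of the squared returns.
import numpy as np

rng = np.random.default_rng(7)
omega, alpha = 0.2, 0.5                       # illustrative ARCH(1) parameters
n = 2000
r = np.zeros(n)
for t in range(1, n):
    sig2 = omega + alpha * r[t - 1] ** 2      # conditional variance
    r[t] = np.sqrt(sig2) * rng.standard_normal()

# AR(1) representation: r_t^2 = omega + alpha * r_{t-1}^2 + v_t
y, x = r[1:] ** 2, r[:-1] ** 2
A = np.column_stack([np.ones_like(x), x])
(w_hat, a_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - (w_hat + a_hat * x)
resid -= resid.mean()                         # centre before resampling

B = 2000                                      # bootstrap replicates
boot_sq = w_hat + a_hat * r[-1] ** 2 + rng.choice(resid, B)
boot_sq = np.clip(boot_sq, 0.0, None)         # squared returns are nonnegative
lo, hi = np.percentile(boot_sq, [2.5, 97.5])
print(f"95% interval for the next squared return: [{lo:.3f}, {hi:.3f}]")
```

The full method of the paper generalises this to GARCH via its ARMA representation in the squared returns and produces intervals for both returns and volatilities.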