839 results for Polynomial Classifier
Abstract:
In the present study, using noise-free simulated signals, we performed a comparative examination of several preprocessing techniques that are used to transform the cardiac event series into a regularly sampled time series appropriate for spectral analysis of heart rhythm variability (HRV). First, a group of noise-free simulated point event series, which represents a time series of heartbeats, was generated by an integral pulse frequency modulation model. In order to evaluate the performance of the preprocessing methods, the differences between the spectra of the preprocessed simulated signals and the true spectrum (the spectrum of the model input modulating signals) were surveyed by visual analysis and by contrasting merit indices. It is desired that the estimated spectra match the true spectrum as closely as possible, showing a minimum of harmonic components and other artifacts. The merit indices proposed to quantify these mismatches were the leakage rate, defined as a measure of leakage components (located outside narrow windows centered at the frequencies of the model input modulating signals) with respect to the whole of the spectral components, and the numbers of leakage components with amplitudes greater than 1%, 5% and 10% of the total spectral components. Our data, obtained from a noise-free simulation, indicate that the use of heart rate values instead of heart period values in the derivation of signals representative of heart rhythm results in more accurate spectra. Furthermore, our data support the efficiency of the widely used preprocessing technique based on the convolution of inverse interval function values with a rectangular window, and suggest the preprocessing technique based on cubic polynomial interpolation of inverse interval function values and subsequent spectral analysis as another efficient and fast method for the analysis of HRV signals.
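A minimal sketch of the favoured preprocessing route, under generic assumptions (the interval model, the 0.25 Hz modulation and the 4 Hz resampling rate are illustrative, not the study's IPFM settings): inverse interval function values are cubic-interpolated onto a uniform grid, and the spectrum of the resampled heart rate signal is then estimated.

```python
# Sketch: cubic interpolation of inverse interval (heart rate) values,
# then spectral analysis. All numbers are illustrative assumptions.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import periodogram

# Toy event series: interbeat intervals modulated at ~0.25 Hz
t, beats = 0.0, []
for _ in range(300):
    t += 0.8 + 0.05 * np.sin(2 * np.pi * 0.25 * t)
    beats.append(t)
beats = np.asarray(beats)

hr = 1.0 / np.diff(beats)            # inverse interval function (beats/s)
t_hr = beats[1:]                     # each rate assigned to its interval's end
fs = 4.0                             # uniform resampling rate (Hz), assumed
t_uniform = np.arange(t_hr[0], t_hr[-1], 1.0 / fs)
hr_uniform = CubicSpline(t_hr, hr)(t_uniform)
freqs, spectrum = periodogram(hr_uniform - hr_uniform.mean(), fs=fs)
```

Leakage-style merit indices can then be computed by comparing the spectral mass inside and outside a narrow window around the known modulation frequency.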
Abstract:
Linguistic modelling is a rather new branch of mathematics that is still undergoing rapid development. It is closely related to fuzzy set theory and fuzzy logic, but knowledge and experience from other fields of mathematics, as well as from other fields of science including linguistics and the behavioral sciences, are also necessary to build appropriate mathematical models. This topic has received considerable attention as it provides tools for the mathematical representation of the most common means of human communication - natural language. Adding a natural language level to mathematical models can provide an interface between the mathematical representation of the modelled system and the user of the model - one that is sufficiently easy to use and understand, yet conveys all the information necessary to avoid misinterpretations. This is, however, not a trivial task, and the link between the linguistic and computational levels of such models has to be established and maintained properly during the whole modelling process. In this thesis, we focus on the relationship between the linguistic and the mathematical level of decision support models. We discuss several important issues concerning the mathematical representation of the meaning of linguistic expressions, their transformation into the language of mathematics, and the retranslation of mathematical outputs back into natural language. In the first part of the thesis, our view of linguistic modelling for decision support is presented, and the main guidelines for building linguistic models for real-life decision support, which are the basis of our modelling methodology, are outlined. From the theoretical point of view, the issues of the representation of the meaning of linguistic terms, computations with these representations and the retranslation process back into the linguistic level (linguistic approximation) are studied in this part of the thesis. We focus on the reasonability of operations with the meanings of linguistic terms, the correspondence of the linguistic and mathematical levels of the models and the proper presentation of appropriate outputs. We also discuss several issues concerning the ethical aspects of decision support - particularly the loss of meaning due to the transformation of mathematical outputs into natural language and the issue of responsibility for the final decisions. In the second part, several case studies of real-life problems are presented. These provide background, necessary context and motivation for the mathematical results and models presented in this part. A linguistic decision support model for disaster management is presented here, formulated as a fuzzy linear programming problem, and a heuristic solution to it is proposed. The uncertainty of outputs, expert knowledge concerning disaster response practice and the necessity of obtaining outputs that are easy to interpret (and available in very short time) are reflected in the design of the model. Saaty's analytic hierarchy process (AHP) is considered in two case studies - first in the context of the evaluation of works of art, where a weak consistency condition is introduced and an adaptation of AHP for large matrices of preference intensities is presented. The second AHP case study deals with the fuzzified version of AHP and its use for evaluation purposes - particularly the integration of peer review into the evaluation of R&D outputs is considered.
In the context of HR management, we present a fuzzy rule based evaluation model (academic faculty evaluation is considered) constructed to provide outputs that do not require linguistic approximation and are easily transformed into graphical information. This is achieved by designing a specific form of fuzzy inference. Finally, the last case study is from the area of the humanities - psychological diagnostics is considered, and a linguistic fuzzy model for the interpretation of the outputs of multidimensional questionnaires is suggested. The issue of the quality of data in mathematical classification models is also studied here. A modification of the receiver operating characteristics (ROC) method is presented that reflects the variable quality of data instances in the validation set during classifier performance assessment. Twelve publications in which the author participated are appended as the third part of this thesis. These summarize the mathematical results and provide a closer insight into the issues of the practical applications that are considered in the second part of the thesis.
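A hedged sketch of the ROC modification mentioned above, assuming one simple weighting scheme (each validation instance contributes to the true/false positive counts in proportion to a quality weight); the thesis's exact formulation may differ.

```python
# Sketch: ROC points where each instance is weighted by its data quality.
# The proportional weighting is an illustrative assumption.
import numpy as np

def weighted_roc(scores, labels, quality):
    """ROC curve with per-instance quality weights.
    labels in {0, 1}; assumes both classes occur in the validation set."""
    order = np.argsort(-scores)                 # descending classifier score
    labels, quality = labels[order], quality[order]
    tp = np.cumsum(quality * labels)            # quality-weighted true positives
    fp = np.cumsum(quality * (1 - labels))      # quality-weighted false positives
    return np.r_[0.0, fp / fp[-1]], np.r_[0.0, tp / tp[-1]]  # (FPR, TPR)
```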
Abstract:
In this research, the effectiveness of Naive Bayes and Gaussian Mixture Model classifiers in segmenting exudates in retinal images is studied, and the results are evaluated with metrics commonly used in medical imaging. In addition, a color variation analysis of retinal images is carried out to find how effectively retinal images can be segmented using only the color information of the pixels.
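As an illustration of the pixel-colour route, a minimal sketch assuming synthetic stand-in data: a Gaussian Naive Bayes classifier labels each pixel as exudate or background from its RGB values alone (the arrays and the train/test split are assumptions, not the study's retinal data).

```python
# Sketch: per-pixel exudate segmentation from colour only, with Gaussian
# Naive Bayes. Data here are synthetic stand-ins for labelled retinal pixels.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import precision_recall_fscore_support

rng = np.random.default_rng(0)
pixels = rng.random((1000, 3))                  # (n, 3) RGB values in [0, 1]
labels = (pixels[:, 0] > 0.6).astype(int)       # toy ground truth: 1 = exudate

clf = GaussianNB().fit(pixels[:800], labels[:800])
pred = clf.predict(pixels[800:])
precision, recall, f1, _ = precision_recall_fscore_support(
    labels[800:], pred, average="binary")       # metrics common in medical imaging
```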
Abstract:
The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis, a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest prototype vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during its training phase. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters related to the selected distance measure. Specifically, a pool of alternative distance measures is first created, and the differential evolution algorithm is then applied to select the optimal distance measure that yields the highest classification accuracy with the current data. After determining the optimal distance measures for the given data set, together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure. The total distance measure is applied to the final classification decisions. The actual classification process is still based on the nearest prototype vector principle; a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process, the differential evolution algorithm determines the optimal class vectors, selects the optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that is able to select the optimal distance measure for the given data set automatically and systematically. After the optimal distance measures, together with their optimal parameters, have been found, the results are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy.
The main outcomes of the work are the six new generalized versions of the previously proposed method, the differential evolution classifier. All these DE classifiers demonstrated good results in their classification tasks.
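A simplified sketch of the core idea, under stated assumptions: differential evolution jointly optimizes the class prototype vectors and a Minkowski order p chosen from a small pool of distances (p = 1, 2, 3). The thesis's full method, with per-distance control parameters and OWA/GOWA aggregation, is not reproduced here.

```python
# Sketch: DE-trained nearest-prototype classifier with a pool of distances.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
X = rng.random((60, 2))
y = (X.sum(axis=1) > 1.0).astype(int)           # toy two-class data
n_classes, n_feat = 2, X.shape[1]

def decode(theta):
    protos = theta[:-1].reshape(n_classes, n_feat)
    p = (1, 2, 3)[int(theta[-1]) % 3]           # distance picked from the pool
    return protos, p

def error(theta):
    protos, p = decode(theta)
    d = (np.abs(X[:, None, :] - protos[None]) ** p).sum(axis=2)
    # The 1/p root is omitted: argmin is unchanged by the monotone root.
    return np.mean(d.argmin(axis=1) != y)

bounds = [(0, 1)] * (n_classes * n_feat) + [(0, 3)]
result = differential_evolution(error, bounds, seed=1, maxiter=100)
```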
Abstract:
The aim of the present study was to develop a classifier able to discriminate between healthy controls and dyspeptic patients by analysis of their electrogastrograms. Fifty-six electrogastrograms were analyzed, corresponding to 42 dyspeptic patients and 14 healthy controls. The original signals were subsampled, filtered and divided into preprandial, prandial and postprandial stages. A time-frequency transformation based on wavelets was used to extract the signal characteristics, and a selection procedure based on correlation was used to reduce their number. The analysis was carried out by evaluating different neural network structures to classify the wavelet coefficients into two groups (healthy subjects and dyspeptic patients). The optimization process of the classifier led to a linear model. A dimension reduction that retained only 25% of the electrogastrogram characteristics, mutually uncorrelated, gave 24 inputs for the classifier. The prandial stage gave the most significant results. Under these conditions, the classifier achieved 78.6% sensitivity, 92.9% specificity, and an error of 17.9 ± 6% (at a 95% confidence level). These data show that it is possible to establish significant differences between patients and normal controls when time-frequency characteristics are extracted from an electrogastrogram, with an adequate component reduction, outperforming the results obtained with classical Fourier analysis. These findings can contribute to increasing our understanding of the pathophysiological mechanisms involved in functional dyspepsia and perhaps to improving the pharmacological treatment of functional dyspeptic patients.
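A hedged sketch of the feature pipeline described above (the wavelet family, the decomposition level and the correlation threshold are illustrative assumptions): wavelet coefficients serve as time-frequency features, and a greedy correlation filter discards features that are strongly correlated with an already selected one.

```python
# Sketch: wavelet features plus correlation-based selection. The 'db4'
# wavelet, level 4 and the 0.9 threshold are illustrative assumptions.
import numpy as np
import pywt  # PyWavelets

def wavelet_features(signal):
    """Concatenated wavelet decomposition coefficients of one EGG record."""
    return np.concatenate(pywt.wavedec(signal, "db4", level=4))

def select_uncorrelated(features, threshold=0.9):
    """Greedy filter: keep a feature column only if its |correlation| with
    every previously kept column stays below the threshold. features: (n, d)."""
    corr = np.abs(np.corrcoef(features, rowvar=False))
    kept = []
    for j in range(features.shape[1]):
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return kept
```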
Antioxidant activity of rosemary and oregano ethanol extracts in soybean oil under thermal oxidation
Abstract:
Four experiments were conducted to measure the antioxidant activity of ethanol extracts of rosemary and oregano compared with synthetic antioxidants such as TBHQ and BHA/BHT. The antioxidant activity was determined, and the results differed from those of the Oven test at 63 °C. Peroxide values and absorptivities at 232 nm of soybean oil under the Oven test were lower in treatments with 25, 50, 75, 100 and 200 mg.kg-1 TBHQ than in treatments with 1000 mg.kg-1 oregano extract (O), 500 mg.kg-1 rosemary extract (R) and their mixture R+O. All the treatments were effective in controlling the thermal oxidation of the oils; the natural extracts were as effective as BHA+BHT and less effective than TBHQ. The natural extracts were mixed with 25, 50, 75 and 100 mg.kg-1 TBHQ and then added to the oil; no improvement in antioxidative properties was observed. The best antioxidant concentration could be determined by fitting a quadratic polynomial regression to the experimental data.
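The last step lends itself to a short worked sketch: fit a second-degree polynomial to (concentration, oxidation response) pairs and take the parabola's vertex as the best concentration. The data points below are made up for illustration.

```python
# Sketch: best antioxidant concentration as the vertex of a fitted quadratic.
import numpy as np

conc = np.array([25.0, 50.0, 75.0, 100.0, 200.0])   # mg/kg TBHQ
resp = np.array([8.1, 6.0, 5.2, 5.0, 6.4])          # oxidation response (toy data)
a, b, c = np.polyfit(conc, resp, 2)                 # quadratic coefficients
best_conc = -b / (2 * a)                            # vertex = minimum when a > 0
```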
Abstract:
The growing population in cities increases the energy demand and affects the environment by increasing carbon emissions. Information and communications technology solutions that enable energy optimization are needed to address this growing energy demand in cities and to reduce carbon emissions. District heating systems optimize energy production by reusing waste energy with combined heat and power plants. Forecasting the heat load demand of residential buildings assists in optimizing energy production and consumption in a district heating system. However, the presence of a large number of factors, such as the weather forecast, district heating operational parameters and user behavioural parameters, makes heat load forecasting a challenging task. This thesis proposes a probabilistic machine learning model using a Naive Bayes classifier to forecast the hourly heat load demand of three residential buildings in the city of Skellefteå, Sweden, over the winter and spring seasons. The district heating data collected from sensors installed in the residential buildings in Skellefteå are utilized to build the Bayesian network to forecast the heat load demand for horizons of 1, 2, 3, 6 and 24 hours. The proposed model is validated using four cases that study the influence of various parameters on the heat load forecast, by carrying out trace-driven analysis in Weka and GeNIe. Results show that the current heat load consumption and the outdoor temperature forecast are the two parameters with the most influence on the heat load forecast. The proposed model achieves average accuracies of 81.23% and 76.74% for a forecast horizon of 1 hour in the three buildings for the winter and spring seasons, respectively. The model also achieves an average accuracy of 77.97% for the three buildings across both seasons for the forecast horizon of 1 hour while utilizing only 10% of the training data. The results indicate that even a simple model like the Naive Bayes classifier can forecast the heat load demand while utilizing little training data.
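A hedged sketch of the forecasting setup, assuming discretized inputs: a Naive Bayes classifier predicts the binned heat load one hour ahead from the two most influential inputs named above, the current heat load and the outdoor temperature forecast. The binning scheme and the synthetic data are illustrative assumptions.

```python
# Sketch: 1-hour-ahead heat load forecast with a (categorical) Naive Bayes
# classifier. The bins and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.naive_bayes import CategoricalNB

rng = np.random.default_rng(2)
temp_forecast = rng.integers(0, 5, 500)               # binned outdoor temperature
load_now = rng.integers(0, 4, 500)                    # binned current heat load
load_next = np.clip(load_now + (temp_forecast < 2), 0, 3)  # toy target bins

X = np.column_stack([temp_forecast, load_now])
model = CategoricalNB().fit(X[:400], load_next[:400])
accuracy = model.score(X[400:], load_next[400:])      # held-out forecast accuracy
```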
Abstract:
This thesis presents theories and results of the Russian mathematician A.I. Shirshov on the combinatorics of words, and shows how they apply to the world of PI-algebras. In examining Shirshov's results, words are first treated as purely combinatorial objects, and Shirshov's Lemma, the foundation of this work, is proved. According to the Lemma, sufficiently long words exhibit a certain regularity; the Lemma is proved three times. The first proof establishes the existence of a sufficiently long word. The second proof follows Shirshov's original argument. The third proof gives a bound for a sufficiently long word that is better suited to practical use. After this, words are treated as algebraic objects. As the main result of the thesis, Shirshov's Height Theorem is proved: every element of a finitely generated PI-algebra is a linear combination of words ω1^k1 ··· ωd^kd, where the lengths of the words ωi and the index d are bounded. Shirshov's Height Theorem directly yields a positive solution to the Kurosh problem for PI-algebras, as well as a bound on the number of elements with which the algebra is generated as a module. As a second application, the applicability of Shirshov's results to the nilpotency of the Jacobson radical is presented without proofs. The primary source is the book by A. Kanel-Belov and L. H. Rowen, Computational Aspects of Polynomial Identities.
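For reference, the Height Theorem can be stated as follows (notation as in the abstract; the precise hypotheses and bounds are in Kanel-Belov and Rowen's book):

```latex
% Shirshov's Height Theorem, stated informally; see Kanel-Belov & Rowen
% for the precise hypotheses and bounds.
\begin{theorem}[Shirshov's Height Theorem]
Let $A$ be a finitely generated PI-algebra. Then there exist a finite set of
words $W$ and a bound $d$ (the \emph{height}) such that every element of $A$
is a linear combination of products
\[
  \omega_1^{k_1} \omega_2^{k_2} \cdots \omega_d^{k_d},
  \qquad \omega_i \in W, \quad k_i \ge 0.
\]
\end{theorem}
```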
Abstract:
Rheological and thermophysical properties were determined for blackberry juice, which was produced from blackberry fruit at 9.1 ± 0.5 °Brix and a density of 1.0334 ± 0.0043 g cm-3. The concentration process was performed using a rotary evaporator, under vacuum, to obtain concentrated juice at about 60 °Brix. In order to obtain different concentrations, the concentrated juice was diluted with distilled water. Rheological measurements were carried out using a Rheotest 2.1 Searle-type rheometer. In the tested ranges, the samples behaved as pseudoplastic fluids, and the Power-Law model was satisfactorily fitted to the experimental data. The friction factor was measured for blackberry juice in laminar flow under conditions of pseudoplastic behavior. Thermal conductivity, thermal diffusivity and density of blackberry juice at 9.4 to 58.4 °Brix were determined, in triplicate, at 0.5 to 80.8 °C. Polynomial regression was performed to fit the experimental data, obtaining a good fit. Both temperature and concentration showed a strong influence on the thermophysical properties of blackberry juice. Calculated apparent specific heat values varied from 2.416 to 4.300 kJ.kg-1.°C-1 in the studied interval.
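The Power-Law (Ostwald-de Waele) fit admits a compact sketch: with shear stress modelled as K times the shear rate to the power n, the parameters follow from a linear regression in log-log space. The shear data below are invented for illustration.

```python
# Sketch: estimating Power-Law parameters K (consistency index) and n
# (flow behaviour index) from shear rate / shear stress pairs (toy data).
import numpy as np

shear_rate = np.array([10.0, 30.0, 90.0, 270.0, 810.0])   # 1/s
shear_stress = 2.5 * shear_rate ** 0.45                   # Pa, synthetic
n, logK = np.polyfit(np.log(shear_rate), np.log(shear_stress), 1)
K = np.exp(logK)      # n < 1 confirms pseudoplastic (shear-thinning) behaviour
```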
Abstract:
Object detection is a fundamental task of computer vision that is utilized as a core part of a number of industrial and scientific applications, for example, in robotics, where objects need to be correctly detected and localized prior to being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative) and (iii) the amount of spatial information used in the object model (model-free, using no spatial information in the object model, or model-based, with an explicit spatial model of the object). Although some existing methods report good performance in the detection of certain objects, the results tend to be application specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information additional to object location, for example, pose. The object class model, i.e. the appearance of the object parts and their spatial variance, the constellation, is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed to part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the parts of the object is learned in an object canonical space that removes geometric variations from the part appearance model. Robustness to pose variations is achieved by object pose quantization, which is more efficient than the previously used scale and orientation shifts in the Gabor feature space. The performance of the resulting generative object detector is characterized by high recall with low precision, i.e. the generative detector produces a large number of false positive detections. Thus a discriminative classifier is used to prune the false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
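A hedged sketch of the appearance model alone, under stated assumptions (the filter bank, frequency and number of parts are illustrative; the randomized GMM variant and the spatial constellation model are not shown): Gabor responses at each pixel are turned into soft part probabilities by an unsupervised GMM.

```python
# Sketch: complex Gabor responses -> unsupervised GMM part probabilities.
# Bank size, frequency and component count are illustrative assumptions.
import numpy as np
from skimage.filters import gabor
from sklearn.mixture import GaussianMixture

image = np.random.default_rng(3).random((64, 64))      # stand-in image
responses = []
for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    real, imag = gabor(image, frequency=0.2, theta=theta)
    responses.append(np.hypot(real, imag))             # complex response magnitude
features = np.stack(responses, axis=-1).reshape(-1, len(responses))

gmm = GaussianMixture(n_components=5, random_state=0).fit(features)
part_probs = gmm.predict_proba(features)               # soft part memberships per pixel
```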
Abstract:
This thesis introduces an extension of Chomsky's context-free grammars equipped with operators for referring to the left and right contexts of strings. The new model is called grammars with contexts. The semantics of these grammars are given in two equivalent ways - by language equations and by logical deduction, where a grammar is understood as a logic for the recursive definition of syntax. The motivation for grammars with contexts comes from an extensive example that completely defines the syntax and static semantics of a simple typed programming language. Grammars with contexts maintain the most important practical properties of context-free grammars, including a variant of the Chomsky normal form. For grammars with one-sided contexts (that is, either left or right), there is a cubic-time tabular parsing algorithm, applicable to an arbitrary grammar. The time complexity of this algorithm can be improved to quadratic, provided that the grammar is unambiguous, that is, it allows only one parse for every string it defines. A tabular parsing algorithm for grammars with two-sided contexts has fourth-power time complexity. For these grammars there is a recognition algorithm that uses a linear amount of space. For certain subclasses of grammars with contexts there are low-degree polynomial parsing algorithms. One of them is an extension of the classical recursive descent for context-free grammars; the version for grammars with contexts still works in linear time like its prototype. Another algorithm, with time complexity varying from linear to cubic depending on the particular grammar, adapts deterministic LR parsing to the new model. If all context operators in a grammar define regular languages, then such a grammar can be transformed into an equivalent grammar without any context operators. This allows one to represent the syntax of languages in a more succinct way by utilizing context specifications. Linear grammars with contexts turn out to be non-trivial already over a one-letter alphabet. This fact leads to some undecidability results for this family of grammars.
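For orientation, here is the classical context-free prototype of the tabular algorithm mentioned above: a minimal cubic-time CYK recognizer for a grammar in Chomsky normal form. The context operators themselves are not modelled in this sketch.

```python
# Sketch: classical CYK recognition for a CNF grammar -- the cubic-time
# tabular prototype that grammars with contexts extend (contexts omitted).
def cyk(word, terminal_rules, binary_rules, start="S"):
    n = len(word)
    # table[i][l] = set of nonterminals deriving the substring word[i:i+l]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][1] = {a for a, t in terminal_rules if t == ch}
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                for a, (b, c) in binary_rules:
                    if b in table[i][split] and c in table[i + split][length - split]:
                        table[i][length].add(a)
    return start in table[0][n]

# Toy grammar: S -> A B, A -> 'a', B -> 'b'
assert cyk("ab", [("A", "a"), ("B", "b")], [("S", ("A", "B"))])
```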
Abstract:
The aim of this work was to make tofu from the soybean cultivar BRS 267 under different processing conditions in order to evaluate the influence of each treatment on product quality. A fractional factorial 2^(5-1) design was used, in which the independent variables (thermal treatment, coagulant concentration, coagulation time, curd cutting, and draining time) were tested at two different levels. The response variables studied were the hardness, yield, total solids, and protein content of the tofu. Polynomial models were generated for each response. To obtain tofu with desirable characteristics (hardness ~4 N, yield 306 g tofu.100 g-1 soybeans, 12 g protein.100 g-1 tofu and 22 g solids.100 g-1 tofu), the following processing conditions were selected: heating until boiling plus 10 minutes in a water bath, 2% w/w dihydrated CaSO4, 10 minutes of coagulation, curd cutting, and 30 minutes of draining time.
Abstract:
This study aims to optimize an alternative method for the extraction of carrageenan without previous alkaline treatment and ethanol precipitation, using Response Surface Methodology (RSM). In order to introduce an innovation in the isolation step, atomization drying was used, reducing the time needed to obtain dry carrageenan powder. The effects of extraction time and temperature on yield, gel strength, and viscosity were evaluated. Furthermore, the extracted material was submitted to structural analysis, by infrared spectroscopy and nuclear magnetic resonance spectroscopy (¹H-NMR), and to chemical composition analysis. Results showed that the generated regression models adequately explained the data variation. Carrageenan yield and gel viscosity were influenced only by the extraction temperature; gel strength, however, was influenced by both extraction time and extraction temperature. The optimal extraction conditions were 74 °C and 4 hours. Under these conditions, the carrageenan extract properties determined by the polynomial model were 31.17%, 158.27 g.cm-2, and 29.5 cP for yield, gel strength, and viscosity, respectively, while under the experimental conditions they were 35.8 ± 4.68%, 112.50 ± 4.96 g.cm-2, and 16.01 ± 1.03 cP, respectively. The chemical composition, nuclear magnetic resonance spectroscopy, and infrared spectroscopy analyses showed that the crude carrageenan extracted is composed mainly of κ-carrageenan.
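A hedged sketch of the RSM step, with invented design points (the study's data are not reproduced): a full second-order polynomial in extraction time and temperature is fitted by least squares and evaluated at the reported optimum of 74 °C and 4 hours.

```python
# Sketch: second-order response surface in time (h) and temperature (deg C),
# fitted by least squares. Design points and yields are illustrative.
import numpy as np

time_h = np.array([2.0, 2.0, 4.0, 4.0, 3.0, 3.0, 3.0])
temp_c = np.array([65.0, 80.0, 65.0, 80.0, 72.0, 72.0, 72.0])
yield_pct = np.array([22.0, 27.0, 26.0, 33.0, 30.0, 31.0, 29.0])

# Full quadratic model: 1, t, T, t^2, T^2, t*T
A = np.column_stack([np.ones_like(time_h), time_h, temp_c,
                     time_h ** 2, temp_c ** 2, time_h * temp_c])
coef, *_ = np.linalg.lstsq(A, yield_pct, rcond=None)

def predict(t, T):
    return coef @ np.array([1.0, t, T, t * t, T * T, t * T])

yield_at_optimum = predict(4.0, 74.0)   # model value at the reported optimum
```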
Abstract:
Millets have superior nutritional qualities and health benefits; hence, they can be used to supplement pasta. Pasta was prepared using a composite flour (CF) of durum wheat semolina (96%) and carrot pomace (4%) supplemented with finger millet flour (FMF, 0-20 g), pearl millet flour (PMF, 0-30 g) and carboxymethyl cellulose (CMC, 2-4 g). A second-order polynomial described the effect of FMF, PMF and CMC on the lightness, firmness, gruel loss and overall acceptability of the extruded pasta products. The results indicate that an increasing proportion of finger and pearl millet flour had a significant (p≤0.05) negative effect on lightness, firmness, gruel loss and overall acceptability. However, CMC addition showed a significant (p≤0.05) positive effect on firmness and overall acceptability and a negative effect on the gruel loss of the cooked pasta samples. Numeric optimization showed that the optimum values for the extruded pasta were 20 g FMF, 12 g PMF and 4 g CMC per 100 g of CF, with 34 ml water, at a desirability of 0.981. The pasta developed is nutritionally rich, as it contains protein (10.16 g), fat (6 g), dietary fiber (16.71 g), calcium (4.23 mg), iron (3.99 mg) and zinc (1.682 mg) per 100 g.
Abstract:
The aim of this work was to evaluate a non-agitated process of bioethanol production from soybean molasses and the kinetic parameters of the fermentation, using a strain of Saccharomyces cerevisiae (ATCC® 2345). The kinetic experiment was conducted in a medium with 30% (w v-1) soluble solids, without supplementation or pH adjustment. The maximum ethanol concentration was reached at 44 hours; the ethanol productivity was 0.946 g L-1 h-1, the yield over total initial sugars (Y1) was 47.87%, the yield over consumed sugars (Y2) was 88.08%, and the specific cell production rate was 0.006 h-1. A polynomial model was fitted to the experimental data and provided very similar yield and productivity parameters. Based on this study, 103 kg of anhydrous bioethanol can be produced from one ton of soybean molasses.