44 results for ORDER ACCURACY APPROXIMATIONS
Abstract:
The nutritional quality of the product, as well as other quality attributes such as microbiological and sensory quality, are essential factors in the baby food industry; alternative sterilizing methods to conventional heating processes are therefore of great interest in this food sector. This report gives an overview of different sterilization techniques for baby food. The report is part of the work done in work package 3, "QACCP Analysis Processing: Quality-driven distribution and processing chain analysis", in the Core Organic ERANET project Quality analysis of critical control points within the whole food chain and their impact on food quality, safety and health (QACCP). The overall objective of the project is to optimise organic production and processing in order to improve food safety as well as nutritional quality and to increase health-promoting aspects in consumer products. The approach is a chain analysis approach which addresses the link from farm to fork and backwards from fork to farm. The objective is to improve product-related quality management in farming (towards testing food authenticity) and processing (towards food authenticity and sustainable processes). The articles in this volume do not necessarily reflect the Core Organic ERANET's views and in no way anticipate the Core Organic ERANET's future policy in this area. The contents of the articles in this volume are the sole responsibility of the authors. The information contained herein, including any expression of opinion and any projection or forecast, has been obtained from sources believed by the authors to be reliable but is not guaranteed as to accuracy or completeness. The information is supplied without obligation and on the understanding that any person who acts upon it or otherwise changes his/her position in reliance thereon does so entirely at his/her own risk.
The authors gratefully acknowledge the financial support from the Core Organic funding bodies: the Ministry of Agriculture and Forestry, Finland; the Swiss Federal Office for Agriculture, Switzerland; and the Federal Ministry of Consumer Protection, Food and Agriculture, Germany.
Abstract:
The Lucianic text of the Septuagint of the Historical Books, witnessed primarily by the manuscript group L (19, 82, 93, 108, and 127), consists of at least two strata: the recensional elements, which date back to about 300 C.E., and the substratum under these recensional elements, the proto-Lucianic text. Some distinctive readings in L seem to be supported by witnesses that antedate the supposed time of the recension. These witnesses include the biblical quotations of Josephus, Hippolytus, Irenaeus, Tertullian, and Cyprian, and the Old Latin translation of the Septuagint. It has also been posited that some Lucianic readings might go back to Hebrew readings that are not found in the Masoretic text but appear in the Qumran biblical texts. This phenomenon constitutes the proto-Lucianic problem. In chapter 1 the proto-Lucianic problem and its research history are introduced. Josephus' references to 1 Samuel are analyzed in chapter 2. His agreements with L are few and are mostly only apparent or, at best, coincidental. In chapters 3 to 6 the quotations by four early Church Fathers are analyzed. Hippolytus' Septuagint text is extremely hard to establish since his quotations from 1 Samuel have only been preserved in Armenian and Georgian translations. Most of the suggested agreements between Hippolytus and L are only apparent or coincidental. Irenaeus is the most trustworthy textual witness of the four early Church Fathers. His quotations from 1 Samuel agree with L several times against codex Vaticanus (B) and all or most of the other witnesses in preserving the original text. Tertullian and Cyprian agree with L in attesting some Hebraizing approximations that do not seem to be of Hexaplaric origin. More likely these are early Hebraizing readings of the same tradition as the kaige recension. In chapter 7 it is noted that Origen, although a pre-Lucianic Father, does not qualify as a proto-Lucianic witness.
General observations about the Old Latin witnesses as well as an analysis of the manuscript La115 are given in chapter 8. In chapter 9 the theory of the proto-Lucianic recension is discussed. In order to demonstrate the existence of the proto-Lucianic recension one should find instances of indisputable agreement between the Qumran biblical manuscripts and L in readings that are secondary in Greek. No such case can be found in the Qumran material in 1 Samuel. In the text-historical conclusions (chapter 10) it is noted that of all the suggested proto-Lucianic agreements in 1 Samuel (about 75 plus 70 in La115) more than half are only apparent or, at best, coincidental. Of the indisputable agreements, however, 26 are agreements in the original reading. In about 20 instances the agreement is in a secondary reading. These agreements are early variants; mostly minor changes that happen all the time in the course of transmission. Four of the agreements, however, are in a pre-Hexaplaric Hebraizing approximation that has found its way independently into the pre-Lucianic witnesses and the Lucianic recension. The study aims at demonstrating the value of the Lucianic text as a textual witness: under the recensional layer(s) there is an ancient text that preserves very old, even original readings which have not been preserved in B and most of the other witnesses. The study also confirms the value of the early Church Fathers as textual witnesses.
Abstract:
This study contributes to the neglect-effect literature by looking at relative trading volume in terms of value. The results for the Swedish market show a significant positive relationship between the accuracy of estimation and relative trading volume. Market capitalisation and analyst coverage have been used as proxies for neglect in prior studies. These measures, however, do not take into account the effort analysts put in when estimating corporate pre-tax profits. I also find evidence that the industry of the firm influences the accuracy of estimation. In addition, supporting earlier findings, loss-making firms are associated with larger forecasting errors. Further, I find that the average forecast error in Sweden increased in the year 2000.
Abstract:
The overlapping sound pressure waves that enter our brain via the ears and auditory nerves must be organized into a coherent percept. Modelling the regularities of the auditory environment and detecting unexpected changes in these regularities, even in the absence of attention, is a necessary prerequisite for orienting towards significant information as well as for speech perception and communication. The processing of auditory information, in particular the detection of changes in the regularities of the auditory input, gives rise to neural activity in the brain that is seen as a mismatch negativity (MMN) response of the event-related potential (ERP) recorded by electroencephalography (EEG). As the recording of MMN requires neither a subject's behavioural response nor attention towards the sounds, it can be done even with subjects who have problems in communicating or difficulties in performing a discrimination task, for example aphasic and comatose patients, newborns, and even fetuses. Thus with MMN one can follow the evolution of central auditory processing from the very early, often critical stages of development, and also in subjects who cannot be examined with the more traditional behavioural measures of auditory discrimination. Indeed, recent studies show that central auditory processing, as indicated by MMN, is affected in different clinical populations, such as schizophrenic patients, as well as during normal aging and abnormal childhood development. Moreover, the processing of auditory information can be selectively impaired for certain auditory attributes (e.g., sound duration, frequency) and can also depend on the context of the sound changes (e.g., speech or non-speech). Although its advantages over behavioural measures are undeniable, a major obstacle to the larger-scale routine use of the MMN method, especially in clinical settings, is the relatively long duration of its measurement.
Typically, approximately 15 minutes of recording time is needed for measuring the MMN for a single auditory attribute. Recording a complete central auditory processing profile consisting of several auditory attributes would thus require from one hour to several hours. In this research, I have contributed to the development of new fast multi-attribute MMN recording paradigms in which several types and magnitudes of sound changes are presented in both speech and non-speech contexts in order to obtain a comprehensive profile of auditory sensory memory and discrimination accuracy in a short measurement time (altogether approximately 15 min for 5 auditory attributes). The speed of the paradigms makes them highly attractive for clinical research, their reliability brings fidelity to longitudinal studies, and the language context is especially suitable for studies on language impairments such as dyslexia and aphasia. In addition, I have presented an even more ecological paradigm and, more importantly for the theory of MMN, a result in which the MMN responses are recorded entirely without a repetitive standard tone. All in all, these paradigms contribute to the development of the theory of auditory perception and increase the feasibility of MMN recordings in both basic and clinical research. Moreover, they have already proven useful in studying, for instance, dyslexia, Asperger syndrome and schizophrenia.
Abstract:
The aim of this master's thesis in consumer economics is to examine gambling advertisements in Finland from 1970 to 2006. Veikkaus Oy (hereafter Veikkaus) was founded in 1940 as one of the three licensed gambling organizations in Finland. The material for the current research comprised 1494 advertisements published by Veikkaus in newspapers and magazines during that period. Veikkaus has the exclusive licence to organize lotto games, sport games, instant games and other draw games in Finland. The other two operators, the Finnish Slot Machine Association RAY and Fintoto (on-track horse betting), were not included in the current analysis. This study has been completed under a research contract and grant from the Finnish Foundation for Gaming Research (Pelitoiminnan tutkimussäätiö). In general, advertisements reflect the surrounding culture and time, and their message is built on stratified meanings, symbols and codes. Advertising draws the viewer's attention, introduces the advertised subject, and finally affects the individual's consumption habits. However, advertisements not only work on the individual level but also influence public perception of the advertised product. Firstly, in order to assess gambling as a phenomenon, this paper discusses gambling as consumer behaviour and reviews the history of gambling in Finland. Winning is a major feature of gambling, and dreaming about a positive change of life is at the centre of most gambling ads. However, the excitement perceived through the risk of losing can also be featured in gambling ads. Secondly, this study utilizes Veikkaus' large advertising archives, where the advertising data are analyzed by content analysis and semiotic analysis. The two methods have been employed to support the analysis in a synergistic way. Content analysis helps to achieve accuracy and comprehensiveness, while semiotic analysis allows a deeper and more sensitive analysis of the emerging findings and occurrences.
It is important to understand the advertised product, as advertising is bound to its culture and time. Hence, to analyze advertising, it is important to understand the environment where the ads appear. Content analysis of the Veikkaus data identified the main gambling games and the principal advertisement style for each period. Interestingly, nearly half of Veikkaus' advertisements promoted topics other than "just winning the bet". Games of chance, like Lotto, were typically advertised indirectly, representing dreams about winning. In the category of skill gambling, the features represented were investment and the excitement of sporting expertise. In addition, a number of gambling ads emphasized the social responsibility of Veikkaus as a government-guided organization. Semiotic methods were employed to further elaborate the findings of the content analysis. Dreaming in the advertisements was represented by product symbols (e.g. cars and homes) that were found to have a significant connection with each other. Thus, advertising represents the change of life obtained by winning. Interestingly, gambling ads promoting jackpots often employed religious symbolism. Ads promoting social responsibility were found to be most common during the economic depression of the 1990s. Deeper analysis showed that at that time advertisements frequently represented depression-related meanings, such as unemployment and bank loans. Skill gaming ads were often represented by sports expertise; in the late 1990s their number started to rise sharply, and it continued increasing until 2006 (when this study ended). One may conclude that sport betting draws its meanings from the relevant consumer culture and from the rules and features of the betted sport.
Abstract:
Metabolomics is a rapidly growing research field that studies the response of biological systems to environmental factors, disease states and genetic modifications. It aims at measuring the complete set of endogenous metabolites, i.e. the metabolome, in a biological sample such as plasma or cells. Because metabolites are the intermediates and end products of biochemical reactions, metabolite compositions and metabolite levels in biological samples can provide a wealth of information on ongoing processes in a living system. Due to the complexity of the metabolome, metabolomic analysis poses a challenge to analytical chemistry. Adequate sample preparation is critical to accurate and reproducible analysis, and the analytical techniques must have high resolution and sensitivity to allow detection of as many metabolites as possible. Furthermore, as the information contained in the metabolome is immense, the data set collected from metabolomic studies is very large. In order to extract the relevant information from such large data sets, efficient data processing and multivariate data analysis methods are needed. In the research presented in this thesis, metabolomics was used to study mechanisms of polymeric gene delivery to retinal pigment epithelial (RPE) cells. The aim of the study was to detect differences in metabolomic fingerprints between transfected cells and non-transfected controls, and thereafter to identify metabolites responsible for the discrimination. The plasmid pCMV-β was introduced into RPE cells using the vector polyethyleneimine (PEI). The samples were analyzed using high performance liquid chromatography (HPLC) and ultra performance liquid chromatography (UPLC) coupled to a triple quadrupole (QqQ) mass spectrometer (MS). The software MZmine was used for raw data processing, and principal component analysis (PCA) was used in statistical data analysis.
The results revealed differences in metabolomic fingerprints between transfected cells and non-transfected controls. However, reliable fingerprinting data could not be obtained because of low analysis repeatability. Therefore, no attempts were made to identify metabolites responsible for discrimination between sample groups. Repeatability and accuracy of analyses can be influenced by protocol optimization. However, in this study, optimization of analytical methods was hindered by the very small number of samples available for analysis. In conclusion, this study demonstrates that obtaining reliable fingerprinting data is technically demanding, and the protocols need to be thoroughly optimized in order to approach the goals of gaining information on mechanisms of gene delivery.
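The fingerprinting workflow described above (a processed peak table in, PCA scores out) can be sketched in a few lines. The feature matrix below is synthetic, not the study's data, and PCA is computed directly via SVD on mean-centred data rather than with MZmine's pipeline:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project samples onto the first principal components
    (SVD of the mean-centred data matrix)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Hypothetical peak-intensity table: rows = cell samples
# (3 transfected, 3 controls), columns = aligned metabolite peaks.
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (6, 50))
X[:3, :10] += 3.0        # transfected samples shifted in 10 peaks
scores = pca_scores(X)
# The two groups separate along the first principal component
print(scores[:, 0])
```

In a score plot of the first two components, transfected and control samples would fall into distinct clusters, which is the kind of group separation the fingerprinting study looked for.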
Abstract:
The factors affecting the strategic decisions of non-industrial private forest (NIPF) landowners in management planning are studied. A genetic algorithm is used to induce a set of rules predicting the potential cut implied by the landowners' choices of preferred timber management strategies. The rules are based on variables describing the characteristics of the landowners and their forest holdings. The predictive ability of the genetic algorithm is compared to linear regression analysis using identical data sets. The data are cross-validated seven times, applying both genetic algorithm and regression analyses, in order to examine the data sensitivity and robustness of the generated models. The optimal rule set derived from the genetic algorithm analyses included the following variables: mean initial volume, the landowner's positive price expectations for the next eight years, the landowner being classified as a farmer, and a preference for the recreational use of the forest property. When tested with previously unseen test data, the optimal rule set resulted in a relative root mean square error of 0.40. In the regression analyses, the optimal regression equation consisted of the following variables: mean initial volume, proportion of forestry income, intention to cut extensively in the future, and positive price expectations for the next two years. The R2 of the optimal regression equation was 0.34 and the relative root mean square error obtained from the test data was 0.38. In both models, mean initial volume and positive stumpage price expectations were significant predictors of the potential cut under the preferred timber management strategy. When tested with the complete data set of 201 observations, both the optimal rule set and the optimal regression model achieved the same level of accuracy.
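The comparison metric used above, the relative root mean square error, can be computed with a small helper. The observations and predictions below are hypothetical, not the study's data:

```python
import numpy as np

def relative_rmse(y_true, y_pred):
    """Root mean square error scaled by the mean of the observed values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / np.mean(y_true)

# Hypothetical potential-cut observations (m^3/ha) and model predictions
observed  = [120.0, 80.0, 100.0, 60.0, 140.0]
predicted = [110.0, 90.0, 105.0, 70.0, 125.0]
print(round(relative_rmse(observed, predicted), 3))
```

Scaling the RMSE by the mean of the observations makes the error comparable across data sets with different average cut volumes, which is why it suits the cross-validated model comparison described above.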
Abstract:
To enhance the utilization of the wood, sawmills are forced to place more emphasis on planning in order to master the whole production chain from the forest to the end product. One significant obstacle to integrating the forest-sawmill-market production chain is the lack of appropriate information about forest stands. Since the wood procurement point of view has been almost totally disregarded in forest planning systems, there has been a great need to develop an easy and efficient pre-harvest measurement method allowing separate measurement of stands prior to harvesting. The main purpose of this study was to develop a measurement method for pine stands which forest managers could use in describing the properties of the standing trees for sawing production planning. The study materials were collected from ten Scots pine (Pinus sylvestris) stands located in North Häme and South Pohjanmaa, in southern Finland. The data comprise test sawing data on 314 pine stems, dbh and height measures of all trees, and measures of the quality parameters of pine sawlog stems in all ten study stands, as well as the locations of all trees in six stands. The study was divided into four sub-studies which deal with pine quality prediction, construction of diameter and dead branch height distributions, sampling designs, and applying height and crown height models. The final proposal for the pre-harvest measurement method is a synthesis of the individual sub-studies. The quality analysis resulted in choosing dbh, distance from stump height to the first dead branch (dead branch height), crown height and tree height as the most appropriate quality characteristics of Scots pine. Dbh and dead branch height are measured from each pine sample tree, while height and crown height are derived from dbh measures with the aid of mixed height and crown height models. Pine and spruce diameter distributions as well as the dead branch height distribution are most effectively predicted by the kernel function.
Roughly 25 sample trees seem to be appropriate in pure pine stands. In mixed stands, the number of sample trees needs to be increased in proportion to the share of pines in order to attain the same level of accuracy.
Abstract:
Hamiltonian systems in stellar and planetary dynamics are typically near-integrable. For example, Solar System planets are almost in two-body orbits, and in simulations of the Galaxy the orbits of stars seem regular. For such systems, sophisticated numerical methods can be developed through integrable approximations. Following this theme, we discuss three distinct problems. We start by considering numerical integration techniques for planetary systems. Perturbation methods (which utilize the integrability of the two-body motion) are preferred over conventional "blind" integration schemes. We introduce perturbation methods formulated with Cartesian variables. In our numerical comparisons, these are superior to their conventional counterparts but, by definition, lack the energy-preserving properties of symplectic integrators. However, they are exceptionally well suited for relatively short-term integrations in which moderately high positional accuracy is required. The next exercise falls into the category of stability questions in solar systems. Traditionally, the interest has been in the orbital stability of planets, which has been quantified, e.g., by Lyapunov exponents. We offer a complementary aspect by considering the protective effect that massive gas giants, like Jupiter, can offer to Earth-like planets inside the habitable zone of a planetary system. Our method produces a single quantity, called the escape rate, which characterizes the system of giant planets. We obtain some interesting results by computing escape rates for the Solar System. Galaxy modelling is our third and final topic. Because of the sheer number of stars (about 10^11 in the Milky Way), galaxies are often modelled as smooth potentials hosting distributions of stars. Unfortunately, only a handful of suitable potentials are integrable (the harmonic oscillator, isochrone and Stäckel potentials). This severely limits the possibilities of finding an integrable approximation for an observed galaxy.
A solution to this problem is torus construction; a method for numerically creating a foliation of invariant phase-space tori corresponding to a given target Hamiltonian. Canonically, the invariant tori are constructed by deforming the tori of some existing integrable toy Hamiltonian. Our contribution is to demonstrate how this can be accomplished by using a Stäckel toy Hamiltonian in ellipsoidal coordinates.
Abstract:
This thesis presents ab initio studies of two kinds of physical systems, quantum dots and bosons, using two program packages, of which the bosonic one has mainly been developed by the author. The implemented models, i.e., configuration interaction (CI) and coupled cluster (CC), take the correlated motion of the particles into account and provide a hierarchy of computational schemes, on top of which the exact solution, within the limit of the single-particle basis set, is obtained. The theory underlying the models is presented in some detail, in order to provide insight into the approximations made and the circumstances under which they hold. Some of the computational methods are also highlighted. In the final sections the results are summarized. The CI and CC calculations on multiexciton complexes in self-assembled semiconductor quantum dots are presented and compared, along with radiative and non-radiative transition rates. Full CI calculations on quantum rings and double quantum rings are also presented. In the latter case, experimental and theoretical results from the literature are re-examined and an alternative explanation for the reported photoluminescence spectra is found. The boson program is first applied to a fictitious model system consisting of bosonic electrons in a central Coulomb field, for which CI at the singles and doubles level is found to account for almost all of the correlation energy. Finally, the boson program is employed to study Bose-Einstein condensates confined in different anisotropic trap potentials. The effects of the anisotropy on the relative correlation energy are examined, as well as the effect of varying the interaction potential.
Abstract:
People with coeliac disease have to maintain a gluten-free diet, which means excluding wheat, barley and rye prolamin proteins from their diet. Immunochemical methods are used to analyse the harmful proteins and to control the purity of gluten-free foods. In this thesis, the behaviour of prolamins in immunological gluten assays and with different prolamin-specific antibodies was examined. The immunoassays were also used to detect residual rye prolamins in sourdough systems after enzymatic hydrolysis and wheat prolamins after deamidation. The aim was to characterize the ability of the gluten analysis assays to quantify different prolamins in varying matrices in order to improve the accuracy of the assays. Prolamin groups of cereals consist of a complex mixture of proteins that vary in their size and amino acid sequences. Two common characteristics distinguish prolamins from other cereal proteins. Firstly, they are soluble in aqueous alcohols, and secondly, most of the prolamins are mainly formed from repetitive amino acid sequences containing high amounts of proline and glutamine. The diversity among prolamin proteins sets high requirements for their quantification. In the present study, prolamin contents were evaluated using enzyme-linked immunosorbent assays based on ω- and R5 antibodies. In addition, assays based on A1 and G12 antibodies were used to examine the effect of deamidation on prolamin proteins. The prolamin compositions and the cross-reactivity of antibodies with prolamin groups were evaluated with electrophoretic separation and Western blotting. The results of this thesis research demonstrate that the currently used gluten analysis methods are not able to accurately quantify barley prolamins, especially when hydrolysed or mixed in oats. However, more precise results can be obtained when the standard more closely matches the sample proteins, as demonstrated with barley prolamin standards. The study also revealed that all of the harmful prolamins, i.e. 
wheat, barley and rye prolamins, are most efficiently extracted with 40% 1-propanol containing 1% dithiothreitol at 50 °C. The extractability of barley and rye prolamins was considerably higher with 40% 1-propanol than with 60% ethanol, which is typically used for prolamin extraction. The prolamin levels of rye were lowered by 99.5% from the original levels when an enzyme-active rye-malt sourdough system was used for prolamin degradation. Such extensive degradation of rye prolamins suggests that sourdough could be used as part of gluten-free baking. Deamidation increases the diversity of prolamins and improves their solubility and their ability to form structures such as emulsions and foams. Deamidation changes the protein structure, which has consequences for antibody recognition in gluten analysis. According to the results of the present work, the analysis methods were not able to quantify wheat gluten after deamidation except at very high concentrations. Consequently, deamidated gluten peptides can exist in food products and remain undetected, and thus pose a risk for people with gluten intolerance. The results of this thesis demonstrate that current gluten analysis methods cannot accurately quantify prolamins in all food matrices. New information on the prolamins of rye and barley, in addition to wheat prolamins, is also provided in this thesis, which is essential for improving gluten analysis methods so that they can more accurately quantify prolamins from harmful cereals.
Abstract:
The eddy covariance (EC) flux measurement technique is based on measuring the turbulent motions of air with accurate and fast instruments. For instance, in order to measure methane flux, a fast methane gas analyser is needed that measures methane concentration at least ten times per second, in addition to a sonic anemometer, which measures the three wind components at the same sampling interval. Previously, measurement of methane flux with the EC technique was almost impossible due to the lack of sufficiently fast gas analysers. During the last decade, however, new instruments have been developed, and methane EC flux measurements have become more common. The performance of four methane gas analysers suitable for eddy covariance measurements is assessed in this thesis. The assessment and comparison were performed by analysing EC data obtained during summer 2010 (1 April to 26 October) at Siikaneva fen. The four participating methane gas analysers are TGA-100A (Campbell Scientific Inc., USA), RMT-200 (Los Gatos Research, USA), G1301-f (Picarro Inc., USA) and Prototype-7700 (LI-COR Biosciences, USA). RMT-200 functioned most reliably throughout the measurement campaign, and the corresponding methane flux data had the smallest random error. In addition, methane fluxes calculated from the G1301-f and RMT-200 data agree remarkably well throughout the measurement campaign. The calculated cospectra and power spectra agree well with the corresponding temperature spectra. Prototype-7700 functioned for only slightly over one month at the beginning of the measurement campaign, and thus its accuracy and long-term performance are difficult to assess.
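The core of the EC technique described above is that the turbulent flux equals the covariance of the vertical wind speed and the gas concentration over an averaging period. A minimal sketch with synthetic 10 Hz data (not measurements from the campaign):

```python
import numpy as np

# Synthetic 30-minute averaging period sampled at 10 Hz.
rng = np.random.default_rng(0)
n = 18000                                       # 30 min * 60 s * 10 Hz
w = rng.normal(0.0, 0.3, n)                     # vertical wind speed (m/s)
# Hypothetical CH4 signal, weakly correlated with updrafts plus noise:
c = 1.9 + 0.05 * w + rng.normal(0.0, 0.01, n)   # concentration (e.g. ppm)

# Reynolds decomposition: subtract the period means to get fluctuations,
# then the flux is the mean product of the fluctuations (the covariance).
w_prime = w - w.mean()
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)
print(flux)
```

Real processing adds steps omitted here (coordinate rotation, despiking, density and spectral corrections), but the covariance of `w'` and `c'` is the quantity all four analysers in the comparison must resolve at 10 Hz.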
Abstract:
The aim of this study was to examine the applicability of the Phonological Mean Length of Utterance (pMLU) method to data from children acquiring Finnish, for both typically developing children and children with Specific Language Impairment (SLI). Study I examined typically developing children at the end of the one-word stage (N=17, mean age 1;8), and Study II analysed children's (N=5) productions in a follow-up study with four assessment points (ages 2;0, 2;6, 3;0, 3;6). Study III was carried out in the form of a review article that examined recent research on the phonological development of children acquiring Finnish and compared the results with general trends and cross-linguistic findings in phonological development. Study IV included children with SLI (N=4, mean age 4;10) and age-matched peers. The analyses in Studies I, II and IV were made using the quantitative pMLU method, in which pMLU values are counted both for the words that the children targeted (so-called target words) and for the words the children produced. By dividing the child's average pMLU value by the average target-word pMLU value, it is possible to examine the child's accuracy in producing words, expressed as the Whole-Word Proximity (PWP) value. In addition, the number of entirely correctly produced words is counted to obtain the Whole-Word Correctness (PWC) value. Qualitative analyses were carried out in order to examine how the children's phoneme inventories and deficiencies in phonotactics would explain the observed pMLU, PWP and PWC values. The results showed that the pMLU values for children acquiring Finnish were relatively high already at the end of the one-word stage (Study I). The values were found to reflect the characteristics of the ambient language.
Typological features that lead to cross-linguistic differences in pMLU values were also observed in the review article (Study III), which noted that in the course of phonological acquisition there are a large number of language-specific phenomena and processes. Study II indicated that, overall, the children's phonological development during the follow-up period was reflected in the pMLU, PWP and PWC values, although the method showed limitations in detecting qualitative differences between the children. Correct vowels were not scored in the pMLU counts, which led to some misleadingly high pMLU and PWP results: vowel errors were only reflected in the PWC values. The typically developing children in Study II reached the highest possible pMLU results already around age 3;6. At the same time, the differences in pMLU values between the children with SLI and their age-matched peers were very prominent (Study IV). The values for the children with SLI were similar to those reported for two-year-old children. Qualitative analyses revealed that the phonologies of the children with SLI largely resembled those of younger, typically developing children. However, unusual errors were also witnessed (e.g., vowel errors, omissions of word-initial stops, consonants added to the initial position in words beginning with a vowel). This dissertation provides a new tool for quantitative phonological assessment and analysis in children acquiring Finnish. The preliminary results suggest that, with some modifications, the pMLU method can be used to assess children's phonological development and that it has some advantages compared to earlier, segment-oriented approaches. Qualitative analyses complemented the pMLU's observations on the children's phonologies. More research is needed in order to verify the levels of the pMLU, PWP and PWC values in children acquiring Finnish.
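The whole-word measures described above can be sketched with a deliberately simplified scoring scheme: one point per produced segment plus one point per correctly produced consonant, with consonant correctness judged by naive position matching against the target. Real pMLU analyses use phonetic transcriptions and proper phonological alignment; the words, the vowel set, and the alignment rule below are illustrative assumptions only:

```python
VOWELS = set("aeiouyäö")  # Finnish vowel letters, as a simplification

def pmlu(target, production):
    """Simplified pMLU: one point per produced segment plus one point
    per consonant that matches the target at the same position."""
    score = len(production)
    for t, p in zip(target, production):
        if t == p and t not in VOWELS:
            score += 1
    return score

def whole_word_scores(pairs):
    """Average child pMLU, average target pMLU, PWP and PWC
    for a list of (target, production) pairs."""
    child = sum(pmlu(t, p) for t, p in pairs) / len(pairs)
    target = sum(pmlu(t, t) for t, _ in pairs) / len(pairs)
    pwp = child / target                               # Whole-Word Proximity
    pwc = sum(t == p for t, p in pairs) / len(pairs)   # Whole-Word Correctness
    return child, target, pwp, pwc

# Hypothetical child productions of Finnish target words
pairs = [("kissa", "kissa"), ("kukka", "kukka"), ("lamppu", "lappu")]
print(whole_word_scores(pairs))
```

The sketch also shows the limitation noted above: because correct vowels earn no extra points, a vowel error lowers PWC but barely moves pMLU or PWP.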
Abstract:
We propose an efficient and parameter-free scoring criterion, the factorized conditional log-likelihood (f̂CLL), for learning Bayesian network classifiers. The proposed score is an approximation of the conditional log-likelihood criterion. The approximation is devised in order to guarantee decomposability over the network structure, as well as efficient estimation of the optimal parameters, achieving the same time and space complexity as the traditional log-likelihood scoring criterion. The resulting criterion has an information-theoretic interpretation based on interaction information, which exhibits its discriminative nature. To evaluate the performance of the proposed criterion, we present an empirical comparison with state-of-the-art classifiers. Results on a large suite of benchmark data sets from the UCI repository show that f̂CLL-trained classifiers achieve at least as good accuracy as the best compared classifiers, using significantly less computational resources.
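The decomposability property that f̂CLL preserves can be illustrated with the traditional log-likelihood score it is benchmarked against: the score of a discrete Bayesian network is a sum of independent per-node terms, each depending only on the counts of a variable jointly with its parents. A minimal sketch (the toy data and naive Bayes structure are hypothetical, and f̂CLL itself is not implemented here):

```python
import math
from collections import Counter

def loglik_score(data, parents):
    """Decomposable log-likelihood score of a discrete Bayesian network:
    sum over nodes i of N(x_i, pa_i) * log(N(x_i, pa_i) / N(pa_i)),
    using maximum-likelihood parameter estimates from the counts.
    `data` is a list of tuples; `parents` maps node index -> parent indices."""
    n_vars = len(data[0])
    score = 0.0
    for i in range(n_vars):
        pa = parents.get(i, ())
        # Counts of the node's value jointly with its parent configuration,
        # and of the parent configuration alone.
        joint = Counter((row[i],) + tuple(row[j] for j in pa) for row in data)
        marg = Counter(tuple(row[j] for j in pa) for row in data)
        for key, n_xp in joint.items():
            score += n_xp * math.log(n_xp / marg[key[1:]])
    return score

# Toy binary data: column 0 is the class, columns 1-2 are features.
# Naive Bayes structure: each feature's only parent is the class.
data = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 1, 1), (1, 1, 0), (1, 0, 1)]
nb = {1: (0,), 2: (0,)}
print(loglik_score(data, nb))
```

Because each node contributes its own term, candidate structures can be compared by recomputing only the terms of the nodes whose parent sets changed; f̂CLL is designed so that this same per-node decomposition holds for its discriminative score.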