999 results for ALTERED SUPPORT
Abstract:
Objective: To determine scoliosis curve types using non-invasive surface acquisition, without prior knowledge from X-ray data. Methods: Classification of scoliosis deformities according to curve type is used in the clinical management of scoliotic patients. In this work, we propose a robust system that can determine the scoliosis curve type from non-invasive acquisition of the 3D back surface of the patients. The 3D image of the trunk surface is divided into patches, and local geometric descriptors characterizing the back surface are computed from each patch; these constitute the features. We reduce the dimensionality with principal component analysis and retain 53 components using an overlap criterion combined with the total variance in the observed variables. A multi-class classifier is built with least-squares support vector machines (LS-SVM). The original LS-SVM formulation was modified by weighting the positive and negative samples differently, and a new kernel was designed to achieve a robust classifier. The proposed system is validated using data from 165 patients with different scoliosis curve types, and the results of our non-invasive classification were compared with those obtained by an expert using X-ray images. Results: The average rate of successful classification was computed using a leave-one-out cross-validation procedure. The overall accuracy of the system was 95%; the correct classification rates per class were 96%, 84% and 97% for the thoracic, double major and lumbar/thoracolumbar curve types, respectively. Conclusion: This study shows that machine learning methods can establish a relationship between the internal deformity and the back-surface deformity in scoliosis. The proposed system uses non-invasive surface acquisition, which is safe for the patient as it involves no radiation. The design of a specific kernel also improved classification performance.
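As a loose illustration of the pipeline above, the following minimal sketch uses scikit-learn, with a class-weighted RBF-kernel SVC standing in for the paper's weighted LS-SVM and purpose-built kernel; the descriptor matrix, labels and hyperparameters are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(165, 200))    # placeholder local geometric descriptors
y = rng.integers(0, 3, size=165)   # 0=thoracic, 1=double major, 2=lumbar/thoracolumbar

# Retain 53 principal components, then classify with a class-weighted SVM.
model = make_pipeline(PCA(n_components=53),
                      SVC(kernel="rbf", class_weight="balanced"))

# Leave-one-out cross-validation, as in the evaluation described above.
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2%}")
```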
Abstract:
Glucoamylase was immobilized on acid-activated montmorillonite clay via two different procedures, namely adsorption and covalent binding. The immobilized enzymes were characterized by XRD, NMR and N2 adsorption measurements, and the activity of the immobilized glucoamylase for starch hydrolysis was determined in a batch reactor. XRD showed intercalation of the enzyme into the clay matrix for both immobilization procedures. Intercalation occurs via the side chains of the amino acid residues, with the entire polypeptide backbone situated at the periphery of the clay matrix. 27Al NMR studies revealed a different nature of enzyme-support interaction for the two immobilization techniques. N2 adsorption measurements indicated a sharp drop in surface area and pore volume for the covalently bound glucoamylase, suggesting severe pore blockage. In batch-reactor activity studies, the adsorbed and covalently bound glucoamylase retained 49% and 66% of the free enzyme's activity, respectively, and showed enhanced pH and thermal stabilities. The immobilized enzymes also followed Michaelis–Menten kinetics; the Km was greater than that of the free enzyme, which was attributed to an effect of immobilization. The immobilized preparations demonstrated increased reusability as well as storage stability.
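For reference, the Michaelis–Menten kinetics mentioned above follow the standard rate law below, where v is the reaction rate, Vmax the maximal rate, [S] the substrate concentration and Km the substrate concentration at half-maximal rate; a larger apparent Km after immobilization therefore indicates reduced apparent substrate affinity, consistent with diffusional or conformational constraints.

```latex
v = \frac{V_{\max}\,[S]}{K_m + [S]}
```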
Abstract:
Three enzymes, α-amylase, glucoamylase and invertase, were immobilized on acid-activated montmorillonite K 10 via two independent techniques, adsorption and covalent binding. The immobilized enzymes were characterized by XRD, N2 adsorption measurements and 27Al MAS-NMR spectroscopy. The XRD patterns showed that all enzymes were intercalated into the clay interlayer space, with the entire protein backbone situated at the periphery of the clay matrix; intercalation occurred through the side chains of the amino acid residues. A decrease in surface area and pore volume upon immobilization supported this observation. The extent of intercalation was greater for the covalently bound systems. NMR data showed that tetrahedral Al species were involved during enzyme adsorption, whereas octahedral Al was involved during covalent binding. The immobilized enzymes demonstrated enhanced storage stability: while the free enzymes lost all activity within 10 days, the immobilized forms retained appreciable activity even after 30 days of storage. Reusability also improved upon immobilization, and here again the covalently bound enzymes exhibited better characteristics than their adsorbed counterparts. The immobilized enzymes could be used continuously in a packed-bed reactor for about 96 hours without much loss in activity. Immobilized glucoamylase demonstrated the best results.
Abstract:
This paper presents the application of wavelet processing to handwritten character recognition. Attaining a high recognition rate requires robust feature extractors and powerful classifiers that are invariant to the variability of human writing. The proposed scheme consists of two stages: a feature extraction stage based on the Haar wavelet transform, and a classification stage that uses a support vector machine classifier. Experimental results show that the proposed method is effective.
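A minimal sketch of the two-stage scheme, with PyWavelets' single-level 2D Haar transform as the feature extractor and scikit-learn's SVC as the classifier; the 8x8 digits dataset serves as an illustrative stand-in for the handwritten-character data.

```python
import numpy as np
import pywt
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def haar_features(img):
    # One level of the 2D Haar DWT: approximation plus detail subbands.
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    return np.concatenate([cA.ravel(), cH.ravel(), cV.ravel(), cD.ravel()])

digits = load_digits()  # 8x8 grayscale handwritten digits
X = np.array([haar_features(im) for im in digits.images])
X_tr, X_te, y_tr, y_te = train_test_split(X, digits.target, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2%}")
```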
Abstract:
In our study we use a kernel-based technique, support vector machine regression, to predict the melting point of drug-like compounds from topological descriptors, topological charge indices, connectivity indices and 2D autocorrelations. The machine learning model was designed, trained and tested on a dataset of 100 compounds, and an SVMReg model with an RBF kernel was found to predict the melting point with a mean absolute error of 15.5854 and a root mean squared error of 19.7576.
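A minimal sketch of RBF-kernel support vector regression evaluated by MAE and RMSE, in the spirit of the study above; the descriptor matrix, melting points and hyperparameters are synthetic placeholders.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))                            # placeholder descriptors
y = 120 + 25 * X[:, 0] + rng.normal(scale=10, size=100)   # placeholder melting points

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print(f"MAE:  {mean_absolute_error(y_te, pred):.4f}")
print(f"RMSE: {mean_squared_error(y_te, pred) ** 0.5:.4f}")
```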
Abstract:
Process parameters influencing L-glutaminase production by marine Vibrio costicola in solid-state fermentation (SSF) using polystyrene as an inert support were optimised. Maximal enzyme yield (157 U/g dry substrate) was obtained at 2% (w/w) L-glutamine, 35°C and pH 7.0 after 24 h. Maltose and potassium dihydrogen phosphate at 1% (w/w) concentration enhanced enzyme yield by 23% and 18%, respectively, while nitrogen sources had an inhibitory effect. Leachate with high specific activity for glutaminase (4.2 U/mg protein) and low viscosity (0.966 N s/m²) was recovered from the polystyrene SSF system.
Abstract:
Accurate data on natural conditions and agricultural systems, at good spatial resolution, are a key factor in tackling food insecurity in developing countries. A broad variety of approaches exists for obtaining precise data and information about agriculture. One system, developed specifically for smallholder agriculture in East Africa, is the Farm Management Handbook of Kenya. It was first published in 1982/83 and fully revised in 2012, and now comprises 7 volumes. The handbooks contain detailed information on climate, soils, suitable crops and soil care based on the scientific research results of the last 30 years, but this density of facts makes extracting all the necessary information time-consuming. In this study we analyse the user needs and the necessary components of a decision support system for smallholder farming in Kenya based on a geographical information system (GIS). Required data sources were identified, as well as essential functions of the system, and we analysed the results of a survey of agricultural officers conducted in 2012 and early 2013. The central objectives are the monitoring of user needs and the problem of the non-adoption of an agricultural information system at the level of extension officers in Kenya. The outcomes of the survey suggest establishing a decision support tool based on already available open-source GIS components. The system should include functionality to show general information for a specific location and should provide precise recommendations about suitable crops and management options to support agricultural guidance at the farm level.
Abstract:
In the course of the development from an industrial to a knowledge society, science has undergone profound changes in the order of knowledge, extending to an increasing loss of its mechanisms of scientific self-governance and requiring a changed approach to the body of knowledge it generates. It is not only changes in the order and production of knowledge that confront psychoanalysis with new challenges: in recent decades it has come under increasing criticism as a science and as a treatment method, and it has responded with a constructive discussion about a psychoanalytic understanding of research adequate to its object of study, the investigation of unconscious processes and fantasies. Engaging with the demands of public funding bodies, political representatives and interest groups, as well as of the scientific community, poses particular challenges for psychoanalysis. Meeting quality criteria both internal and external to science often requires considerable personnel, material, financial, methodological and organisational effort, as the example of the psychoanalytic research institute Sigmund-Freud-Institut shows. This growing effort is reflected in an increasing complexity of the research process, rooted among other things in multi-layered research questions and objectives, an increasingly interdisciplinary and networked character, the handling of extensive, highly specialised knowledge, and the variety of methods involved. To do justice to this complexity of the research process, it is increasingly necessary to pursue knowledge management. Tools such as mapping techniques are supporting instruments of knowledge management for meeting the challenges of the research process. This thesis first reflects on the changed research conditions and their effects on the complexity of the research process, in particular the psychoanalytic research process. The difficulties and challenges accompanying this growing complexity are illustrated using the example of an interdisciplinary EU research project. To meet the growing complexity of psychoanalytic research, knowledge management measures were adopted in various research projects at the Sigmund-Freud-Institut. The second part of this thesis therefore first addresses theoretical aspects of knowledge management, which formed the basis of the knowledge management instruments employed; psychological aspects of knowledge management play a central role here. In addition, the concrete knowledge management tools used in the various research projects to meet the growing complexity of psychoanalytic research are presented. Finally, the main theses of this work are reflected upon once more, and the knowledge management techniques described are critically discussed with regard to their advantages and disadvantages.
Abstract:
Agriculture in semi-arid and arid regions is steadily gaining importance for securing human nutrition because of rapid population growth. At the same time, these regions in particular are increasingly endangered by soil degradation, limited resources and extreme climatic conditions. One way to retain soil fertility under these conditions in the long run is to increase soil organic matter. A two-year field experiment was therefore conducted on a nutrient-poor sandy soil in Northern Oman to test the efficiency of activated charcoal and quebracho tannin extract as stabilizers of soil organic matter. Both additives were either fed to goats, whose faeces were then applied to the soil, or applied directly to the soil in combination with dried goat manure. Regardless of the application method, both additives reduced the decomposition of soil-applied organic matter and thus stabilized and increased soil organic carbon. The application of activated charcoal and quebracho tannin extract also altered the nutrient release from goat manure, although nutrient release was not always slowed down. Activated charcoal was more effective in stabilising soil organic matter and reducing nutrient release when fed to goats than when mixed in directly; for quebracho tannin extract the opposite was the case. Moreover, the efficiency of the additives was influenced by the cultivated crop (sweet corn and radish), leading to unexplained interactions. The reduced nutrient release caused by the stabilization of the organic matter may explain the reduced sweet corn yields observed after application of manure amended with activated charcoal and quebracho tannin extract. Radish, on the other hand, was inhibited only by the presence of quebracho tannin extract and not by activated charcoal, possibly because of an allelopathic effect of tannins on crops. Understanding the mechanisms behind the changes in the manure, the soil, the mineralisation and plant development, and resolving the detrimental effects, requires the further research recommended in this dissertation. Particularly in developing countries poor in resources and capital, feeding charcoal or tannins to animals and using their faeces as manure may be a promising way to increase soil fertility, sequester carbon and reduce nutrient losses, once the yield reductions can be resolved.
Abstract:
The Support Vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights and threshold so as to minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering and the weights are found using error backpropagation. We consider three machines: a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the US Postal Service database of handwritten digits, the SV machine achieves the highest test accuracy, followed by the hybrid approach. The SV approach is thus not only theoretically well-founded, but also superior in a practical application.
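A minimal sketch contrasting the two designs, with k-means centers plus least-squares output weights standing in for the backpropagation-trained classical RBF machine, and an RBF-kernel SVC as the SV machine; the dataset, gamma and number of centers are illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gamma = 0.05

# Classical RBF machine: centers from k-means, output weights by least squares.
centers = KMeans(n_clusters=30, n_init=10, random_state=0).fit(X_tr).cluster_centers_

def design(A):
    # Gaussian design matrix: one RBF activation per (sample, center) pair.
    d2 = ((A[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

w, *_ = np.linalg.lstsq(design(X_tr), 2 * y_tr - 1, rcond=None)
rbf_acc = ((design(X_te) @ w > 0).astype(int) == y_te).mean()

# SV machine with Gaussian kernel: centers (the SVs) are chosen automatically.
svm_acc = SVC(kernel="rbf", gamma=gamma).fit(X_tr, y_tr).score(X_te, y_te)
print(f"classical RBF: {rbf_acc:.2%}   SV machine: {svm_acc:.2%}")
```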
Abstract:
We compare Naive Bayes and Support Vector Machines on the task of multiclass text classification. Using a variety of approaches to combine the underlying binary classifiers, we find that SVMs substantially outperform Naive Bayes. We present full multiclass results on two well-known text data sets, including the lowest error to date on both data sets. We develop a new indicator of binary performance to show that the SVM's lower multiclass error is a result of its improved binary performance. Furthermore, we demonstrate and explore the surprising result that one-vs-all classification performs favorably compared to other approaches even though it has no error-correcting properties.
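A minimal sketch of the comparison, using the 20 Newsgroups corpus as a stand-in for the paper's data sets: multinomial Naive Bayes versus a linear SVM wrapped explicitly in a one-vs-all (one-vs-rest) combiner on TF-IDF features.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")

vec = TfidfVectorizer()
X_tr, X_te = vec.fit_transform(train.data), vec.transform(test.data)

nb = MultinomialNB().fit(X_tr, train.target)
# One binary LinearSVC per class; the class with the largest margin wins.
ova = OneVsRestClassifier(LinearSVC()).fit(X_tr, train.target)

print(f"Naive Bayes:    {nb.score(X_te, test.target):.2%}")
print(f"one-vs-all SVM: {ova.score(X_te, test.target):.2%}")
```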
Abstract:
Support Vector Machines (SVMs) perform pattern recognition between two point classes by finding a decision surface determined by certain points of the training set, termed Support Vectors (SVs). This surface, which in some feature space of possibly infinite dimension can be regarded as a hyperplane, is obtained from the solution of a quadratic programming problem that depends on a regularization parameter. In this paper we study some mathematical properties of support vectors and show that the decision surface can be written as the sum of two orthogonal terms, the first depending only on the margin vectors (which are SVs lying on the margin), the second proportional to the regularization parameter. For almost all values of the parameter, this enables us to predict how the decision surface varies for small parameter changes. In the special but important case of a feature space of finite dimension m, we also show that there are at most m+1 margin vectors and observe that m+1 SVs are usually sufficient to fully determine the decision surface. For relatively small m, this latter result leads to a considerable reduction in the number of SVs.
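For context, the decision surface discussed here is the zero level set of the standard SV expansion below (the textbook dual form, not the paper's two-term decomposition), where the dual coefficients alpha_i are nonzero only for the support vectors:

```latex
f(\mathbf{x}) = \sum_{i \in \mathrm{SV}} \alpha_i\, y_i\, K(\mathbf{x}_i, \mathbf{x}) + b,
\qquad \text{decision surface: } f(\mathbf{x}) = 0
```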
Abstract:
We derive a new representation for a function as a linear combination of local correlation kernels at optimal sparse locations and discuss its relation to PCA, regularization, sparsity principles and Support Vector Machines. We first review previous results for the approximation of a function from discrete data (Girosi, 1998) in the context of Vapnik's feature space and dual representation (Vapnik, 1995). We apply them to show 1) that a standard regularization functional with a stabilizer defined in terms of the correlation function induces a regression function in the span of the feature space of classical principal components, and 2) that there exists a dual representation of the regression function in terms of a regularization network with a kernel equal to a generalized correlation function. We then describe the main observation of the paper: the dual representation in terms of the correlation function can be sparsified using the Support Vector Machines technique (Vapnik, 1982), and this operation is equivalent to sparsifying a large dictionary of basis functions adapted to the task using a variation of Basis Pursuit De-Noising (Chen, Donoho and Saunders, 1995; see also related work by Donahue and Geiger, 1994; Olshausen and Field, 1995; Lewicki and Sejnowski, 1998). In addition to extending the close relations between regularization, Support Vector Machines and sparsity, our work also illuminates and formalizes the LFA concept of Penev and Atick (1996). We discuss the relation between our results, which concern regression, and the different problem of pattern classification.
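For context, the dual representation referred to above takes the textbook regularization-network form: minimizing a data term plus a stabilizer given by the RKHS norm of a kernel K yields an expansion of the regression function over the data points (notation generic, not the paper's):

```latex
\min_{f}\; \sum_{i=1}^{n} \bigl(y_i - f(\mathbf{x}_i)\bigr)^2 + \lambda\, \lVert f \rVert_K^2
\quad\Longrightarrow\quad
f(\mathbf{x}) = \sum_{i=1}^{n} c_i\, K(\mathbf{x}, \mathbf{x}_i)
```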