889 results for Support Vector Machine
Abstract:
Psychophysical studies suggest that humans preferentially use a narrow band of low spatial frequencies for face recognition. Here we asked whether artificial face recognition systems show improved recognition performance at the same spatial frequencies as humans. To this end, we estimated recognition performance over a large database of face images by computing three discriminability measures: Fisher Linear Discriminant Analysis, Non-Parametric Discriminant Analysis, and Mutual Information. In order to address frequency dependence, discriminabilities were measured as a function of (filtered) image size. All three measures revealed a maximum at the same image sizes, where the spatial frequency content corresponds to the psychophysically determined frequencies. Our results therefore support the notion that the critical band of spatial frequencies for face recognition in humans and machines follows from inherent properties of face images, and that the use of these frequencies is associated with optimal face recognition performance.
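The Fisher criterion used as one of the discriminability measures can be illustrated with a minimal sketch. The function below is illustrative only, on hypothetical one-dimensional features rather than the authors' image pipeline: it computes the ratio of between-class to within-class scatter for two classes, which is what Fisher Linear Discriminant Analysis maximises.

```python
# Illustrative sketch (not the authors' code): the two-class Fisher
# criterion J = (m1 - m2)^2 / (s1 + s2) for one-dimensional features,
# where m are class means and s are within-class scatters.
def fisher_criterion(class_a, class_b):
    """Ratio of between-class to within-class scatter for 1-D features."""
    mean_a = sum(class_a) / len(class_a)
    mean_b = sum(class_b) / len(class_b)
    scatter_a = sum((x - mean_a) ** 2 for x in class_a)
    scatter_b = sum((x - mean_b) ** 2 for x in class_b)
    between = (mean_a - mean_b) ** 2
    within = scatter_a + scatter_b
    return between / within if within > 0 else float("inf")

# Well-separated classes score higher than heavily overlapping ones.
sep = fisher_criterion([1.0, 1.1, 0.9], [5.0, 5.2, 4.8])
overlap = fisher_criterion([1.0, 2.0, 3.0], [1.5, 2.5, 3.5])
```

In the study's setting, such a discriminability score is computed at each filtered image size, and its maximum locates the optimal spatial frequency band.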
Abstract:
The purpose of this master's thesis was to develop the drive and support of a fiberizing drum apparatus. Owing to its scope, the work was limited to the five smallest sizes of the product series. The first part of the thesis covers the theory of fiberizing and the equipment suitable for it. The starting-power requirement, which is essential for drive design, was examined from first principles using physics. Background information for the theory was drawn from previous studies and from the literature. As a result of this examination, the theory was developed further and now corresponds to reality better than before. In the search for alternative implementations of the support and drive, methods of systematic machine design were used. The resulting ideas were evaluated on techno-economic grounds, and the best alternatives were selected for further development. In the further development phase, the alternative solutions were examined at the component level and detailed cost calculations were made for them. As the result of the work, an implementation alternative for the support and drive is presented that enables significant cost savings. The high cost-saving target of 30 percent was achieved.
Abstract:
A lime kiln is used as part of the modern kraft pulp process to produce burnt lime from lime mud. The rotating kiln rests on support rollers, which are traditionally carried by journal bearings. Since the continuous growth in pulp mill production requires larger lime kilns, the traditional bearing construction has become unreliable; the main problems arise especially during the running-in phase of the bearings. In the present thesis, a new type of support roller was developed using the systematic approach of machine design. Structural analysis of the critical parts of the selected structure was conducted with the finite element method, and the operation of the hydrodynamic bearings was examined by analytical methods. As a result of this work, a new type of support for rotating kilns was designed that is more reliable and easier to service. A new support roller geometry is described which provides significant cost savings.
Abstract:
Synchronous motors are used mainly in large drives, for example in ship propulsion systems and in steel factories' rolling mills, because of their high efficiency, high overload capacity and good performance in the field weakening range. This, however, requires an extremely good torque control system: a fast torque response and high torque accuracy are basic requirements for such a drive. For high-power, high-dynamic-performance drives, the well-known principle of field-oriented vector control has hitherto been used almost exclusively, but it is no longer the only way to implement such a drive; a new control method, Direct Torque Control (DTC), has emerged. The performance of a high-quality torque control such as DTC in dynamically demanding industrial applications rests mainly on an accurate estimate of the space vectors of the various flux linkages. Industrial motor control systems today are real-time applications with restricted calculation capacity, so the control system requires a simple, quickly calculable and reasonably accurate motor model. In this work, a method to handle these problems in a Direct Torque Controlled (DTC) salient-pole synchronous motor drive is proposed. A motor model is presented that combines the induction-law-based "voltage model" with the inductance-parameter-based "current model". The voltage model operates as the main model and is calculated at a very fast sampling rate (for example 40 kHz). The stator flux linkage, calculated by integrating the stator voltages, is corrected using the stator flux linkage computed from the current model. The current model acts as a supervisor whose only role is to prevent the motor stator flux linkage from drifting erroneously over longer time intervals. At very low speeds the role of the current model is emphasised but, nevertheless, the voltage model always remains the main model. At higher speeds the current-model correction acts as a stabiliser of the control system.
The current model contains a set of inductance parameters which must be known. The validity of the current model in steady state is not self-evident; it depends on the accuracy of the saturated values of the inductances. A parameter measurement procedure for the motor model, in which the supply inverter is used as a measurement signal generator, is presented. This so-called identification run can be performed prior to delivery or during drive commissioning. A derivation method for the inductance models used to represent saturation effects is proposed. The performance of the electrically excited synchronous motor supplied by the DTC inverter is demonstrated with experimental results. It is shown that good static accuracy of the DTC torque controller can be obtained for an electrically excited synchronous motor. The dynamic response is fast, and a new operating point is reached without oscillation. Operation is stable throughout the speed range. Modelling the saturation of the magnetising inductance is essential, and cross-saturation has to be considered as well; its effect is very significant. A DTC inverter can be used as measuring equipment, and the parameters needed for the motor model can be determined by the inverter itself. The main advantage is that the parameters are measured under similar magnetic operating conditions, so no disagreement between the parameters will exist. The inductance models generated are adequate to meet the requirements of dynamically demanding drives.
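The combined flux model can be sketched numerically. The fragment below is a minimal scalar, per-axis illustration with hypothetical parameter values (stator resistance R, inductance L, correction gain), not the thesis implementation: the voltage model integrates u − R·i at a fast sampling rate, while a slow correction term pulls the estimate toward the current-model flux L·i, which is what removes integration drift.

```python
# Minimal sketch of the combined flux estimator (hypothetical parameters,
# scalar per-axis form): the voltage model integrates u - R*i as the main
# model; a low-gain correction toward the current-model flux L*i acts as
# the supervisor that prevents long-term drift.
def estimate_flux(u, i, R=0.05, L=0.002, dt=1.0 / 40000.0, gain=50.0):
    """Return the flux-linkage trajectory for voltage and current samples."""
    psi = 0.0
    trajectory = []
    for u_k, i_k in zip(u, i):
        psi += (u_k - R * i_k) * dt          # voltage model (main model)
        psi += gain * (L * i_k - psi) * dt   # current-model correction
        trajectory.append(psi)
    return trajectory

# With the voltage exactly balancing the resistive drop (u = R*i), the
# voltage model alone contributes nothing, and the correction slowly
# steers the estimate to the current-model flux L*i = 0.2.
n = 100000
traj = estimate_flux([5.0] * n, [100.0] * n)
```

The low gain keeps the voltage model dominant over short time scales, matching the abstract's description of the current model as a supervisor rather than the primary estimator.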
Abstract:
Fluent health information flow is critical for clinical decision-making. However, a considerable part of this information is free-form text, and the inability to utilize it creates risks to patient safety and cost-effective hospital administration. Methods for automated processing of clinical text are emerging. The aim of this doctoral dissertation is to study machine learning and clinical text in order to support health information flow. First, by analyzing the content of authentic patient records, the aim is to specify clinical needs in order to guide the development of machine learning applications. The contributions are a model of the ideal information flow, a model of the problems and challenges in reality, and a road map for the technology development. Second, by developing applications for practical cases, the aim is to concretize ways to support health information flow. Altogether five machine learning applications for three practical cases are described: the first two applications are binary classification and regression for the practical case of topic labeling and relevance ranking. The third and fourth applications are supervised and unsupervised multi-class classification for the practical case of topic segmentation and labeling. These four applications are tested with Finnish intensive care patient records. The fifth application is multi-label classification for the practical task of diagnosis coding; it is tested with English radiology reports. The performance of all these applications is promising. Third, the aim is to study how the quality of machine learning applications can be reliably evaluated. The associations between performance evaluation measures and methods are addressed, and a new hold-out method is introduced. This method contributes not only to processing time but also to evaluation diversity and quality. The main conclusion is that developing machine learning applications for text requires interdisciplinary, international collaboration.
Practical cases differ greatly, and hence the development must begin from genuine user needs and domain expertise. The technological expertise must cover linguistics, machine learning, and information systems. Finally, the methods must be evaluated both statistically and through authentic user feedback.
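The dissertation's new hold-out variant is not detailed in the abstract, but the basic hold-out idea it builds on can be sketched as follows (a standard technique with a hypothetical helper name): split the labelled data once, train on one portion, and evaluate on the remainder.

```python
import random

# Sketch of standard hold-out evaluation (the dissertation's improved
# variant is not specified in the abstract): shuffle once, then split
# the data into disjoint train and test portions.
def holdout_split(data, test_fraction=0.2, seed=0):
    """Shuffle the data deterministically and split into train/test."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

# An 80/20 split of 100 labelled items.
train, test = holdout_split(list(range(100)))
```

A single hold-out split is much cheaper than full cross-validation, which is consistent with the abstract's claim that the evaluation method contributes to processing time.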
Abstract:
This master's thesis focuses mainly on the design requirements of an electric drive for a hybrid car application and on its control strategy for achieving a wide speed range. It also shows how the control and performance requirements are transformed into design variables. A parallel hybrid topology is considered, in which an IC engine and an electric drive share a common crankshaft. A permanent magnet synchronous machine (PMSM) is used as the electric drive machine. Performance requirements are converted into machine design variables using the vector model of the PMSM. The main dimensions of the machine are arrived at using an analytical approach, and Finite Element Analysis (FEA) is used to verify the design and performance. A vector control algorithm was used to control the machine; it was tested on a low-power PMSM using an embedded controller. A 10 kW PMSM prototype was built according to the design values and tested in the laboratory using a high-power converter. Tests were carried out to verify the different operating modes, and the results were in agreement with the calculations.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Exclusion of the transcription factor Max from the nucleus of retinal ganglion cells is an early, caspase-independent event of programmed cell death following damage to the optic axons. To test whether the loss of nuclear Max leads to a reduction in neuroprotection, we developed a procedure to overexpress Max protein in rat retinal tissue in vivo. A recombinant adeno-associated viral vector (rAAV) containing the max gene was constructed, and its efficiency was confirmed by transduction of HEK-293 cells. Retinal ganglion cells were accessed in vivo through intravitreal injections of the vector in rats. Overexpression of Max in ganglion cells was detected by immunohistochemistry at 2 weeks following rAAV injection. In retinal explants, the preparation of which causes damage to the optic axons, Max immunoreactivity was increased after 30 h in vitro, and correlated with the preservation of a healthy morphology in ganglion cells. The data show that the rAAV vector efficiently expresses Max in mammalian retinal ganglion cells, and support the hypothesis that the Max protein plays a protective role for retinal neurons.
Abstract:
The main objective of this work was to study the possibilities of implementing laser cutting in a paper making machine. Laser cutting was considered as a replacement for the conventional methods used in paper making machines for longitudinal cutting, such as edge trimming at different stages of the paper making process and tambour roll slitting. Laser cutting of paper was first tested in the 1970s; since then, laser cutting and processing have been applied to paper materials in industry with varying levels of success. Laser cutting can be employed for longitudinal cutting of the paper web in the machine direction. The most common conventional methods applied in paper making machines are water jet cutting and rotating slitting blades. Cutting with a CO2 laser fulfils the basic requirements for cutting quality, applicability to the material and cutting speed in all locations where longitudinal cutting is needed. A literature review described the advantages, disadvantages and challenges of laser technology applied to cutting paper material, with particular attention to cutting a moving paper web. Based on the laser cutting capabilities studied and the problems identified with conventional cutting technologies, a preliminary selection of the most promising application area was carried out. Laser cutting (trimming) of the paper web edges at the wet end was estimated to be the most promising area for implementation. This assessment was based on the rate of web breaks: up to 64 % of all web breaks were found to occur at the wet end, particularly at the so-called open draws where the paper web is transferred unsupported by wire or felt. The distribution of web breaks in the machine cross direction revealed that defects at the paper web edge were the main cause of tearing initiation and the consequent web breaks.
It was assumed that laser cutting could improve the tensile strength of the cut edge owing to the high cutting quality and the sealing effect on the edge after cutting; studies of laser ablation of cellulose support this claim. The linear energy needed for cutting was calculated with regard to the paper web properties at the intended laser cutting location, and was then verified with a series of cutting trials. The laser energy actually needed for cutting deviated from the calculated values, which can be explained by differences in radiative heat transfer during laser cutting and by the different absorption characteristics of dry and moist paper. Laser-cut samples, both dry and moist (dry matter content about 25-40 %), were tested for strength properties. Tensile strength and strain at break of the laser-cut samples were similar to the corresponding values of non-laser-cut samples. The chosen method, however, did not address the tensile strength of the laser-cut edge in particular, so the assumption of improved strength properties was not fully proved. The effect of laser cutting on possible contamination of mill broke (recycling of the trimmed edge) was also examined: laser-cut samples, both dry and moist, were tested for the content of dirt particles. The tests revealed that dust particles can accumulate on the surface of moist samples, which has to be taken into account to prevent contamination of the pulp suspension when trim waste is recycled. Material loss due to evaporation during laser cutting and the amount of solid residues after cutting were evaluated: edge trimming with a laser would produce 0.25 kg/h of solid residues and 2.5 kg/h of material lost to evaporation. Schemes for implementing laser cutting and the required laser equipment were discussed. In general, a laser cutting system would require two laser sources (one for each cutting zone), a set of beam transfer and focusing optics, and cutting heads.
To increase the reliability of the system, it was suggested that each laser source have double capacity, which would allow cutting to continue with one laser source working at full capacity for both cutting zones. Laser technology is already at the required level and does not need further development; moreover, the potential for speed increases is high owing to the availability of high-power laser sources, which supports the trend towards higher paper making machine speeds. The laser cutting system would require a special roll to support cutting; a scheme for such a roll was proposed, together with its integration into the paper making machine. Laser cutting can be performed at the central roll in the press section, before the so-called open draw where many web breaks occur, and there it has the potential to improve the runnability of the paper making machine. The economic performance of laser cutting was evaluated by comparing a laser cutting system with water jet cutting operating under the same conditions. Laser cutting would still be about twice as expensive as water jet cutting, mainly because of the high investment cost of laser equipment and the poor energy efficiency of CO2 lasers; laser cutting also causes material loss through evaporation, whereas water jet cutting causes almost none. Despite the difficulties of implementing laser cutting in a paper making machine, it can still be beneficial. Crucial here is the possibility of improving the strength properties of the cut edge and consequently reducing the number of web breaks. The capability of laser cutting to maintain cutting speeds exceeding the current speeds of paper making machines is another argument for considering laser cutting technology in the design of new high-speed paper making machines.
Abstract:
Changes are made continuously to software source code to meet customer needs and to fix faults. Continuous change can lead to code and design defects. Design defects are poor solutions to recurring design or implementation problems, typically in object-oriented development. During comprehension and change activities, and because of time-to-market pressure, lack of understanding, and their level of experience, developers cannot always follow design standards and coding techniques such as design patterns. Consequently, they introduce design defects into their systems. In the literature, several authors have argued that design defects make object-oriented systems harder to understand, more fault-prone, and harder to change than systems without design defects. Yet only a few of these authors have carried out an empirical study of the impact of design defects on comprehension, and none of them has studied the impact of design defects on the effort developers spend fixing faults. In this thesis, we propose three main contributions. The first contribution is an empirical study providing evidence of the impact of design defects on comprehension and change. We design and carry out two experiments with 59 subjects to assess the impact of the combination of two occurrences of Blob or two occurrences of spaghetti code on the performance of developers carrying out comprehension and change tasks. We measure developer performance using: (1) the NASA task load index for their effort, (2) the time they spent completing their tasks, and (3) their percentages of correct answers.
The results of the two experiments showed that two occurrences of Blob or spaghetti code are a significant obstacle to developer performance during comprehension and change tasks. These results justify previous research on the specification and detection of design defects. Software development teams should warn developers against high numbers of design defect occurrences and recommend refactorings at each step of the development process to remove these design defects when possible. In the second contribution, we study the relationship between design defects and faults, examining the impact of the presence of design defects on the effort required to fix faults. We measure fault-fixing effort using three indicators: (1) the duration of the fixing period, (2) the number of fields and methods touched by the fault fix, and (3) the entropy of the fault fixes in the source code. We conduct an empirical study with 12 design defects detected in 54 releases of four systems: ArgoUML, Eclipse, Mylyn, and Rhino. Our results showed that the fixing period is longer for faults involving classes with design defects. Moreover, fixing faults in classes with design defects changes more files, more fields, and more methods. We also observed that, after a fault is fixed, the number of design defect occurrences in the classes involved in the fix decreases.
Understanding the impact of design defects on the effort developers spend fixing faults is important in order to help development teams better assess and predict the impact of their design decisions, and thus channel their efforts into improving the quality of their systems. Development teams should monitor and remove design defects from their systems because they are likely to increase change effort. The third contribution concerns the detection of design defects. During maintenance activities, it is important to have a tool capable of detecting design defects incrementally and iteratively. Such an incremental and iterative detection process could reduce costs, effort, and resources by allowing practitioners to identify and take into account design defect occurrences as they find them during comprehension and change. Researchers have proposed approaches to detect design defect occurrences, but these approaches currently have four limitations: (1) they require in-depth knowledge of design defects, (2) they have limited precision and recall, (3) they are not iterative and incremental, and (4) they cannot be applied to subsets of systems. To overcome these limitations, we introduce SMURF, a new approach for detecting design defects, based on a machine learning technique, support vector machines, that takes practitioners' feedback into account. Through an empirical study of three systems and four design defects, we showed that SMURF's precision and recall are higher than those of DETEX and BDTEX when detecting design defect occurrences.
We also showed that SMURF can be applied in both intra-system and inter-system configurations. Finally, we showed that SMURF's precision and recall improve when practitioners' feedback is taken into account.
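SMURF itself is not reproduced here, but the classifier underlying it, a support vector machine, can be illustrated with a minimal linear SVM trained by hinge-loss sub-gradient descent (Pegasos-style). The features and data below are hypothetical stand-ins for code metrics; this is a sketch of the technique, not the thesis tool.

```python
import random

# Illustrative linear SVM (not SMURF itself): hinge-loss sub-gradient
# descent with L2 regularisation, labels in {-1, +1}. Each example
# carries a trailing constant 1.0 acting as the bias feature.
def train_linear_svm(xs, ys, lam=0.01, epochs=200, seed=0):
    rng = random.Random(seed)
    w = [0.0] * len(xs[0])
    t = 0
    for _ in range(epochs):
        order = list(range(len(xs)))
        rng.shuffle(order)
        for idx in order:
            t += 1
            eta = 1.0 / (lam * t)
            x, y = xs[idx], ys[idx]
            margin = y * sum(wj * xj for wj, xj in zip(w, x))
            # shrink weights (regularisation), then step on hinge violation
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Two hypothetical, linearly separable clusters of "metric" vectors.
xs = [[-4.0, -3.0, 1.0], [-5.0, -4.0, 1.0], [4.0, 3.0, 1.0], [5.0, 4.0, 1.0]]
ys = [-1, -1, 1, 1]
w = train_linear_svm(xs, ys)
```

In SMURF's setting, practitioner feedback amounts to adding newly labelled defect/non-defect examples to the training set and retraining, which is why precision and recall improve as feedback accumulates.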
Abstract:
With the emergence of multilingual support on the Internet, machine translation (MT) technologies are indispensable for communication between speakers of different languages. Recent research has begun to explore tree-based machine translation systems that use syntactic and morphological information. This work aims at the development of syntax-based machine translation from English to Malayalam, adding case information during translation. The system identifies general rules for various sentence patterns in English; these rules are generated using the Parts of Speech (POS) tag information of the texts. Word reordering based on the syntax tree is used to improve the translation quality of the system. The system uses a bilingual English-Malayalam dictionary for translation.
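The syntax-based word reordering can be illustrated with a toy rule (hypothetical, not the thesis system): English is SVO while Malayalam is SOV, so one simple POS-driven rule moves each verb after the noun phrase that follows it, before dictionary lookup.

```python
# Toy sketch of POS-based SVO -> SOV reordering (hypothetical rule, not
# the thesis system). Input is a list of (word, POS) pairs using Penn
# Treebank-style tags; verbs are held back until the following noun
# phrase has been emitted.
def reorder_svo_to_sov(tagged):
    """Move each verb after the noun that follows it."""
    out = []
    pending_verbs = []
    for word, tag in tagged:
        if tag.startswith("VB"):
            pending_verbs.append((word, tag))
        else:
            out.append((word, tag))
            if tag.startswith("NN") and pending_verbs:
                out.extend(pending_verbs)
                pending_verbs = []
    out.extend(pending_verbs)  # sentence-final verbs with no object
    return out

sentence = [("John", "NNP"), ("reads", "VBZ"), ("books", "NNS")]
reordered = reorder_svo_to_sov(sentence)
# -> John books reads, matching Malayalam's verb-final word order
```

A real system would derive such rules from full syntax trees rather than a flat tag sequence, but the flat version shows how POS tags drive the reordering decision.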
Abstract:
Exercises and solutions on vector fields. Diagrams for the questions are collected in the support.zip file as .eps files.
Abstract:
Understanding how and why the capability of one set of business resources, with its structural arrangements and mechanisms, works compared to another can provide competitive advantage in terms of new business processes and product and service development. However, most business models of capability are descriptive and lack a formal modelling language with which to compare capabilities qualitatively and quantitatively. Gibson's theory of affordance, the potential for action, provides a formal basis for a more robust and quantitative model, but most formal affordance models are complex and abstract and lack support for real-world applications. We aim to understand the 'how' and 'why' of business capability by developing a quantitative and qualitative model that underpins earlier work on Capability-Affordance Modelling (CAM). This paper integrates an affordance-based capability model with the formalism of Coloured Petri Nets to develop a simulation model. Using the model, we show how capability depends on the space-time path of interacting resources, the mechanism of transition and specific critical affordance factors relating to the values of the variables for resources, people and physical objects. We show how the model can identify the capabilities of resources required to inject a drug and anaesthetise a patient.
Abstract:
Species' potential distribution modelling consists of building a representation of the fundamental ecological requirements of a species from the biotic and abiotic conditions where the species is known to occur. Such models can be valuable tools for understanding the biogeography of species and for supporting the prediction of their presence or absence under a particular environmental scenario. This paper investigates the use of different supervised machine learning techniques to model the potential distribution of 35 plant species from Latin America. Each technique extracted a different representation of the relations between the environmental conditions and the distribution profile of the species. The experimental results highlight the good performance of random trees classifiers, indicating this technique as a promising candidate for modelling species' potential distribution.
Abstract:
The problem of a fermion subject to a general mixing of vector and scalar potentials in a two-dimensional world is mapped into a Sturm-Liouville problem. Isolated bounded solutions are also sought. For the specific case of an inversely linear potential, which gives rise to an effective Kratzer potential in the Sturm-Liouville problem, exact bounded solutions are found in closed form. The case of a pure scalar potential with its isolated zero-energy solutions, already analyzed in a previous work, is obtained as a particular case. The behavior of the upper and lower components of the Dirac spinor is discussed in detail and some unusual results are revealed. The nonrelativistic limit of our results adds new support to the conclusion that even-parity solutions to the nonrelativistic one-dimensional hydrogen atom do not exist.