895 results for "system performance evaluation"


Relevance: 90.00%

Abstract:

This paper evaluated the behavior of an upflow Anaerobic-Aerobic Fixed Bed Reactor (AAFBR) in the treatment of cattle slaughterhouse effluent and determined apparent kinetic constants for organic matter removal. The AAFBR was operated without recirculation (Phase I) and with 50% effluent recirculation (Phase II), at θ of 11 h and 8 h. In terms of pH, bicarbonate alkalinity, and volatile acids, the results indicated the reactor's ability to maintain favorable conditions for the biological processes involved in organic matter removal in both operational phases. The average removal efficiencies of organic matter along the reactor height, expressed as raw COD, were 49% and 68% in Phase I and 54% and 86% in Phase II for θ of 11 h and 8 h, respectively. The filtered COD results indicated removal efficiencies of 52% with k = 0.0857 h⁻¹ for θ of 11 h and 42% with k = 0.0880 h⁻¹ for θ of 8 h in Phase I. In Phase II, the removal efficiencies were 59% and 51% for θ of 11 h and 8 h, with k = 0.1238 h⁻¹ and k = 0.1075 h⁻¹, respectively. The first-order kinetic model fitted well and adequately described the kinetics of organic matter removal for θ of 11 h, with r² of 0.9734 and 0.9591 for Phases I and II, respectively.
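As a quick illustration, the first-order removal model used above can be evaluated numerically (a minimal sketch; the function and variable names are ours, not the paper's):

```python
import math

def removal_efficiency(k, theta):
    """First-order removal: C/C0 = exp(-k * theta),
    so efficiency E = 1 - exp(-k * theta)."""
    return 1.0 - math.exp(-k * theta)

# k values reported in the abstract (Phase II, filtered COD):
e_11h = removal_efficiency(k=0.1238, theta=11.0)
e_8h = removal_efficiency(k=0.1075, theta=8.0)
```

Note that an efficiency predicted this way need not coincide exactly with the observed values quoted above, since the constants come from a fit of the removal profile along the reactor height.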

Relevance: 90.00%

Abstract:

Corporate events as an effective part of marketing communications strategy seem to be underestimated in Finnish companies. In the rest of Europe and in the USA, investments in events are increasing, and their share of the marketing budget is significant. The growth of the industry may be explained by the numerous advantages and opportunities that events provide for attendees, such as face-to-face marketing, enhancing corporate image, building relationships, increasing sales, and gathering information. In order to maximize these benefits and the return on investment, specific measurement strategies are required, yet there seems to be a lack of understanding of how event performance should be perceived or evaluated. To address this research gap, this study describes the perceptions of and strategies for evaluating corporate event performance in the Finnish events industry. First, corporate events are discussed in terms of definitions and characteristics, typologies, and their role in marketing communications. Second, different theories on evaluating corporate event performance are presented and analyzed. Third, a conceptual model is presented based on the literature review, which serves as the basis for the empirical research, conducted as an online questionnaire. The empirical findings are largely in line with the existing literature, suggesting that there remains a lack of understanding of corporate event performance evaluation, and that challenges arise in determining appropriate measurement procedures for it. Setting clear objectives for events is a significant aspect of the evaluation process, since the outcomes of events are usually evaluated against the preset objectives. The respondent companies utilize many of the individual techniques recognized in theory, such as counting the number of sales leads and delegates. However, some of the measurement tools may require further investments and resources, restricting their application especially in smaller companies. In addition, there seems to be a lack of knowledge of the most appropriate methods for different contexts, methods that take into account the characteristics of the organizing party as well as the size and nature of the event. The lack of in-house expertise increases the need for third-party service providers in solving the problems of corporate event measurement.

Relevance: 90.00%

Abstract:

In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, the Network-on-Chip (NoC) has proved to be an efficient communication architecture that can further improve system performance and scalability while reducing design cost. In this thesis, we therefore study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend of future computing platforms, 3D systems have many benefits, including higher integration density, smaller footprint, and heterogeneous integration. Moreover, 3D technology can significantly improve network communication and effectively avoid long wires, and therefore provide higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach in which agents are on-chip components that monitor and control system parameters such as supply voltage and operating frequency. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes. A Honeycomb NoC architecture is proposed in this thesis with turn-model based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance and energy efficiency.
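The energy savings targeted by the agent-controlled voltage and frequency scaling described above follow from the dynamic-power relation P ≈ C · V² · f. A minimal sketch with hypothetical operating points (illustrative numbers, not figures from the thesis):

```python
def dynamic_power(c_eff, v_dd, freq):
    """Dynamic switching power: P = C_eff * Vdd^2 * f."""
    return c_eff * v_dd ** 2 * freq

# Hypothetical operating points for illustration only.
p_full = dynamic_power(c_eff=1e-9, v_dd=1.1, freq=2.0e9)    # nominal
p_scaled = dynamic_power(c_eff=1e-9, v_dd=0.9, freq=1.5e9)  # agent-scaled
saving = 1.0 - p_scaled / p_full
```

Because voltage enters quadratically, even a modest voltage reduction yields a large dynamic-energy saving, which is why DVFS pays off when an agent detects slack in the workload.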

Relevance: 90.00%

Abstract:

During vehicle deceleration due to braking there is friction between the lining surface and the brake drum or disc. In this process the kinetic energy of the vehicle is converted into thermal energy, raising the temperature of the components. The heating of the brake system during braking is a serious problem: besides damaging the system, it may also affect the wheel and tire, which can cause accidents. In search of the best configuration that reflects true conditions of use without exceeding safety limits, models and formulations for the brake system are presented, considering different braking conditions and types of brakes. Several models were analyzed using well-known methods. The flat-plate model based on energy conservation was applied to a bus using a computer program. The vehicle is simulated under emergency braking, considering the temperature change at the lining-drum interface. The results include deceleration, braking efficiency, wheel resistance, normal reaction on the tires, and the coefficient of adhesion. Some results were compared with dynamometer tests performed by FRAS-LE, and others with track tests performed by Mercedes-Benz. The agreement between the simulation results and the tests is sufficient to validate the mathematical model. The computer program makes it possible to simulate brake system performance in the vehicle, assisting the designer during the development phase and reducing the need for track tests.
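The heat load on the brakes follows from converting the vehicle's kinetic energy into thermal energy. A rough lumped estimate, with hypothetical bus parameters (ours for illustration, not the values used in the study):

```python
def brake_temp_rise(vehicle_mass, v0, v1, brake_mass, c_p):
    """Temperature rise assuming all kinetic energy lost in braking
    is absorbed uniformly by the brake components (no cooling)."""
    delta_ke = 0.5 * vehicle_mass * (v0 ** 2 - v1 ** 2)  # joules
    return delta_ke / (brake_mass * c_p)                 # kelvin

# Hypothetical values: 16 t bus, 25 m/s (90 km/h) to rest,
# 80 kg of total drum mass, cast iron c_p ~ 460 J/(kg K).
dT = brake_temp_rise(vehicle_mass=16000.0, v0=25.0, v1=0.0,
                     brake_mass=80.0, c_p=460.0)
```

Even this crude adiabatic bound shows a temperature rise of well over 100 K in a single emergency stop, which is why lining temperature is central to the simulation described above.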

Relevance: 90.00%

Abstract:

Multiprocessor system-on-chip (MPSoC) designs utilize the available technology and communication architectures to meet the requirements of upcoming applications. In an MPSoC, the communication platform is both the key enabler and the key differentiator for realizing efficient designs. It provides product differentiation to meet a diverse, multi-dimensional set of design constraints, including performance, power, energy, reconfigurability, scalability, cost, reliability and time-to-market. The communication resources of a single interconnection platform cannot be fully utilized by all kinds of applications; for example, providing high communication bandwidth to computation-intensive but not data-intensive applications is often infeasible in practical implementations. This thesis performs architecture-level design space exploration towards efficient and scalable resource utilization for MPSoC communication architectures. In order to meet the performance requirements within the design constraints, careful selection of the MPSoC communication platform and resource-aware partitioning and mapping of the application play an important role. To enhance the utilization of communication resources, a variety of techniques can be used, such as resource sharing, multicast to avoid re-transmission of identical data, and adaptive routing; for implementation, these techniques should be customized to the platform architecture. To address the resource utilization of MPSoC communication platforms, a variety of architectures with different design parameters and performance levels, namely the Segmented bus (SegBus), Network-on-Chip (NoC) and Three-Dimensional NoC (3D-NoC), are selected. Average packet latency and power consumption are the evaluation parameters for the proposed techniques. In conventional computing architectures, a fault on a component renders the connected fault-free components inoperative. A resource-sharing approach can utilize the fault-free components to retain system performance by reducing the impact of faults. Design space exploration also guides narrowing down the selection of an MPSoC architecture that can meet the performance requirements within the design constraints.
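Average packet latency, one of the evaluation parameters above, can be illustrated with a simple zero-load wormhole-switching model (a hypothetical sketch; the thesis's own evaluation is more detailed):

```python
def packet_latency(hops, router_delay, link_delay, flits, flit_cycle=1):
    """Zero-load latency of a wormhole-switched NoC packet: the head
    flit traverses `hops` routers and links, then the remaining
    flits stream out one per cycle (serialization delay)."""
    head = hops * (router_delay + link_delay)
    return head + (flits - 1) * flit_cycle

# Illustrative numbers: 4-hop route, 3-cycle routers, 1-cycle links,
# 5-flit packet.
lat = packet_latency(hops=4, router_delay=3, link_delay=1, flits=5)
```

The model makes the trade-off visible: topology choice (SegBus vs. NoC vs. 3D-NoC) mainly changes the hop count, while link width and packet size set the serialization term.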

Relevance: 90.00%

Abstract:

Lignocellulosic biomasses (e.g., wood and straws) are a potential renewable source for the production of a wide variety of chemicals that could replace those currently produced by the petrochemical industry. This would lead to lower greenhouse gas emissions and waste amounts, and to economic savings. Many pathways are available for manufacturing chemicals from lignocellulosic biomasses. One option is to hydrolyze the cellulose and hemicelluloses of these biomasses into monosaccharides using concentrated sulfuric acid as a catalyst. This process is an efficient method for producing monosaccharides, which are valuable platform chemicals, and other valuable products are also formed in the hydrolysis. Unfortunately, concentrated acid hydrolysis has been deemed unfeasible, mainly due to the high chemical consumption resulting from the need to remove sulfuric acid from the obtained hydrolysates prior to downstream processing of the monosaccharides. Traditionally, this has been done by neutralization with lime, which results in high chemical consumption; in addition, the by-products formed in the hydrolysis are not removed and may thus hinder monosaccharide processing. To improve the feasibility of concentrated acid hydrolysis, the chemical consumption should be decreased by recycling sulfuric acid without neutralization. Furthermore, the monosaccharides and the other hydrolysis products should be recovered selectively for efficient downstream processing; selective recovery of the hydrolysis by-products would bring additional economic benefits due to their high value. In this work, the use of chromatographic fractionation for recycling sulfuric acid and for the selective recovery of the main components of the hydrolysates formed in concentrated acid hydrolysis was investigated.
Chromatographic fractionation based on electrolyte exclusion, with gel-type strong acid cation exchange resins in acid (H+) form as the stationary phase, was studied. A systematic experimental and model-based study of the separation task at hand was conducted. The phenomena affecting the separation were determined and their effects elucidated. Mathematical models that accurately take these phenomena into account were derived and used in the simulation of the fractionation process. The main components of the concentrated acid hydrolysates (sulfuric acid, monosaccharides, and acetic acid) were included in this model. The performance of the fractionation process was investigated experimentally and by simulations, and the use of different process options was also studied. Sulfuric acid was found to have a significant co-operative effect on the sorption of the other components. This brings about interesting and beneficial effects in the column operations, and is especially beneficial for the separation of sulfuric acid and the monosaccharides. Two different approaches to modelling the sorption equilibria were investigated in this work: a simple empirical approach and a thermodynamically consistent approach (the Adsorbed Solution theory). Accurate modelling of the phenomena observed in this work was found to be possible with the simple empirical models, whereas the use of the Adsorbed Solution theory is complicated by the nature of the theory and the complexity of the studied system. In addition to the sorption models, a dynamic column model that accounts for the volume changes of the gel-type resins as changes in resin bed porosity was also derived. Using chromatography, all the main components of the hydrolysates can be recovered selectively, and the sulfuric acid consumption of the hydrolysis process can be lowered considerably.
Investigation of the performance of the chromatographic fractionation showed that the highest separation efficiency in this task is obtained with a gel-type resin with a high crosslinking degree (8 wt. %), especially when the hydrolysates contain high amounts of acetic acid. In addition, the concentrated acid hydrolysis should be performed with as low a sulfuric acid concentration as possible to obtain good separation performance; the column loading and flow rate also have large effects on the performance. It was demonstrated that when the fractions obtained in the chromatographic fractionation are recycled to preceding unit operations, those unit operations should be included in the performance evaluation of the fractionation. When this was done, the separation performance and the feasibility of the concentrated acid hydrolysis process were found to improve considerably. The use of multi-column chromatographic fractionation processes, the Japan Organo process and the Multi-Column Recycling Chromatography process, was also investigated. In the studied case, neither of these processes could compete with the single-column batch process in productivity. However, owing to its internal recycling steps, Multi-Column Recycling Chromatography was found to be superior to the batch process when product yield and eluent consumption were taken into account.
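The co-operative effect of sulfuric acid on sorption can be caricatured with an empirical distribution coefficient that grows with acid concentration (an illustrative toy model with hypothetical parameters, not the fitted models of this work):

```python
def empirical_loading(c_sugar, c_acid, k0=0.5, alpha=0.8):
    """Illustrative empirical sorption model: the distribution
    coefficient of a sugar grows with the sulfuric acid
    concentration, mimicking the co-operative effect described
    above. k0 and alpha are hypothetical fitted parameters."""
    return k0 * (1.0 + alpha * c_acid) * c_sugar
```

In a column, a loading that rises with acid concentration retards the sugars relative to the excluded acid, which is why the effect is beneficial for the acid/monosaccharide separation.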

Relevance: 90.00%

Abstract:

In view of the importance of anticipating critical situations in medicine, we propose the use of a fuzzy expert system to predict the need for advanced neonatal resuscitation efforts in the delivery room. This system relates the maternal medical, obstetric and neonatal characteristics to the clinical conditions of the newborn, providing a risk measure of the need for advanced neonatal resuscitation. It is structured as a fuzzy composition developed on the basis of the subjective perception of danger of nine neonatologists considering 61 antenatal and intrapartum clinical situations, each providing a degree of association with the risk of occurrence of perinatal asphyxia. The resulting relational matrix describes the association between clinical factors and the risk of perinatal asphyxia. Taking the presence or absence of all 61 clinical factors as inputs, the system returns the risk of perinatal asphyxia as output. A prospectively collected series of 304 cases of perinatal care was analyzed to ascertain system performance. The fuzzy expert system presented a sensitivity of 76.5% and a specificity of 94.8% in identifying the need for advanced neonatal resuscitation, considering a cut-off value of 5 on a scale ranging from 0 to 10. The area under the receiver operating characteristic curve was 0.93. The identification of risk situations plays an important role in the planning of health care. These preliminary results encourage us to develop further studies and to refine this model, which is intended to become an auxiliary system to help health care staff make decisions in perinatal care.
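The reported sensitivity and specificity at a cut-off of 5 can be computed mechanically from classified cases (a generic sketch with synthetic data, not the study's 304-case series):

```python
def sensitivity_specificity(scores, labels, cutoff):
    """Dichotomize risk scores (0-10 scale) at `cutoff` and compare
    with outcomes; labels: True = advanced resuscitation was needed."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and not y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and not y)
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic example (not the study's data), cut-off of 5 as above:
sens, spec = sensitivity_specificity(
    [7, 2, 6, 3, 9, 1],
    [True, True, False, False, True, False],
    cutoff=5)
```

Sweeping the cutoff from 0 to 10 and plotting sensitivity against (1 - specificity) yields the ROC curve whose area the study reports as 0.93.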

Relevance: 90.00%

Abstract:

This thesis presents the structure and operation of photovoltaic systems and suitable applications for them. The goal of the work is to assess, from a techno-economic standpoint, the suitability of building-integrated photovoltaic systems for Nordic conditions. The technical assessment is based on the literature, practical analysis, and simulated results; the economic assessment also includes computational analysis. In evaluating the operation of the photovoltaic system, previously published material on photovoltaic system performance was used, since the limited resources available did not allow sufficiently extensive performance tests to be carried out. The technical assessment revealed the influence of the most significant factors on the operation of building-integrated photovoltaic systems. Based on the techno-economic assessment, replacing façade materials with solar panels should be considered case by case. The work also includes a review of existing technical solutions.

Relevance: 90.00%

Abstract:

This paper develops a model of short-range ballistic missile defense and uses it to study the performance of Israel's Iron Dome system. The deterministic base model allows for inaccurate missiles, unsuccessful interceptions, and civil defense. Model enhancements consider the trade-offs in attacking the interception system, the difficulties faced by militants in assembling large salvos, and the effects of imperfect missile classification by the defender. A stochastic model is also developed. Analysis shows that system performance can be highly sensitive to the missile salvo size, and that systems with higher interception rates are more "fragile" when overloaded. The model is calibrated using publicly available data about Iron Dome's use during Operation Pillar of Defense in November 2012. If the systems performed as claimed, they saved Israel an estimated 1778 casualties and $80 million in property damage, and thereby made preemptive strikes on Gaza about 8 times less valuable to Israel. Gaza militants could have inflicted far more damage by grouping their rockets into large salvos, but this may have been difficult given Israel's suppression efforts. Counter-battery fire by the militants is unlikely to be worthwhile unless they can obtain much more accurate missiles.
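The "fragility" of a highly effective interceptor under overload can be seen in a toy version of the salvo model (our simplification, not the paper's calibrated model):

```python
def expected_leakers(salvo, capacity, p_intercept):
    """Expected missiles that get through: the defender engages at
    most `capacity` incoming missiles per salvo, each intercepted
    with probability p_intercept; the rest leak through unengaged."""
    engaged = min(salvo, capacity)
    return (salvo - engaged) + engaged * (1.0 - p_intercept)

# Below capacity, a 90%-effective system leaks almost nothing;
# past capacity, every extra missile leaks with certainty.
small_salvo = expected_leakers(salvo=5, capacity=10, p_intercept=0.9)
big_salvo = expected_leakers(salvo=20, capacity=10, p_intercept=0.9)
```

The relative jump from near-zero leakage to one leaker per extra missile is sharper the higher p_intercept is, which is the sense in which better systems are more fragile when overloaded.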

Relevance: 90.00%

Abstract:

This thesis aims to study the influence of health care financing on the performance of health care systems, taking into account the organizational characteristics of those systems. It is structured around the following three objectives: 1) characterize health care financing through the different models emerging from high-income countries; 2) assess the performance of health care systems by establishing the various profiles appearing in these same countries; 3) examine the link between financing and performance, taking into account the moderating effect of the organizational context of care. Inspired by the flow of money through the health care system, the approach first consisted of classifying the countries studied, through a configurational analysis operationalized by multiple correspondence analysis (MCA) and agglomerative hierarchical clustering (AHC), into typical models, each representing a particular configuration of the health care financing process (article 1). Applied to data collected from the 27 high-income OECD countries via the Health Care in Transition country reports produced by the WHO European Office, the OECD Health Data 2007 database, and the 2008 WHO statistics, the analyses revealed five financing models. They are distinguished by the functions of collecting money in the system (collection), pooling the money collected (pooling), distributing the money collected and pooled (allocation), and the process of paying health professionals and institutions (payment). The models thus developed, which go beyond the single process of collecting money, give a more complete picture of the health care financing process.
They thus allow an understanding of the internal coherence existing between the financing functions in the event of a change in a country's mode of financing. Second, drawing on a multidimensional conception of system performance, we classified the countries: first, according to their level in terms of resources mobilized, services produced, and health outcomes achieved (defining absolute performance); second, according to the efforts they make to achieve a high level of health outcomes in proportion to the resources mobilized and services produced, in terms of efficiency, effectiveness, and productivity (thus defining relative performance); and third, according to the typical overall performance profiles emerging when absolute and relative performance levels are considered simultaneously (article 2). The analyses performed on data collected from the same 27 countries identified four performance profiles that differ in their level of multidimensional and overall performance. The results thus obtained allow a comparison of the overall performance levels of health care systems. Finally, to answer the question of which mode or modes of financing would generate better performance results, and in which organizational context of care, a finer analysis of the relationships between financing and performance (all defined as above), taking into account health care organizational characteristics, was carried out (article 3). The results show that there is almost no direct relationship between financing and performance. However, when financing interacts with the health care organizational context in explaining system performance levels, relevant and revealing relationships appear.
Thus, certain modes of financing appear more attractive than others in terms of performance in different health care organizational contexts. The results therefore allow all actors in the system to understand that health financing has only an indirect influence on health care system performance, owing to the interaction of financing with the organizational context of care. One of the original contributions of this thesis is that very few studies have attempted to operationalize the concepts of financing and performance multidimensionally before analyzing the associations that may exist between them. Moreover, while the relevance of taking organizational context characteristics into account when implementing health care system reforms is a central concern, this work is one of the first to analyze the influence of the interaction between financing and the health care organizational context on health care system performance.

Relevance: 90.00%

Abstract:

In Quebec, networks were implemented to counter the lack of integration of services offered to people living with traumatic brain injury (TBI). However, evaluation of their performance is currently limited by the absence of a description and conceptualization of that performance. The goal of this thesis is to lay the preliminary groundwork for a process of evaluating the performance of TBI networks. Our objectives are to 1) describe the organizations, the nature and quality of the ties, and the configuration of a TBI network; 2) learn the perceptions of the network's constituents regarding the strengths, weaknesses, opportunities, and threats specific to this organizational form; 3) document and compare the perceptions of respondents from various types of organizations regarding the importance of 16 dimensions of the performance concept for the evaluation of TBI networks; 4) reconcile the differing perceptions in order to propose a consensual ranking of the performance dimensions. Using social network analysis, we described a small, moderately dense network organized essentially around four highly centralized organizations. The constituents described their network as having as many strengths as weaknesses. Most of the issues reported related to the network's Adaptation to its environment and to the Maintenance of Values. Moreover, the representatives of the 46 member organizations of a TBI network perceived the performance dimensions related to Goal Attainment as more important than those related to Processes. The Ability to attract clientele, Continuity, and the Ability to adapt to meet clients' needs were the three most important dimensions, while the Ability to adapt to requirements and trends and the Volume of care and services were the least important.
The TRIAGE groups allowed the constituents to agree on the importance attributed to each dimension and to harmonize their different perspectives. Although several steps remain before the performance evaluation process for Quebec's TBI networks can be implemented, our work lays a solid scientific foundation that optimizes the relevance and uptake of the results for the subsequent steps.
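The structural measures quoted above (a moderately dense network centralized around a few organizations) can be computed directly from an edge list; a generic sketch on a toy star network, not the 46-organization data:

```python
def density(n_nodes, edges):
    """Density of an undirected network: observed ties over the
    maximum possible n*(n-1)/2."""
    return 2 * len(edges) / (n_nodes * (n_nodes - 1))

def degree_centralization(n_nodes, edges):
    """Freeman degree centralization: how concentrated ties are
    around the most central node, from 0 (even) to 1 (star)."""
    deg = {i: 0 for i in range(n_nodes)}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    dmax = max(deg.values())
    return sum(dmax - d for d in deg.values()) / (
        (n_nodes - 1) * (n_nodes - 2))

# A 4-node star: one hub tied to all others.
star = [(0, 1), (0, 2), (0, 3)]
```

A star network scores 1.0 on centralization, the extreme of the "organized around a few highly centralized organizations" pattern described above.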

Relevance: 90.00%

Abstract:

Supervised project presented to the Faculty of Nursing in fulfillment of the requirements for the degree of Master of Science (M.Sc.) in Nursing, nursing administration option.

Relevance: 90.00%

Abstract:

Queueing systems in which arriving customers who find all servers and waiting positions (if any) occupied may retry for service after a period of time are called retrial queues, or queues with repeated attempts. This study has two objectives. The first is to introduce orbital search into retrial queueing models, which makes it possible to minimize the idle time of the server; when holding costs and the cost of using customer search are introduced, the results obtained can be used for optimal tuning of the parameters of the search mechanism. The second is to provide insight into the link between a retrial queue and the corresponding classical queue. We observe that when the search probability Pj = 1 for all j, the model reduces to the classical queue, and when Pj = 0 for all j, the model becomes the retrial queue. The thesis discusses the performance evaluation of the single-server retrial queue with Poisson arrivals. It then analyzes the structure of the busy period in terms of Laplace transforms and provides a direct method of evaluating the first and second moments of the busy period. It goes on to discuss the M/PH/1 retrial queue with disasters to the unit in service and orbital search, and a multi-server retrial queueing model (MAP/M/c) with search of customers from the orbit; the MAP is a convenient tool for modelling both renewal and non-renewal arrivals. Finally, the present model deals with back-and-forth movement between the classical queue and the retrial queue: as the orbit size increases, the retrial rate increases correspondingly, thereby reducing the idle time of the server between services.
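The behavior described above can be checked with a small event-driven simulation of a single-server retrial queue with orbital search (our sketch, with exponential assumptions throughout; all names are ours):

```python
import random

def simulate_retrial(lam, mu, retrial_rate, p_search, t_end, seed=7):
    """Single-server retrial queue with orbital search.

    Arrivals (rate lam) finding the server busy join the orbit.
    On a service completion, with probability p_search the server
    immediately fetches a customer from the orbit; otherwise orbit
    customers retry at rate orbit_size * retrial_rate.
    Returns the long-run fraction of time the server is busy.
    """
    rng = random.Random(seed)
    t, orbit, busy, busy_time = 0.0, 0, False, 0.0
    t_arr = rng.expovariate(lam)
    t_dep = float("inf")
    while t < t_end:
        # Exponential clocks can be resampled each event (memorylessness).
        t_ret = (t + rng.expovariate(orbit * retrial_rate)
                 if orbit > 0 and not busy else float("inf"))
        t_next = min(t_arr, t_dep, t_ret, t_end)
        if busy:
            busy_time += t_next - t
        t = t_next
        if t >= t_end:
            break
        if t == t_arr:                      # new arrival
            if busy:
                orbit += 1
            else:
                busy = True
                t_dep = t + rng.expovariate(mu)
            t_arr = t + rng.expovariate(lam)
        elif t == t_dep:                    # service completion
            busy = False
            t_dep = float("inf")
            if orbit > 0 and rng.random() < p_search:   # orbital search
                orbit -= 1
                busy = True
                t_dep = t + rng.expovariate(mu)
        else:                               # successful retrial
            orbit -= 1
            busy = True
            t_dep = t + rng.expovariate(mu)
    return busy_time / t_end

u = simulate_retrial(lam=0.5, mu=1.0, retrial_rate=1.0,
                     p_search=0.5, t_end=10000.0)
```

For a stable queue every arrival is eventually served exactly once, so the busy fraction should approach λ/μ = 0.5 here regardless of p_search; what the search probability changes is how long customers linger in the orbit.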

Relevance: 90.00%

Abstract:

One of the fastest expanding areas of computer exploitation is embedded systems, whose prime function is not computing, but which nevertheless require information processing in order to carry out their prime function. Advances in hardware technology have made multi-microprocessor systems a viable alternative to uniprocessor systems in many embedded application areas. This thesis reports the results of investigations carried out on multi-microprocessors oriented towards embedded applications, with a view to enhancing throughput and reliability. An ideal controller for multiprocessor operation is developed which smooths the sharing of routines and enables more powerful and efficient code/data interchange; results of its performance evaluation are appended. A typical application scenario is presented, which calls for classifying tasks based on characteristic features that were identified. The different classes are introduced along with a partitioned storage scheme, and a theoretical analysis is given. A review of schemes available for reducing disc access time is carried out and a new scheme is presented, which is found to speed up database transactions in embedded systems. The significance of software maintenance and adaptation in such applications is highlighted, and a novel scheme of providing a maintenance folio for system firmware is presented, along with experimental results. Processing reliability can be enhanced if a facility exists to check whether a particular instruction in a stream is appropriate; the likelihood of occurrence of a particular instruction can be judged more reliably when the number of instructions in the set is smaller. A new organisation is derived to form the basis for further work, and some early results that help steer the course of the work are presented.

Relevance: 90.00%

Abstract:

Learning Disability (LD) is a general term that describes specific kinds of learning problems. It is a neurological condition that affects a child's brain and impairs the ability to carry out one or many specific tasks. Learning-disabled children are neither slow nor mentally retarded. The disorder can make it problematic for a child to learn as quickly, or in the same way, as a child who is not affected by a learning disability. An affected child can have normal or above-average intelligence. They may have difficulty paying attention, with reading or letter recognition, or with mathematics. This does not mean that children who have learning disabilities are less intelligent; in fact, many children who have learning disabilities are more intelligent than the average child. Learning disabilities vary from child to child: one child with LD may not have the same kind of learning problems as another. There is no cure for learning disabilities and they are life-long; however, children with LD can be high achievers and can be taught ways to work around the learning disability. In this research work, data mining using machine learning techniques is used to analyze the symptoms of LD, establish interrelationships between them, and evaluate the relative importance of these symptoms. To increase the diagnostic accuracy of learning disability prediction, a knowledge-based tool built on statistical machine learning (data mining) techniques, with high accuracy according to the knowledge obtained from the clinical information, is proposed. The basic idea of the developed tool is to increase the accuracy of learning disability assessment and to reduce the time it requires. Different statistical machine learning techniques in data mining are used in the study.
Identifying the important parameters of LD prediction using data mining techniques, identifying the hidden relationships between the symptoms of LD, and estimating the relative significance of each symptom are also parts of the objectives of this research work. The developed tool has many advantages over the traditional methods of using checklists to determine learning disabilities. To improve the performance of the various classifiers, we developed several preprocessing methods for the LD prediction system. A new system based on fuzzy and rough set models is also developed for LD prediction, and here too the importance of pre-processing is studied. A Graphical User Interface (GUI) is designed for an integrated knowledge-based tool that predicts LD as well as its degree. The designed tool stores the details of the children in a student database and retrieves their LD reports as and when required. The present study demonstrates the effectiveness of the tool developed with various machine learning techniques; it also identifies the important parameters of LD and accurately predicts learning disability in school-age children. This thesis makes several major contributions in technical, general and social areas. The results are very beneficial to parents, teachers and institutions, who can diagnose a child's problem at an early stage and seek proper treatment or counselling at the right time, avoiding academic and social losses.
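As a flavor of the statistical machine learning techniques involved, here is a minimal Bernoulli naive Bayes over binary symptom indicators (synthetic data for illustration only; not the thesis's actual models or clinical data):

```python
import math

def train(samples):
    """Bernoulli naive Bayes over binary symptom vectors.
    samples: list of (features, label); features is a tuple of 0/1
    symptom indicators, label is 1 (LD) or 0 (no LD)."""
    n_feat = len(samples[0][0])
    counts = {0: [0] * n_feat, 1: [0] * n_feat}
    totals = {0: 0, 1: 0}
    for feats, label in samples:
        totals[label] += 1
        for i, f in enumerate(feats):
            counts[label][i] += f
    model = {}
    for c in (0, 1):
        prior = totals[c] / len(samples)
        # Laplace smoothing avoids zero probabilities.
        probs = [(counts[c][i] + 1) / (totals[c] + 2)
                 for i in range(n_feat)]
        model[c] = (prior, probs)
    return model

def predict(model, feats):
    """Return the class with the highest log-posterior."""
    best, best_lp = None, None
    for c, (prior, probs) in model.items():
        lp = math.log(prior)
        for f, p in zip(feats, probs):
            lp += math.log(p if f else 1.0 - p)
        if best_lp is None or lp > best_lp:
            best, best_lp = c, lp
    return best

# Tiny synthetic training set: features are hypothetical indicators
# (reading difficulty, attention problems, math difficulty).
data = [((1, 1, 1), 1), ((1, 0, 1), 1), ((1, 1, 0), 1),
        ((0, 0, 0), 0), ((0, 1, 0), 0), ((0, 0, 1), 0)]
model = train(data)
```

The per-feature probabilities learned by such a model are one simple way to estimate the relative significance of individual symptoms, one of the stated objectives above.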