974 results for Adjusting computing methods


Relevance:

100.00%

Publisher:

Abstract:

In metallurgical plants, high-quality metal production is always required. Nowadays, soft computing applications are increasingly used for the automation of manufacturing processes and for quality control, in place of purely mechanical techniques. This thesis presents an overview of soft computing methods. As an example of a soft computing application, an effective fuzzy expert system model for the automatic quality control of the steel degassing process was developed. The purpose of this work is to describe the fuzzy relations as quality hypersurfaces by varying the number of linguistic variables and fuzzy sets.
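The thesis's fuzzy expert system is not reproduced here, but its core mechanism can be sketched. The single input variable (residual oxygen), the membership ranges and the rule weights below are all invented for illustration:

```python
# Minimal fuzzy-inference sketch: map a process reading (e.g. residual
# oxygen after degassing) to a quality grade.  All membership ranges and
# rules are illustrative, not taken from the thesis.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def quality_grade(oxygen_ppm):
    # Linguistic variables for the input.
    low  = tri(oxygen_ppm, -1, 0, 5)
    mid  = tri(oxygen_ppm, 3, 6, 9)
    high = tri(oxygen_ppm, 7, 12, 20)
    # Rule base: low oxygen -> good (1.0), mid -> fair (0.5), high -> poor (0.0).
    # Defuzzify with a weighted average (simplified Sugeno-style output).
    num = low * 1.0 + mid * 0.5 + high * 0.0
    den = low + mid + high
    return num / den if den else None

print(quality_grade(2.0))   # low reading: grade near "good"
```

Varying the number of fuzzy sets per variable, as the thesis does, reshapes the resulting quality hypersurface over the input space.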

Relevance:

100.00%

Publisher:

Abstract:

This document covers the family of methods for training and exploiting recurrent neural networks known as Reservoir Computing. An introduction to Machine Learning in general is given to provide the tools needed to understand the topic. Implementation details and an analysis of the strengths and weaknesses of the various approaches follow, supported by code and explanatory images. Finally, conclusions are drawn about the approaches, about possible improvements, and about practical applications.
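As a concrete illustration of the family of methods the document describes, here is a minimal Echo State Network, the best-known Reservoir Computing model: only the linear readout is trained, while the recurrent reservoir stays fixed. All sizes, scalings and the toy sine task are illustrative choices, not taken from the document:

```python
import numpy as np

# Minimal Echo State Network sketch (core Reservoir Computing model).
rng = np.random.default_rng(0)
n_in, n_res = 1, 50

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(u_seq):
    """Collect reservoir states for an input sequence of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
t = np.linspace(0, 8 * np.pi, 400)
u = np.sin(t)[:, None]
X = run_reservoir(u[:-1])
y = u[1:, 0]
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)  # train only the readout
pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

The cheap training step (a single least-squares fit) is exactly the advantage over full recurrent-network training that the document analyzes.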

Relevance:

100.00%

Publisher:

Abstract:

The development of applications and services for mobile systems faces a wide range of devices with very heterogeneous capabilities, whose response times are difficult to predict. The research described in this work addresses this issue by developing a computational model that formalizes the problem and defines adjusting computing methods. The proposal combines imprecise computation strategies with cloud computing paradigms to provide flexible implementation frameworks for embedded or mobile devices. The resulting imprecise-computation scheduling method moves computing from the embedded system's workload to the cloud according to the priority and response time of the tasks to be executed, thereby meeting the desired productivity and quality of service. A technique to estimate network delays and to schedule tasks more accurately is illustrated, together with an application example in which the technique is tested in running contexts with heterogeneous workloads to check the validity of the proposed model.
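The offloading decision described above can be sketched roughly as follows; the task fields, the numbers and the fallback to an imprecise local result are illustrative assumptions, not the paper's actual scheduling algorithm:

```python
# Sketch of the offloading decision: run a task locally when the device
# can meet its deadline, otherwise move it to the cloud if the estimated
# network delay still allows a timely result; as a last resort, degrade
# quality (imprecise computation).  All values are illustrative.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int
    local_ms: float      # estimated execution time on the device
    cloud_ms: float      # estimated execution time in the cloud
    deadline_ms: float

def schedule(tasks, est_network_delay_ms):
    decisions = {}
    # Higher-priority tasks are decided first.
    for t in sorted(tasks, key=lambda t: -t.priority):
        cloud_total = t.cloud_ms + est_network_delay_ms
        if t.local_ms <= t.deadline_ms:
            decisions[t.name] = "local"
        elif cloud_total <= t.deadline_ms:
            decisions[t.name] = "cloud"
        else:
            decisions[t.name] = "imprecise-local"  # run only the mandatory part
    return decisions

tasks = [Task("ocr", 2, 120, 30, 100), Task("ui", 3, 5, 40, 16)]
print(schedule(tasks, est_network_delay_ms=40))
```

In the paper's model the network-delay estimate is itself the output of a dedicated estimation technique; here it is simply passed in as a constant.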

Relevance:

90.00%

Publisher:

Relevance:

80.00%

Publisher:

Abstract:

PURPOSE: Pharmacovigilance methods have advanced greatly during the last decades, making post-market drug assessment an essential component of drug evaluation. These methods mainly rely on spontaneous reporting systems and health information databases to extract knowledge from huge amounts of real-world reports. The EU-ADR Web Platform was built to further facilitate accessing, monitoring and exploring these data, enabling an in-depth analysis of adverse drug reaction risks.

METHODS: The EU-ADR Web Platform exploits the wealth of data collected within a large-scale European initiative, the EU-ADR project. Millions of electronic health records, provided by national health agencies, are mined for specific drug events, which are correlated with literature, protein and pathway data, resulting in a rich drug-event dataset. Next, advanced distributed computing methods are tailored to coordinate the execution of data-mining and statistical analysis tasks. This permits obtaining a ranked drug-event list, removing spurious entries and highlighting relationships with high risk potential.

RESULTS: The EU-ADR Web Platform is an open workspace for the integrated analysis of pharmacovigilance datasets. Using this software, researchers can access a variety of tools provided by distinct partners in a single centralized environment. Besides performing standalone drug-event assessments, they can also control the pipeline for an improved batch analysis of custom datasets. Drug-event pairs can be substantiated and statistically analysed within the platform's innovative working environment.

CONCLUSIONS: A pioneering workspace that helps in explaining the biological path of adverse drug reactions was developed within the EU-ADR project consortium. This tool, targeted at the pharmacovigilance community, is available online at https://bioinformatics.ua.pt/euadr/. Copyright © 2012 John Wiley & Sons, Ltd.
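Ranked drug-event lists of this kind rest on disproportionality statistics computed over report counts. One standard measure, shown here only for illustration since the abstract does not specify the platform's exact statistics, is the Proportional Reporting Ratio:

```python
# Proportional Reporting Ratio (PRR) over a 2x2 contingency table of
# spontaneous reports.  A PRR well above 1 flags a drug-event pair as a
# potential signal worth substantiating.

def prr(a, b, c, d):
    """
    a: reports with the drug AND the event
    b: reports with the drug, without the event
    c: reports with the event, other drugs
    d: reports without the event, other drugs
    """
    return (a / (a + b)) / (c / (c + d))

# Toy contingency counts for one drug-event pair (invented numbers).
score = prr(a=20, b=180, c=40, d=1960)
print(round(score, 2))  # 5.0: the event is reported 5x more often with this drug
```

In a platform like EU-ADR such scores are then filtered and cross-checked against literature and pathway data to remove spurious entries.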

Relevance:

80.00%

Publisher:

Abstract:

One major component of power system operation is generation scheduling. The objective of this work is to develop efficient control strategies for power scheduling problems through Reinforcement Learning approaches. The three important active power scheduling problems are Unit Commitment, Economic Dispatch and Automatic Generation Control. Numerical solution methods proposed for power scheduling are insufficient for handling large and complex systems. Soft computing methods such as Simulated Annealing and Evolutionary Programming are efficient in handling complex cost functions, but are limited in handling the stochastic data present in a practical system; moreover, their learning steps must be repeated for each load demand, which increases the computation time.

Reinforcement Learning (RL) is a method of learning through interactions with an environment. Its main advantage is that it does not require a precise mathematical formulation: it can learn by interacting either with the environment itself or with a simulation model. Several optimization and control problems have been solved through Reinforcement Learning, but applications of RL in the field of power systems have been few. The objective here is to introduce and extend Reinforcement Learning approaches for the active power scheduling problems in an implementable manner. The main objectives can be enumerated as: (i) evolve Reinforcement Learning based solutions to the Unit Commitment problem; (ii) find suitable solution strategies through Reinforcement Learning for Economic Dispatch; (iii) extend the Reinforcement Learning solution to Automatic Generation Control with a different perspective; (iv) check the suitability of the scheduling solutions on an existing power system.

The first part of the thesis is concerned with the Reinforcement Learning approach to the Unit Commitment problem, which is formulated as a multi-stage decision process. A Q-learning solution is developed to obtain the optimum commitment schedule, and state aggregation is used to formulate an efficient solution considering the minimum up-time / down-time constraints. The performance of the algorithms is evaluated on different systems and compared with other stochastic methods such as Genetic Algorithms.

The second stage of the work solves the Economic Dispatch problem. A simple and straightforward decision-making strategy is first proposed using a Learning Automata algorithm. To solve the scheduling task for systems with a large number of generating units, the problem is then formulated as a multi-stage decision-making task, and the solution is extended to incorporate transmission losses. To make the Reinforcement Learning solution more efficient and to handle a continuous state space, a function approximation strategy is proposed. The developed algorithms are tested on several standard test cases and compared with other recent methods such as the Partition Approach Algorithm and Simulated Annealing.

As the final step of implementing the active power control loops, Automatic Generation Control is considered. Reinforcement Learning has already been applied to the Automatic Generation Control loop; here the RL solution is extended to adopt a common frequency for all interconnected areas, closer to practical systems. The performance of the RL controller is also compared with that of the conventional integral controller. To prove the suitability of the proposed methods for practical systems, the second plant of Neyveli Thermal Power Station (NTPS II) is taken as a case study.

The performance of the Reinforcement Learning solutions is found to be better than that of the other existing methods, a promising step towards RL-based control schemes for the practical power industry. The proposed solutions offer scope for greater profit, as the economic schedule is obtained instantaneously, and since Reinforcement Learning can use the stochastic cost data obtained from a plant over time, the method is implementable. As a further step, with suitable methods to interface with on-line data, economic scheduling can be achieved instantaneously in a generation control centre; power scheduling of systems with different sources such as hydro and thermal can also be investigated through Reinforcement Learning solutions.
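The multi-stage Q-learning formulation of Unit Commitment can be sketched on a toy system. The two units, their costs and the demand profile below are invented, and the stages are treated as independent decisions, a simplification of the thesis's actual formulation:

```python
import random

# Tabular Q-learning sketch of unit commitment as a multi-stage decision
# process.  Units, costs and demands are illustrative, not from the thesis.

random.seed(1)
demands = [50, 120, 80]                 # demand at each stage (MW)
units = {0: (60, 1.0), 1: (100, 1.5)}   # unit id: (capacity MW, cost per MW)
actions = [(0,), (1,), (0, 1)]          # possible commitment sets

Q = {(s, a): 0.0 for s in range(len(demands)) for a in range(len(actions))}
alpha, eps = 0.3, 0.2

def cost(stage, a_idx):
    committed = actions[a_idx]
    cap = sum(units[u][0] for u in committed)
    if cap < demands[stage]:
        return 1e4                       # penalty: demand not met
    return sum(units[u][0] * units[u][1] for u in committed)

for episode in range(2000):
    for s in range(len(demands)):
        if random.random() < eps:        # epsilon-greedy exploration
            a = random.randrange(len(actions))
        else:
            a = min(range(len(actions)), key=lambda a: Q[(s, a)])
        # Stages are independent in this toy, so the Q update has no
        # bootstrap term: Q moves toward the observed stage cost.
        Q[(s, a)] += alpha * (cost(s, a) - Q[(s, a)])

schedule = [min(range(len(actions)), key=lambda a: Q[(s, a)])
            for s in range(len(demands))]
print(schedule)   # cheapest feasible commitment per stage
```

The thesis's real formulation couples the stages through minimum up-time / down-time constraints, which is where state aggregation becomes necessary.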

Relevance:

80.00%

Publisher:

Abstract:

Learning Disability (LD) is a general term that describes specific kinds of learning problems. It is a neurological condition that affects a child's brain and impairs the ability to carry out one or many specific tasks. Children with learning disabilities are neither slow nor intellectually disabled; an affected child can have normal or above-average intelligence. The disorder can make it hard for a child to learn as quickly, or in the same way, as an unaffected child. Such children may have difficulty paying attention, with reading or letter recognition, or with mathematics, but this does not mean they are less intelligent; many children with learning disabilities are in fact more intelligent than the average child. Learning disabilities vary from child to child: one child with LD may not have the same kind of learning problems as another. There is no cure for learning disabilities and they are life-long; however, children with LD can be high achievers and can be taught ways to work around the disability.

In this research work, data mining using machine learning techniques is used to analyze the symptoms of LD, establish interrelationships between them and evaluate the relative importance of these symptoms. To increase the diagnostic accuracy of learning disability prediction, a knowledge-based tool built on statistical machine learning and data mining techniques, with high accuracy according to the knowledge obtained from the clinical information, is proposed. The basic idea of the tool is to increase the accuracy of learning disability assessment while reducing the time it takes. Different statistical machine learning techniques in data mining are used in the study. Identifying the important parameters of LD prediction, uncovering the hidden relationships between the symptoms of LD and estimating the relative significance of each symptom are also objectives of this research. The developed tool has many advantages over the traditional checklist-based determination of learning disabilities. To improve the performance of the various classifiers, preprocessing methods were developed for the LD prediction system; a new system based on fuzzy and rough set models was also developed, and the importance of preprocessing was studied there as well. A Graphical User Interface (GUI) was designed to integrate the knowledge-based tool for predicting LD as well as its degree; the tool stores children's details in a student database and retrieves their LD reports as required. The present study demonstrates the effectiveness of the tool built on various machine learning techniques: it identifies the important parameters of LD and accurately predicts learning disability in school-age children. The thesis makes several contributions in technical, general and social areas. The results are beneficial to parents, teachers and institutions, who can diagnose a child's problem at an early stage and arrange proper treatment or counselling at the right time, avoiding academic and social losses.
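Symptom-based prediction can be illustrated in the simplest possible terms with a toy nearest-neighbour classifier over binary symptom checklists; the symptom names, the records and the distance choice are all invented and far simpler than the statistical machine learning techniques the thesis actually employs:

```python
# Toy sketch of checklist-based LD prediction via k-nearest neighbours.
# Symptom names and training records are invented for illustration.

SYMPTOMS = ["reading_difficulty", "letter_reversal",
            "attention_deficit", "math_difficulty"]

# (symptom vector, has_LD) training records -- illustrative only.
train = [
    ((1, 1, 0, 1), True),
    ((1, 0, 1, 1), True),
    ((0, 0, 1, 0), False),
    ((0, 0, 0, 0), False),
    ((1, 1, 1, 0), True),
    ((0, 1, 0, 0), False),
]

def predict(x, k=3):
    # Hamming distance between two binary symptom vectors.
    dist = lambda a, b: sum(ai != bi for ai, bi in zip(a, b))
    nearest = sorted(train, key=lambda rec: dist(rec[0], x))[:k]
    votes = sum(label for _, label in nearest)
    return votes > k // 2          # majority vote among the k neighbours

print(predict((1, 1, 0, 0)))       # profile close to the LD records
```

A real system, as the thesis shows, additionally needs preprocessing and symptom-importance estimation before any classifier becomes clinically useful.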

Relevance:

80.00%

Publisher:

Abstract:

Learning Disability (LD) is a neurological condition that affects a child's brain and impairs the ability to carry out one or many specific tasks. LD affects about 15% of children enrolled in schools, and its prediction is a vital and intricate job. The aim of this paper is to design an effective and powerful tool, using two intelligent methods, Artificial Neural Networks and the Adaptive Neuro-Fuzzy Inference System, for measuring the percentage of LD affecting school-age children. In this study, soft computing methods are proposed for data preprocessing to improve the accuracy of the tool as well as of the classifier. Preprocessing is performed through Principal Component Analysis for attribute reduction, and a closest-fit algorithm is used for imputing missing values. The main idea in developing the LD prediction tool is not only to detect the LD present in children but also to measure its percentage along with its class (low, minor or major). The system is implemented in MathWorks MATLAB 7.10. The results obtained in this study illustrate that the designed prediction tool is capable of measuring LD effectively.
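The two preprocessing steps, closest-fit imputation followed by PCA for attribute reduction, can be sketched as follows; the data matrix is invented, and PCA is done here via SVD in NumPy rather than the MATLAB implementation the paper uses:

```python
import numpy as np

# Sketch of the preprocessing pipeline: closest-fit imputation of a
# missing value, then PCA (via SVD) for attribute reduction.  The
# 4-attribute score matrix is invented; NaN marks a missing entry.

X = np.array([
    [3.0, 2.0, 4.0, 1.0],
    [2.9, 2.1, np.nan, 1.2],
    [0.5, 0.4, 1.0, 0.2],
])

# Closest fit: fill each NaN from the row most similar on observed attributes.
for i in range(len(X)):
    miss = np.isnan(X[i])
    if miss.any():
        obs = ~miss
        others = [j for j in range(len(X))
                  if j != i and not np.isnan(X[j]).any()]
        donor = min(others, key=lambda j: np.sum((X[j, obs] - X[i, obs]) ** 2))
        X[i, miss] = X[donor, miss]

# PCA via SVD: keep the first 2 principal components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
reduced = Xc @ Vt[:2].T
print(X[1])            # missing value filled from the closest row
print(reduced.shape)   # (3, 2)
```

The reduced matrix would then feed the ANN and ANFIS classifiers the paper describes.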

Relevance:

80.00%

Publisher:

Abstract:

A combined theoretical and experimental study has been carried out to elucidate the molecular mechanism of the Grob fragmentation of different (N-halo)-2-amino cyclocarboxylates with the nitrogen atom in an exocyclic position: (N-Cl)-2-amino cyclopropanecarboxylate (1), (N-Cl)-2-amino cyclobutanecarboxylate (2), (N-Cl)-2-amino cyclopentanecarboxylate (3) and (N-Cl)-2-amino cyclohexanecarboxylate (4), together with the corresponding acyclic compounds, (N-Cl)-2-amino isobutyric acid (A) and (N-Cl)-2-amino butyric acid (B). The decomposition kinetics of these compounds and of the related bromine derivatives were determined experimentally by conventional and stopped-flow UV spectrophotometry, and the reaction products were analyzed by GC and spectrophotometry. The theoretical analysis is based on the localization of stationary points (reactants and transition structures) on the potential energy surface. Calculations were carried out with the B3LYP/6-31+G* and MP2/6-31+G* methods in the gas phase, while solvent effects were included by means of self-consistent reaction field theory (PCM continuum model) at the MP2/6-31+G* and MP4/6-31+G*//MP2/6-31+G* levels. Based on both experimental and theoretical results, the different Grob fragmentation processes show a global synchronicity index close to 0.9, corresponding to a nearly concerted process. At the TSs, the N-Cl bond breaking is more advanced than the C-C cleavage, and an antiperiplanar configuration of these bonds is reached; this geometrical arrangement is the key factor governing the decomposition. In the case of 1 and 2, ring strain prevents this spatial disposition, leading to a larger activation barrier. Natural population analysis shows that the polarization of the N-Cl and C-C bonds along the bond-breaking process can be considered the driving force for the decomposition, and that negative charge flows from the carboxylate group to the chlorine atom to assist the reaction pathway. A comparison of theoretical and experimental results shows the relevance of the calculation level and of the inclusion of solvent effects for determining accurate unimolecular rate coefficients for the decomposition process. © 2002 Published by Elsevier Science B.V.
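The "global synchronicity index" cited above is commonly computed from bond-order variations at the transition structure. The definition below follows the widely used formulation of Moyano and co-workers; the paper may use a variant:

```latex
% Relative variation of the Wiberg bond index B_i at reactant (R),
% transition structure (TS) and product (P), for the n bonds made or broken:
\delta B_i = \frac{B_i^{\mathrm{TS}} - B_i^{\mathrm{R}}}
                  {B_i^{\mathrm{P}}  - B_i^{\mathrm{R}}},
\qquad
\delta B_{\mathrm{av}} = \frac{1}{n}\sum_{i=1}^{n}\delta B_i,
\qquad
Sy = 1 - \frac{1}{2n-2}\sum_{i=1}^{n}
         \frac{\lvert \delta B_i - \delta B_{\mathrm{av}}\rvert}
              {\delta B_{\mathrm{av}}}
```

Sy = 1 corresponds to a fully synchronous process, so the reported value of about 0.9 indicates a nearly concerted fragmentation.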

Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Publisher:

Abstract:

The world's population keeps increasing, and thus the concept of smart and cognitive cities is becoming more important. Developed countries are aware of, and working towards, the needed changes in city management; emerging countries, however, still require the optimization of their own city management. This chapter illustrates, based on a use case, how a city in an emerging country can progress quickly using the concept of smart and cognitive cities. Nairobi, the capital of Kenya, is chosen as the test case. More than half of Nairobi's population lives in slums with poor sanitation, where many inhabitants often share a single toilet, so the proper functioning and reliable maintenance of toilets are crucial. For this purpose, an approach for processing text messages based on cognitive computing (using soft computing methods) is introduced. Slum inhabitants can inform the responsible center via text message when toilets are not functioning properly; through cognitive computer systems, the center can then fix the problem quickly and efficiently by sending repair workers to the area. Focusing on the slum of Kibera, an easy-to-use approach for slum inhabitants is presented, which can make the city more efficient, sustainable and resilient (i.e., cognitive).
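The message-triage idea can be caricatured with a crisp keyword scorer; the chapter proposes fuzzy, soft-computing classification instead, and the keywords, weights and thresholds here are entirely invented:

```python
# Minimal sketch of text-message triage: score an incoming report and
# route it to a repair queue.  Keywords, weights and thresholds are
# invented; the chapter's actual approach uses soft computing (fuzzy)
# methods rather than this crisp scoring.

URGENCY = {"overflow": 3, "blocked": 2, "broken": 2, "smell": 1, "door": 1}

def triage(message):
    words = message.lower().split()
    score = sum(URGENCY.get(w, 0) for w in words)
    if score >= 3:
        return "dispatch now"
    return "schedule visit" if score > 0 else "log only"

print(triage("toilet blocked and door broken"))
```

A fuzzy version would replace the hard thresholds with membership degrees, so that borderline reports are ranked rather than binned.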

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a software prototype of a personal digital assistant 2.0. Based on soft computing methods and cognitive computing, this mobile application prototype improves calendar and mobility management in cognitive cities. Applying fuzzy cognitive maps and evolutionary algorithms, the prototype represents a next step towards the realization of cognitive cities (i.e., smart cities enhanced with cognition). A user scenario and a test version of the prototype are included for didactical reasons.
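A fuzzy cognitive map, one of the two techniques the prototype applies, iterates concept activations through a weighted influence matrix. The toy mobility map below (using a self-memory term and sigmoid squashing, one common FCM update variant) is illustrative, not the prototype's actual map:

```python
import numpy as np

# Fuzzy cognitive map sketch: activations evolve as
# A(t+1) = f(A(t) + W^T A(t)) with a sigmoid squashing f.
# Concepts and weights form a toy mobility scenario, invented here.

concepts = ["traffic", "meeting_urgency", "leave_early"]
W = np.array([            # W[i, j]: influence of concept i on concept j
    [0.0, 0.0, 0.7],      # heavy traffic      -> leave earlier
    [0.0, 0.0, 0.8],      # urgent meeting     -> leave earlier
    [0.0, 0.0, 0.0],
])

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-5 * x))

A = np.array([0.8, 0.9, 0.0])   # initial activations
for _ in range(10):             # iterate towards a fixed point
    A = sigmoid(A + W.T @ A)

print(dict(zip(concepts, A.round(2))))
```

With both causes strongly active, the "leave_early" concept converges to a high activation, the kind of inference the assistant would turn into a calendar suggestion.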

Relevance:

80.00%

Publisher:

Abstract:

The main focus of this thesis is the relative localization problem of a heterogeneous team comprising both ground robots and micro aerial vehicles. This team configuration combines the increased accessibility and better perspective provided by aerial robots with the higher computational and sensory resources of the ground agents, realizing a cooperative multi-robot system suitable for hostile autonomous missions. In such a scenario, however, the strict constraints on flight time, sensor payload and computational capability of micro aerial vehicles limit the practical applicability of popular map-based localization schemes for GPS-denied navigation; the resource-limited aerial platforms of the team therefore demand simpler localization means for autonomous navigation. Relative localization is the process of estimating the formation of a robot team using acquired inter-robot relative measurements. It lets the team members know their relative formation even without a global localization reference such as GPS or a map, so a typical robot team can implement formation control, collision avoidance and supervisory control tasks independently of a global localization service. More importantly, a heterogeneous team of ground robots and computationally constrained aerial vehicles benefits from relative localization because it provides the crucial localization information required for the autonomous operation of the weaker agents, enabling less capable robots to assume supportive roles and contribute to the more powerful robots executing the mission. This study therefore proposes a relative localization-based approach for ground and micro aerial vehicle cooperation, and develops the inter-robot measurement, filtering and distributed computing modules necessary to realize the system.

The research results in three significant contributions. First, the work designs and validates a novel inter-robot relative measurement hardware solution with the accuracy, range and scalability characteristics necessary for relative localization. Second, it analyzes and designs a novel nonlinear filtering method, which allows relative localization modules and attitude reference filters to be implemented on low-cost devices with optimal tuning parameters. Third, it designs and validates a novel distributed relative localization approach, which harnesses the distributed computing capability of the team to minimize communication requirements, achieve consistent estimation and enable efficient data correspondence within the network. The complete relative localization-based system is validated through multiple indoor experiments and numerical simulations. The relative localization-based navigation concept, with the sensing, filtering and distributed computing methods introduced in this thesis, complements the system limitations of a ground and micro aerial vehicle team and also targets hostile environmental conditions; the work thus constitutes an essential step towards realizing autonomous navigation of heterogeneous teams in real-world applications.
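The inter-robot relative measurement idea can be sketched in its simplest planar form: converting a range-bearing reading to a Cartesian offset and averaging several noisy readings. The averaging is a crude stand-in for the thesis's nonlinear filtering, and the sensor model and noise levels are invented:

```python
import numpy as np

# Sketch of planar relative localization: a ground robot estimates a
# micro aerial vehicle's relative position from range and bearing
# readings.  Sensor model and noise levels are illustrative.

def relative_position(range_m, bearing_rad):
    """Convert a polar inter-robot measurement to a Cartesian offset."""
    return np.array([range_m * np.cos(bearing_rad),
                     range_m * np.sin(bearing_rad)])

# Fuse several noisy readings of a static offset by simple averaging
# (a real system would use a recursive nonlinear filter instead).
rng = np.random.default_rng(0)
true_offset = np.array([4.0, 3.0])
readings = []
for _ in range(200):
    r = np.hypot(*true_offset) + rng.normal(0, 0.05)   # noisy range (m)
    b = np.arctan2(true_offset[1], true_offset[0]) + rng.normal(0, 0.01)
    readings.append(relative_position(r, b))
estimate = np.mean(readings, axis=0)
print(estimate)   # close to the true offset [4, 3]
```

In the thesis the offset is not static and the estimate must be shared consistently across the team, which is what the distributed filtering contributions address.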

Relevance:

80.00%

Publisher:

Abstract:

Quantitative imaging in oncology aims at developing imaging biomarkers for diagnosis and for the prediction of cancer aggressiveness and therapy response before any morphological change becomes visible. This thesis exploits Computed Tomography perfusion (CTp) and multiparametric Magnetic Resonance Imaging (mpMRI) to investigate diverse cancer features in different organs. I developed a voxel-based image analysis methodology in CTp and extended its use to mpMRI, performing precise and accurate analyses at the single-voxel level; this is expected to improve the reproducibility of measurements, the comprehension of cancer mechanisms and their clinical interpretability. CTp has not yet entered clinical routine, despite its usefulness in monitoring cancer angiogenesis, because different perfusion computing methods yield non-reproducible results. Machine learning applications in mpMRI, useful for detecting imaging features representative of cancer heterogeneity, are likewise mostly limited to clinical research: the variability and difficult interpretability of their results leave clinicians unconfident in clinical applications. In hepatic CTp, I investigated whether, and under what conditions, two widely adopted perfusion methods, Maximum Slope (MS) and Deconvolution (DV), could yield reproducible parameters. To this end, I developed signal processing methods to model the first-pass kinetics and remove any numerical cause hampering reproducibility. In mpMRI, I proposed a new approach to extract local first-order features, aiming at preserving the spatial reference and making their interpretation easier. In CTp, I found the cause of the MS and DV non-reproducibility: the two methods represent two different states of the system. Transport delays invalidate the MS assumptions and, by correcting the MS formulation, I obtained voxel-based equivalence of the two methods. In mpMRI, the developed predictive models allowed (i) detecting rectal cancers responding to neoadjuvant chemoradiation, which show, at pre-therapy, sparse coarse subregions with altered density, and (ii) predicting clinically significant prostate cancers stemming from the disproportion between high- and low-diffusivity gland components.
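The Maximum Slope method mentioned above estimates tissue blood flow as the peak slope of the tissue enhancement curve divided by the peak of the arterial input function. The sketch below uses synthetic gamma-variate curves purely to illustrate the computation; it ignores the transport-delay correction that the thesis identifies as essential:

```python
import numpy as np

# Maximum Slope (MS) perfusion sketch on synthetic CT enhancement curves.
# The gamma-variate shapes and amplitudes are invented for illustration.

t = np.linspace(0, 40, 401)                             # seconds
aif    = 300 * (t / 8) ** 3  * np.exp(3 * (1 - t / 8))   # arterial curve (HU)
tissue =  30 * (t / 12) ** 3 * np.exp(3 * (1 - t / 12))  # tissue curve (HU)

max_slope = np.max(np.gradient(tissue, t))   # peak enhancement rate, HU/s
flow = max_slope / np.max(aif)               # MS estimate, 1/s
flow_ml = flow * 6000                        # ml / min / 100 ml of tissue
print(round(flow_ml, 1))
```

The MS assumption is that the peak tissue slope occurs before any contrast leaves the voxel; when transport delays break that assumption, MS and Deconvolution disagree, which is the discrepancy the thesis resolves.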

Relevance:

80.00%

Publisher:

Abstract:

In today's business landscape, the ability of a company or service provider to steer its innovation programmatically, so as to remain competitive on the market, is of fundamental importance. In many cases this means investing a considerable sum of money in projects that will improve essential aspects of the product or service and have a significant impact on the company's digital transformation. The study proposed here concerns two approaches that are typically in antithesis precisely because they rest on two different types of data, Big Data and Thick Data: respectively, Data Science and Design Thinking. In the following chapters, after the Design Thinking and Data Science approaches are defined, the concept of blending is introduced, together with the issues surrounding the intersection of the two innovation methods. To highlight the different aspects of the topic, cases of companies that have integrated the two approaches into their innovation processes, obtaining important results, are also reported. In particular, the author's research work on the examination, classification and analysis of the existing literature at the intersection of data-driven and design-thinking-driven innovation is presented. Finally, a business case is reported, conducted at the hospital and healthcare organization of Parma, in which, faced with a problem concerning the relationship between hospital clinicians and community clinicians, an innovative system was designed using Design Thinking. In addition, a critical "what-if" analysis is developed in order to elaborate a possible scenario in which methods or techniques from the world of Data Science are integrated and applied to the case study in question.