865 results for NETWORK-BASED METHODS
Abstract:
Obesity is a multifactorial trait and an independent risk factor for cardiovascular disease (CVD). The aim of the current work is to study the complex etiology underlying obesity and to identify genetic variations and/or nutrition-related factors that contribute to its variability. To this end, a set of more than 2300 white subjects who participated in a nutrigenetics study was used. For each subject, a total of 63 factors describing genetic variants related to CVD (24 in total), gender, and nutrition (38 in total), e.g. average daily intake of calories and cholesterol, were measured. Each subject was categorized according to body mass index (BMI) as normal (BMI ≤ 25) or overweight (BMI > 25). Two artificial neural network (ANN) based methods were designed and applied to the analysis of the available data: i) a multi-layer feed-forward ANN combined with a parameter decreasing method (PDM-ANN), and ii) a multi-layer feed-forward ANN trained by a hybrid method (GA-ANN) that combines genetic algorithms with the popular back-propagation training algorithm.
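As an illustration of the kind of model described above, the following is a minimal sketch of a multi-layer feed-forward classifier trained with back-propagation; it uses scikit-learn and synthetic stand-ins for the 63 genetic and nutritional factors, not the thesis's PDM-ANN or GA-ANN implementations or its data.

```python
# Minimal sketch (not the thesis's PDM-ANN/GA-ANN code): a multi-layer
# feed-forward classifier on synthetic stand-ins for the 63 genetic and
# nutritional factors, predicting the BMI class (normal vs. overweight).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects, n_factors = 2300, 63               # sizes taken from the abstract
X = rng.normal(size=(n_subjects, n_factors))   # hypothetical, standardized factors
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_subjects) > 0).astype(int)  # toy label: 1 = overweight

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
ann.fit(X_tr, y_tr)                            # back-propagation training
print("held-out accuracy:", ann.score(X_te, y_te))
```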
Abstract:
Recent research trends in computer-aided drug design have shown an increasing interest in advanced approaches able to deal with large amounts of data. This demand arose from the awareness of the complexity of biological systems and from the availability of data provided by high-throughput technologies. As a consequence, drug research has embraced this paradigm shift by exploiting approaches such as those based on networks. Indeed, the process of drug discovery can benefit from the implementation of network-based methods at different steps, from target identification to drug repurposing. From this broad range of opportunities, this thesis focuses on three main topics: (i) chemical space networks (CSNs), which are designed to represent and characterize bioactive compound data sets; (ii) drug-target interaction (DTI) prediction through a network-based algorithm that predicts missing links; and (iii) COVID-19 drug research, explored through COVIDrugNet, a network-based tool for COVID-19 related drugs. The main highlight emerging from this thesis is that network-based approaches can be considered useful methodologies for tackling different issues in drug research. In detail, CSNs are valuable coordinate-free, graphically accessible representations of the structure-activity relationships of bioactive compound data sets, especially for medium-to-large libraries of molecules. DTI prediction through the random walk with restart algorithm on heterogeneous networks can be a helpful method for target identification. COVIDrugNet is an example of the usefulness of network-based approaches for studying drugs related to a specific condition, i.e., COVID-19, and the same 'systems-based' approaches can be used for other diseases. To conclude, network-based tools are proving to be suitable for many applications in drug research and provide the opportunity to model and analyze diverse drug-related data sets, even large ones, while also integrating multi-domain information.
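The DTI-prediction step relies on the random walk with restart algorithm; a minimal sketch of that algorithm on a toy drug-target network is shown below (node names and edges are illustrative placeholders, not COVIDrugNet data):

```python
# Minimal sketch of random walk with restart (RWR) used for link prediction
# on a toy drug-target network; node names and edges are illustrative only.
import numpy as np

nodes = ["drugA", "drugB", "targetX", "targetY", "targetZ"]
edges = [("drugA", "targetX"), ("drugA", "targetY"),
         ("drugB", "targetY"), ("targetY", "targetZ")]

idx = {n: i for i, n in enumerate(nodes)}
A = np.zeros((len(nodes), len(nodes)))
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0

W = A / A.sum(axis=0, keepdims=True)      # column-normalized transition matrix
restart, p0 = 0.3, np.zeros(len(nodes))
p0[idx["drugB"]] = 1.0                    # seed: the drug whose targets we rank
p = p0.copy()
for _ in range(100):                      # iterate p <- (1 - r) W p + r p0 until it stabilizes
    p = (1 - restart) * W @ p + restart * p0

ranking = sorted((score, n) for n, score in zip(nodes, p) if n.startswith("target"))
print("candidate targets for drugB (high to low):", [n for _, n in reversed(ranking)])
```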
Abstract:
PURPOSE OF REVIEW: Invasive candidiasis is a severe infectious complication occurring mostly in onco-hematologic and surgical patients. Its conventional diagnosis is insensitive and often late, leading to delayed treatment and high mortality. The purpose of this article is to review recent contributions to the nonconventional diagnostic approaches to invasive candidiasis, both for the detection of the episode and for the characterization of the etiologic agent. RECENT FINDINGS: Antigen-based tests to detect invasive candidiasis comprise a specific test, mannan, as well as a nonspecific test, beta-D-glucan. Both have a moderate sensitivity and a high specificity, and cannot be recommended alone as a negative screening tool or a positive syndrome-driven diagnostic tool. Molecular-based tests have still not reached the stage of rapid, easy-to-use, standardized tests that ideally complement blood culture at the time of blood sampling. New tests (fluorescence in-situ hybridization or mass spectrometry) significantly reduce the delay in identifying Candida at the species level in positive blood cultures, and should have a positive impact on earlier appropriate antifungal therapy and possibly on outcome. SUMMARY: Both antigen-based and molecular tests appear to be promising new tools to complement and accelerate the conventional diagnosis of invasive candidiasis, with an expected significant impact on earlier and more focused treatment and on prognosis.
Abstract:
Recently, kernel-based machine learning methods have gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, remote sensing, etc. The paper describes the use of kernel methods for processing large datasets from environmental monitoring networks. Several typical problems of the environmental sciences and their solutions provided by kernel-based methods are considered: classification of categorical data (soil type classification), mapping of continuous environmental and pollution information (soil pollution by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic emergency hot-spot detection and monitoring network optimization, are discussed as well.
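As a hedged illustration of a kernel-based classifier of the type used for categorical environmental data, the sketch below trains an RBF support vector machine on synthetic coordinates and labels standing in for soil-type observations (not the paper's monitoring data):

```python
# Minimal sketch of a kernel-based classifier (RBF support vector machine) of
# the kind applied to categorical environmental data such as soil types.
# Coordinates and labels below are synthetic placeholders, not monitoring data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(500, 2))          # hypothetical sampling locations (x, y)
soil_type = (np.sin(coords[:, 0] / 15) + np.cos(coords[:, 1] / 20) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(coords, soil_type, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("spatial classification accuracy:", clf.score(X_te, y_te))
```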
Abstract:
Induction motors are widely used in several industry sectors. The selection of an induction motor is still often inaccurate because, in most cases, the load behavior at its shaft is completely unknown. This article proposes the use of artificial neural networks for torque estimation in order to select induction motors better than conventional methods, which rely on classical identification techniques and mechanical load modeling. Since the proposed approach estimates the torque behavior from the transient to the steady state, one of its main contributions is its potential to also be implemented in control schemes for real-time applications. Simulation results are also presented to validate the proposed approach.
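A minimal sketch of the underlying idea, assuming a feed-forward network that maps measurable motor signals to shaft torque, is given below; the signals are synthetic placeholders and the model is not the article's own implementation:

```python
# Minimal sketch of the idea: a feed-forward network mapping measurable motor
# quantities to shaft torque. Signals below are synthetic placeholders; the
# article's own model, inputs, and training data are not reproduced here.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 2000)                      # time axis covering transient + steady state
speed = 1.0 - np.exp(-3 * t)                     # toy speed ramp-up
current = 2.0 * np.exp(-3 * t) + 1.0             # toy stator-current envelope
torque = 1.5 * current * speed                   # toy "true" torque to be learned

X = np.column_stack([speed, current])
model = MLPRegressor(hidden_layer_sizes=(15, 15), max_iter=3000, random_state=0)
model.fit(X, torque)
print("torque estimate at t = 1 s:", model.predict(X[[1000]])[0])
```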
Abstract:
This work presents a methodology for analyzing the transient stability of electric energy systems using artificial neural networks based on the fuzzy ARTMAP architecture. This architecture exploits the similarity between computational concepts from fuzzy set theory and the ART (Adaptive Resonance Theory) neural network. ART architectures exhibit plasticity and stability, which are essential qualities for carrying out the training and executing the analysis. Training is therefore very fast compared with the conventional backpropagation formulation. Consequently, the analysis becomes more competitive compared with the principal methods found in the specialized literature. Results are presented for a system composed of 45 buses, 72 transmission lines, and 10 synchronous machines. © 2003 IEEE.
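For orientation, the sketch below implements a simplified fuzzy ART category-formation loop (the unsupervised core on which fuzzy ARTMAP builds), with placeholder parameters and data rather than the paper's transient-stability setup:

```python
# Simplified, illustrative fuzzy ART category formation (the unsupervised core
# on which fuzzy ARTMAP builds); parameters and data are placeholders and this
# is not the paper's transient-stability implementation.
import numpy as np

def fuzzy_art(data, rho=0.75, alpha=0.001, beta=1.0):
    """data: samples scaled to [0, 1]; returns a category index per sample."""
    I = np.hstack([data, 1.0 - data])                 # complement coding, so |x| is constant
    weights, labels = [], []
    for x in I:
        winner = None
        if weights:
            W = np.array(weights)
            match = np.minimum(x, W).sum(axis=1)      # |x AND w_j|
            choice = match / (alpha + W.sum(axis=1))  # category choice function
            for j in np.argsort(choice)[::-1]:        # search categories by choice value
                if match[j] / x.sum() >= rho:         # vigilance test
                    winner = int(j)
                    break
        if winner is None:                            # no resonance: commit a new category
            weights.append(x.copy())
            labels.append(len(weights) - 1)
        else:                                         # resonance: update the winning prototype
            weights[winner] = beta * np.minimum(x, weights[winner]) + (1 - beta) * weights[winner]
            labels.append(winner)
    return labels

samples = np.array([[0.10, 0.10], [0.15, 0.12], [0.10, 0.20],
                    [0.90, 0.85], [0.80, 0.90], [0.88, 0.92]])
print(fuzzy_art(samples))                             # -> [0, 0, 0, 1, 1, 1]
```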
Abstract:
Traditional supervised data classification considers only the physical features (e.g., distance or similarity) of the input data. Here, this type of learning is called low level classification. The human (animal) brain, on the other hand, performs both low and high orders of learning and readily identifies patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also pattern formation is here referred to as high level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low level term can be implemented by any classification technique, while the high level term is realized by extracting features of the underlying network constructed from the input data. Thus, the former classifies test instances by their physical features or class topologies, while the latter measures the compliance of test instances with the pattern formation of the data. Our study shows that the proposed technique not only can realize classification according to pattern formation, but is also able to improve the performance of traditional classification techniques. Furthermore, as the complexity of the class configuration increases, for example through greater mixture among different classes, a larger weight on the high level term is required to obtain correct classification. This feature confirms that high level classification has a special importance in complex classification situations. Finally, we show how the proposed technique can be employed in a real-world application, where it is capable of identifying variations and distortions of handwritten digit images. As a result, it improves the overall pattern recognition rate.
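The sketch below illustrates the hybrid idea under simplifying assumptions: the low level term is a kNN posterior, the high level term measures how little the average clustering coefficient of each class's kNN graph changes when the test point is inserted, and the two are mixed with a weight lambda. It is a stand-in for the paper's formulation, with synthetic data.

```python
# Illustrative sketch of combining a low level classifier with a high level,
# network-based term (change in average clustering coefficient when the test
# point joins each class graph). Simplified stand-in, not the paper's exact
# formulation; data and the lambda weight are placeholders.
import numpy as np
import networkx as nx
from sklearn.neighbors import KNeighborsClassifier

def knn_graph(points, k=3):
    g = nx.Graph()
    g.add_nodes_from(range(len(points)))
    for i, p in enumerate(points):
        dists = np.linalg.norm(points - p, axis=1)
        for j in np.argsort(dists)[1:k + 1]:          # skip the point itself
            g.add_edge(i, int(j))
    return g

def high_level_scores(x, class_points, k=3):
    """Compliance of x with each class's network pattern (higher = better)."""
    changes = []
    for pts in class_points:
        g = knn_graph(pts, k)
        before = nx.average_clustering(g)
        g.add_node("x")
        for j in np.argsort(np.linalg.norm(pts - x, axis=1))[:k]:
            g.add_edge("x", int(j))
        changes.append(abs(nx.average_clustering(g) - before))
    changes = np.array(changes)
    if changes.sum() == 0:                            # no measurable impact on any class
        return np.ones(len(changes)) / len(changes)
    inv = 1.0 - changes / changes.sum()               # small change -> high compliance
    return inv / inv.sum()

rng = np.random.default_rng(0)
c0 = rng.normal([0.0, 0.0], 0.5, (30, 2))
c1 = rng.normal([3.0, 3.0], 0.5, (30, 2))
X, y = np.vstack([c0, c1]), np.array([0] * 30 + [1] * 30)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
x_test, lam = np.array([2.5, 2.4]), 0.3
low = knn.predict_proba([x_test])[0]                  # low level term (physical features)
high = high_level_scores(x_test, [c0, c1])            # high level term (pattern formation)
combined = (1 - lam) * low + lam * high               # F = (1 - lambda) * L + lambda * H
print("low:", low, "high:", high, "-> predicted class:", int(np.argmax(combined)))
```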
Abstract:
Semisupervised learning is a machine learning approach that can employ both labeled and unlabeled samples in the training process. In this paper, we propose a semisupervised data classification model based on a combined random-preferential walk of particles in a network (graph) constructed from the input dataset. Particles of the same class cooperate among themselves, while particles of different classes compete with each other to propagate class labels through the whole network. A rigorous model definition is provided via a nonlinear stochastic dynamical system, and a mathematical analysis of its behavior is carried out. A numerical validation presented in this paper confirms the theoretical predictions. An interesting feature brought by the competitive-cooperative mechanism is that the proposed model can achieve good classification rates with a low order of computational complexity compared with other network-based semisupervised algorithms. Computer simulations conducted on synthetic and real-world datasets reveal the effectiveness of the model.
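As a point of reference (not the particle competition-cooperation model itself), the sketch below runs a simpler network-based semisupervised method, graph label spreading, on a toy dataset with one labeled sample per class:

```python
# Not the particle competition-cooperation model: a simpler network-based
# semisupervised baseline (graph label spreading) shown for orientation only,
# on synthetic data with one labeled sample per class.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.4, (40, 2)), rng.normal([3, 3], 0.4, (40, 2))])
y = np.full(80, -1)                # -1 marks unlabeled samples
y[0], y[40] = 0, 1                 # one labeled sample per class

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y)
print("labels propagated to unlabeled points:",
      model.transduction_[:5], model.transduction_[40:45])
```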
Abstract:
The genomic era brought about by recent advances in next-generation sequencing technology makes genome-wide scans of natural selection a reality. Currently, almost all statistical tests and analytical methods for identifying genes under selection are performed on an individual-gene basis. Although these methods have the power to identify genes subject to strong selection, they have limited power in discovering genes targeted by moderate or weak selection forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and rapid growth of gene network and protein-protein interaction databases accompanying the genomic era open the avenue of exploring whether this information can enhance the power of discovering genes under natural selection. The aim of the thesis is to explore and develop normal mixture model based methods that leverage gene network information to enhance the power of discovering the targets of natural selection. The results show that the developed statistical method, which combines the posterior log odds of the standard normal mixture model with the Guilt-By-Association score of the gene network in a naïve Bayes framework, has the power to discover genes under moderate or weak selection that bridge the genes under strong selection, and it helps our understanding of the biology underlying complex diseases and related natural selection phenotypes.
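A hedged sketch of the combination idea follows: a two-component normal mixture yields per-gene posterior log odds for a toy selection statistic, a simple guilt-by-association term is computed from network neighbors, and the two log-odds are added in naive Bayes fashion. Gene names, scores, and the network are hypothetical placeholders, not the thesis's data or exact model.

```python
# Illustrative sketch: mixture-model posterior log odds per gene plus a simple
# guilt-by-association (GBA) log-odds term from network neighbors. All genes,
# statistics, and edges below are hypothetical placeholders.
import numpy as np
import networkx as nx
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
genes = [f"g{i}" for i in range(100)]
stat = np.concatenate([rng.normal(0, 1, 90), rng.normal(3, 1, 10)])   # toy selection statistic

gmm = GaussianMixture(n_components=2, random_state=0).fit(stat.reshape(-1, 1))
sel = int(np.argmax(gmm.means_))                                       # component with larger mean
post = gmm.predict_proba(stat.reshape(-1, 1))[:, sel]                  # P(gene under selection)
log_odds = np.log(post + 1e-12) - np.log(1 - post + 1e-12)

net = nx.gnp_random_graph(100, 0.05, seed=0)                           # toy gene network
gba = np.array([post[list(net.neighbors(i))].mean() if net.degree(i) else 0.5
                for i in range(100)])                                  # neighbors' mean posterior
gba_log_odds = np.log(gba + 1e-12) - np.log(1 - gba + 1e-12)

combined = log_odds + gba_log_odds                                     # naive-Bayes-style addition
top = np.argsort(combined)[::-1][:5]
print("top candidate genes:", [genes[i] for i in top])
```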
Abstract:
Single photon emission computed tomography (SPECT) technetium-99m hexamethylpropyleneamine oxime images were analyzed by an optimal interpolative neural network (OINN) algorithm to determine whether the network could discriminate among clinically diagnosed groups of elderly normal, Alzheimer disease (AD), and vascular dementia (VD) subjects. After initial image preprocessing and registration, image features were obtained that were representative of the mean regional tissue uptake. These features were extracted from a given image by averaging the intensities over various regions defined by suitable masks. After training, the network classified independent trials of patients whose clinical diagnoses conformed to published criteria for probable AD or probable/possible VD. For the SPECT data used in the current tests, the OINN agreement was 80% and 86% for probable AD and probable/possible VD, respectively. These results suggest that artificial neural network methods offer potential for diagnosis from brain images and possibly for other areas of scientific research where complex patterns of data may have scientifically meaningful groupings that are not easily identifiable by the researcher.
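The mask-based feature extraction step described above can be sketched as follows, with a synthetic image and toy rectangular masks standing in for registered SPECT slices and anatomical regions:

```python
# Minimal sketch of the feature extraction step described above: mean intensity
# over region masks. The image and masks are synthetic placeholders, not SPECT data.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))                        # stand-in for a registered SPECT slice

masks = {
    "frontal":  np.zeros((64, 64), dtype=bool),
    "parietal": np.zeros((64, 64), dtype=bool),
}
masks["frontal"][:32, 16:48] = True                 # toy rectangular regions
masks["parietal"][32:, 16:48] = True

features = {name: image[m].mean() for name, m in masks.items()}
print(features)                                     # regional mean uptake, fed to the classifier
```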
Abstract:
The authors propose a new approach to discourse analysis based on metadata from the social networking behavior of learners immersed in a socially constructivist e-learning environment. It is shown that traditional data modeling techniques can be combined with social network analysis, an approach that promises to yield new insights into the largely uncharted domain of network-based discourse analysis. The chapter is intended as a non-technical introduction and is illustrated with real examples, visual representations, and empirical findings. Within the setting of a constructivist statistics course, the chapter provides an illustration of what network-based discourse analysis is about (mainly from a methodological point of view), how it is implemented in practice, and why it is relevant for researchers and educators.
Abstract:
This paper investigates a neural network-based probabilistic decision support system for assessing drivers' knowledge, with the objective of developing a renewal policy for driving licences. The probabilistic model correlates drivers' demographic data with their results in a simulated written driving exam (SWDE). The probabilistic decision support system classifies drivers into two groups: those passing and those failing a SWDE. Assessing drivers' knowledge within a probabilistic framework allows uncertainty information to be quantified and incorporated into the decision-making system. The results obtained in a Jordanian case study indicate that the performance of the probabilistic decision support system is more reliable than that of conventional deterministic decision support systems. Implications of the proposed probabilistic decision support system for the licence-renewal decision, and the possibility of including additional assessment methods, are discussed.
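A minimal sketch of a classifier that returns pass/fail probabilities from demographic inputs is shown below; the features, toy labeling rule, and data are hypothetical placeholders, not the Jordanian case-study data or the paper's network design:

```python
# Minimal sketch of a neural network classifier that outputs pass/fail
# probabilities from demographic inputs; features and data are hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# hypothetical demographic columns: age, years since licence issue, education level
X = np.column_stack([rng.integers(18, 80, 300),
                     rng.integers(0, 50, 300),
                     rng.integers(1, 5, 300)])
y = (X[:, 0] < 65).astype(int)                 # toy rule: 1 = passes the simulated exam

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0))
clf.fit(X, y)
proba = clf.predict_proba([[70, 45, 2]])[0]    # class probabilities for one driver
print(f"P(fail) = {proba[0]:.2f}, P(pass) = {proba[1]:.2f}")
```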
Abstract:
One of the key tasks of quality management is to identify the factors that are critical to value creation, to quantify their value, and to take preventive and corrective actions against their negative effects. Value creation largely takes place through processes, which consist of tasks to be performed; these require suitable staff, whose most important characteristic is the knowledge they possess. The task-resource-knowledge structure is therefore also a quality management concern. Network science, which deals with the analysis of complex systems, can provide tools for this, so it is worth examining its applicability in the quality management field. To systematize the possible applications, the authors categorized quality networks according to the types of nodes (vertices) and links (edges). Focusing on knowledge management, they then defined the multimodal knowledge network (composed of several node types), built from tasks, resources, knowledge items, and the connections among them. Using this network, knowledge items were assigned to categories and their value was determined from node degrees. In the knowledge-item network derived from the multimodal network, the meaning of cohesive subgroups was defined, and a formula was proposed for determining the risk of knowledge loss.
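As an illustration of the multimodal network idea, the sketch below builds a small task-resource-knowledge graph with networkx, values knowledge items by their degree, and derives the knowledge-item network; all node names are hypothetical examples, and the paper's risk-of-loss formula is not reproduced:

```python
# Illustrative sketch of the multimodal task-resource-knowledge network idea:
# node types, degree-based valuation of knowledge items, and the derived
# knowledge-item network. All node names below are hypothetical examples.
import networkx as nx

G = nx.Graph()
G.add_nodes_from(["task1", "task2"], kind="task")
G.add_nodes_from(["personA", "personB"], kind="resource")
G.add_nodes_from(["welding", "CAD", "auditing"], kind="knowledge")
G.add_edges_from([("task1", "welding"), ("task1", "personA"), ("personA", "welding"),
                  ("task2", "CAD"), ("task2", "auditing"), ("personB", "CAD")])

knowledge = [n for n, d in G.nodes(data=True) if d["kind"] == "knowledge"]
value = {k: G.degree(k) for k in knowledge}          # degree as a simple value proxy
print("knowledge-item value by degree:", value)

# Knowledge-item network: two items are linked if they share a task or resource.
K = nx.Graph()
K.add_nodes_from(knowledge)
for k1 in knowledge:
    for k2 in knowledge:
        if k1 < k2 and set(G.neighbors(k1)) & set(G.neighbors(k2)):
            K.add_edge(k1, k2)
print("knowledge-item network edges:", list(K.edges()))
```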
Abstract:
This dissertation introduces a new system for handwritten text recognition based on an improved neural network design. Most existing neural networks treat the mean square error function as the standard error function. The system proposed in this dissertation utilizes the mean quartic error function, whose third and fourth derivatives are non-zero. Consequently, many improvements to the training methods were achieved. The training results are carefully assessed before and after the update. To evaluate the performance of a training system, three essential factors are considered, listed from high to low priority: (1) the error rate on the testing set, (2) the processing time needed to recognize a segmented character, and (3) the total training time and, subsequently, the total testing time. It is observed that bounded training methods accelerate the training process, while semi-third order training methods, next-minimal training methods, and preprocessing operations reduce the error rate on the testing set. Empirical observations suggest that two combinations of training methods are needed for recognizing characters of different cases. Since character segmentation is required for word and sentence recognition, this dissertation also provides an effective rule-based segmentation method, which differs from conventional adaptive segmentation methods. Dictionary-based correction is used to correct mistakes resulting from the recognition and segmentation phases. The integration of the segmentation methods with the handwritten character recognition algorithm yielded an accuracy of 92% for lower case characters and 97% for upper case characters. In the testing phase, the database consisted of 20,000 handwritten characters, with 10,000 for each case. The recognition of 10,000 handwritten characters in the testing phase required 8.5 seconds of processing time.
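The error functions discussed above can be written down directly; the sketch below compares the mean square error with the mean quartic error and their gradients, illustrating (outside the dissertation's full training system) how the quartic loss penalizes large residuals more strongly:

```python
# Sketch of the error functions discussed above: mean square error (MSE) versus
# the mean quartic error (MQE), whose gradient grows cubically with the residual,
# so large errors are penalized more strongly during training.
import numpy as np

def mse(output, target):
    return np.mean((output - target) ** 2)

def mse_grad(output, target):
    return 2 * (output - target) / output.size

def mqe(output, target):
    return np.mean((output - target) ** 4)

def mqe_grad(output, target):
    return 4 * (output - target) ** 3 / output.size

out = np.array([0.9, 0.2, 0.4])
tgt = np.array([1.0, 0.0, 1.0])
print("MSE:", mse(out, tgt), "grad:", mse_grad(out, tgt))
print("MQE:", mqe(out, tgt), "grad:", mqe_grad(out, tgt))
```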
Abstract:
Recent years have seen massive growth in wearable technology; everything can be smart: phones, watches, glasses, shirts, etc. These technologies are prevalent in various fields, from wellness/sports/fitness to the healthcare domain. The spread of this phenomenon led the World Health Organization to define the term 'mHealth' as "medical and public health practice supported by mobile devices, such as mobile phones, patient monitoring devices, personal digital assistants, and other wireless devices". Furthermore, mHealth solutions are suitable for implementing real-time wearable biofeedback (BF) systems: sensors in the body area network connected to a processing unit (smartphone) and a feedback device (loudspeaker) measure human functions and return them to the user as a (bio)feedback signal. During the COVID-19 pandemic, this transformation of the healthcare system was dramatically accelerated by new clinical demands, including the need to prevent hospital surges and to assure continuity of clinical care services, allowing pervasive healthcare. Now more than ever, the integration of mHealth technologies will be the basis of this new era of clinical practice. In this scenario, the primary goal of this PhD thesis is to investigate new and innovative mHealth solutions for the assessment and rehabilitation of different neuromotor functions and diseases. For clinical assessment, there is a need to overcome the limitations of subjective clinical scales. By creating new pervasive and self-administrable mHealth solutions, this thesis investigates the possibility of employing innovative systems for objective clinical evaluation. For rehabilitation, we explored the clinical feasibility and effectiveness of mHealth systems. In particular, we developed innovative mHealth solutions with BF capability to allow tailored rehabilitation. The main goal of an mHealth system should be to improve a person's quality of life, increasing or maintaining their autonomy and independence. To this end, inclusive design principles may be crucial, alongside technical and technological ones, to improve the usability of mHealth systems.