913 results for 0801 Artificial Intelligence and Image Processing
Abstract:
In security and surveillance, there is an increasing need to process image data efficiently and effectively, either at source or in a large data network. Whilst the Field-Programmable Gate Array (FPGA) has been seen as a key technology for enabling this, its design process has been viewed as problematic in terms of the time and effort needed for implementation and verification. The work here proposes a different approach: using optimized FPGA-based soft-core processors that allow the user to exploit task- and data-level parallelism, achieving the quality of dedicated FPGA implementations whilst reducing design time. The paper also reports some preliminary progress on the design flow used to program the structure. An implementation of a Histogram of Gradients algorithm is also reported, showing that a performance of 328 fps can be achieved with this design approach, whilst avoiding the long design, verification and debugging steps associated with conventional FPGA implementations.
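As a rough illustration of the algorithm named above, the following sketch computes the per-cell gradient-orientation histograms at the core of Histogram of Gradients descriptors. It is a plain NumPy reference under assumed defaults (8-pixel cells, 9 bins), not the paper's FPGA soft-core implementation.

```python
# Illustrative sketch (not the paper's implementation): per-cell
# gradient-orientation histograms of a Histogram of Gradients descriptor.
import numpy as np

def hog_cell_histograms(image, cell=8, bins=9):
    """Accumulate gradient-orientation histograms over square cells."""
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)                    # finite-difference gradients
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned orientation
    bin_idx = np.minimum((ang / (180 / bins)).astype(int), bins - 1)
    ch, cw = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            hist[i, j] = np.bincount(bin_idx[sl].ravel(),
                                     weights=mag[sl].ravel(),
                                     minlength=bins)
    return hist

img = np.random.rand(64, 64)        # toy input frame
h = hog_cell_histograms(img)        # -> shape (8, 8, 9)
```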
Abstract:
Coupled map lattices (CML) can describe many relaxation and optimization algorithms currently used in image processing. We recently introduced the "plastic-CML" as a paradigm to extract (segment) objects in an image. Here, the image is treated as a set of forces applied to a metal sheet, which is allowed to undergo plastic deformation parallel to the applied forces. In this paper we present an analysis of our plastic-CML in one and two dimensions, deriving the nature and stability of its stationary solutions. We also detail how to use the CML in image processing and how to set the system parameters, and present examples of it at work. We conclude that the plastic-CML is able to segment images with large amounts of noise and a large dynamic range of pixel values, and is suitable for a very large scale integration (VLSI) implementation.
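For readers unfamiliar with the CML formalism, the sketch below implements a generic diffusively coupled logistic map lattice in one dimension. It illustrates the synchronous local-map-plus-coupling update that CMLs share, not the plastic-CML deformation rule itself; the local map and coupling strength are arbitrary choices.

```python
# A generic diffusively coupled map lattice in one dimension -- an
# illustration of the CML formalism, not the paper's plastic-CML rule.
import numpy as np

def cml_step(x, eps=0.3, r=3.9):
    """One synchronous update: local logistic map plus nearest-neighbour coupling."""
    f = r * x * (1.0 - x)                        # local map at every site
    left, right = np.roll(f, 1), np.roll(f, -1)  # periodic neighbours
    return (1 - eps) * f + (eps / 2) * (left + right)

x = np.random.rand(256)                          # random initial lattice state
for _ in range(100):
    x = cml_step(x)
```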
Abstract:
This paper analyzes the inner relations between classical sub-scheme probability and statistical probability, subjective probability and objective probability, prior probability and posterior probability, and transition probability and probability of utility. It further analyzes, from the perspective of mathematics, the goal, the method, and the practical economic purpose represented by these various probabilities, so as to deepen understanding of their connotations and their relation to economic decision making, thereby paving the way for scientific prediction and decision making.
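As a concrete instance of one pair discussed above, Bayes' rule is the standard link between prior and posterior probability (a textbook identity, not a formula taken from the paper):

```latex
% Bayes' rule: posterior from prior, likelihood, and evidence.
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)},
\qquad
P(D) = \sum_i P(D \mid H_i)\, P(H_i)
```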
Abstract:
Dyscalculia is a brain-based condition that makes it hard to make sense of numbers and mathematical concepts. Some adolescents with dyscalculia cannot grasp basic number concepts. They work hard to learn and memorize basic number facts. They may know what to do in mathematics classes but not understand why they are doing it; in other words, they miss the logic behind it. However, the condition can be worked on in order to decrease its degree of severity. For example, disMAT, an app developed for Android, may help children apply mathematical concepts without much effort, making it a promising tool for dyscalculia treatment. Thus, this work focuses on the development of an Intelligent System to estimate children's evidence of dyscalculia, based on data obtained on-the-fly with disMAT. The computational framework is built on top of a Logic Programming framework for Knowledge Representation and Reasoning, complemented with a Case-Based problem-solving approach to computing, which allows for the handling of incomplete, unknown, or even contradictory information.
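A minimal sketch of the case-based retrieval step described above, assuming hypothetical features and a toy case base; the authors' Logic Programming framework is not reproduced here.

```python
# Case-based reasoning sketch: retrieve the most similar past cases and
# reuse their labels. Feature names and data are hypothetical placeholders.
import numpy as np

case_base = np.array([   # [number_sense, fact_recall, procedure_use] in [0, 1]
    [0.2, 0.3, 0.5],
    [0.8, 0.7, 0.9],
    [0.3, 0.2, 0.4],
])
labels = np.array([1, 0, 1])  # 1 = evidence of dyscalculia, 0 = none

def estimate(new_case, k=2):
    """k-nearest-neighbour retrieval over the case base."""
    dists = np.linalg.norm(case_base - new_case, axis=1)
    nearest = np.argsort(dists)[:k]
    return labels[nearest].mean()   # fraction of retrieved cases with evidence

print(estimate(np.array([0.25, 0.3, 0.45])))  # -> 1.0 for this toy case base
```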
Abstract:
Dyscalculia is usually perceived as a specific learning difficulty for mathematics or, more appropriately, arithmetic. Definitions and diagnoses of dyscalculia are in their infancy and sometimes contradictory. Mathematical learning difficulties, however, are certainly not in their infancy: they are very prevalent and often devastating in their impact. Co-occurrence of learning disorders appears to be the rule rather than the exception, and is generally assumed to be a consequence of risk factors that are shared between disorders, for example working memory. It should not be assumed, however, that all dyslexics have problems with mathematics, although the percentage may be very high, or that all dyscalculics have problems with reading and writing. Because mathematics is very developmental, any insecurity or uncertainty in early topics will impact on later topics, hence the need to take intervention back to basics. Dyscalculia can, however, be worked on in order to decrease its degree of severity. For example, disMAT, an app developed for Android, may help children apply mathematical concepts without much effort, making it a promising tool for dyscalculia treatment. Thus, this work focuses on the development of a Decision Support System to estimate children's evidence of dyscalculia, based on data obtained on-the-fly with disMAT. The computational framework is built on top of a Logic Programming approach to Knowledge Representation and Reasoning, grounded on a Case-Based approach to computing, which allows for the handling of incomplete, unknown, or even self-contradictory information.
Abstract:
Hematological cancers are a heterogeneous family of diseases that can be divided into leukemias, lymphomas, and myelomas, often called "liquid tumors". Since they cannot be surgically removed, chemotherapy represents the mainstay of their treatment. However, it still faces several challenges, such as drug resistance and low response rates, and the need for new anticancer agents is compelling. The drug discovery process is long, costly, and prone to high failure rates. With the rapid expansion of biological and chemical "big data", computational techniques such as machine learning tools have been increasingly employed to speed up and economize the whole process. Machine learning algorithms can create complex models that determine the biological activity of compounds against several targets based on their chemical properties. These models are called multi-target Quantitative Structure-Activity Relationship (mt-QSAR) models and can be used to virtually screen small and large chemical libraries for the identification of new molecules with anticancer activity. The aim of my Ph.D. project was to employ machine learning techniques to build an mt-QSAR classification model for the prediction of cytotoxic drugs simultaneously active against 43 hematological cancer cell lines. For this purpose, I first constructed a large and diversified dataset of molecules extracted from the ChEMBL database. I then compared the performance of different ML classification algorithms, identifying Random Forest as the one returning the best predictions. Finally, I used different approaches to maximize the performance of the model, which achieved an accuracy of 88% by correctly classifying 93% of inactive molecules and 72% of active molecules in a validation set. The model was further applied to the virtual screening of a small dataset of molecules tested in our laboratory, where it showed 100% accuracy in correctly classifying all molecules, a result confirmed by our previous in vitro experiments.
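The modelling step described above can be sketched as follows with scikit-learn, using random placeholder descriptors rather than the ChEMBL data; it shows the train-and-validate pattern, not the tuned model from the thesis.

```python
# Minimal sketch: a Random Forest classifier predicting active/inactive
# compounds from molecular descriptors. Data are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))          # stand-in molecular descriptors
y = rng.integers(0, 2, size=1000)        # 1 = active, 0 = inactive

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_tr, y_tr)
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
```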
Abstract:
The rapid progression of biomedical research, coupled with the explosion of scientific literature, has generated an exigent need for efficient and reliable systems of knowledge extraction. This dissertation contends with this challenge through a concentrated investigation of digital health and Artificial Intelligence, specifically the potential of Machine Learning and Natural Language Processing (NLP) to expedite systematic literature reviews and refine the knowledge extraction process. The surge of COVID-19 complicated the efforts of scientists, policymakers, and medical professionals to identify pertinent articles and assess their scientific validity. This thesis presents a substantial solution in the form of the COKE ("COVID-19 Knowledge Extraction framework for next-generation discovery science") Project, an initiative that interlaces machine reading with the rigorous protocols of Evidence-Based Medicine to streamline knowledge extraction. Within the framework of the COKE Project, this thesis aims to underscore the capacity of machine reading to create knowledge graphs from scientific texts. The project is notable for its use of NLP techniques such as a BERT + bi-LSTM language model, a combination employed to detect and categorize elements within medical abstracts and thereby enhance the systematic literature review process. The COKE Project's outcomes show that NLP, when used in a judiciously structured manner, can significantly reduce the time and effort required to produce medical guidelines. These findings are particularly salient during times of medical emergency, like the COVID-19 pandemic, when quick and accurate research results are critical.
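A minimal sketch of a BERT + bi-LSTM token classifier of the kind described above, in PyTorch with the Hugging Face transformers library; the checkpoint name and label count are assumptions, not the COKE Project's configuration.

```python
# BERT encoder feeding a bidirectional LSTM and a per-token classifier --
# an illustrative architecture, not the COKE Project's trained model.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBiLSTMTagger(nn.Module):
    def __init__(self, num_labels, bert_name="bert-base-uncased", hidden=256):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        emb = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state
        out, _ = self.lstm(emb)          # contextual re-encoding of BERT states
        return self.classifier(out)      # per-token label logits

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["Remdesivir reduced recovery time."], return_tensors="pt")
model = BertBiLSTMTagger(num_labels=5)   # e.g. PICO-style element tags (assumed)
logits = model(batch["input_ids"], batch["attention_mask"])
```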
Abstract:
Radio Simultaneous Localization and Mapping (SLAM) consists of simultaneously tracking a target and estimating the surrounding environment, in order to build a map and estimate the target's movements within it. It is an increasingly exploited technique in automotive applications, to improve the localization of obstacles and the target's movement relative to them; in emergency situations, for example when an environment with limited visibility must be explored with a drone or a robot; and in personal radar applications, thanks to its versatility and low cost. To date, such systems have been based on light detection and ranging (lidar) or visual cameras: high-accuracy but expensive approaches that are limited to specific environments and weather conditions. Radar-based systems, by contrast, operate in exactly the same way in smoke, fog, or simple darkness. In this thesis, the Fourier-Mellin algorithm is analyzed and implemented to verify its applicability to Radio SLAM, in which radar frames can be treated as images and the radar motion between consecutive frames can be recovered through image registration. Furthermore, a simplified version of the algorithm is proposed, to solve the problems the Fourier-Mellin algorithm encounters when working with real radar images and to improve performance. The INRAS RBK2, a MIMO 2x16 mmWave radar, is used for experimental acquisitions, consisting of multiple tests performed in Lab-E of the Cesena Campus, University of Bologna. The performance of Fourier-Mellin and of its simplified version is also compared with that of MatchScan, a classic algorithm for SLAM systems.
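The translation-recovery core of the Fourier-Mellin approach is phase correlation between two frames, sketched below; the full algorithm additionally resamples the magnitude spectrum on a log-polar grid to recover rotation and scale. This is an illustrative reference, not the thesis implementation.

```python
# Phase correlation: the translation-recovery step inside Fourier-Mellin
# registration. Illustrative sketch, not the thesis code.
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer-pixel cyclic shift between two equally sized frames."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12          # normalise: keep phase only
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map FFT peak indices into signed shifts (wrap-around correction)
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

# toy check: shift a random "frame" and recover the displacement
frame = np.random.rand(128, 128)
shifted = np.roll(frame, (5, -9), axis=(0, 1))
print(phase_correlation(shifted, frame))    # -> (5, -9)
```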
Abstract:
Solar radiation, especially ultraviolet A (UVA) and ultraviolet B (UVB), can cause damage to the human body, and exposure to the radiation may vary according to the geographical location, time of year and other factors. The effects of UVA and UVB radiation on organisms range from erythema formation, through tanning and reduced synthesis of macromolecules such as collagen and elastin, to carcinogenic DNA mutations. Some studies suggest that, in addition to the radiation emitted by the sun, artificial sources of radiation, such as commercial lamps, can also generate small amounts of UVA and UVB radiation. Depending on the source intensity and on the distance from the source, this radiation can be harmful to photosensitive individuals. In healthy subjects, the evidence on the danger of this radiation is still far from conclusive.
Abstract:
This study addressed the use of conventional and vegetable-origin polyurethane foams to extract the dye C. I. Acid Orange 61. The quantitative determination of the residual dye was carried out with a UV/Vis absorption spectrophotometer. The extraction of the dye was found to depend on various factors, such as the pH of the solution, the foam cell structure, the contact time, and dye-foam interactions. After 45 days, the conventional foam gave better results than the vegetable foam. Despite presenting a lower extraction percentage, the vegetable foam is advantageous as it is considered a polymer with biodegradable characteristics.
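Quantification of residual dye by UV/Vis spectrophotometry conventionally rests on the Beer-Lambert law; assuming that standard relation (the abstract does not state the calibration procedure), the extraction percentage follows as:

```latex
% Beer-Lambert law: absorbance A versus concentration c, molar
% absorptivity \varepsilon, and optical path length \ell.
A = \varepsilon \, \ell \, c
\quad\Longrightarrow\quad
\%\,\text{extraction} = 100 \times \frac{c_0 - c_{\text{residual}}}{c_0}
```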
Abstract:
The present contribution explores the impact of the QUALIS metric system for academic evaluation, implemented by CAPES (Coordination for the Improvement of Higher Education Personnel), upon Brazilian zoological research. The QUALIS system is based on the grouping and ranking of scientific journals according to their Impact Factor (IF). We examined two main points implied by this system, namely: 1) its reliability as a guideline for authors; and 2) whether Zoology possesses the same publication profile as Botany and Oceanography, the three fields of knowledge being grouped by CAPES under the subarea "BOZ" for purposes of evaluation. Additionally, we tested CAPES' recent suggestion that the area of Ecology would represent a fourth field of research compatible with the former three. Our results indicate that this system of classification is inappropriate as a guideline for publication improvement, with approximately one third of the journals changing strata between years. We also demonstrate that the citation profile of Zoology is distinct from those of Botany and Oceanography. Finally, we show that Ecology has an IF significantly different from those of Botany, Oceanography, and Zoology, and that grouping these fields together would be particularly detrimental to Zoology. We conclude that the use of a single parameter of analysis for the stratification of journals, i.e., the Impact Factor calculated for a comparatively small number of journals, fails to evaluate with accuracy the pattern of publication present in Zoology, Botany, and Oceanography. While such a simplified procedure might appeal to our sense of objectivity, it dismisses any real attempt to evaluate with clarity the merit embedded in at least three very distinct aspects of scientific practice: productivity, quality, and specificity.
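For reference, the two-year Journal Impact Factor on which the QUALIS strata are based is defined as:

```latex
% Two-year Journal Impact Factor for year y.
\mathrm{IF}_{y} =
  \frac{\text{citations received in year } y
        \text{ to items published in years } y-1 \text{ and } y-2}
       {\text{number of citable items published in years } y-1 \text{ and } y-2}
```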
Abstract:
This paper presents a rational approach to the design of a catamaran's hydrofoil, applied within a modern context of multidisciplinary optimization. The approach includes response surfaces represented by neural networks and a distributed programming environment that increases optimization speed. A rational approach to the problem simplifies the complex optimization model; combined with the distributed dynamic training used for the response surfaces, it increases the efficiency of the process. The results achieved with this approach justify this publication.
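A minimal sketch of the surrogate-based pattern described above: a neural network is fitted as a response surface to an expensive objective and then searched cheaply. The objective and sampling here are stand-ins, not the paper's hydrofoil hydrodynamics model or its distributed training environment.

```python
# Response-surface (surrogate) optimisation sketch: fit a neural network
# to sampled evaluations of an expensive objective, then optimise the
# cheap surrogate. The objective below is a placeholder, not a CFD model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

def expensive_objective(x):                 # placeholder for a CFD evaluation
    return (x[0] - 0.3) ** 2 + (x[1] + 0.5) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))       # sampled design points
y = np.array([expensive_objective(x) for x in X])

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X, y)

# Optimise over the cheap surrogate instead of the expensive model.
res = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0],
               x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
print("surrogate optimum near:", res.x)
```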