998 results for Solution mining.
Abstract:
2,4,5-Triaryl-imidazoles are versatile compounds with applications in medicine, due to their biological activity, and in materials science, owing to their interesting optical properties. These properties can be tuned by careful selection of the substituents at positions 2, 4 and 5: replacement of the aryl group by a heterocyclic group results in larger π-conjugated systems with improved optical properties for application in nonlinear optics, OLEDs, DNA intercalators, and chemosensors. Moreover, it is expected that introducing more conjugation and rigidity into the resulting system will further improve its properties. The development of chromo/fluorescent probes capable of detecting ions with high sensitivity and selectivity in aqueous media is currently a topic of strong interest, and the design of heteroditopic receptors that contain two or more different binding sites for the simultaneous complexation of cationic and anionic guests is an emerging field of supramolecular chemistry. In this communication, we report the synthesis of new phenanthroimidazoles substituted at position 2 with arylthienyl or arylfuryl moieties bearing substituents of different electronic character, in order to tune the chromo/fluoro response in the presence of relevant anions and metal cations. Their photophysical properties and chemosensory ability were studied in acetonitrile and in mixtures of acetonitrile and water, and selective detection of cyanide was achieved in aqueous mixtures for some of the derivatives.
Abstract:
Hospitals nowadays collect vast amounts of data related to patient records. All these data hold valuable knowledge that can be used to improve hospital decision making. Data mining techniques aim precisely at the extraction of useful knowledge from raw data. This work describes the implementation of a medical data mining project based on the CRISP-DM methodology. Recent real-world data, from 2000 to 2013, were collected from a Portuguese hospital and relate to inpatient hospitalization. The goal was to predict generic hospital Length Of Stay based on indicators that are commonly available at the hospitalization process (e.g., gender, age, episode type, medical specialty). At the data preparation stage, the data were cleaned and variables were selected and transformed, leading to 14 inputs. Next, at the modeling stage, a regression approach was adopted, in which six learning methods were compared: Average Prediction, Multiple Regression, Decision Tree, Artificial Neural Network ensemble, Support Vector Machine and Random Forest. The best learning model was obtained by the Random Forest method, which presents a high coefficient of determination (0.81). This model was then explained using a sensitivity analysis procedure that revealed three influential input attributes: the hospital episode type, the physical service where the patient is hospitalized and the associated medical specialty. Such extracted knowledge confirms that the obtained predictive model is credible and of potential value for supporting the decisions of hospital managers.
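The modeling-stage workflow described above can be sketched in miniature. The code below is a hypothetical illustration, not the study's implementation: it compares an Average Prediction baseline against a simple one-input regression using the coefficient of determination (R²), the score the abstract reports. The data and the single input attribute (age) are invented.

```python
# Hypothetical sketch of the model-comparison step: an Average Prediction
# baseline versus a one-variable least-squares regression, both scored
# with the coefficient of determination (R^2). All data are invented.

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def fit_linear(x, y):
    """Ordinary least squares for a single input attribute."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

age = [30, 45, 60, 75, 90]          # illustrative input attribute
los = [2.0, 3.5, 5.0, 6.5, 8.0]     # illustrative Length Of Stay (days)

# Average Prediction baseline: always predict the mean LOS (R^2 = 0).
baseline = [sum(los) / len(los)] * len(los)

slope, intercept = fit_linear(age, los)
regression = [slope * a + intercept for a in age]

print(r2_score(los, baseline))    # 0.0 by construction
print(r2_score(los, regression))  # 1.0 here: the toy data are exactly linear
```

In the study itself, six learners were compared this way on 14 inputs, with Random Forest reaching R² = 0.81.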
Abstract:
This study reviews business intelligence applications for the banking industry. Searches were performed in relevant journals, resulting in 219 articles published between 2002 and 2013. To analyze such a large number of manuscripts, text mining techniques were used in pursuit of relevant terms in both the business intelligence and banking domains. Moreover, latent Dirichlet allocation modeling was used in order to group the articles into several relevant topics. The analysis was conducted using a dictionary of terms belonging to both the banking and business intelligence domains. This procedure allowed for the identification of relationships between terms and the topics grouping the articles, enabling hypotheses to emerge regarding research directions. To confirm such hypotheses, relevant articles were collected and scrutinized, allowing the text mining procedure to be validated. The results show that credit in banking is clearly the main application trend, particularly predicting risk and thus supporting credit approval or denial. There is also a relevant interest in bankruptcy and fraud prediction. Customer retention seems to be associated, although weakly, with targeting, justifying bank offers to reduce churn. In addition, a large number of articles focused more on business intelligence techniques and their applications, using the banking industry just for evaluation and thus not clearly claiming benefits for the banking business. By identifying these current research topics, this study also highlights opportunities for future research.
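The dictionary-driven term analysis can be sketched as follows. This is a minimal, assumed illustration: the tiny term dictionary below is invented, whereas the real study used curated banking and business intelligence vocabularies applied to 219 article texts.

```python
# A minimal sketch of dictionary-based term counting over an abstract,
# assuming a tiny invented dictionary mapping terms to their domain.
from collections import Counter
import re

DICTIONARY = {
    "credit": "banking", "risk": "banking", "fraud": "banking",
    "churn": "banking",
    "data mining": "business intelligence",
    "neural network": "business intelligence",
}

def term_counts(text):
    """Count occurrences of each dictionary term in a text (case-insensitive)."""
    text = text.lower()
    counts = Counter()
    for term in DICTIONARY:
        counts[term] = len(re.findall(re.escape(term), text))
    return counts

abstract = ("Data mining models predict credit risk; "
            "credit approval uses a neural network.")
counts = term_counts(abstract)
print(counts["credit"])       # 2
print(counts["data mining"])  # 1
```

Per-article counts like these form the document-term matrix that latent Dirichlet allocation then groups into topics.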
Abstract:
Earthworks tasks aim at levelling the ground surface at a target construction area and precede any kind of structural construction (e.g., road and railway construction). They comprise sequential tasks, such as excavation, transportation, spreading and compaction, and rely strongly on heavy mechanical equipment and repetitive processes. In this context, it is essential to optimize the usage of all available resources under two key criteria: the cost and duration of earthwork projects. In this paper, we present an integrated system that uses two artificial intelligence based techniques: data mining and evolutionary multi-objective optimization. The former is used to build data-driven models capable of providing realistic estimates of resource productivity, while the latter is used to optimize resource allocation considering the two main earthwork objectives (duration and cost). Experiments conducted using real-world data from a construction site have shown that the proposed system is competitive when compared with current manual earthwork design.
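The core of the multi-objective step is selecting allocations that trade off duration against cost. As a hedged sketch (the candidate allocations below are invented, and the real system uses an evolutionary algorithm rather than this brute-force filter), keeping only non-dominated solutions yields the Pareto front:

```python
# A minimal sketch of Pareto-front extraction for the two earthwork
# objectives (duration, cost), both minimized. Candidates are invented.

def pareto_front(solutions):
    """Return the solutions not dominated on both objectives."""
    front = []
    for s in solutions:
        dominated = any(
            o[0] <= s[0] and o[1] <= s[1] and o != s for o in solutions
        )
        if not dominated:
            front.append(s)
    return front

# (duration in days, cost in k-euro) for candidate equipment allocations
candidates = [(30, 120), (25, 150), (35, 100), (30, 160), (40, 95)]
front = pareto_front(candidates)
print(front)
# (30, 160) is dominated by (30, 120); the remaining four trade
# duration for cost, and a planner picks among them.
```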
Abstract:
We are living in the era of Big Data, a time characterized by the continuous creation of vast amounts of data, originating from different sources and in different formats. First with the rise of social networks and, more recently, with the advent of the Internet of Things (IoT), in which everyone and (eventually) everything is linked to the Internet, data with enormous potential for organizations are being continuously generated. In order to be more competitive, organizations want to access and explore all the richness present in those data. Indeed, Big Data is only as valuable as the insights organizations gather from it to make better decisions, which is the main goal of Business Intelligence. In this paper, we describe an experiment in which data obtained from a NoSQL data source (a database technology explicitly developed to deal with the specificities of Big Data) are used to feed a Business Intelligence solution.
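One recurring step when feeding a BI solution from a NoSQL source is flattening nested, schema-free documents into flat rows suitable for a relational reporting table. The sketch below is a hypothetical illustration with an invented document structure, not the paper's actual pipeline:

```python
# A hypothetical sketch: flatten nested JSON-like NoSQL documents into
# single-level rows for loading into a BI/data-warehouse table.

def flatten(doc, prefix=""):
    """Recursively flatten a nested document into one flat dict,
    joining nested keys with underscores."""
    row = {}
    for key, value in doc.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            row.update(flatten(value, name + "_"))
        else:
            row[name] = value
    return row

document = {
    "order_id": 42,
    "customer": {"name": "Ana", "city": "Braga"},
    "total": 99.5,
}
row = flatten(document)
print(row)
# {'order_id': 42, 'customer_name': 'Ana', 'customer_city': 'Braga', 'total': 99.5}
```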
Abstract:
Nowadays, organizations are increasingly looking to invest in business intelligence solutions, mainly private companies seeking an advantage over their competitors; however, they often do not know what such an investment requires. Business intelligence allows the analysis of consolidated information in order to obtain more specific views and indicators that support the decision-making process. The right decision can then be taken based on data collected from the different information systems present inside and outside the organization. The textile sector is one where the concept of Business Intelligence is not yet widely explored; in fact, few textile companies have a BI platform. Thus, the objective of this article is to present an architecture and describe all the steps companies need to follow to implement a successful free, in-house Business Intelligence system. As a result, the proposed approach was validated using real data in order to assess the defined steps.
Abstract:
In this work we perform a comparison of two different numerical schemes for the solution of the time-fractional diffusion equation with a variable diffusion coefficient and a nonlinear source term. The two methods are the implicit numerical scheme presented in [M.L. Morgado, M. Rebelo, Numerical approximation of distributed order reaction-diffusion equations, Journal of Computational and Applied Mathematics 275 (2015) 216-227], adapted to our type of equation, and a collocation method in which Chebyshev polynomials are used to reduce the fractional differential equation to a system of ordinary differential equations.
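The abstract does not state the equation explicitly; a plausible form, written here purely as an assumption consistent with the description (variable diffusion coefficient $d(x)$, nonlinear source $f$, Caputo time derivative of order $\alpha$), is:

```latex
% Assumed form of the time-fractional diffusion equation under study:
% Caputo derivative in time, variable diffusion coefficient d(x),
% nonlinear source term f, with 0 < \alpha < 1.
\frac{\partial^{\alpha} u}{\partial t^{\alpha}}(x,t)
  = \frac{\partial}{\partial x}\!\left( d(x)\,\frac{\partial u}{\partial x}(x,t) \right)
  + f\bigl(u(x,t), x, t\bigr),
\qquad 0 < \alpha < 1 .
```

In the collocation approach, expanding $u(x,t)$ in Chebyshev polynomials in $x$ and enforcing the equation at the collocation nodes reduces this PDE to a system of fractional ordinary differential equations in $t$.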
Abstract:
Worldwide, around 9% of children are born before 37 weeks of gestation, placing the premature child at risk, as he or she is not prepared to develop a number of basic functions that begin soon after birth. In order to ensure that these risk pregnancies are properly monitored by obstetricians in time to avoid such problems, Data Mining (DM) models were induced in this study to predict preterm births in a real environment, using data from 3376 patients (women) admitted to the maternal and perinatal care unit of Centro Hospitalar of Oporto. A sensitive metric to predict preterm deliveries was developed, assisting physicians in the decision-making process regarding the patients' observation. It was possible to obtain promising results, achieving sensitivity and specificity values of 96% and 98%, respectively.
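The two metrics reported above can be made concrete. The following is a minimal sketch with invented labels (1 = preterm birth, 0 = term birth), showing how sensitivity and specificity are derived from a model's binary predictions:

```python
# A minimal sketch of the evaluation metrics: sensitivity (true positive
# rate) and specificity (true negative rate) from binary predictions.
# Labels are invented: 1 = preterm birth, 0 = term birth.

def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 and 0.833...
```

A "sensitive metric", as pursued in the study, prioritizes keeping the false-negative count low, since missing a preterm case is the costlier error.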
Abstract:
Lecture Notes in Computer Science, 9273
Abstract:
In Maternity Care, a quick decision has to be made about the most suitable delivery type for the current patient. Guidelines are followed by physicians to support that decision; however, those practice recommendations are limited and underused. In recent years, caesarean delivery has been performed in over 28% of pregnancies, and other operative techniques for specific problems have also been excessively employed. This study identifies obstetric and pregnancy factors that can be used to predict the most appropriate delivery technique, through the induction of data mining models using real data gathered in the perinatal and maternal care unit of Centro Hospitalar of Oporto (CHP). Predicting the type of birth envisions high-quality services and increased safety and effectiveness of specific practices, helping to guide maternity care decisions and facilitate optimal outcomes for mother and child. In this work it was possible to obtain good results, achieving sensitivity and specificity values of 90.11% and 80.05%, respectively, providing the CHP with a model capable of correctly identifying caesarean sections and vaginal deliveries.
Abstract:
Rockburst is characterized by a violent explosion of a block causing a sudden rupture in the rock and is quite common in deep tunnels. It is critical to understand the phenomenon of rockburst, focusing on the patterns of occurrence, so that these events can be avoided and/or managed, saving costs and possibly lives. The failure mechanism of rockburst needs to be better understood. Laboratory experiments are under way at the State Key Laboratory for Geomechanics and Deep Underground Engineering (SKLGDUE) in Beijing, and the system is described. A large number of rockburst tests were performed and their information collected, stored in a database and analyzed. Data Mining (DM) techniques were applied to the database in order to develop predictive models for the rockburst maximum stress (σRB) and the rockburst risk index (IRB), which otherwise require the results of such tests to be determined. With the developed models it is possible to predict these parameters with high accuracy using data from the rock mass and the specific project.
Abstract:
Transcriptional Regulatory Networks (TRNs) are a powerful tool for representing several of the interactions that occur within a cell. Recent studies have provided information to help researchers in the tasks of building and understanding these networks. One of the major sources of information for building TRNs is the biomedical literature. However, due to the rapidly increasing number of scientific papers, it is quite difficult to analyse the large number of papers published on this subject. This fact has heightened the importance of Biomedical Text Mining approaches to this task. Also, owing to the lack of adequate standards, as the number of databases increases, inconsistencies concerning gene and protein names and identifiers are common. In this work, we developed an integrated approach for the reconstruction of TRNs that retrieves the relevant information from important biological databases and inserts it into a unique repository, named KREN. We also applied text mining techniques over this integrated repository to build TRNs. To this end, it was necessary to create a dictionary of names and synonyms associated with these entities, and to develop an approach that retrieves all the abstracts of the related scientific papers stored in PubMed, in order to create a corpus of data about genes. Furthermore, these tasks were integrated into @Note, a software system that provides methods from the Biomedical Text Mining field, including algorithms for Named Entity Recognition (NER), the extraction of relevant terms from publication abstracts, and the extraction of relationships between biological entities (genes, proteins and transcription factors). Finally, this tool was extended to allow the reconstruction of Transcriptional Regulatory Networks from the scientific literature.
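The name/synonym dictionary mentioned above addresses exactly the identifier inconsistencies across databases. The sketch below is a hypothetical illustration of that step (the synonym table and identifiers are invented for the example, and the real dictionary is built from the integrated KREN repository):

```python
# A hypothetical sketch of synonym normalization: mapping the many
# aliases of a gene found in abstracts to one canonical identifier,
# so mentions from different sources are merged. Table is invented.

SYNONYMS = {
    "lexa": "b4043", "exra": "b4043",   # aliases -> canonical locus tag
    "reca": "b2699",
}

def normalize_mentions(mentions):
    """Map each recognized gene mention to its canonical identifier;
    unknown names are kept as-is."""
    return [SYNONYMS.get(m.lower(), m) for m in mentions]

mentions = ["lexA", "exrA", "recA", "unknownGene"]
print(normalize_mentions(mentions))
# ['b4043', 'b4043', 'b2699', 'unknownGene']
```

After normalization, co-occurrences of canonical identifiers in abstracts can be aggregated into candidate regulator-target edges of the TRN.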
Abstract:
Project Management involves one-time endeavors that demand getting it right the first time. On the other hand, project scheduling, one of the most modeled project management process stages, still faces a wide gap between theory and practice. Demanding computational models, and their consequent call for simplification, divert the implementation of such models in project management tools from the actual day-to-day project management process. Special focus is given to the robustness of the generated project schedules in the face of the omnipresence of uncertainty. An "easy" way out is to add, more or less cleverly calculated, time buffers, which always result in an increase in project duration and, correspondingly, in cost. A better approach to dealing with uncertainty seems to be to explore the slack that might be present in a given project schedule, a fortiori when a non-optimal schedule is used. The combination of this approach with recent advances in modeling resource allocation and scheduling techniques that cope with the increasing flexibility of resources, as expressed in "Flexible Resource Constrained Project Scheduling Problem" (FRCPSP) formulations, should be a promising line of research towards more adequate project management tools. In reality, this approach has frequently been used by project managers in an ad-hoc way.
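The slack that the text proposes to exploit can be computed with the classic critical path method. As a minimal sketch (the activity network below is invented), the forward pass gives earliest finish times, the backward pass gives latest finish times, and slack is their difference:

```python
# A minimal sketch of computing slack (total float) in a project
# schedule via the critical path method. The network is invented.
from collections import defaultdict

durations = {"A": 3, "B": 2, "C": 4, "D": 2}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# Forward pass: earliest finish of each task.
early_finish = {}
def ef(task):
    if task not in early_finish:
        start = max((ef(p) for p in predecessors[task]), default=0)
        early_finish[task] = start + durations[task]
    return early_finish[task]

project_end = max(ef(t) for t in durations)

successors = defaultdict(list)
for t, preds in predecessors.items():
    for p in preds:
        successors[p].append(t)

# Backward pass: latest finish without delaying the project.
late_finish = {}
def lf(task):
    if task not in late_finish:
        late_finish[task] = min(
            (lf(s) - durations[s] for s in successors[task]),
            default=project_end,
        )
    return late_finish[task]

slack = {t: lf(t) - ef(t) for t in durations}
print(slack)  # tasks with zero slack form the critical path: A, C, D
```

Here only B has slack (2 days), which is the kind of reserve a robust schedule can redistribute instead of adding explicit time buffers.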
Abstract:
For any vacuum initial data set, we define a local, non-negative scalar quantity which vanishes at every point of the data hypersurface if and only if the data are Kerr initial data. Our scalar quantity only depends on the quantities used to construct the vacuum initial data set which are the Riemannian metric defined on the initial data hypersurface and a symmetric tensor which plays the role of the second fundamental form of the embedded initial data hypersurface. The dependency is algorithmic in the sense that given the initial data one can compute the scalar quantity by algebraic and differential manipulations, being thus suitable for an implementation in a numerical code. The scalar could also be useful in studies of the non-linear stability of the Kerr solution because it serves to measure the deviation of a vacuum initial data set from the Kerr initial data in a local and algorithmic way.