902 results for Supervisory Control and Data Acquisition (SCADA) Topology
Abstract:
This paper presents the results of our data mining study of Pb-Zn (lead-zinc) ore assay records from a mining enterprise in Bulgaria. We examined the dataset, cleaned outliers, visualized the data, and computed dataset statistics. A Pb-Zn cluster data mining model was created for segmentation and prediction of Pb-Zn ore assay data. The Pb-Zn cluster data model consists of five clusters and DMX queries. We analyzed the Pb-Zn cluster content, size, structure, and characteristics. The set of DMX queries allows browsing and managing the clusters, as well as predicting ore assay records. Testing and validation of the Pb-Zn cluster data mining model were carried out to demonstrate its reasonable accuracy before being used in a production environment. The Pb-Zn cluster data mining model can be used to adjust the mine's grinding and flotation processing parameters in near real time, which is important for the efficiency of the Pb-Zn ore beneficiation process. ACM Computing Classification System (1998): H.2.8, H.3.3.
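As an illustration only (not the authors' model, which is built with DMX queries against a SQL Server Analysis Services clustering model), segmenting assay records into five clusters and assigning a new record to a cluster could be sketched in Python as follows; the columns and values are hypothetical.

```python
# Minimal sketch: five-cluster segmentation of Pb-Zn assay records and
# prediction of the cluster of a new record, using scikit-learn in place of
# the DMX-based model described in the abstract. Values are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical assay records: [Pb %, Zn %] per sample
assays = np.array([[1.2, 2.8], [0.9, 3.1], [4.5, 0.7],
                   [3.8, 1.1], [2.0, 2.0], [0.5, 4.0]])

scaler = StandardScaler()
X = scaler.fit_transform(assays)

model = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(model.labels_))

# Assign (predict) the cluster of a newly assayed sample
new_sample = scaler.transform([[1.0, 3.0]])
print("predicted cluster:", model.predict(new_sample)[0])
```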
Abstract:
This work was supported in part by the EU project "2nd Generation Open Access Infrastructure for Research in Europe" (OpenAIRE+). The autumn training school Development and Promotion of Open Access to Scientific Information and Research is organized within the framework of the Fourth International Conference on Digital Presentation and Preservation of Cultural and Scientific Heritage—DiPP2014 (September 18–21, 2014, Veliko Tarnovo, Bulgaria, http://dipp2014.math.bas.bg/), held under UNESCO patronage. The main organiser is the Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, with the support of the EU project FOSTER (http://www.fosteropenscience.eu/) and the P. R. Slaveykov Regional Public Library in Veliko Tarnovo, Bulgaria.
Abstract:
Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever-increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category it falls into within the bigness taxonomy. Large-p, small-n data sets, for instance, require a different set of tools from the large-n, small-p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, and Sequentialization. Indeed, it is important to emphasize right away that the so-called no-free-lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham's razor principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
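For the large-p, small-n category, for example, standardization followed by penalization is a typical combination of the tools listed above; the sketch below runs on synthetic data and is illustrative only, not taken from the paper.

```python
# Minimal sketch: a "large p, small n" problem handled by standardization
# followed by penalization (the Lasso). Data are synthetic and the parameter
# choices are illustrative.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 50, 500                       # far more predictors than observations
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 2.0                       # only 5 truly relevant predictors
y = X @ beta + rng.normal(size=n)

model = make_pipeline(StandardScaler(), LassoCV(cv=5)).fit(X, y)
nonzero = np.flatnonzero(model.named_steps["lassocv"].coef_)
print("selected predictors:", nonzero)
```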
Abstract:
For wireless power transfer (WPT) systems, communication between the primary side and the pickup side is a challenge because of the large air gap and magnetic interference. A novel method, which integrates bidirectional data communication into a high-power WPT system, is proposed in this paper. The power and data transfer share the same inductive link between coreless coils. A power/data frequency-division multiplexing technique is applied, and the power and data are transmitted on different frequency carriers and controlled independently. The circuit model of the multiband system is provided to analyze the transmission gain of the communication channel, as well as the power delivery performance. The crosstalk interference between the two carriers is discussed. In addition, the signal-to-noise ratios of the channels are estimated, which gives a guideline for the design of mod/demod circuits. Finally, a 500-W WPT prototype has been built to demonstrate the effectiveness of the proposed WPT system.
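As a loose illustration of the frequency-division idea only (not the authors' circuit model, which analyzes the inductive link itself), the sketch below superimposes a power carrier and a data carrier in discrete time and recovers the data carrier with a band-pass filter; all frequencies and amplitudes are made up.

```python
# Minimal sketch: power/data frequency-division multiplexing on a shared link,
# illustrated in discrete time. A low-frequency "power" carrier and a
# higher-frequency on-off-keyed data carrier are superimposed, and the data
# carrier is recovered with a band-pass filter. Parameters are hypothetical.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1_000_000                                  # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)
power_carrier = 10 * np.sin(2 * np.pi * 20_000 * t)        # 20 kHz power carrier
bits = np.repeat(np.random.default_rng(0).integers(0, 2, 50), len(t) // 50)
data_carrier = 0.5 * bits[: len(t)] * np.sin(2 * np.pi * 200_000 * t)  # 200 kHz OOK data

shared_link = power_carrier + data_carrier      # both carriers on the same link

# Band-pass around the data carrier to separate it from the power carrier
b, a = butter(4, [150_000, 250_000], btype="band", fs=fs)
recovered = filtfilt(b, a, shared_link)
print("residual correlation with power carrier:",
      round(float(np.corrcoef(recovered, power_carrier)[0, 1]), 3))
```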
Abstract:
Research Question/Issue - Which forms of state control over corporations have emerged in countries that made a transition from centrally planned to market-based economies, and what are their implications for corporate governance? We assess the literature on the variation and evolution of state control in transition economies, focusing on the corporate governance of state-controlled firms. We highlight emerging trends and identify future research avenues. Research Findings/Insights - Based on our analysis of more than 100 articles in leading management, finance, and economics journals since 1989, we demonstrate how research on state control evolved from a polarized approach of public–private equity ownership comparison to studying a variety of constellations of state capitalism. Theoretical/Academic Implications - We identify theoretical perspectives that help us better understand the benefits and costs associated with various forms of state control over firms. We encourage future studies to examine how context-specific factors determine the effect of state control on corporate governance. Practitioner/Policy Implications - Investors and policymakers should consider under which conditions investing in state-affiliated firms generates superior returns.
Abstract:
Sentiment classification over Twitter is usually affected by the noisy nature (abbreviations, irregular forms) of tweet data. A popular procedure to reduce the noise of textual data is to remove stopwords, either by using pre-compiled stopword lists or by more sophisticated methods for dynamic stopword identification. However, the effectiveness of removing stopwords in the context of Twitter sentiment classification has been debated in recent years. In this paper we investigate whether removing stopwords helps or hampers the effectiveness of Twitter sentiment classification methods. To this end, we apply six different stopword identification methods to Twitter data from six different datasets and observe how removing stopwords affects two well-known supervised sentiment classification methods. We assess the impact of removing stopwords by observing fluctuations in the level of data sparsity, the size of the classifier's feature space and its classification performance. Our results show that using pre-compiled lists of stopwords negatively impacts the performance of Twitter sentiment classification approaches. On the other hand, the dynamic generation of stopword lists, by removing those infrequent terms appearing only once in the corpus, appears to be the optimal method for maintaining high classification performance while reducing data sparsity and substantially shrinking the feature space.
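A minimal sketch of the contrast the paper investigates, using made-up tweets and scikit-learn rather than the paper's datasets and classifiers: a pre-compiled English stopword list versus a dynamic list of terms occurring only once in the corpus, compared by the resulting feature-space size.

```python
# Minimal sketch: pre-compiled vs. dynamic (singleton-based) stopword removal
# and its effect on the size of the feature space. Tweets are made-up examples.
import re
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer

tweets = ["gr8 game tonight", "this phone is gr8", "worst service ever",
          "loving the new update", "the update broke everything"]

# (a) pre-compiled list: scikit-learn's built-in English stopwords
vec_static = CountVectorizer(stop_words="english").fit(tweets)

# (b) dynamic list: drop terms that occur exactly once in the corpus
counts = Counter(w for t in tweets for w in re.findall(r"[a-z0-9]+", t.lower()))
singletons = [w for w, c in counts.items() if c == 1]
vec_dynamic = CountVectorizer(stop_words=singletons).fit(tweets)

print("features, pre-compiled list:", len(vec_static.vocabulary_))
print("features, singleton removal:", len(vec_dynamic.vocabulary_))
```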
Abstract:
External partnerships play an important role in firms' acquisition of the knowledge inputs to innovation. Such partnerships may be interactive – involving exploration and mutual learning by both parties – or non-interactive – involving exploitative activity and learning by only one party. Examples of non-interactive partnerships are copying or imitation. Here, we consider how firms' innovation objectives influence their choice of interactive and/or non-interactive connections. We conduct a comparative analysis for the economies of Spain and the UK, which have contrasting innovation ecosystems and regulatory burdens.
Abstract:
Groundwater systems of different densities are often mathematically modeled to understand and predict environmental behavior such as seawater intrusion or submarine groundwater discharge. Additional data collection may be justified if it will cost-effectively aid in reducing the uncertainty of a model's prediction. The collection of salinity as well as temperature data could aid in reducing predictive uncertainty in a variable-density model. However, before numerical models can be created, rigorous testing of the modeling code needs to be completed. This research documents the benchmark testing of a new modeling code, SEAWAT Version 4. The benchmark problems include various combinations of density-dependent flow resulting from variations in concentration and temperature. The verified code, SEAWAT, was then applied to two different hydrological analyses to explore the capacity of a variable-density model to guide data collection. The first analysis tested a linear method to guide data collection by quantifying the contribution of different data types and locations toward reducing predictive uncertainty in a nonlinear variable-density flow and transport model. The relative contributions of temperature and concentration measurements, at different locations within a simulated carbonate platform, for predicting movement of the saltwater interface were assessed. Results from the method showed that concentration data had greater worth than temperature data in reducing predictive uncertainty in this case. Results also indicated that a linear method could be used to quantify data worth in a nonlinear model. The second hydrological analysis utilized a model to identify the transient response of the salinity, temperature, age, and amount of submarine groundwater discharge to changes in tidal ocean stage, seasonal temperature variations, and different types of geology. The model was compared to multiple kinds of data to (1) calibrate and verify the model, and (2) explore the potential for the model to be used to guide the collection of data using techniques such as electromagnetic resistivity, thermal imagery, and seepage meters. Results indicated that the model can give insight into submarine groundwater discharge and can be used to guide data collection.
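The linear data-worth idea described in the first analysis can be sketched with a first-order (FOSM) uncertainty calculation: the reduction in predictive variance from assimilating a measurement is a Schur-complement update of the prior parameter covariance. The toy matrices below are hypothetical stand-ins for model sensitivities, not values from the study.

```python
# Minimal sketch: linear (first-order) data-worth calculation. The reduction
# in predictive uncertainty from adding an observation is computed with a
# Schur-complement update of the prior parameter covariance.
import numpy as np

C = np.diag([1.0, 0.5, 2.0])          # prior parameter covariance (hypothetical)
y = np.array([0.8, 0.1, 1.5])         # sensitivity of the prediction (interface position)
J_conc = np.array([[1.0, 0.2, 0.6]])  # sensitivity of a concentration measurement
J_temp = np.array([[0.3, 0.1, 0.2]])  # sensitivity of a temperature measurement
R = np.array([[0.1]])                 # measurement-noise variance

def posterior_pred_var(J):
    """Linear (FOSM) predictive variance after assimilating observations J."""
    C_post = C - C @ J.T @ np.linalg.inv(J @ C @ J.T + R) @ J @ C
    return float(y @ C_post @ y)

prior_var = float(y @ C @ y)
for name, J in [("concentration", J_conc), ("temperature", J_temp)]:
    print(name, "data reduce predictive variance by",
          round(prior_var - posterior_pred_var(J), 3))
```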
Abstract:
Due to the rapid advances in computing and sensing technologies, enormous amounts of data are generated every day in various applications. The integration of data mining and data visualization has been widely used to analyze these massive and complex data sets to discover hidden patterns. For both data mining and visualization to be effective, it is important to include visualization techniques in the mining process and to present the discovered patterns in a more comprehensive visual view. In this dissertation, four related problems are studied to explore the integration of data mining and data visualization: dimensionality reduction for visualizing high-dimensional datasets, visualization-based clustering evaluation, interactive document mining, and exploration of multiple clusterings. In particular, we 1) propose an efficient feature selection method (reliefF + mRMR) for preprocessing high-dimensional datasets; 2) present DClusterE to integrate cluster validation with user interaction and provide rich visualization tools for users to examine document clustering results from multiple perspectives; 3) design two interactive document summarization systems that involve user effort and generate customized summaries from 2D sentence layouts; and 4) propose a new framework which organizes the different input clusterings into a hierarchical tree structure and allows for interactive exploration of multiple clustering solutions.
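As an illustration of the mRMR part of the preprocessing step only (a sketch, not the dissertation's reliefF + mRMR implementation), a greedy selection that trades off relevance to the labels against redundancy with already-selected features might look like this on synthetic data:

```python
# Minimal sketch: greedy mRMR-style feature selection. Each candidate feature
# is scored by its mutual information with the labels (relevance) minus its
# average mutual information with already-selected features (redundancy).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

X, y = make_classification(n_samples=200, n_features=20, n_informative=4, random_state=0)
relevance = mutual_info_classif(X, y, random_state=0)

selected = [int(np.argmax(relevance))]          # start with the most relevant feature
while len(selected) < 5:
    best, best_score = None, -np.inf
    for j in range(X.shape[1]):
        if j in selected:
            continue
        redundancy = np.mean([
            mutual_info_regression(X[:, [j]], X[:, k], random_state=0)[0]
            for k in selected
        ])
        score = relevance[j] - redundancy
        if score > best_score:
            best, best_score = j, score
    selected.append(best)

print("selected feature indices:", selected)
```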
Abstract:
Purpose: To investigate to what degree the presence of hypertension (HTN) and poor glycemic control (GC) influences the likelihood of having microalbuminuria (MAU) among Cuban Americans with type 2 diabetes (T2D). Methods: A cross-sectional study was conducted in Cuban Americans (n = 179) with T2D. Participants were recruited from a randomly generated mailing list purchased from KnowledgeBase Marketing, Inc. Blood pressure (BP) was measured twice and averaged using an adult-size cuff. Glycosylated hemoglobin (A1c) levels were measured from whole blood samples with the Roche Tina-quant method. First morning urine samples were collected from each participant to determine MAU by a semiquantitative assay (ImmunoDip). Results: MAU was present in 26% of Cuban Americans with T2D. A significantly higher percentage of subjects with MAU had HTN (P = 0.038) and elevated A1c (P = 0.002) than those with normoalbuminuria. Logistic regression analysis showed that, after controlling for covariates, subjects with poor GC were 6.76 times more likely to have MAU if they had hypertension than if they did not (P = 0.004; 95% confidence interval [CI]: 1.83, 23.05). Conclusion: These findings emphasize the clinical significance of early detection of MAU in this Hispanic subgroup, combined with BP control and good GC, which are fundamental to preventing and treating diabetes complications and improving renal and cardiovascular outcomes.
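A minimal sketch of the kind of logistic-regression analysis reported above, run on simulated data rather than the study data: the odds ratio and its 95% CI are obtained by exponentiating the fitted coefficient and its confidence bounds.

```python
# Minimal sketch: logistic regression of microalbuminuria status on
# hypertension, adjusting for A1c, with the odds ratio and 95% CI recovered
# from the fitted coefficient. Data below are simulated, not the study data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 179
htn = rng.integers(0, 2, n)                    # hypertension (0/1)
a1c = rng.normal(8.0, 1.5, n)                  # glycosylated hemoglobin, %
logit_p = -4 + 1.9 * htn + 0.2 * a1c
mau = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([htn, a1c]))
fit = sm.Logit(mau, X).fit(disp=False)
odds_ratios = np.exp(fit.params)
ci = np.exp(fit.conf_int())
print("OR (hypertension): %.2f, 95%% CI: %.2f-%.2f"
      % (odds_ratios[1], ci[1, 0], ci[1, 1]))
```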
Abstract:
In this Bachelor Thesis I want to provide readers with tools and scripts for the control of a 7-DOF manipulator, backed up by some theory from Robotics and Computer Science in order to better contextualize the work done. In practice, we will see the most common software and development environments used to cope with our task: these include ROS, along with visual simulation in VREP and RVIZ, and an almost "stand-alone" ROS extension called MoveIt!, a very complete programming interface for trajectory planning and obstacle avoidance. As we will better appreciate and understand in the introduction chapter, the capability to detect collision objects through a camera sensor and re-plan to the desired end-effector pose is not enough. In fact, this work is part of a more complex system, where recognition of particular objects is needed. Through a ROS package and customized scripts, a detailed procedure is provided on how to distinguish a particular object, retrieve its reference frame with respect to a known one, and then allow navigation to that target. Together with the technical details, the aim is also to report working scripts and a dedicated appendix (A) to refer to when putting things together.
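A minimal sketch of the kind of MoveIt! workflow the thesis describes, assuming the Python (moveit_commander) interface; the move group name ("manipulator") and all coordinates are hypothetical, and this is not the thesis' actual script.

```python
#!/usr/bin/env python
# Minimal sketch: register a detected obstacle in the planning scene, then
# plan and move to a target end-effector pose with MoveIt!. Group name and
# coordinates are hypothetical.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped, Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("reach_target_demo")

scene = moveit_commander.PlanningSceneInterface()
group = moveit_commander.MoveGroupCommander("manipulator")

# Add a collision box so the planner avoids it
box = PoseStamped()
box.header.frame_id = group.get_planning_frame()
box.pose.position.x, box.pose.position.y, box.pose.position.z = 0.4, 0.0, 0.2
scene.add_box("detected_object", box, size=(0.1, 0.1, 0.1))
rospy.sleep(1.0)

# Plan and move to a desired end-effector pose
target = Pose()
target.position.x, target.position.y, target.position.z = 0.3, 0.2, 0.5
target.orientation.w = 1.0
group.set_pose_target(target)
group.go(wait=True)
group.stop()
group.clear_pose_targets()
```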
Abstract:
Funded by the UK Government's Overseas Territories Environmental Programme (OTEP).
Abstract:
C.-W.W. is supported by a studentship funded by the College of Physical Sciences, University of Aberdeen. M.S.B. acknowledges EPSRC grant no. EP/I032606/1.