788 results for data mining applications
Abstract:
In recent years, studies into the reasons for dropping out of higher education (including online education) have been undertaken with greater regularity, in parallel with the growing weight of this type of education relative to brick-and-mortar education. However, the work invested in characterising the students who drop out, compared with those who do not, appears not to have received the same attention as the analysis of the causes. The definition of dropping out is very sensitive to context. In this article, we reach a purely empirical definition of student dropout, based on the probability of not continuing a specific academic programme after several consecutive semesters of "theoretical break". Dropping out should be properly defined before analysing its causes, and before comparing drop-out rates between different online programmes, or between online and on-campus ones. Our results show that there are significant differences among programmes depending on their theoretical length, but not on their domain of knowledge.
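A minimal sketch of the kind of empirical definition the article describes, assuming enrolment histories are available as per-student semester flags (the data layout and toy histories below are illustrative, not the article's data): for each number k of consecutive "theoretical break" semesters, estimate the probability that the student never re-enrols.

```python
from collections import defaultdict

# Hypothetical input: for each student, an ordered list of semesters
# (True = enrolled, False = "theoretical break").
histories = {
    "s1": [True, True, False, False, True],    # returned after a 2-semester break
    "s2": [True, False, False, False, False],  # never returned
    "s3": [True, True, True, False, False],
}

# For every break run, record whether the student eventually re-enrolled.
# A run of length k means the student was also observed after 1..k
# consecutive break semesters, with the same eventual outcome.
returned = defaultdict(int)
total = defaultdict(int)

for sems in histories.values():
    n, i = len(sems), 0
    while i < n:
        if not sems[i]:
            j = i
            while j < n and not sems[j]:
                j += 1
            came_back = j < n              # an enrolled semester follows the run
            for k in range(1, j - i + 1):
                total[k] += 1
                returned[k] += came_back
            i = j
        else:
            i += 1

for k in sorted(total):
    p_dropout = 1 - returned[k] / total[k]
    print(f"P(no return | {k} consecutive break semesters) = {p_dropout:.2f}")
```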
Abstract:
This master's thesis covers the concepts of knowledge discovery, data mining and technology forecasting in telecommunications. It covers the various aspects of knowledge discovery in databases and discusses in detail the data mining and technology forecasting methods used in telecommunications. The main concern throughout the thesis is to emphasize the methods used in technology forecasting for telecommunications and in data mining. It tries to answer, to some extent, the question of whether forecasts create a future, and it also describes a few difficulties that arise in technology forecasting. This thesis was done as part of my master's studies at Lappeenranta University of Technology.
Abstract:
BACKGROUND: Selective publication of studies, which is commonly called publication bias, is widely recognized. Over the years a new nomenclature for other types of bias related to non-publication or to distortion in the dissemination of research findings has been developed. However, several of these different biases are often still summarized by the term 'publication bias'. METHODS/DESIGN: As part of the OPEN Project (To Overcome failure to Publish nEgative fiNdings) we will conduct a systematic review with the following objectives: (1) to systematically review highly cited articles that focus on non-publication of studies and to present the various definitions of biases related to the dissemination of research findings contained in the articles identified; (2) to develop and discuss, in an international group of experts in the context of the OPEN Project, a new framework on the nomenclature of various aspects of distortion in the dissemination process that leads to public availability of research findings. We will systematically search Web of Knowledge for highly cited articles that provide a definition of biases related to the dissemination of research findings. A specifically designed data extraction form will be developed and pilot-tested. Working in teams of two, we will independently extract relevant information from each eligible article. For the development of a new framework we will construct an initial table listing different levels and different hazards en route to making research findings public. An international group of experts will iteratively review the table and reflect on its content until no new insights emerge and consensus has been reached. DISCUSSION: Results are expected to be publicly available in mid-2013. This systematic review, together with the results of other systematic reviews of the OPEN project, will serve as a basis for the development of future policies and guidelines regarding the assessment and prevention of publication bias.
Abstract:
DDM is a framework that combines intelligent agents with traditional artificial intelligence algorithms such as classifiers. The central idea of this project is to create a multi-agent system that allows different views to be compared and combined into a single one.
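The abstract does not specify how DDM merges the agents' views; as a hedged illustration, the sketch below combines several classifiers by majority voting with scikit-learn. The data set and the choice of classifiers are assumptions made purely for the example.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Each "agent" wraps one traditional classifier trained on the same task.
agents = [
    ("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
    ("nb", GaussianNB()),
    ("logreg", LogisticRegression(max_iter=1000)),
]

# Hard voting merges the agents' individual views into a single decision.
combined = VotingClassifier(estimators=agents, voting="hard")
combined.fit(X, y)
print(combined.predict(X[:5]))
```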
Abstract:
Background: Current advances in genomics, proteomics and other areas of molecular biology make the identification and reconstruction of novel pathways an emerging area of great interest. One such class of pathways is involved in the biogenesis of Iron-Sulfur Clusters (ISC). Results: Our goal is the development of a new approach based on the use and combination of mathematical, theoretical and computational methods to identify the topology of a target network. In this approach, mathematical models play a central role in the evaluation of the alternative network structures that arise from literature data-mining, phylogenetic profiling, structural methods, and human curation. As a test case, we reconstruct the topology of the reaction and regulatory network for the mitochondrial ISC biogenesis pathway in S. cerevisiae. Predictions regarding how proteins act in ISC biogenesis are validated by comparison with published experimental results. For example, the predicted roles of Arh1 and Yah1, and some of the interactions we predict for Grx5, both match experimental evidence. A putative role for frataxin in directly regulating mitochondrial iron import is discarded by our analysis, which also agrees with published experimental results. Additionally, we propose a number of experiments for testing other predictions and further improving the identification of the network structure. Conclusion: We propose and apply an iterative in silico procedure for the predictive reconstruction of the network topology of metabolic pathways. The procedure combines structural bioinformatics tools and mathematical modeling techniques that allow the reconstruction of biochemical networks. Using iron-sulfur cluster biogenesis in S. cerevisiae as a test case, we indicate how this procedure can be used to analyze and validate the network model against experimental results. Critical evaluation of the results obtained through this procedure allows devising new wet-lab experiments to confirm its predictions or provide alternative explanations, further improving the models.
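One way to picture the model-based evaluation step, under stated assumptions (these are not the article's actual models): score alternative candidate structures by how well each explains observed data, here with ordinary least squares and the Akaike information criterion on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(2, 50))
y = 2.0 * x1 + rng.normal(scale=0.3, size=50)  # "true" topology: y <- x1

def aic(y, X):
    """Fit y ~ X by least squares and return the Akaike criterion."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

# Candidate network structures: which regulators feed into y?
candidates = {
    "y <- x1":     np.column_stack([x1]),
    "y <- x2":     np.column_stack([x2]),
    "y <- x1, x2": np.column_stack([x1, x2]),
}
scores = {name: aic(y, X) for name, X in candidates.items()}
print(min(scores, key=scores.get), scores)
```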
Abstract:
Multicast is one method of transferring information in IPv4-based communication; the other methods are unicast and broadcast. Multicast is based on the group concept, where data is sent from one point to a group of receivers, which saves bandwidth remarkably. Group members express an interest in receiving data by using the Internet Group Management Protocol (IGMP), and traffic is received only by those receivers who want it. The most common multicast applications are media streaming, surveillance, and data collection applications. There are many data security methods to protect unicast communication, which is the most common transfer method on the Internet. Popular data security methods are encryption, authentication, access control and firewalls. Characteristics of multicast such as dynamic membership mean that these data security mechanisms cannot all be used to protect multicast traffic. Nowadays the protection of multicast traffic is possible via traffic restrictions, where traffic is allowed to propagate only to certain areas; one way to implement this is packet filters. The methods tested in this thesis are MVR, IGMP Filtering and access control lists, all of which worked as expected. These methods restrict the propagation of multicast but are laborious to configure on a large scale. There are also a few manufacturer-specific products that make it possible to encrypt multicast traffic; these separate products are expensive and mainly intended to protect video transmissions via satellite. Investigation of multicast security has been under way for several years, and the security methods resulting from it are nearing completion. An IETF working group called MSEC is standardizing these security methods; the target of the working group is to standardize data security protocols for multicast during 2004.
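To illustrate the group concept behind IGMP, here is a small Python receiver that joins a multicast group; the kernel sends the IGMP membership report on the application's behalf, so the host receives only traffic it has asked for. The group address and port below are illustrative assumptions.

```python
import socket
import struct

GROUP = "239.1.1.1"   # administratively scoped IPv4 multicast address
PORT = 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP makes the kernel emit an IGMP membership report,
# expressing this host's interest in the group on the default interface.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Block until a datagram addressed to the group arrives.
data, sender = sock.recvfrom(1500)
print(f"received {len(data)} bytes from {sender}")
```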
Abstract:
The extension of traditional data mining methods to time series has been effectively applied to a wide range of domains such as finance, econometrics, biology, security, and medicine. Many existing mining methods deal with the task of change-point detection, but very few provide a flexible approach. Querying specific change points with linguistic variables is particularly useful in crime analysis, where intuitive, understandable, and appropriate detection of changes can significantly improve the allocation of resources for timely and concise operations. In this paper, we propose an on-line method for detecting and querying change points in crime-related time series with the use of a meaningful representation and a fuzzy inference system. Change-point detection is based on a shape space representation, and linguistic terms describing geometric properties of the change points are used to express queries, offering the advantage of intuitiveness and flexibility. An empirical evaluation is first conducted on a crime data set to confirm the validity of the proposed method, and then on a financial data set to test its general applicability. A comparison with a similar change-point detection algorithm and a sensitivity analysis are also conducted. Results show that the method is able to accurately detect change points at very low computational cost. More broadly, the detection of specific change points within time series of virtually any domain is made more intuitive and more understandable, even for experts outside data mining.
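A minimal sketch in the spirit of the described method, under stated assumptions: each sliding window is reduced to a simple shape feature (its slope), and a fuzzy membership function stands in for a linguistic term such as "steep increase". The window size, membership bounds, and threshold are illustrative, not the paper's parameters.

```python
import numpy as np

def slope(window):
    """Least-squares slope of a window: a crude shape-space coordinate."""
    t = np.arange(len(window))
    return np.polyfit(t, window, 1)[0]

def steep_increase(s, lo=0.2, hi=0.8):
    """Fuzzy membership of 'steep increase': 0 below lo, 1 above hi."""
    return float(np.clip((s - lo) / (hi - lo), 0.0, 1.0))

rng = np.random.default_rng(1)
series = np.concatenate([
    rng.normal(0, 0.2, 50),                       # flat regime
    np.arange(30) + rng.normal(0, 0.2, 30),       # sharp rise
])

w = 10
for i in range(len(series) - w):
    mu = steep_increase(slope(series[i:i + w]))
    if mu > 0.8:  # linguistic query: "where does the series increase steeply?"
        print(f"change point near t={i}, membership={mu:.2f}")
        break
```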
Abstract:
A decision tree is a graphical and analytical way of representing all the events (outcomes) that can arise from a decision taken at a certain moment. Decision trees help us make the most "accurate" decision, from a probabilistic point of view, when faced with a range of possible decisions. These trees make it possible to examine the results and determine visually how the model flows. The visual results help to find specific subgroups and relationships that we might not find with more traditional statistics. Decision trees are a statistical technique for segmentation, stratification, prediction, data reduction and variable screening, interaction identification, category merging, and discretization of continuous variables. The decision tree function (Tree) in SPSS creates classification and decision trees to identify groups, discover relationships between groups, and predict future events. There are different types of tree: CHAID, Exhaustive CHAID, CRT and QUEST, depending on which best fits our data.
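The abstract refers to SPSS's Tree procedure, which cannot be reproduced here; as a rough open-source analogue, scikit-learn's DecisionTreeClassifier implements CART, the algorithm behind the CRT tree type mentioned above. The data set is a toy example.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print the fitted splits: a textual counterpart of SPSS's tree diagram.
print(export_text(tree))
```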
Abstract:
Over the last twenty years, online information has become a decisive factor for academic and research activity, and as a consequence electronic resources have progressively taken over an ever larger share of library budgets. Licensing electronic resources has come to play a determining role in the economics of library services, as print publications have steadily lost ground to digital ones. It is estimated that Italian university libraries (although not at the forefront in this sector) have for some years been investing more than half of their budgets in the acquisition of electronic resources. As is well known, the development of the digital information market has pushed libraries to join together in organizations and consortia, even in contexts traditionally reluctant to cooperate. The cooperative approach is considered a decisive element in the world of electronic information, and consortia are the most suitable organizational instrument for making this approach effective. In recent years consortia have pushed their activity beyond acquisitions and the negotiation of electronic licences, investing in open access, digital preservation, data mining, collective management of print collections, library management systems (ILS and discovery tools), access platforms, and much more. More recently, consortia have shown greater willingness to collaborate with other organizations working on various aspects of scholarly communication and on research management and evaluation (research funding agencies, publishers, information technology companies, etc.) in order to meet the new needs of libraries, which are set to extend their activity beyond their traditional perimeter.
Abstract:
Bodily injury claims have the greatest impact on the claim costs of motor insurance companies. The disability severity of motor claims is assessed in numerous European countries by means of score systems. In this paper a zero-inflated generalized Poisson regression model is implemented to estimate the disability severity score of victims involved in motor accidents on Spanish roads. We show that the injury severity estimates may be automatically converted into financial terms by insurers at any point of the claim handling process. As such, the methodology described may be used by motor insurers operating in the Spanish market to monitor the size of bodily injury claims. Using insurance data, various applications are presented in which the score estimate of disability severity is of value to insurers, either for computing the claim compensation or for claim reserving purposes.
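statsmodels ships a ZeroInflatedGeneralizedPoisson class that fits the model family the paper applies. The sketch below uses simulated severity scores with an excess of zeros rather than real insurance data, and the single covariate is an illustrative assumption.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedGeneralizedPoisson

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(18, 80, n)
X = sm.add_constant(age)

# Toy severity scores with excess zeros (claims with no disability).
latent = rng.poisson(lam=np.exp(0.01 * age - 0.5))
score = np.where(rng.random(n) < 0.4, 0, latent)

# Count part and inflation part both depend on the same covariates here.
model = ZeroInflatedGeneralizedPoisson(score, X, exog_infl=X)
result = model.fit(disp=False)
print(result.summary())
```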
Abstract:
The number of digital images has increased exponentially in the last few years. People have problems managing their image collections and finding a specific image, and an automatic image categorization system could help them with both. In this thesis, an unsupervised visual object categorization system was implemented to categorize a set of unknown images. Because the system is unsupervised, it does not need known, manually labelled images for training; therefore, the number of possible categories and images can be huge. The system implemented in the thesis extracts local features from the images. These local features are used to build a codebook, and the local features and the codebook are then used to generate a feature vector for each image. Images are categorized based on the feature vectors. The system is able to categorize any given set of images based on their visual appearance: images that have similar image regions are grouped together in the same category, so, for example, images which contain cars are assigned to the same cluster. The unsupervised visual object categorization system can be used in many situations, e.g., in an Internet search engine: the system can categorize images for a user, and the user can then easily find a specific type of image.
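A minimal sketch of the codebook pipeline the thesis describes: local features, a visual codebook, a per-image histogram, and unsupervised categories. Raw pixel patches stand in for whatever local descriptors the thesis actually uses, and all sizes and cluster counts are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
images = [rng.random((32, 32)) for _ in range(20)]  # toy grayscale images

def local_features(img, patch=8, n=30):
    """Sample n random patches and flatten each into a descriptor."""
    ys = rng.integers(0, img.shape[0] - patch, n)
    xs = rng.integers(0, img.shape[1] - patch, n)
    return np.stack([img[y:y+patch, x:x+patch].ravel() for y, x in zip(ys, xs)])

feats = [local_features(img) for img in images]

# Build the codebook by clustering all local features from all images.
codebook = KMeans(n_clusters=16, n_init=5, random_state=0).fit(np.vstack(feats))

def histogram(f):
    """One feature vector per image: histogram of codeword occurrences."""
    words = codebook.predict(f)
    return np.bincount(words, minlength=16) / len(words)

vectors = np.stack([histogram(f) for f in feats])

# Categorize the images by clustering their histogram vectors.
labels = KMeans(n_clusters=4, n_init=5, random_state=0).fit_predict(vectors)
print(labels)
```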
Abstract:
The target organization of this study is the internal raw material purchaser and supplier of a large industrial company. The study investigates what constitutes the value of the target organization's procurement customer relationships, and how existing business data can be used to examine, assess and classify the value of deals and customer relationships objectively, reliably and independently of time. The theoretical part of the study presents approaches and methods for refining existing data into new stakeholder knowledge for business use, and reviews the foundations and models of customer profitability analysis, portfolio analysis and customer segmentation. Based on these theories and models, a customer valuation and classification model tailored to the target organization is built, based on indexed price, volume and deal-recurrence variables. The valuation and classification model is tested on a sample of 389,336 transaction rows drawn from business data for the years 2003–2007, containing 42,186 customer relationships to be assessed. The most significant finding is a group of about 5,000 customer relationships that are clearly more expensive than average. The data and its outliers are tested with statistical methods in order to identify the factors that affect and explain customer value. Finally, the significance of the valuation model as a tool for more analytical purchasing and customer relationship management is discussed, and a few suggestions for improvement are presented.
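The abstract names indexed price, volume and deal-recurrence variables but not the scoring formula; the sketch below is a purely illustrative valuation of this shape, with made-up data and an assumed equal weighting of the three indexed variables.

```python
import pandas as pd

deals = pd.DataFrame({
    "customer": ["A", "A", "B", "C", "C", "C"],
    "price":    [120, 110, 300, 90, 95, 100],   # unit price paid
    "volume":   [10, 12, 2, 50, 48, 55],        # quantity purchased
})

per_cust = deals.groupby("customer").agg(
    mean_price=("price", "mean"),
    total_volume=("volume", "sum"),
    recurrence=("price", "size"),               # number of deals
)

# Index each variable against the all-customer mean (1.0 = average).
indexed = per_cust / per_cust.mean()

# Assumption: lower price, higher volume and higher recurrence add value.
indexed["score"] = (1 / indexed["mean_price"] + indexed["total_volume"]
                    + indexed["recurrence"]) / 3
print(indexed.sort_values("score", ascending=False))
```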
Abstract:
The discipline of Educational Data Mining and Learning Analytics aims to apply the methods of knowledge discovery in databases and machine learning in order to understand and, where appropriate, improve the processes that take place in learning environments. This study starts from a log of users' session logins and logouts on the UOC Virtual Campus and attempts to obtain results in this direction.
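A minimal sketch of a plausible first step on such a session log; the column names and timestamps are assumptions about the log's structure, not the UOC data. It derives session durations and simple per-user features for later mining.

```python
import pandas as pd

log = pd.DataFrame({
    "user":   ["u1", "u1", "u2", "u1", "u2"],
    "login":  pd.to_datetime(["2012-01-05 09:00", "2012-01-06 14:30",
                              "2012-01-05 10:15", "2012-01-07 08:00",
                              "2012-01-08 20:00"]),
    "logout": pd.to_datetime(["2012-01-05 09:40", "2012-01-06 15:10",
                              "2012-01-05 11:00", "2012-01-07 08:20",
                              "2012-01-08 21:30"]),
})

# Session duration in minutes, from paired login/logout timestamps.
log["duration_min"] = (log["logout"] - log["login"]).dt.total_seconds() / 60

# Simple per-user behavioural features for later mining steps.
features = log.groupby("user")["duration_min"].agg(["count", "mean", "sum"])
print(features)
```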
Abstract:
Recommender systems attempt to predict items in which a user might be interested, given some information about the user's and items' profiles. Most existing recommender systems use content-based or collaborative filtering methods, or hybrid methods that combine both techniques. We created Informed Recommender to address the problem of using consumer opinion about products, expressed online in free-form text, to generate product recommendations. Informed Recommender uses prioritized consumer product reviews to make recommendations. Using text-mining techniques, it automatically maps each piece of each review comment into an ontology.
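The abstract does not detail the text-mining step; as a deliberately simple stand-in, the sketch below maps review phrases to ontology concepts by keyword matching. The miniature ontology and the reviews are made up for illustration.

```python
# Hypothetical mapping from review vocabulary to ontology concepts.
ontology = {
    "battery life": "Product.Power.BatteryLife",
    "screen": "Product.Display.Screen",
    "price": "Product.Cost.Price",
}

reviews = [
    "Great screen, but the battery life is disappointing.",
    "For this price you cannot complain.",
]

# Map each review comment to the ontology concepts it mentions.
for review in reviews:
    text = review.lower()
    concepts = [c for k, c in ontology.items() if k in text]
    print(review, "->", concepts)
```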