994 results for Data base


Relevance: 70.00%

Abstract:

This paper develops a methodology that takes into account the human factor extracted from the databases used by recommender systems, and that makes it possible to address the specific problems of prediction and recommendation. We propose to extract each user's scale of human values from the user database in order to improve suitability in open environments such as recommender systems. The methodology is applied to the user's data after interaction with the system, and is illustrated with a case study.

Relevance: 70.00%

Abstract:

The Division of Criminal and Juvenile Justice Planning (CJJP) recently released its study of Iowa's six adult drug courts, all of which are administered by community corrections agencies. Making heavy use of the DOC's ICON database, CJJP examined completion rates, recidivism, and substance abuse treatment. CJJP also compared drug court results with those of a group of offenders who were screened for drug court in 2003 but declined or were rejected (referred), and with a sample of offenders starting probation in 2003 (probationers). CJJP tracked the offenders for approximately three years.

Relevance: 70.00%

Abstract:

A large percentage of bridges in the state of Iowa are classified as structurally or functionally deficient. These bridges annually compete for a share of Iowa's limited transportation budget. To avoid an increase in the number of deficient bridges, the state of Iowa decided to implement a comprehensive Bridge Management System (BMS) and selected the Pontis BMS software as a bridge management tool. This program will be used to provide a selection of maintenance, repair, and replacement strategies for the bridge networks to achieve an efficient and possibly optimal allocation of resources. The Pontis BMS software uses a new rating system to evaluate extensive and detailed inspection data gathered for all bridge elements. Collecting these data manually would be highly time-consuming. The objective of this work was to develop an automated, computerized methodology for an integrated database that includes the rating conditions as defined in the Pontis program. Several available techniques for capturing inspection data were reviewed, and the most suitable method was selected. To accomplish the objectives of this work, two user-friendly programs were developed. One program is used in the field to collect inspection data following a step-by-step procedure, without the need to refer to the Pontis user's manuals. The other program is used in the office to read the inspection data and prepare input files for the Pontis BMS software. These two programs require users to have very limited knowledge of computers. On-line help screens, as well as options for preparing, viewing, and printing inspection reports, are also available. The data collection software will improve and expedite the process of conducting bridge inspections and preparing the required input files for the Pontis program. In addition, it will eliminate the need for large storage areas and will simplify retrieval of inspection data. Furthermore, the approach developed herein will facilitate the electronic transfer of these captured data between offices within the Iowa DOT and across the state.
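
The abstract does not describe the programs' internals; the following is a purely hypothetical Python sketch of the two-step workflow it outlines (file formats, field names, and the bridge identifier are invented; no real Pontis input format is used):

```python
# Hypothetical sketch of the two-program workflow: a field step that
# records element-level inspection data, and an office step that turns
# those records into a flat input table for the management software.
import csv
import json

def collect_inspection(bridge_id, element_conditions, path):
    """Field step: store one bridge's element condition ratings."""
    with open(path, "w") as f:
        json.dump({"bridge": bridge_id, "elements": element_conditions}, f)

def prepare_input_file(field_path, out_path):
    """Office step: convert field records into a flat input table."""
    with open(field_path) as f:
        record = json.load(f)
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["bridge", "element", "condition_state"])
        for element, state in record["elements"].items():
            writer.writerow([record["bridge"], element, state])

collect_inspection("IA-0042", {"deck": 2, "girder": 3}, "field.json")
prepare_input_file("field.json", "pontis_input.csv")
```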

Relevance: 70.00%

Abstract:

The purpose of this project was to determine the feasibility of using pavement condition data collected for the Iowa Pavement Management Program (IPMP) as input to the Iowa Quadrennial Need Study. The need study, conducted by the Iowa Department of Transportation (Iowa DOT) every four years, currently uses manually collected highway infrastructure condition data (roughness, rutting, cracking, etc.). Because of the Iowa DOT's 10-year data collection cycles, condition data for a given highway segment may be up to 10 years old. In some cases, the need study process has resulted in wide fluctuations in funding allocated to individual Iowa counties from one study to the next. This volatility in funding levels makes it difficult for county engineers to plan and program road maintenance and improvements. One possible remedy is to input more current and less subjective infrastructure condition data. The IPMP was initially developed to satisfy the Intermodal Surface Transportation Efficiency Act (ISTEA) requirement that federal-aid-eligible highways be managed through a pavement management system. Currently all metropolitan planning organizations (MPOs) in Iowa and 15 of Iowa's 18 regional planning affiliations (RPAs) participate in the IPMP. The core of this program is a statewide database of pavement condition and construction history information. The pavement data are collected by machine in two-year cycles. Using pilot areas, researchers examined the implications of using the automated data collected for the IPMP as input to the need study computer program, HWYNEEDS. The results show that using the IPMP automated data in HWYNEEDS is feasible and beneficial, resulting in less volatility in the level of total need between successive quadrennial need studies. In other words, the more current the data, the smaller the shift in total need.

Relevance: 70.00%

Abstract:

Data mining is one of the most active research areas today, with a wide variety of applications in everyday life. It is concerned with finding interesting hidden patterns in a large historical database. For example, from a sales database one can discover a pattern such as "people who buy magazines tend to buy newspapers as well"; from the sales point of view, the advantage is that these items can be placed together in the shop to increase sales. In this research work, data mining is applied to the domain of placement chance prediction, since making a wise career decision is crucial for everyone. In India, technical manpower analysis is carried out by the National Technical Manpower Information System (NTMIS), established in 1983-84 by India's Ministry of Education & Culture. The NTMIS comprises a lead centre in the IAMR, New Delhi, and 21 nodal centres located in different parts of the country. The Kerala State Nodal Centre is located at Cochin University of Science and Technology. The nodal centre collects placement information by regularly sending postal questionnaires to graduated students, and from this raw data a historical database was prepared. Each record in this database includes entrance rank range, reservation, sector, sex, and engineering branch; for each such combination of attributes, the corresponding placement chance is computed and stored in the historical database. From this data, various popular data mining models are built and tested. These models can be used to predict the most suitable branch for a new student matching one of the above combinations of criteria, and a detailed performance comparison of the various data mining models is carried out. This research work proposes to use a combination of data mining models, namely a hybrid stacking ensemble, for better predictions. Strategies to predict the overall absorption rate for the various branches, as well as the time it takes for all students of a particular branch to be placed, are also proposed. Finally, this research work puts forward a new data mining algorithm, C4.5*stat, for numeric data sets, which has been shown to achieve competitive accuracy on the standard UCI benchmark data sets. It also proposes an optimization strategy called parameter tuning to improve the standard C4.5 algorithm. In summary, this research work passes through all four dimensions of a typical data mining research project: application to a domain, development of classifier models, optimization, and ensemble methods.
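
The thesis's exact models are not given in the abstract; the following is a minimal Python sketch of a hybrid stacking ensemble in the same spirit (the scikit-learn library, the choice of base learners, and the synthetic stand-in data are all assumptions, not the thesis's actual setup):

```python
# Hypothetical sketch of a stacking ensemble for placement-chance
# prediction; synthetic data stands in for the NTMIS placement history.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Five features mirror the attributes named in the abstract
# (rank range, reservation, sector, sex, branch).
X, y = make_classification(n_samples=1000, n_features=5, n_informative=4,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Heterogeneous base learners whose predictions are combined ("stacked")
# by a meta-learner, the defining trait of a hybrid stacking ensemble.
base_learners = [
    ("tree", DecisionTreeClassifier(criterion="entropy")),  # C4.5-like
    ("forest", RandomForestClassifier(n_estimators=100)),
    ("nb", GaussianNB()),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression())
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))
```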

Relevance: 70.00%

Abstract:

A portal was developed for the Red de Investigación Educativa en Sonora (Educational Research Network of Sonora), together with a database containing the educational research carried out in the state of Sonora, Mexico, following the suggestions of regional experts in educational research and of others specializing in GNU and other free programming languages and systems. The portal consists of a main page programmed in PHP, containing several sections on secondary pages; most importantly, a database was programmed in MySQL, which, like PHP, is free for non-commercial academic use. This database already contains, and will in future store more, educational research and articles produced in Sonora, Mexico. Hosting for the system was contracted on an inexpensive server that nonetheless offers sufficient capacity.
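
The abstract names PHP and MySQL but gives no schema; purely as an illustration of the kind of articles table such a portal might define (the table and column names are assumptions, and Python's built-in sqlite3 stands in for MySQL so the sketch is self-contained):

```python
# Illustrative only: a research-article table like the one the portal's
# MySQL database might hold. Names are invented; sqlite3 is used so the
# sketch runs without a MySQL server.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE investigaciones (
        id      INTEGER PRIMARY KEY,
        titulo  TEXT NOT NULL,   -- article title
        autores TEXT NOT NULL,   -- author list
        anio    INTEGER,         -- publication year
        resumen TEXT             -- abstract
    )
""")
conn.execute(
    "INSERT INTO investigaciones (titulo, autores, anio, resumen) "
    "VALUES (?, ?, ?, ?)",
    ("Ejemplo", "A. Autor", 2008, "Resumen de ejemplo."),
)
for row in conn.execute("SELECT titulo, anio FROM investigaciones"):
    print(row)
```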

Relevance: 70.00%

Abstract:

This presentation describes the experience of the Centro de Documentación e Información Educativa (CEDIE) of the Dominican Republic in developing the database of analytical abstracts (Resúmenes Analíticos, RAE) on Dominican education, and how this work has benefited the Dominican education sector.

Relevance: 70.00%

Abstract:

This document describes Colombian scientific production in the disciplines of social medicine and basic medicine as indexed in the Thompson ISI database for the period 1975 to 2005. The characterization of social medicine shows a low output of high-impact international articles belonging strictly to the discipline, since the most cited articles were always classified under more than one discipline; the opposite holds for basic medicine. Moreover, articles in social medicine are highly concentrated in public health journals, which account for a significant majority within the discipline. In both disciplines, citations are highly concentrated in a few articles, more so in social medicine, reflecting a lower average impact of publications in that discipline. Finally, the most cited documents in social medicine are characterized by interdisciplinarity, international collaboration, interaction between public and private institutions, and unconventional methods.

Relevance: 70.00%

Abstract:

Variations in lake area and depth reflect climatically induced changes in the water balance of overflowing as well as closed lakes. A new global database of lake status has been assembled and is used to compare two simulations for 6 ka (6000 yr ago) made with successive R15 versions of the NCAR Community Climate Model (CCM). Simulated water balance was expressed as anomalies of annual precipitation minus evaporation (P-E); observed water balance, as anomalies of lake status. Comparisons were made visually, by comparing regional averages, and by a statistic that compares the signs of simulated P-E anomalies (smoothly interpolated to the lake sites) with the status anomalies. Both CCM0 and CCM1 showed enhanced Northern-Hemisphere monsoons at 6 ka. Both underestimated the effect, but CCM1 fitted the spatial patterns better. In the northern mid- and high latitudes the two versions differed more, and fitted the data less satisfactorily. CCM1 performed better than CCM0 in North America and central Eurasia, but not in Europe. Both models (especially CCM0) simulated excessive aridity in interior Eurasia. The models were systematically wrong in the southern mid-latitudes. Problems may have been caused by inadequate treatment of changes in sea-surface conditions in both models. Palaeolake status data will continue to provide a benchmark for the evaluation of modelling improvements.
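
The sign-comparison statistic itself is not specified in the abstract; a minimal sketch of the underlying idea, assuming the model P-E anomalies have already been interpolated to the lake sites (function names and toy values are invented):

```python
# Illustrative sketch: fraction of lake sites where the sign of the
# simulated P-E anomaly matches the sign of the observed lake-status
# anomaly. The paper's actual statistic may differ in detail.
import numpy as np

def sign_agreement(pe_anomaly_at_sites, lake_status_anomaly):
    """Both arrays hold one value per lake site."""
    same_sign = np.sign(pe_anomaly_at_sites) == np.sign(lake_status_anomaly)
    return same_sign.mean()

# Toy data: positive = wetter than present, negative = drier.
pe = np.array([0.8, -0.3, 0.1, -0.6, 0.4])
lakes = np.array([1.0, -1.0, -1.0, -1.0, 1.0])
print(f"sign agreement: {sign_agreement(pe, lakes):.2f}")  # 0.80
```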

Relevance: 70.00%

Abstract:

Land cover plays a key role in global to regional monitoring and modeling because it affects and is affected by climate change, and it has thus become one of the essential variables for climate change studies. National and international organizations require timely and accurate land cover information for reporting and management actions. The North American Land Change Monitoring System (NALCMS) is an international cooperation of organizations and entities of Canada, the United States, and Mexico to map land cover change in North America's changing environment. This paper presents the methodology used to derive the land cover map of Mexico for the year 2005, which was integrated into the NALCMS continental map. The map is based on a time series of 250 m Moderate Resolution Imaging Spectroradiometer (MODIS) data and an extensive sample database; the complexity of the Mexican landscape required a specific approach to reflect land cover heterogeneity. To estimate the proportion of each land cover class for every pixel, several decision tree classifications were combined to obtain class membership maps, which were finally converted to a discrete map accompanied by a confidence estimate. The map yielded an overall accuracy of 82.5% (Kappa of 0.79) for pixels with at least 50% map confidence (71.3% of the data). An additional assessment with 780 randomly stratified samples, using primary and alternative calls in the reference data to account for ambiguity, indicated 83.4% overall accuracy (Kappa of 0.80). A comparison between the land cover maps of 2005 and 2006 showed high agreement: 83.6% for all pixels and 92.6% for pixels with a map confidence of more than 50%. Further wall-to-wall comparisons with related land cover maps yielded 56.6% agreement with the MODIS land cover product and 49.5% congruence with Globcover.
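
The abstract outlines, but does not detail, how the tree classifications are combined; purely as an illustration (array sizes, the number of trees, and the confidence rule are assumptions, not the NALCMS production method), a Python sketch of averaging per-class memberships and deriving a discrete map with a confidence estimate:

```python
# Illustrative sketch: average the per-class membership estimates of
# several decision trees, then take the winning class per pixel and
# keep the winning proportion as a simple confidence estimate.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_pixels, n_bands, n_classes = 500, 7, 4        # invented sizes
X = rng.normal(size=(n_pixels, n_bands))        # stand-in MODIS features
y = rng.integers(0, n_classes, size=n_pixels)   # stand-in training labels

# Train several trees on bootstrap samples and average their class
# probabilities to obtain per-pixel class membership.
memberships = np.zeros((n_pixels, n_classes))
for seed in range(10):
    idx = rng.integers(0, n_pixels, size=n_pixels)
    tree = DecisionTreeClassifier(max_depth=8, random_state=seed)
    tree.fit(X[idx], y[idx])
    memberships += tree.predict_proba(X)
memberships /= 10

discrete_map = memberships.argmax(axis=1)       # hard class label
confidence = memberships.max(axis=1)            # winning proportion
print((confidence >= 0.5).mean())               # share of confident pixels
```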

Relevance: 70.00%

Abstract:

We present a catalogue of galaxy photometric redshifts and k-corrections for the Sloan Digital Sky Survey Data Release 7 (SDSS-DR7), available on the World Wide Web. The photometric redshifts were estimated with an artificial neural network using the five ugriz bands, concentration indices, and Petrosian radii in the g and r bands. We explored our redshift estimates with different training sets, concluding that the best choice for improving redshift accuracy comprises the main galaxy sample (MGS), the luminous red galaxies, and the galaxies of active galactic nuclei covering the redshift range 0 < z < 0.3. For the MGS, the photometric redshift estimates agree with the spectroscopic values within rms = 0.0227. The distribution of photometric redshifts derived in the range 0 < z(phot) < 0.6 agrees well with the model predictions. k-corrections were derived by calibrating the k-correct_v4.2 code results for the MGS with the reference-frame (z = 0.1) (g - r) colours. We adopt a linear dependence of k-corrections on redshift and (g - r) colour that provides suitable distributions of luminosity and colour for galaxies up to redshift z(phot) = 0.6, comparable to the results in the literature. Our k-correction procedure is thus a powerful, computationally cheap algorithm capable of reproducing suitable results that can be used for testing galaxy properties at intermediate redshifts using the large SDSS database.
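
As a hedged illustration of the kind of network regression the abstract describes (the feature layout, network size, and synthetic data are assumptions; this is not the paper's actual pipeline or training set):

```python
# Illustrative sketch: a small neural-network photometric-redshift
# estimator in the spirit of the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 2000
# Assumed inputs per the abstract: five ugriz magnitudes, plus
# concentration indices and Petrosian radii in g and r (9 features).
X = rng.normal(size=(n, 9))
z_spec = rng.uniform(0.0, 0.3, size=n)   # stand-in spectroscopic redshifts

X_tr, X_te, z_tr, z_te = train_test_split(X, z_spec, random_state=1)
ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=1)
ann.fit(X_tr, z_tr)
z_phot = ann.predict(X_te)
print("rms error:", np.sqrt(np.mean((z_phot - z_te) ** 2)))
```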

Relevance: 70.00%

Abstract:

Data mining is a relatively new field of research whose objective is to acquire knowledge from large amounts of data. In medical and health care areas, a large amount of data is becoming available, due both to regulations and to the availability of computers [27]. Practitioners are expected to use all of this data in their work, yet such a large amount of data cannot be processed by humans in a short time to make diagnoses, prognoses, and treatment schedules. A major objective of this thesis is to evaluate data mining tools in medical and health care applications with a view to developing a tool that can help make reasonably accurate decisions. Specifically, the goal is to find a pattern among patients who contracted pneumonia by clustering lab values that were recorded every day. Using this pattern, we can generalize to patients who have not been diagnosed with this disease but whose lab values show the same trend as those of pneumonia patients. Ten tables were extracted for this work from a large database of a hospital in Jena, whose ICU (intensive care unit) uses COPRA, a patient management system. All tables and data are stored in a German-language database.
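
As an illustration only of the clustering idea (the summary features, cluster count, and synthetic lab values are assumptions, not the thesis's actual variables):

```python
# Illustrative sketch: cluster patients by simple summary features of
# their daily lab values (mean level and trend over time).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_patients, n_days = 60, 10
labs = rng.normal(loc=7.0, scale=1.0, size=(n_patients, n_days))

days = np.arange(n_days)
slopes = np.polyfit(days, labs.T, deg=1)[0]     # per-patient trend
features = np.column_stack([labs.mean(axis=1), slopes])

clusters = KMeans(n_clusters=3, n_init=10, random_state=2).fit_predict(features)
print(np.bincount(clusters))                    # cluster sizes
```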

Relevance: 70.00%

Abstract:

The need for a program of work focused on the nuclear data evaluation of charged-particle reactions has arisen recently due to their increasing use in cancer therapy. This project, as part of that program, has as its main goal the selection and comparison of nuclear data for nuclear reactions induced by protons at low to intermediate energies (E < 250 MeV). The selection methodology was based on the EXFOR database and on the compilations of radionuclide production cross sections by N. Sobolevsky. For the purpose of comparison and evaluation, theoretical calculations with the reaction model code EMPIRE-II are being used. © 2009 American Institute of Physics.
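
As a toy illustration of comparing measured and calculated cross sections (all numbers below are invented; neither EXFOR records nor EMPIRE-II output are used):

```python
# Illustrative sketch: score a model excitation function against
# experimental points with a simple chi-square per point.
import numpy as np

energy = np.array([10.0, 30.0, 60.0, 100.0, 250.0])        # MeV
sigma_exp = np.array([120.0, 310.0, 240.0, 150.0, 90.0])   # mb, invented
d_sigma = np.array([12.0, 25.0, 20.0, 15.0, 10.0])         # uncertainties
sigma_model = np.array([110.0, 330.0, 225.0, 160.0, 85.0]) # invented

chi2 = np.sum(((sigma_exp - sigma_model) / d_sigma) ** 2)
print(f"chi^2 per point: {chi2 / len(energy):.2f}")
```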

Relevance: 70.00%

Abstract:

The objective of this research is to analyze the scientific production on metric studies. The BRAPCI database (a database of articles and journals in Information Science) was used to identify the most productive authors and institutions. As a research procedure, articles published between 1991 and 2011 were analyzed using the keywords that represent the area of metric studies: bibliometrics, scientometrics, informetrics, webometrics, and patentometrics. The variables under study were analyzed, and the collaboration network was built with the Pajek software. From these results, it was possible to outline the tendencies presented by the scientific community on this subject in the BRAPCI database.
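
Purely as an illustrative sketch of the collaboration-network construction (networkx stands in for Pajek here, and the author names are invented):

```python
# Illustrative sketch: build a small weighted co-authorship network of
# the kind the study visualized in Pajek.
import itertools
import networkx as nx

papers = [
    ["Silva", "Souza"],
    ["Silva", "Oliveira", "Souza"],
    ["Oliveira", "Costa"],
]

G = nx.Graph()
for authors in papers:
    # Each pair of co-authors on a paper gets (or strengthens) an edge.
    for a, b in itertools.combinations(authors, 2):
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

print(sorted(G.degree, key=lambda kv: -kv[1]))  # most connected authors
```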