930 results for Input-output data


Relevance:

30.00%

Publisher:

Abstract:

The purpose of the Iowa TOPSpro Data Dictionary is to provide a statewide, standardized set of instructions and definitions for coding Tracking Of Programs And Students (TOPSpro) forms and for effectively utilizing the TOPSpro software. This document is designed to serve as a companion to the TOPS Technical Manual produced by the Comprehensive Adult Student Assessment System (CASAS). The data dictionary integrates information from various data systems to provide uniform data sets and definitions that meet local, state, and federal reporting mandates. The sources for the data dictionary are: (1) the National Reporting System (NRS) Guidelines, (2) standard practices utilized in Iowa's adult literacy program, (3) selected definitions from the Workforce Investment Act of 1998, (4) input from state-level Management Information System (MIS) personnel, and (5) selected definitions from other Iowa state agencies.

Relevance:

30.00%

Publisher:

Abstract:

This paper estimates a translog stochastic frontier production function for a panel of 150 mixed Catalan farms over the period 1989-1993, in order to measure and explain variation in technical inefficiency scores with a one-stage approach. The model uses gross value added as the aggregate output measure. Total employment, fixed capital, current assets, specific costs, and overhead costs are introduced into the model as inputs. Stochastic frontier estimates are compared with those obtained using a linear programming method with a two-stage approach. The translog stochastic frontier specification appears to be an appropriate representation of the data: technical change was rejected and the technical inefficiency effects were statistically significant. The mean technical efficiency over the period analyzed was estimated at 64.0%. Farm inefficiency levels were found to be significantly (at the 5% level) and positively correlated with the number of economic size units.
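For reference, a hedged sketch of the general one-stage translog stochastic frontier form (the notation is illustrative; the paper's exact specification with its five inputs is not reproduced here):

```latex
% Translog stochastic frontier (sketch): x_k inputs, v_it noise, u_it >= 0 inefficiency.
\ln y_{it} = \beta_0 + \sum_k \beta_k \ln x_{kit}
  + \tfrac{1}{2} \sum_k \sum_l \beta_{kl} \ln x_{kit} \ln x_{lit}
  + v_{it} - u_{it},
\qquad \mathrm{TE}_{it} = \exp(-u_{it}).
```

The one-stage approach estimates the frontier parameters and the determinants of the inefficiency term u jointly, rather than regressing efficiency scores on explanatory variables in a separate second step.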

Relevance:

30.00%

Publisher:

Abstract:

This paper examines factors explaining subcontracting decisions in the construction industry. Rather than the more common cross-sectional analyses, we use panel data to evaluate the influence of all relevant variables. We design and use a new index of closeness to small-numbers situations to estimate the extent of hold-up problems. Results show that as specificity grows, firms tend to subcontract less; the opposite happens when output heterogeneity and the use of intangible assets and capabilities increase. Neither temporary shortage of capacity nor geographical dispersion of activities seems to affect the extent of subcontracting. Finally, proxies for uncertainty do not show any clear effect.

Relevance:

30.00%

Publisher:

Abstract:

In recent years there has been explosive growth in the development of adaptive, data-driven methods. One efficient, data-driven approach is based on statistical learning theory (SLT) (Vapnik 1998). The theory rests on the Structural Risk Minimisation (SRM) principle and has a solid statistical background. When applying SRM we try not only to reduce the training error, i.e. to fit the available data with a model, but also to reduce the complexity of the model and thereby the generalisation error. Many nonlinear learning procedures recently developed in neural networks and statistics can be understood and interpreted in terms of the structural risk minimisation inductive principle. A recent methodology based on SRM is Support Vector Machines (SVMs). At present SLT is still under intensive development and SVMs are finding new areas of application (www.kernel-machines.org). SVMs produce robust, nonlinear data models with excellent generalisation abilities, which is very important for both monitoring and forecasting. SVMs perform extremely well when the input space is high-dimensional and the training data set is not big enough to develop a corresponding nonlinear model. Moreover, SVMs use only support vectors to derive decision boundaries, which opens a way to sampling optimisation, estimation of noise in data, quantification of data redundancy, etc. A presentation of SVMs for spatially distributed data is given in (Kanevski and Maignan 2004).
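As an illustration only (the text names no implementation beyond pointing to www.kernel-machines.org), a minimal scikit-learn sketch of a nonlinear SVM classifier and the support vectors that determine its decision boundary; the library choice and all parameters are assumptions:

```python
# Minimal SVM sketch with scikit-learn (illustrative; not from the source).
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Synthetic, high-dimensional data with few samples: the regime the text highlights.
X, y = make_classification(n_samples=80, n_features=50, n_informative=10, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # nonlinear (RBF) decision boundary
clf.fit(X, y)

# Only the support vectors determine the decision boundary.
print("support vectors:", clf.support_vectors_.shape[0], "of", X.shape[0], "samples")
```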

Relevance:

30.00%

Publisher:

Abstract:

This contribution introduces Data Envelopment Analysis (DEA), a performance measurement technique. DEA helps decision makers in the following ways: (1) by calculating an efficiency score, it indicates whether a firm is efficient or has capacity for improvement; (2) by setting target values for inputs and outputs, it calculates by how much input must be decreased or output increased in order to become efficient; (3) by identifying the nature of returns to scale, it indicates whether a firm has to decrease or increase its scale (or size) in order to minimise average total cost; (4) by identifying a set of benchmarks, it specifies which other firms' processes should be analysed in order for a firm to improve its own practices. This contribution presents the essentials of DEA, alongside a case study that gives an intuitive understanding of its application. It also introduces Win4DEAP, a software package that conducts efficiency analysis based on the DEA methodology. The methodological background of DEA is presented for more demanding readers. Finally, four advanced topics of DEA are treated: adjustment to the environment, preferences, sensitivity analysis, and time series data.
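As a hedged illustration (Win4DEAP itself is not reproduced here), a minimal sketch of the linear program behind an input-oriented, constant-returns-to-scale DEA efficiency score, computed with SciPy; the data and function names are invented:

```python
# Input-oriented CRS (CCR) DEA efficiency via linear programming (illustrative sketch).
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Efficiency of unit o. X: (m inputs x n units), Y: (s outputs x n units)."""
    m, n = X.shape
    s = Y.shape[0]
    # Variables z = [theta, lambda_1..lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # sum_j lambda_j * x_ij <= theta * x_io  ->  -x_io*theta + X_i . lambda <= 0
    A_in = np.c_[-X[:, [o]], X]
    # sum_j lambda_j * y_rj >= y_ro          ->  -Y_r . lambda <= -y_ro
    A_out = np.c_[np.zeros((s, 1)), -Y]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n  # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun  # efficiency score in (0, 1]

X = np.array([[2.0, 3.0, 5.0], [4.0, 2.0, 6.0]])  # two inputs, three firms (made up)
Y = np.array([[1.0, 1.0, 1.0]])                    # one output
print([round(dea_efficiency(X, Y, o), 3) for o in range(3)])
```

A score of 1 marks an efficient unit; a score below 1 gives the proportional input reduction needed to reach the frontier, which is how DEA derives the target values mentioned under (2).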

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Finding genes that are differentially expressed between conditions is an integral part of understanding the molecular basis of phenotypic variation. In the past decades, DNA microarrays have been used extensively to quantify the abundance of mRNA corresponding to different genes, and more recently high-throughput sequencing of cDNA (RNA-seq) has emerged as a powerful competitor. As the cost of sequencing decreases, it is conceivable that the use of RNA-seq for differential expression analysis will increase rapidly. To exploit the possibilities and address the challenges posed by this relatively new type of data, a number of software packages have been developed especially for differential expression analysis of RNA-seq data. RESULTS: We conducted an extensive comparison of eleven methods for differential expression analysis of RNA-seq data. All methods are freely available within the R framework and take as input a matrix of counts, i.e. the number of reads mapping to each genomic feature of interest in each of a number of samples. We evaluate the methods based on both simulated data and real RNA-seq data. CONCLUSIONS: Very small sample sizes, which are still common in RNA-seq experiments, impose problems for all evaluated methods and any results obtained under such conditions should be interpreted with caution. For larger sample sizes, the methods combining a variance-stabilizing transformation with the 'limma' method for differential expression analysis perform well under many different conditions, as does the nonparametric SAMseq method.
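For orientation only (the compared packages are R tools such as 'limma' and SAMseq, which are not reproduced here), a minimal Python sketch of the shared input format: a counts matrix with one row per genomic feature and one column per sample, filled here with simulated negative-binomial counts, a common distributional choice for RNA-seq read counts:

```python
# Schematic of the count-matrix input shared by the compared methods (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_per_group = 1000, 3  # very small sample sizes are common, per the text

# Simulated read counts: rows = genomic features (genes), columns = samples.
counts = rng.negative_binomial(n=5, p=0.1, size=(n_genes, 2 * n_per_group))
group = np.array([0] * n_per_group + [1] * n_per_group)  # two conditions

print(counts.shape, "counts matrix; group labels:", group)
```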

Relevance:

30.00%

Publisher:

Abstract:

Measuring school efficiency is a challenging task. First, a performance measurement technique has to be selected. Within Data Envelopment Analysis (DEA), one such technique, alternative models have been developed in order to deal with environmental variables; the majority of these models lead to diverging results. Second, the choice of input and output variables to be included in the efficiency analysis is often dictated by data availability, and the choice of variables remains an issue even when data is available. As a result, the choice of technique, model and variables is probably, and ultimately, a political judgement. Multi-criteria decision analysis methods can help decision makers to select the most suitable model. The number of selection criteria should remain parsimonious and not be oriented towards the results of the models, in order to avoid opportunistic behaviour. The selection criteria should also be backed by the literature or by an expert group. Once the most suitable model is identified, the principle of permanence of methods should be applied in order to avoid a change of practices over time. Within DEA, the two-stage model developed by Ray (1991) is the most convincing model allowing for environmental adjustment. In this model, an efficiency analysis is conducted with DEA, followed by an econometric analysis to explain the efficiency scores. An environmental variable of particular interest, tested in this thesis, is whether a school operates on multiple sites. Results show that being located on more than one site has a negative influence on efficiency. A likely way to offset this negative influence would be to improve the use of ICT in school management and teaching. The planning of new schools should also consider the advantages of a single site, which allows a critical size in terms of pupils and teachers to be reached. The fact that underprivileged pupils perform worse than privileged pupils has been public knowledge since Coleman et al. (1966); as a result, underprivileged pupils have a negative influence on school efficiency. This is confirmed by this thesis for the first time in Switzerland. Several countries have developed priority education policies in order to compensate for the negative impact of disadvantaged socioeconomic status on school performance. These policies have failed. As a result, other actions need to be taken. In order to define these actions, one has to identify the social-class differences which explain why disadvantaged children underperform. Childrearing and literacy practices, health characteristics, housing stability and economic security all influence pupil achievement. Rather than allocating more resources to schools, policymakers should therefore focus on related social policies. For instance, they could define pre-school, family, health, housing and benefits policies in order to improve the conditions of disadvantaged children.
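As an illustrative sketch of the two-stage logic of Ray (1991), not of the thesis's actual estimation: first-stage DEA efficiency scores (computed, for instance, as in the DEA sketch above) are regressed on environmental variables such as a multi-site dummy. All data and variable names below are invented, chosen only to reproduce the signs of the effects reported in the thesis:

```python
# Second-stage regression of DEA efficiency scores on environmental variables (sketch).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
multi_site = rng.integers(0, 2, n)        # 1 = school operates on several sites
underprivileged = rng.uniform(0, 1, n)    # share of underprivileged pupils
# Invented scores encoding the reported signs: both variables reduce efficiency.
scores = 0.9 - 0.05 * multi_site - 0.2 * underprivileged + rng.normal(0, 0.05, n)

X = sm.add_constant(np.column_stack([multi_site, underprivileged]))
fit = sm.OLS(scores, X).fit()
print(fit.params)  # negative coefficients indicate a negative influence on efficiency
```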

Relevance:

30.00%

Publisher:

Abstract:

This work is concerned with the development and application of novel unsupervised learning methods, with two target applications in mind: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and applied to the problem of forensic case data. This distance is optimized using a loss function related to the preservation of neighborhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering, which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as very large problems.
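A minimal sketch of the first idea only: spectral clustering driven by a parameterised inter-sample distance. The genetic-programming optimisation of that distance is not reproduced; the fixed feature weights below merely stand in for an optimised distance, and all data are synthetic:

```python
# Spectral clustering with a custom (here: weighted Euclidean) distance (sketch).
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=3, n_features=4, random_state=0)

w = np.array([1.0, 1.0, 0.2, 0.2])         # stand-in for a learned/optimised distance
D = cdist(X * np.sqrt(w), X * np.sqrt(w))  # weighted Euclidean distances
A = np.exp(-D**2 / (2 * D.mean()**2))      # affinity matrix derived from the distance

labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(A)
print(np.bincount(labels))
```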

Relevance:

30.00%

Publisher:

Abstract:

A large percentage of bridges in the state of Iowa are classified as structurally or functionally deficient. These bridges annually compete for a share of Iowa's limited transportation budget. To avoid an increase in the number of deficient bridges, the state of Iowa decided to implement a comprehensive Bridge Management System (BMS) and selected the Pontis BMS software as a bridge management tool. This program will be used to provide a selection of maintenance, repair, and replacement strategies for the bridge networks to achieve an efficient and possibly optimal allocation of resources. The Pontis BMS software uses a new rating system to evaluate extensive and detailed inspection data gathered for all bridge elements. Manually collecting these data would be a highly time-consuming job. The objective of this work was to develop an automated, computerized methodology for an integrated database that includes the rating conditions as defined in the Pontis program. Several of the available techniques for capturing inspection data were reviewed, and the most suitable method was selected. To accomplish the objectives of this work, two user-friendly programs were developed. One program is used in the field to collect inspection data following a step-by-step procedure, without the need to refer to the Pontis user's manuals. The other program is used in the office to read the inspection data and prepare input files for the Pontis BMS software. These two programs require users to have only very limited knowledge of computers. On-line help screens, as well as options for preparing, viewing, and printing inspection reports, are also available. The developed data collection software will improve and expedite the process of conducting bridge inspections and preparing the required input files for the Pontis program. In addition, it will eliminate the need for large storage areas and will simplify the retrieval of inspection data. Furthermore, the approach developed herein will facilitate transferring these captured data electronically between offices within the Iowa DOT and across the state.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this study is to systematically evaluate the Iowa Department of Transportation's (DOT's) existing Pavement Management Information System (PMIS) with respect to the input information required for Mechanistic-Empirical Pavement Design Guide (MEPDG) rehabilitation analysis and design. To accomplish this objective, all available PMIS data for interstate and primary roads in Iowa were retrieved from the Iowa DOT PMIS. The retrieved data were evaluated with respect to the input requirements and outputs of the latest version of the MEPDG software (version 1.0). The input parameters that are required for MEPDG HMA rehabilitation design but currently unavailable in the Iowa DOT PMIS were identified. Differences between the Iowa DOT PMIS and the MEPDG in the measurement metrics and units used for some of the pavement performance measures were also identified and discussed. Based on the results of this study, it is recommended that the Iowa DOT PMIS be updated, if possible, to include the identified parameters that are currently unavailable but required for MEPDG rehabilitation design. Similarly, the measurement units of distress survey results in the Iowa DOT PMIS should be revised to correspond to those of the MEPDG performance predictions.

Relevance:

30.00%

Publisher:

Abstract:

Isotopic analyses on bulk carbonates are considered a useful tool for palaeoclimatic reconstruction, assuming that calcite precipitation occurs at oxygen isotope equilibrium with local water and that detrital carbonate input is absent or insignificant. We present results from Lake Neuchâtel (western Switzerland) that demonstrate equilibrium precipitation of calcite, except during high-productivity periods, and the presence of detrital and resuspended calcite. Mineralogy, geochemistry and stable isotope values of Lake Neuchâtel trap sediments and of suspended matter in adjacent rivers were studied. The mineralogy of suspended matter in the major inflowing rivers documents an important contribution of detrital carbonates, predominantly calcite with minor amounts of dolomite and ankerite. Using mineralogical data, the quantity of allochthonous calcite can be estimated by comparing the ratio (ankerite + dolomite)/(calcite + ankerite + dolomite) in the inflowing rivers and in the traps. Material taken from sediment traps shows an evolution from practically pure endogenic calcite in summer (10-20% detrital material) to higher percentages of detrital material in winter (up to 20-40%). Reflecting these mineralogical variations, δ13C and δ18O values of calcite from sediment traps are more negative in summer than in winter. Since no significant variations in the isotopic composition of lake water were detected over one year, the factors controlling the oxygen isotopic composition of calcite in sediment traps are the precipitation temperature and the percentage of resuspended and detrital calcite. Samples taken close to the river inflow generally have higher δ values than the others, confirming detrital influence. SEM and isotopic studies on different size fractions (<2, 2-6, 6-20, 20-60, >60 µm) of winter and summer samples allowed the recognition of resuspension and the separation of new endogenic calcite from detrital calcite. The fractions >60 and <2 µm have the highest percentage of detritus; the fractions 2-6 and 6-20 µm are typical of the new endogenic calcite in summer, as given by calculations assuming isotopic equilibrium with local water. In winter these fractions show values similar to those in summer, indicating resuspension. Using the isotopic composition of sediment-trap material and of different size fractions, as well as the isotopic composition of lake water, water temperature measurements and mineralogy, we re-evaluated the potential of bulk carbonate for palaeoclimatic reconstruction in the presence of detrital and resuspended calcite. This re-evaluation leads to the following conclusions: (1) the endogenic signal can be amplified by applying a particle-size separation, once the size of the endogenic calcite is known from SEM study; (2) resuspended calcite does not alter the endogenic signal, but it lowers the time resolution; (3) detrital input decreases at increasing distance from the source, and it modifies the isotopic signal only when very abundant; (4) the influence of detrital calcite on the bulk-sediment isotopic composition can be calculated.
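As a hedged reconstruction of conclusion (4) (the abstract does not spell out the formulas): if all ankerite and dolomite are detrital and the detrital carbonate carries the river ratio, the detrital fraction and its influence on the bulk isotopic signal follow from simple two-component mixing:

```latex
% Hedged mixing sketch (assumptions: dolomite/ankerite purely detrital,
% detrital carbonate carries the river ratio R_riv).
f_{\mathrm{det}} \approx \frac{R_{\mathrm{trap}}}{R_{\mathrm{riv}}},
\qquad
R = \frac{\mathrm{ank}+\mathrm{dol}}{\mathrm{cal}+\mathrm{ank}+\mathrm{dol}},
\qquad
\delta_{\mathrm{bulk}} = f_{\mathrm{det}}\,\delta_{\mathrm{det}}
  + (1-f_{\mathrm{det}})\,\delta_{\mathrm{endo}}
\;\Rightarrow\;
\delta_{\mathrm{endo}} =
  \frac{\delta_{\mathrm{bulk}} - f_{\mathrm{det}}\,\delta_{\mathrm{det}}}{1-f_{\mathrm{det}}}.
```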

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this project was to determine the feasibility of using pavement condition data collected for the Iowa Pavement Management Program (IPMP) as input to the Iowa Quadrennial Need Study. The need study, conducted by the Iowa Department of Transportation (Iowa DOT) every four years, currently uses manually collected highway infrastructure condition data (roughness, rutting, cracking, etc.). Because of the Iowa DOT's 10-year data collection cycles, condition data for a given highway segment may be up to 10 years old. In some cases, the need study process has resulted in wide fluctuations in the funding allocated to individual Iowa counties from one study to the next. This volatility in funding levels makes it difficult for county engineers to plan and program road maintenance and improvements. One possible remedy is to input more current and less subjective infrastructure condition data. The IPMP was initially developed to satisfy the Intermodal Surface Transportation Efficiency Act (ISTEA) requirement that federal-aid-eligible highways be managed through a pavement management system. Currently all metropolitan planning organizations (MPOs) in Iowa and 15 of Iowa's 18 regional planning affiliations (RPAs) participate in the IPMP. The core of this program is a statewide database of pavement condition and construction history information. The pavement data are collected by machine in two-year cycles. Using pilot areas, researchers examined the implications of using the automated data collected for the IPMP as input to the need study computer program, HWYNEEDS. The results show that using the IPMP automated data in HWYNEEDS is feasible and beneficial, resulting in less volatility in the level of total need between successive quadrennial need studies. In other words, the more current the data, the smaller the shift in total need.

Relevance:

30.00%

Publisher:

Abstract:

This report describes the work accomplished to date on research project HR-173, A Computer Based Information System for County Equipment Cost Records, and presents the initial design for this system. The specific topics discussed here are the findings from the analysis of information needs, the system specifications developed from these findings, and the proposed system design based upon those specifications. The initial system design includes tentative input designs for capturing input data; output designs showing the output formats and the items to be output for use in decision making; a file design showing the organization of the information to be kept on each piece of equipment in the computer data file; and a general system design explaining how the entire system will operate. The Steering Committee appointed by the Iowa Highway Research Board is asked to study this report, make appropriate suggestions, and give approval to the proposed design subject to any suggestions made. This approval will permit the designer to proceed promptly with the computer program implementation phase of the design.

Relevance:

30.00%

Publisher:

Abstract:

In French the adjective petit 'small, little' has a special status: it fulfills various pragmatic functions in addition to its semantic meanings and is thus highly frequent in discourse. This study, based on data from two children aged 1;6 to 2;11, argues that petit and its pragmatic meanings play a specific role in the acquisition of French adjectives. In contrast to what is expected in child language, petit favours the early development of a noun-phrase pattern with a prenominal attributive adjective. The emergence and distribution of petit in the children's production are examined and related to its distribution in the input, and the detailed pragmatic meanings and functions of petit are analysed. Prenominal petit emerges early as the preferred and most productive adjective. Pragmatic meanings of petit appear to be predominant at this early age and are of two main types: expressions of endearment (in noun phrases) and mitigating devices whose scope is the entire utterance. These results, as well as instances of the children's pragmatic overgeneralizations, provide new evidence that at least some pragmatic meanings are acquired prior to semantic meanings in early acquisition.

Relevance:

30.00%

Publisher:

Abstract:

Biometric system performance can be improved by means of data fusion. Several kinds of information can be fused in order to obtain a more accurate classification (identification or verification) of an input sample. In this paper we present a method for computing the weights of a weighted-sum fusion of score combinations by means of a likelihood model. The maximum likelihood estimation is set up as a linear programming problem. The scores are derived from GMM classifiers, each working on a different feature extractor. Our experimental results assessed the robustness of the system against changes over time (different sessions) and against a change of microphone. The improvements obtained were significantly better (error bars of two standard deviations) than a uniform weighted sum, a uniform weighted product, or the best single classifier. The proposed method scales computationally with the number of scores to be fused, since it relies on the simplex method for linear programming.
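The paper's likelihood-based LP formulation is not spelled out in the abstract; as a stand-in with the same shape (weights for a weighted-sum score fusion obtained from a linear program solved by a simplex-type method), here is a hedged max-margin sketch in which all scores and names are invented:

```python
# Weighted-sum score fusion with weights from a linear program (illustrative stand-in).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
k = 3                                      # number of classifiers / score streams
genuine = rng.normal(0.7, 0.1, (40, k))   # invented genuine-trial scores
impostor = rng.normal(0.4, 0.1, (40, k))  # invented impostor-trial scores

# Variables z = [w_1..w_k, tau, t]: maximise margin t around threshold tau, subject to
# w.g >= tau + t (genuine), w.m <= tau - t (impostor), sum(w) = 1, w >= 0.
c = np.r_[np.zeros(k), 0.0, -1.0]          # minimise -t, i.e. maximise the margin
A_gen = np.c_[-genuine, np.ones(len(genuine)), np.ones(len(genuine))]
A_imp = np.c_[impostor, -np.ones(len(impostor)), np.ones(len(impostor))]
A_ub = np.vstack([A_gen, A_imp])
b_ub = np.zeros(len(A_ub))
A_eq = np.r_[np.ones(k), 0.0, 0.0].reshape(1, -1)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * k + [(None, None), (None, None)], method="highs")
w = res.x[:k]
print("fusion weights:", np.round(w, 3))   # fused score for a trial s: np.dot(w, s)
```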