937 results for modelli input-output programmazione lineare grafi pesati
Abstract:
An interfacing circuit for piezoresistive pressure sensors based on CMOS current conveyors is presented. The main advantages of the proposed interfacing circuit include the use of a single piezoresistor, the capability of offset compensation, and a versatile current-mode configuration, with current output and current or voltage input. Experimental tests confirm a linear relation between the output voltage and the piezoresistance variation.
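The linear output-versus-piezoresistance behaviour can be illustrated with a first-order model: forcing a constant bias current through the single piezoresistor yields a voltage linear in the resistance variation, and subtracting a fixed term mimics the offset compensation. All names and component values below are illustrative assumptions, not the circuit in the paper.

```python
def readout_voltage(delta_r, r0=1000.0, i_bias=1e-3, v_offset=1.0):
    """Idealized single-piezoresistor, current-mode readout (toy model).

    A current conveyor forces the constant bias current i_bias through the
    piezoresistor R = r0 + delta_r, so the sensed voltage is linear in
    delta_r; v_offset cancels the baseline term i_bias * r0 (offset
    compensation). Component values are illustrative, not from the paper.
    """
    return i_bias * (r0 + delta_r) - v_offset

# With these defaults the output is simply 1 mV per ohm of variation.
```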
Abstract:
Neural comparisons of bilateral sensory inputs are essential for visual depth perception and accurate localization of sounds in space. All animals, from single-cell prokaryotes to humans, orient themselves in response to environmental chemical stimuli, but the contribution of spatial integration of neural activity in olfaction remains unclear. We investigated this problem in Drosophila melanogaster larvae. Using high-resolution behavioral analysis, we studied the chemotaxis behavior of larvae with a single functional olfactory neuron on either the left or right side of the head, allowing us to examine unilateral or bilateral olfactory input. We developed new spectroscopic methods to create stable odorant gradients in which odor concentrations were experimentally measured. In these controlled environments, we observed that a single functional neuron provided sufficient information to permit larval chemotaxis. We found additional evidence that the overall accuracy of navigation is enhanced by the increase in the signal-to-noise ratio conferred by bilateral sensory input.
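The signal-to-noise benefit of bilateral input can be sketched numerically: averaging two independent noisy samples of the same stimulus halves the noise variance and so doubles the SNR in power terms. This is a generic toy model of sensor averaging, not the larval chemotaxis data.

```python
import random

def snr(signal, samples):
    """Power SNR: squared mean signal over the variance of the noise."""
    noise_var = sum((s - signal) ** 2 for s in samples) / len(samples)
    return signal ** 2 / noise_var

random.seed(0)
signal, sigma, n = 1.0, 0.5, 100_000
left = [signal + random.gauss(0, sigma) for _ in range(n)]
right = [signal + random.gauss(0, sigma) for _ in range(n)]
both = [(l + r) / 2 for l, r in zip(left, right)]  # bilateral averaging

ratio = snr(signal, both) / snr(signal, left)  # ≈ 2: noise variance halved
```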
Abstract:
Sustainable management of soils with low natural fertility on family farms in the humid tropics is a great challenge, and overcoming it would be an enormous benefit for the environment and the farmers. The objective of this study was to assess the environmental and agronomic benefits of alley cropping, based on the evaluation of C sequestration, soil quality indicators, and corn yields. Combinations of four legumes were used in alley cropping systems in the following treatments: Clitoria fairchildiana + Cajanus cajan; Acacia mangium + Cajanus cajan; Leucaena leucocephala + Cajanus cajan; Clitoria fairchildiana + Leucaena leucocephala; Leucaena leucocephala + Acacia mangium; and a control. Corn was used as a cash crop. The C content was determined in the different compartments of soil organic matter, along with CEC, available P, base saturation, water saturation percentage, the period during which the root hospitality factor remained below the critical level, and corn yield. It was concluded that alley cropping could substitute for the slash-and-burn system in the humid tropics. The main environmental benefit of alley cropping is the maintenance of a dynamic equilibrium between C input and output that can sustain up to 10 Mg ha-1 of C in the litter layer, decreasing atmospheric CO2 levels. Alley cropping is also beneficial from the agricultural point of view, because it increases base saturation and decreases physical resistance to root penetration in the 0-10 cm soil layer, which ensures the increase and sustainability of corn yield.
Abstract:
Summary: The effect of the protein and energy content of feed on nitrogen utilization, water consumption, and urine excretion in pigs
Abstract:
Measuring school efficiency is a challenging task. First, a performance measurement technique has to be selected. Within Data Envelopment Analysis (DEA), one such technique, alternative models have been developed to deal with environmental variables. The majority of these models lead to diverging results. Second, the choice of input and output variables to be included in the efficiency analysis is often dictated by data availability. The choice of variables remains an issue even when data is available. As a result, the choice of technique, model and variables is probably, and ultimately, a political judgement. Multi-criteria decision analysis methods can help decision makers select the most suitable model. The number of selection criteria should remain parsimonious and not be oriented towards the results of the models, in order to avoid opportunistic behaviour. The selection criteria should also be backed by the literature or by an expert group. Once the most suitable model is identified, the principle of permanence of methods should be applied in order to avoid a change of practices over time. Within DEA, the two-stage model developed by Ray (1991) is the most convincing model that allows for an environmental adjustment. In this model, an efficiency analysis is conducted with DEA, followed by an econometric analysis to explain the efficiency scores. An environmental variable of particular interest, tested in this thesis, is whether a school operates on multiple sites. Results show that being located on more than one site has a negative influence on efficiency. A likely way to mitigate this negative influence would be to improve the use of ICT in school management and teaching. Planning of new schools should also consider the advantages of a single site, which allows a critical size in terms of pupils and teachers to be reached.
The fact that underprivileged pupils perform worse than privileged pupils has been public knowledge since Coleman et al. (1966). As a result, underprivileged pupils have a negative influence on school efficiency. This is confirmed by this thesis for the first time in Switzerland. Several countries have developed priority education policies in order to compensate for the negative impact of disadvantaged socioeconomic status on school performance. These policies have failed. As a result, other actions need to be taken. In order to define these actions, one has to identify the social-class differences that explain why disadvantaged children underperform. Childrearing and literacy practices, health characteristics, housing stability and economic security all influence pupil achievement. Rather than allocating more resources to schools, policymakers should therefore focus on related social policies. For instance, they could define pre-school, family, health, housing and benefits policies in order to improve the conditions for disadvantaged children.
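The two-stage logic of Ray (1991) can be sketched as follows. A real first stage solves a DEA linear program per school; to keep the example dependency-free, a single-input/single-output ratio efficiency stands in for DEA, followed by an OLS regression of the scores on a multi-site dummy. All schools and numbers are invented for illustration.

```python
# Stage 1: score each school; stage 2: regress scores on an environmental
# variable (here, a multi-site dummy). The data are invented; a real first
# stage would solve a DEA linear program per school.
schools = [
    # (input: teaching hours, output: attainment, multi-site? 0/1)
    (100.0, 90.0, 0),
    (120.0, 96.0, 1),
    (90.0, 85.0, 0),
    (110.0, 88.0, 1),
]

ratios = [out / inp for inp, out, _ in schools]
best = max(ratios)
efficiency = [r / best for r in ratios]  # 1.0 = on the efficiency frontier

# Stage 2: OLS slope of efficiency on the multi-site dummy.
xs = [m for _, _, m in schools]
mean_x = sum(xs) / len(xs)
mean_y = sum(efficiency) / len(efficiency)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, efficiency))
         / sum((x - mean_x) ** 2 for x in xs))
# slope ≈ -0.129: in this toy data, multi-site schools are less efficient
```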
Abstract:
A recurring task in the analysis of mass genome annotation data from high-throughput technologies is the identification of peaks or clusters in a noisy signal profile. Examples of such applications are the definition of promoters on the basis of transcription start site profiles, the mapping of transcription factor binding sites based on ChIP-chip data and the identification of quantitative trait loci (QTL) from whole genome SNP profiles. Input to such an analysis is a set of genome coordinates associated with counts or intensities. The output consists of a discrete number of peaks with respective volumes, extensions and center positions. For this purpose, we have developed a flexible one-dimensional clustering tool, called MADAP, which we make available as a web server and as a standalone program. A set of parameters enables the user to customize the procedure to a specific problem. The web server, which returns results in textual and graphical form, is useful for small to medium-scale applications, as well as for evaluation and parameter tuning in view of large-scale applications, which require a local installation. The program written in C++ can be freely downloaded from ftp://ftp.epd.unil.ch/pub/software/unix/madap. The MADAP web server can be accessed at http://www.isrec.isb-sib.ch/madap/.
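The input/output contract described above (coordinates with counts in, peaks with volume, extension and center out) can be sketched with a minimal gap-based clustering. The gap rule and parameter below are illustrative assumptions, not MADAP's actual procedure.

```python
def cluster_1d(points, max_gap=50):
    """Toy one-dimensional clustering of (coordinate, count) pairs.

    Sorted coordinates within max_gap of the previous point join the current
    cluster; each cluster is summarized by its total volume, extension
    (span), and count-weighted center. The gap rule and parameter are
    illustrative assumptions, not MADAP's actual algorithm.
    """
    pts = sorted(points)
    clusters, current = [], [pts[0]]
    for p in pts[1:]:
        if p[0] - current[-1][0] <= max_gap:
            current.append(p)
        else:
            clusters.append(current)
            current = [p]
    clusters.append(current)
    return [{"center": sum(x * n for x, n in c) / sum(n for _, n in c),
             "volume": sum(n for _, n in c),
             "extension": c[-1][0] - c[0][0]} for c in clusters]

# Two peaks: one around coordinate ~118, one around ~506.
peaks = cluster_1d([(100, 5), (120, 10), (130, 5), (500, 8), (530, 2)])
```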
Abstract:
We analyze the consequences that the choice of the output of the system has in the efficiency of signal detection. It is shown that the output signal and the signal-to-noise ratio (SNR), used to characterize the phenomenon of stochastic resonance, strongly depend on the form of the output. In particular, the SNR may be enhanced for an adequate output.
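The dependence on the choice of output can be illustrated with a minimal threshold system: a subthreshold sinusoid never appears in a hard-threshold output without noise, while added noise lets the drive through, the hallmark of stochastic resonance. The system, parameter values, and the correlation measure below are illustrative assumptions, not the model analyzed in the paper.

```python
import math
import random

def drive_correlation(amplitude, noise_sigma, theta=1.0, n=20000, seed=1):
    """Correlation of a hard-threshold output with a subthreshold sinusoid.

    x(t) = amplitude * sin(2*pi*t/100) + Gaussian noise; the output is the
    indicator x(t) > theta. Returns the mean of output * drive, a crude
    measure of how much of the periodic signal reaches the output.
    """
    rng = random.Random(seed)
    acc = 0.0
    for t in range(n):
        drive = math.sin(2 * math.pi * t / 100)
        x = amplitude * drive + rng.gauss(0, noise_sigma)
        acc += (1.0 if x > theta else 0.0) * drive
    return acc / n

silent = drive_correlation(0.5, 0.0)  # no noise: output never fires -> 0
noisy = drive_correlation(0.5, 0.5)   # noise carries the drive to the output
```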
Abstract:
The present research project was designed to determine thermal properties, such as the coefficient of thermal expansion (CTE) and thermal conductivity, of Iowa concrete pavement materials. These properties are required as input values by the Mechanistic-Empirical Pavement Design Guide (MEPDG). In this project, a literature review was conducted to determine the factors that affect the thermal properties of concrete and the existing prediction equations for the CTE and thermal conductivity of concrete. CTE tests were performed on various lab and field samples of portland cement concrete (PCC) at the Iowa Department of Transportation and Iowa State University. The variations due to the test procedure, the equipment used, and the consistency of field batch materials were evaluated. The test results showed that the CTE variations due to test procedure and batch consistency were less than 5%, and the variation due to the different equipment was less than 15%. Concrete CTE values were significantly affected by different types of coarse aggregate. The CTE values of Iowa concrete made with limestone+gravel, quartzite, dolomite, limestone+dolomite, and limestone were 7.27, 6.86, 6.68, 5.83, and 5.69 microstrain/°F (13.08, 12.35, 12.03, 10.50, and 10.25 microstrain/°C), respectively, which were all higher than the default value of 5.50 microstrain/°F in the MEPDG program. The thermal conductivity of a typical Iowa PCC mix and an asphalt cement concrete (ACC) mix (both with limestone as coarse aggregate) was tested at the Concrete Technology Laboratory in Skokie, Illinois. The thermal conductivity was 0.77 Btu/(hr·ft·°F) (1.33 W/(m·K)) for PCC and 1.21 Btu/(hr·ft·°F) (2.09 W/(m·K)) for ACC, which differ from the default values (1.25 Btu/(hr·ft·°F) or 2.16 W/(m·K) for PCC and 0.67 Btu/(hr·ft·°F) or 1.16 W/(m·K) for ACC) in the MEPDG program.
Future studies should investigate the CTE of ACC and the effects of concrete materials (such as cementitious material and aggregate types) and mix proportions on concrete thermal conductivity.
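The SI values quoted in parentheses follow from two standard unit conversions: a Celsius degree is 1.8 Fahrenheit degrees, and 1 Btu/(hr·ft·°F) is about 1.7307 W/(m·K). A quick check against the reported figures:

```python
BTU_HR_FT_F_TO_W_M_K = 1.730735  # 1 Btu/(hr*ft*degF) in W/(m*K)

def cte_f_to_c(cte_per_degf):
    """microstrain/degF -> microstrain/degC (a degC step is 1.8 degF steps)."""
    return cte_per_degf * 1.8

def conductivity_to_si(k_imperial):
    """Btu/(hr*ft*degF) -> W/(m*K)."""
    return k_imperial * BTU_HR_FT_F_TO_W_M_K

pcc_cte_si = cte_f_to_c(7.27)        # 13.086, reported as 13.08 (rounding)
pcc_k_si = conductivity_to_si(0.77)  # ≈ 1.33 W/(m*K) for the PCC mix
acc_k_si = conductivity_to_si(1.21)  # ≈ 2.09 W/(m*K) for the ACC mix
```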
Abstract:
We present a heuristic method for learning error correcting output codes matrices based on a hierarchical partition of the class space that maximizes a discriminative criterion. To achieve this goal, the optimal codeword separation is sacrificed in favor of a maximum class discrimination in the partitions. The creation of the hierarchical partition set is performed using a binary tree. As a result, a compact matrix with high discrimination power is obtained. Our method is validated using the UCI database and applied to a real problem, the classification of traffic sign images.
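Once a code matrix exists, classification reduces to a nearest-codeword search over the binary classifier outputs. A minimal sketch with an invented 3-class, 5-dichotomy matrix in the traffic-sign spirit of the application; a real matrix would be learned by the hierarchical partition procedure described above.

```python
def ecoc_decode(outputs, code_matrix):
    """Assign the class whose codeword is nearest (in Hamming distance)
    to the vector of binary-classifier outputs."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(code_matrix, key=lambda cls: hamming(outputs, code_matrix[cls]))

# Invented codewords; each column represents one binary problem.
code_matrix = {
    "stop": [1, 1, 0, 0, 1],
    "yield": [0, 1, 1, 0, 0],
    "no-entry": [1, 0, 1, 1, 0],
}

# One binary classifier errs (fourth bit flipped), yet decoding recovers "stop".
predicted = ecoc_decode([1, 1, 0, 1, 1], code_matrix)
```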
Abstract:
A common way to model multiclass classification problems is by means of Error-Correcting Output Codes (ECOC). Given a multiclass problem, the ECOC technique designs a codeword for each class, where each position of the code identifies the membership of the class for a given binary problem. A classification decision is obtained by assigning the label of the class with the closest code. One of the main requirements of the ECOC design is that the base classifier be capable of splitting each subgroup of classes in each binary problem. However, a linear classifier cannot be guaranteed to model convex regions, and nonlinear classifiers also fail to handle some types of surfaces. In this paper, we present a novel strategy to model multiclass classification problems using subclass information in the ECOC framework. Complex problems are solved by splitting the original set of classes into subclasses and embedding the binary problems in a problem-dependent ECOC design. Experimental results show that the proposed splitting procedure yields better performance when the class overlap or the distribution of the training objects conceals the decision boundaries for the base classifier. The results are even more significant when one has a sufficiently large training set.
Abstract:
For the last 2 decades, supertree reconstruction has been an active field of research and has seen the development of a large number of major algorithms. Because of the growing popularity of the supertree methods, it has become necessary to evaluate the performance of these algorithms to determine which are the best options (especially with regard to the supermatrix approach that is widely used). In this study, seven of the most commonly used supertree methods are investigated by using a large empirical data set (in terms of number of taxa and molecular markers) from the worldwide flowering plant family Sapindaceae. Supertree methods were evaluated using several criteria: similarity of the supertrees with the input trees, similarity between the supertrees and the total evidence tree, level of resolution of the supertree and computational time required by the algorithm. Additional analyses were also conducted on a reduced data set to test if the performance levels were affected by the heuristic searches rather than the algorithms themselves. Based on our results, two main groups of supertree methods were identified: on one hand, the matrix representation with parsimony (MRP), MinFlip, and MinCut methods performed well according to our criteria, whereas the average consensus, split fit, and most similar supertree methods showed a poorer performance or at least did not behave the same way as the total evidence tree. Results for the super distance matrix, that is, the most recent approach tested here, were promising with at least one derived method performing as well as MRP, MinFlip, and MinCut. The output of each method was only slightly improved when applied to the reduced data set, suggesting a correct behavior of the heuristic searches and a relatively low sensitivity of the algorithms to data set sizes and missing data. 
Results also showed that the MRP analyses could reach a high level of quality even when using a simple heuristic search strategy, with the exception of MRP with the Purvis coding scheme and reversible parsimony. The future of supertrees lies in the implementation of a standardized heuristic search for all methods and in increased computing power to handle large data sets. The latter would prove particularly useful for promising approaches such as the maximum quartet fit method, which still requires substantial computing power.
Abstract:
This study aims to improve the accuracy and usability of Iowa Falling Weight Deflectometer (FWD) data by incorporating significant enhancements into the fully automated software system for rapid processing of FWD data. These enhancements include: (1) refined prediction of the backcalculated pavement layer modulus through deflection basin matching/optimization, (2) temperature correction of the backcalculated Hot-Mix Asphalt (HMA) layer modulus, (3) computation of the 1993 AASHTO design guide related effective SN (SNeff) and effective k-value (keff), (4) computation of the Iowa DOT asphalt concrete (AC) overlay design related Structural Rating (SR) and k-value (k), and (5) enhancement of the user-friendliness of input and output from the software tool. A high-quality, easy-to-use backcalculation software package, referred to as I-BACK (the Iowa Pavement Backcalculation Software), was developed to achieve the project goals and requirements. This report presents the theoretical background behind the incorporated enhancements as well as guidance on the use of I-BACK. The developed tool provides more finely tuned ANN pavement backcalculation results by implementing a deflection basin matching optimizer for conventional flexible, full-depth, rigid, and composite pavements. Implementation of this tool within the Iowa DOT will facilitate accurate pavement structural evaluation and rehabilitation designs for pavement/asset management purposes. This research has also set the framework for the development of a simplified FWD-deflection-based HMA overlay design procedure, which is one of the recommended areas for future research.
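Enhancement (3) rests on the 1993 AASHTO Guide relation SNeff = 0.0045 · D · Ep^(1/3), where D is the total pavement thickness in inches and Ep the effective pavement modulus in psi (typically backcalculated from FWD deflections). A minimal sketch with illustrative inputs, not values from the Iowa study:

```python
def sn_eff(thickness_in, ep_psi):
    """Effective structural number per the 1993 AASHTO Guide:
    SNeff = 0.0045 * D * Ep**(1/3), D in inches, Ep in psi."""
    return 0.0045 * thickness_in * ep_psi ** (1.0 / 3.0)

# Illustrative: a 10 in. pavement with an effective modulus of 300,000 psi.
sn = sn_eff(10.0, 300_000.0)  # ≈ 3.01
```

Note that SNeff scales linearly with thickness but only with the cube root of the modulus, so deflection-derived modulus errors are damped in the final structural number.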