992 results for "Multiple imputation prohibition"
Abstract:
Background: The most common application of imputation is to infer genotypes for a high-density marker panel on animals genotyped with a low-density panel. However, the gain in accuracy of genomic predictions from increasing marker density tends to plateau beyond a certain point. Another application of imputation is to enlarge the training set with un-genotyped animals. This strategy can be particularly successful when a set of closely related individuals is genotyped.
Methods: Imputation of completely un-genotyped dams was performed using the known genotypes of each dam's sire, one offspring and the offspring's sire. Two methods, based on either allele or haplotype frequencies, were applied to infer genotypes at ambiguous loci. Results of these methods and of two available software packages were compared. Quality of imputation under different population structures was assessed. The impact of using imputed dams to enlarge training sets on the accuracy of genomic predictions was evaluated for different populations, heritabilities and sizes of training sets.
Results: Imputation accuracy ranged from 0.52 to 0.93, depending on the population structure and the method used. The method based on allele frequencies performed better than the method based on haplotype frequencies. Accuracy of imputation was higher for populations with higher levels of linkage disequilibrium and with larger proportions of markers at more extreme allele frequencies. Including imputed dams in the training set increased the accuracy of genomic predictions. Gains in accuracy ranged from close to zero to 37.14%, depending on the simulated scenario. Generally, the higher the accuracy already obtained with the genotyped training set, the smaller the increase achieved by adding imputed dams.
Conclusions: Whenever a reference population resembling the family configuration considered here is available, imputation can be used to achieve an extra increase in the accuracy of genomic predictions by enlarging the training set with completely un-genotyped dams. This strategy proved particularly useful for populations with lower levels of linkage disequilibrium, for genomic selection on traits with low heritability, and for species or breeds with a limited reference population.
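As a rough illustration of the allele-frequency idea, here is a toy Python sketch for a single biallelic locus (genotypes coded as counts of allele B). It is a deliberate reduction of the paper's method: it deduces only the allele the dam transmitted to her offspring, ignores the dam's own sire, and samples any allele left ambiguous from the population allele frequency:

```python
import random

def maternal_allele(offspring, offspring_sire, freq_b):
    """Deduce the allele the dam passed to her offspring at a biallelic
    locus (genotypes = counts of allele B); fall back on the population
    B-allele frequency when the pedigree leaves it ambiguous."""
    if offspring == 0:
        return 0                       # offspring AA: dam must have passed A
    if offspring == 2:
        return 1                       # offspring BB: dam must have passed B
    # Offspring is AB: resolve via the offspring's sire where possible.
    if offspring_sire == 0:
        return 1                       # sire could only pass A, so dam passed B
    if offspring_sire == 2:
        return 0                       # sire could only pass B, so dam passed A
    return int(random.random() < freq_b)   # ambiguous: sample from frequency

def impute_dam(offspring, offspring_sire, freq_b):
    """Toy dam genotype: the deduced transmitted allele plus a second
    allele sampled from the population allele frequency."""
    return maternal_allele(offspring, offspring_sire, freq_b) + \
           int(random.random() < freq_b)

random.seed(0)
print(impute_dam(offspring=1, offspring_sire=2, freq_b=0.3))
```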
Abstract:
The Mahafaly Plateau in south-western Madagascar is characterised by harsh climatic conditions, above all regular droughts and dry spells, poor infrastructure, growing insecurity, a high illiteracy rate and the regular destruction of harvests by locust plagues. Since 97% of the population depends on agriculture, increasing the productivity of cropping systems is the basis for improving livelihoods and food security in the Mahafaly region. As little is known about the productivity of traditional extensive and newly introduced cultivation practices in this area, the objectives of this work were to identify the limiting factors and promising alternative cultivation practices, and to test these under field conditions. We examined the effects of local livestock manure and charcoal on the yields of cassava, the region's main crop, as well as the contributions of further yield-limiting factors in the study area. In addition, the potential of irrigated vegetable production with manure and charcoal was investigated in the coastal region, to contribute to a diversification of income and diet. A further focus of this work was the estimation of dew formation and its contribution to the annual water balance, by testing a newly designed dew gauge. Cassava was grown over three years on three experimental fields in two villages on the plateau, with applied zebu cattle manure rates of 5 and 10 t ha⁻¹, charcoal rates of 0.5 and 2 t ha⁻¹ and a cassava planting density of 4500 plants ha⁻¹. Cassava tuber yields on control plots reached 1 to 1.8 t dry matter (DM) ha⁻¹. Manure increased tuber yields by 30-40% after three years on a continuously cropped field of low soil fertility, but had no effect on the other experimental fields. Charcoal had no influence on yields over the entire test period, while infection with cassava mosaic virus led to yield losses of up to 30%. Plant stands were reduced by 4-54% of the full stand across fields and years, presumably owing to the occurrence of dry spells and the low vigour of the planting material. Carrots (Daucus carota L. var. Nantaise) and onions (Allium cepa L. var. Red Créole) were grown over two dry seasons with locally available seed. We tested the effects of local cattle manure at a rate of 40 t ha⁻¹, charcoal at a rate of 10 t ha⁻¹, and shading on vegetable yields. The local irrigation water had a salinity of 7.65 mS cm⁻¹. Carrot and onion yields across treatments and years reached 0.24 to 2.56 t DM ha⁻¹ and 0.30 to 4.07 t DM ha⁻¹, respectively. Manure and charcoal had no influence on the yields of either crop. Shading reduced carrot yields by 33% in the first year, whereas yields increased by 65% in the second year. Onion yields under shading increased by 148% and 208% in the first and second year, respectively. Saline irrigation water and the quality of the locally available seed markedly reduced germination rates. Dew formation in the coastal village of Efoetsy amounted to 58.4 mm, representing 19% of the rainfall over the entire 18-month observation period. This indicates that dew indeed makes an important contribution to the annual water balance. Daily maxima reached 0.48 mm. The dew-balance device tested was able to reliably determine nightly dew formation on its metallic condensation plate. The final chapter discusses the limiting factors for a sustainable intensification of agriculture in the study region.
Abstract:
A new formulation for recovering the structure and motion parameters of a moving patch using both motion and shading information is presented. It is based on a new differential constraint equation (FICE) that links the spatiotemporal gradients of irradiance to the motion and structure parameters and to the temporal variations of surface shading. The FICE separates the contribution to the irradiance spatiotemporal gradients of the gradients due to texture from those due to shading, which allows it to be used for both textured and textureless surfaces. The new approach, combining motion and shading information, leads directly to two contributions: it can compensate for the effects of shading variations when recovering shape and motion, and it can exploit the shading/illumination effects to recover motion and shape when they cannot be recovered otherwise. The FICE formulation is also extended to multiple frames.
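For orientation, a LaTeX sketch of the classical brightness-constancy constraint that differential formulations of this kind build on; the right-hand-side term s(x, y, t) is an assumed placeholder for the temporal shading variation the FICE models, not the paper's exact equation:

```latex
\[
  E_x u + E_y v + E_t = s(x, y, t)
\]
% E_x, E_y, E_t : spatial and temporal gradients of image irradiance E
% (u, v)        : image-plane motion field of the patch
% s = 0 recovers the standard brightness-constancy (optical-flow) constraint;
% a nonzero s absorbs temporal variation of surface shading.
```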
Abstract:
Scheduling tasks to efficiently use the available processor resources is crucial to minimizing the runtime of applications on shared-memory parallel processors. One factor that contributes to poor processor utilization is the idle time caused by long-latency operations, such as remote memory references or processor synchronization operations. One way of tolerating this latency is to use a processor with multiple hardware contexts that can rapidly switch to executing another thread of computation whenever a long-latency operation occurs, thus increasing processor utilization by overlapping computation with communication. Although multiple contexts are effective for tolerating latency, this effectiveness can be limited by memory and network bandwidth, by cache interference effects among the multiple contexts, and by critical tasks sharing processor resources with less critical tasks. This thesis presents techniques that increase the effectiveness of multiple contexts by intelligently scheduling threads to make more efficient use of the processor pipeline, bandwidth, and cache resources. This thesis proposes thread prioritization as a fundamental mechanism for directing the thread schedule on a multiple-context processor. A priority is assigned to each thread either statically or dynamically and is used by the thread scheduler to decide which threads to load in the contexts, and to decide which context to switch to on a context switch. We develop a multiple-context model that integrates both cache and network effects, and show how thread prioritization can both maintain high processor utilization and limit increases in critical-path runtime caused by multithreading. The model also shows that in order to be effective in bandwidth-limited applications, thread prioritization must be extended to prioritize memory requests. We show how simple hardware can prioritize the running of threads in the multiple contexts, and the issuing of requests to both the local memory and the network. Simulation experiments show how thread prioritization is used in a variety of applications. Thread prioritization can improve the performance of synchronization primitives by minimizing the number of processor cycles wasted in spinning and devoting more cycles to critical threads. Thread prioritization can be used in combination with other techniques to improve cache performance and minimize cache interference between different working sets in the cache. For applications that are critical-path limited, thread prioritization can improve performance by allowing processor resources to be devoted preferentially to critical threads. These experimental results show that thread prioritization is a mechanism that can be used to implement a wide range of scheduling policies.
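As a rough sketch of the mechanism (not the thesis's hardware design), the following Python simulation shows priority-directed loading and switching; the context count, priorities, miss model and latencies are all invented for the example:

```python
class Thread:
    def __init__(self, tid, priority, work):
        self.tid = tid
        self.priority = priority   # larger = more critical
        self.work = work           # remaining units of computation
        self.ready_at = 0          # cycle when an outstanding request completes

def run(threads, n_contexts=4, mem_latency=20):
    """Toy simulation of a multiple-context processor that always runs the
    highest-priority loaded thread that is not waiting on memory."""
    # Loading decision: keep the n_contexts highest-priority threads resident.
    loaded = sorted(threads, key=lambda t: -t.priority)[:n_contexts]
    cycle = 0
    while any(t.work > 0 for t in loaded):
        ready = [t for t in loaded if t.work > 0 and t.ready_at <= cycle]
        if not ready:                  # every context is stalled: idle cycle
            cycle += 1
            continue
        # Switch decision: of the ready threads, run the most critical one.
        t = max(ready, key=lambda t: t.priority)
        t.work -= 1
        cycle += 1
        if t.work % 5 == 0:            # pretend every fifth unit misses the cache
            t.ready_at = cycle + mem_latency   # long-latency remote reference
    return cycle

threads = [Thread(i, priority=i, work=25) for i in range(6)]
print("total cycles:", run(threads))
```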
Abstract:
To recognize a previously seen object, the visual system must overcome the variability in the object's appearance caused by factors such as illumination and pose. Developments in computer vision suggest that it may be possible to counter the influence of these factors, by learning to interpolate between stored views of the target object, taken under representative combinations of viewing conditions. Daily life situations, however, typically require categorization, rather than recognition, of objects. Due to the open-ended character both of natural kinds and of artificial categories, categorization cannot rely on interpolation between stored examples. Nonetheless, knowledge of several representative members, or prototypes, of each of the categories of interest can still provide the necessary computational substrate for the categorization of new instances. The resulting representational scheme based on similarities to prototypes appears to be computationally viable, and is readily mapped onto the mechanisms of biological vision revealed by recent psychophysical and physiological studies.
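As a toy illustration of categorisation by similarity to prototypes, here is a generic nearest-prototype classifier in Python; the feature vectors and category names are invented, and this is not the specific model the abstract maps onto biological vision:

```python
import numpy as np

def nearest_prototype(x, prototypes):
    """Assign x to the category whose prototype it is most similar to
    (here: smallest Euclidean distance in feature space)."""
    return min(prototypes, key=lambda label: np.linalg.norm(x - prototypes[label]))

# Prototypes stand in for representative members of each category,
# e.g. mean feature vectors of a few seen exemplars (values invented).
prototypes = {
    "cat": np.array([0.9, 0.1, 0.3]),
    "dog": np.array([0.2, 0.8, 0.5]),
}
novel = np.array([0.8, 0.2, 0.4])              # a previously unseen instance
print(nearest_prototype(novel, prototypes))    # -> "cat"
```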
Abstract:
We discuss the problem of finding sparse representations of a class of signals. We formalize the problem and prove that it is NP-complete both in the case of a single signal and in that of multiple ones. Next, we develop a simple approximation method for the problem and show experimental results using artificially generated signals. Furthermore, we use our approximation method to find sparse representations of classes of real signals, specifically of images of pedestrians. We discuss the relation between our formulation of the sparsity problem and the problem of finding representations of objects that are compact and appropriate for detection and classification.
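One standard greedy approximation to this problem is matching pursuit; the sketch below is a minimal Python version over a random dictionary, not necessarily the paper's own approximation method:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedily pick dictionary atoms (columns, assumed unit-norm) that best
    reduce the residual; returns the sparse coefficient vector."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = np.argmax(np.abs(correlations))      # best-matching atom
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
x = 2 * D[:, 3] - 1.5 * D[:, 100]                # a 2-sparse test signal
c = matching_pursuit(x, D, n_atoms=5)
print("recovered atoms:", np.nonzero(c)[0])
```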
Abstract:
We address the problem of jointly determining shipment planning and scheduling decisions in the presence of multiple shipment modes. We consider a sea shipment mode with a long lead time but low cost, and an air shipment mode with a short lead time but high cost. Existing research on multiple shipment modes largely addresses the short-term scheduling decisions only. Motivated by an industrial problem in which planning decisions are made independently of the scheduling decisions, we investigate the benefits of integrating the two sets of decisions. We develop a sequence of mathematical models to address the planning and scheduling decisions. Preliminary computational results indicate improved performance of the integrated approach over some of the existing policies used in real-life situations.
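A toy Python sketch of the core mode-selection trade-off (use the cheap sea mode whenever its lead time still meets the due date, otherwise fly); the costs, lead times and orders are invented, and the paper's integrated models are far richer than this:

```python
# Invented per-unit costs and lead times (days) for the two modes.
MODES = {"sea": {"cost": 1.0, "lead": 30}, "air": {"cost": 4.0, "lead": 3}}

def plan_shipments(orders, today=0):
    """For each order, pick the cheapest mode that still arrives on time;
    fall back to air when even the fast mode is the only feasible choice."""
    plan = []
    for qty, due in orders:
        feasible = [m for m, p in MODES.items() if today + p["lead"] <= due]
        mode = min(feasible, key=lambda m: MODES[m]["cost"]) if feasible else "air"
        plan.append((qty, due, mode, qty * MODES[mode]["cost"]))
    return plan

orders = [(100, 45), (50, 10), (80, 31)]         # (quantity, due day)
for qty, due, mode, cost in plan_shipments(orders):
    print(f"{qty} units due day {due}: ship by {mode}, cost {cost:.0f}")
```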
Abstract:
compositions is a new R package for the analysis of compositional and positive data. It contains four classes corresponding to the four different types of compositional and positive geometry (including the Aitchison geometry). It provides means for computation, plotting and high-level multivariate statistical analysis in all four geometries. These geometries are treated in a fully analogous way, based on the principle of working in coordinates and on the object-oriented programming paradigm of R. In this way, called functions automatically select the most appropriate type of analysis as a function of the geometry. The graphical capabilities include ternary diagrams and tetrahedrons, various compositional plots (boxplots, barplots, pie charts) and extensive graphical tools for principal components. Moreover, portion and proportion lines, straight lines and ellipses can be added to plots in all geometries. The package is accompanied by a hands-on introduction, documentation for every function, demos of the graphical capabilities and plenty of usage examples. It allows direct and parallel computation in all four vector spaces and provides the beginner with a copy-and-paste style of data analysis, while letting advanced users keep the functionality and customizability they demand of R, as well as all the tools necessary to add their own analysis routines. A complete example is included in the appendix.
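The package itself is written in R; as a language-neutral illustration of the working-in-coordinates principle for the Aitchison geometry, here is a minimal Python sketch of the standard centred log-ratio (clr) transform and its inverse (not the package's API):

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform: maps a composition (positive parts,
    relative information only) into ordinary Euclidean coordinates."""
    logx = np.log(x)
    return logx - logx.mean()

def clr_inv(z):
    """Back-transform: exponentiate and close to unit sum."""
    y = np.exp(z)
    return y / y.sum()

comp = np.array([0.2, 0.3, 0.5])         # a 3-part composition
z = clr(comp)                            # analyse with ordinary multivariate stats
print(np.allclose(clr_inv(z), comp))     # True: the transform is invertible
```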
Abstract:
Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and instead forecasts the density of deaths in the life table. Since these values obey a unit-sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values, across decrements as well as ages. Using the methods of Compositional Data Analysis pioneered by Aitchison (1986), death densities are transformed into real space so that the full range of multivariate statistics can be applied, then back-transformed to positive values so that the unit-sum constraint is honoured. The structure of the best-known single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
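A rough Python sketch of the overall recipe under stated assumptions: map each year's death density (unit sum) into real space with the clr transform (redefined here so the example stands alone), extrapolate there, and back-transform so the unit-sum constraint is honoured. The data are synthetic and the linear-drift trend is a stand-in for the Lee-Carter-style decomposition:

```python
import numpy as np

def clr(x):
    logx = np.log(x)
    return logx - logx.mean(axis=-1, keepdims=True)

def clr_inv(z):
    y = np.exp(z)
    return y / y.sum(axis=-1, keepdims=True)

# Synthetic death densities: rows = years, columns = age groups, rows sum to 1.
rng = np.random.default_rng(1)
raw = rng.gamma(shape=5.0, scale=1.0, size=(20, 8)) + np.linspace(1, 2, 8)
dens = raw / raw.sum(axis=1, keepdims=True)

z = clr(dens)                              # into unconstrained real space
drift = (z[-1] - z[0]) / (len(z) - 1)      # average annual change per coordinate
forecast = clr_inv(z[-1] + 10 * drift)     # 10-year-ahead point forecast
print(forecast, forecast.sum())            # back on the simplex: sums to 1
```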
Abstract:
Abstract taken from the publication.
Abstract:
Abstract taken from the publication.
Abstract:
Abstract taken from the publication.
Abstract:
This paper overviews the field of graphical simulators used for AUV development, presents a taxonomy of these applications and proposes a classification. It also presents Neptune, a multi-vehicle, real-time graphical simulator, based on OpenGL, that allows hardware-in-the-loop simulations.
Abstract:
The paper discusses the maintenance challenges of organisations with a huge number of devices and proposes the use of probabilistic models to assist monitoring and maintenance planning. The proposal assumes connectivity of the instruments, so that they can report the features relevant for monitoring. It also requires enough historical records with diagnosed breakdowns to make the probabilistic models reliable and useful for the predictive maintenance strategies based on them. Regular Markov models based on estimated failure and repair rates are proposed to calculate the availability of the instruments, and Dynamic Bayesian Networks are proposed to model the cause-effect relationships that trigger predictive maintenance services, based on the influence between observed features and previously documented diagnostics.
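A minimal Python sketch of the availability figure such two-state Markov models yield: with an estimated failure rate λ and repair rate μ, steady-state availability is μ/(λ+μ); the rates below are invented:

```python
def steady_state_availability(failure_rate, repair_rate):
    """Steady-state availability of a two-state (up/down) Markov model:
    the long-run fraction of time the instrument is in the 'up' state."""
    return repair_rate / (failure_rate + repair_rate)

lam = 1 / 1000.0   # assumed: one failure per 1000 h (MTBF = 1000 h)
mu = 1 / 8.0       # assumed: 8 h mean time to repair (MTTR = 8 h)
print(f"availability = {steady_state_availability(lam, mu):.4f}")  # ~0.9921
```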
Abstract:
Exercises and solutions on double integration. Diagrams for the questions are collected in the support.zip file, as .eps files.
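As an illustrative worked example of the kind of exercise involved (not taken from the set itself):

```latex
\[
  \int_{0}^{1}\!\!\int_{0}^{2} \left( x + y^{2} \right) dy\,dx
  = \int_{0}^{1} \left[ xy + \tfrac{y^{3}}{3} \right]_{y=0}^{y=2} dx
  = \int_{0}^{1} \left( 2x + \tfrac{8}{3} \right) dx
  = 1 + \tfrac{8}{3} = \tfrac{11}{3}.
\]
```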