908 results for unknown-input estimation


Relevance: 30.00%

Abstract:

The development of molecular markers for genomic studies in Mangifera indica (mango) will allow marker-assisted selection and identification of genetically diverse germplasm, greatly aiding mango breeding programs. We report here our identification of thousands of unambiguous molecular markers that can be easily assayed across genotypes of the species. With an origin centered in Southeast Asia, mangos are grown throughout the tropics and subtropics as a nutritious fruit that exhibits remarkable intraspecific phenotypic diversity. With the goal of building a high-density genetic map, we have undertaken discovery of sequence variation in expressed genes across a broad range of mango cultivars. A transcriptome sequence reference was built de novo from extensive sequencing and assembly of RNA from cultivar 'Tommy Atkins'. Single nucleotide polymorphisms (SNPs) in protein-coding transcripts were determined from alignment of RNA reads from 24 mango cultivars of diverse origins: 'Amin Abrahimpur' (India), 'Aroemanis' (Indonesia), 'Burma' (Burma), 'CAC' (Hawaii), 'Duncan' (Florida), 'Edward' (Florida), 'Everbearing' (Florida), 'Gary' (Florida), 'Hodson' (Florida), 'Itamaraca' (Brazil), 'Jakarata' (Florida), 'Long' (Jamaica), 'M. Casturi Purple' (Borneo), 'Malindi' (Kenya), 'Mulgoba' (India), 'Neelum' (India), 'Peach' (unknown), 'Prieto' (Cuba), 'Sandersha' (India), 'Tete Nene' (Puerto Rico), 'Thai Everbearing' (Thailand), 'Toledo' (Cuba), 'Tommy Atkins' (Florida) and 'Turpentine' (West Indies). SNPs in a selected subset of protein-coding transcripts are currently being converted into Fluidigm assays for genotyping of mapping populations and germplasm collections. Using an alternative approach, 144 SNPs discovered by sequencing candidate genes in 'Kensington Pride' have already been converted and used for genotyping.
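
The abstract does not spell out the criteria that make a marker "unambiguous and easily assayed". The sketch below, with hypothetical thresholds and a made-up input format, illustrates one way such a filter over per-cultivar allele counts might look; it is not the pipeline used in the study.

```python
# Illustrative filter for candidate SNP markers from per-cultivar allele
# counts.  The input format, thresholds and the notion of "unambiguous"
# encoded here are assumptions for this sketch, not the actual pipeline.

MIN_DEPTH = 10        # hypothetical minimum read depth per cultivar
MIN_CULTIVARS = 20    # hypothetical minimum number of cultivars with coverage

def is_unambiguous(site):
    """site: dict mapping cultivar name -> dict of allele -> read count."""
    covered = {c: counts for c, counts in site.items()
               if sum(counts.values()) >= MIN_DEPTH}
    if len(covered) < MIN_CULTIVARS:
        return False
    alleles = set()
    for counts in covered.values():
        # ignore alleles supported by a single read (likely sequencing error)
        alleles.update(a for a, n in counts.items() if n > 1)
    return len(alleles) == 2          # strictly biallelic across cultivars

# toy example: one transcript position observed in three cultivars only
example_site = {
    "Tommy Atkins": {"A": 34, "G": 1},
    "Neelum":       {"A": 15, "G": 17},
    "Turpentine":   {"G": 28},
}
print(is_unambiguous(example_site))   # False: too few cultivars covered
```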

Relevance: 30.00%

Abstract:

Nitrogen (N) is an essential plant nutrient in maize production and, when only natural sources are considered, is often the factor limiting grain yield worldwide. For this reason, many farmers around the world supplement available soil N with synthetic forms. Years of over-application of N fertilizer have led to increased N in groundwater and streams due to leaching and run-off from agricultural sites. In the Midwest Corn Belt much of this excess N eventually makes its way to the Gulf of Mexico, leading to eutrophication (an increase of phytoplankton) and a hypoxic (reduced-oxygen) dead zone. Growing concerns about these problems and the desire for greater input use efficiency have led to demand for crops with improved N use efficiency (NUE), allowing reduced N fertilizer application rates and subsequently lower N pollution. It is well known that roots are responsible for N uptake by plants, but it is relatively unknown how root architecture affects this ability. This research was conducted to better understand the influence of root complexity (RC) in maize on a plant's response to N stress, as well as the influence of RC on other above-ground plant traits. Thirty-one above-ground plant traits were measured for 64 recombinant inbred lines (RILs) from the intermated B73 × Mo17 (IBM) population and their backcrosses (BCs) to either parent, B73 and Mo17, under normal (182 kg N ha⁻¹) and N-deficient (0 kg N ha⁻¹) conditions. The RILs were selected based on results from an earlier experiment by Novais et al. (2011), which screened 232 RILs from the IBM population to obtain root complexity measurements. The 64 selected RILs comprised 31 of the lowest-complexity RILs (RC1) and 33 of the highest-complexity RILs (RC2) in terms of root architecture (characterized as fractal dimensions). The use of the parental BCs classifies the experiment as Design III, an experimental design developed by Comstock and Robinson (1952) that allows estimation of the significance and degree of dominance. Of the 31 traits measured, 12 were whole-plant traits chosen for their documented response to N stress. The other 19 traits were ear traits commonly measured for their influence on yield. Results showed that genotypes from RC1 and RC2 differ significantly for several above-ground phenotypes. We also observed a difference in the number and magnitude of N treatment responses between the two RC classes. Differences in phenotypic trait correlations, and in how they change in response to N, were also observed between the RC classes. RC did not show a strong correlation with calculated NUE (ΔYield/ΔN). Quantitative genetic analysis based on the Design III experimental design revealed significant dominance effects acting on several traits, as well as changes in the significance and degree of dominance between N treatments. Several QTL were mapped for 26 of the 31 traits, and significant N effects were observed across the majority of the genome for some N-stress-indicative traits (e.g. stay-green). This research and related projects are essential to a better understanding of plant N uptake and metabolism, a necessary step towards the goal of breeding crops with better NUE.
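
To make two of the quantities in this summary concrete, the sketch below computes the calculated NUE (ΔYield/ΔN) for one line and the two Design III contrasts (the mean of, and the difference between, the backcrosses to B73 and Mo17) that carry the additive and dominance information. The yield values are made up, and the full Design III analysis in the study involves all RILs, both backcross testers and both N treatments, not this single-line illustration.

```python
# Minimal illustration of two quantities described in the text, using made-up
# yields (Mg ha^-1); the study's Design III analysis is carried out over all
# RILs, both backcross testers and both N treatments.

N_HIGH, N_LOW = 182.0, 0.0   # kg N ha^-1, as in the experiment

def nue(yield_high_n, yield_low_n):
    """Calculated NUE as defined in the text: delta yield / delta N."""
    return (yield_high_n - yield_low_n) / (N_HIGH - N_LOW)

# Design III contrasts for one RIL crossed to both parents (hypothetical means)
bc_b73, bc_mo17 = 8.9, 7.6                      # backcross-family means
additive_contrast = 0.5 * (bc_b73 + bc_mo17)    # carries additive information
dominance_contrast = bc_b73 - bc_mo17           # nonzero only with dominance

print(f"NUE = {nue(9.2, 5.1):.4f} Mg grain per kg N")
print(f"backcross mean = {additive_contrast:.2f}, "
      f"backcross difference = {dominance_contrast:.2f}")
```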

Relevance: 30.00%

Abstract:

This report discusses analytic second-order bias-correction techniques for the maximum likelihood estimates (MLEs) of the unknown parameters of distributions used in quality and reliability analysis. It is well known that MLEs are widely used to estimate the unknown parameters of probability distributions because of their many desirable properties; for example, MLEs are asymptotically unbiased, consistent and asymptotically normal. However, many of these properties depend on extremely large sample sizes. Properties such as unbiasedness may not hold for small or even moderate sample sizes, which are more typical in real data applications. Therefore, bias-corrected techniques for the MLEs are desirable in practice, especially when the sample size is small. Two popular techniques to reduce the bias of MLEs are the 'preventive' and 'corrective' approaches. Both reduce the bias of the MLEs to order O(n⁻²), but the 'preventive' approach does not have an explicit closed-form expression. Consequently, we focus mainly on the 'corrective' approach in this report. To illustrate the importance of bias correction in practice, we apply the bias-corrected method to two popular lifetime distributions: the inverse Lindley distribution and the weighted Lindley distribution. Numerical studies based on the two distributions show that the bias-corrected technique considered here is highly recommended over commonly used estimators without bias correction. Special attention should therefore be paid when estimating the unknown parameters of probability distributions from small or moderate samples.
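
The report applies the 'corrective' (Cox–Snell-type) approach to the inverse and weighted Lindley distributions, whose bias expressions are not reproduced in this abstract. As a self-contained stand-in, the Monte Carlo sketch below uses the exponential distribution, for which the first-order bias of the rate MLE is λ/n, so the corrected estimator is simply the MLE scaled by (1 − 1/n).

```python
import numpy as np

# Monte Carlo illustration of the 'corrective' idea on the exponential
# distribution, used here as a simple stand-in for the Lindley models treated
# in the report.  For Exp(rate = lam) the MLE is 1/xbar and its first-order
# bias is lam/n, so the corrected estimator is mle * (1 - 1/n).

rng = np.random.default_rng(0)
lam_true, n, reps = 2.0, 10, 200_000

samples = rng.exponential(scale=1.0 / lam_true, size=(reps, n))
mle = 1.0 / samples.mean(axis=1)          # uncorrected MLE of the rate
corrected = mle * (1.0 - 1.0 / n)         # bias-corrected estimator

print(f"true rate          : {lam_true}")
print(f"mean of MLE        : {mle.mean():.4f}  (bias {mle.mean() - lam_true:+.4f})")
print(f"mean corrected MLE : {corrected.mean():.4f}  (bias {corrected.mean() - lam_true:+.4f})")
```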

Relevance: 30.00%

Abstract:

In recent years, radars have been used in many applications, such as precision agriculture and advanced driver assistance systems. Optimal techniques for estimating the number of targets and their coordinates require solving multidimensional optimization problems that entail huge computational effort. This has motivated the development of sub-optimal estimation techniques able to achieve good accuracy at a manageable computational cost. Another technical issue in advanced driver assistance systems is the tracking of multiple targets. Although various filtering techniques have been developed, new efficient and robust algorithms for target tracking can be devised by exploiting a probabilistic approach based on factor graphs and the sum-product algorithm. The two contributions of this dissertation are the investigation of the filtering and smoothing problems from a factor graph perspective and the development of efficient algorithms for two- and three-dimensional radar imaging. Concerning the first contribution, a new factor graph for filtering is derived and the sum-product rule is applied to this graphical model; this allows known algorithms to be reinterpreted and new filtering techniques to be developed. A general method based on graphical modelling is then proposed to derive filtering algorithms that involve a network of interconnected Bayesian filters. Finally, the proposed graphical approach is exploited to devise a new smoothing algorithm. Numerical results for dynamic systems show that our algorithms can achieve a better complexity-accuracy trade-off and better tracking capability than other techniques in the literature. Regarding radar imaging, various algorithms are developed for frequency-modulated continuous-wave radars; these algorithms rely on novel and efficient methods for the detection and estimation of multiple superimposed tones in noise. The accuracy achieved in the presence of multiple closely spaced targets is assessed on the basis of both synthetically generated data and measurements acquired with two commercial multiple-input multiple-output radars.
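
The imaging algorithms build on the detection and estimation of multiple superimposed tones in noise, which in an FMCW radar correspond to target ranges. The sketch below shows a baseline version of this task — periodogram peak picking above a crude noise-floor threshold — with illustrative signal parameters; it is not one of the refined estimators developed in the dissertation.

```python
import numpy as np

# Baseline multiple-tone detection in noise via periodogram peak picking.
# The sampling rate, tone frequencies and the 20x-median threshold are
# illustrative choices, not the estimators developed in the dissertation.

rng = np.random.default_rng(1)
fs, n = 1000.0, 1024
t = np.arange(n) / fs
freqs_true = [110.0, 240.0, 355.0]                  # three superimposed tones
x = sum(np.cos(2 * np.pi * f * t) for f in freqs_true)
x += 0.5 * rng.standard_normal(n)                   # additive noise

spec = np.abs(np.fft.rfft(x * np.hanning(n))) ** 2  # windowed periodogram
f_axis = np.fft.rfftfreq(n, d=1.0 / fs)
thr = 20.0 * np.median(spec)                        # crude noise-floor threshold

# keep local maxima that exceed the threshold
peaks = [k for k in range(1, len(spec) - 1)
         if spec[k] > thr and spec[k] >= spec[k - 1] and spec[k] >= spec[k + 1]]
print("estimated tone frequencies (Hz):",
      [round(float(f_axis[k]), 1) for k in peaks])
```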

Relevance: 30.00%

Abstract:

This thesis deals with robust adaptive control and its applications, and it is divided into three main parts. The first part concerns the design of robust estimation algorithms based on recursive least squares. First, we present an estimator for the frequencies of biased multi-harmonic signals, and then an algorithm for distributed estimation of an unknown parameter over a network of adaptive agents. In the second part of this thesis, we consider a cooperative control problem over uncertain networks of linear systems and Kuramoto systems, in which the agents have to track a reference generated by a leader exosystem. Since the reference signal is not available to every network node, novel distributed observers are designed to reconstruct the reference signal locally for each agent, thereby decentralizing the problem. In the third and final part of this thesis, we consider robust estimation tasks for mobile robotics applications. In particular, we first consider the problem of slip estimation for agricultural tracked vehicles. We then consider a search-and-rescue application in which an unmanned aerial vehicle must be driven as close as possible to the unknown (and to-be-estimated) position of a victim buried under the snow after an avalanche. Throughout the thesis, robustness is understood as an input-to-state stability property of the proposed identifiers (sometimes referred to as adaptive laws) with respect to additive disturbances, relative to a steady-state trajectory associated with a correct estimate of the unknown parameter.
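
The estimators in the first part are built on recursive least squares. As a generic building block (not the specific frequency or distributed estimators designed in the thesis), the sketch below implements standard RLS with a forgetting factor for a linear regression model, with made-up data.

```python
import numpy as np

# Standard recursive least squares (RLS) with forgetting factor for a linear
# regression y_t = phi_t^T theta + noise.  This is the generic building block
# mentioned in the text, not the thesis's frequency or distributed estimators.

rng = np.random.default_rng(2)
theta_true = np.array([1.5, -0.7])
lam = 0.99                        # forgetting factor
theta = np.zeros(2)               # parameter estimate
P = 1e3 * np.eye(2)               # covariance-like matrix

for t in range(500):
    phi = rng.standard_normal(2)              # regressor
    y = phi @ theta_true + 0.1 * rng.standard_normal()
    k = P @ phi / (lam + phi @ P @ phi)       # gain
    theta = theta + k * (y - phi @ theta)     # update the estimate
    P = (P - np.outer(k, phi) @ P) / lam      # update the covariance

print("estimated parameters:", np.round(theta, 3))
```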

Relevance: 30.00%

Abstract:

In order to estimate depth through supervised deep-learning-based stereo methods, it is necessary to have access to precise ground-truth depth data. While the gathering of precise labels is commonly tackled by deploying depth sensors, this is not always a viable solution. For instance, in many applications in the biomedical domain, the choice of sensors capable of sensing depth at small distances with high precision on difficult surfaces (which exhibit non-Lambertian properties) is very limited. It is therefore necessary to find alternative techniques to gather ground-truth data without having to rely on external sensors. In this thesis, two different approaches have been tested to produce supervision data for biomedical images. The first aims to obtain input stereo image pairs and disparities through simulation in a virtual environment, while the second relies on a non-learned disparity estimation algorithm to produce noisy disparities, which are then filtered by means of hand-crafted confidence measures to create noisy labels for a subset of pixels. Of the two, the second approach, referred to in the literature as proxy labeling, has shown the best results and has even outperformed the non-learned disparity estimation algorithm used for supervision.
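
A minimal, pure-NumPy sketch of the proxy-labeling idea follows: a simple non-learned block matcher produces disparities, a hand-crafted confidence (here, the ratio between the best and second-best matching cost) is attached to each pixel, and only confident pixels are kept as sparse noisy labels. The matcher, the confidence measure and the thresholds are illustrative stand-ins, not the algorithm and measures used in the thesis.

```python
import numpy as np

# Sketch of proxy-labeling: run a simple non-learned block matcher, attach a
# hand-crafted confidence (peak-ratio of matching costs) and keep only pixels
# above a confidence threshold as sparse noisy labels.

def proxy_labels(left, right, max_disp=32, patch=3, ratio_thr=1.2):
    h, w = left.shape
    r = patch // 2
    disp = np.full((h, w), -1, dtype=np.int32)      # -1 marks "no label"
    for y in range(r, h - r):
        for x in range(max_disp + r, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = np.array([
                np.abs(ref - right[y - r:y + r + 1, x - d - r:x - d + r + 1]).sum()
                for d in range(max_disp)
            ])
            order = np.argsort(costs)
            best, second = costs[order[0]], costs[order[1]]
            # confidence: best cost must be clearly better than the runner-up
            if second > ratio_thr * (best + 1e-6):
                disp[y, x] = order[0]
    return disp

# toy usage with a synthetic pair shifted by 5 pixels
rng = np.random.default_rng(3)
right_img = rng.random((40, 80)).astype(np.float32)
left_img = np.roll(right_img, 5, axis=1)            # left view of the same texture
labels = proxy_labels(left_img, right_img)
valid = labels >= 0
print("labelled pixels:", int(valid.sum()),
      "median disparity:", int(np.median(labels[valid])))
```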

Relevance: 20.00%

Abstract:

The purpose of this study was to correlate the pre-operative imaging, the vascularity of the proximal pole, and the histology of the proximal pole bone in established scaphoid fracture non-union. This was a prospective, non-controlled experimental study. Patients were evaluated pre-operatively for necrosis of the proximal scaphoid fragment by radiography, computed tomography (CT) and magnetic resonance imaging (MRI). The vascular status of the proximal scaphoid was determined intra-operatively by the presence or absence of punctate bone bleeding. Samples were harvested from the proximal scaphoid fragment and sent for pathological examination. We determined the association between the imaging findings, the intra-operative examination and the histological findings. We evaluated 19 male patients diagnosed with scaphoid nonunion. CT evaluation showed no correlation with necrosis of the proximal scaphoid fragment. Marked low signal intensity on T1-weighted MRI images confirmed the histological diagnosis of necrosis of the proximal scaphoid fragment in all patients. Intra-operative assessment showed an absence of punctate bone bleeding in 90% of the bones, in which necrosis was confirmed by microscopic examination. In scaphoid nonunion, marked low signal intensity on T1-weighted MRI images and the absence of intra-operative punctate bone bleeding are strong indicators of osteonecrosis of the proximal fragment.

Relevance: 20.00%

Abstract:

We present a computer program developed for estimating penetrance rates in autosomal dominant diseases by means of the kinship and phenotype information contained in pedigrees. The program also determines the exact 95% credibility interval for the penetrance estimate. Both an executable (PenCalc for Windows) and a web version (PenCalcWeb) of the software are available. The web version enables further calculations, such as heterozygosity probabilities and assessment of offspring risks for all individuals in the pedigrees. Both programs can be accessed and downloaded freely at http://www.ib.usp.br/~otto/software.htm.
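
The abstract does not describe how the exact credibility interval is computed from full pedigrees. As a simplified illustration of a 95% credibility interval for penetrance, the sketch below reduces the problem to k affected individuals observed among n known heterozygous carriers with a uniform prior, which gives a Beta(k+1, n−k+1) posterior; the counts are made up and this is not the PenCalc algorithm.

```python
from scipy.stats import beta

# Simplified illustration of an exact 95% credibility interval for penetrance:
# k affected individuals among n known heterozygous carriers with a uniform
# prior give a Beta(k+1, n-k+1) posterior.  PenCalc itself works from full
# pedigree likelihoods; the counts here are made up.

def penetrance_credibility(k, n, level=0.95):
    post = beta(k + 1, n - k + 1)
    lo, hi = post.ppf((1 - level) / 2), post.ppf(1 - (1 - level) / 2)
    return post.mean(), (lo, hi)

mean, (lo, hi) = penetrance_credibility(k=14, n=20)
print(f"posterior mean penetrance {mean:.2f}, "
      f"95% credibility interval ({lo:.2f}, {hi:.2f})")
```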

Relevance: 20.00%

Abstract:

It is well known that striation spacing can be related to the crack growth rate, da/dN, through the Paris equation, as well as to the maximum and minimum loads under service loading conditions. These loads define the load ratio, R, and are generally considered impossible to evaluate from striation-spacing analysis alone. This study therefore discusses the methodology proposed by Furukawa for evaluating the maximum and minimum loads, based on the experimental observation that the relative height of a striation, H, and the striation spacing, s, are strongly influenced by the load ratio, R. Fatigue tests on C(T) specimens of SAE 7475-T7351 Al alloy plate were conducted at room temperature, and the results showed a straightforward correlation between the parameters H, s and R. Measurements of the striation height, H, were performed using field-emission-gun (FEG) scanning electron microscopy after sectioning the specimens at a large inclined angle to amplify the height of the striations. The results showed that H/s tends to increase with increasing R. Correlations between striation height, striation spacing and load ratio were obtained, which allows service loadings to be estimated from a survey of the fatigue fracture surface.
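
A worked sketch of the load-estimation chain this methodology implies: treat the measured striation spacing as the local growth per cycle, invert the Paris equation da/dN = C(ΔK)^m to obtain ΔK, and invert an H/s-versus-R calibration to obtain R, from which Kmax = ΔK/(1 − R) and Kmin = R·Kmax follow. The Paris constants and the linear H/s calibration used below are hypothetical placeholders, not the values measured for the 7475-T7351 alloy.

```python
# Sketch of the load-estimation chain implied by the methodology: striation
# spacing -> growth rate per cycle -> Paris-law inversion -> delta K, and
# H/s -> R via a calibration curve.  The Paris constants C, m and the linear
# H/s-to-R calibration (a, b) are hypothetical placeholders.

C, m = 2.0e-10, 3.0          # hypothetical Paris constants (m/cycle, MPa*sqrt(m))
a, b = 0.05, 0.30            # hypothetical calibration  H/s = a + b*R

def estimate_stress_intensities(spacing_m, height_m):
    da_dN = spacing_m                      # one striation per load cycle assumed
    delta_K = (da_dN / C) ** (1.0 / m)     # invert Paris: da/dN = C * dK**m
    R = ((height_m / spacing_m) - a) / b   # invert the H/s calibration
    K_max = delta_K / (1.0 - R)            # dK = K_max - K_min = K_max*(1 - R)
    return delta_K, R, K_max, R * K_max

dK, R, Kmax, Kmin = estimate_stress_intensities(spacing_m=2.0e-7, height_m=5.0e-8)
print(f"delta K = {dK:.1f} MPa sqrt(m), R = {R:.2f}, "
      f"K_max = {Kmax:.1f}, K_min = {Kmin:.1f}")
```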

Relevance: 20.00%

Abstract:

The aim of this study was to compare REML/BLUP and least squares procedures for the prediction and estimation of genetic parameters and breeding values in soybean progenies. F(2:3) and F(4:5) progenies were evaluated in the 2005/06 growing season, and the F(2:4) and F(4:6) generations derived from them were evaluated in 2006/07. These progenies originated from two semi-early experimental lines that differ in grain yield. The experiments were conducted in a lattice design, and plots consisted of a 2 m row, spaced 0.5 m apart. Grain yield per plot was the evaluated trait. Early selection was observed to be more efficient for discriminating the best lines from the F(4) generation onwards. No practical differences were observed between the least squares and REML/BLUP procedures for the models and simplifications of REML/BLUP used here.
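
To make the comparison concrete, the sketch below contrasts the least-squares predictor of a progeny effect (its phenotypic deviation from the overall mean) with the BLUP predictor under a simple one-way random-effects model with known variance components, which shrinks that deviation by r·σ²g/(r·σ²g + σ²e). The variances and data are made up; the study's REML/BLUP analysis additionally estimates the variance components and accounts for the lattice design. With balanced data, as here, the two predictors rank progenies identically, which is consistent with the reported lack of practical differences.

```python
import numpy as np

# Minimal contrast between least-squares and BLUP progeny predictions under a
# one-way random-effects model y_ij = mu + g_i + e_ij with known variances.
# Variances and data are made up; the study's REML/BLUP analysis also
# estimates the variance components and accounts for the lattice design.

rng = np.random.default_rng(4)
n_prog, reps = 8, 3
sigma2_g, sigma2_e = 0.4, 1.2                      # assumed variance components

g = rng.normal(0.0, np.sqrt(sigma2_g), n_prog)     # true progeny effects
y = 3.0 + g[:, None] + rng.normal(0.0, np.sqrt(sigma2_e), (n_prog, reps))

means = y.mean(axis=1)
ls_pred = means - means.mean()                     # least-squares (unshrunken)
shrink = reps * sigma2_g / (reps * sigma2_g + sigma2_e)
blup_pred = shrink * ls_pred                       # BLUP shrinks toward zero

print("shrinkage factor:", round(shrink, 3))
print("LS ranking  :", np.argsort(-ls_pred))
print("BLUP ranking:", np.argsort(-blup_pred))     # identical here; values differ
```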

Relevance: 20.00%

Abstract:

This paper presents a new statistical algorithm to estimate rainfall over the Amazon Basin region using the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm relies on empirical relationships, derived for different raining-type systems, between coincident measurements of surface rainfall rate and 85-GHz polarization-corrected brightness temperature as observed by the precipitation radar (PR) and TMI on board the TRMM satellite. The scheme includes rain/no-rain area delineation (screening) and system-type classification routines for rain retrieval. The algorithm is validated against independent measurements of TRMM-PR and S-band dual-polarization Doppler radar (S-Pol) surface rainfall data for two different periods. Moreover, the performance of this rainfall estimation technique is evaluated against well-known methods, namely the TRMM-2A12 [the Goddard profiling algorithm (GPROF)], the Goddard scattering algorithm (GSCAT), and the National Environmental Satellite, Data, and Information Service (NESDIS) algorithms. The proposed algorithm shows a normalized bias of approximately 23% for both the PR and S-Pol ground-truth datasets and a mean error of 0.244 mm h⁻¹ (PR) and −0.157 mm h⁻¹ (S-Pol). For rain volume estimates using PR as reference, a correlation coefficient of 0.939 and a normalized bias of 0.039 were found. With respect to rainfall distributions and rain area comparisons, the results showed that the proposed formulation is efficient and compatible with the physics and dynamics of the observed systems over the area of interest. Among the other algorithms, GSCAT presented low normalized bias for rain areas and rain volume [0.346 (PR) and 0.361 (S-Pol)], and GPROF showed a rainfall distribution similar to those of the PR and S-Pol but with a bimodal shape. Last, the five algorithms were evaluated during the TRMM Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) 1999 field campaign to verify the precipitation characteristics observed during the easterly and westerly Amazon wind-flow regimes. The proposed algorithm presented a cumulative rainfall distribution similar to the observations during the easterly regime, but it underestimated rainfall during the westerly period for rainfall rates above 5 mm h⁻¹. NESDIS(1) overestimated in both wind regimes but gave the best representation of the westerly period. NESDIS(2), GSCAT and GPROF underestimated in both regimes, but GPROF was closest to the observations during the easterly flow.
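
The heart of the scheme is a per-class empirical relation between 85-GHz PCT and surface rain rate, together with a screening step. The sketch below fits a power-law-type relation rain = a·(T0 − PCT)^b to synthetic matched PR/TMI samples for a single system class and applies a simple screening threshold; the functional form, the threshold T0 and the data are placeholders, not the relations derived in the paper.

```python
import numpy as np

# Sketch of the core idea: an empirical relation, per raining-system class,
# between 85-GHz polarization-corrected brightness temperature (PCT) and
# surface rain rate, plus a screening threshold.  The functional form
# rain = a*(T0 - PCT)**b, the threshold T0 and the synthetic matched samples
# are placeholders, not the relations actually derived in the paper.

rng = np.random.default_rng(5)
T0 = 255.0                                    # screening threshold (K), assumed

# synthetic "matched" PR rain rates and TMI PCTs for one system class
rain_pr = rng.gamma(shape=2.0, scale=3.0, size=400) + 0.1          # mm/h
pct = T0 - 4.0 * rain_pr ** 0.8 * np.exp(0.05 * rng.standard_normal(400))

# calibration: fit log(rain) = log(a) + b*log(T0 - PCT) by least squares
X = np.column_stack([np.ones(400), np.log(T0 - pct)])
coef, *_ = np.linalg.lstsq(X, np.log(rain_pr), rcond=None)
a, b = np.exp(coef[0]), coef[1]

def retrieve(pct_obs):
    """Screening + retrieval for new PCT observations."""
    depressed = np.maximum(T0 - pct_obs, 0.0)
    return np.where(depressed > 0.0, a * depressed ** b, 0.0)

print(f"fitted a = {a:.3f}, b = {b:.3f}")
print("retrieved rain (mm/h) for PCT = 250, 230, 200 K:",
      np.round(retrieve(np.array([250.0, 230.0, 200.0])), 2))
```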

Relevance: 20.00%

Abstract:

The VISTA near-infrared survey of the Magellanic System (VMC) will provide deep YJKs photometry reaching stars at the oldest turn-off point throughout the Magellanic Clouds (MCs). As part of the preparation for the survey, we aim to assess the accuracy in the star formation history (SFH) that can be expected from VMC data, in particular for the Large Magellanic Cloud (LMC). To this aim, we first simulate VMC images containing not only the LMC stellar populations but also the foreground Milky Way (MW) stars and background galaxies. The simulations cover the whole range of density of LMC field stars. We then perform aperture photometry on these simulated images, assess the expected levels of photometric errors and incompleteness, and apply the classical technique of SFH recovery based on the reconstruction of colour-magnitude diagrams (CMD) via the minimisation of a chi-squared-like statistic. We verify that the foreground MW stars are accurately recovered by the minimisation algorithms, whereas the background galaxies can be largely eliminated from the CMD analysis owing to their particular colours and morphologies. We then evaluate the expected errors in the recovered star formation rate as a function of stellar age, SFR(t), starting from models with a known age-metallicity relation (AMR). It turns out that, for a given sky area, the random errors for ages older than ~0.4 Gyr seem to be independent of the crowding. This can be explained by a counterbalancing effect between the loss of stars from a decrease in the completeness and the gain of stars from an increase in the stellar density. For a spatial resolution of ~0.1 deg², the random errors in SFR(t) will be below 20% for this wide range of ages. On the other hand, owing to the lower stellar statistics for stars younger than ~0.4 Gyr, the outer LMC regions will require larger areas to achieve the same level of accuracy in the SFR(t). If we consider the AMR as unknown, the SFH-recovery algorithm is able to accurately recover the input AMR, at the price of an increase in the random errors in the SFR(t) by a factor of about 2.5. Experiments of SFH recovery performed for varying distance modulus and reddening indicate that these parameters can be determined with (relative) accuracies of Δ(m−M)₀ ~ 0.02 mag and ΔE(B−V) ~ 0.01 mag for each individual field over the LMC. The propagation of these errors into the SFR(t) implies systematic errors below 30%. This level of accuracy in the SFR(t) can reveal significant imprints of the dynamical evolution of this unique and nearby stellar system, as well as possible signatures of the past interaction between the MCs and the MW.
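
The classical SFH-recovery step amounts to modelling the observed Hess diagram (the binned CMD) as a non-negative linear combination of partial model CMDs, one per age bin, and minimising a chi-squared-like statistic with respect to the coefficients, which are the SFR(t) values. The sketch below solves this with a weighted non-negative least-squares fit on synthetic histograms; the partial models and "observed" counts are stand-ins, not the VMC simulations described in the text.

```python
import numpy as np
from scipy.optimize import nnls

# Minimal sketch of the classical SFH-recovery step: model the observed Hess
# diagram (binned CMD) as a non-negative combination of partial model CMDs,
# one per age bin, and solve for the star-formation rates by minimising a
# chi-squared-like statistic.  Partial models and "observed" counts are
# synthetic stand-ins.

rng = np.random.default_rng(6)
n_bins, n_ages = 300, 10                       # flattened CMD bins, age bins

partial_cmds = rng.random((n_bins, n_ages))    # expected counts per unit SFR
sfr_true = rng.uniform(0.0, 2.0, n_ages)       # input SFR(t)
expected = partial_cmds @ sfr_true
observed = rng.poisson(100.0 * expected) / 100.0   # well-populated CMD

# weight bins by ~1/sigma (Poisson) to get a chi-squared-like objective
sigma = np.sqrt(np.maximum(expected, 1e-3))
sfr_fit, _ = nnls(partial_cmds / sigma[:, None], observed / sigma)

print("max |SFR_fit - SFR_true| =", float(np.abs(sfr_fit - sfr_true).max()))
```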

Relevance: 20.00%

Abstract:

The reverse engineering problem addressed in the present research consists of estimating the thicknesses and the optical constants of two thin films deposited on a transparent substrate using only transmittance data through the whole stack. No functional dispersion relation assumptions are made on the complex refractive index. Instead, minimal physical constraints are employed, as in previous works by some of the authors in which only one film was considered in the retrieval algorithm. To our knowledge this is the first report on the retrieval of the optical constants and the thicknesses of multiple-film structures using only transmittance data that does not make use of dispersion relations. The same methodology may be used if the available data correspond to normal reflectance. The software used in this work is freely available through the PUMA Project web page (http://www.ime.usp.br/~egbirgin/puma/). (C) 2008 Optical Society of America
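
As a heavily simplified illustration of this kind of inverse problem, the sketch below considers a single homogeneous film (constant n, k) on a semi-infinite transparent substrate, computes the normal-incidence transmittance with the standard characteristic-matrix method (neglecting the substrate back face), and fits (n, k, d) to the transmittance curve by bounded least squares. The pointwise, constraint-based PUMA formulation for two films is considerably more involved, and all values below are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

# Heavily simplified sketch of the inverse problem: a single homogeneous film
# (constant n, k) on a semi-infinite transparent substrate, with normal-
# incidence transmittance from the standard characteristic-matrix method
# (substrate back face neglected).  PUMA retrieves pointwise, constrained
# n(lambda), k(lambda) for two films, which is far more involved.

def transmittance(wl, n, k, d, n_sub=1.52):
    N1 = n - 1j * k                           # film complex refractive index
    delta = 2.0 * np.pi * N1 * d / wl
    B = np.cos(delta) + 1j * np.sin(delta) * n_sub / N1
    C = 1j * N1 * np.sin(delta) + np.cos(delta) * n_sub
    return 4.0 * n_sub / np.abs(B + C) ** 2   # ambient admittance = 1

wl = np.linspace(400.0, 900.0, 200)                      # wavelengths in nm
T_meas = transmittance(wl, n=2.1, k=0.01, d=350.0)       # synthetic "data"
T_meas += 0.002 * np.random.default_rng(7).standard_normal(wl.size)

fit = least_squares(lambda p: transmittance(wl, *p) - T_meas,
                    x0=[2.0, 0.02, 320.0],
                    bounds=([1.0, 0.0, 50.0], [3.0, 0.5, 1000.0]))
print("recovered (n, k, d):", np.round(fit.x, 3))
```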

Relevance: 20.00%

Abstract:

We consider the problem of interaction neighborhood estimation from the partial observation of a finite number of realizations of a random field. We introduce a model selection rule to choose estimators of conditional probabilities among natural candidates. Our main result is an oracle inequality satisfied by the resulting estimator. We then use this selection rule in a two-step procedure to evaluate the interacting neighborhoods: the selection rule selects a small prior set of possible interacting points, and a cutting step removes the irrelevant points from this prior set. We also prove that Ising models satisfy the assumptions of the main theorems, without restrictions on the temperature, on the structure of the interaction graph or on the range of the interactions. This therefore provides a large class of applications for our results. We give a computationally efficient procedure for these models. Finally, we show the practical efficiency of our approach in a simulation study.
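
A toy sketch of the two-step idea on binary samples follows: estimate which candidate sites influence the conditional law of a fixed target site and, in a cutting step, discard those whose influence falls below a cutoff. The influence statistic used here (a crude marginal contrast of empirical conditional frequencies), the cutoff and the synthetic data are illustrative only, and do not implement the penalized selection rule or the oracle bounds analysed in the paper.

```python
import numpy as np

# Toy sketch of the two-step idea: estimate which sites influence the
# conditional law of a fixed target site and cut the irrelevant ones with a
# threshold.  The influence statistic (a crude marginal contrast), the cutoff
# and the synthetic data are illustrative only.

rng = np.random.default_rng(8)
n_samples, n_candidates, beta_coef = 5000, 6, 0.8

# candidate sites are i.i.d. +/-1 spins; the target site x0 truly depends on
# the first two candidate sites (columns 0 and 1)
others = rng.choice([-1, 1], size=(n_samples, n_candidates))
p_one = 1.0 / (1.0 + np.exp(-beta_coef * (others[:, 0] + others[:, 1])))
x0 = np.where(rng.random(n_samples) < p_one, 1, -1)

cutoff = 0.1
selected = []
for j in range(n_candidates):
    p_plus = (x0[others[:, j] == 1] == 1).mean()
    p_minus = (x0[others[:, j] == -1] == 1).mean()
    if abs(p_plus - p_minus) > cutoff:        # cutting step
        selected.append(j)

print("selected neighbours (candidate indices):", selected)   # expect [0, 1]
```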