216 results for vector error correction


Relevance:

20.00%

Publisher:

Abstract:

Purpose: In animal models, hemi-field deprivation results in localised, graded vitreous chamber elongation and presumably deprivation-induced localised changes in retinal processing. The aim of this research was to determine whether there are variations in ERG responses across the retina in normal chick eyes, and to examine the effect of hemi-field and full-field deprivation on ERG responses across the retina and at earlier times than have previously been examined electrophysiologically. Methods: Chicks either were untreated or wore monocular full-diffusers or half-diffusers (depriving nasal retina) (n = 6–8 per group) from day 8. mfERG responses were measured using the VERIS mfERG system across the central 18.2° × 16.7° (H × V) field. The stimulus consisted of 61 unscaled hexagons, with each hexagon modulated between black and white according to a pseudorandom binary m-sequence. The mfERG was measured on day 12 in untreated chicks, following 4 days of hemi-field diffuser wear, and 2, 48 and 96 h after application of full-field diffusers. Results: The ERG response of untreated chick eyes did not vary across the measured field; there was no effect of retinal location on the N1-P1 amplitude (p = 0.108) or on P1 implicit time (p > 0.05). This finding is consistent with retinal ganglion cell density of the chick varying by only a factor of two across the entire retina. Half-diffusers produced a ramped retina and a graded effect of negative lens correction (p < 0.0001); changes in retinal processing were localised. The untreated retina showed increasing complexity of the ERG waveform with development; form-deprivation prevented the increasing complexity of the response at the 2, 48 and 96 h measurement times and produced alterations in response timing. Conclusions: Form-deprivation, with its concomitant loss of image contrast and high spatial frequency content, prevented development of the ERG responses, consistent with a disruption of development of retinal feedback systems.
The characterisation of ERG responses in normal and deprived chick eyes across the retina allows the assessment of concurrent visual and retinal manipulations in this model. (Ophthalmic & Physiological Optics © 2013 The College of Optometrists.)
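The pseudorandom binary m-sequence that drives each hexagon is a maximal-length sequence, conventionally generated with a linear-feedback shift register (LFSR). A minimal sketch, assuming a 4-bit register for brevity (the register length and feedback taps used by the VERIS system are not stated in the abstract):

```python
def m_sequence(taps, nbits):
    """Generate one period of a binary m-sequence from a Fibonacci LFSR.

    taps: feedback tap positions (1-indexed). For a primitive feedback
    polynomial the sequence has the maximal period 2**nbits - 1 and
    contains exactly 2**(nbits - 1) ones.
    """
    state = [1] * nbits              # any non-zero seed works
    out = []
    for _ in range(2 ** nbits - 1):
        out.append(state[-1])        # output the last register bit
        fb = 0
        for t in taps:
            fb ^= state[t - 1]       # XOR the tapped bits
        state = [fb] + state[:-1]    # shift, feeding back at the front
    return out

# Taps [4, 3] correspond to a primitive 4-bit polynomial, so the
# period is 2**4 - 1 = 15.
seq = m_sequence([4, 3], 4)
```

In mfERG practice every stimulus element is driven by a time-shifted copy of the same m-sequence, which is what allows the 61 local responses to be separated by cross-correlation.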

Relevance:

20.00%

Publisher:

Abstract:

iTRAQ (isobaric tags for relative or absolute quantitation) is a mass spectrometry technology that allows quantitative comparison of protein abundance by measuring peak intensities of reporter ions released from iTRAQ-tagged peptides by fragmentation during MS/MS. However, current data analysis techniques for iTRAQ struggle to report reliable relative protein abundance estimates and suffer from problems of precision and accuracy. The precision of the data is affected by variance heterogeneity: low-signal data have higher relative variability; however, low-abundance peptides dominate data sets. Accuracy is compromised as ratios are compressed toward 1, leading to underestimation of the ratio. This study investigated both issues and proposed a methodology that combines the peptide measurements to give a robust protein estimate even when the data for the protein are sparse or at low intensity. Our data indicated that ratio compression arises from contamination during precursor ion selection, which occurs at a consistent proportion within an experiment and thus results in a linear relationship between expected and observed ratios. We proposed that a correction factor can be calculated from proteins spiked at known ratios. We then demonstrated that variance heterogeneity is present in iTRAQ data sets irrespective of the analytical package, LC-MS/MS instrumentation, and iTRAQ labeling kit (4-plex or 8-plex) used. We proposed using an additive-multiplicative error model for peak intensities in MS/MS quantitation and demonstrated that a variance-stabilizing normalization is able to address the error structure and stabilize the variance across the entire intensity range. The resulting uniform variance structure simplifies the downstream analysis.
Heterogeneity of variance consistent with an additive-multiplicative model has been reported in other MS-based quantitation including fields outside of proteomics; consequently the variance-stabilizing normalization methodology has the potential to increase the capabilities of MS in quantitation across diverse areas of biology and chemistry.
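The additive-multiplicative error model and the variance-stabilizing normalization described above can be illustrated with a generalized-log (arsinh-type) transform. A minimal sketch with simulated intensities; the noise levels are invented for illustration, and real VSN estimates the transform parameters from the data:

```python
import math
import random

def glog(y, c=1.0):
    """Generalized log (arsinh-type) transform used in variance-stabilizing
    normalization: behaves like log(2y) for large y but stays finite and
    smooth near zero, where additive noise dominates."""
    return math.log(y + math.sqrt(y * y + c))

def simulate(mu, n=4000):
    """Additive-multiplicative error model: y = mu*exp(eta) + eps."""
    return [mu * math.exp(random.gauss(0.0, 0.1)) + random.gauss(0.0, 5.0)
            for _ in range(n)]

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

random.seed(0)
low, high = simulate(50.0), simulate(5000.0)

# Raw intensities: absolute spread grows with the mean ...
raw_ratio = sd(high) / sd(low)

# ... after the glog transform the spread is roughly constant.
scale = 5.0 / 0.1      # additive sd / multiplicative sd (known here;
                       # in practice this is fitted from the data)
g_ratio = (sd([glog(y, scale * scale) for y in high])
           / sd([glog(y, scale * scale) for y in low]))
```

With these settings the raw standard deviations differ by well over an order of magnitude, while the transformed ones are comparable, which is the uniform variance structure the abstract refers to.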

Relevance:

20.00%

Publisher:

Abstract:

Plants transformed with Agrobacterium frequently contain T-DNA concatemers with direct-repeat (d/r) or inverted-repeat (i/r) transgene integrations, and these repetitive T-DNA insertions are often associated with transgene silencing. To facilitate the selection of transgenic lines with simple T-DNA insertions, we constructed a binary vector (pSIV) based on the principle of hairpin RNA (hpRNA)-induced gene silencing. The vector is designed so that any transformed cells that contain more than one insertion per locus should generate hpRNA against the selectable marker gene, leading to its silencing. These cells should, therefore, be sensitive to the selective agent and less likely to regenerate. Results from Arabidopsis and tobacco transformation showed that pSIV gave considerably fewer transgenic lines with repetitive insertions than did a conventional T-DNA vector (pCON). Furthermore, the transgene was more stably expressed in the pSIV plants than in the pCON plants. Rescue of plant DNA flanking sequences from pSIV plants was significantly more frequent than from pCON plants, suggesting that pSIV is potentially useful for T-DNA tagging. Our results revealed a perfect correlation between the presence of tail-to-tail inverted repeats and transgene silencing, supporting the view that read-through hpRNA transcripts derived from i/r T-DNA insertions are a primary inducer of transgene silencing in plants. © CSIRO 2005.

Relevance:

20.00%

Publisher:

Abstract:

We have tested a methodology for the elimination of the selectable marker gene after Agrobacterium-mediated transformation of barley. This involves segregation of the selectable marker gene away from the gene of interest following co-transformation using a plasmid carrying two T-DNAs, located adjacent to each other with no intervening region. A standard binary transformation vector was modified by insertion of a small section comprising an additional left and right T-DNA border, so that the selectable marker gene and the site for insertion of the gene of interest (GOI) were each flanked by a left and right border. Using this vector, three different GOIs were transformed into barley. Analysis of transgene inheritance was facilitated by a novel and rapid assay utilizing PCR amplification from macerated leaf tissue. Co-insertion was observed in two-thirds of transformants, and among these approximately one quarter had transgene inserts which segregated in the next generation to yield selectable marker-free (SMF) transgenic plants. Insertion of non-T-DNA plasmid sequences was observed in only one of fourteen SMF lines tested. This technique thus provides a workable system for generating transgenic barley free from selectable marker genes, thereby obviating public concerns regarding proliferation of these genes.

Relevance:

20.00%

Publisher:

Abstract:

Black et al. (2004) identified a systematic difference between LA–ICP–MS and TIMS measurements of ²⁰⁶Pb/²³⁸U in zircons, which they correlated with the incompatible trace element content of the zircon. We show that the offset between the LA–ICP–MS and TIMS measured ²⁰⁶Pb/²³⁸U correlates more strongly with the total radiogenic Pb than with any incompatible trace element. This suggests that the cause of the ²⁰⁶Pb/²³⁸U offset is related to differences in the radiation damage (alpha dose) between the reference and unknowns. We test this hypothesis in two ways. First, we show that there is a strong correlation between the difference in the LA–ICP–MS and TIMS measured ²⁰⁶Pb/²³⁸U and the difference in the alpha dose received by the unknown and reference zircons. The LA–ICP–MS ages for the zircons we have dated range from 5.1% younger to 2.1% older than their TIMS ages, depending on whether the unknown or the reference received the higher alpha dose. Second, we show that by annealing both reference and unknown zircons at 850 °C for 48 h in air we can eliminate the alpha-dose-induced differences in measured ²⁰⁶Pb/²³⁸U. This was achieved by analyzing six reference zircons a minimum of 16 times in two round robin experiments: the first consisting of unannealed zircons and the second of annealed grains. The maximum offset between the LA–ICP–MS and TIMS measured ²⁰⁶Pb/²³⁸U for the unannealed zircons was 2.3%, which reduced to 0.5% for the annealed grains, as predicted by within-session precision based on counting statistics. Annealing unknown and reference zircons to the same state prior to analysis holds the promise of reducing the 3% external error for the measurement of ²⁰⁶Pb/²³⁸U of zircon by LA–ICP–MS, indicated by Klötzli et al. (2009), to better than 1%, but more analyses of annealed zircons by other laboratories are required to evaluate the true potential of the annealing method.
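The reported linear relationship between the alpha-dose difference and the 206Pb/238U offset implies a simple correction scheme: fit a line through reference zircons whose TIMS ratios are known, then remove the predicted bias from unknowns. A minimal sketch; the calibration numbers below are invented for illustration and are not values from this study:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical calibration data: difference in alpha dose between
# unknown and reference vs. per-cent offset of the LA-ICP-MS
# 206Pb/238U from the TIMS value.
dose_diff = [-4.0, -2.0, 0.0, 1.0, 3.0]
offset_pc = [-3.1, -1.6, 0.1, 0.9, 2.4]

slope, intercept = linear_fit(dose_diff, offset_pc)

def corrected_ratio(measured, dose_unknown, dose_reference):
    """Remove the dose-dependent bias from a measured 206Pb/238U."""
    bias = (slope * (dose_unknown - dose_reference) + intercept) / 100.0
    return measured / (1.0 + bias)
```

Annealing, as described above, removes the need for this correction altogether by bringing reference and unknown to the same damage state.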

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes techniques to improve the performance of i-vector based speaker verification systems when only short utterances are available. Short-utterance i-vectors vary with the speaker, session variations, and the phonetic content of the utterance. Well-established methods such as linear discriminant analysis (LDA), source-normalized LDA (SN-LDA) and within-class covariance normalisation (WCCN) exist for compensating for session variation, but we have identified the variability introduced by phonetic content as an additional source of degradation when short-duration utterances are used. To compensate for utterance variations in short-utterance i-vector speaker verification systems using cosine similarity scoring (CSS), we introduce a short utterance variance normalization (SUVN) technique and a short utterance variance (SUV) modelling approach at the i-vector feature level. A combination of SUVN with LDA and SN-LDA is proposed to compensate for the session and utterance variations, and is shown to improve on the traditional approach of using LDA and/or SN-LDA followed by WCCN. An alternative approach is also introduced, using probabilistic linear discriminant analysis (PLDA) to model the SUV directly. The combination of SUVN, LDA and SN-LDA followed by SUV PLDA modelling provides an improvement over the baseline PLDA approach. We also show that for this combination of techniques, the utterance variation information needs to be artificially added to full-length i-vectors for PLDA modelling.
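Cosine similarity scoring, the back-end referred to above, compares a test i-vector against an enrolment i-vector by the angle between them, ignoring magnitude. A minimal sketch with toy 4-dimensional vectors; real i-vectors are typically several hundred dimensional and are scored after LDA/SN-LDA projection and WCCN whitening:

```python
import math

def cosine_score(w_target, w_test):
    """Cosine similarity scoring (CSS) between two i-vectors:
    the cosine of the angle between them."""
    dot = sum(a * b for a, b in zip(w_target, w_test))
    na = math.sqrt(sum(a * a for a in w_target))
    nb = math.sqrt(sum(b * b for b in w_test))
    return dot / (na * nb)

# Hypothetical 4-dimensional i-vectors for illustration.
enrol = [0.9, 0.1, -0.3, 0.2]
same  = [0.8, 0.2, -0.2, 0.1]    # same speaker, different session
other = [-0.5, 0.7, 0.4, -0.1]   # different speaker

s_same = cosine_score(enrol, same)
s_other = cosine_score(enrol, other)
# A verification decision thresholds the score: accept if above.
```

Because the score depends only on direction, any variability that perturbs i-vector direction, such as the phonetic-content variation of short utterances, degrades it directly, which is what SUVN is designed to normalize away.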

Relevance:

20.00%

Publisher:

Abstract:

The detection and correction of defects remains among the most time-consuming and expensive aspects of software development. Extensive automated testing and code inspections may mitigate their effect, but some code fragments are necessarily more likely to be faulty than others, and automated identification of fault-prone modules helps to focus testing and inspections, thus limiting wasted effort and potentially improving detection rates. However, software metrics data are often extremely noisy, with enormous imbalances in the size of the positive and negative classes. In this work, we present a new approach to predictive modelling of fault-proneness in software modules, introducing a new feature representation to overcome some of these issues. This rank sum representation offers improved, or at worst comparable, performance relative to earlier approaches on standard data sets, and readily allows the user to choose an appropriate trade-off between precision and recall to optimise inspection effort for different testing environments. The method is evaluated using the NASA Metrics Data Program (MDP) data sets, and performance is compared with existing studies based on the Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers, and with our own comprehensive evaluation of these methods.
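The abstract does not give the exact construction of the rank sum representation, so the sketch below shows one natural reading, with all names and toy metrics assumed for illustration: replace each raw software metric by its rank within the data set and sum the ranks per module, which damps the heavy-tailed noise typical of metrics data:

```python
def rank_transform(values):
    """Replace each raw metric value with its rank within the data
    set; tied values share the mean of the ranks they span."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0          # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def rank_sum_features(rows):
    """Per-module feature: the sum of that module's ranks across metrics."""
    cols = list(zip(*rows))                     # one column per metric
    rank_cols = [rank_transform(list(c)) for c in cols]
    return [sum(r) for r in zip(*rank_cols)]    # row-wise rank sum

# Hypothetical metric rows: (LOC, cyclomatic complexity, fan-out).
modules = [(120, 4, 2), (950, 31, 9), (300, 4, 5)]
scores = rank_sum_features(modules)
```

A classifier (or a simple threshold on the score) then trades precision against recall by moving the cut-off, which is the tunable inspection-effort trade-off the abstract mentions.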

Relevance:

20.00%

Publisher:

Abstract:

Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is only as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seem to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data. © 2009 Wiley Periodicals, Inc.
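The mapping step described above reduces, at its core, to a rigid-body coordinate transformation from the sensor frame to the navigation frame; the paper's error model propagates uncertainty through exactly this chain. A minimal sketch of the transformation itself, with invented extrinsic calibration values and only a yaw rotation for brevity:

```python
import math

def rot_z(yaw):
    """3x3 rotation matrix about the z-axis."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def transform(point, rotation, translation):
    """Map a point from the sensor frame to the navigation frame:
    p_nav = R * p_sensor + t (a homogeneous rigid-body transform)."""
    return [sum(rotation[i][j] * point[j] for j in range(3)) + translation[i]
            for i in range(3)]

# Hypothetical extrinsics: the sensor sits 1.5 m ahead of the
# vehicle origin and is yawed 90 degrees to the left.
R = rot_z(math.pi / 2)
t = [1.5, 0.0, 0.0]

# A range return 2 m straight ahead of the sensor lands 1.5 m
# forward and 2 m to the left in the navigation frame.
p_nav = transform([2.0, 0.0, 0.0], R, t)
```

Errors in R and t (miscalibration) or in the timestamp at which they are evaluated (temporal error) bias every mapped point, which is why the extrinsic calibration and timing analysis matter.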

Relevance:

20.00%

Publisher:

Abstract:

Scoliosis is a deformity of the spine which affects children and adolescents, and remains a challenge to treat. This study measured the forces used during surgery to correct scoliosis and studied changes to spinal mechanics from the implantation of metal rods used to hold the spine straight. The results of this study will help surgeons and engineers understand how to straighten the spine more efficiently to provide patients with better outcomes.

Relevance:

20.00%

Publisher:

Abstract:

Suppose two parties, holding vectors A = (a_1, a_2, ..., a_n) and B = (b_1, b_2, ..., b_n) respectively, wish to know whether a_i > b_i for all i, without disclosing any private input. This problem is called the vector dominance problem, and is closely related to the well-studied problem of securely comparing two numbers (Yao’s millionaires problem). In this paper, we propose several protocols for this problem, which improve upon existing protocols in round complexity or communication/computation complexity.
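For reference, the plaintext functionality the secure protocols compute is just the strict component-wise comparison; the cryptographic work lies in evaluating this single bit without either party revealing its vector:

```python
def dominates(a, b):
    """Vector dominance predicate: True iff a_i > b_i for every i.
    A secure protocol computes this same output bit obliviously,
    e.g. by combining n millionaires-style comparisons without
    revealing any individual comparison result."""
    return all(ai > bi for ai, bi in zip(a, b))
```

Note that the individual comparison outcomes must stay hidden: learning which single coordinate failed would already leak information about the other party's vector.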

Relevance:

20.00%

Publisher:

Abstract:

This study presents an acoustic emission (AE) based fault diagnosis method for low-speed bearings using a multi-class relevance vector machine (RVM). A low-speed test rig was developed to simulate various defects at shaft speeds as low as 10 rpm under several loading conditions. The data were acquired using an AE sensor with the test bearing operating at a constant load (5 kN) and with a speed range from 20 to 80 rpm. This study is aimed at finding a reliable method for fault diagnosis of low-speed machines based on the AE signal. Component analysis was performed to extract the bearing features and to reduce the dimensionality of the original feature data. The results show that the multi-class RVM offers a promising approach for fault diagnosis of low-speed machines.
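The "component analysis" step used above for dimensionality reduction can be sketched as a principal component extraction via power iteration; this is an assumption about the flavour of component analysis used, and the AE features below (RMS, kurtosis) are invented for illustration:

```python
import math
import random

def first_pc(data, iters=200):
    """Leading principal component via power iteration on the
    (mean-centred) covariance matrix, applied without forming it."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        proj = [sum(xi[j] * v[j] for j in range(d)) for xi in x]   # X v
        w = [sum(proj[i] * x[i][j] for i in range(n)) / n          # X^T X v / n
             for j in range(d)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v, means

def project(row, v, means):
    """Reduce one feature vector to its score on the leading component."""
    return sum((row[j] - means[j]) * v[j] for j in range(len(v)))

# Hypothetical AE features: (RMS, kurtosis) pairs that are strongly
# correlated, so a single component captures most of the spread.
random.seed(1)
feats = [(r, 2.0 * r + random.gauss(0.0, 0.1))
         for r in [random.uniform(0.0, 1.0) for _ in range(200)]]
v, means = first_pc([list(f) for f in feats])
```

The reduced scores would then be fed to the multi-class RVM; with correlated features like these, one component retains almost all of the variance.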

Relevance:

20.00%

Publisher:

Abstract:

Binary Ti vectors are the plasmid vectors of choice in Agrobacterium-mediated plant transformation protocols. The pGreen series of binary Ti vectors is configured for ease of use and to meet the demands of a wide range of transformation procedures for many plant species. This plasmid system allows any arrangement of selectable marker and reporter gene at the right and left T-DNA borders without compromising the choice of restriction sites for cloning, since the pGreen cloning sites are based on the well-known pBluescript general vector plasmids. Its size and copy number in Escherichia coli offer increased efficiencies in routine in vitro recombination procedures. pGreen can replicate in Agrobacterium only if another plasmid, pSoup, is co-resident in the same strain; pSoup provides replication functions in trans for pGreen. The removal of the RepA and Mob functions has enabled the size of pGreen to be kept to a minimum. Versions of pGreen have been used to transform several plant species with the same efficiencies as other binary Ti vectors. Information on the pGreen plasmid system is supplemented by an Internet site (http://www.pgreen.ac.uk), through which comprehensive information, protocols, order forms and lists of different pGreen marker gene permutations can be found.

Relevance:

20.00%

Publisher:

Abstract:

Introduction Due to their high spatial resolution, diodes are often used for small field relative output factor measurements. However, a field-size-specific correction factor [1] is required to correct for diode detector over-response at small field sizes. A recent Monte Carlo based study has shown that it is possible to design a diode detector that produces measured relative output factors equivalent to those in water, by introducing an air gap at the upstream end of the diode [2]. The aim of this study was to physically construct this diode by placing an ‘air cap’ on the end of a commercially available diode (the PTW 60016 electron diode). The output factors subsequently measured with the new diode design were compared to current benchmark small field output factor measurements. Methods A water-tight ‘cap’ was constructed so that it could be placed over the upstream end of the diode. The cap could be offset from the end of the diode, thus creating an air gap. The air gap width was the same as the diode width (7 mm), and the thickness of the air gap could be varied. Output factor measurements were made using square field sizes of side length from 5 to 50 mm, using a 6 MV photon beam. The set of output factor measurements was repeated with the air gap thickness set to 0, 0.5, 1.0 and 1.5 mm. The optimal air gap thickness was found in a manner similar to that proposed by Charles et al. [2]. An IBA stereotactic field diode, corrected using Monte Carlo calculated kQclin,Qmsr values [3], was used as the gold standard. Results The optimal air gap thickness for the PTW 60016 electron diode was 1.0 mm. This was close to the Monte Carlo predicted value of 1.15 mm [2]. The sensitivity of the new diode design was independent of field size (kQclin,Qmsr = 1.000 at all field sizes) to within 1%. Discussion and conclusions The work of Charles et al. [2] has been proven experimentally.
An existing commercial diode has been converted into a correction-less small field diode by the simple addition of an ‘air cap’. Applying a cap to create the new diode also makes it dual-purpose: without the cap, it remains an unmodified electron diode.
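The role of the correction factor that the new diode eliminates can be made concrete: in the small-field formalism, a relative output factor is the detector reading ratio multiplied by the field-size-specific factor kQclin,Qmsr, and a 'correction-less' detector is one whose factor is unity at all field sizes. A minimal sketch with invented readings:

```python
def output_factor(reading_clin, reading_ref, k_clin_msr=1.0):
    """Small-field relative output factor: the detector reading ratio
    for the clinical field vs. the reference field, multiplied by the
    field-size-specific correction factor kQclin,Qmsr. For a
    correction-less detector kQclin,Qmsr = 1.000, so the raw reading
    ratio is the output factor."""
    return (reading_clin / reading_ref) * k_clin_msr

# Hypothetical readings for a 5 mm field against a 50 mm reference
# field, for a diode that over-responds in small fields:
raw = output_factor(1.32, 2.00)               # uncorrected ratio
corrected = output_factor(1.32, 2.00, 0.95)   # k < 1 removes over-response
```

For the modified 'air cap' diode described above, the measured kQclin,Qmsr was 1.000 within 1% at all field sizes, so the uncorrected ratio is already the answer.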