990 results for Dimensional Accuracy


Relevance:

30.00%

Publisher:

Abstract:

Microarray data provide quantitative information about the transcription profile of cells. To analyze microarray datasets, bioinformatics researchers have increasingly turned to machine learning methodology. Several machine learning approaches are widely used to classify and mine biological datasets. However, many gene expression datasets are of extremely high dimensionality, so traditional machine learning methods cannot be applied effectively and efficiently. This paper proposes a robust algorithm that finds rule groups to classify gene expression datasets. Unlike most classification algorithms, which select dimensions (genes) heuristically to form rule groups that identify classes such as cancerous and normal tissues, our algorithm guarantees finding the best k dimensions (genes), i.e. those most discriminative for separating samples of different classes, to form rule groups for the classification of expression datasets. Our experiments show that the rule groups obtained by our algorithm achieve higher accuracy than other classification approaches.
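
The abstract does not state the scoring criterion used to rank genes; as a minimal sketch of the general idea, assuming a simple t-statistic-style class-separability score (a hypothetical ranking, not necessarily the authors' guaranteed search), the best k discriminative genes could be selected as follows:

```python
import numpy as np

def best_k_genes(X, y, k):
    """Rank genes by a simple two-class separability score and keep the top k.

    X : (n_samples, n_genes) expression matrix
    y : (n_samples,) binary labels, e.g. 0 = normal, 1 = cancerous
    Returns the indices of the k highest-scoring genes.
    """
    a, b = X[y == 0], X[y == 1]
    # t-statistic-like score: mean difference scaled by pooled spread
    score = np.abs(a.mean(axis=0) - b.mean(axis=0)) / \
            np.sqrt(a.var(axis=0) / len(a) + b.var(axis=0) / len(b) + 1e-12)
    return np.argsort(score)[::-1][:k]

# Toy usage: 20 samples, 1000 genes, keep the 10 most discriminative genes
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 1000))
y = np.array([0] * 10 + [1] * 10)
print(best_k_genes(X, y, k=10))
```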

Relevance:

30.00%

Publisher:

Abstract:

Background: The existence of exons and introns has been known for thirty years. Despite this knowledge, there is a lack of formal research into the categorization of exons. Exon taxonomies used by researchers tend to be selected ad hoc or based on an information-poor de facto standard. Exons have been shown to have specific properties and functions based on, among other things, their location and order. These factors should play a role in naming, to increase specificity about which exon type(s) are in question.

Results: POEM (Protein Oriented Exon Monikers) is a new taxonomy focused on protein-proximal exons. It integrates three dimensions of information (Global Position, Regional Position and Region), so its exon categories are based on known statistical exon features. POEM is applied to two congruent untranslated-exon datasets, with the following results. Using the POEM taxonomy, previous wide-ranging estimates of initial 5' untranslated region exons are resolved: according to our datasets, 29–36% of genes have wholly untranslated first exons. Sequences containing untranslated exons are shown to have consistently up to six times more 5' untranslated exons than 3' untranslated exons. Finally, three exon patterns are determined that account for 70% of untranslated-exon genes.

Conclusion: We describe a thorough three-dimensional exon taxonomy called POEM, which is biologically and statistically relevant. No previous taxonomy provides such fine-grained information while still including all valid information dimensions. The use of POEM will improve the accuracy of gene-finder comparisons and analyses by means of a common taxonomy. It will also facilitate unambiguous communication due to its fine granularity.
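
The abstract names the three POEM axes (Global Position, Regional Position, Region) but not the category labels within each axis; a minimal sketch of how a three-dimensional exon label might be represented, with hypothetical category values inferred only loosely from the abstract (they are not the official POEM names), could look like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExonLabel:
    """Three-dimensional exon label in the spirit of POEM.

    The axis names come from the abstract; the example values below
    (e.g. 'first', '5prime_utr', 'untranslated') are hypothetical.
    """
    global_position: str    # e.g. 'first', 'internal', 'last'
    regional_position: str  # e.g. '5prime_utr', 'coding', '3prime_utr'
    region: str             # e.g. 'untranslated', 'partially_translated', 'translated'

# A wholly untranslated first exon (29-36% of genes, per the abstract)
example = ExonLabel(global_position="first",
                    regional_position="5prime_utr",
                    region="untranslated")
print(example)
```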

Relevance:

30.00%

Publisher:

Abstract:

Microarray data provide quantitative information about the transcription profile of cells. To analyse microarray datasets, bioinformatics researchers have increasingly turned to machine learning methodology. Several machine learning approaches are widely used to classify and mine biological datasets. However, many gene expression datasets are of extremely high dimensionality, so traditional machine learning methods cannot be applied effectively and efficiently. This paper proposes a robust algorithm that finds rule groups to classify gene expression datasets. Unlike most classification algorithms, which select dimensions (genes) heuristically to form rule groups that identify classes such as cancerous and normal tissues, our algorithm guarantees finding the best k dimensions (genes) to form rule groups for the classification of expression datasets. Our experiments show that the rule groups obtained by our algorithm achieve higher accuracy than other classification approaches.
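
The abstract does not specify the form of the rules; as a minimal, hypothetical illustration of the rule-group idea (a generic construction, not necessarily the authors'), a handful of pre-selected genes can be turned into per-gene threshold rules that classify a sample by majority vote:

```python
import numpy as np

def build_threshold_rules(X, y, gene_idx):
    """For each selected gene, learn a midpoint threshold and the class
    whose mean expression lies above it. Returns (gene, threshold, class_above) rules."""
    rules = []
    for g in gene_idx:
        m0, m1 = X[y == 0, g].mean(), X[y == 1, g].mean()
        thr = (m0 + m1) / 2.0
        rules.append((g, thr, 1 if m1 > m0 else 0))
    return rules

def classify(sample, rules):
    """Majority vote of the rule group on a single expression profile."""
    votes = [cls if sample[g] > thr else 1 - cls for g, thr, cls in rules]
    return int(round(np.mean(votes)))

# Toy usage with 3 pre-selected genes (indices are illustrative only)
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 50))
X[10:, :3] += 2.0                     # make genes 0-2 informative for class 1
y = np.array([0] * 10 + [1] * 10)
rules = build_threshold_rules(X, y, gene_idx=[0, 1, 2])
print(classify(X[15], rules))         # very likely 1 for this toy data
```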

Relevance:

30.00%

Publisher:

Abstract:

Two-dimensional Principal Component Analysis (2DPCA) is a robust method for face recognition. Much recent research shows that 2DPCA is more reliable than the well-known PCA method in recognising human faces. However, in many cases this method tends to overfit the sample data. In this paper, we propose a novel method named random subspace two-dimensional PCA (RS-2DPCA), which combines the 2DPCA method with the random subspace (RS) technique. RS-2DPCA inherits the advantages of both 2DPCA and the RS technique, so it can avoid the overfitting problem and achieve high recognition accuracy. Experimental results on three benchmark face data sets - the ORL database, the Yale face database and the extended Yale face database B - confirm our hypothesis that RS-2DPCA is superior to 2DPCA itself.
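
A minimal sketch of the core 2DPCA step (an image covariance matrix built directly from the 2D images, then projection onto its leading eigenvectors), with a generic random-subspace wrapper that trains one projector per random subset of pixel columns; this is an assumed ensemble construction for illustration, not necessarily the exact scheme of the paper:

```python
import numpy as np

def twodpca_projector(images, d):
    """2DPCA: leading eigenvectors of G = mean((A - Abar)^T (A - Abar))."""
    A = np.asarray(images, dtype=float)                 # (n, h, w)
    Abar = A.mean(axis=0)
    G = np.mean([(a - Abar).T @ (a - Abar) for a in A], axis=0)   # (w, w)
    vals, vecs = np.linalg.eigh(G)
    return vecs[:, np.argsort(vals)[::-1][:d]]          # (w, d) projection matrix

def rs_2dpca_projectors(images, d=5, n_subspaces=10, subset_frac=0.7, seed=0):
    """Random-subspace ensemble: each member sees a random subset of pixel columns."""
    rng = np.random.default_rng(seed)
    w = images.shape[2]
    members = []
    for _ in range(n_subspaces):
        cols = np.sort(rng.choice(w, size=int(subset_frac * w), replace=False))
        members.append((cols, twodpca_projector(images[:, :, cols], d)))
    return members   # classify e.g. by nearest neighbour per member, then majority vote

# Toy usage: 8 random 32x32 "faces"
faces = np.random.default_rng(2).normal(size=(8, 32, 32))
members = rs_2dpca_projectors(faces)
feats = faces[0][:, members[0][0]] @ members[0][1]      # features of image 0 in member 0
print(feats.shape)                                      # (32, d)
```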

Relevance:

30.00%

Publisher:

Abstract:

Precision edge feature extraction is a very important step in vision. Researchers mainly use step edges to model an edge at the subpixel level. In this paper we describe a new technique for two-dimensional edge feature extraction to subpixel accuracy using a general edge model. Using six basic edge types to model edges, the edge parameters at the subpixel level are extracted by fitting a model to the image signal with a least-squared-error fitting technique.
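
The six edge types are not given in the abstract; as a minimal sketch of the general least-squares idea only, a smoothed step-edge model (an error-function profile, one of the simplest such models) can be fitted to a 1D intensity profile to recover the edge position to subpixel accuracy:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def step_edge(x, x0, amplitude, base, sigma):
    """Smoothed step edge: base + amplitude * (1 + erf((x - x0)/(sqrt(2)*sigma))) / 2."""
    return base + amplitude * 0.5 * (1.0 + erf((x - x0) / (np.sqrt(2.0) * sigma)))

# Synthetic profile with a true edge at x0 = 10.3 pixels plus noise
x = np.arange(21, dtype=float)
rng = np.random.default_rng(0)
signal = step_edge(x, 10.3, 80.0, 20.0, 1.2) + rng.normal(0.0, 1.0, x.size)

# Least-squares fit; popt[0] is the subpixel edge location
popt, _ = curve_fit(step_edge, x, signal, p0=[10.0, 60.0, 30.0, 1.0])
print(f"estimated edge position: {popt[0]:.2f} px")
```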

Relevance:

30.00%

Publisher:

Abstract:

Objective: This study investigated the efficacy of different techniques for joining the fragments of a fractured denture before repair and their effect on the accuracy of repositioning. Materials and methods: Twenty maxillary dentures made with Lucitone 550 heat-cured resin were used. Reference points were determined on the cusps of the teeth with a scanner to allow measurement of the segments. After digitisation, each model was exported to the AutoCAD R14 program and two-dimensional measurements of the distances between the marked points were made. After this initial analysis, the dentures were fractured into two segments using an impact test machine. For the repair, the maxillary dentures were divided into two groups: in the first, the repair was carried out using Kerr's sticky wax; in the second, Super Bonder was used to join the fragments, with subsequent inclusion of DENTSPLY(R) Repair Material resin. After the repair, the points on the maxillary dentures were measured again. The numerical values obtained were tabulated to compare the measurements before fracture and after repair. For statistical analysis, single-factor and two-factor analysis of variance was employed, followed by the Tukey test with a reliability of 95%. Results: The results demonstrated a statistically significant difference between the materials used to join the dentures for repair, with the dentures joined with sticky wax presenting a larger variation in the distances between the points. Conclusion: The variation in the distances between the points is influenced by the joining agent used for the repair.
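
The abstract reports single-factor ANOVA followed by Tukey's test at 95% confidence; a minimal sketch of that comparison, with entirely made-up distance data standing in for the measured variations of the two repair groups and the before-fracture baseline:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical distance variations (mm) between reference points after repair
sticky_wax   = np.array([0.42, 0.55, 0.48, 0.61, 0.50])
super_bonder = np.array([0.18, 0.22, 0.15, 0.25, 0.20])
baseline     = np.array([0.05, 0.08, 0.06, 0.07, 0.04])   # before-fracture reference

# Single-factor ANOVA across the three groups
F, p = f_oneway(sticky_wax, super_bonder, baseline)
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

# Tukey HSD post-hoc test at 95% confidence
values = np.concatenate([sticky_wax, super_bonder, baseline])
groups = ["wax"] * 5 + ["bonder"] * 5 + ["baseline"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```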

Relevance:

30.00%

Publisher:

Abstract:

We have used the adiabatic hyperspherical approach to determine the energies and wave functions of the ground state and first excited states of a two-dimensional D⁻ ion in the presence of a magnetic field. Using a modified hyperspherical angular variable, potential energy curves are obtained analytically, allowing an accurate determination of the energy levels of this system. Upper and lower bounds for the ground-state energy have been determined by a non-adiabatic procedure in order to improve the accuracy of the method. The results are shown to be comparable to the best variational calculations reported in the literature.

Relevance:

30.00%

Publisher:

Abstract:

The selection and use of hard chairside reline resins must be made with regard to dimensional stability, which influences the accuracy of fit of the denture base. This study compared the dimensional change of two hard chairside reline resins (Duraliner II and Kooliner) and one heat-curing denture base resin (Lucitone 550). A stainless steel mold with reference dimensions (AB, CD) was used to obtain the samples. The materials were processed according to the manufacturers' recommendations. Measurements of the dimensions were made after processing and after the samples had been stored in distilled water at 37 °C for eight different periods of time. The data were recorded and then analyzed with analysis of variance. All materials showed shrinkage immediately after processing (p < 0.05). The only resin that exhibited shrinkage after 60 days of storage in water was Duraliner II; these changes could be clinically significant with regard to tissue fit.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents theoretical and experimental results for oxide thin-film growth on titanium films previously deposited on a glass substrate. Ti films of 0.1 μm thickness were heated by Nd:YAG laser pulses in air. The oxide tracks were created by moving the samples at a constant speed of 2 mm/s under the laser action. The micro-topographic analysis of the tracks was performed with a microprofiler. The results, taken along a straight line perpendicular to the track axis, revealed a Gaussian profile that closely matches the laser's spatial mode profile, indicating the effectiveness of the surface temperature gradient in the film's growth process. The samples' micro-Raman spectra showed two strong bands at 447 and 612 cm⁻¹ associated with the TiO2 structure. This is a strong indication that thermo-oxidation reactions took place at the Ti film surface, which reached an estimated temperature of 1160 K due to the action of the first pulse alone. The results obtained from the numerical integration of the analytical equation that describes the oxidation rate (Wagner equation) agree with the experimental data for film thickness in the high-laser-intensity region. This shows the partial accuracy of the one-dimensional model adopted for describing the film growth rate. © 2001 Elsevier Science B.V.
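
The abstract does not give the oxidation-rate constants; as a minimal sketch of numerically integrating a Wagner-type (parabolic) oxidation law with an Arrhenius rate constant, using entirely hypothetical parameter values and an assumed heating duration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters for a parabolic (Wagner-type) oxidation law
K0 = 1.0e-10       # m^2/s, pre-exponential of the parabolic rate constant (assumed)
EA = 2.0e-19       # J, activation energy (assumed)
KB = 1.380649e-23  # J/K, Boltzmann constant
T  = 1160.0        # K, estimated surface temperature quoted in the abstract

def wagner_rate(t, x):
    """dx/dt = k_p(T) / x : growth slows as the oxide layer x thickens."""
    kp = K0 * np.exp(-EA / (KB * T))
    return kp / x

# Integrate from a thin initial oxide over an assumed 1 ms heating interval
sol = solve_ivp(wagner_rate, (0.0, 1.0e-3), [1.0e-9], max_step=1.0e-5)
print(f"final oxide thickness: {sol.y[0, -1] * 1e9:.2f} nm")
```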

Relevance:

30.00%

Publisher:

Abstract:

We introduce a model for a condensate of dipolar atoms or molecules in which the dipole-dipole interaction (DDI) is periodically modulated in space owing to a periodic change of the local orientation of the permanent dipoles, imposed by the corresponding structure of an external field (the necessary field can be created, in particular, by means of magnetic lattices, which are experimentally available). The system represents a realization of a nonlocal nonlinear lattice, which has the potential to support various spatial modes. By means of numerical methods and the variational approximation (VA), we construct bright one-dimensional solitons in this system and study their stability. In most cases, the VA provides good accuracy and correctly predicts stability by means of the Vakhitov-Kolokolov criterion. It is found that the periodic modulation may destroy some solitons that exist in the usual setting with unmodulated DDI, and may create stable solitons in other cases that are not found in the absence of the modulation. Unstable solitons typically transform into persistent localized breathers. The solitons are often mobile, with inelastic collisions between them leading to oscillating localized modes. © 2013 American Physical Society.

Relevance:

30.00%

Publisher:

Abstract:

Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and reproducibility of measurements of tooth sizes, interdental distances and occlusion analyses made on plaster models and on their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). On the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic®, Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif); on the digital images, the measurement tools of the O3d software (Widialabs, Brazil) were used. The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The majority of the measurements obtained using the caliper and O3d were identical, and both were significantly different from those obtained using the MS. Intra-examiner agreement was lowest when using the MS. The results demonstrated that the accuracy and reproducibility of the tooth measurements and analyses from the plaster models using the caliper and from the digital models using the O3d software were identical.
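
The abstract cites the Dahlberg formula for method error, sqrt(sum(d^2) / (2n)) over paired differences; a minimal sketch of applying it to paired caliper and digital measurements, with made-up numbers standing in for the study data:

```python
import numpy as np

def dahlberg_error(m1, m2):
    """Dahlberg's method error: sqrt(sum(d^2) / (2n)) for paired measurements."""
    d = np.asarray(m1, dtype=float) - np.asarray(m2, dtype=float)
    return np.sqrt(np.sum(d ** 2) / (2 * d.size))

# Hypothetical tooth-width measurements (mm): caliper on plaster vs. O3d on digital model
caliper = np.array([8.42, 7.15, 9.03, 6.88, 10.21])
digital = np.array([8.45, 7.10, 9.00, 6.91, 10.18])

print(f"Dahlberg error: {dahlberg_error(caliper, digital):.3f} mm")
```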

Relevance:

30.00%

Publisher:

Abstract:

Comprehensive two-dimensional gas chromatography (GC x GC) is a powerful technique that provides excellent separation and identification of analytes in highly complex samples, with a considerable increase in GC peak capacity. However, since second-dimension analyses are very fast, detectors with a rapid acquisition rate are required. Over the last years, quite a number of studies have discussed the potential and limitations of combining GC x GC with a variety of quadrupole mass spectrometers. The present research focuses on the evaluation of qMS effectiveness at a 10,000 amu/s scan speed and 20 Hz scan frequency for the identification (full-scan mode acquisition, TIC) and quantification (extracted-ion chromatogram) of target pesticide residues in tomato samples. The following MS parameters were evaluated: number of data points per peak, mass spectrum quality, peak skewing, and sensitivity. The proposed and validated GC x GC/qMS method presented satisfactory results in terms of repeatability (coefficient of variation lower than 15%), accuracy (84-117%), and linearity (ranging from 25 to 500 ng/g), while a significant enhancement in sensitivity (a factor of around 10) was observed under scan conditions. (C) 2012 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

The quality of temperature and humidity retrievals from the infrared SEVIRI sensors on the geostationary Meteosat Second Generation (MSG) satellites is assessed by means of a one-dimensional variational (1D-VAR) algorithm. The study is performed with the aim of improving the spatial and temporal resolution of the observations available to feed analysis systems designed for high-resolution regional-scale numerical weather prediction (NWP) models. The non-hydrostatic forecast model COSMO (COnsortium for Small scale MOdelling), in the ARPA-SIM operational configuration, is used to provide background fields. Only clear-sky observations over sea are processed. An optimised 1D-VAR set-up comprising the two water vapour channels and the three window channels is selected; it maximises the reduction of errors in the model backgrounds while ensuring ease of operational implementation through accurate bias-correction procedures and correct radiative transfer simulations. The 1D-VAR retrieval quality is first quantified in relative terms, using statistics to estimate the reduction in the background model errors. Additionally, the absolute retrieval accuracy is assessed by comparing the analysis with independent radiosonde and satellite observations. The inclusion of satellite data brings a substantial reduction in the warm and dry biases present in the forecast model, and the retrieval profiles generated by the 1D-VAR are shown to be well correlated with the radiosonde measurements. Subsequently, the 1D-VAR technique is applied to two three-dimensional case studies: a false-alarm case that occurred in Friuli-Venezia-Giulia on 8 July 2004 and a heavy-precipitation case that occurred in the Emilia-Romagna region between 9 and 12 April 2005. The impact of the satellite data for these two events is evaluated in terms of increments in the column-integrated water vapour and saturation water vapour, in the 2-metre temperature and specific humidity, and in the surface temperature. To improve the 1D-VAR technique, a method to calculate flow-dependent model error covariance matrices is also assessed. The approach employs members of an ensemble forecast system generated by perturbing physical parameterisation schemes inside the model. The improved set-up, applied to the case of 8 July 2004, shows a substantially neutral impact.
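
The abstract does not spell out the analysis equation; a minimal sketch of a linearised 1D-Var-style analysis increment, x_a = x_b + B H^T (H B H^T + R)^-1 (y - H x_b), with tiny made-up matrices standing in for the background-error covariance B, the observation-error covariance R and the observation operator H:

```python
import numpy as np

def one_d_var_increment(xb, y, H, B, R):
    """Analysis increment for a linear observation operator H:
    x_a = x_b + B H^T (H B H^T + R)^-1 (y - H x_b)."""
    innovation = y - H @ xb
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return K @ innovation

# Toy 3-level temperature profile (K) and 2 satellite channels
xb = np.array([285.0, 270.0, 250.0])               # background profile
H  = np.array([[0.6, 0.3, 0.1],                    # hypothetical weighting functions
               [0.1, 0.4, 0.5]])
B  = np.diag([1.5, 1.0, 0.8]) ** 2                 # background-error covariance
R  = np.diag([0.5, 0.5]) ** 2                      # observation-error covariance
y  = np.array([281.0, 263.0])                      # observed brightness temperatures

xa = xb + one_d_var_increment(xb, y, H, B, R)
print(xa)
```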

Relevance:

30.00%

Publisher:

Abstract:

The research reported in this manuscript concerns the structural characterization of graphene membranes and single-walled carbon nanotubes (SWCNTs). The experimental investigation was performed using a wide range of transmission electron microscopy (TEM) techniques, from conventional imaging and diffraction to advanced interferometric methods such as electron holography and Geometric Phase Analysis (GPA), with a low-voltage optical set-up used to reduce radiation damage in the samples. Electron holography was used to successfully measure the mean electrostatic potential of an isolated SWCNT and that of a mono-atomically thin graphene crystal. The high accuracy achieved in the phase determination made it possible to measure, for the first time, the valence-charge redistribution induced by the lattice curvature in an individual SWCNT. A novel methodology for the 3D reconstruction of the waviness of a 2D crystal membrane has been developed. Unlike other available TEM reconstruction techniques, such as tomography, this one requires the processing of just a single HREM micrograph: the modulations of the inter-planar distances in the HREM image are measured using Geometric Phase Analysis and used to recover the waviness of the crystal. The method was applied to the case of a folded FGC, and a height variation of 0.8 nm of the surface was successfully determined with nanometric lateral resolution. The adhesion of SWCNTs to the surface of graphene was studied by mixing shortened SWCNTs of different chiralities with FGC membranes. The spontaneous atomic match of the two lattices was directly imaged using HREM, and we found that graphene membranes act as tangential nano-sieves, preferentially grafting achiral tubes to their surface.
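
The GPA step described above (measuring modulations of the inter-planar distances from a single HREM image) essentially amounts to Fourier-filtering one lattice reflection and reading out its local phase; a minimal, generic sketch of that phase-extraction step (not the thesis' full waviness reconstruction, and with an assumed reflection vector and mask size):

```python
import numpy as np

def geometric_phase(image, g, mask_radius):
    """Extract the GPA phase for one lattice reflection g = (gx, gy) in cycles/pixel.

    The raw phase of the Fourier-filtered image is 2*pi*(g . r) + P_g(r);
    subtracting the linear ramp leaves the geometric phase P_g, whose
    gradient encodes local changes of the inter-planar spacing."""
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    mask = (fx - g[0]) ** 2 + (fy - g[1]) ** 2 < mask_radius ** 2   # circle around g
    filtered = np.fft.ifft2(np.fft.fft2(image) * mask)
    y, x = np.mgrid[0:ny, 0:nx]
    ramp = 2 * np.pi * (g[0] * x + g[1] * y)
    return np.angle(filtered * np.exp(-1j * ramp))

# Toy lattice image with 0.1 cycles/pixel fringes and a weak phase distortion
y, x = np.mgrid[0:256, 0:256]
img = np.cos(2 * np.pi * 0.1 * x + 0.3 * np.sin(2 * np.pi * y / 256))
phase = geometric_phase(img, g=(0.1, 0.0), mask_radius=0.03)
print(phase.shape)
```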

Relevance:

30.00%

Publisher:

Abstract:

Wearable inertial and magnetic measurement units (IMMU) are an important tool for underwater motion analysis because they are swimmer-centric, require only a simple measurement set-up and provide performance results very quickly. In order to estimate 3D joint kinematics during motion, protocols have been developed to transpose the IMMU orientation estimates to a biomechanical model. The aim of the thesis was to validate a protocol, originally proposed to estimate the joint angles of the upper limbs during one-degree-of-freedom movements in dry settings, modified here to perform 3D kinematic analysis of the shoulders, elbows and wrists during swimming. Eight high-level swimmers were assessed in the laboratory by means of an IMMU while simulating the front-crawl and breaststroke movements. A stereo-photogrammetric system (SPS) was used as reference. The joint angles (in degrees) of the shoulders (flexion-extension, abduction-adduction and internal-external rotation), the elbows (flexion-extension and pronation-supination) and the wrists (flexion-extension and radial-ulnar deviation) were estimated with the two systems and compared by means of root mean square errors (RMSE), relative RMSE, Pearson's product-moment correlation coefficient (R) and the coefficient of multiple correlation (CMC). Subsequently, the athletes were assessed with the IMMU during pool swimming trials. Considering both swim styles and all modelled joint degrees of freedom, the comparison between the IMMU and the SPS showed median RMSE values lower than 8°, representing 10% of the overall joint range of motion, and high median values of CMC (0.97) and R (0.96). These findings suggest that the protocol estimated the 3D orientation of the shoulder, elbow and wrist joints during swimming with an accuracy adequate for research purposes. In conclusion, the proposed method for evaluating 3D joint kinematics through IMMU proved to be a useful tool for both sport and clinical contexts.
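
The agreement metrics named in the abstract are straightforward to compute from paired joint-angle time series; a minimal sketch of RMSE, a relative RMSE normalised by the reference range of motion (one common definition; others exist) and Pearson's R, with a synthetic angle trace standing in for the SPS reference and a noisy copy standing in for the IMMU estimate:

```python
import numpy as np
from scipy.stats import pearsonr

def validation_metrics(reference, estimate):
    """RMSE (deg), RMSE relative to the reference range of motion (%), Pearson R."""
    reference, estimate = np.asarray(reference), np.asarray(estimate)
    rmse = np.sqrt(np.mean((estimate - reference) ** 2))
    rel_rmse = 100.0 * rmse / (reference.max() - reference.min())
    r, _ = pearsonr(reference, estimate)
    return rmse, rel_rmse, r

# Synthetic shoulder flexion-extension trace (deg) over one stroke cycle
t = np.linspace(0.0, 1.0, 200)
sps = 60.0 * np.sin(2 * np.pi * t) + 30.0                        # reference (SPS)
immu = sps + np.random.default_rng(0).normal(0.0, 4.0, t.size)   # noisy IMMU estimate

rmse, rel_rmse, r = validation_metrics(sps, immu)
print(f"RMSE = {rmse:.1f} deg, relative RMSE = {rel_rmse:.1f} %, R = {r:.2f}")
```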