832 results for accuracy analysis


Relevance:

30.00%

Publisher:

Abstract:

Accurate recognition of cancer subtypes is clinically significant, and DNA microarray gene expression technology is widely applied to diagnosing and classifying cancer types. This paper proposes a method for recognizing cancer subtypes based on geometrical learning. First, the cancer gene expression profiles are preprocessed and feature genes are selected by a conventional method; the expression data of the feature genes in the training samples are then used to construct a convex hull for each subtype in the high-dimensional space using the training algorithm of geometrical learning, while an independent test set is classified by the recognition algorithm of geometrical learning. The method was applied to human acute leukemia gene expression data, where the recognition accuracy reached 100%. The experiments demonstrate its efficiency and feasibility.
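The abstract gives no implementation details, but the core convex-hull membership test can be posed as a linear-programming feasibility problem: a test point x lies inside the convex hull of training points {p_i} iff there exist weights lambda_i >= 0 with sum(lambda_i) = 1 and sum(lambda_i * p_i) = x. A minimal Python sketch of that test (data shapes and the classification rule are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(points, x):
    """Check whether x lies in the convex hull of `points` (n_samples x n_features).

    Solves the LP feasibility problem: find lam >= 0, sum(lam) = 1,
    with points.T @ lam = x. Feasible iff x is inside the hull.
    """
    n = points.shape[0]
    A_eq = np.vstack([points.T, np.ones((1, n))])  # stack equality constraints
    b_eq = np.append(x, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success

# Illustrative use: assign a test profile to the subtype whose hull contains it.
hull_a = np.random.rand(50, 10)        # feature-gene expressions, subtype A
hull_b = np.random.rand(50, 10) + 2.0  # subtype B, shifted for illustration
test = np.random.rand(10)
label = "A" if in_convex_hull(hull_a, test) else (
        "B" if in_convex_hull(hull_b, test) else "unknown")
```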

Relevance:

30.00%

Publisher:

Abstract:

A series of new single-step methods and their corresponding algorithms with automatic step size adjustment for model equations of fiber Raman amplifiers are proposed and compared in this paper. On the basis of the Newton-Raphson method, multiple shooting algorithms for the two-point boundary value problems involved in solving Raman amplifier propagation equations are constructed. A verified example shows that, compared with the traditional Runge-Kutta methods, the proposed methods can increase the accuracy by more than two orders of magnitude under the same conditions. The simulations for Raman amplifier propagation equations demonstrate that our methods can increase the computing speed by more than 5 times, extend the step size significantly, and improve the stability in comparison with the Dormand-Prince method. The numerical results show that the combination of the multiple shooting algorithms and the proposed methods has the capacity to rapidly and effectively solve the model equations of multipump Raman amplifiers under various conditions such as co-, counter- and bi-directionally pumped schemes, as well as dual-order pumped schemes.
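The amplifier propagation equations themselves are not reproduced in the abstract; the shooting idea underlying the paper's approach can be sketched for a generic two-point boundary value problem. In the toy system below, one component's boundary value is fixed at z = 0 (as for a co-propagating pump) and the other's at z = L (counter-propagating); all equations, lengths, and boundary values are invented for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def f(z, P):
    # Toy coupled system standing in for the Raman gain/loss terms.
    return np.array([-0.2 * P[0] + 0.05 * P[1], 0.1 * P[0] - 0.3 * P[1]])

L = 10.0         # fiber length, illustrative
P0_known = 1.0   # boundary value given at z = 0
PL_target = 0.5  # boundary value required at z = L

def residual(guess):
    # Shooting: integrate from z = 0 with a guessed initial value for the
    # component whose condition is actually imposed at z = L.
    sol = solve_ivp(f, (0.0, L), [P0_known, guess[0]], rtol=1e-10, atol=1e-12)
    return [sol.y[1, -1] - PL_target]

# Root-find on the boundary residual (fsolve uses a quasi-Newton method;
# the paper constructs explicit Newton-Raphson multiple-shooting iterations).
shoot = fsolve(residual, [1.0])
print("initial value at z=0 meeting the z=L condition:", shoot[0])
```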

Relevance:

30.00%

Publisher:

Abstract:

An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two distinctive merits, higher-order accuracy and an ASA mechanism, so it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared to the APA technique, the proposed method increases the computing speed by more than a hundredfold at the same error level. In computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude for the same number of amplifying sections. The proposed method also has the capacity to rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers. (c) 2006 Optical Society of America
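The paper's specific ASA mechanism is not described in the abstract; the general pattern of automatic step adjustment, estimating the local error and growing or shrinking the step to hold it near a tolerance, can be sketched with classical step doubling (the integrator and control constants below are generic textbook choices, not the paper's method):

```python
import numpy as np

def rk4_step(f, z, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(z, y)
    k2 = f(z + h / 2, y + h * k1 / 2)
    k3 = f(z + h / 2, y + h * k2 / 2)
    k4 = f(z + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate_adaptive(f, z0, z1, y0, h0=0.1, tol=1e-8):
    """Step doubling: compare one step of size h against two of size h/2,
    accept when the estimated local error is within `tol`, and rescale h."""
    z, y, h = z0, np.asarray(y0, float), h0
    while z < z1:
        h = min(h, z1 - z)
        y_big = rk4_step(f, z, y, h)
        y_half = rk4_step(f, z + h / 2, rk4_step(f, z, y, h / 2), h / 2)
        err = np.max(np.abs(y_half - y_big))
        if err <= tol or h < 1e-12:
            z, y = z + h, y_half  # accept the more accurate two-step result
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-300)) ** 0.2))
    return y
```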

Relevance:

30.00%

Publisher:

Abstract:

Over the last two decades, numerous studies have used remotely sensed data from the Advanced Very High Resolution Radiometer (AVHRR) sensors to map land use and land cover at large spatial scales, but have achieved only limited success. In this paper, we employ an approach that combines AVHRR images with geophysical datasets (e.g. climate, elevation). Three geophysical datasets are used in this study: annual mean temperature, annual precipitation, and elevation. We first divided China into nine bio-climatic regions using the long-term mean climate data. For each of the nine regions, the three geophysical data layers were stacked together with AVHRR data and AVHRR-derived vegetation index (Normalized Difference Vegetation Index) data, and the resulting multi-source datasets were then analysed with supervised classification algorithms to generate a land-cover map for each region. The nine regional land-cover maps were then assembled into a map for China. An existing land-cover dataset derived from Landsat Thematic Mapper (TM) images was used to assess the accuracy of the classification based on the AVHRR and geophysical data. Accuracy for individual regions varies from 73% to 89%, with an overall accuracy of 81% for China. The results show that the methodology used in this study is, in general, feasible for large-scale land-cover mapping in China.
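The per-region workflow, stacking spectral and geophysical layers into one feature matrix and training a supervised classifier, can be sketched as follows. The paper does not name its classification algorithm, so a random forest stands in for it; all shapes, class counts, and data are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative per-region stack: AVHRR channels + NDVI + three geophysical
# layers, flattened to (n_pixels, n_features).
n_pixels = 10_000
avhrr = np.random.rand(n_pixels, 5)   # 5 AVHRR channels
ndvi = np.random.rand(n_pixels, 1)    # AVHRR-derived NDVI
geo = np.random.rand(n_pixels, 3)     # temperature, precipitation, elevation
X = np.hstack([avhrr, ndvi, geo])     # stacked multi-source feature matrix

# Training labels would come from reference data; random here.
y = np.random.randint(0, 8, n_pixels)  # 8 land-cover classes, illustrative

clf = RandomForestClassifier(n_estimators=100).fit(X[:8000], y[:8000])
print("held-out accuracy:", clf.score(X[8000:], y[8000:]))
```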

Relevance:

30.00%

Publisher:

Abstract:

Spatial relations, reflecting the complex association between geographical phenomena and their environments, are very important in the solution of geographical issues. Different spatial relations can be expressed by indicators which are useful for the analysis of geographical issues. Urbanization, an important geographical issue, is considered in this paper. The spatial relationship indicators concerning urbanization are expressed with a decision table, and spatial relationship indicator rules are then extracted by applying rough set theory. The extraction process is illustrated with data from the urban and rural areas of Shenzhen and Hong Kong, located in the Pearl River Delta, using land-use vector data from 1995 and 2000. The spatial relationship indicator rules extracted for 1995 are used to identify the urban and rural areas in Zhongshan, Zhuhai and Macao, with an identification accuracy of approximately 96.3%. The same procedure applied to the rules extracted for 2000 gives an identification accuracy of about 83.6%.
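The rule-extraction step is not detailed in the abstract; its rough-set core, finding indiscernibility classes of a decision table whose members all share a decision (the lower approximation) and reading each such class as a certain rule, can be sketched as follows. The attribute names, discretized values, and table rows are invented for illustration:

```python
from collections import defaultdict

# Toy decision table: condition attributes are discretized spatial-relationship
# indicators; the decision attribute is urban/rural. All values are invented.
table = [
    ({"road_density": "high", "dist_to_center": "near"}, "urban"),
    ({"road_density": "high", "dist_to_center": "near"}, "urban"),
    ({"road_density": "low",  "dist_to_center": "far"},  "rural"),
    ({"road_density": "high", "dist_to_center": "far"},  "urban"),
    ({"road_density": "high", "dist_to_center": "far"},  "rural"),  # inconsistent block
]

# Indiscernibility classes: group objects with identical condition values.
blocks = defaultdict(list)
for conds, decision in table:
    blocks[tuple(sorted(conds.items()))].append(decision)

# Lower approximation of "urban": blocks whose members all decide "urban".
# Each such block yields a certain rule; inconsistent blocks are excluded.
for conds, decisions in blocks.items():
    if all(d == "urban" for d in decisions):
        rule = " AND ".join(f"{k}={v}" for k, v in conds)
        print(f"IF {rule} THEN urban")
```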

Relevance:

30.00%

Publisher:

Abstract:

Broadcast soccer video is usually recorded by one main camera, which continually gazes at the area of the playfield where a highlight event is happening. The camera parameters and their variation therefore have a close relationship with the semantic information of soccer video, and camera calibration for soccer video has attracted much interest. Previous calibration methods either deal only with the goal scene or impose strict calibration conditions and high complexity, and thus do not properly handle non-goal scenes such as midfield or center-forward scenes. In this paper, based on a new soccer field model, a field symbol extraction algorithm is proposed to extract the calibration information. A two-stage calibration approach is then developed which can calibrate the camera not only for goal scenes but also for non-goal scenes. Preliminary experimental results demonstrate its robustness and accuracy. (c) 2010 Elsevier B.V. All rights reserved.
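The paper's two-stage procedure is not specified in the abstract; a standard building block for this kind of field-based calibration is estimating the planar homography from field-model points to their detected image positions. A sketch using OpenCV (the point correspondences below are invented; in the paper they would come from the field symbol extraction algorithm):

```python
import numpy as np
import cv2

# Field-model points (pitch coordinates, meters) and their detected image
# positions (pixels). Correspondences are invented for illustration.
field_pts = np.array([[0, 0], [0, 68], [52.5, 0], [52.5, 68]], dtype=np.float32)
image_pts = np.array([[102, 540], [88, 120], [640, 500], [655, 150]], dtype=np.float32)

# Planar homography from the pitch plane to the image; with more than four
# (possibly noisy) correspondences one would pass cv2.RANSAC here.
H, mask = cv2.findHomography(field_pts, image_pts)

# Map any pitch location (e.g. the penalty spot) into the image.
spot = cv2.perspectiveTransform(np.array([[[11.0, 34.0]]], dtype=np.float32), H)
print("penalty spot in image coordinates:", spot.ravel())
```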

Relevance:

30.00%

Publisher:

Abstract:

The design and performance of a miniaturized chip-type tris(2,2'-bipyridyl)ruthenium(II) [Ru(bpy)3(2+)] electrochemiluminescence (ECL) detection cell suitable for both capillary electrophoresis (CE) and flow injection (FI) analysis are described. The cell was fabricated from two pieces of glass (20 x 15 x 1.7 mm); a 0.5-mm-diameter platinum disk held at +1.15 V (vs a silver wire quasi-reference) served as the working electrode, the stainless steel guide tubing as the counter electrode, and the silver wire as the quasi-reference electrode. The performance of the cell in both CE and FI modes was evaluated using tripropylamine, proline, and oxalate, and compared favorably with that reported for other CE and FI detection cells. The advantages of versatility, sensitivity, and accuracy make the device attractive for the routine analysis of amine-containing species or oxalate by CE and FI with Ru(bpy)3(2+) ECL detection.

Relevance:

30.00%

Publisher:

Abstract:

In this report, we describe an improved thermal fractionation technique used to characterize the polydispersity of crystalline ethylene sequence length (CESL) in ethylene/alpha-olefin copolymers. After stepwise isothermal crystallization, the crystalline ethylene sequences are sorted into groups by their lengths. The CESLs are estimated using the melting points of known hydrocarbons, and the content of each group is determined from the calibrated peak area. Three statistical terms, the arithmetic mean L̄n, the weighted mean L̄w and the broadness index I = L̄w/L̄n, are used to describe the distribution of CESL. Results show that the improved thermal fractionation technique can quantitatively characterize the polydispersity of CESL with a high degree of accuracy.
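The abstract does not define the two means; on the standard convention for number- and weight-average sequence statistics (an assumption here, by analogy with molar-mass averages in polymer science), with content n_i in the group of sequence length L_i:

```latex
\bar{L}_n = \frac{\sum_i n_i L_i}{\sum_i n_i}, \qquad
\bar{L}_w = \frac{\sum_i n_i L_i^2}{\sum_i n_i L_i}, \qquad
I = \frac{\bar{L}_w}{\bar{L}_n} \geq 1
```

On this convention I = 1 for a perfectly uniform (monodisperse) distribution, and I grows with the breadth of the CESL distribution.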

Relevance:

30.00%

Publisher:

Abstract:

A new method for the simultaneous spectrophotometric determination of Zn, Cd and Hg using 2-(5-Br-2-pyridylazo)-5-diethylaminophenol as the color-developing reagent is proposed. The absorption spectra of the three complexes have similar features and overlap severely in the visible spectral range. To resolve these spectra, hybrid linear analysis was used, and the pure spectrum of each component was obtained from the calibration mixtures by the least squares method. The effects of reaction conditions, selection of wavelengths, determination of the pure spectra, and additivity of absorbances on the determination are discussed. The proposed method offers the advantages of simplicity, rapidity, and accuracy. It has been successfully applied to the simultaneous determination of Zn, Cd and Hg in a synthetic sample, and a comparison was also made with the partial least squares method.
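The least-squares step at the heart of this kind of multicomponent analysis can be sketched under Beer's-law additivity, A = C K, where C holds the mixture concentrations and K the pure-component spectra. This is only the calibration-and-prediction core (full hybrid linear analysis additionally removes the analyte's contribution before modeling); all data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
w = 41                                         # measuring wavelengths
K_true = np.abs(rng.normal(size=(3, w)))       # pure spectra of Zn, Cd, Hg complexes
C_cal = rng.uniform(0.1, 1.0, size=(7, 3))     # 7 calibration mixtures
A_cal = C_cal @ K_true + rng.normal(0, 1e-3, (7, w))  # Beer's-law absorbances + noise

# Least-squares estimate of the pure spectra from the calibration mixtures:
# solve C_cal @ K = A_cal for K.
K_est, *_ = np.linalg.lstsq(C_cal, A_cal, rcond=None)

# Prediction: given a new mixture spectrum, solve for the three concentrations.
c_true = np.array([0.3, 0.6, 0.2])
a_new = c_true @ K_true
c_est, *_ = np.linalg.lstsq(K_est.T, a_new, rcond=None)
print("estimated concentrations:", np.round(c_est, 3))
```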

Relevance:

30.00%

Publisher:

Abstract:

Hybrid linear analysis (HLA) was applied to the resolution of the overlapping spectra of Fe3+-salicylfluorone and Al3+-salicylfluorone complexes and the simultaneous spectrophotometric determination of Fe3+ and Al3+. The absorbance matrix of 7 standard mixtures at 41 measuring points over the wavelength range 550 nm to 630 nm was used for calibration. To avoid the effect of interaction between the two components on the determination, the column vectors of the K matrix obtained from the standard mixtures by least squares were used as the pure spectra of the components. The recoveries of the two elements in the analysis of synthetic samples ranged from 93.3% to 107.5% over concentration ratios of Fe3+:Al3+ = 10:1 to 1:8. Compared with the partial least squares (PLS) model, the HLA method is simple, accurate and precise.

Relevance:

30.00%

Publisher:

Abstract:

A model for representing music scores in a form suitable for general processing by a music-analyst-programmer is proposed and implemented. Typical input to the model consists of one or more pieces of music which are encoded in a file-based score representation. File-based representations are in a form unsuited for general processing, as they do not provide a suitable level of abstraction for a programmer-analyst. Instead, a representation is created giving a programmer's view of the score. This frees the analyst-programmer from implementation details that otherwise would form a substantial barrier to progress. The score representation uses an object-oriented approach to create a natural and robust software environment for the musicologist. The system is used to explore ways in which it could benefit musicologists, and methodologies for analysing music corpora are presented in a series of analytic examples which illustrate some of the potential of this model. Proving hypotheses or performing analysis on corpora involves the construction of algorithms. Some unique aspects of using this score model for corpus-based musicology are:
- Algorithms impose a discipline which arises from the necessity for formalism.
- Automatic analysis enables musicologists to complete tasks that otherwise would be infeasible because of limitations of their energy, attentiveness, accuracy and time.
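The thesis's actual class design is not reproduced in the abstract; a minimal illustration of what an object-oriented programmer's view of a score enables (all class and field names invented) might look like:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Note:
    pitch: int        # MIDI number, e.g. 60 = middle C
    duration: float   # in quarter notes

@dataclass
class Part:
    name: str
    notes: List[Note] = field(default_factory=list)

@dataclass
class Score:
    parts: List[Part] = field(default_factory=list)

    def notes(self):
        # Iterate over every note in every part, hiding file-format details.
        for part in self.parts:
            yield from part.notes

# A corpus-style algorithmic query of the kind the model is meant to support:
# what fraction of notes in a score lie above middle C?
score = Score([Part("soprano", [Note(67, 1.0), Note(72, 0.5), Note(55, 1.0)])])
above = sum(1 for n in score.notes() if n.pitch > 60)
print(above / sum(1 for _ in score.notes()))
```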

Relevance:

30.00%

Publisher:

Abstract:

This work considers the static calculation of a program's average-case time. The number of systems that currently tackle this research problem is quite small due to the difficulties inherent in average-case analysis. While each of these systems makes a pertinent contribution, and is individually discussed in this work, only one of them forms the basis of this research. That particular system is known as MOQA. The MOQA system consists of the MOQA language and the MOQA static analysis tool. Its technique for statically determining average-case behaviour centres on maintaining strict control over both the data structure type and the labeling distribution. This research develops and evaluates the MOQA language implementation, and adds to the functions already available in this language. Furthermore, the theory that backs MOQA is generalised, and the range of data structures for which the MOQA static analysis tool can determine average-case behaviour is increased. Also, some of the MOQA applications and extensions suggested in other works are logically examined here. For example, the accuracy of classifying the MOQA language as reversible is investigated, along with the feasibility of incorporating duplicate labels into the MOQA theory. Finally, the analyses that take place during the course of this research reveal some of the MOQA strengths and weaknesses. This thesis aims to be pragmatic when evaluating the current MOQA theory, the advancements set forth in the following work and the benefits of MOQA when compared to similar systems. Succinctly, this work's significant expansion of the MOQA theory is accompanied by a realistic assessment of MOQA's accomplishments and a serious deliberation of the opportunities available to MOQA in the future.

Relevance:

30.00%

Publisher:

Abstract:

The phenomenon of migration has been widely researched by the social sciences. Theories regarding the migrant have been developed in terms of the oppressive social context that is often encountered, proposing different alternatives to understand and overcome such oppression. Through the current project, an alternative view is presented that, first, questions the accuracy of the social theories of migration and, second, proposes an alternative understanding of this experience. Martin Heidegger's phenomenology of Being offers a contextualized view of existence that nonetheless includes elements of our experience that are shared due to a common mode of being. I use Heidegger's philosophy to broaden the understanding of the migrant's experience by analyzing those elements that he identifies as shared (for instance, human sociability, a desire for a home, the uncanny) and comparing them with common issues raised by migrants (identity, homesickness, belonging). In this way, I intend to present a more complete picture of the experience of migration that considers both empirical evidence of individual migrants and an existential analysis that incorporates the defining elements of our world and our existence as crucial means to understand any experience, including that of migration.

Relevance:

30.00%

Publisher:

Abstract:

The outcomes of both (i) radiation therapy and (ii) preclinical small animal radiobiology studies depend on the delivery of a known quantity of radiation to a specific and intentional location. Adverse effects can result from these procedures if the dose to the target is too high or low, and can also result from an incorrect spatial distribution, in which nearby normal healthy tissue is undesirably damaged by poor radiation delivery techniques. Thus, in mice and humans alike, the spatial dose distributions from radiation sources should be well characterized in terms of the absolute dose quantity, and with pin-point accuracy. When dealing with the steep spatial dose gradients consequential to either (i) high dose rate (HDR) brachytherapy or (ii) the small organs and tissue inhomogeneities of mice, obtaining accurate and highly precise dose results can be very challenging, given that commercially available radiation detection tools, such as ion chambers, are often too large for in-vivo use.

In this dissertation two tools are developed and applied for both clinical and preclinical radiation measurement. The first tool is a novel radiation detector for acquiring physical measurements, fabricated from an inorganic nano-crystalline scintillator that has been fixed on an optical fiber terminus. This dosimeter allows for the measurement of point doses to sub-millimeter resolution, and has the ability to be placed in-vivo in humans and small animals. Real-time data is displayed to the user to provide instant quality assurance and dose-rate information. The second tool utilizes an open source Monte Carlo particle transport code, and was applied for small animal dosimetry studies to calculate organ doses and recommend new techniques of dose prescription in mice, as well as to characterize dose to the murine bone marrow compartment with micron-scale resolution.

Hardware design changes were implemented to reduce the overall fiber diameter of the nano-crystalline scintillator based fiber optic detector (NanoFOD) system to <0.9 mm. The lower limit of device sensitivity was found to be approximately 0.05 cGy/s. The detector was demonstrated to perform quality assurance of clinical 192Ir HDR brachytherapy procedures, providing dose measurements comparable to thermo-luminescent dosimeters and accuracy within 20% of the treatment planning software (TPS) for the 27 treatments conducted, with an inter-quartile range of the detector-to-TPS dose ratio of 0.94 to 1.02 (width 0.08). After removing contaminant signals (Cerenkov and diode background), calibration of the detector enabled accurate dose measurements for vaginal applicator brachytherapy procedures. For 192Ir use, the energy response changed by a factor of 2.25 over SDD values of 3 to 9 cm; a cap made of 0.2 mm thick silver reduced the energy dependence to a factor of 1.25 over the same SDD range, but at the cost of reducing overall sensitivity by 33%.

For preclinical measurements, the dose accuracy of the NanoFOD was within 1.3% of MOSFET-measured dose values in a cylindrical mouse phantom for 225 kV x-ray irradiation at angles of 0, 90, 180, and 270°. The NanoFOD exhibited small changes in angular sensitivity, with a coefficient of variation (COV) of 3.6% at 120 kV and 1% at 225 kV. When the NanoFOD was placed alongside a MOSFET in the liver of a sacrificed mouse and treatment was delivered at 225 kV with a 0.3 mm Cu filter, the dose difference was only 1.09% with the 4x4 cm collimator and -0.03% with no collimation. Additionally, the NanoFOD utilized a scintillator of 11 µm thickness to measure small x-ray fields for microbeam radiation therapy (MRT) applications, and achieved 2.7% dose accuracy at the microbeam peak in comparison to radiochromic film. Modest differences in the measured full-width at half maximum lateral dimension of the MRT beam were observed between the NanoFOD (420 µm) and radiochromic film (320 µm), but these differences are explained mostly as an artifact of the geometry used and volumetric effects in the scintillator material. Characterization of the energy dependence of the yttrium-oxide based scintillator material was performed in the range of 40-320 kV (2 mm Al filtration), and the maximum device sensitivity was achieved at 100 kV. Tissue maximum ratio measurements carried out on a small animal x-ray irradiator system at 320 kV demonstrated an average difference of 0.9% compared to a MOSFET dosimeter over depths of 2.5 to 33 cm in tissue-equivalent plastic blocks. Irradiation of the NanoFOD fiber and scintillator material on a 137Cs gamma irradiator to 1600 Gy did not produce any measurable change in light output, suggesting that the NanoFOD system may be re-used without replacement or recalibration over its lifetime.

For small animal irradiator systems, researchers deliver a given dose to a target organ by controlling the exposure time, which is currently calculated by dividing the total desired dose by a single machine-level dose rate value, independent of the target organ. Studies conducted here used Monte Carlo particle transport codes to justify a new method of dose prescription in mice that considers organ-specific doses. Monte Carlo simulations were performed in the Geant4 Application for Tomographic Emission (GATE) toolkit using a MOBY mouse whole-body phantom. The non-homogeneous phantom comprised 256x256x800 voxels of size 0.145x0.145x0.145 mm3. Differences of up to 20-30% in dose to soft-tissue target organs were demonstrated, and methods for alleviating these errors during whole-body irradiation of mice were suggested, utilizing organ-specific and x-ray tube filter-specific dose rates for all irradiations.
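The dose-prescription arithmetic is simple enough to state directly: exposure time t = prescribed dose D divided by dose rate, with the proposal being to look up an organ- and filter-specific rate rather than one machine-level value. A sketch (the organ names and rate values are invented for illustration, not taken from the dissertation's Monte Carlo results):

```python
# Exposure time from a prescribed dose and a dose rate: t = D / dose_rate.
dose_rates_cgy_per_min = {
    # (organ, filter) -> dose rate in cGy/min; values invented for illustration.
    ("liver", "0.3mm Cu"): 112.0,
    ("lung",  "0.3mm Cu"): 131.0,
    ("liver", "4mm Cu"):   74.0,
}

def exposure_time_min(prescribed_dose_cgy: float, organ: str, filt: str) -> float:
    """Organ- and filter-specific exposure time, replacing the single
    machine-level dose rate used in conventional practice."""
    return prescribed_dose_cgy / dose_rates_cgy_per_min[(organ, filt)]

# 600 cGy (6 Gy) to the liver with the 0.3 mm Cu filter:
print(round(exposure_time_min(600.0, "liver", "0.3mm Cu"), 2), "min")
```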

Monte Carlo analysis was also applied to 1 µm resolution CT images of a mouse femur and a mouse vertebra to calculate the dose gradients within the bone marrow (BM) compartment of mice for different radiation beam qualities relevant to x-ray and isotope-type irradiators. The results indicated that soft x-ray beams (160 kV at 0.62 mm Cu HVL and 320 kV at 1 mm Cu HVL) lead to substantially higher dose to BM in close proximity to mineral bone (within about 60 µm) compared to hard x-ray beams (320 kV at 4 mm Cu HVL) and isotope-based gamma irradiators (137Cs). The average dose increases to the BM in the vertebra for these four radiation beam qualities were found to be 31%, 17%, 8%, and 1%, respectively. Both in-vitro and in-vivo experimental studies confirmed these simulation results, demonstrating that the 320 kV, 1 mm Cu HVL beam caused statistically significant increased killing of BM cells at 6 Gy dose levels in comparison to both the 320 kV, 4 mm Cu HVL and the 662 keV 137Cs beams.

Relevance:

30.00%

Publisher:

Abstract:

During the 1970s and 1980s, the late Dr Norman Holme undertook extensive towed sledge surveys in the English Channel and some in the Irish Sea. Only a minority of the resulting images were analysed and reported before his death in 1989, but logbooks, video and film material have been archived in the National Marine Biological Library (NMBL) in Plymouth. A scoping study was therefore commissioned by the Joint Nature Conservation Committee, as part of the Mapping European Seabed Habitats (MESH) project, to identify the value of the archived material and the procedure and cost of undertaking further work. The results of the scoping study are:
1. NMBL archives hold 106 videotapes (reel-to-reel Sony HD format) and 59 video cassettes (including 15 from the Irish Sea) in VHS format, together with 90 rolls of 35 mm colour transparency film (various lengths, up to about 240 frames per film). These are stored in the Archive Room, either in a storage cabinet or in the original film canisters.
2. Reel-to-reel material is extensive and had already been selectively copied to VHS cassettes. The cost of transferring it to an accepted 'long-life' medium (Betamax) would be approximately £15,000. It was not possible to view the tapes as a suitable machine was not located. The value of the tapes is uncertain, but they are likely to become beyond salvation within one to two years.
3. Video cassette material is in good condition and is expected to remain so for at least several more years. Images viewed were generally of poor quality, and the speed of tow often makes pictures blurred. No immediate action is required.
4. Colour transparency films are in good condition and the images are very clear. They provide the best source of information for mapping seabed biotopes. They should be scanned to digital format, but inexpensive fast copying is problematic: there are no between-frame breaks between images, and scanning machines centre each image based on between-frame breaks. The minimum cost to scan all of the images commercially is approximately £6,000 and could be as much as £40,000 on some quotations. There is a further cost in coding and databasing each image and, all in all, it would seem most economic to purchase a 'continuous film' scanner and undertake the work in-house.
5. Positional information in ships' logs has been matched to films and to video tapes. Decca Chain co-ordinates recorded in the logbooks have been converted to latitude and longitude (degrees, minutes and seconds), and a further routine was developed to convert to degrees and decimal degrees as required for GIS mapping (see the sketch after this list). However, it is unclear whether corrections to Decca positions were applied at the time the position was noted. Tow tracks have been mapped onto an electronic copy of a Hydrographic Office chart.
6. The positions of the start and end of each tow were entered into a spreadsheet so that they can be displayed on GIS or on a Hydrographic Office chart backdrop. The cost of the Hydrographic Office chart backdrop at a scale of 1:75,000 for the whole area was £458 incl. VAT.
7. Viewing all of the video cassettes to note habitats and biological communities, even by an experienced marine biologist, would take at least of the order of 200 hours and is not recommended.
8. Once colour transparencies are scanned and indexed, viewing them to identify seabed habitats and biological communities would probably take about 100 hours for an experienced marine biologist and is recommended.
9. It is expected that identifying biotopes along approximately 1 km lengths of each tow would be feasible, although uncertainties about Decca co-ordinate corrections and the exact positions of images most likely give a ±250 m position error. More work to locate each image accurately and resolve the Decca correction question would improve the accuracy of image location.
10. Using codings (produced by Holme to identify different seabed types), and some viewing of video and transparency material, 10 biotopes have been identified, although more would be added as a result of full analysis.
11. Using the data available from the Holme archive, it is possible to populate various fields within the Marine Recorder database. The overall 'survey' will be 'English Channel towed video sled survey'. The 'events' become the 104 tows. Each tow could be described as four samples, i.e. the start and end of the tow and two areas in the middle to give examples along the length of the tow. These samples would have their own latitude/longitude co-ordinates. The four samples would link to a GIS map.
12. Stills and video clips, together with text information, could be incorporated into a multimedia presentation to demonstrate the range of level-seabed types found along part of the northern English Channel. More recent images, taken during SCUBA diving on reef habitats in the same area as the towed sledge surveys, could be added to the Holme images.
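The degrees/minutes/seconds to decimal-degrees conversion mentioned in item 5 is straightforward arithmetic; a minimal sketch (the example position is invented, and the Decca-Chain-to-lat/long stage itself, which needs chain-specific correction tables, is not reproduced here):

```python
def dms_to_decimal(degrees: int, minutes: int, seconds: float, hemisphere: str) -> float:
    """Convert degrees/minutes/seconds to signed decimal degrees for GIS.

    Southern latitudes and western longitudes are returned as negative values.
    """
    dec = degrees + minutes / 60.0 + seconds / 3600.0
    return -dec if hemisphere in ("S", "W") else dec

# Example: 50°15'30" N, 4°12'45" W (an illustrative English Channel position).
lat = dms_to_decimal(50, 15, 30.0, "N")
lon = dms_to_decimal(4, 12, 45.0, "W")
print(round(lat, 5), round(lon, 5))   # 50.25833 -4.2125
```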