904 results for Test data
Abstract:
A migration of Helicoverpa punctigera (Wallengren), Heliothis punctifera (Walker) and Agrotis munda Walker was tracked from Cameron Corner (29°00'S, 141°00'E) in inland Australia to the Wilcannia region, approximately 400 km to the south-east. A relatively isolated source population was located using a distribution model to predict winter breeding, and was confirmed by surveys using sweep netting for larvae. When a synoptic weather pattern likely to produce conditions suitable for migration developed, moths were trapped in the source region. The next morning, a simulation model of migration was run using wind-field data generated by a numerical weather-prediction model. Surveys using sweep netting for larvae, trapping and flush counts were then conducted in and around the predicted moth fallout area, approximately 400 km to the south-east. Pollen carried on the probosces of moths caught in this area was compared with that on moths caught in the source area. The survey data and pollen comparisons provided evidence that migration had occurred and that the migration model gave an accurate estimate of the fallout region. The ecological and economic implications of such migrations are discussed.
Abstract:
Motivation: This paper introduces EMMIX-GENE, software developed specifically for a model-based approach to the clustering of microarray expression data, in particular of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples: mixtures of t distributions are fitted to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic, used in conjunction with a threshold on the size of a cluster, allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, so mixtures of factor analyzers are used to effectively reduce the dimension of the feature space of genes. Results: The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues, consistent either with the external classification of the tissues or with background biological knowledge of these sets.
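As a rough illustration of the gene-screening step described in this abstract, the sketch below fits one- and two-component mixtures to each gene's expression values across tissues and retains genes whose likelihood ratio statistic exceeds a threshold. It is not the EMMIX-GENE implementation: Gaussian mixtures from scikit-learn stand in for the t mixtures, and the threshold and data are illustrative.

```python
# Sketch of the gene-screening step: for each gene, compare a one-component
# with a two-component mixture fitted to its expression values across tissues,
# and keep genes whose likelihood-ratio statistic exceeds a threshold.
# Gaussian mixtures stand in for the t mixtures used by EMMIX-GENE.
import numpy as np
from sklearn.mixture import GaussianMixture

def likelihood_ratio_statistic(values):
    """-2 log likelihood ratio for 1 vs 2 mixture components on one gene."""
    x = values.reshape(-1, 1)
    ll1 = GaussianMixture(n_components=1, random_state=0).fit(x).score(x) * len(x)
    ll2 = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(x).score(x) * len(x)
    return -2.0 * (ll1 - ll2)

def select_genes(expression, threshold=8.0):
    """expression: genes x tissues array; returns indices of retained genes."""
    stats = np.array([likelihood_ratio_statistic(expression[g])
                      for g in range(expression.shape[0])])
    return np.where(stats > threshold)[0]

# Toy usage: 200 genes x 40 tissues of random data.
rng = np.random.default_rng(0)
demo = rng.normal(size=(200, 40))
print(len(select_genes(demo)), "genes retained")
```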
Abstract:
Purpose: The aim of this study was to assess the accuracy of a 13CO2 breath test for the prediction of short-duration energy expenditure. Methods: Eight healthy volunteers walked at 1.5 km·h⁻¹ for 60 min followed by 60 min of recovery. During this time, the energy cost of physical activity was measured via respiratory calorimetry and a 13C bicarbonate breath test. A further eight subjects were tested using the same two methods during a 60-min cycle at 0.5 kp, 30 ipm, followed by a 60-min recovery. The rate of appearance of 13CO2 (RaCO2) was measured, and the mean ratio V̇CO2/RaCO2 was used to calculate energy expenditure using the isotopic approach. Results: As would be expected, there was a significant difference in the energy cost of walking and cycling using both methods (P < 0.05). However, no significant differences were observed between respiratory calorimetry and the isotope method for measurement of energy expenditure while walking or cycling. Conclusions: These data suggest that the 13C breath test is a valid method that can be used to measure the energy cost of short-duration physical activity in a field setting.
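A minimal sketch of the isotopic calculation implied here, assuming that RaCO2 has already been converted to litres of CO2 per minute and that energy expenditure is obtained from the estimated V̇CO2 via a fixed energy equivalent of CO2. The ratio and energy-equivalent values below are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of the isotopic energy-expenditure calculation.
# Assumptions (not from the abstract): RaCO2 is in litres of CO2 per minute,
# and a fixed, diet-dependent energy equivalent of CO2 (kJ per litre) is used;
# both constants below are illustrative placeholders.

MEAN_VCO2_TO_RACO2_RATIO = 0.8    # hypothetical calibration ratio V̇CO2 / RaCO2
ENERGY_EQUIV_KJ_PER_L_CO2 = 25.0  # hypothetical energy equivalent of CO2

def energy_expenditure_kj(raco2_l_per_min, duration_min,
                          ratio=MEAN_VCO2_TO_RACO2_RATIO,
                          energy_equiv=ENERGY_EQUIV_KJ_PER_L_CO2):
    """Estimate energy expenditure from the 13C-derived CO2 appearance rate."""
    vco2 = raco2_l_per_min * ratio   # estimated CO2 production rate (L/min)
    return vco2 * energy_equiv * duration_min

# Example: 60-min walk with an RaCO2 of 1.0 L/min.
print(round(energy_expenditure_kj(1.0, 60), 1), "kJ")
```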
Abstract:
The tests that are currently available for the measurement of overexpression of human epidermal growth factor receptor 2 (HER2) in breast cancer have shown considerable problems in accuracy and interlaboratory reproducibility. Although these problems are partly alleviated by the use of validated, standardised 'kits', there may be considerable cost involved in their use. Prior to testing, it may therefore be an advantage to be able to predict from basic pathology data whether a cancer is likely to overexpress HER2. In this study, we have correlated pathology features of cancers with the frequency of HER2 overexpression assessed by immunohistochemistry (IHC) using HercepTest (Dako). In addition, fluorescence in situ hybridisation (FISH) has been used to re-test the equivocal cancers, and interobserver variation in assessing HER2 overexpression has been examined by a slide circulation scheme. Of the 1536 cancers, 1144 (74.5%) did not overexpress HER2. Unequivocal overexpression (3+ by IHC) was seen in 186 cancers (12%) and an equivocal result (2+ by IHC) was seen in 206 cancers (13%). Of the 156 IHC 3+ cancers for which complete data were available, 149 (95.5%) were ductal NST and 152 (97%) were histological grade 2 or 3. Only 1 of 124 infiltrating lobular carcinomas (0.8%) showed HER2 overexpression. None of the 49 'special types' of carcinoma showed HER2 overexpression. Re-testing by FISH of a proportion of the IHC 2+ cancers showed that only 25 (23%) of those assessable exhibited HER2 gene amplification, whereas 46 of the 47 IHC 3+ cancers (98%) were confirmed as showing gene amplification. Circulation of slides for the assessment of HER2 score showed a moderate level of agreement between pathologists (kappa 0.4). As a result of this study, we would advocate consideration of a triage approach to HER2 testing. Infiltrating lobular and special types of carcinoma may not need to be routinely tested at presentation, nor may grade 1 NST carcinomas, of which only 1.4% have been shown to overexpress HER2. Testing of these carcinomas may be performed when HER2 status is required to assist in therapeutic or other clinical/prognostic decision-making. The highest yield of HER2-overexpressing carcinomas is seen in the grade 3 NST subgroup, in which 24% are positive by IHC.
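The triage approach advocated above can be summarized as a simple decision rule; the sketch below encodes it, with the rule details taken from the abstract (special-type, infiltrating lobular and grade 1 NST carcinomas not tested routinely; IHC 2+ results re-tested by FISH). The function and label names are illustrative, not from the study.

```python
# Sketch of the triage approach to HER2 testing described above. The decision
# rules follow the abstract; the function and label names are illustrative.

def her2_triage(histological_type, grade, ihc_score=None, fish_amplified=None):
    """Return a suggested HER2 testing/reporting action for one carcinoma."""
    low_yield = histological_type in {"lobular", "special type"} or \
                (histological_type == "ductal NST" and grade == 1)
    if ihc_score is None:
        return "defer testing" if low_yield else "test by IHC"
    if ihc_score == 3:
        return "report HER2 positive"
    if ihc_score == 2:
        if fish_amplified is None:
            return "retest by FISH"
        return "report HER2 positive" if fish_amplified else "report HER2 negative"
    return "report HER2 negative"

print(her2_triage("ductal NST", grade=3))               # test by IHC
print(her2_triage("ductal NST", grade=3, ihc_score=2))  # retest by FISH
```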
Abstract:
For zygosity diagnosis in the absence of genotypic data, in the recruitment phase of a twin study where only single twins from same-sex pairs are being screened, or to provide a test for sample duplication leading to the false identification of a dizygotic pair as monozygotic, the appropriate analysis of respondents' answers to questions about zygosity is critical. Using data from a young adult Australian twin cohort (N = 2094 complete pairs and 519 singleton twins from same-sex pairs with complete responses to all zygosity items), we show that application of latent class analysis (LCA), fitting a 2-class model, yields results that show good concordance with traditional methods of zygosity diagnosis, but with certain important advantages. These include the ability, in many cases, to assign zygosity with specified probability on the basis of the responses of a single informant (advantageous when one zygosity type is being oversampled), and the ability to quantify the probability of misassignment of zygosity, allowing prioritization of cases for genotyping as well as identification of cases of probable laboratory error. Of the 242 twins (from 121 same-sex pairs) for whom genotypic data were available for zygosity confirmation, only a single case of incorrect zygosity assignment by the latent class algorithm was identified. Zygosity assignment for that case was flagged by the LCA as uncertain (probability of being monozygotic only 76%), and the co-twin's responses clearly identified the pair as dizygotic (probability of being dizygotic 100%). In the absence of genotypic data, or as a safeguard against sample duplication, application of LCA for zygosity assignment or confirmation is strongly recommended.
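For readers unfamiliar with latent class analysis, the sketch below fits a 2-class model to binary questionnaire items with a plain EM algorithm and returns each respondent's posterior class probabilities, which is the quantity used above to flag uncertain assignments. It is a generic illustration on synthetic data, not the authors' procedure or item set.

```python
# Minimal sketch of 2-class latent class analysis via EM for binary zygosity
# items. Rows are twins, columns are yes/no questionnaire items; the data and
# item set below are synthetic and illustrative.
import numpy as np

def fit_two_class_lca(y, n_iter=200, seed=0):
    """y: (n, items) array of 0/1 responses. Returns class priors, item
    probabilities and each respondent's posterior class probabilities."""
    rng = np.random.default_rng(seed)
    n, m = y.shape
    pi = np.array([0.5, 0.5])                    # class priors
    theta = rng.uniform(0.3, 0.7, size=(2, m))   # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: posterior probability of each class for each respondent
        log_lik = (y[:, None, :] * np.log(theta) +
                   (1 - y[:, None, :]) * np.log(1 - theta)).sum(axis=2)
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update priors and item probabilities (lightly smoothed)
        pi = post.mean(axis=0)
        theta = (post.T @ y + 1e-6) / (post.sum(axis=0)[:, None] + 2e-6)
    return pi, theta, post

# Toy usage: 6 items, two underlying zygosity classes.
rng = np.random.default_rng(1)
truth = rng.integers(0, 2, size=400)
probs = np.where(truth[:, None] == 0, 0.9, 0.2)
y = (rng.uniform(size=(400, 6)) < probs).astype(int)
pi, theta, post = fit_two_class_lca(y)
print("estimated class sizes:", np.round(pi, 2))
```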
Abstract:
Chlorophyll fluorescence measurements have a wide range of applications, from basic understanding of photosynthetic functioning to plant environmental stress responses and direct assessments of plant health. The measured signal is the fluorescence intensity (expressed in relative units), and the most meaningful data are derived from the time-dependent increase in fluorescence intensity achieved upon application of continuous bright light to a previously dark-adapted sample. The fluorescence response changes over time and is termed the Kautsky curve or chlorophyll fluorescence transient. Recently, Strasser and Strasser (1995) formulated a group of fluorescence parameters, called the JIP-test, that quantify the stepwise flow of energy through Photosystem II (PS II), using input data from the fluorescence transient. The purpose of this study was to establish relationships between the biochemical reactions occurring in PS II and specific JIP-test parameters. This was approached using isolated systems that facilitated the addition of modifying agents (a PS II electron transport inhibitor, an electron acceptor and an uncoupler) whose effects on PS II activity are well documented in the literature. The alteration to PS II activity caused by each of these compounds could then be monitored through the JIP-test parameters and compared and contrasted with the literature. The known alteration in PS II activity of atrazine-resistant and atrazine-sensitive biotypes of Chenopodium album was also used to gauge the effectiveness and sensitivity of the JIP-test. The information gained from the in vitro study was successfully applied to an in situ study. This is the first in a series of four papers. It shows that the trapping parameters of the JIP-test were most affected by illumination and that the reduction in trapping had a flow-on effect that inhibited electron transport. When irradiance exposure proceeded to photoinhibition, the electron transport probability parameter was greatly reduced and dissipation significantly increased. These results illustrate the advantage of monitoring a number of fluorescence parameters over the use of just one, as is often the case when the FV/FM ratio is used.
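As context for the trapping, electron transport and dissipation parameters mentioned above, the sketch below computes a few standard JIP-test quantities (FV/FM, VJ, ψ0, φE0) from a fluorescence transient using their usual definitions; the transient itself is synthetic and only illustrative.

```python
# Sketch of a few widely used JIP-test parameters computed from a fast
# fluorescence transient, using the standard definitions (F0 near 20-50 us,
# FJ at ~2 ms, FM as the maximum). The input transient is synthetic.
import numpy as np

def jip_parameters(times_ms, fluorescence):
    """times_ms: sample times in ms; fluorescence: intensities (rel. units)."""
    f0 = fluorescence[np.argmin(np.abs(times_ms - 0.05))]  # ~50 us point
    fj = fluorescence[np.argmin(np.abs(times_ms - 2.0))]   # J step, ~2 ms
    fm = fluorescence.max()
    vj = (fj - f0) / (fm - f0)      # relative variable fluorescence at J
    phi_p0 = 1.0 - f0 / fm          # max quantum yield of primary photochemistry (FV/FM)
    psi_0 = 1.0 - vj                # probability a trapped exciton moves an electron beyond QA-
    return {"FV/FM": phi_p0, "VJ": vj, "psi0": psi_0, "phiE0": phi_p0 * psi_0}

# Synthetic OJIP-like rise for demonstration only.
t = np.logspace(-2, 3, 500)         # 0.01 ms to 1 s
f = 500 + 2000 * (1 - np.exp(-t / 30)) + 600 * (1 - np.exp(-t / 0.5))
print({k: round(v, 3) for k, v in jip_parameters(t, f).items()})
```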
Abstract:
This study compares the performance of the Quickscreen and Default protocols of the ILO-96 Otodynamics Analyzer in recording transient evoked otoacoustic emissions (TEOAEs) from adults, using clinical decision analysis. Data were collected from 25 males (mean age = 29.0 years, SD = 6.8) and 35 females (mean age = 28.1 years, SD = 9.6). The results showed that the mean signal-to-noise ratios obtained with the Quickscreen protocol were significantly greater than those from the Default protocol at 1, 2, and 4 kHz. Comparison of the two protocols using receiver operating characteristic (ROC) curves revealed higher performance for the Quickscreen than for the Default protocol at 1 and 4 kHz, but not at 2 kHz. In view of the generally enhanced performance of the Quickscreen over the Default protocol, the routine use of the Default protocol for testing adults in audiology clinics should be reconsidered.
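The clinical decision analysis referred to above amounts to scoring each protocol's signal-to-noise ratios against a reference classification of ears and comparing the resulting ROC curves. The sketch below shows that comparison on synthetic data; the reference labels and SNR distributions are invented for illustration.

```python
# Sketch of the ROC-based comparison: signal-to-noise ratios from each
# protocol are scored against a reference ear status (normal vs impaired).
# All data here are synthetic and for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
status = rng.integers(0, 2, size=120)   # 1 = normal ear per the reference test
snr_quickscreen = np.where(status == 1, rng.normal(9, 4, 120), rng.normal(2, 4, 120))
snr_default = np.where(status == 1, rng.normal(7, 4, 120), rng.normal(2, 4, 120))

print("Quickscreen AUC:", round(roc_auc_score(status, snr_quickscreen), 3))
print("Default AUC:    ", round(roc_auc_score(status, snr_default), 3))
```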
Abstract:
Wireless medical systems comprise four stages: the medical device, the data transport, the data collection and the data evaluation stages. Whereas the performance of the first stage is highly regulated, the others are not. This paper concentrates on the data transport stage and argues that it is necessary to establish standardized tests to be used by medical device manufacturers to provide comparable results concerning the communication performance of the wireless networks used to transport medical data. In addition, it suggests test parameters and procedures that can be used to produce comparable communication performance results.
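As an illustration of the kind of test parameters such standardized procedures might report, the sketch below measures round-trip latency and echoed throughput over a loopback TCP connection standing in for the data transport stage. The payload size, repetition count and metrics are arbitrary choices, not the paper's proposed procedure.

```python
# Illustrative measurement of two common communication performance metrics,
# round-trip latency and throughput. A loopback TCP echo server stands in for
# the wireless data transport stage; all parameters are arbitrary.
import socket, threading, time

def echo_server(sock):
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(65536):
            conn.sendall(data)

server = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

payload = b"x" * 1024                      # 1 KiB "medical data" frame
with socket.create_connection(server.getsockname()) as client:
    rtts = []
    start = time.perf_counter()
    for _ in range(200):
        t0 = time.perf_counter()
        client.sendall(payload)
        received = 0
        while received < len(payload):
            received += len(client.recv(65536))
        rtts.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

print(f"mean RTT: {1000 * sum(rtts) / len(rtts):.2f} ms")
print(f"throughput: {200 * len(payload) / elapsed / 1024:.1f} KiB/s (echoed)")
```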
Abstract:
Opposite enantiomers exhibit different NMR properties in the presence of an external common chiral element, and a chiral molecule exhibits different NMR properties in the presence of external enantiomeric chiral elements. Automatic prediction of such differences, and comparison with experimental values, leads to the assignment of the absolute configuration. Here two cases are reported, one using a dataset of 80 chiral secondary alcohols esterified with (R)-MTPA and the corresponding 1H NMR chemical shifts, and the other using 94 13C NMR chemical shifts of chiral secondary alcohols in two enantiomeric chiral solvents. For the first application, counterpropagation neural networks were trained to predict the sign of the difference between the chemical shifts of opposite stereoisomers. The networks were trained to process the chirality code of the alcohol as the input and to give the NMR property as the output. In the second application, similar neural networks were employed, but the property to predict was the difference of chemical shifts in the two enantiomeric solvents. For independent test sets of 20 objects, 100% correct predictions of the sign of the chemical shift differences were obtained in both applications. Additionally, with the second dataset, the difference of chemical shifts in the two enantiomeric solvents was quantitatively predicted, yielding r2 = 0.936 between the predicted and experimental values for the test set.
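The prediction task described above is a binary classification from a molecular descriptor to the sign of a chemical shift difference. The sketch below reproduces only the train/test workflow: a synthetic descriptor matrix replaces the chirality codes, and a plain feed-forward classifier from scikit-learn stands in for the counterpropagation network.

```python
# Sketch of the sign-prediction workflow. Synthetic descriptors and a plain
# feed-forward classifier stand in for the chirality codes and the
# counterpropagation network; this only illustrates the train/test split.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 24))            # stand-in chirality codes (80 alcohols)
w = rng.normal(size=24)
sign = (X @ w > 0).astype(int)           # stand-in sign of the shift difference

X_train, X_test, y_train, y_test = train_test_split(X, sign, test_size=20, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("test-set accuracy on the toy data:", clf.score(X_test, y_test))
```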
Abstract:
When a paleomagnetic pole is sought in an igneous body, the host rocks should be subjected to a contact test to ensure that the determined paleopole has the age of the intrusion. If the contact test is positive, it precludes the possibility that the measured magnetization is a later effect. We therefore investigated the variations of the remanent magnetization along cross-sections of the rocks hosting the Foum Zguid dyke (southern Morocco) and of the dyke itself. A positive contact test was obtained, but it is mainly related to chemical/crystalline remanent magnetization acquired through metasomatic processes in the host rocks during magma intrusion and cooling, and not only to thermoremanent magnetization, as ordinarily assumed in standard studies. Paleomagnetic data obtained within the dyke thus reflect the Earth's magnetic field during emplacement of this well-dated (196.9 +/- 1.8 Ma) intrusion.
Abstract:
In recent decades, all over the world, competition in the electric power sector has deeply changed the way this sector's agents play their roles. In most countries, deregulation of the electricity sector was conducted in stages, beginning with the clients at higher voltage levels and with larger electricity consumption, and later extended to all electricity consumers. The sector's liberalization and the operation of competitive electricity markets were expected to lower prices and improve quality of service, leading to greater consumer satisfaction. Transmission and distribution remain noncompetitive business areas, due to the large infrastructure investments required. However, the industry has yet to clearly establish the best business model for transmission in a competitive environment. After generation, the electricity needs to be delivered to the electrical system nodes where demand requires it, taking into consideration transmission constraints and electrical losses. If the amount of power flowing through a certain line is close to or surpasses the safety limits, then cheap but distant generation might have to be replaced by more expensive, closer generation to reduce the exceeded power flows. In a congested area, the optimal price of electricity rises to the marginal cost of the local generation or to the level needed to ration demand to the amount of available electricity. Even without congestion, some power is lost in the transmission system through heat dissipation, so prices reflect that it is more expensive to supply electricity at the far end of a heavily loaded line than close to an electric power generator. Locational marginal pricing (LMP), resulting from bidding competition, represents electrical and economic values at nodes or in areas and may provide economic indicator signals to the market agents. This article proposes a data-mining-based methodology that helps characterize zonal prices in real power transmission networks. To test our methodology, we used an LMP database from the California Independent System Operator (CAISO) for 2009 to identify economic zones. (CAISO is a nonprofit public benefit corporation charged with operating the majority of California's high-voltage wholesale power grid.) To group the buses into typical classes, each representing a set of buses with approximately the same LMP value, we used two-step and k-means clustering algorithms. By analyzing the various LMP components, our goal was to extract knowledge to support the ISO in investment and network-expansion planning.
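A minimal sketch of the bus-grouping step described above: buses are clustered by their LMP profiles so that each cluster approximates a price zone. Synthetic prices replace the 2009 CAISO records, only k-means is shown (the study also used a two-step algorithm), and the number of clusters is an arbitrary choice.

```python
# Sketch of clustering network buses by their locational marginal prices so
# that each cluster approximates a price zone. The LMP matrix is synthetic;
# hourly CAISO records would take its place in the study.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_buses, n_hours = 300, 24
zone_offsets = rng.choice([30.0, 45.0, 60.0], size=n_buses)   # hidden price zones
lmp = zone_offsets[:, None] + rng.normal(0, 3, size=(n_buses, n_hours))

features = StandardScaler().fit_transform(lmp)                # one row per bus
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

for k in range(3):
    print(f"zone {k}: {np.sum(labels == k)} buses, "
          f"mean LMP {lmp[labels == k].mean():.1f} $/MWh")
```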
Abstract:
25th Annual Conference of the European Cetacean Society, Cadiz, Spain 21-23 March 2011.
Abstract:
The main aim of this thesis is to create a data interface between a source of information and route provision for tourists, and to make that information available through an interactive mobile system for navigating and visualizing those data. The technological format will be portable and mobility-oriented (PDA) and should be practical, intuitive and multi-faceted, providing good usability for users of various age groups. There will be an AI (Artificial Intelligence) component, which will use the information provided to make weighted decisions that take a diversity of aspects into account. The system to be developed should thus be able to deal with contingencies (route changes, schedule management, cancellation of visiting points, new visiting points) and, finally, should help the tourist manage their time between Points of Interest (POIs). It should also allow a given predefined route to be followed or not, with the possibility of POI exploration scenarios based on in loco suggestions, similar to places already included in the itinerary and matching the users' profiles. The geographical test area for this project will be the riverside area of Porto, as it is a landmark of the city and, at the same time, an area with many challenges both geographically (given its slopes) and in terms of the large number of events and places to visit.
Abstract:
Nowadays, due to the remarkable growth of the mobile device market, when we want to implement client-server applications we must consider the limitations of mobile devices. In this paper we discuss which can be the most reliable and fastest way to exchange information between a server and an Android mobile application. This is an important issue because, with a responsive application, the user experience is more enjoyable. We present a study that tests and evaluates two data transfer protocols, sockets and HTTP, and three data serialization formats (XML, JSON and Protocol Buffers), using different environments and mobile devices, to determine which is the most practical and fastest to use.
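A minimal sketch of the kind of serialization comparison described above, measuring encoded size and encoding time for JSON and XML with Python's standard library. Protocol Buffers is omitted because it requires generated message classes, and the record layout and repetition count are arbitrary.

```python
# Minimal serialization benchmark: payload size and encode time for JSON and
# XML. The record layout and repeat count are arbitrary illustrations.
import json, time
import xml.etree.ElementTree as ET

record = {"id": 42, "name": "sensor-7", "values": list(range(50))}

def to_xml(rec):
    root = ET.Element("record", id=str(rec["id"]), name=rec["name"])
    for v in rec["values"]:
        ET.SubElement(root, "value").text = str(v)
    return ET.tostring(root)

for label, encode in [("JSON", lambda r: json.dumps(r).encode()), ("XML", to_xml)]:
    t0 = time.perf_counter()
    for _ in range(10000):
        payload = encode(record)
    elapsed = time.perf_counter() - t0
    print(f"{label}: {len(payload)} bytes, {elapsed * 1000:.1f} ms for 10k encodes")
```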
Abstract:
This paper focuses on a novel formalization for assessing the five-parameter model of a photovoltaic cell. An optimization procedure is used as a feasibility problem to find the parameters tuned at the open-circuit, maximum-power, and short-circuit points, in order to assess the data needed for plotting the I-V curve. A comparison with experimental results is presented for two monocrystalline PV modules.
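For reference, the five-parameter model in question is usually the single-diode model with photocurrent, diode saturation current, ideality factor, and series and shunt resistances. The sketch below solves its implicit I-V equation numerically for a given voltage; the parameter values are illustrative and not taken from the paper.

```python
# Sketch of the single-diode, five-parameter PV model (photocurrent Iph, diode
# saturation current I0, ideality factor n, series resistance Rs, shunt
# resistance Rsh). Parameter values are illustrative only.
import numpy as np
from scipy.optimize import fsolve

K_B, Q = 1.380649e-23, 1.602176634e-19

def iv_current(v, iph, i0, n, rs, rsh, cells=36, temp_k=298.15):
    """Solve I = Iph - I0*(exp((V+I*Rs)/(n*Ns*Vt)) - 1) - (V+I*Rs)/Rsh for I."""
    vt = K_B * temp_k / Q
    def residual(i):
        return iph - i0 * (np.exp((v + i * rs) / (n * cells * vt)) - 1) \
                   - (v + i * rs) / rsh - i
    return float(fsolve(residual, x0=iph)[0])

# Illustrative parameter set for a small monocrystalline module.
params = dict(iph=5.0, i0=1e-9, n=1.3, rs=0.3, rsh=300.0)
for v in (0.0, 15.0, 18.0, 21.0):
    print(f"V = {v:5.1f} V -> I = {iv_current(v, **params):.3f} A")
```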