890 results for rough sets
Abstract:
DNA microarrays are a powerful tool for measuring the levels of a mixed population of nucleic acids at one time, and they have had a great impact on many aspects of life-sciences research. In order to distinguish nucleic acids of very similar composition by hybridization, it is necessary to design microarray probes with high specificity and sensitivity. Highly specific probes are those with unique DNA sequences, whereas highly sensitive probes are those whose melting temperature falls within a desired range and which form no secondary structure. Selecting these probes from a set of functional DNA sequences (exons) constitutes a computationally expensive, discrete, non-linear search problem. We delegate the search task to a simple yet effective Evolution Strategy algorithm. Computational efficiency is also greatly improved by making use of an available bioinformatics tool.
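The abstract gives no implementation details, but as a rough illustration of the idea, the sketch below applies a simple (1+1) evolution strategy to probe selection, searching over the probe's start position within an exon and scoring candidates by melting temperature and a crude uniqueness proxy. All names, the Wallace-rule Tm estimate, and the penalty weights are illustrative assumptions, not the authors' method.

```python
import random

TM_TARGET = 65.0  # assumed target melting temperature (deg C), illustrative

def melting_temp(probe: str) -> float:
    """Crude Wallace-rule estimate for short oligos: Tm = 2*(A+T) + 4*(G+C)."""
    at = probe.count("A") + probe.count("T")
    gc = probe.count("G") + probe.count("C")
    return 2.0 * at + 4.0 * gc

def score(probe: str, background: list[str]) -> float:
    """Higher is better: Tm close to target, few exact matches elsewhere."""
    tm_penalty = abs(melting_temp(probe) - TM_TARGET)
    hits = sum(seq.count(probe) for seq in background)  # uniqueness proxy
    return -(tm_penalty + 10.0 * hits)

def select_probe(exon: str, background: list[str],
                 length: int = 20, generations: int = 500) -> str:
    """(1+1)-ES over the probe's start position within the exon."""
    pos = random.randrange(len(exon) - length + 1)
    sigma = max(1, len(exon) // 10)  # mutation step size for the position
    for _ in range(generations):
        cand = min(max(0, pos + random.randint(-sigma, sigma)),
                   len(exon) - length)
        if score(exon[cand:cand + length], background) >= \
           score(exon[pos:pos + length], background):
            pos = cand  # accept equal-or-better offspring
    return exon[pos:pos + length]
```

In a real pipeline the uniqueness term would come from an alignment tool (the "available bioinformatics tool" mentioned above), not from exact substring counting.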
Abstract:
The number of mammalian transcripts identified by full-length cDNA projects and genome sequencing projects is increasing remarkably. Clustering them into a strictly nonredundant and comprehensive set provides a platform for functional analysis of the transcriptome and proteome, but ensuring the quality and predictive usefulness of the clustering has previously required manual curation to identify truncated transcripts and inappropriate clustering of closely related sequences. A Representative Transcript and Protein Sets (RTPS) pipeline was previously designed to identify the nonredundant and comprehensive set of mouse transcripts based on clustering of a large mouse full-length cDNA set (FANTOM2). Here we propose an alternative method that is more robust, requires less manual curation, and is applicable to organisms other than mouse. RTPSs of human, mouse, and rat have been produced by this method and used for validation. Their comprehensiveness and quality are discussed through comparison with other clustering approaches. The RTPSs are available at ftp://fantom2.gsc.riken.go.jp/RTPS/. (C) 2004 Elsevier Inc. All rights reserved.
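The RTPS pipeline itself is not specified in this abstract; as a loose sketch of the underlying redundancy-removal idea, the snippet below greedily clusters transcripts by pairwise similarity and keeps the longest member of each cluster as its representative. The similarity function and threshold are placeholder assumptions (real pipelines use alignment tools, not difflib), not the FANTOM2 method.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Placeholder metric; production pipelines use sequence alignment.
    return SequenceMatcher(None, a, b).ratio()

def cluster_transcripts(transcripts: list[str],
                        threshold: float = 0.95) -> list[str]:
    """Greedy single-pass clustering; returns one representative
    (the longest transcript) per cluster."""
    clusters: list[list[str]] = []
    # Longest-first order makes each cluster's first member its longest,
    # which helps avoid picking truncated transcripts as representatives.
    for t in sorted(transcripts, key=len, reverse=True):
        for c in clusters:
            if similarity(t, c[0]) >= threshold:
                c.append(t)       # t is redundant with this cluster
                break
        else:
            clusters.append([t])  # t starts a new cluster
    return [c[0] for c in clusters]
```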
Abstract:
Objective: Secondary analyses of a previously conducted 1-year randomized controlled trial were performed to assess the application of responder criteria in patients with knee osteoarthritis (OA) using different sets of responder criteria developed by the Osteoarthritis Research Society International (OARSI) (Propositions A and B) for intra-articular drugs and Outcome Measures in Arthritis Clinical Trials (OMERACT)-OARSI (Proposition D). Methods: Two hundred fifty-five patients with knee OA were randomized to appropriate care with hylan G-F 20 (AC + H) or appropriate care without hylan G-F 20 (AC). A patient was defined as a responder at month 12 based on change in Western Ontario and McMaster Universities Osteoarthritis Index pain and function (0-100 normalized scale) and patient global assessment of OA in the study knee (at least a one-category improvement on a scale of very poor, poor, fair, good, and very good). All propositions incorporate both minimum relative and absolute changes. Results: Statistically significant differences in responder rates between treatment groups, in favor of hylan G-F 20, were detected for Proposition A (AC + H = 53.5%, AC = 25.2%), Proposition B (AC + H = 56.7%, AC = 32.3%) and Proposition D (AC + H = 66.9%, AC = 42.5%). The highest responder rates in both treatment groups were observed with Proposition D, whereas Proposition A yielded the lowest. The treatment group differences always exceeded the 20% minimum clinically important difference between groups established a priori, and were 28.3%, 24.4% and 24.4% for Propositions A, B and D, respectively. Conclusion: This analysis provides evidence for the capacity of the OARSI and OMERACT-OARSI responder criteria to detect clinically important, statistically detectable differences between treatment groups. (C) 2004 OsteoArthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
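The between-group differences quoted above are simple differences of responder percentages; as a plain restatement, the snippet below checks the reported arithmetic against the prespecified 20% threshold (all values taken directly from the abstract).

```python
# Responder rates reported in the abstract (percent): (AC + H, AC).
rates = {"A": (53.5, 25.2), "B": (56.7, 32.3), "D": (66.9, 42.5)}
MCID = 20.0  # prespecified minimum clinically important difference

for prop, (ac_h, ac) in rates.items():
    diff = ac_h - ac
    print(f"Proposition {prop}: {diff:.1f}% "
          f"({'exceeds' if diff > MCID else 'below'} {MCID}% MCID)")
# Prints 28.3%, 24.4%, 24.4% -- all exceed the 20% MCID.
```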
Abstract:
Previously, the process of finding critical sets in Latin squares has been made cumbersome by the complexity and number of Latin trades that must be constructed. In this paper we develop a theory of Latin trades that yields more transparent constructions. We use these Latin trades to find a new class of critical sets for Latin squares which are the product of the Latin square of order 2 with a back circulant Latin square of odd order.
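For concreteness, the back circulant Latin square of order n referenced above is the addition table of Z_n, with entry (i + j) mod n in cell (i, j). A minimal construction:

```python
def back_circulant(n: int) -> list[list[int]]:
    """Back circulant Latin square of order n: cell (i, j) holds (i + j) mod n."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

# Odd order, as in the construction above.
for row in back_circulant(5):
    print(row)
# Each symbol 0..4 appears exactly once in every row and every column.
```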
Abstract:
An appreciation of the physical mechanisms which cause observed seismicity complexity is fundamental to understanding the temporal behaviour of faults and single slip events. Numerical simulation of fault slip can provide insights into fault processes by allowing exploration of the parameter spaces that influence the microscopic and macroscopic physics of those processes. Particle-based models such as the Lattice Solid Model have been used previously for the simulation of stick-slip dynamics of faults, although mainly in two dimensions. Recent increases in computer power and the ability to exploit parallel computer systems have made it possible to extend particle-based fault simulations to three dimensions. In this paper a particle-based numerical model of a rough planar fault embedded between two elastic blocks in three dimensions is presented. The model uses a very simple friction law with no rate dependency and no spatial heterogeneity in the intrinsic coefficient of friction. To simulate earthquake dynamics, the model is sheared parallel to the fault plane at a constant velocity applied at the driving edges. Spontaneous slip occurs on the fault when the shear stress is large enough to overcome the frictional forces on the fault. Slip events with a wide range of sizes are observed. Investigation of the temporal evolution and spatial distribution of slip during each event shows a high degree of variability between events; in some of the larger events, highly complex slip patterns are observed.
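The Lattice Solid Model is a full 3-D particle code and is not reproduced here; as a far simpler illustration of the stick-slip mechanism the abstract describes (constant driving velocity, slip when shear stress overcomes friction), the toy below simulates a single spring-loaded block with static and dynamic friction levels. All parameter values are arbitrary assumptions, and, unlike the 3-D rough-fault model, this homogeneous toy produces regular, equally sized events.

```python
# Minimal single-block stick-slip slider: a block on a frictional surface,
# pulled through a spring by a plate moving at constant velocity.
K = 1.0          # spring stiffness (arbitrary units)
V_DRIVE = 0.1    # constant driving velocity
F_STATIC = 2.0   # static friction threshold
F_DYNAMIC = 1.0  # dynamic (sliding) friction level
DT = 0.01        # time step

def simulate(steps: int = 20000) -> list[float]:
    plate, block = 0.0, 0.0
    slips = []  # sizes of individual slip events
    for _ in range(steps):
        plate += V_DRIVE * DT
        shear = K * (plate - block)  # stress from the loading spring
        if shear > F_STATIC:
            # Slip: block jumps forward until stress relaxes to the
            # dynamic friction level (treated as a quasi-static event).
            slip = (shear - F_DYNAMIC) / K
            block += slip
            slips.append(slip)
    return slips

events = simulate()
print(f"{len(events)} slip events, mean size {sum(events)/len(events):.3f}")
```

In the paper's model, the variability and complexity of events emerge from the 3-D geometry of the rough fault rather than from any heterogeneity in the friction law.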
Abstract:
Van der Waals forces often dominate the interactions and adhesion between fine particles and, in turn, decisively influence the bulk behaviour of powders. However, so far there is no effective means of characterizing the adhesive behaviour of such particles. A complication is that most powder particles have rough surfaces, so it is the asperities on those surfaces that touch, obscuring the actual area of contact. Conventional approaches using surface energy provide limited information regarding adhesion, and pull-off forces measured with an atomic force microscope (AFM) are highly variable and difficult to interpret. In this paper we develop a model which combines the Rumpf-Rabinovich and the JKR-DMT theories to account simultaneously for the effects of surface roughness and deformation on adhesion. The model is applied to a 'characteristic asperity' which may be easily obtained from AFM measurements. The concept of adhesiveness, a material property reflecting the influences of elastic deformability, surface roughness, and interfacial surface energy, is introduced as an efficient and quantitative measure of the adhering tendency of a powder. Furthermore, a novel concept of specific adhesiveness is proposed as a convenient tool for characterizing and benchmarking solid materials. An example illustrating the use of the proposed theories is provided. (c) 2005 Elsevier B.V. All rights reserved.
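The combined Rumpf-Rabinovich/JKR-DMT model is not reproduced here, but the smooth-sphere limits it builds on are standard: JKR and DMT theory predict pull-off forces of (3/2)πWR and 2πWR, respectively, for a sphere of radius R and work of adhesion W. A sketch of those baseline quantities, with illustrative example values and all roughness corrections omitted:

```python
import math

def jkr_pull_off(radius_m: float, work_of_adhesion: float) -> float:
    """JKR pull-off force for a smooth, compliant sphere: (3/2)*pi*W*R."""
    return 1.5 * math.pi * work_of_adhesion * radius_m

def dmt_pull_off(radius_m: float, work_of_adhesion: float) -> float:
    """DMT pull-off force for a smooth, stiff sphere: 2*pi*W*R."""
    return 2.0 * math.pi * work_of_adhesion * radius_m

# Example: 5-micron sphere, W = 0.05 J/m^2 (illustrative values only).
R, W = 5e-6, 0.05
print(f"JKR: {jkr_pull_off(R, W)*1e9:.0f} nN, "
      f"DMT: {dmt_pull_off(R, W)*1e9:.0f} nN")
# Real particles fall between these limits; surface asperities (the
# paper's 'characteristic asperity') substantially reduce both values.
```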
Abstract:
Two experiments were conducted to test the hypothesis that toddlers have access to an analog-magnitude number representation that supports numerical reasoning about relatively large numbers. Three-year-olds were presented with subtraction problems in which the initial set size and the proportion subtracted were systematically varied. Two sets of cookies were presented and then covered. The experimenter visibly subtracted cookies from the hidden sets, and the children were asked to choose which of the resulting sets had more. In Experiment 1, performance was above chance when high proportions of objects (3 versus 6) were subtracted from large sets (of 9), and for the subset of older participants (older than 3 years, 5 months; n = 15), performance was also above chance when high proportions (10 versus 20) were subtracted from the very large sets (of 30). In Experiment 2, which was conducted exclusively with older 3-year-olds and incorporated an important methodological control, the pattern of results for the subtraction tasks was replicated. In both experiments, success on the tasks was not related to counting ability. The results of these experiments support the hypothesis that young children have access to an analog-magnitude system for representing large approximate quantities: performance on these subtraction tasks showed a Weber's law signature and was independent of conventional number knowledge.
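The analog-magnitude account predicts that discrimination depends on the ratio of the two quantities rather than their absolute sizes (Weber's law): each set is represented as a noisy magnitude whose standard deviation grows in proportion to its mean (scalar variability). The simulation below illustrates that signature; the Weber fraction is an assumed value for illustration, not an estimate from these experiments.

```python
import random

WEBER_FRACTION = 0.25  # assumed noise level, illustrative only

def noisy_magnitude(n: int) -> float:
    """Scalar variability: sd proportional to the true magnitude."""
    return random.gauss(n, WEBER_FRACTION * n)

def p_correct(a: int, b: int, trials: int = 100000) -> float:
    """Probability of correctly judging which set is larger."""
    correct = sum(noisy_magnitude(max(a, b)) > noisy_magnitude(min(a, b))
                  for _ in range(trials))
    return correct / trials

# Same 2:1 ratio at different absolute sizes -> similar accuracy,
# the Weber-law signature described in the abstract.
print(p_correct(3, 6))    # smaller comparison
print(p_correct(10, 20))  # larger comparison, same ratio
```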
Abstract:
It is shown that in some cases it is possible to reconstruct a block design D uniquely from incomplete knowledge of a minimal defining set for D. This surprising result has implications for the use of minimal defining sets in secret sharing schemes.