910 results for Computer network protocols.
Abstract:
The new technologies for Knowledge Discovery from Databases (KDD) and data mining promise to bring new insights into the voluminous and growing amount of biological data. KDD technology is complementary to laboratory experimentation and helps speed up biological research. This article contains an introduction to KDD, a review of data mining tools, and their biological applications. We discuss the domain concepts related to biological data and databases, as well as current KDD and data mining developments in biology.
Abstract:
The Multicenter Australian Study of Epidural Anesthesia and Analgesia in Major Surgery (The MASTER Trial) was designed to evaluate the possible benefit of epidural block in improving outcome in high-risk patients. The trial began in 1995 and is scheduled to reach the planned sample size of 900 during 2001. This paper describes the trial design and presents data comparing 455 patients randomized in 21 institutions in Australia, Hong Kong, and Malaysia, with 237 patients from the same hospitals who were eligible but not randomized. Nine categories of high-risk patients were defined as entry criteria for the trial. Protocols for ethical review, informed consent, randomization, clinical anesthesia and analgesia, and perioperative management were determined following extensive consultation with anesthesiologists throughout Australia. Clinical and research information was collected in participating hospitals by research staff who may not have been blind to allocation. Decisions about the presence or absence of endpoints were made primarily by a computer algorithm, supplemented by blinded clinical experts. Without unblinding the trial, comparison of eligibility criteria and incidence of endpoints between randomized and nonrandomized patients showed only small differences. We conclude that there is no strong evidence of important demographic or clinical differences between randomized and nonrandomized patients eligible for the MASTER Trial. Thus, the trial results are likely to be broadly generalizable. Control Clin Trials 2000;21:244-256 (C) Elsevier Science Inc. 2000.
Abstract:
Dual-energy X-ray absorptiometry (DXA) is a widely used method for measuring bone mineral in the growing skeleton. Because scan analysis in children offers a number of challenges, we compared DXA results using six analysis methods at the total proximal femur (PF) and five methods at the femoral neck (FN). In total we assessed 50 scans (25 boys, 25 girls) from two separate studies for cross-sectional differences in bone area, bone mineral content (BMC), and areal bone mineral density (aBMD), and for percentage change over the short term (8 months) and long term (7 years). At the proximal femur, for the short-term longitudinal analysis, there was an approximately 3.5% greater change in bone area and BMC when the global region of interest (ROI) was allowed to increase in size between years than when the global ROI was held constant. Trend analysis showed a significant (p < 0.05) difference between scan analysis methods for bone area and BMC across 7 years. At the femoral neck, cross-sectional analysis using a narrower (than default) ROI, without change in location, resulted in 12.9% and 12.6% smaller bone area and BMC, respectively (both p < 0.001). Changes in FN area and BMC over 8 months were significantly greater (2.3%, p < 0.05) using the narrower FN ROI rather than the default ROI. Similarly, the 7-year longitudinal data revealed that differences between scan analysis methods were greatest when the narrower FN ROI was maintained across all years (p < 0.001). For aBMD there were no significant differences in group means between analysis methods at either the PF or FN. Our findings show the need to standardize the analysis of proximal femur DXA scans in growing children.
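The percentage differences and percentage changes reported above follow from simple ratios of the measured quantities. A minimal sketch, using hypothetical values rather than the study's data:

```python
# Percentage difference between two DXA analysis methods, and percentage
# change over a follow-up interval. The numbers below are illustrative only.

def percent_difference(value_method_a: float, value_method_b: float) -> float:
    """Difference of method B relative to method A, in percent."""
    return (value_method_b - value_method_a) / value_method_a * 100.0

def percent_change(baseline: float, follow_up: float) -> float:
    """Longitudinal change relative to baseline, in percent."""
    return (follow_up - baseline) / baseline * 100.0

# Hypothetical femoral-neck BMC (g) from the default ROI and a narrower ROI.
bmc_default_roi = 2.40
bmc_narrow_roi = 2.10
print(percent_difference(bmc_default_roi, bmc_narrow_roi))  # about -12.5%, i.e. smaller

# Hypothetical 8-month follow-up measured with the same (narrow) ROI.
print(percent_change(2.10, 2.16))  # about +2.9% change
```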
Abstract:
The influence of initial perturbation geometry and material properties on final fold geometry has been investigated using finite-difference (FLAC) and finite-element (MARC) numerical models. Previous studies using these two different codes reported very different folding behaviour although the material properties, boundary conditions and initial perturbation geometries were similar. The current results establish that the discrepancy was not due to the different computer codes but due to the different strain rates employed in the two previous studies (i.e. 10^-6 s^-1 in the FLAC models and 10^-14 s^-1 in the MARC models). As a result, different parts of the elasto-viscous rheological field were being investigated. For the same material properties, strain rate and boundary conditions, the present results using the two different codes are consistent. A transition in folding behaviour, from a situation where the geometry of the initial perturbation determines the final fold shape to a situation where material properties control the final geometry, is produced using both models. This transition takes place with increasing strain rate, decreasing elastic moduli or increasing viscosity (reflecting in each case the increasing influence of the elastic component in the Maxwell elasto-viscous rheology). The transition described here is mechanically feasible but is associated with very high stresses in the competent layer (on the order of GPa), which is improbable under natural conditions. (C) 2000 Elsevier Science Ltd. All rights reserved.
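The dependence on strain rate, elastic moduli and viscosity can be illustrated with a textbook dimensionless ratio for a Maxwell material: the product of the relaxation time (viscosity divided by shear modulus) and the strain rate. This is a sketch under standard assumptions, not the FLAC/MARC model setup, and the parameter values are illustrative:

```python
# Dimensionless measure of elastic influence in a Maxwell elasto-viscous
# material: relaxation time (viscosity / shear modulus) times strain rate.
# Parameter values are illustrative assumptions.

def elastic_influence(viscosity_pa_s: float, shear_modulus_pa: float,
                      strain_rate_per_s: float) -> float:
    relaxation_time_s = viscosity_pa_s / shear_modulus_pa
    return relaxation_time_s * strain_rate_per_s

viscosity = 1e21      # Pa s
shear_modulus = 1e10  # Pa

# The two strain rates used in the earlier FLAC and MARC studies.
for strain_rate in (1e-14, 1e-6):
    print(strain_rate, elastic_influence(viscosity, shear_modulus, strain_rate))
# The ratio grows by eight orders of magnitude at the faster strain rate:
# the elastic component dominates there, consistent with the regime in which
# material properties, rather than the initial perturbation, control fold shape.
```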
Abstract:
Continuous-valued recurrent neural networks can learn mechanisms for processing context-free languages. The dynamics of such networks is usually based on damped oscillation around fixed points in state space and requires that the dynamical components are arranged in certain ways. It is shown that qualitatively similar dynamics with similar constraints hold for a^n b^n c^n, a context-sensitive language. The additional difficulty with a^n b^n c^n, compared with the context-free language a^n b^n, consists of 'counting up' and 'counting down' letters simultaneously. The network solution is to oscillate in two principal dimensions, one for counting up and one for counting down. This study focuses on the dynamics employed by the sequential cascaded network, in contrast to the simple recurrent network, and the use of backpropagation through time. Found solutions generalize well beyond the training data; however, learning is not reliable. The contribution of this study lies in demonstrating how the dynamics in recurrent neural networks that process context-free languages can also be employed in processing some context-sensitive languages (traditionally thought of as requiring additional computational resources). This continuity of mechanism between language classes contributes to our understanding of neural networks in modelling language learning and processing.
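The counting-by-oscillation idea can be sketched with hand-crafted linear maps rather than a trained network (the matrices below are assumptions for illustration, not the sequential cascaded network's learned weights): reading a's contracts one state dimension while flipping its sign, reading b's expands it back, and a second dimension does the same for b's versus c's.

```python
import numpy as np

# Hand-crafted illustration of counting by damped / expanding oscillation
# around a fixed point. Dimension 0 counts a's up and b's down; dimension 1
# counts b's up and c's down. Only the counting is checked here; enforcing
# the symbol order a...ab...bc...c is omitted for brevity.

W = {
    'a': np.diag([-0.5, 1.0]),   # contract-and-flip dim 0: count up a's
    'b': np.diag([-2.0, -0.5]),  # expand dim 0 (count down a's), contract dim 1 (count up b's)
    'c': np.diag([1.0, -2.0]),   # expand dim 1: count down c's
}

def run(string, x0=np.array([1.0, 1.0]), tol=1e-6):
    x = x0.copy()
    for symbol in string:
        x = W[symbol] @ x
    # The counts balance exactly when both state dimensions return to the start.
    return np.allclose(x, x0, atol=tol)

n = 5
print(run('a' * n + 'b' * n + 'c' * n))          # True  (balanced a^n b^n c^n)
print(run('a' * n + 'b' * (n - 1) + 'c' * n))    # False (unbalanced)
```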
Abstract:
This paper discusses an object-oriented neural network model that was developed for predicting short-term traffic conditions on a section of the Pacific Highway between Brisbane and the Gold Coast in Queensland, Australia. The feasibility of this approach is demonstrated through a time-lag recurrent network (TLRN) which was developed for predicting speed data up to 15 minutes into the future. The results obtained indicate that the TLRN is capable of predicting speed up to 5 minutes into the future with a high degree of accuracy (90-94%). Similar models, which were developed for predicting freeway travel times on the same facility, were successful in predicting travel times up to 15 minutes into the future with a similar degree of accuracy (93-95%). These results represent substantial improvements on conventional model performance and clearly demonstrate the feasibility of using the object-oriented approach for short-term traffic prediction. (C) 2001 Elsevier Science B.V. All rights reserved.
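The prediction task itself can be framed as learning from a window of time-lagged speed observations. The sketch below uses a plain least-squares model as a stand-in for the TLRN, with synthetic data and window sizes chosen only for illustration:

```python
import numpy as np

# Minimal sketch: 5-minute-ahead speed prediction from time-lagged observations.
# A linear least-squares predictor stands in for the time-lag recurrent network;
# the data, lag count and horizon are illustrative assumptions.

def make_lagged_dataset(speeds, n_lags=6, horizon=1):
    """Rows of [v(t-n_lags+1), ..., v(t)] paired with target v(t+horizon)."""
    X, y = [], []
    for t in range(n_lags - 1, len(speeds) - horizon):
        X.append(speeds[t - n_lags + 1: t + 1])
        y.append(speeds[t + horizon])
    return np.array(X), np.array(y)

# Synthetic 5-minute average speeds (km/h) with a daily-like cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(2000)
speeds = 90 + 10 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 3, t.size)

X, y = make_lagged_dataset(speeds, n_lags=6, horizon=1)   # one step = 5 minutes
X = np.hstack([X, np.ones((len(X), 1))])                  # bias term
coef, *_ = np.linalg.lstsq(X[:1500], y[:1500], rcond=None)
pred = X[1500:] @ coef

accuracy = 100 * (1 - np.mean(np.abs(pred - y[1500:]) / y[1500:]))
print(f"mean prediction accuracy on held-out data: {accuracy:.1f}%")
```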
Abstract:
Peptides that induce and recall T-cell responses are called T-cell epitopes. T-cell epitopes may be useful in a subunit vaccine against malaria. Computer models that simulate peptide binding to MHC are useful for selecting candidate T-cell epitopes since they minimize the number of experiments required for their identification. We applied a combination of computational and immunological strategies to select candidate T-cell epitopes. A total of 86 experimental binding assays were performed in three rounds of identification of HLA-A11 binding peptides from the six pre-erythrocytic malaria antigens. Thirty-six peptides were experimentally confirmed as binders. We show that the cyclical refinement of the artificial neural network (ANN) models results in a significant improvement in the efficiency of identifying potential T-cell epitopes. (C) 2001 by Elsevier Science Inc.
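The cyclical-refinement loop described above can be sketched schematically; the model interface, peptide encoding and per-round batch size below are hypothetical placeholders, not the paper's actual ANN or assay protocol:

```python
# Schematic of cyclical refinement: predict binders, assay the top-ranked
# candidates, fold the experimental results back into the training set,
# retrain, and repeat. `train_model`, `assay` and `model.predict_score`
# are hypothetical placeholders supplied by the caller.

def cyclical_refinement(candidate_peptides, training_data, assay, train_model,
                        rounds=3, per_round=30):
    model = train_model(training_data)
    confirmed_binders = []
    for _ in range(rounds):
        # Rank peptides not yet tested by predicted HLA-A11 binding score.
        tested = {seq for seq, _ in training_data}
        untested = [p for p in candidate_peptides if p not in tested]
        ranked = sorted(untested, key=model.predict_score, reverse=True)
        # Experimentally assay the top candidates of this round.
        results = [(p, assay(p)) for p in ranked[:per_round]]
        confirmed_binders += [p for p, binds in results if binds]
        # Add the new measurements to the training data and retrain the model.
        training_data = training_data + results
        model = train_model(training_data)
    return confirmed_binders, model
```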
Abstract:
Axial X-ray computed tomography (CT) scanning provides a convenient means of recording the three-dimensional form of soil structure. The technique has been used for nearly two decades, but initial development has concentrated on qualitative description of images. More recently, increasing effort has been put into quantifying the geometry and topology of macropores likely to contribute to preferential flow in soils. Here we describe a novel technique for tracing connected macropores in the CT scans. After object extraction, three-dimensional mathematical morphological filters are applied to quantify the reconstructed structure. These filters consist of sequences of so-called erosions and/or dilations of a 32-face structuring element to describe object distances and volumes of influence. The tracing and quantification methodologies were tested on a set of undisturbed soil cores collected in a Swiss pre-alpine meadow, where a new earthworm species (Aporrectodea nocturna) was accidentally introduced. Given the limited number of samples analysed in this study, the results presented only illustrate the potential of the method to reconstruct and quantify macropores. Our results suggest that the introduction of the new species induced very limited change to the soil structure; for example, no difference in total macropore length or mean diameter was observed. However, in the zone colonised by the new species, individual macropores tended to have a longer average length, be more vertical, and be further apart at some depths. Overall, the approach proved well suited to the analysis of the three-dimensional architecture of macropores. It provides a framework for the analysis of complex structures, which are less satisfactorily observed and described using 2D imaging. (C) 2002 Elsevier Science B.V. All rights reserved.
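A minimal sketch of this kind of three-dimensional erosion/dilation analysis on a binary macropore image, using scipy's standard morphology routines; the 26-connected cube used here is a stand-in for the 32-face structuring element described in the paper, and the input volume is synthetic:

```python
import numpy as np
from scipy import ndimage

# Sketch: quantify a binary 3-D macropore image with labelling, successive
# erosions and dilations. The structuring element is scipy's 26-connected
# 3x3x3 cube, an assumed substitute for the paper's 32-face element.

rng = np.random.default_rng(1)
pores = ndimage.binary_dilation(rng.random((64, 64, 64)) > 0.995, iterations=2)

structure = ndimage.generate_binary_structure(rank=3, connectivity=3)

# Label connected macropores and report their volumes (in voxels).
labels, n_objects = ndimage.label(pores, structure=structure)
volumes = np.bincount(labels.ravel())[1:]
print(n_objects, "connected objects, largest volumes:", sorted(volumes, reverse=True)[:5])

# Successive erosions: the number of erosions needed to remove an object
# gives a simple measure of its local thickness.
eroded = pores
n_erosions = 0
while eroded.any():
    eroded = ndimage.binary_erosion(eroded, structure=structure)
    n_erosions += 1
print("erosions to empty image (max half-thickness):", n_erosions)

# A dilation of the pore space approximates its volume of influence.
influence = ndimage.binary_dilation(pores, structure=structure, iterations=3)
print("volume of influence (voxels):", int(influence.sum()))
```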
Abstract:
The finite element method is used to simulate coupled problems, which describe the related physical and chemical processes of ore body formation and mineralization, in geological and geochemical systems. The main purpose of this paper is to illustrate some simulation results for different types of modelling problems in pore-fluid saturated rock masses. The aims of the simulation results presented in this paper are: (1) getting a better understanding of the processes and mechanisms of ore body formation and mineralization in the upper crust of the Earth; (2) demonstrating the usefulness and applicability of the finite element method in dealing with a wide range of coupled problems in geological and geochemical systems; (3) qualitatively establishing a set of showcase problems, against which any numerical method and computer package can be reasonably validated. (C) 2002 Published by Elsevier Science B.V.
Abstract:
This paper is concerned with the use of scientific visualization methods for the analysis of feedforward neural networks (NNs). Inevitably, the kinds of data associated with the design and implementation of neural networks are of very high dimensionality, presenting a major challenge for visualization. A method is described using the well-known statistical technique of principal component analysis (PCA). This is found to be an effective and useful method of visualizing the learning trajectories of many learning algorithms such as back-propagation and can also be used to provide insight into the learning process and the nature of the error surface.
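The visualization idea can be sketched in a few lines: record a snapshot of the weight vector at each training step, then project the whole trajectory onto its first two principal components. The tiny gradient-descent model below is an illustrative stand-in for a back-propagation-trained feedforward network:

```python
import numpy as np

# Sketch: visualize a learning trajectory by projecting weight-vector
# snapshots onto the first two principal components. A single linear unit
# trained by gradient descent on random data stands in for a full NN.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_w = rng.normal(size=10)
y = X @ true_w + rng.normal(scale=0.1, size=200)

w = np.zeros(10)
trajectory = []
for _ in range(300):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
    w -= 0.05 * grad
    trajectory.append(w.copy())
trajectory = np.array(trajectory)            # shape: (steps, n_weights)

# PCA of the weight snapshots: centre them, then take the leading right
# singular vectors (principal components) via SVD.
centred = trajectory - trajectory.mean(axis=0)
_, _, Vt = np.linalg.svd(centred, full_matrices=False)
projected = centred @ Vt[:2].T               # 2-D learning trajectory, ready to plot
print(projected.shape)
```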
Abstract:
High-performance video codecs are mandatory for multimedia applications such as video-on-demand and video conferencing. Recent research has proposed numerous video coding techniques to meet requirements on bandwidth, delay, loss and Quality-of-Service (QoS). In this paper, we present our investigations on inter-subband self-similarity within wavelet-decomposed video frames using neural networks, and study the performance of applying the spatial network model to all video frames over time. The goal of our proposed method is to restore the highest perceptual quality for video transmitted over a highly congested network. Our contributions in this paper are: (1) a new coding model with neural-network-based inter-subband redundancy (ISR) prediction for wavelet video coding; and (2) an evaluation of the performance of 1D and 2D ISR prediction, including multiple levels of wavelet decomposition. Our results show that a short-term quality enhancement may be obtained using both 1D and 2D ISR prediction.
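A rough sketch of the underlying idea of predicting one wavelet subband from another within a decomposed frame; the linear predictor, synthetic frame and choice of subbands are assumptions, not the paper's neural-network model:

```python
import numpy as np
import pywt

# Sketch: exploit inter-subband redundancy in a wavelet-decomposed frame by
# predicting one detail subband from the others with a simple linear model
# (a stand-in for the neural-network predictor). The frame is synthetic.

rng = np.random.default_rng(0)
frame = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)  # smooth-ish image

cA, (cH, cV, cD) = pywt.dwt2(frame, 'haar')   # one level of 2-D Haar decomposition

# Least-squares prediction of the diagonal detail subband from the horizontal
# and vertical detail subbands, coefficient by coefficient.
features = np.stack([cH.ravel(), cV.ravel(), np.ones(cH.size)], axis=1)
target = cD.ravel()
coef, *_ = np.linalg.lstsq(features, target, rcond=None)
predicted = features @ coef

residual_energy = np.sum((target - predicted) ** 2) / np.sum(target ** 2)
print(f"residual energy after ISR-style prediction: {residual_energy:.3f}")
# The closer the residual energy is to 0, the more inter-subband redundancy a
# decoder-side predictor could exploit to restore lost detail coefficients.
```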
Abstract:
Background: Concerns exist regarding the effect of radiation dose from paediatric pelvic CT scans and the potential later risk of radiation-induced neoplasm and teratogenic outcomes in these patients. Objective: To assess the diagnostic quality of CT images of the paediatric pelvis using either reduced mAs or increased pitch compared with standard settings. Materials and methods: A prospective study of pelvic CT scans of 105 paediatric patients was performed using one of three protocols: (1) 31 at a standard protocol of 200 mA with rotation time of 0.75 s at 120 kVp and a pitch factor approximating 1.4; (2) 31 at increased pitch factor approaching 2 and 200 mA; and (3) 43 at a reduced setting of 100 mA and a pitch factor of 1.4. All other settings remained the same in all three groups. Image quality was assessed by radiologists blinded to the protocol used in each scan. Results: No significant difference was found between the quality of images acquired at standard settings and those acquired at half the standard mAs. The use of increased pitch factor resulted in a higher proportion of poor images. Conclusions: Images acquired at 120 kVp using 75 mAs are equivalent in diagnostic quality to those acquired at 150 mAs. Reduced settings can provide useful imaging of the paediatric pelvis and should be considered as a standard protocol in these situations.
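The mAs figures quoted in the conclusions follow directly from tube current and rotation time, and the relative dose of the three protocols can be sketched with the common approximation that dose scales with mAs and inversely with pitch; this is an illustration, not the study's dosimetry:

```python
# Effective mAs and a relative-dose index for the three protocols described
# above. The dose index uses the usual approximation dose ∝ mAs / pitch;
# it is illustrative only, not the study's dose measurements.

protocols = {
    "standard":        {"mA": 200, "rotation_s": 0.75, "pitch": 1.4},
    "increased pitch": {"mA": 200, "rotation_s": 0.75, "pitch": 2.0},
    "reduced mA":      {"mA": 100, "rotation_s": 0.75, "pitch": 1.4},
}

for name, p in protocols.items():
    mAs = p["mA"] * p["rotation_s"]          # e.g. 200 mA x 0.75 s = 150 mAs
    relative_dose = mAs / p["pitch"]
    print(f"{name:>15}: {mAs:.0f} mAs, relative dose index {relative_dose:.0f}")
# standard: 150 mAs; increased pitch: 150 mAs at lower dose; reduced mA: 75 mAs,
# roughly half the standard dose, matching the 150 mAs vs 75 mAs comparison above.
```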
Abstract:
Spatial data has now been used extensively in the Web environment, providing online customized maps and supporting map-based applications. The full potential of Web-based spatial applications, however, has yet to be achieved due to performance issues related to the large size and high complexity of spatial data. In this paper, we introduce a multiresolution approach to spatial data management and query processing such that the database server can choose spatial data at the right resolution level for different Web applications. One highly desirable property of the proposed approach is that the server-side processing cost and network traffic can be reduced when the level of resolution required by an application is low. Another advantage is that our approach pushes complex multiresolution structures and algorithms into the spatial database engine; that is, the developer of spatial Web applications need not be concerned with such complexity. This paper explains the basic idea, technical feasibility and applications of multiresolution spatial databases.
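A minimal sketch of server-side resolution selection; the data structure, simplification tolerances and thresholds below are assumptions for illustration, not the paper's multiresolution database design:

```python
from dataclasses import dataclass

# Sketch: the server stores each geometry at several pre-generalised resolution
# levels and answers a query with the coarsest level adequate for the requested
# display scale, reducing processing cost and network traffic for coarse maps.

@dataclass
class MultiResolutionGeometry:
    # level -> list of (x, y) vertices; level 0 is the full-resolution geometry.
    # Assumed convention: level k was simplified with a tolerance of 2**k metres.
    levels: dict

    def for_scale(self, metres_per_pixel: float):
        """Pick the coarsest stored level whose tolerance fits within one pixel."""
        for level in sorted(self.levels, reverse=True):
            if 2 ** level <= metres_per_pixel:
                return self.levels[level]
        return self.levels[0]

road = MultiResolutionGeometry(levels={
    0: [(0, 0), (1, 2), (2, 3), (4, 3), (7, 5), (9, 9)],   # full detail
    2: [(0, 0), (2, 3), (7, 5), (9, 9)],                   # simplified, ~4 m tolerance
    4: [(0, 0), (9, 9)],                                    # coarse, ~16 m tolerance
})

print(len(road.for_scale(0.5)))    # detailed web map: full-resolution geometry (6 vertices)
print(len(road.for_scale(20.0)))   # small overview map: coarse geometry (2 vertices)
```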