969 results for on-disk data layout
Abstract:
This paper describes a new and simple method to determine the molecular weight of proteins in dilute solution, with an error smaller than ~10%, by using the experimental data of a single small-angle X-ray scattering (SAXS) curve measured on a relative scale. This procedure does not require the measurement of SAXS intensity on an absolute scale and does not involve a comparison with another SAXS curve determined from a known standard protein. The proposed procedure can be applied to monodisperse systems of proteins in dilute solution, either in monomeric or multimeric state, and it has been successfully tested on SAXS data experimentally determined for proteins with known molecular weights. It is shown here that the molecular weights determined by this procedure deviate from the known values by less than 10% in each case, and the average error for the test set of 21 proteins was 5.3%. Importantly, this method allows for an unambiguous determination of the multimeric state of proteins with known molecular weights.
Abstract:
An investigation of nucleate boiling on a vertical array of horizontal plain tubes is presented in this paper. Experiments were performed with refrigerant R123 at reduced pressures varying from 0.022 to 0.64, tube pitch to diameter ratios of 1.32, 1.53 and 2.00, and heat fluxes from 0.5 to 40 kW/m². Brass tubes with external diameters of 19.05 mm and average roughness of 0.12 μm were used in the experiments. The effect of the tube spacing on the local heat transfer coefficient along the tube array was negligible within the present range of experimental conditions. For partial nucleate boiling, characterized by low heat fluxes and low reduced pressures, the tube positioning shows a remarkable effect on the heat transfer coefficient. Based on these data, a general correlation for the prediction of the nucleate boiling heat transfer coefficient on a vertical array of horizontal tubes under flooded conditions was proposed. According to this correlation, the ratio between the heat transfer coefficients of a given tube and the lowest tube in the array depends only on the tube row number, the reduced pressure and the heat flux. By using the proposed correlation, most of the experimental heat transfer coefficients obtained in the present study were predicted within ±15%. The new correlation compares reasonably well with independent data from the literature. © 2008 Elsevier Inc. All rights reserved.
Abstract:
In the unlubricated sliding wear of steels, the mild-severe and severe-mild wear transitions have long been investigated. The effect of system inputs such as normal load, sliding speed, environment humidity and temperature, and material properties, among others, on those transitions has also been studied. Although transitions seem to be caused by microstructural changes, surface oxidation and work-hardening, some questions remain regarding the way each aspect is involved. Since the early studies in sliding wear, it has usually been assumed that only the material properties of the softer body influence the wear behavior of contacting surfaces. For example, the Archard equation involves only the hardness of the softer body, without considering the hardness of the harder body. This work aims to discuss the importance of the harder body's hardness in determining the operating wear regime. For this, pin-on-disk wear tests were carried out, in which the disk material was always harder than the pin material. Variations of the friction force and vertical displacement of the pin were registered during the tests. A material characterization before and after tests was conducted using stereoscopy and scanning electron microscopy (SEM), in addition to mass loss, surface roughness and microhardness measurements. The wear results confirmed the occurrence of a mild-severe wear transition when the disk hardness was decreased. The disk hardness to pin hardness ratio (Hd/Hp) was used as a criterion to establish the nature of surface contact deformation and to determine the wear regime transition. A predominantly elastic or plastic contact, characterized by Hd/Hp values higher or lower than one, results in a mild or severe wear regime, respectively. © 2009 Elsevier B.V. All rights reserved.
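The Hd/Hp criterion stated in this abstract reduces to a simple decision rule. A minimal sketch in Python; the function name and the handling of the boundary case Hd/Hp = 1 are illustrative assumptions, not from the paper:

```python
def wear_regime(disk_hardness: float, pin_hardness: float) -> str:
    """Classify the sliding-wear regime from the hardness ratio Hd/Hp.

    Per the criterion above: Hd/Hp > 1 implies a predominantly elastic
    contact and a mild wear regime; Hd/Hp < 1 implies a predominantly
    plastic contact and a severe wear regime. The boundary Hd/Hp == 1
    is assigned to "severe" here, an arbitrary choice for illustration.
    """
    ratio = disk_hardness / pin_hardness
    return "mild" if ratio > 1.0 else "severe"
```

For example, a 600 HV disk sliding against a 300 HV pin (Hd/Hp = 2) would be classified as mild wear under this rule.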
Abstract:
Target region amplification polymorphism (TRAP) markers were used to estimate the genetic similarity (GS) among 53 sugarcane varieties and five species of the Saccharum complex. Seven fixed primers designed from candidate genes involved in sucrose metabolism and three from genes involved in drought response metabolism were used in combination with three arbitrary primers. The clustering of the genotypes for sucrose metabolism and drought response were similar, but the GS based on Jaccard's coefficient changed. The GS based on polymorphism in sucrose genes, estimated in a set of 46 Brazilian varieties belonging to the three Brazilian breeding programs, ranged from 0.52 to 0.90, and that based on drought data ranged from 0.44 to 0.95. The results suggest that genetic variability was lower in the sucrose metabolism genes than in the drought response metabolism genes.
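Jaccard's coefficient on binary marker data (band present/absent per genotype) has a standard definition: the number of shared bands divided by the number of bands present in either profile. A minimal sketch; the function name is illustrative:

```python
def jaccard_similarity(a, b):
    """Jaccard coefficient for two binary marker profiles.

    a, b: sequences of 0/1 scores, one entry per marker band
    (1 = band present). Returns |A ∩ B| / |A ∪ B|, i.e. shared
    bands over bands present in either profile.
    """
    present_a = {i for i, x in enumerate(a) if x}
    present_b = {i for i, x in enumerate(b) if x}
    union = present_a | present_b
    if not union:          # no bands in either profile
        return 0.0
    return len(present_a & present_b) / len(union)
```

Two profiles sharing 2 of 3 distinct bands would score 2/3 ≈ 0.67, in the middle of the 0.52-0.90 range reported above.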
Abstract:
A new isotherm is proposed here for adsorption of condensable vapors and gases on nonporous materials having type II isotherms according to the Brunauer-Deming-Deming-Teller (BDDT) classification. The isotherm combines the recent molecular-continuum model in the multilayer region with other widely used models for sub-monolayer coverage, some of which satisfy the requirement of a Henry's law asymptote. The model is successfully tested using isotherm data for nitrogen adsorption on nonporous silica, carbon and alumina, as well as benzene and hexane adsorption on nonporous carbon. Based on the data fits, out of several different alternative choices of model for the monolayer region, the Freundlich and the Unilan models are found to be the most successful when combined with the multilayer model to predict the whole isotherm. The hybrid model is consequently applicable over a wide pressure range. © 2000 Elsevier Science B.V. All rights reserved.
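The Freundlich and Unilan models named above have standard textbook forms; a sketch of both, with conventional parameter names (K, n, q_s, b, s) that are assumptions here rather than taken from the paper:

```python
import math

def freundlich(p, K, n):
    """Freundlich sub-monolayer isotherm: q = K * p**(1/n).

    p: pressure; K, n: empirical constants (n > 1 typically)."""
    return K * p ** (1.0 / n)

def unilan(p, q_s, b, s):
    """Unilan isotherm (Langmuir with a uniform distribution of
    adsorption energies):

        q = q_s / (2s) * ln[(1 + b*exp(s)*p) / (1 + b*exp(-s)*p)]

    q_s: saturation loading; b: affinity constant; s: heterogeneity
    parameter (s -> 0 recovers the Langmuir isotherm)."""
    return q_s / (2.0 * s) * math.log(
        (1.0 + b * math.exp(s) * p) / (1.0 + b * math.exp(-s) * p)
    )
```

Both functions are monotonically increasing in pressure, and the Unilan form satisfies a Henry's law (linear) asymptote at low pressure, the property the abstract highlights for the sub-monolayer region.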
Abstract:
The incidence of 21-hydroxylase deficiency (CYP21D) congenital adrenal hyperplasia (CAH) in Brazil is purportedly one of the highest in the world (1:7,533). However, this information is not based on official data. The aim of this study was to determine the incidence of CYP21D CAH in the state of Goias, Brazil, based on the 2005 results of government-funded mandatory screening. Of the live births during this period, 92.95% were screened by heel-prick capillary 17α-hydroxyprogesterone (17-OHP). Of these, 82,343 were normal, 28 were at high risk for CAH and 232 at low risk for CAH. Eight cases, all from the high risk group, were confirmed. Eight asymptomatic children at 6-18 months of age still have high 17-OHP levels and await diagnostic definition. Based on the number of confirmed CYP21D CAH cases among the 82,603 screened, the estimated annual incidence of the disease was 1:10,325, lower than the previously reported rate in Brazil.
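The reported 1:10,325 figure follows directly from the counts in the abstract; a quick arithmetic check (the helper name is illustrative):

```python
def incidence_ratio(cases: int, screened: int) -> float:
    """Incidence expressed as 1 : N, i.e. one confirmed case
    per N individuals screened."""
    return screened / cases

# 8 confirmed CYP21D CAH cases among 82,603 screened newborns
n = incidence_ratio(8, 82_603)
print(f"estimated incidence: 1:{n:,.0f}")  # prints "estimated incidence: 1:10,325"
```

82,603 / 8 = 10,325.375, which rounds to the 1:10,325 incidence quoted in the abstract.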
Abstract:
Performance indicators in the public sector have often been criticised for being inadequate and not conducive to analysing efficiency. The main objective of this study is to use data envelopment analysis (DEA) to examine the relative efficiency of Australian universities. Three performance models are developed, namely, overall performance, performance on delivery of educational services, and performance on fee-paying enrolments. The findings based on 1995 data show that the university sector was performing well on technical and scale efficiency but there was room for improving performance on fee-paying enrolments. There were also small slacks in input utilisation. More universities were operating at decreasing returns to scale, indicating a potential to downsize. DEA helps in identifying the reference sets for inefficient institutions and objectively determines productivity improvements. As such, it can be a valuable benchmarking tool for educational administrators and assist in more efficient allocation of scarce resources. In the absence of market mechanisms to price educational outputs, which renders traditional production or cost functions inappropriate, universities are particularly obliged to seek alternative efficiency analysis methods such as DEA.
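DEA computes each institution's efficiency score by solving one linear program per decision-making unit (DMU). The abstract does not specify which DEA variant was used, so the input-oriented CCR model below, sketched with `scipy.optimize.linprog`, is an assumption for illustration:

```python
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, unit):
    """Input-oriented CCR efficiency score of one DMU.

    inputs[i][j]:  input i consumed by DMU j
    outputs[r][j]: output r produced by DMU j

    Solves:  min theta
             s.t. sum_j lam_j * x_ij <= theta * x_i,unit  (each input)
                  sum_j lam_j * y_rj >= y_r,unit          (each output)
                  lam_j >= 0
    Decision vector is [theta, lam_1, ..., lam_n]."""
    n = len(inputs[0])
    c = [1.0] + [0.0] * n                     # minimize theta
    A_ub, b_ub = [], []
    for row in inputs:                        # sum lam*x - theta*x0 <= 0
        A_ub.append([-row[unit]] + list(row))
        b_ub.append(0.0)
    for row in outputs:                       # -sum lam*y <= -y0
        A_ub.append([0.0] + [-v for v in row])
        b_ub.append(-row[unit])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun                            # theta in (0, 1]; 1 = efficient
```

With one input and one output for two universities, `inputs=[[1, 2]]`, `outputs=[[1, 1]]`, the first DMU scores 1.0 (efficient) and the second 0.5: it produces the same output with twice the input, so its efficient peer defines a 50% input-reduction target, which is exactly the "reference set" idea the abstract mentions.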
Abstract:
The prognostic significance of positive peritoneal cytology in endometrial carcinoma has led to the incorporation of peritoneal cytology into the current FIGO staging system. While cytology was shown to be prognostically relevant in patients with stage II and III disease, conflicting data exist about its significance in patients who would have been stage I but were classified as stage III solely on the basis of positive peritoneal cytology (clinical stage I). Analysis was based on the data of 369 consecutive patients with clinical stage I endometrioid adenocarcinoma of the endometrium. Standard treatment consisted of a total abdominal hysterectomy and bilateral salpingo-oophorectomy, with or without pelvic lymph node dissection. Peritoneal cytology was obtained at laparotomy by peritoneal washing of the pouch of Douglas and was considered positive if malignant cells could be detected, regardless of the number of malignant cells present. Disease-free survival (DFS) was the primary statistical endpoint. Positive peritoneal cytology was found in 13/369 (3.5%) patients. The median follow-up was 29 months and 15 recurrences occurred. Peritoneal cytology was independent of the depth of myometrial invasion and the grade of tumour differentiation. Patients with negative washings had a DFS of 96% at 36 months compared with 67% for patients with positive washings (log-rank P < 0.001). The presence of positive peritoneal cytology in patients with clinical stage I endometrioid adenocarcinoma of the endometrium is considered an adverse prognostic factor. © 2001 Elsevier Science Ireland Ltd. All rights reserved.
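DFS percentages at a fixed time point with censored follow-up, like those quoted above, are typically read off a Kaplan-Meier curve. The abstract does not name its estimator, so this product-limit sketch is an assumption:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier (product-limit) survival estimator.

    times:  follow-up time for each patient
    events: 1 = event observed (e.g. recurrence), 0 = censored
    Returns a list of (time, survival_probability) pairs, where each
    step multiplies in (1 - d_t / n_t): d_t events among n_t at risk."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        surv *= 1.0 - deaths / n_at_risk
        # remove everyone (events and censored) with this time from the risk set
        while i < len(data) and data[i][0] == t:
            n_at_risk -= 1
            i += 1
        curve.append((t, surv))
    return curve
```

Reading the curve at 36 months for the positive-washings and negative-washings groups would yield the 67% and 96% figures; comparing the two whole curves is what the quoted log-rank test does.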
Abstract:
Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics, 44: 2, 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure. Naive implementation of the procedure can lead to computationally inefficient results. To reduce the computational cost a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
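A univariate illustration of the idea: the E-step below uses exact bin probabilities (Gaussian CDF differences), while the M-step crudely approximates within-bin moments by bin midpoints. The paper evaluates those integrals properly, so treat this as a much-simplified sketch, with all names illustrative:

```python
import math

def norm_cdf(x, mu, sigma):
    """Standard normal CDF evaluated for N(mu, sigma**2)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def em_binned_1d(edges, counts, mu, sigma, pi, iters=50):
    """EM for a 2-component Gaussian mixture fit to binned 1-D counts.

    edges:  bin boundaries, length B+1;  counts: per-bin counts, length B.
    mu, sigma, pi: initial per-component parameters (lists of length 2).
    """
    mids = [(a + b) / 2.0 for a, b in zip(edges, edges[1:])]
    n = sum(counts)
    for _ in range(iters):
        # E-step: responsibility of each component for each bin,
        # using exact bin probabilities P(a < X < b) = Phi(b) - Phi(a).
        resp = []
        for a, b in zip(edges, edges[1:]):
            p = [pi[k] * (norm_cdf(b, mu[k], sigma[k]) -
                          norm_cdf(a, mu[k], sigma[k])) for k in range(2)]
            s = sum(p) or 1e-300
            resp.append([pk / s for pk in p])
        # M-step: update weights, means, variances; within-bin moments
        # are approximated by the bin midpoints (crude stand-in for the
        # per-bin integrals evaluated in the paper).
        for k in range(2):
            wk = sum(c * r[k] for c, r in zip(counts, resp))
            pi[k] = wk / n
            mu[k] = sum(c * r[k] * m for c, r, m in zip(counts, resp, mids)) / wk
            var = sum(c * r[k] * (m - mu[k]) ** 2
                      for c, r, m in zip(counts, resp, mids)) / wk
            sigma[k] = max(math.sqrt(var), 1e-6)
    return mu, sigma, pi
```

On bimodal binned counts the two component means converge toward the two modes, which mirrors the paper's finding that with enough bins the underlying density can be recovered nearly as well as from unbinned data.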
Abstract:
Wireless medical systems comprise four stages: the medical device, data transport, data collection and data evaluation. Whereas the performance of the first stage is highly regulated, the others are not. This paper concentrates on the data transport stage and argues that it is necessary to establish standardized tests to be used by medical device manufacturers to provide comparable results concerning the communication performance of the wireless networks used to transport medical data. In addition, it suggests test parameters and procedures to be used to produce comparable communication performance results.
Abstract:
Business Intelligence (BI) is an emergent area of the Decision Support Systems (DSS) discipline, and its evolution over recent years has been considerable. Over the same period, the Data Mining (DM) field has seen substantial growth and consolidation. DM is used with success in BI systems, but a true integration of DM with BI is still lacking; as a result, some BI systems fail to use DM effectively. An architecture intended to lead to an effective use of DM in BI is presented.
Abstract:
This paper deals with the establishment of a methodology for characterizing the electric power profiles of medium voltage (MV) consumers. The characterization is supported by the knowledge discovery in databases (KDD) process. Data mining techniques are used to obtain typical load profiles of MV customers and specific knowledge of their consumption habits. In order to form the different customer classes and to find a set of representative consumption patterns, a hierarchical clustering algorithm and a clustering ensemble combination approach (WEACS) are used. Taking into account the typical consumption profile of the class to which each customer belongs, new tariff options were defined and new energy price coefficients were proposed. Finally, based on the results obtained, the consequences these will have for the interaction between customers and electric power suppliers are analyzed.
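A toy version of the hierarchical clustering step on daily load profiles; the specific algorithm and linkage used in the paper are not stated, and the ensemble combination (WEACS) is beyond a sketch, so the naive average-linkage routine and all names below are illustrative assumptions:

```python
def euclidean(a, b):
    """Euclidean distance between two load profiles (equal-length vectors)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def agglomerate(profiles, n_clusters):
    """Naive average-linkage agglomerative clustering.

    profiles: list of load-profile vectors (e.g. hourly kW readings).
    Repeatedly merges the two clusters with the smallest average
    pairwise distance until n_clusters remain. Returns clusters as
    lists of profile indices. O(n^3): fine for a sketch only.
    """
    clusters = [[i] for i in range(len(profiles))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = sum(euclidean(profiles[a], profiles[b])
                        for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]   # merge closest pair
        del clusters[j]
    return clusters
```

Each resulting cluster's mean profile would then serve as the "typical load profile" of that customer class, the basis for the tariff options described above.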
Abstract:
Present-day power system operation produces huge volumes of data that are still treated in a very limited way. Knowledge discovery and machine learning can make use of these data, resulting in relevant knowledge with a very positive impact. In the context of competitive electricity markets these data are of even higher value, making clear the trend toward a more relevant application of data mining techniques in power systems. This paper presents two cases based on real data, showing the importance of data mining for supporting demand response and for supporting player strategic behavior.
Abstract:
The emergence of new business models, namely the establishment of partnerships between organizations and the possibility for companies to add existing web data, especially from the semantic web, to their own information, has drawn attention to some problems existing in databases, particularly those related to data quality. Poor data can result in a loss of competitiveness for the organizations holding them, and may even lead to their disappearance, since many of their decision-making processes are based on these data. For this reason, data cleaning is essential. Current approaches to solving these problems are closely tied to database schemas and specific domains. For data cleaning to be usable across different repositories, computer systems must be able to understand these data, i.e., an associated semantics is needed. The solution presented in this paper uses ontologies: (i) for the specification of data cleaning operations and (ii) as a way of solving the semantic heterogeneity problems of data stored in different sources. With data cleaning operations defined at a conceptual level, and with mappings between domain ontologies and an ontology derived from a database, the operations may be instantiated and proposed to the expert/specialist to be executed over that database, thus enabling their interoperability.
Abstract:
Doctoral thesis, Geography (Spatial Planning), 25 November 2013, Universidade dos Açores.