7 results for Data-driven Methods
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Objective. To define inactive disease (ID) and clinical remission (CR) and to delineate variables that can be used to measure ID/CR in childhood-onset systemic lupus erythematosus (cSLE). Methods. Delphi questionnaires were sent to an international group of pediatric rheumatologists. Respondents provided information about variables to be used in future algorithms to measure ID/CR. The usefulness of these variables was assessed in 35 children with ID and 31 children with minimally active lupus (MAL). Results. While ID reflects cSLE status at a specific point in time, CR requires the presence of ID for >6 months and considers treatment. There was consensus that patients in ID/CR can have <2 mild nonlimiting symptoms (i.e., fatigue, arthralgia, headaches, or myalgia) but not Raynaud's phenomenon, chest pain, or objective physical signs of cSLE; antinuclear antibody positivity and erythrocyte sedimentation rate elevation can be present. Complete blood count, renal function testing, and complement C3 must all be within the normal range. Based on consensus, only damage-related laboratory or clinical findings of cSLE are permissible with ID. The above parameters were suitable for differentiating children with ID/CR from those with MAL (area under the receiver operating characteristic curve >0.85). Disease activity scores, with or without the physician global assessment of disease activity and patient symptoms, were well suited to differentiate children with ID from those with MAL. Conclusion. Consensus has been reached on common definitions of ID and CR in cSLE and on the relevant characteristics of patients in ID/CR. Further studies must assess the usefulness of the data-driven candidate criteria for ID in cSLE.
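The validation step above reports discrimination between ID/CR and MAL as an area under the receiver operating characteristic curve above 0.85. A minimal sketch of how such an AUC is computed follows; apart from the group sizes, the score distributions are synthetic assumptions, and the scikit-learn call is a generic illustration, not the study's actual analysis.

```python
# Hedged sketch: computing a discrimination AUC like the >0.85 reported
# above. Scores are synthetic; only the group sizes (35 ID, 31 MAL)
# come from the abstract.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical composite disease-activity scores: lower for children
# with inactive disease (ID), higher for minimally active lupus (MAL).
id_scores = rng.normal(loc=1.0, scale=0.8, size=35)
mal_scores = rng.normal(loc=4.0, scale=1.2, size=31)

y_true = np.concatenate([np.zeros(35), np.ones(31)])  # 0 = ID, 1 = MAL
y_score = np.concatenate([id_scores, mal_scores])

print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
```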
Abstract:
Context. The ESO public survey VISTA Variables in the Via Lactea (VVV) started in 2010. VVV targets 562 sq. deg in the Galactic bulge and an adjacent plane region and is expected to run for about five years. Aims. We describe the progress of the survey observations in the first observing season, the observing strategy, and the quality of the data obtained. Methods. The observations are carried out on the 4-m VISTA telescope in the ZYJHKs filters. In addition to the multi-band imaging, the variability monitoring campaign in the Ks filter has started. Data reduction is carried out using the pipeline at the Cambridge Astronomical Survey Unit. The photometric and astrometric calibration is performed via the numerous 2MASS sources observed in each pointing. Results. The first data release contains the aperture photometry and astrometric catalogues for 348 individual pointings in the ZYJHKs filters taken in the 2010 observing season. The typical image quality is ~0.9-1.0 arcsec. The stringent photometric and image quality requirements of the survey are satisfied in 100% of the JHKs images in the disk area and 90% of the JHKs images in the bulge area. The completeness in the Z and Y images is 84% in the disk and 40% in the bulge. The first season catalogues contain 1.28 × 10^8 stellar sources in the bulge and 1.68 × 10^8 in the disk area detected in at least one of the photometric bands. The combined, multi-band catalogues contain more than 1.63 × 10^8 stellar sources. About 10% of these are double detections because of overlapping adjacent pointings. These overlapping multiple detections are used to characterise the quality of the data. The images in the JHKs bands typically extend ~4 mag deeper than 2MASS. The magnitude limit and photometric quality depend strongly on crowding in the inner Galactic regions. The astrometry has an rms of ~35-175 mas for Ks = 15-18 mag. Conclusions. The VVV Survey data products offer a unique dataset to map the stellar populations in the Galactic bulge and the adjacent plane, and provide an exciting new tool for the study of the structure, content, and star-formation history of our Galaxy, as well as for investigations of the newly discovered star clusters, star-forming regions in the disk, high proper motion stars, asteroids, planetary nebulae, and other interesting objects.
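For context on the calibration step in the Methods, the sketch below shows one generic way a photometric zero point can be tied to 2MASS: cross-matched stars give magnitude offsets whose clipped median is the zero point. This is an illustration under simplifying assumptions, not the CASU pipeline; all numbers are synthetic.

```python
# Illustrative sketch (not the CASU pipeline): a photometric zero point
# for one pointing, from stars cross-matched against 2MASS.
import numpy as np

def zero_point(instr_mag, cat_mag, clip_sigma=3.0):
    """Clipped-median zero point from matched stars.

    instr_mag : instrumental magnitudes measured on the image
    cat_mag   : 2MASS catalogue magnitudes of the same stars
    """
    offsets = cat_mag - instr_mag
    for _ in range(3):  # iterative sigma clipping of outliers
        med, std = np.median(offsets), np.std(offsets)
        offsets = offsets[np.abs(offsets - med) < clip_sigma * std]
    return np.median(offsets)

# Toy data: a true zero point of 24.3 mag, noise, and one outlier
# (e.g. a blended or variable star).
rng = np.random.default_rng(1)
cat = rng.uniform(12.0, 15.0, 50)             # 2MASS Ks magnitudes
instr = cat - 24.3 + rng.normal(0, 0.03, 50)  # instrumental magnitudes
instr[0] += 1.5
print(f"ZP ~ {zero_point(instr, cat):.2f} mag")
```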
Abstract:
In this work, different methods to estimate thin film residual stresses from instrumented indentation data were analyzed. This study considered procedures proposed in the literature, as well as a modification of one of these methods and a new approach based on the effect of residual stress on the hardness value calculated via the Oliver and Pharr method. The analysis of these methods was centered on an axisymmetric two-dimensional finite element model, which was developed to simulate instrumented indentation testing of thin ceramic films deposited onto hard steel substrates. Simulations were conducted varying the level of film residual stress, film strain hardening exponent, film yield strength, and film Poisson's ratio. Different ratios of maximum penetration depth h_max over film thickness t were also considered, including h_max/t = 0.04, for which the contribution of the substrate to the mechanical response of the system is not significant. Residual stresses were then calculated following the procedures mentioned above and compared with the values used as input in the numerical simulations. In general, the results indicate that the difference that each method provides with respect to the input values depends on the conditions studied. The method by Suresh and Giannakopoulos consistently overestimated the values when stresses were compressive. The method provided by Wang et al. has shown less dependence on h_max/t than the others.
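The hardness mentioned above is the standard Oliver and Pharr evaluation: contact depth is estimated from the unloading stiffness, and hardness is peak load over projected contact area. A minimal sketch follows, assuming an ideal Berkovich area function A = 24.5 h_c^2 and the usual geometry factor epsilon = 0.75; the paper's actual area-function calibration is not reproduced here, and the numbers are toy values.

```python
# Sketch of the Oliver-Pharr hardness evaluation, assuming an ideal
# Berkovich tip (projected area A = 24.5 * h_c**2) and epsilon = 0.75.
def oliver_pharr_hardness(p_max, h_max, stiffness, epsilon=0.75):
    """Hardness from the unloading segment of an indentation test.

    p_max     : peak load (mN)
    h_max     : penetration depth at peak load (nm)
    stiffness : contact stiffness S = dP/dh at unloading onset (mN/nm)
    """
    h_c = h_max - epsilon * p_max / stiffness  # contact depth (nm)
    area = 24.5 * h_c**2                       # projected area (nm^2)
    return p_max / area                        # hardness (mN/nm^2)

# Toy values roughly in the range of a hard ceramic film:
h = oliver_pharr_hardness(p_max=10.0, h_max=200.0, stiffness=0.25)
print(f"H ~ {h * 1e6:.1f} GPa")  # 1 mN/nm^2 = 1e6 GPa
```

A residual stress shifts the measured contact area, and hence the apparent hardness computed this way, which is the effect the paper's new approach exploits.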
Abstract:
Background: The CUPID (Cultural and Psychosocial Influences on Disability) study was established to explore the hypothesis that common musculoskeletal disorders (MSDs) and associated disability are importantly influenced by culturally determined health beliefs and expectations. This paper describes the methods of data collection and various characteristics of the study sample. Methods/Principal Findings: A standardised questionnaire covering musculoskeletal symptoms, disability, and potential risk factors was used to collect information from 47 samples of nurses, office workers, and other (mostly manual) workers in 18 countries from six continents. In addition, local investigators provided data on economic aspects of employment for each occupational group. Participation exceeded 80% in 33 of the 47 occupational groups, and after pre-specified exclusions, analysis was based on 12,426 subjects (92 to 1,018 per occupational group). As expected, there was high usage of computer keyboards by office workers, while nurses had the highest prevalence of heavy manual lifting in all but one country. There was substantial heterogeneity between occupational groups in economic and psychosocial aspects of work; three- to fivefold variation in awareness of someone outside work with musculoskeletal pain; and more than tenfold variation in the prevalence of adverse health beliefs about back and arm pain, and in awareness of terms such as "repetitive strain injury" (RSI). Conclusions/Significance: The large differences in psychosocial risk factors (including knowledge and beliefs about MSDs) between occupational groups should allow the study hypothesis to be addressed effectively.
Abstract:
In this paper we discuss the detection of glucose and triglycerides using information visualization methods to process impedance spectroscopy data. The sensing units contained either lipase or glucose oxidase immobilized in layer-by-layer (LbL) films deposited onto interdigitated electrodes. The optimization consisted of identifying which part of the electrical response and which combination of sensing units yielded the best distinguishing ability. It is shown that complete separation can be obtained for a range of concentrations of glucose and triglyceride when the interactive document map (IDMAP) technique is used to project the data onto a two-dimensional plot. Most importantly, the optimization procedure can be extended to other types of biosensors, thus increasing the versatility of analysis provided by tailored molecular architectures exploited with various detection principles.
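IDMAP itself is not available in mainstream libraries, so the sketch below uses classical multidimensional scaling (MDS) as a stand-in to illustrate the underlying idea: each impedance spectrum is treated as a high-dimensional point and projected onto a two-dimensional plane, where different analyte concentrations should form separable clusters. The spectra, concentrations, and parameters are synthetic assumptions.

```python
# Stand-in for the IDMAP projection: classical MDS on synthetic
# capacitance spectra for three hypothetical glucose concentrations.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)

spectra, labels = [], []
for conc in (0.0, 5.0, 10.0):  # mmol/L, illustrative only
    # A toy dispersion curve over 50 frequencies, shifted by concentration.
    base = 1.0 / (1.0 + np.linspace(0.1, 10.0, 50) * (1.0 + 0.1 * conc))
    for _ in range(5):  # five repeat measurements per concentration
        spectra.append(base + rng.normal(0.0, 0.002, 50))
        labels.append(conc)

xy = MDS(n_components=2, random_state=0).fit_transform(np.array(spectra))
for (x, y), c in zip(xy, labels):
    print(f"conc={c:4.1f} mmol/L -> ({x:+.4f}, {y:+.4f})")
```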
Abstract:
Background: With the development of DNA hybridization microarray technologies, it is now possible to simultaneously assess the expression levels of thousands to tens of thousands of genes. Quantitative comparison of microarrays uncovers distinct patterns of gene expression, which define different cellular phenotypes or cellular responses to drugs. Due to technical biases, normalization of the intensity levels is a prerequisite to performing further statistical analyses, so choosing a suitable normalization approach can be critical and deserves judicious consideration. Results: Here, we considered three commonly used normalization approaches, namely Loess, Splines, and Wavelets, and two non-parametric regression methods that had not yet been used for normalization, namely Kernel smoothing and Support Vector Regression. The results obtained were compared using artificial microarray data and benchmark studies. They indicate that Support Vector Regression is the most robust to outliers and that Kernel smoothing is the worst normalization technique, while no practical differences were observed between Loess, Splines, and Wavelets. Conclusion: In view of these results, Support Vector Regression is favored for microarray normalization because of its robustness in estimating the normalization curve.
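A common way to use a regression method such as SVR for two-channel microarray normalization is on the MA representation: fit the log-ratio M as a smooth function of the average log-intensity A and subtract the fitted trend. The sketch below illustrates that idea with synthetic data; it is not the authors' exact pipeline, and the SVR hyperparameters are arbitrary assumptions. The epsilon-insensitive loss is what lends the fit its robustness: points far from the trend (outliers, differentially expressed genes) contribute only linearly.

```python
# Hedged sketch of SVR-based normalization on a synthetic MA-plot:
# estimate the intensity-dependent dye bias and subtract it.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)

a = rng.uniform(6.0, 14.0, 2000)       # A: mean log2 intensity per spot
bias = 0.4 * np.sin(a / 2.0)           # smooth technical (dye) bias
m = bias + rng.normal(0.0, 0.3, 2000)  # M: log2 ratio, mostly noise

svr = SVR(kernel="rbf", C=1.0, epsilon=0.1)  # arbitrary hyperparameters
m_fit = svr.fit(a.reshape(-1, 1), m).predict(a.reshape(-1, 1))
m_norm = m - m_fit                     # normalized log ratios

print(f"mean |M| before: {np.mean(np.abs(m)):.3f}")
print(f"mean |M| after : {np.mean(np.abs(m_norm)):.3f}")
```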
Abstract:
With the increasing production of information from e-government initiatives, there is also a need to transform a large volume of unstructured data into useful information for society. All this information should be easily accessible and made available in a meaningful and effective way in order to achieve semantic interoperability in electronic government services, a challenge to be pursued by governments around the world. Our aim is to discuss the context of e-Government Big Data and to present a framework to promote semantic interoperability through the automatic generation of ontologies from unstructured information found on the Internet. We propose the use of fuzzy mechanisms to deal with natural language terms and present related work in this area. The results achieved in this study comprise the architectural definition and the major components and requirements of the proposed framework. With this, it is possible to take advantage of the large volume of information generated by e-Government initiatives and use it to benefit society.
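As a toy illustration of the fuzzy treatment of natural language terms that the framework proposes, the sketch below scores free-text terms against candidate ontology concept names with the standard library's SequenceMatcher. The concept names, threshold, and matching strategy are hypothetical placeholders, not components of the actual framework.

```python
# Hypothetical sketch: fuzzy matching of free-text terms to ontology
# concept names, using only the Python standard library.
from difflib import SequenceMatcher

ONTOLOGY_CONCEPTS = ["Citizen", "TaxPayment", "HealthService",
                     "DrivingLicense", "PropertyRegistration"]

def match_concept(term, threshold=0.6):
    """Return (best concept, similarity in [0, 1]), or (None, score)."""
    scored = [(c, SequenceMatcher(None, term.lower(), c.lower()).ratio())
              for c in ONTOLOGY_CONCEPTS]
    best, score = max(scored, key=lambda cs: cs[1])
    return (best, score) if score >= threshold else (None, score)

for term in ("tax payments", "drivers licence", "weather"):
    print(f"{term!r} -> {match_concept(term)}")
```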