14 results for Electronic data interchange

in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo


Relevance:

80.00%

Publisher:

Abstract:

Introduction: This paper examines the various factors that contribute to sleep alterations during the peri- and post-climacteric periods and thereby pose a significant risk to women's quality of life. Among the probable causes of insomnia or sleep disorders associated with the climacteric, the most prominent are vasomotor symptoms, depressive states and respiratory distress during sleep, such as sleep apnea, along with chronic pain, although psychosocial factors related to the climacteric have a major influence on this clinical picture. Method: The bibliographic review was carried out using several electronic databases, namely Cochrane, Medline, Embase, BNI Plus, Biological Abstracts, PsycINFO, Web of Science, SIGLE, Dissertation Abstracts and ZETOC, covering publications in English, Spanish and Portuguese. The key terms used were: sleep; REM sleep; slow wave sleep; polysomnography; electroencephalogram; sleep disturbances; disturbances of sleep onset and maintenance; excessive somnolence disturbances; climacteric; menopause; depression; neurobiology; biologic models; circadian rhythm; mental health; and epidemiology. Case studies and letters to the editor were excluded. The summaries of the studies identified in the databases were assessed, and the data were analyzed separately according to whether subjective or objective criteria had been used for data collection. Results: The climacteric transition constitutes a period of major risk for the development of depressive, vasomotor and insomnia symptoms, although these are not caused solely by hypoestrogenism. The diagnostic methods used in the study of sleep disorders range from subjective assessment, by means of responses to specific questionnaires, to the objective analysis of daytime and nocturnal actigraphic or polysomnographic records. Full-night polysomnographic studies performed in the laboratory are the gold standard for the diagnosis of sleep disorders. Studies point to a high prevalence of sleep disorders in the climacteric, especially insomnia, apnea and periodic leg movements, and to the fact that this phase of life brings a decrease in sleep quality. Women in the peri- and post-climacteric periods show higher sleep latency and greater difficulty in maintaining sleep, and report being less satisfied with its quality, even when compared with non-climacteric women. With the exception of vasomotor symptoms, the other climacteric complaints, such as mood disturbances, altered libido, cognitive deficits, joint pain and sleep disorders, are markedly associated with psychosocial factors, lifestyle and, especially, women's perception of what the climacteric means in their lives. Conclusion: The analysis of the available studies revealed a proneness to deterioration of quality of life in climacteric women, markedly in the domains of sleep disturbance, depressed mood and anxiety, which should not be attributed simply to the climacteric. Professionals need to assess such pathologies as complex phenomena, and the literature lacks studies contemplating these dimensions.

Relevance:

30.00%

Publisher:

Abstract:

The attributes describing a data set may often be arranged in meaningful subsets, each of which corresponds to a different aspect of the data. An unsupervised algorithm (SCAD) that simultaneously performs fuzzy clustering and aspect weighting was proposed in the literature. However, SCAD may fail and halt under certain conditions. To fix this problem, its steps are modified and then reordered to reduce the number of parameters the user must set. In this paper we prove that each step of the resulting algorithm, named ASCAD, globally minimizes its cost function with respect to the argument being optimized. Asymptotic analysis shows that ASCAD has the same time complexity as fuzzy c-means. A hard version of the algorithm and a novel validity criterion that considers aspect weights in order to estimate the number of clusters are also described. The proposed method is assessed over several artificial and real data sets.
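
For intuition, the alternating structure that SCAD-style algorithms share with fuzzy c-means can be sketched as follows. This is a simplified illustration, not the published ASCAD algorithm; in particular, the aspect-weight update shown is only one plausible choice.

```python
import numpy as np

def fuzzy_aspect_clustering(X, c, m=2.0, n_iter=50, seed=0):
    """Simplified alternating optimization in the spirit of SCAD/ASCAD:
    fuzzy memberships, prototypes, and per-cluster feature (aspect)
    weights are updated in turn. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    V = X[rng.choice(n, c, replace=False)]      # cluster prototypes
    W = np.full((c, d), 1.0 / d)                # aspect weights per cluster

    for _ in range(n_iter):
        # Weighted squared distances of every point to every prototype.
        D = np.stack([((X - V[k]) ** 2 * W[k]).sum(axis=1) for k in range(c)], axis=1)
        D = np.maximum(D, 1e-12)
        # Membership update (the standard fuzzy c-means formula).
        U = 1.0 / (D ** (1.0 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
        Um = U ** m
        # Prototype update: membership-weighted mean of the data.
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Weight update: inverse per-feature dispersion, normalized.
        for k in range(c):
            disp = (Um[:, k:k + 1] * (X - V[k]) ** 2).sum(axis=0) + 1e-12
            W[k] = (1.0 / disp) / (1.0 / disp).sum()
    return U, V, W
```

Each of the three updates minimizes the cost with the other arguments held fixed, which is the per-step global optimality property the paper proves for ASCAD.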

Relevance:

30.00%

Publisher:

Abstract:

The electronic stopping cross section (SCS) of Al2O3 for proton beams is studied both experimentally and theoretically. The measurements are made for proton energies from 40 keV up to 1 MeV, covering the stopping maximum, using two experimental methods: the transmission technique at low energies (~40-175 keV) and Rutherford backscattering at high energies (~190-1000 keV). These new data reveal an increase of 16% in the SCS around the stopping maximum with respect to older measurements. The theoretical study includes electronic stopping power calculations based on the dielectric formalism and on the transport cross section (TCS) model to describe the electron excitations of Al2O3. The non-linear TCS calculations of the SCS for the valence electrons, together with the generalized oscillator strength (GOS) model for the core electrons, compare well with the experimental data over the whole energy range considered.
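
For context, the electronic stopping power in the dielectric formalism has the following standard textbook form for a projectile of charge $Z_1 e$ and velocity $v$; this is general background, not a formula quoted from the paper:

$$ S_e(v) = \frac{2 Z_1^2 e^2}{\pi v^2} \int_0^{\infty} \frac{dk}{k} \int_0^{kv} \omega \, \mathrm{Im}\!\left[\frac{-1}{\varepsilon(k,\omega)}\right] d\omega $$

Here $\varepsilon(k,\omega)$ is the dielectric function of the target; the energy-loss function $\mathrm{Im}[-1/\varepsilon]$ is where the TCS description of the valence electrons and the GOS description of the core electrons enter.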

Relevance:

30.00%

Publisher:

Abstract:

Identification of ethanolic wood extracts using electronic absorption spectra and multivariate analysis. The application of multivariate analysis to spectrophotometric (UV) data was explored for distinguishing extracts of the woods commonly used in the manufacture of casks for aging cachaça (oak, cabreúva-parda, jatobá, amendoim and canela-sassafrás). Absorbances close to 280 nm were more strongly correlated with the oak and jatobá woods, whereas absorbances near 230 nm were more correlated with canela-sassafrás and cabreúva-parda. A comparison between the spectrophotometric model and a model based on chromatographic (HPLC-DAD) data was carried out. The spectrophotometric model explained more of the data variance (PC1 + PC2 = 91%), exhibiting potential as a routine method for checking aged spirits.
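
A minimal sketch of the PCA step applied to UV spectra, with synthetic data standing in for the measured absorbances (the wavelength grid and sample count are assumed, not taken from the paper):

```python
import numpy as np

# Illustrative PCA on UV absorbance spectra (synthetic data; the paper's
# actual spectra and wood classes are not reproduced here).
rng = np.random.default_rng(1)
wavelengths = np.arange(220, 321)             # 220-320 nm grid (assumed)
spectra = rng.random((30, wavelengths.size))  # 30 extracts x absorbances

# Mean-center, then project onto the leading principal components via SVD.
Xc = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]                     # PC1/PC2 scores per extract
explained = (s ** 2 / (s ** 2).sum())[:2]     # variance explained by PC1, PC2

print(f"PC1 + PC2 explain {100 * explained.sum():.1f}% of the variance")
# The loadings (rows of Vt) show which wavelengths, e.g. near 280 nm,
# drive the separation between wood types in the score plot.
```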

Relevance:

30.00%

Publisher:

Abstract:

Surveillance Levels (SLs) are categories for medical patients (used in Brazil) that represent different types of medical recommendation. SLs are defined according to risk factors and the medical and developmental history of patients, and each SL is associated with specific educational and clinical measures. The present paper proposes and verifies a computer-aided approach for the automatic recommendation of SLs, based on the classification of information from patient electronic records. For this purpose, a software architecture composed of three layers was developed, including a classification layer formed by a linguistic module and machine learning classification modules. The classification layer allows different classification methods to be used, including methods that operate on the preprocessed, normalized language data produced by the linguistic module. We report the verification and validation of the software architecture in a Brazilian pediatric healthcare institution. The results indicate that the selection of attributes can have a great effect on the performance of the system. Nonetheless, the automatic recommendation of SLs can still benefit from improved processing when the linguistic module is applied prior to classification. Results from our efforts can be applied to different types of medical systems, and systems supported by the framework presented in this paper may be used by healthcare and governmental institutions to improve healthcare services, in terms of establishing preventive measures and alerting authorities to the possibility of an epidemic.
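
A minimal sketch of the two-stage design the architecture describes (linguistic normalization followed by a pluggable classifier), using invented toy records; the institution's actual data, linguistic module and label set are not public, so everything here is illustrative:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy patient-record snippets and invented surveillance levels.
records = [
    "preterm birth, low weight, respiratory distress",
    "routine follow-up, adequate growth and development",
    "recurrent infections, delayed milestones",
    "healthy term infant, vaccinations up to date",
]
levels = ["SL3", "SL1", "SL3", "SL1"]

# The linguistic module is approximated here by TF-IDF normalization;
# the final step is a pluggable classifier, as the layered design allows.
model = Pipeline([
    ("features", TfidfVectorizer(lowercase=True)),
    ("classifier", MultinomialNB()),
])
model.fit(records, levels)
print(model.predict(["low birth weight and respiratory distress"]))
```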

Relevance:

30.00%

Publisher:

Abstract:

The Primary Care Information System (SIAB) concentrates basic healthcare information from all regions of Brazil. The information is collected by primary care teams through a paper-based procedure that degrades the quality of the information provided to the healthcare authorities and slows down decision making. To overcome these problems, we propose a new data-gathering application in which primary care teams collect family data using a mobile device connected to a 3G network and equipped with GPS. A prototype was developed in which a digital version of one SIAB form is made available on the mobile device. The prototype was tested in a basic healthcare unit located in a suburb of Sao Paulo. The results obtained so far show that the proposed process is a better alternative for data collection in primary care, in terms of both data quality and faster delivery to healthcare authorities.

Relevance:

30.00%

Publisher:

Abstract:

Interoperability is a crucial issue for electronic government, given the need for agencies' information systems to be fully integrated and able to exchange data seamlessly. One way to achieve this is by establishing a government interoperability framework (GIF). However, this is a difficult task, owing not only to technological issues but also to other factors. This research is expected to contribute to the identification of the barriers to the adoption of interoperability standards for electronic government. The article presents preliminary findings from a case study of the Brazilian government framework (e-PING), based on the analysis of documents and face-to-face interviews. It points out aspects that may influence the establishment of these standards and thereby become barriers to their adoption.

Relevance:

30.00%

Publisher:

Abstract:

Traditional supervised data classification considers only the physical features (e.g., distance or similarity) of the input data; here, this type of learning is called low-level classification. The human (animal) brain, on the other hand, performs both low and high orders of learning, and it readily identifies patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also pattern formation is referred to here as high-level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low-level term can be implemented by any classification technique, while the high-level term is realized by extracting features of an underlying network constructed from the input data. Thus, the former classifies test instances by their physical features or class topologies, while the latter measures the compliance of test instances with the pattern formation of the data. Our study shows that the proposed technique not only can classify according to pattern formation but is also able to improve the performance of traditional classification techniques. Furthermore, as the complexity of the class configuration increases, for example through greater mixture among classes, a larger weight on the high-level term is required for correct classification. This confirms that high-level classification has special importance in complex classification settings. Finally, we show how the proposed technique can be employed in a real-world application, where it is capable of identifying variations and distortions of handwritten digit images, yielding an improvement in the overall pattern recognition rate.
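
The combination of the two terms can be illustrated as a convex mixture of per-class scores; this is a common way to blend such terms, and the paper's exact formulation may differ in detail:

```python
import numpy as np

def hybrid_membership(low_level, high_level, lam=0.3):
    """Combine low- and high-level class memberships for one test instance.

    low_level / high_level: per-class scores in [0, 1], e.g. from a
    conventional classifier and from network-pattern compliance measures.
    lam: mixing coefficient; larger values emphasize pattern formation,
    which matters more as the mixture among classes increases.
    """
    low = np.asarray(low_level, dtype=float)
    high = np.asarray(high_level, dtype=float)
    scores = (1.0 - lam) * low + lam * high
    return scores / scores.sum()   # normalized class membership

# Example: the low-level classifier favors class 0, but the instance
# conforms better to class 1's network pattern; lam shifts the decision.
print(hybrid_membership([0.7, 0.3], [0.2, 0.8], lam=0.5))
```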

Relevance:

30.00%

Publisher:

Abstract:

2-Methylisoborneol (MIB) and geosmin (GSM) are by-products of algae decomposition and, depending on their concentration, can be toxic; at lower concentrations, they still give water an unpleasant taste and odor. For water treatment companies, it is important to constantly monitor their presence in the distributed water and so avoid customer complaints. Lower-cost and easy-to-read instrumentation would be very promising in this regard. In this study, we evaluate the potential of an electronic tongue (ET) system, based on non-specific polymeric sensors and impedance measurements, for monitoring MIB and GSM in water samples. Principal component analysis (PCA) applied to the generated data matrix indicated that this ET could discriminate, with remarkable reproducibility, these two contaminants in either distilled or tap water at concentrations as low as 25 ng L-1. Nonetheless, this analysis methodology was qualitative and laborious, and its outputs were largely subjective; moreover, data analysis based on PCA severely restricts automation of the measuring system and its use by non-specialized operators. To circumvent these drawbacks, a fuzzy controller was designed to perform sample classification quantitatively while providing outputs in simpler data charts. The ET, together with this fuzzy controller, quantified MIB and GSM samples in distilled and tap water with a 100% hit rate, which could be read directly from the plot. The low cost of the polymeric sensors, allied to the special features of the fuzzy controller (ease of programming and numerical outputs), provides the initial requirements for developing an automated ET system to monitor odorant species in water production and distribution.
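
A minimal sketch of fuzzy classification in the spirit of the controller described above; the membership functions, breakpoints and class names are invented, operating on a single feature extracted from the impedance data (e.g., a PC1 score):

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Invented classes and breakpoints for illustration only.
CLASSES = {
    "clean water": (-1.5, -1.0, -0.2),
    "GSM present": (-0.5,  0.0,  0.5),
    "MIB present": ( 0.2,  1.0,  1.5),
}

def classify(feature_value):
    """Return each class's membership degree for one sample's feature."""
    return {name: float(triangular(feature_value, *abc))
            for name, abc in CLASSES.items()}

print(classify(0.1))   # mostly 'GSM present', with graded overlap
```

Numerical membership degrees like these are what make the controller's output directly readable, in contrast to the subjective inspection of PCA score plots.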

Relevance:

30.00%

Publisher:

Abstract:

Current scientific applications produce large amounts of data, and the processing, handling and analysis of such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim to improve the performance of data-intensive applications by optimizing data accesses. To achieve this goal, distributed storage systems employ techniques such as data replication, migration, distribution and access parallelism. The main drawback of those studies, however, is that they do not take application behavior into account when optimizing data access. This limitation motivated the present paper, which applies strategies for the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information about past executions. To accomplish this goal, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Based on these properties, the approach selects modeling techniques to represent the series and make predictions, which are later used to optimize data access operations. The new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that the approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
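
The classify-then-predict idea can be sketched as follows, with an invented I/O trace and a deliberately small model catalog (trend extrapolation vs. moving average); the paper's series properties and model selection are richer than this:

```python
import numpy as np

def predict_next(series):
    """Classify a behavior series by a measured property, then pick a
    predictor accordingly. Thresholds and models are illustrative."""
    y = np.asarray(series, dtype=float)
    t = np.arange(y.size)
    slope = np.polyfit(t, y, 1)[0]
    # A pronounced trend suggests linear extrapolation; otherwise a
    # short moving average is the safer predictor.
    if abs(slope) > 0.1 * (y.std() + 1e-12):
        coeffs = np.polyfit(t, y, 1)
        return float(np.polyval(coeffs, y.size))   # linear extrapolation
    return float(y[-3:].mean())                    # short moving average

io_trace = [10, 12, 15, 17, 20, 22]   # bytes requested per interval (toy)
print(predict_next(io_trace))          # extrapolated next request size
# Such predictions could then drive prefetching or replica placement.
```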

Relevance:

30.00%

Publisher:

Abstract:

Determining the utility harmonic impedance from measurements is a significant task for utility power-quality improvement and management. Compared with the well-established, accurate invasive methods, noninvasive methods are more desirable because they work with the natural variations of the loads connected to the point of common coupling (PCC), so that no intentional disturbance is needed. However, the accuracy of these methods must be improved. In this context, this paper first points out that the critical problem of the noninvasive methods is how to select the measurements that can be used with confidence for the utility harmonic impedance calculation. The paper then presents a new measurement technique based on complex-data least-squares regression, combined with two data-selection techniques. Simulation and field test results show that the proposed noninvasive method is practical and robust, so that it can be used with confidence to determine utility harmonic impedances.
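
The regression core of the method, least squares over complex-valued measurements, can be sketched with synthetic data; the measurement model below is a common simplification, and the impedance, background voltage and noise values are all invented:

```python
import numpy as np

# Model (simplified): the h-th harmonic voltage at the PCC is
# V = Z_u * I + E, where I is the measured harmonic current, Z_u the
# utility harmonic impedance, and E a roughly constant background term.
rng = np.random.default_rng(7)
n = 200
Z_true = 0.5 + 2.0j        # ohms, assumed for the demo
E_true = 1.0 + 0.3j        # background harmonic voltage, assumed
I = rng.normal(5, 1, n) * np.exp(1j * rng.uniform(0, 2 * np.pi, n))
V = Z_true * I + E_true + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Complex least squares: solve [I, 1] @ [Z, E]^T = V.
A = np.column_stack([I, np.ones(n, dtype=complex)])
(Z_est, E_est), *_ = np.linalg.lstsq(A, V, rcond=None)
print(f"estimated Z_u = {Z_est:.3f} ohm (true {Z_true})")
# The paper's data-selection techniques would first discard measurement
# windows where background variation dominates, improving this estimate.
```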

Relevance:

30.00%

Publisher:

Abstract:

Statistical methods have been widely employed to assess the capabilities of credit scoring classification models, in order to reduce the risk of wrong decisions when granting credit to clients. The predictive quality of a classification model can be evaluated with measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients, and information-theoretic measures such as relative entropy and mutual information. In this paper we analyze the performance of a naive logistic regression model (Hosmer & Lemeshow, 1989) and a logistic regression with state-dependent sample selection model (Cramer, 2004) applied to simulated data. As a case study, the methodology is also illustrated on a data set extracted from a Brazilian bank portfolio. Our simulation results revealed no statistically significant difference in predictive capacity between the naive logistic regression models and the logistic regression with state-dependent sample selection models. However, the distributions of the estimated default probabilities from the two modeling techniques differ strongly, with the naive logistic regression models always underestimating these probabilities, particularly in the presence of balanced samples.
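
A toy version of the naive baseline, fitted to simulated credit data; the features and the data-generating process are invented, and the state-dependent sample selection model of Cramer (2004) is not implemented here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulate applicants with two invented features and a known default model.
rng = np.random.default_rng(0)
n = 1000
income = rng.normal(5, 2, n)
debt_ratio = rng.uniform(0, 1, n)
logit = -1.0 - 0.5 * income + 3.0 * debt_ratio
p_default = 1 / (1 + np.exp(-logit))
default = rng.binomial(1, p_default)

# Fit the 'naive' logistic regression and recover default probabilities.
X = np.column_stack([income, debt_ratio])
model = LogisticRegression().fit(X, default)
probs = model.predict_proba(X)[:, 1]

# Comparing the distribution of estimated vs. true probabilities is the
# kind of check that can reveal systematic underestimation.
print(f"mean true p: {p_default.mean():.3f}, mean estimated p: {probs.mean():.3f}")
```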

Relevance:

30.00%

Publisher:

Abstract:

Objective: The aim of this study was to evaluate, ex vivo, the precision of five electronic root canal length measurement devices (ERCLMDs) with different operating systems (the Root ZX, Mini Apex Locator, Propex II, iPex, and RomiApex A-15) and the possible influence of positioning the instrument tips short of the apical foramen. Material and Methods: Forty-two mandibular bicuspids had their real canal lengths (RL) determined beforehand. Electronic measurements were performed 1.0 mm short of the apical foramen (-1.0), followed by measurements at the apical foramen (0.0). The data resulting from the comparison of the ERCLMD measurements and the RL were evaluated by the Wilcoxon and Friedman tests at a significance level of 5%. Results: For the measurements performed at 0.0 and -1.0, the precision rates of the ERCLMDs were, respectively: 73.5% and 47.1% (Root ZX), 73.5% and 55.9% (Mini Apex Locator), 67.6% and 41.1% (Propex II), 61.7% and 44.1% (iPex), and 79.4% and 44.1% (RomiApex A-15), considering a tolerance of ±0.5 mm. Regarding the mean discrepancies, no differences were observed at 0.0; however, in the measurements at -1.0, the iPex, a multi-frequency ERCLMD, had significantly more discrepant readings short of the apical foramen than the other devices, except for the Propex II, which had intermediate results. When the ERCLMD measurements at -1.0 were compared with those at 0.0, the Propex II, iPex and RomiApex A-15 presented significantly higher discrepancies in their readings. Conclusions: Under the conditions of the present study, all the ERCLMDs provided acceptable measurements at the 0.0 position. However, at the -1.0 position, the ERCLMDs had lower precision, with statistically significant differences for the Propex II, iPex, and RomiApex A-15.

Relevance:

30.00%

Publisher:

Abstract:

With the increasing production of information from e-government initiatives comes the need to transform large volumes of unstructured data into useful information for society. All this information should be easily accessible and made available in a meaningful and effective way, in order to achieve semantic interoperability in electronic government services, a challenge pursued by governments around the world. Our aim is to discuss the context of e-Government Big Data and to present a framework that promotes semantic interoperability through the automatic generation of ontologies from unstructured information found on the Internet. We propose the use of fuzzy mechanisms to deal with natural-language terms and present related work in this area. The results achieved in this study consist of the architectural definition and the major components and requirements that compose the proposed framework. With this, it is possible to take advantage of the large volume of information generated by e-Government initiatives and use it to benefit society.
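
One plausible fuzzy mechanism for handling natural-language terms is approximate string matching; the sketch below uses Python's standard-library difflib with an invented concept list and threshold, and is not the framework's actual component:

```python
from difflib import SequenceMatcher

# Invented ontology concepts for illustration only.
ONTOLOGY_CONCEPTS = ["public procurement", "tax collection", "civil registry"]

def match_concept(term, threshold=0.6):
    """Return the best-matching concept and its similarity in [0, 1],
    or (None, score) when no concept is similar enough."""
    scored = [(SequenceMatcher(None, term.lower(), c).ratio(), c)
              for c in ONTOLOGY_CONCEPTS]
    score, concept = max(scored)
    return (concept, score) if score >= threshold else (None, score)

# A misspelled or variant term still maps to the intended concept.
print(match_concept("tax colection"))   # -> ('tax collection', ~0.96)
```

Graded similarity scores of this kind are what allow noisy terms harvested from the Internet to be linked to ontology concepts without exact matches.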