935 results for Electronic data processing -- Quality control
Abstract:
Environmental management is a complex task. The amount and heterogeneity of the data needed for an environmental decision-making tool are overwhelming without adequate database systems and innovative methodologies. As far as data management, data interaction and data processing are concerned, we propose the use of a Geographical Information System (GIS), while for decision making we suggest a Multi-Agent System (MAS) architecture. With the adoption of a GIS we hope to provide the complementary coexistence of heterogeneous data sets, a correct data structure, good storage capacity and a user-friendly interface. By choosing a distributed architecture such as a Multi-Agent System, where each agent is a semi-autonomous Expert System with the skills needed to cooperate with the others to solve a given task, we hope to ensure dynamic problem decomposition and to achieve better performance than standard monolithic architectures. Finally, in view of the partial, imprecise, and ever-changing character of the information available for decision making, Belief Revision capabilities are added to the system. Our aim is to present and discuss an intelligent environmental management system capable of suggesting the most appropriate land-use actions based on the existing spatial and non-spatial constraints.
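A minimal Python sketch of the multi-agent idea described in this abstract follows; the agent names, skills, and belief-revision rule are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the MAS decision loop: semi-autonomous expert agents with
# skills, belief revision, and dynamic task routing. All names here are
# illustrative assumptions, not the paper's actual agents.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    skills: set
    beliefs: dict = field(default_factory=dict)

    def can_handle(self, task: str) -> bool:
        return task in self.skills

    def revise(self, fact: str, value: bool) -> None:
        # Belief revision: newer information overrides the previous,
        # possibly imprecise, belief about the environment.
        self.beliefs[fact] = value

    def propose(self, task: str) -> str:
        return f"{self.name} recommends an action for '{task}' given {self.beliefs}"

def solve(tasks, agents):
    """Dynamic problem decomposition: route each sub-task to the first
    agent whose expertise covers it."""
    return [next(a for a in agents if a.can_handle(t)).propose(t) for t in tasks]

agents = [Agent("HydrologyES", {"flood_risk"}), Agent("LandUseES", {"zoning"})]
agents[0].revise("river_level_high", True)
print(solve(["flood_risk", "zoning"], agents))
```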
Abstract:
This study identifies predictors and normative data for quality of life (QOL) in a sample of Portuguese adults from the general population. A cross-sectional correlational study was undertaken with two hundred and fifty-five (N = 255) individuals from the Portuguese general population (mean age 43 years, range 25–84 years; 148 females, 107 males). Participants completed the European Portuguese versions of the World Health Organization Quality of Life short-form instrument and the Center for Epidemiologic Studies Depression Scale. Demographic information was also collected. Portuguese adults reported their QOL as good. The physical, psychological and environmental domains predicted 44% of the variance in QOL. The strongest predictor was the physical domain and the weakest was social relationships. Age, educational level, socioeconomic status and emotional status were significantly correlated with QOL and explained 25% of the variance in QOL; among these, the strongest predictor was emotional status, followed by education and age. QOL differed significantly according to marital status, place of residence (mainland or islands), type of cohabitants, occupation, and health. The sample of adults from the general Portuguese population reported high levels of QOL. The life domain that best explained QOL was the physical domain. Among the other variables, emotional status best predicted QOL. Further variables influenced overall QOL. These findings inform our understanding of QOL in the Portuguese adult general population and can help researchers and practitioners using this assessment tool to compare their results with normative data.
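For readers unfamiliar with "variance explained" figures such as the 44% reported above, the hedged sketch below shows how such a number is typically read off a linear regression; the data are simulated, not the study's dataset.

```python
# Illustrative regression: predict QOL from three domain scores and
# report R^2, the share of variance explained. Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 255  # same sample size as the study; values themselves are simulated
physical, psychological, environmental = rng.normal(size=(3, n))
qol = 0.6 * physical + 0.3 * psychological + 0.2 * environmental + rng.normal(size=n)

X = sm.add_constant(np.column_stack([physical, psychological, environmental]))
fit = sm.OLS(qol, X).fit()
print(f"R^2 (share of QOL variance explained): {fit.rsquared:.2f}")
print(fit.params)  # intercept and per-domain coefficients
```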
Abstract:
Digital oscilloscopes are used in many fields of knowledge and have become indispensable instruments in electronic engineering. Thanks to the advent of Field Programmable Gate Arrays (FPGAs), reconfigurable measurement instruments, given their advantages, i.e., high performance, low cost and high flexibility, are increasingly an alternative to the instruments traditionally used in laboratories. With the goal of standardizing access to and control of this type of instrument, this thesis describes the design and implementation of a reconfigurable digital oscilloscope based on the IEEE 1451.0 standard. Defined according to an architecture based on this standard, the oscilloscope's characteristics are described in a data structure called a Transducer Electronic Data Sheet (TEDS), and its control is performed using a set of standardized commands. The oscilloscope implements a set of basic features and functionalities, all verified experimentally. These include a bandwidth of 575 kHz, a measurement range of 0.4 V to 2.9 V, and the ability to define a set of horizontal scales, the trigger level and slope, and the coupling mode with the circuit under analysis. Architecturally, the oscilloscope consists of a module specified in the Verilog hardware description language (HDL) and an interface developed in the Java® programming language. The module is embedded in an FPGA and defines all of the oscilloscope's processing; the interface enables its control and the display of the measured signal. The project used an Analog/Digital (A/D) converter with a maximum sampling frequency of 1.5 MHz and 14 bits of resolution which, due to its limitations, required the implementation of a multi-stage interpolation system with digital filters.
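The multi-stage interpolation mentioned at the end of the abstract can be illustrated with a short sketch; the cascade factors and test signal below are assumptions for demonstration, not the thesis design.

```python
# Illustrative multi-stage interpolation: each stage upsamples by a
# small factor with an anti-imaging FIR filter (resample_poly). Two
# cascaded x2 stages need cheaper filters than one x4 stage.
import numpy as np
from scipy.signal import resample_poly

fs = 1.5e6                              # ADC sampling rate from the abstract
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 100e3 * t)       # 100 kHz test tone (illustrative)

stage1 = resample_poly(x, up=2, down=1)       # 1.5 MHz -> 3 MHz
stage2 = resample_poly(stage1, up=2, down=1)  # 3 MHz -> 6 MHz
print(len(x), len(stage1), len(stage2))       # 1024 -> 2048 -> 4096 samples
```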
Abstract:
Nowadays, data centers are large energy consumers, and this trend is expected to increase in the coming years given the growth of cloud services. A large portion of this power consumption is due to the control of the physical parameters of the data center (such as temperature and humidity). However, these physical parameters are tightly coupled with computations, and even more so in upcoming data centers, where the location of workloads can vary substantially because, for example, workloads are moved within the cloud infrastructure hosted in the data center. Managing the physical and compute infrastructure of a large data center is therefore an embodiment of a Cyber-Physical System (CPS). In this paper, we describe a data collection and distribution architecture that enables gathering the physical parameters of a large data center at very high temporal and spatial resolution of the sensor measurements. We believe this is an important characteristic for enabling more accurate heat-flow models of the data center and, with them, finding opportunities to optimize energy consumption. Having a high-resolution picture of data center conditions also enables minimizing local hot-spots, performing more accurate predictive maintenance (failures in all infrastructure equipment can be detected more promptly) and more accurate billing. We detail this architecture and define the structure of the underlying messaging system used to collect and distribute the data. Finally, we show the results of a preliminary study of a typical data center radio environment.
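As a hedged sketch of how such a messaging layer might publish high-resolution sensor readings, the example below uses MQTT; the broker host, topic scheme, and JSON fields are assumptions, since the paper's actual protocol is not given here.

```python
# Sketch: publish one sensor reading over MQTT. Broker address, topic
# layout, and message schema are hypothetical, not the paper's design.
import json
import time

import paho.mqtt.publish as publish

reading = {
    "rack": "A-12",        # spatial coordinate of the sensor (hypothetical)
    "sensor": "inlet-temp",
    "value_c": 24.7,
    "ts": time.time(),     # per-reading timestamp for high temporal resolution
}

# Encoding the location in the topic lets consumers subscribe per rack,
# per room, or to the whole data center via wildcards.
publish.single(
    "dc/room1/rack/A-12/temp",
    json.dumps(reading),
    hostname="broker.example.local",  # hypothetical broker
    qos=1,
)
```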
Abstract:
Quality control in magnetic resonance imaging (MRI) involves performing various equipment tests and daily calibrations, in which phantoms play a fundamental role. The main objective of this work was the development of a brain phantom for a 3.0 Tesla MRI system. Based on the existing literature, gadolinium(III) chloride (GdCl3), agarose and the gelling agent carrageenan were chosen as reagents, and the chemical preservative sodium azide (NaN3) was added to inhibit degradation of the solution. Several tests were performed with different concentrations of the selected materials until mixtures suited to the magnetic susceptibility of cerebral white and gray matter were obtained. The T1 relaxation times of the various substances developed were measured; the final phantom presented T1 times of 702±10 ms for a GdCl3 concentration of 100 µmol (white matter) and 1179±23 ms for a concentration of 15 µmol (gray matter). The phantom's T1 values were compared statistically with relaxation times obtained from a human brain, yielding a statistically significant correlation of 0.867. To demonstrate the phantom's applicability, it was subjected to an MRI protocol comprising the sequences commonly used in brain studies. The main results showed that, in T1-weighted sequences, the phantom presents a strong positive association (rs > 0.700, p = 0.072) with the reference brain, although not statistically significant. The T2-weighted sequences showed moderate and weak positive correlations, with the proton-density weighting being the only one to present a negative association. The phantom thus proved to be an excellent substitute for the human brain. This work culminated in the creation of a three-dimensional brain model in which the white- and gray-matter regions were individualized so that they could later be filled with the corresponding developed substances, yielding an anthropomorphic brain phantom.
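A worked example of the correlation statistic reported above (rs) is sketched below with SciPy; the intensity values are placeholders, not the study's measurements.

```python
# Spearman rank correlation between phantom and reference-brain signal
# intensities, the statistic quoted in the abstract. Values are
# hypothetical ROI means, not the study's data.
from scipy.stats import spearmanr

phantom_signal  = [310, 450, 520, 610, 700]
reference_brain = [305, 470, 500, 640, 690]

rs, p = spearmanr(phantom_signal, reference_brain)
print(f"rs = {rs:.3f}, p = {p:.3f}")
```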
Abstract:
Foodborne diseases represent operational risks in industrial restaurants. We describe an outbreak of nine clustered cases of acute illness resembling acute toxoplasmosis in an industrial plant with 2,300 employees. These patients and another 36 similar asymptomatic employees were assessed for anti-T. gondii IgG titer and avidity by ELISA. We excluded 14 individuals based on high IgG avidity and chronic toxoplasmosis: 13 from the controls and one from the acute group whose disease was other than T. gondii infection. We also identified another three asymptomatic employees with acute T. gondii infection, also anti-T. gondii IgM positive, as additional acute cases. A case-control study was conducted by interview with 11 acute infections and 20 negative controls. Ingestion of green vegetables, but not meat or water, was associated with the incidence of acute disease. These data reinforce the importance of sanitation control in industrial restaurants and demonstrate the need for improved quality control of vegetables at risk for T. gondii oocyst contamination. We emphasize the value of accurate diagnosis of index cases and the detection of asymptomatic infections to determine the extent of a toxoplasmosis outbreak.
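A hedged sketch of the case-control arithmetic behind such an association claim follows; the 2x2 cell counts are hypothetical (only the group sizes, 11 cases and 20 controls, come from the abstract).

```python
# Fisher's exact test on a 2x2 exposure table, the standard small-sample
# case-control analysis. Counts are hypothetical for illustration.
from scipy.stats import fisher_exact

#        exposed  unexposed   (exposure = ate green vegetables)
table = [[10, 1],    # 11 acute cases
         [8, 12]]    # 20 controls

odds_ratio, p = fisher_exact(table)
print(f"OR = {odds_ratio:.1f}, p = {p:.4f}")
```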
Abstract:
Dissertation submitted to obtain the degree of Doctor in Biology, specialty in Molecular Biology.
Abstract:
Dissertation presented to obtain the degree of Master in Electrical and Computer Engineering from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
INTRODUCTION: Before 2004, the occurrence of acute Chagas disease (ACD) by oral transmission associated with food was scarcely known or investigated. Originally sporadic and circumstantial, ACD occurrences have now become frequent in the Amazon region, with recently related outbreaks spreading to several Brazilian states. These cases are associated with the consumption of açai juice contaminated by waste from reservoir animals or by insect vectors infected with Trypanosoma cruzi in endemic areas. Although guidelines for processing the fruit to minimize contamination by microorganisms and parasites exist, açai-based products must be assessed for quality, for which appropriate methodologies are needed. METHODS: Dilutions ranging from 5 to 1,000 T. cruzi CL Brener cells were mixed with 2 mL of açai juice. Four methods for extracting T. cruzi DNA from the fruit were tested, and the cetyltrimethyl ammonium bromide (CTAB) method was selected according to JRC, 2005. RESULTS: DNA extraction by the CTAB method yielded satisfactory results with regard to purity and concentration for use in PCR. Overall, the methods employed showed that not only extraction efficiency but also high sensitivity in amplification was important. CONCLUSIONS: The method for T. cruzi detection in food is a powerful tool in the epidemiological investigation of outbreaks, as it turns epidemiological evidence into supporting data that confirm T. cruzi infection in foods. It also facilitates food quality control and the assessment of good manufacturing practices involving açai-based products.
Abstract:
Propolis is a chemically complex biomass produced by honeybees (Apis mellifera) from plant resins with the addition of salivary enzymes, beeswax, and pollen. The biological activities described for propolis have also been identified for the donor plants' resin, but a major challenge for the standardization of the chemical composition and biological effects of propolis remains a better understanding of the influence of seasonality on the chemical constituents of that raw material. Since propolis quality depends, among other variables, on the local flora, which is strongly influenced by (a)biotic factors over the seasons, unraveling the effect of harvest season on the propolis chemical profile is an issue of recognized importance. Fast, cheap, and robust analytical techniques seem to be the best choice for large-scale quality control in the most demanding markets, e.g., human health applications. To that end, UV-Visible (UV-Vis) scanning spectrophotometry of hydroalcoholic extracts (HE) of seventy-three propolis samples, collected over the seasons in 2014 (summer, spring, autumn, and winter) and 2015 (summer and autumn) in Southern Brazil, was adopted. Machine learning and chemometric techniques were then applied to the UV-Vis dataset to gain insight into the effect of seasonality on the claimed chemical heterogeneity of propolis samples determined by changes in the flora of the geographic region under study. Descriptive and classification models were built following a chemometric approach, i.e., principal component analysis (PCA) and hierarchical clustering analysis (HCA), supported by scripts written in the R language. The UV-Vis profiles combined with chemometric analysis allowed the identification of a typical pattern in propolis samples collected in the summer. Importantly, the discrimination based on PCA could be improved by using the dataset of the fingerprint region of phenolic compounds (λ = 280–400 nm), suggesting that besides the biological activities of those secondary metabolites, they also play a relevant role in the discrimination and classification of that complex matrix through bioinformatics tools. Finally, a series of machine learning approaches, e.g., partial least squares discriminant analysis (PLS-DA), k-Nearest Neighbors (kNN), and Decision Trees, proved complementary to PCA and HCA, yielding relevant information on sample discrimination.
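A minimal sketch of the chemometric step described above, PCA restricted to the phenolic fingerprint region, might look as follows in Python (the abstract's analyses used R scripts); the spectra here are simulated, not the propolis data.

```python
# PCA on UV-Vis absorbance spectra limited to the 280-400 nm phenolic
# fingerprint region; scores are what one would plot to look for
# seasonal clusters. Spectra are random stand-ins for illustration.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
wavelengths = np.arange(280, 401)              # 1 nm steps, fingerprint region
spectra = rng.random((73, wavelengths.size))   # 73 samples, as in the study

scores = PCA(n_components=2).fit_transform(spectra)
print(scores.shape)  # (73, 2): sample coordinates in the first two PCs
```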
Abstract:
Master's dissertation in Quality Engineering and Management.
Abstract:
BACKGROUND: This study describes the prevalence, associated anomalies, and demographic characteristics of cases of multiple congenital anomalies (MCA) in 19 population-based European registries (EUROCAT) covering 959,446 births from 2004 to 2010. METHODS: EUROCAT implemented a computer algorithm for classification of congenital anomaly cases followed by manual review of potential MCA cases by geneticists. MCA cases are defined as cases with two or more major anomalies of different organ systems, excluding sequences, chromosomal and monogenic syndromes. RESULTS: The combination of an epidemiological and clinical approach to case classification has improved the quality and accuracy of the MCA data. Total prevalence of MCA cases was 15.8 per 10,000 births. Fetal deaths and terminations of pregnancy were significantly more frequent in MCA cases than in isolated cases (p < 0.001), and MCA cases were more frequently prenatally diagnosed (p < 0.001). Live born infants with MCA were more often born preterm (p < 0.01) and with birth weight < 2500 grams (p < 0.01). Respiratory and ear, face, and neck anomalies were the most likely to occur with other anomalies (34% and 32%, respectively), while congenital heart defects and limb anomalies were the least likely to occur with other anomalies (13%) (p < 0.01). However, due to their high prevalence, congenital heart defects were present in half of all MCA cases. Among males with MCA, the frequency of genital anomalies was significantly greater than among females with MCA (p < 0.001). CONCLUSION: Although rare, MCA cases are an important public health issue because of their severity. The EUROCAT database of MCA cases will allow future investigation of the epidemiology of these conditions and related clinical and diagnostic problems.
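The classification rule stated in the methods can be sketched as a small function; the field names below are illustrative, not EUROCAT's actual variables, and the real algorithm's output is followed by manual review by geneticists.

```python
# Sketch of the stated MCA rule: two or more major anomalies in
# different organ systems, excluding sequences and chromosomal or
# monogenic syndromes. Field names are hypothetical.
def is_potential_mca(anomalies, has_syndrome_or_sequence):
    """anomalies: list of (organ_system, is_major) tuples for one case."""
    if has_syndrome_or_sequence:
        return False
    major_systems = {system for system, is_major in anomalies if is_major}
    return len(major_systems) >= 2  # flagged for manual geneticist review

print(is_potential_mca([("heart", True), ("limb", True)], False))   # True
print(is_potential_mca([("heart", True), ("heart", True)], False))  # False: same system
```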
Abstract:
Assays that measure a patient's immune response play an increasingly important role in the development of immunotherapies. The inherent complexity of these assays and independent protocol development between laboratories result in high data variability and poor reproducibility. Quality control through harmonization, based on integration of laboratory-specific protocols with standard operating procedures and assay performance benchmarks, is one way to overcome these limitations. Harmonization guidelines can be widely implemented to address assay performance variables. This process enables objective interpretation and comparison of data across clinical trial sites and also facilitates the identification of relevant immune biomarkers, guiding the development of new therapies.
Abstract:
Introduction: As part of the MicroArray Quality Control (MAQC)-II project, this analysis examines how the choice of univariate feature-selection methods and classification algorithms may influence the performance of genomic predictors under varying degrees of prediction difficulty represented by three clinically relevant endpoints. Methods: We used gene-expression data from 230 breast cancers (grouped into training and independent validation sets), and we examined 40 predictors (five univariate feature-selection methods combined with eight different classifiers) for each of the three endpoints. Their classification performance was estimated on the training set by using two different resampling methods and compared with the accuracy observed in the independent validation set. Results: A ranking of the three classification problems was obtained, and the performance of 120 models was estimated and assessed on an independent validation set. The bootstrapping estimates were closer to the validation performance than were the cross-validation estimates. The required sample size for each endpoint was estimated, and both gene-level and pathway-level analyses were performed on the obtained models. Conclusions: We showed that genomic predictor accuracy is determined largely by an interplay between sample size and classification difficulty. Variations in univariate feature-selection methods and the choice of classification algorithm have only a modest impact on predictor performance, and several statistically equally good predictors can be developed for any given classification problem.
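One of the 40 predictor configurations described above can be sketched as a univariate feature-selection step chained to a classifier; the data below are synthetic, and the specific selector and classifier are illustrative choices rather than the study's.

```python
# Sketch of one feature-selection + classifier configuration, scored by
# cross-validation. Synthetic data; MAQC-II used 230 real expression profiles.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=230, n_features=1000, random_state=0)
model = make_pipeline(
    SelectKBest(f_classif, k=50),            # univariate feature selection
    LogisticRegression(max_iter=1000),       # one of several possible classifiers
)
scores = cross_val_score(model, X, y, cv=5)
print(f"CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Placing the selector inside the pipeline makes feature selection re-run within each cross-validation fold, which avoids the selection bias that inflates resampling estimates.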
Abstract:
Type 2 diabetes mellitus (T2DM) is a major disease affecting nearly 280 million people worldwide. While the pathophysiological mechanisms leading to the disease are poorly understood, dysfunction of the insulin-producing pancreatic beta-cells is a key event in disease development. Monitoring the gene expression profiles of pancreatic beta-cells under several genetic or chemical perturbations has shed light on genes and pathways involved in T2DM. The EuroDia database has been established to build a unique collection of gene expression measurements performed on beta-cells of three organisms, namely human, mouse and rat. The Gene Expression Data Analysis Interface (GEDAI) has been developed to support this database. The quality of each dataset is assessed by a series of quality control procedures to detect putative hybridization outliers. The system integrates a web interface to several standard analysis functions from R/Bioconductor to identify differentially expressed genes and pathways. It also allows the combination of multiple experiments performed on different array platforms of the same technology. The design of this system enables each user to rapidly design a custom analysis pipeline and thus produce their own list of genes and pathways. Raw and normalized data can be downloaded for each experiment. The flexible engine of this database (GEDAI) is currently used to handle gene expression data from several laboratory-run projects dealing with different organisms and platforms. Database URL: http://eurodia.vital-it.ch.
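A hedged sketch of the kind of per-gene differential-expression test such a system wraps (GEDAI itself calls R/Bioconductor functions) is shown below on simulated data.

```python
# Per-gene two-sample t-test with Benjamini-Hochberg multiple-testing
# correction, a simple stand-in for the R/Bioconductor analyses the
# abstract describes. Expression values are simulated.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
control = rng.normal(0, 1, size=(1000, 10))  # 1000 genes x 10 arrays
treated = rng.normal(0, 1, size=(1000, 10))
treated[:50] += 1.5                          # 50 truly shifted genes

_, pvals = ttest_ind(control, treated, axis=1)
reject, qvals, _, _ = multipletests(pvals, method="fdr_bh")
print(f"genes called differentially expressed at FDR 5%: {reject.sum()}")
```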