975 results for STATISTICAL TESTS
Abstract:
Modern real-time systems, with their more flexible and adaptive nature, demand approaches for timeliness evaluation based on probabilistic measures of meeting deadlines. In this context, simulation can emerge as an adequate solution to understand and analyse the timing behaviour of actual systems. However, care must be taken with the obtained outputs, at the risk of producing results that lack credibility. It is particularly important to consider that we are more interested in values from the tail of a probability distribution (near worst-case probabilities) than in deriving confidence on mean values. We approach this subject by considering the random nature of simulation output data. We start by discussing well-known approaches for estimating distributions out of simulation output, and the confidence which can be attached to their mean values. This is the basis for a discussion of the applicability of such approaches to deriving confidence on the tail of distributions, where the worst case is expected to be.
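The distinction drawn above can be illustrated with a minimal sketch (all numbers and distributions below are hypothetical, not taken from the paper): instead of a confidence interval on the mean response time, a confidence interval on the deadline-miss probability can be computed directly from the tail counts of the simulation output, e.g. with a Wilson score interval.

```python
import math
import random

def tail_probability_ci(samples, threshold, z=1.96):
    """Wilson score interval for p = P(X > threshold).

    For timeliness evaluation, the quantity of interest is the
    deadline-miss probability in the tail, not the mean response time.
    """
    n = len(samples)
    k = sum(1 for x in samples if x > threshold)
    p_hat = k / n
    denom = 1.0 + z * z / n
    centre = (p_hat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1.0 - p_hat) / n + z * z / (4.0 * n * n))
    return p_hat, max(0.0, centre - half), min(1.0, centre + half)

# hypothetical simulated response times (ms); the deadline is 50 ms
random.seed(1)
times = [random.expovariate(1.0 / 10.0) for _ in range(10000)]
p_hat, lo, hi = tail_probability_ci(times, threshold=50.0)
```

Note that the interval width depends on how many tail observations the simulation run actually produced, which is exactly why near worst-case probabilities demand more care than mean values.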
Abstract:
A number of characteristics are boosting the eagerness to extend Ethernet to also cover factory-floor distributed real-time applications. Full-duplex links, non-blocking and priority-based switching, and bandwidth availability, just to mention a few, are characteristics upon which that eagerness is building. But will Ethernet technologies really manage to replace traditional fieldbus networks? Ethernet technology, by itself, does not include features above the lower layers of the OSI communication model. In the past few years, a considerable amount of work has been devoted to the timing analysis of Ethernet-based technologies. The majority of those works, however, are restricted to the analysis of subsets of the overall computing and communication system, and thus do not address timeliness at a holistic level. To this end, we are addressing a few inter-linked research topics with the purpose of setting up a framework for the development of tools suitable for extracting temporal properties of Commercial-Off-The-Shelf (COTS) Ethernet-based factory-floor distributed systems. This framework is being applied to a specific COTS technology, Ethernet/IP. In this paper, we reason about the modelling and simulation of Ethernet/IP-based systems, and about the use of statistical analysis techniques to provide usable results. Discrete event simulation models of a distributed system can be a powerful tool for the timeliness evaluation of the overall system, but particular care must be taken with the results provided by traditional statistical analysis techniques.
Abstract:
Adsorption-based separation operations have been gaining importance in recent years, especially with the development of techniques for simulating moving beds in columns, such as Simulated Moving Bed (SMB) chromatography. This technology was developed in the early 1960s as an alternative to the True Moving Bed (TMB) process, in order to solve several of the problems associated with the movement of the solid phase that are common in counter-current chromatographic separation methods. SMB technology has been widely used at industrial scale, mainly in the petrochemical and sugar-processing industries and, more recently, in the pharmaceutical and fine-chemistry industries. In recent decades, the growing interest in SMB technology, a result of its high yield and efficient solvent consumption, has led to the formulation of different, so-called non-conventional, operating modes that produce more flexible units, capable of increasing separation performance and further widening the technology's range of application. One of the most studied and implemented examples is the Varicol process, in which the ports are moved asynchronously. In this context, the present work focuses on the simulation, analysis and evaluation of SMB technology for two distinct separation cases: the separation of a fructose-glucose mixture and the separation of a racemic mixture of pindolol. For both cases, two operating modes of the SMB unit were considered and compared: the conventional mode and the Varicol mode. Both separation cases were therefore implemented and simulated in the Aspen Chromatography process simulator, using two distinct SMB units (conventional SMB and Varicol SMB).
For the separation of the fructose-glucose mixture, two approaches were used to model the conventional SMB unit: that of a true moving bed (TMB model) and that of a real simulated moving bed (SMB model). For the separation of the racemic pindolol mixture, only the SMB model was considered. In the case of the fructose-glucose separation, both the conventional and Varicol SMB units were further optimized with the aim of increasing their productivities. The optimization was carried out by applying a design-of-experiments procedure, in which the experiments were planned, conducted and subsequently analysed through analysis of variance (ANOVA). The statistical analysis made it possible to select the levels of the control factors so as to obtain better results for both SMB units.
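The ANOVA step used in the design-of-experiments procedure can be sketched as a one-way F statistic computed directly from group responses. The data below are purely illustrative (hypothetical productivity values, not results from this work):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of groups of responses.

    F = (between-group mean square) / (within-group mean square);
    a large F suggests the factor levels change the response.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical productivities for three levels of one control factor
f_stat = one_way_anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [10.0, 11.0, 12.0]])
```

The F statistic is then compared against the F distribution with (k-1, n-k) degrees of freedom to decide whether a control factor has a significant effect.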
Abstract:
The principal topic of this work is the application of data mining techniques, in particular machine learning, to the discovery of knowledge in a protein database. The first chapter presents the general background. In Section 1.1 we overview the methodology of a data mining project and its main algorithms. Section 1.2 outlines an introduction to proteins and their supporting file formats. The chapter concludes with Section 1.3, which defines the main problem we intend to address with this work: determining whether an amino acid is exposed or buried in a protein, in a discrete way (i.e., not continuous), for five exposure levels: 2%, 10%, 20%, 25% and 30%. The second chapter, following closely the CRISP-DM methodology, presents the whole process of constructing the database that supported this work. Namely, it describes the process of loading data from the Protein Data Bank, DSSP and SCOP. Then an initial data exploration is performed and a simple prediction model (baseline) of the relative solvent accessibility of an amino acid is introduced. The Data Mining Table Creator, a program developed to produce the data mining tables required for this problem, is also introduced. In the third chapter the results obtained are analysed with statistical significance tests. Initially the several classifiers used (Neural Networks, C5.0, CART and CHAID) are compared, and it is concluded that C5.0 is the most suitable for the problem at stake. The influence of parameters such as the amino acid information level, the amino acid window size and the SCOP class type on the accuracy of the predictive models is also compared. The fourth chapter starts with a brief review of the literature on amino acid relative solvent accessibility. Then we overview the main results achieved and finally discuss possible future work. The fifth and last chapter consists of appendices. Appendix A has the schema of the database that supported this thesis.
Appendix B has a set of tables with additional information. Appendix C describes the software provided on the DVD accompanying this thesis, which allows the reconstruction of the present work.
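One common way to compare two classifiers on the same test set, as done in the third chapter with significance tests, is McNemar's test on their disagreements. A hedged sketch follows; the counts are hypothetical, not results from this thesis:

```python
def mcnemar_statistic(b, c):
    """McNemar chi-squared statistic with continuity correction.

    b = cases where only classifier A is correct,
    c = cases where only classifier B is correct; cases where both
    agree carry no information about which classifier is better.
    """
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# hypothetical disagreement counts between two classifiers on one test set
chi2 = mcnemar_statistic(b=15, c=5)
# with 1 degree of freedom, the 5% critical value is 3.84
significant = chi2 > 3.84
```

The test only uses the discordant pairs, which makes it suitable when two models are evaluated on exactly the same instances.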
Abstract:
This Thesis describes the application of automatic learning methods for a) the classification of organic and metabolic reactions, and b) the mapping of Potential Energy Surfaces (PES). The classification of reactions was approached with two distinct methodologies: a representation of chemical reactions based on NMR data, and a representation of chemical reactions from the reaction equation based on the physico-chemical and topological features of chemical bonds. NMR-based classification of photochemical and enzymatic reactions. Photochemical and metabolic reactions were classified by Kohonen Self-Organizing Maps (Kohonen SOMs) and Random Forests (RFs), taking as input the difference between the 1H NMR spectra of the products and the reactants. Such a representation can be applied to the automatic analysis of changes in the 1H NMR spectrum of a mixture and their interpretation in terms of the chemical reactions taking place. Examples of possible applications are the monitoring of reaction processes, the evaluation of the stability of chemicals, or even the interpretation of metabonomic data. A Kohonen SOM trained with a data set of metabolic reactions catalysed by transferases was able to correctly classify 75% of an independent test set in terms of the EC number subclass. Random Forests improved the correct predictions to 79%. With photochemical reactions classified into 7 groups, an independent test set was classified with 86-93% accuracy. The data set of photochemical reactions was also used to simulate mixtures in which two reactions occur simultaneously. Kohonen SOMs and Feed-Forward Neural Networks (FFNNs) were trained to classify the reactions occurring in a mixture based on the 1H NMR spectra of the products and reactants. Kohonen SOMs allowed the correct assignment of 53-63% of the mixtures (in a test set). Counter-Propagation Neural Networks (CPNNs) gave similar results.
The use of supervised learning techniques allowed an improvement in the results. They were improved to 77% of correct assignments when an ensemble of ten FFNNs was used, and to 80% when Random Forests were used. This study was performed with NMR data simulated from the molecular structure by the SPINUS program. In the design of one test set, simulated data was combined with experimental data. The results support the proposal of linking databases of chemical reactions to experimental or simulated NMR data for automatic classification of reactions and mixtures of reactions. Genome-scale classification of enzymatic reactions from their reaction equation. The MOLMAP descriptor relies on a Kohonen SOM that defines types of bonds on the basis of their physico-chemical and topological properties. The MOLMAP descriptor of a molecule represents the types of bonds available in that molecule. The MOLMAP descriptor of a reaction is defined as the difference between the MOLMAPs of the products and the reactants, and numerically encodes the pattern of bonds that are broken, changed, and made during a chemical reaction. The automatic perception of chemical similarities between metabolic reactions is required for a variety of applications, ranging from the computer validation of classification systems and genome-scale reconstruction (or comparison) of metabolic pathways, to the classification of enzymatic mechanisms. Catalytic functions of proteins are generally described by EC numbers, which are simultaneously employed as identifiers of reactions, enzymes, and enzyme genes, thus linking metabolic and genomic information. Different methods should be available to automatically compare metabolic reactions and to automatically assign EC numbers to reactions not yet officially classified.
In this study, the genome-scale data set of enzymatic reactions available in the KEGG database was encoded by the MOLMAP descriptors, and was submitted to Kohonen SOMs to compare the resulting map with the official EC number classification, to explore the possibility of predicting EC numbers from the reaction equation, and to assess the internal consistency of the EC classification at the class level. A general agreement with the EC classification was observed, i.e. a relationship between the similarity of MOLMAPs and the similarity of EC numbers. At the same time, MOLMAPs were able to discriminate between EC sub-subclasses. EC numbers could be assigned at the class, subclass, and sub-subclass levels with accuracies up to 92%, 80%, and 70% for independent test sets. The correspondence between chemical similarity of metabolic reactions and their MOLMAP descriptors was applied to the identification of a number of reactions mapped into the same neuron but belonging to different EC classes, which demonstrated the ability of the MOLMAP/SOM approach to verify the internal consistency of classifications in databases of metabolic reactions. RFs were also used to assign the four levels of the EC hierarchy from the reaction equation. EC numbers were correctly assigned in 95%, 90%, 85% and 86% of the cases (for independent test sets) at the class, subclass, sub-subclass and full EC number level, respectively. Experiments for the classification of reactions from the main reactants and products were performed with RFs - EC numbers were assigned at the class, subclass and sub-subclass level with accuracies of 78%, 74% and 63%, respectively. In the course of the experiments with metabolic reactions we suggested that the MOLMAP / SOM concept could be extended to the representation of other levels of metabolic information such as metabolic pathways.
Following the MOLMAP idea, the pattern of neurons activated by the reactions of a metabolic pathway is a representation of the reactions involved in that pathway - a descriptor of the metabolic pathway. This reasoning enabled the comparison of different pathways, the automatic classification of pathways, and a classification of organisms based on their biochemical machinery. The three levels of classification (from bonds to metabolic pathways) made it possible to map and perceive chemical similarities between metabolic pathways even for pathways of different types of metabolism and pathways that do not share similarities in terms of EC numbers. Mapping of PES by neural networks (NNs). In a first series of experiments, ensembles of Feed-Forward NNs (EnsFFNNs) and Associative Neural Networks (ASNNs) were trained to reproduce PES represented by the Lennard-Jones (LJ) analytical potential function. The accuracy of the method was assessed by comparing the results of molecular dynamics simulations (thermal, structural, and dynamic properties) obtained from the NN-PES and from the LJ function. The results indicated that for LJ-type potentials, NNs can be trained to generate accurate PES to be used in molecular simulations. EnsFFNNs and ASNNs gave better results than single FFNNs. A remarkable ability of the NN models to interpolate between distant curves and accurately reproduce potentials to be used in molecular simulations is shown. The purpose of the first study was to systematically analyse the accuracy of different NNs. Our main motivation, however, is reflected in the next study: the mapping of multidimensional PES by NNs to simulate, by Molecular Dynamics or Monte Carlo, the adsorption and self-assembly of solvated organic molecules on noble-metal electrodes. Indeed, for such complex and heterogeneous systems the development of suitable analytical functions that fit quantum mechanical interaction energies is a non-trivial or even impossible task.
The data consisted of energy values, from Density Functional Theory (DFT) calculations, at different distances, for several molecular orientations and three electrode adsorption sites. The results indicate that NNs require a data set large enough to cover well the diversity of possible interaction sites, distances, and orientations. NNs trained with such data sets can perform equally well or even better than analytical functions. Therefore, they can be used in molecular simulations, particularly for the ethanol/Au (111) interface which is the case studied in the present Thesis. Once properly trained, the networks are able to produce, as output, any required number of energy points for accurate interpolations.
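The Lennard-Jones analytical potential named above, used as the reference surface in the first series of experiments, can be written directly. The sketch below (in reduced units, with illustrative parameter values) generates the kind of reference energy points a network could be trained against:

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).

    In reduced units (epsilon = sigma = 1) the well minimum of depth
    -epsilon sits at r = 2**(1/6) * sigma.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# reference energy points along the pair distance, e.g. as training targets
grid = [0.95 + 0.05 * i for i in range(40)]
energies = [lennard_jones(r) for r in grid]
```

Training on points like these and testing the interpolation against the closed-form potential is what makes the LJ surface a convenient benchmark before moving to DFT-derived energies.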
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa for the degree of Master in Environmental Engineering.
Abstract:
This paper presents a layered Smart Grid architecture that enhances security and reliability, with the ability to act in order to maintain and correct infrastructure components without affecting the client service. The architecture presented is based on a core of well-designed software engineering, standing upon standards developed over the years. The layered Smart Grid offers a base tool to ease the implementation of new standards and energy policies. A test methodology for implementing ZigBee technology in the Smart Grid is presented, together with field tests using ZigBee technology to control the new Smart Grid architecture approach. (C) 2014 Elsevier Ltd. All rights reserved.
Abstract:
Micro-abrasion wear tests with a ball-cratering configuration are widely used. Sources of variability have already been studied by different authors, and testing conditions are parameterized by the BS EN 1071-6:2007 standard, which specifies silicon carbide as the abrasive. However, the use of other abrasives is possible and allowed. In this work, ball-cratering wear tests were performed using four different abrasive particles of three dissimilar materials: diamond, alumina and silicon carbide. Tests were carried out under the same conditions on a steel plate provided with a TiB2 hard coating. For each abrasive, five different test durations were used, allowing the initial wear phenomena to be understood. The composition and shape of the abrasive particles were investigated by SEM and EDS. Scar areas were observed by optical and electron microscopy in order to understand the wear effects caused by each of them. Scar geometry and grooves were analysed and compared. The wear coefficient was calculated for each situation. It was observed that diamond particles produce well-defined, circular wear scars. Different silicon carbide particles presented dissimilar results as a consequence of distinct particle shape and size distribution.
Abstract:
Purpose: To determine whether using different combinations of kVp and mAs with additional filtration can reduce the effective dose to a paediatric phantom whilst maintaining diagnostic image quality. Methods: 27 images of a paediatric AP pelvis phantom were acquired with different kVp, mAs and additional copper filtration. Images were displayed on quality-controlled monitors under dimmed lighting. Ten diagnostic radiographers (5 students and 5 experienced radiographers) had eye tests to assess visual acuity before rating the images. Each image was rated for visual image quality against a reference image using two-alternative forced-choice software with a 5-point Likert scale. Physical measures (SNR and CNR) were also taken to assess image quality. Results: Of the 27 images rated, 13 were of acceptable image quality and had a dose lower than the image with standard acquisition parameters. Two were produced without filtration, 6 with 0.1 mm and 5 with 0.2 mm copper filtration. Statistical analysis found that inter-rater and intra-rater reliability was high. Discussion: It is possible to obtain an image of acceptable image quality with a dose that is lower than published guidelines. There are some areas of the study that could be improved. These include using a wider range of kVp and mAs to give an exact set of parameters to use. Conclusion: Additional filtration has been identified as a major tool for reducing effective dose whilst maintaining acceptable image quality in a 5-year-old phantom.
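The physical measures mentioned (SNR and CNR) are typically computed from pixel statistics of regions of interest. A minimal sketch follows, using one common pair of definitions (SNR = mean/SD of a region; CNR = |mean difference|/SD of background); definitions vary in the literature, and the pixel values below are hypothetical:

```python
import statistics

def snr(roi):
    """Signal-to-noise ratio of a region of interest: mean / standard deviation."""
    return statistics.mean(roi) / statistics.pstdev(roi)

def cnr(roi, background):
    """Contrast-to-noise ratio between an ROI and a background region."""
    return abs(statistics.mean(roi) - statistics.mean(background)) / statistics.pstdev(background)

# hypothetical pixel values from two regions of a phantom image
roi_pixels = [100, 102, 98, 101, 99]
background_pixels = [50, 52, 48, 51, 49]
```

Tracking such measures alongside observer ratings lets a dose reduction be accepted only when both physical and visual image quality remain adequate.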
Abstract:
In professional terms, job satisfaction is undoubtedly a much debated and current topic. In this sense, this study aimed to analyse the job satisfaction of nurses specialized in rehabilitation nursing, and also to determine whether the workplace or the provision of specialty care influences the job satisfaction of this group of nurses. Job satisfaction was assessed by applying the "Escala de Avaliação da Satisfação no Trabalho dos Enfermeiros" (Nurses' Job Satisfaction Assessment Scale), built and validated by Frederico & Loureiro (2009), to 306 nursing professionals specialized in rehabilitation nursing. Cronbach's alpha was 0.85, reflecting a good level of internal consistency. Six dimensions of satisfaction were analysed: satisfaction with management; satisfaction with benefits and rewards; satisfaction with the nature of the work; satisfaction with communication; satisfaction with the team; and satisfaction with work requirements. A cross-sectional analytical study was carried out. The statistical analysis of the data used factor analysis, Spearman's correlation coefficient, parametric Student's t-tests for independent samples, and one-way ANOVA. The results show that nurses specialized in rehabilitation nursing are slightly dissatisfied. The factors of dissatisfaction are related to benefits and rewards, work requirements and communication. The nature of the work and the relationship with the team are the factors from which they obtain the greatest satisfaction. Regarding management, the result approaches satisfaction. Concerning the assessment of the overall degree of job satisfaction, it was concluded that nurses working in Primary Health Care are more professionally dissatisfied than those working at hospital level.
Nurses providing general care are more dissatisfied than those providing specialty care. Age and remuneration for the position held are also determining factors. The youngest nurses are the most dissatisfied, and nurses who are not remunerated for the position they hold show greater job dissatisfaction.
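The internal-consistency measure reported above (Cronbach's alpha of 0.85) can be computed directly from item scores. A minimal sketch, with hypothetical data rather than the study's responses:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a list of items, each a list of respondent scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    """
    k = len(items)
    n = len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(statistics.pvariance(item) for item in items)
    return k / (k - 1) * (1.0 - item_var / statistics.pvariance(totals))

# hypothetical scale: three perfectly correlated items give alpha = 1.0
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

Values around 0.8-0.9, as in this study, are conventionally read as good internal consistency of a scale.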
Abstract:
Seventy-three children (6-15 years) and 75 adults (18-47 years) with active schistosomiasis mansoni were treated with oltipraz. All cases had at least 100 eggs per gram of feces as determined by the Kato-Katz technique. Children and adults were divided into two groups receiving, respectively, 25 or 30 mg/kg as a single oral dose. Clinical examination, laboratory tests (haemogram, urinalysis, hepatic and kidney function tests, glycemia, cholesterol, triglycerides, HDL and LDL lipoproteins) and ECG were performed before, 3 or 7 days after, and 1 month after treatment. Parasitological control, with 3 daily coprological examinations, was done in the 1st, 3rd and 6th month after drug administration. Giddiness, somnolence, headache, nausea, vomiting and abdominal distress were the most frequent side effects. Pain in the fingertips, which needs further investigation, also occurred. No significant alteration in the complementary tests was observed, whereas eosinophilia 1 month after treatment was detected, probably indicating worm death. The cure rate in children was 81.8% and 74.2% with 25 and 30 mg/kg respectively, and in adults 75.0% and 81.2% of the patients. No statistically significant difference was observed in cure rate or side effects between the different dosages employed, either in adults or in children. In all groups the percentage of egg reduction in the feces of non-cured patients was higher than 96.0%. Further investigation with this new compound is necessary to establish the real value of oltipraz in schistosomiasis chemotherapy.
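A cure-rate comparison between two dosages like the one reported above is commonly tested with a chi-squared test on a 2x2 table. A hedged sketch, with hypothetical counts rather than the trial's data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-squared statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]], e.g. cured/not-cured counts per dosage."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# hypothetical counts: rows = dosage (25 or 30 mg/kg), cols = cured / not cured
chi2 = chi_square_2x2(27, 6, 26, 9)
# with 1 degree of freedom, chi2 below 3.84 means no significant
# difference between the dosages at the 5% level
```

With small expected counts, Fisher's exact test or a continuity correction would be preferred over the plain Pearson statistic.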
Abstract:
Nanocrystalline diamond (NCD) coatings offer an excellent alternative for tribological applications, preserving most of the intrinsic mechanical properties of polycrystalline CVD diamond and adding to them an extreme surface smoothness. Silicon nitride (Si3N4) ceramics are reported to guarantee high adhesion levels to CVD microcrystalline diamond coatings, but NCD adhesion to Si3N4 is not yet well established. Micro-abrasion tests are appropriate for evaluating the abrasive wear resistance of a given surface, but they also provide information on the thin film/substrate interfacial resistance, i.e., film adhesion. In this study, a comparison is made between the behaviour of NCD films deposited by the hot-filament chemical vapour deposition (HFCVD) and microwave plasma assisted chemical vapour deposition (MPCVD) techniques. Silicon nitride (Si3N4) ceramic discs were selected as substrates. The NCD depositions by HFCVD and MPCVD were carried out using H2–CH4 and H2–CH4–N2 gas mixtures, respectively. An adequate set of growth parameters was chosen for each CVD technique, resulting in NCD films with a final thickness of 5 µm. A micro-abrasion tribometer was used, with 3 µm diamond grit as the abrasive slurry element. Experiments were carried out at a constant rotational speed (80 r.p.m.), varying the applied load in the range of 0.25–0.75 N. The wear rate for MPCVD NCD (3.7±0.8 × 10−5 mm3 N−1 m−1) is compatible with those reported for microcrystalline CVD diamond. The HFCVD films displayed poorer adhesion to the Si3N4 ceramic substrates than the MPCVD ones. However, the HFCVD films show better wear resistance as a result of their higher crystallinity according to the UV Raman data, despite evidencing premature adhesion failure.
Abstract:
Ball-rotating micro-abrasion tribometers are commonly used to carry out wear tests on thin hard coatings. In these tests, different kinds of abrasives are used, such as alumina (Al2O3), silicon carbide (SiC) or diamond. For each kind of abrasive, several particle sizes can be used. Some studies have been carried out to evaluate the influence of the abrasive particle shape on the micro-abrasion process. Nevertheless, the particle size has not been well correlated with the amount of material removed and the wear mechanisms. In this work, a slurry of SiC abrasive in distilled water was used, with three different particle sizes. The initial surface topography was assessed by atomic force microscopy (AFM). Coating hardness measurements were performed with a micro-hardness tester. In order to evaluate the wear behaviour, a TiAlSiN thin hard film was used. The micro-abrasion tests were carried out with several different durations. The abrasive effect of the SiC particles was observed by scanning electron microscopy (SEM) both in the film (hard material) and in the substrate (soft material), after coating perforation. Wear grooves and the material removal rate were compared and discussed.
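In ball-cratering tests like these, the wear volume is usually derived from the measured crater diameter b and the ball radius R via the standard spherical-cap approximation V ≈ πb⁴/(64R), valid for b ≪ R, and the wear coefficient as k = V/(S·N) for sliding distance S and normal load N. A sketch with hypothetical measurements (not values from this work):

```python
import math

def crater_wear_volume(b, ball_radius):
    """Wear volume of a spherical crater, V = pi * b**4 / (64 * R),
    valid when the crater diameter b is much smaller than the ball radius."""
    return math.pi * b ** 4 / (64.0 * ball_radius)

def wear_coefficient(volume, sliding_distance, load):
    """Archard-type wear coefficient k = V / (S * N)."""
    return volume / (sliding_distance * load)

# hypothetical test: 2 mm crater, 25 mm diameter ball, 100 m sliding, 0.5 N load
v = crater_wear_volume(b=2e-3, ball_radius=12.5e-3)        # m^3
k = wear_coefficient(v, sliding_distance=100.0, load=0.5)  # m^3 N^-1 m^-1
```

Because V scales with the fourth power of the crater diameter, small measurement errors in b dominate the uncertainty of the wear coefficient.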
Abstract:
A new interaction paradigm, known as the Natural User Interface (NUI), is currently emerging for the recognition of gestures produced with the user's body. The Microsoft Kinect interaction device was initially conceived for video game control on the Xbox360 console. This device has proved to be a viable option for exploring other areas, such as supporting the teaching and learning process for primary school children. The prototype developed aims to define an interaction mode based on drawing letters in the air, and to interpret the drawn symbols using the Kernel Discriminant Analysis (KDA), Support Vector Machines (SVM) and $N pattern recognizers. The development of this project was based on the study of the different NUI devices available on the market, of NUI development libraries for this type of device, and of pattern recognition algorithms. The first two elements provided a more concrete view of which available hardware and software were suited to the intended goal. Pattern recognition is a very broad and complex subject, so it was necessary to select a limited set of such algorithms and run the corresponding tests in order to determine which one best suited the intended goal. Applying the same conditions to the three pattern recognition algorithms made it possible to evaluate their capabilities and to identify $N as the one with the highest recognition accuracy. Finally, the viability of the developed prototype was investigated by testing it with subjects from two age groups, in order to determine the adaptation and learning capacity of the two groups. In this study, the older age group showed a better initial performance with the interaction mode.
However, the younger group progressively revealed a capacity to adapt to this interaction mode, gradually improving its results.
Abstract:
The Casa da Música Foundation, responsible for the management of the Casa da Música do Porto building, needs to obtain statistical data on the number of the building's visitors. This information is a valuable tool for the elaboration of periodical reports concerning the success of this cultural institution. For this reason it was necessary to develop a system capable of returning the number of visitors for a requested period of time. This represents a complex task due to the building's unique architectural design, characterized by very large doors and halls, and the sudden large number of people that pass through them in the moments preceding and following the different activities occurring in the building. To arrive at a technical solution for this challenge, several image processing methods for people detection with still cameras were first studied. The next step was the development of a real-time algorithm, using the OpenCV libraries and computer vision concepts, to count individuals with the desired accuracy. This algorithm incorporates the scientific and technical knowledge acquired in the study of the previous methods. The themes developed in this thesis comprise the fields of background maintenance, shadow and highlight detection, and blob detection and tracking. A graphical interface was also built to help in the development, testing and tuning of the proposed system, as a complement to the work. Furthermore, tests of the system were performed to validate the proposed techniques against a set of limited circumstances. The results obtained revealed that the algorithm successfully counts the number of people in complex environments with reliable accuracy.
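The blob-detection stage of such a pipeline can be illustrated in a deliberately simplified, library-free form (the actual thesis work uses OpenCV): after background subtraction produces a binary foreground mask, connected components of foreground pixels are counted as candidate people. The mask below is a toy example:

```python
from collections import deque

def count_blobs(mask):
    """Count 4-connected foreground components in a binary mask.

    A crude stand-in for the blob-detection stage: each connected
    component of foreground pixels is taken as one candidate person.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                blobs += 1
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:  # breadth-first flood fill of one component
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return blobs

# toy foreground mask with two separate "people"
mask = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 1, 1],
]
n_people = count_blobs(mask)
```

A real system would additionally filter blobs by size, handle shadows and highlights, and track blobs across frames to count crossings rather than instantaneous occupancy, as the thesis describes.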