132 results for Concurrency control algorithms
Abstract:
This paper studies an Optimal Intelligent Supervisory Control System (OISCS) model for the design of control systems that can work in the presence of cyber-physical elements with privacy protection. Such an architecture could provide new ways of integrating control into systems where large amounts of fast computation are not easily available, whether due to limitations of power, physical size or choice of computing elements.
Abstract:
This article describes a new approach to the Intelligent Training of Operators in Power Systems Control Centres, considering the new reality of Renewable Sources, Distributed Generation, and Electricity Markets, under the emerging paradigms of Cyber-Physical Systems and Ambient Intelligence. We propose Intelligent Tutoring Systems as the approach for the intelligent training of operators in these new circumstances.
Abstract:
A supervisory control and data acquisition (SCADA) system is an integrated platform incorporating several components; it has been applied in power systems and many other engineering fields to monitor, operate and control a wide range of processes. In future electrical networks, SCADA systems will be essential for the intelligent management of resources such as distributed generation and demand response, implemented in the smart grid context. This paper presents a SCADA system for a typical residential house, implemented in MOVICON™ 11 software. The main objective is to manage residential consumption, reducing or curtailing loads to keep power consumption at or below a specified setpoint, imposed by the customer and the generation availability.
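The setpoint-based load management described in the abstract above can be sketched as a simple curtailment loop. This is a hedged illustration only, not the MOVICON™ implementation: the load names, powers, priorities and the shedding rule are all hypothetical.

```python
# Hypothetical sketch of setpoint-based load curtailment (not the paper's
# MOVICON(TM) application). Loads are shed in ascending priority order until
# total consumption falls to or below the setpoint.

def curtail(loads, setpoint_w):
    """loads: list of (name, power_w, priority); lower priority = shed first.
    Returns (names of loads kept on, resulting total power in W)."""
    total = sum(power for _, power, _ in loads)
    kept = sorted(loads, key=lambda load: load[2])  # lowest priority at front
    while total > setpoint_w and kept:
        _, power, _ = kept.pop(0)  # shed the least important remaining load
        total -= power
    return [name for name, _, _ in kept], total

# Illustrative loads: shedding the 2000 W heater brings us under the setpoint.
kept, total = curtail(
    [("heater", 2000, 1), ("washer", 1500, 2), ("fridge", 150, 9)],
    setpoint_w=2000,
)
# kept -> ["washer", "fridge"], total -> 1650
```

A real SCADA application would of course read powers from meters and actuate relays; the loop above only captures the decision rule.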
Abstract:
Control Centre operators are essential to assure a good performance of Power Systems. Operators’ actions are critical in dealing with incidents, especially severe faults like blackouts. In this paper we present an Intelligent Tutoring approach for training Portuguese Control Centre operators in incident analysis and diagnosis, and in service restoration of Power Systems, offering context awareness and easy integration into the working environment.
Abstract:
Introduction: Image resizing is a standard feature of Nuclear Medicine digital imaging. Upsampling is provided by manufacturers to fit the acquired images to the display screen, and it is applied whenever there is a need to increase - or decrease - the total number of pixels. This paper aims to compare the “hqnx” and the “nxSaI” magnification algorithms with two interpolation algorithms - “nearest neighbor” and “bicubic interpolation” - in image upsampling operations. Material and Methods: Three distinct Nuclear Medicine images were enlarged 2 and 4 times with the different digital image resizing algorithms (nearest neighbor, bicubic interpolation, nxSaI and hqnx). To evaluate the pixel changes between the different output images, 3D whole-image plot profiles and surface plots were used in addition to visual inspection of the 4x upsampled images. Results: In the 2x enlarged images the visual differences were not very noteworthy; even so, bicubic interpolation clearly presented the best results. In the 4x enlarged images the differences were significant, with the bicubic interpolated images again presenting the best results. Hqnx-resized images presented better quality than the nxSaI and nearest-neighbor interpolated images; however, their intense “halo effect” greatly affects the definition and boundaries of the image contents. Conclusion: The hqnx and the nxSaI algorithms were designed for images with clear edges, so their use on Nuclear Medicine images is clearly inadequate. Of the algorithms studied, bicubic interpolation seems the most suitable, and its ever wider application seems to confirm this, establishing it as an efficient algorithm across image types.
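Of the algorithms compared above, nearest-neighbor upsampling is simple enough to sketch: each source pixel is replicated into an n×n block, which is what produces the blocky look the study contrasts with bicubic interpolation. A minimal pure-Python sketch, assuming the image is represented as a plain list of rows:

```python
# Nearest-neighbor upsampling: each source pixel is replicated into an
# n x n block. Pure-Python sketch for illustration; real pipelines (and
# bicubic interpolation) would use a library such as scipy.ndimage or Pillow.

def upsample_nn(image, factor):
    """image: 2D list of pixel values; factor: integer scale (2, 4, ...)."""
    out = []
    for row in image:
        # replicate each pixel horizontally, then the whole row vertically
        scaled_row = [px for px in row for _ in range(factor)]
        out.extend([scaled_row[:] for _ in range(factor)])
    return out

small = [[1, 2],
         [3, 4]]
big = upsample_nn(small, 2)
# big -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Bicubic interpolation instead fits a cubic polynomial through a 4×4 pixel neighborhood, which is why it yields the smoother gradients the abstract reports.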
Abstract:
Introduction: A major focus of the data mining process - especially in machine learning research - is to automatically learn to recognize complex patterns and to help take adequate decisions based strictly on the acquired data. Since imaging techniques like MPI - Myocardial Perfusion Imaging in Nuclear Cardiology - can occupy a large part of the daily workflow and generate gigabytes of data, computerized analysis of data could hold advantages over human analysis: shorter time, homogeneity and consistency, automatic recording of analysis results, relatively low cost, etc. Objectives: The aim of this study is to evaluate the efficacy of this methodology in the evaluation of MPI Stress studies and in the decision-making process concerning whether or not to continue the evaluation of each patient. The objective was to automatically classify each patient test into one of three groups: “Positive”, “Negative” and “Indeterminate”. “Positive” tests would proceed directly to the Rest part of the exam, “Negative” tests would be directly exempted from continuation, and only the “Indeterminate” group would require the clinician’s analysis, thus saving clinicians’ effort, increasing workflow fluidity at the technologists’ level and probably sparing patients’ time. Methods: The WEKA v3.6.2 open-source software was used for a comparative analysis of three WEKA algorithms (“OneR”, “J48” and “Naïve Bayes”) in a retrospective study on the “SPECT Heart Dataset”, available at the University of California - Irvine Machine Learning Repository, using the corresponding clinical results, signed by expert nuclear cardiologists, as reference. For evaluation purposes, criteria such as “Precision”, “Incorrectly Classified Instances” and “Receiver Operating Characteristic (ROC) Areas” were considered.
Results: The interpretation of the data suggests that the Naïve Bayes algorithm has the best performance among the three selected algorithms. Conclusions: It is believed - and apparently supported by the findings - that machine learning algorithms could significantly assist, at an intermediary level, in the analysis of scintigraphic data obtained in MPI, namely after Stress acquisition, thus potentially increasing the efficiency of the entire system and easing the roles of both Technologists and Nuclear Cardiologists. In the ongoing continuation of this study, it is planned to use more patient information and to significantly increase the population under study, in order to improve system accuracy.
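Of the three WEKA algorithms compared in the abstract above, OneR is simple enough to sketch in a few lines: it builds one candidate rule per attribute (predict the majority class for each attribute value) and keeps the attribute whose rule misclassifies the fewest training instances. A minimal illustration with made-up binary data - not the WEKA implementation, and not the SPECT dataset:

```python
# Minimal OneR sketch: one rule per attribute, keep the attribute whose
# majority-class rule makes the fewest training errors. Toy data only.
from collections import Counter

def one_r(rows, labels):
    """rows: list of attribute tuples; labels: class label per row.
    Returns (best_attribute_index, {attribute_value: predicted_class})."""
    best = None
    for attr in range(len(rows[0])):
        # count class frequencies for each value of this attribute
        by_value = {}
        for row, label in zip(rows, labels):
            by_value.setdefault(row[attr], Counter())[label] += 1
        rule = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
        errors = sum(rule[row[attr]] != label
                     for row, label in zip(rows, labels))
        if best is None or errors < best[0]:
            best = (errors, attr, rule)
    return best[1], best[2]

# Toy data: attribute 1 predicts the label perfectly, attribute 0 does not.
rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = ["neg", "pos", "neg", "pos"]
attr, rule = one_r(rows, labels)
# attr -> 1, rule -> {0: "neg", 1: "pos"}
```

J48 (a C4.5 decision tree) and Naïve Bayes build richer models, which is consistent with Naïve Bayes outperforming OneR in the study.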
Abstract:
Introduction: Paper and thin-layer chromatography methods are frequently used in classic Nuclear Medicine for the determination of radiochemical purity (RCP) of radiopharmaceutical preparations. An aliquot of the radiopharmaceutical to be tested is spotted at the origin of a chromatographic strip (stationary phase), which in turn is placed in a chromatographic chamber in order to separate and quantify the radiochemical species present in the preparation. There are several methods for RCP measurement, based on the use of equipment such as dose calibrators, well scintillation counters, radiochromatographic scanners and gamma cameras. The purpose of this study was to compare these quantification methods for the determination of RCP. Material and Methods: 99mTc-Tetrofosmin and 99mTc-HDP were the radiopharmaceuticals chosen as the basis for this study. For the determination of the RCP of 99mTc-Tetrofosmin we used ITLC-SG (2.5 x 10 cm) and 2-butanone (99mTc-tetrofosmin Rf = 0.55, 99mTcO4- Rf = 1.0, other labeled impurities 99mTc-RH Rf = 0.0). For the determination of the RCP of 99mTc-HDP, Whatman 31ET and acetone were used (99mTc-HDP Rf = 0.0, 99mTcO4- Rf = 1.0, other labeled impurities Rf = 0.0). After the development of the solvent front, the strips were allowed to dry and then imaged on the gamma camera (256x256 matrix; zoom 2; LEHR parallel-hole collimator; 5-minute image) and on the radiochromatogram scanner. The strips were then cut at Rf 0.8 in the case of 99mTc-tetrofosmin and at Rf 0.5 in the case of 99mTc-HDP. The resulting pieces were crushed into an assay tube (to minimize the effect of counting geometry) and counted in the dose calibrator and in the well scintillation counter (for 1 minute). The RCP was calculated using the formula: % 99mTc-Complex = [(99mTc-Complex) / (Total amount of 99mTc-labeled species)] x 100. Statistical analysis was done using the test of hypotheses for the difference between means in independent samples.
Results: The gamma camera based method demonstrated higher operator-dependency (especially concerning the drawing of the ROIs), and the measurements obtained using the dose calibrator are very sensitive to the amount of activity spotted on the chromatographic strip, so the use of a minimum activity of 3.7 MBq is essential to minimize quantification errors. The radiochromatographic scanner and the well scintillation counter showed concordant results and demonstrated the highest level of precision. Conclusions: The methods based on radiochromatographic scanners and well scintillation counters proved to be the most accurate and least operator-dependent.
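The RCP formula quoted in the abstract above is a straightforward ratio of counts. As a sketch (the count values below are illustrative, not from the study):

```python
# % 99mTc-Complex = [(99mTc-Complex) / (total 99mTc-labeled species)] x 100,
# as given in the abstract. Counts here are made-up illustrative numbers.

def rcp_percent(complex_counts, total_counts):
    """Radiochemical purity as a percentage of total labeled species."""
    return complex_counts / total_counts * 100

# e.g. 9500 counts in the complex strip segment out of 10000 counts total
purity = rcp_percent(9500, 10000)
# purity -> 95.0
```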
Abstract:
In Portugal, as in other countries, thousands of workers can be found suffering from diseases and other problems resulting from exposure to volatile organic compounds (VOCs). However, few studies have addressed this industrial activity. In this work, the occupational exposure to VOCs was assessed/estimated for workers performing the task of ‘furniture coating application’, better known as ‘furniture finishing’, in the finishing section of the wooden furniture sector located in the municipalities of Paços de Ferreira and Paredes, in northern Portugal. The sample consisted of 17 companies, and 34 tasks were evaluated, corresponding to the same number of workers, since they perform this task throughout their eight-hour working day. To assess the workers’ exposure, two occupational hygiene approaches were used: the pragmatic one, through the application of the Toolkit method, and the traditional one, using air sampling and laboratory analysis. The results obtained suggest that the Toolkit is a good tool for use by Small and Medium-sized Enterprises (SMEs) that work with powdered or liquid substances; it is an expedient method that does not entail large financial costs. It was also found that most workers are at risk of exposure to VOCs, making control measures necessary.
Abstract:
Most triple-stores are open source, developed in Java, and provide both standard and proprietary access interfaces. The great majority of these systems lack native access-control mechanisms, which hinders or prevents their adoption in environments where the security of facts is important (e.g. corporate environments). In addition, the model for controlling access to triples, and in particular to triples described by ontologies, is neither standardized nor even stabilized, with several description models and access-permission evaluation algorithms in use. The work developed in this dissertation proposes an access-control model and interface that allows and facilitates its adoption by different existing triple-stores, as well as the integration of triple-stores with other systems already present in the organization. Moreover, the access-control platform does not impose any particular permission-evaluation model or algorithm; on the contrary, it allows distinct models and algorithms to be adopted according to needs or preferences. Finally, the applicability and validity of the proposed model and interface are demonstrated through their implementation and adoption in the existing SwiftOWLIM triple-store, which lacks a native access-control mechanism.
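The pluggable permission evaluation the dissertation argues for can be sketched as a query filter that delegates each decision to an interchangeable evaluator function. This is a hedged illustration only: the names, the triple representation and the evaluator signature are hypothetical, not the interface actually proposed.

```python
# Hedged sketch of pluggable access control over triples: the store delegates
# each permission check to an evaluator that can be swapped out, so no single
# evaluation model or algorithm is imposed. All names are illustrative.

def allow_all(user, triple):
    """Trivial evaluator: every user sees every triple."""
    return True

def deny_salaries(user, triple):
    """Example policy: only 'admin' may see hasSalary facts."""
    _, predicate, _ = triple
    return user == "admin" or predicate != "hasSalary"

def query(triples, user, evaluator):
    """Return only the triples the evaluator grants to this user."""
    return [t for t in triples if evaluator(user, t)]

triples = [
    ("alice", "worksFor", "acme"),
    ("alice", "hasSalary", "50000"),
]
visible = query(triples, "bob", deny_salaries)
# visible -> [("alice", "worksFor", "acme")]
```

Swapping `deny_salaries` for another evaluator changes the policy without touching the store, which is the decoupling the abstract describes.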
Abstract:
Over the last two decades, research and development of legged locomotion robots has grown steadily. Legged systems present major advantages over ‘traditional’ vehicles because they allow locomotion in terrain inaccessible to vehicles with wheels and tracks. However, the robustness of legged robots, and especially their energy consumption, among other aspects, still lag behind mechanisms that use wheels and tracks. Therefore, in the present state of development, there are several aspects that need to be improved and optimized. With these ideas in mind, this paper reviews the literature on the different methods adopted for the optimization of the structure and locomotion gaits of walking robots. Among the distinct strategies often used for these tasks are approaches such as the mimicking of biological animals, the use of evolutionary schemes to find optimal parameters and structures, the adoption of sound mechanical design rules, and the optimization of power-based indexes.
Abstract:
Electroanalytical methods based on square-wave adsorptive-stripping voltammetry (SWAdSV) and flow-injection analysis with square-wave adsorptive-stripping voltammetric detection (FIA-SWAdSV) were developed for the determination of fluoxetine (FXT). The methods were based on the reduction of FXT at a mercury drop electrode at -1.2 V versus Ag/AgCl, in a phosphate buffer of pH 12.0, and on the possibility of accumulating the compound at the electrode surface. The SWAdSV method was successfully applied in the quantification of FXT in pharmaceutical products, human serum samples, and in drug dissolution studies. Because the presence of dissolved oxygen did not interfere significantly with the analysis, it was possible to quantify FXT in several pharmaceutical products using FIA-SWAdSV. This method enables analysis of up to 120 samples per hour at reduced costs.
Abstract:
Celiac disease (CD) is an autoimmune enteropathy characterized by an inappropriate T-cell-mediated immune response to the ingestion of certain dietary cereal proteins in genetically susceptible individuals. This disorder has environmental, genetic, and immunological components. CD has a prevalence of up to 1% in populations of European ancestry, yet a high percentage of cases remain underdiagnosed. Diagnosis and treatment should be made early, since untreated disease causes growth retardation and atypical symptoms, like infertility or neurological disorders. The diagnostic criteria for CD, which require endoscopy with small bowel biopsy, have been changing over the last few decades, especially due to the advent of serological tests with higher sensitivity and specificity. Serological markers can be very useful to rule out clinically suspicious cases and also to help monitor patients after adherence to a gluten-free diet. Since the current treatment consists of a life-long gluten-free diet, which leads to significant clinical and histological improvement, the standardization of an assay to unequivocally assess gluten in gluten-free foodstuffs is of major importance.
Abstract:
The purpose of the present work is to determine the antioxidant capacity (AC) of 27 commercial beers. The AC indicates the degree of protection of a given organism against oxidative damage provoked by reactive oxygen and nitrogen species. Assays were carried out by the following methods: (i) total radical-trapping antioxidant parameter (TRAP); (ii) trolox equivalent antioxidant capacity (TEAC); (iii) 2,2-diphenyl-1-picrylhydrazyl radical scavenging capacity (DPPH); (iv) ferric-ion reducing antioxidant parameter (FRAP); (v) cupric reducing antioxidant capacity (CUPRAC); (vi) oxygen radical absorbance capacity (ORAC). Ascorbic acid (AA), gallic acid (GA) and trolox (TR) were used as standards. All beers showed antioxidant power, but a wide range of ACs was observed, and the effect of several factors on these differences was studied. Statistical differences were found between the ACs of beers of different colours. The ORAC method always provided higher experimental ACs, with statistically significant differences from the other assays.
Abstract:
The state of the art of voltammetric and amperometric methods used in the study and determination of pesticides in crops, food, phytopharmaceutical products, and environmental samples is reviewed. The main structural groups of pesticides, i.e., triazines, organophosphates, organochlorines, nitro compounds, carbamates, thiocarbamates, sulfonylureas, and bipyridinium compounds, are considered, along with some of their degradation products. The advantages, drawbacks, and trends in the development of voltammetric and amperometric methods for the study and determination of pesticides in these samples are discussed.