19 results for G-Hilbert Scheme
at Instituto Politécnico do Porto, Portugal
Resumo:
In this paper we present a modified regularization scheme for Mathematical Programs with Complementarity Constraints. In the regularized formulations the complementarity condition is replaced by a constraint involving a positive parameter that can be decreased to zero. In our approach both the complementarity condition and the nonnegativity constraints are relaxed. An iterative algorithm was implemented in the MATLAB language and tested on a set of AMPL problems from the MacMPEC database.
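The relaxation idea described above can be illustrated on a toy problem. The sketch below is not the paper's algorithm: it solves min (x-1)² + (y-1)² subject to the complementarity condition 0 ≤ x ⊥ y ≥ 0, relaxing both the product constraint and the bounds with a parameter t that is driven toward zero, and uses a crude grid search in place of a real NLP solver.

```python
# Illustrative sketch (not the paper's algorithm): a toy MPCC
#   min (x-1)^2 + (y-1)^2   s.t.   0 <= x  ⊥  y >= 0
# solved via the relaxation described in the abstract: both the
# complementarity condition and the nonnegativity bounds are relaxed,
#   x*y <= t,   x >= -t,   y >= -t,
# and t is decreased toward zero. A grid search stands in for a solver.

def solve_relaxed(t, n=400):
    best = None
    for i in range(n + 1):
        x = -t + (2.0 + t) * i / n          # x in [-t, 2]
        for j in range(n + 1):
            y = -t + (2.0 + t) * j / n      # y in [-t, 2]
            if x * y <= t:                  # relaxed complementarity
                f = (x - 1.0) ** 2 + (y - 1.0) ** 2
                if best is None or f < best[0]:
                    best = (f, x, y)
    return best

for t in (1.0, 0.1, 0.01):
    f, x, y = solve_relaxed(t)
    print(f"t={t:5.2f}  x={x:.3f}  y={y:.3f}  f={f:.3f}")
```

As t shrinks, the minimizer approaches a point with x·y = 0, recovering a solution of the original complementarity-constrained problem.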
Resumo:
Master's degree in Electrical and Computer Engineering
Resumo:
Objective Deregulation of FAS/FASL system may lead to immune escape and influence bacillus Calmette-Guérin (BCG) immunotherapy outcome, which is currently the gold standard adjuvant treatment for high-risk non–muscle invasive bladder tumors. Among other events, functional promoter polymorphisms of FAS and FASL genes may alter their transcriptional activity. Therefore, we aim to evaluate the role of FAS and FASL polymorphisms in the context of BCG therapy, envisaging the validation of these biomarkers to predict response. Patients and methods DNA extracted from peripheral blood from 125 patients with bladder cancer treated with BCG therapy was analyzed by Polymerase Chain Reaction—Restriction Fragment Length Polymorphism for FAS-670 A/G and FASL-844 T/C polymorphisms. FASL mRNA expression was analyzed by real-time Polymerase Chain Reaction. Results Carriers of FASL-844 CC genotype present a decreased recurrence-free survival after BCG treatment when compared with FASL-844 T allele carriers (mean 71.5 vs. 97.8 months, P = 0.030) and have an increased risk of BCG treatment failure (Hazard Ratio = 1.922; 95% Confidence Interval: [1.064–3.471]; P = 0.030). Multivariate analysis shows that FASL-844 T/C and therapeutics scheme are independent predictive markers of recurrence after treatment. The evaluation of FASL gene mRNA levels demonstrated that patients carrying FASL-844 CC genotype had higher FASL expression in bladder tumors (P = 0.0027). Higher FASL levels were also associated with an increased risk of recurrence after BCG treatment (Hazard Ratio = 2.833; 95% Confidence Interval: [1.012–7.929]; P = 0.047). FAS-670 A/G polymorphism analysis did not reveal any association with BCG therapy outcome. Conclusions Our results suggest that analysis of FASL-844 T/C, but not FAS-670 A/G polymorphisms, may be used as a predictive marker of response to BCG immunotherapy.
Resumo:
Modern real-time systems increasingly generate heavy and dynamic computational workloads, making uniprocessor implementations progressively less viable. Indeed, the shift from single-processor to multiprocessor systems can be seen, both in general-purpose and embedded computing, as an energy-efficient way to improve application performance. At the same time, the proliferation of multiprocessor platforms has turned parallel programming into a topic of great interest, with dynamic parallelism rapidly gaining popularity as a programming model. The idea behind this model is to encourage programmers to expose all opportunities for parallelism by simply annotating potentially parallel regions within the application. The system treats these annotations merely as hints, which may be ignored and replaced by equivalent sequential constructs by the language itself. How the computation is actually subdivided and mapped onto the various processors is therefore the responsibility of the compiler and the underlying runtime system. Removing this burden from the programmer considerably reduces programming complexity, which usually translates into higher productivity. However, unless the underlying scheduling mechanism is simple and fast enough to keep overall overhead low, the benefits of generating such fine-grained parallelism remain merely hypothetical. From this scheduling perspective, algorithms employing a work-stealing policy are increasingly popular, with proven efficiency in terms of time, space, and communication requirements.
However, these algorithms take no account of timing constraints, nor of any other form of task prioritization, which prevents them from being applied directly to real-time systems. Moreover, they are traditionally implemented in the language runtime, creating a two-level scheduling system in which the predictability essential to a real-time system cannot be guaranteed. This thesis describes how the work-stealing approach can be redesigned to meet real-time requirements while preserving the fundamental principles that have served it so well. Very briefly, the single conventional work-stealing queue (deque) is replaced by a queue of deques, sorted in increasing order of task priority. The well-known dynamic scheduling algorithm G-EDF is then applied on top, the rules of both are blended, and thus our proposal is born: the RTWS scheduling algorithm. Taking advantage of the modularity offered by the Linux scheduler, RTWS is added as a new scheduling class in order to assess in practice whether the proposed algorithm is viable, that is, whether it delivers the desired efficiency and schedulability. Modifying the Linux kernel is a demanding task, given the complexity of its internal functions and the strong interdependencies between its subsystems. Nevertheless, one of the goals of this thesis was to make sure that RTWS is more than an interesting concept. A significant part of this document is therefore devoted to discussing the implementation of RTWS and exposing problematic situations, many of them not considered in theory, such as the mismatch between different synchronization mechanisms.
The experimental results show that, compared with other practical work on dynamic scheduling of tasks with timing constraints, RTWS significantly reduces scheduling overhead through efficient and scalable control of migrations and context switches (at least up to 8 CPUs), while achieving good dynamic load balancing of the system at low cost. However, the evaluation revealed a flaw in the RTWS implementation: it gives up stealing work too easily, which causes idle periods on the CPU in question when overall system utilization is low. Although the work focused on keeping scheduling cost low and achieving good data locality, system schedulability was never neglected. In fact, the proposed scheduling algorithm proved quite robust, missing no deadline in any of the experiments. We can therefore state that some priority inversion, caused by the BAS stealing sub-policy, does not compromise the schedulability goals, and even helps to reduce contention on the data structures. RTWS also supports a deterministic stealing sub-policy, PAS, although the experimental evaluation did not give a clear picture of the relative impact of the two. Overall, we conclude that RTWS is a promising solution for the efficient scheduling of parallel tasks with timing constraints.
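The central data structure of the abstract above, a queue of deques ordered by task priority, can be sketched in a few lines. This is a single-threaded illustration of the idea only, not the RTWS kernel implementation: deadlines play the role of G-EDF priorities (earlier deadline = higher priority), the owner pushes and pops at one end, and a thief steals from the opposite end of the most urgent non-empty deque.

```python
# Single-threaded sketch of a priority-ordered queue of work-stealing
# deques (illustrative only; the real RTWS lives inside the Linux
# scheduler). Earlier absolute deadline = higher priority, G-EDF style.

from collections import deque
import bisect

class PriorityWorkStealingQueue:
    def __init__(self):
        self.deadlines = []      # sorted list of absolute deadlines
        self.deques = {}         # deadline -> deque of tasks

    def push(self, deadline, task):
        if deadline not in self.deques:
            bisect.insort(self.deadlines, deadline)
            self.deques[deadline] = deque()
        self.deques[deadline].append(task)        # owner end

    def pop(self):
        """Owner takes the newest task of the most urgent deadline."""
        return self._take(lambda dq: dq.pop())

    def steal(self):
        """Thief takes the oldest task of the most urgent deadline."""
        return self._take(lambda dq: dq.popleft())

    def _take(self, take):
        while self.deadlines:
            d = self.deadlines[0]
            dq = self.deques[d]
            if dq:
                return d, take(dq)
            del self.deques[d]                    # drop exhausted deque
            self.deadlines.pop(0)
        return None

q = PriorityWorkStealingQueue()
q.push(20, "b1"); q.push(10, "a1"); q.push(10, "a2")
print(q.steal())   # (10, 'a1') - earliest deadline, oldest task
print(q.pop())     # (10, 'a2') - earliest deadline, newest task
print(q.pop())     # (20, 'b1')
```

Owners popping LIFO preserves the locality benefits of classical work stealing, while thieves stealing FIFO from the most urgent deque respects the deadline ordering.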
Resumo:
Critical Infrastructures have become more vulnerable to attacks from adversaries as SCADA systems are connected to the Internet. The open standards for SCADA communications make it very easy for attackers to gain in-depth knowledge about the workings and operations of SCADA networks. A number of Internet SCADA security issues have been raised that compromise the authenticity, confidentiality, integrity and non-repudiation of information transferred between SCADA components. This paper presents an integration of the Cross Crypto Scheme Cipher to secure communications for SCADA components. The proposed scheme integrates the best features of both symmetric and asymmetric encryption techniques. It also utilizes the MD5 hashing algorithm to ensure the integrity of the information being transmitted.
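The hybrid pattern the abstract describes can be sketched as follows. Everything in this toy example, textbook RSA with tiny primes for the asymmetric part and an MD5-derived XOR keystream for the symmetric part, is an illustrative stand-in and emphatically NOT secure, nor the actual Cross Crypto Scheme Cipher: a symmetric cipher protects the payload, an asymmetric cipher wraps the session key, and an MD5 digest guards integrity.

```python
# Toy, educational sketch of a hybrid (symmetric + asymmetric + MD5)
# scheme. NOT secure and NOT the paper's cipher: tiny-prime textbook
# RSA wraps the session key; an MD5-based XOR keystream encrypts data.

import hashlib

# --- toy "asymmetric" part: textbook RSA with tiny primes ------------
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                        # private exponent

def wrap_key(k):   return pow(k, e, n)     # encrypt session key
def unwrap_key(c): return pow(c, d, n)     # decrypt session key

# --- toy "symmetric" part: MD5-derived XOR keystream -----------------
def keystream(key, length):
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.md5(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def sym_encrypt(key, msg):
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

sym_decrypt = sym_encrypt                  # XOR is its own inverse

# --- putting it together ---------------------------------------------
session_key = 1234                         # must be < n
msg = b"OPEN BREAKER 7"
key_bytes = session_key.to_bytes(2, "big")

packet = (wrap_key(session_key),           # asymmetric-wrapped key
          sym_encrypt(key_bytes, msg),     # symmetric ciphertext
          hashlib.md5(msg).hexdigest())    # integrity digest

k = unwrap_key(packet[0]).to_bytes(2, "big")
plain = sym_decrypt(k, packet[1])
assert hashlib.md5(plain).hexdigest() == packet[2]   # integrity check
print(plain)                               # b'OPEN BREAKER 7'
```

The receiver unwraps the session key with its private exponent, decrypts the payload, and recomputes the MD5 digest to detect tampering, mirroring the three roles the abstract assigns to the asymmetric, symmetric, and hashing components.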
Resumo:
Secure group communication is a paradigm that primarily designates one-to-many communication security. Previous work on secure group communication has predominantly considered the whole network as a single group managed by a central, powerful node capable of supporting heavy communication, computation and storage costs. However, a typical Wireless Sensor Network (WSN) may contain several groups, each maintained by a sensor node (the group controller) with constrained resources. Moreover, previously proposed schemes require multicast routing support to deliver the rekeying messages, and multicast routing can incur heavy storage and communication overheads in a wireless sensor network. Motivated by these two major limitations, we propose a new secure group communication scheme with a lightweight rekeying process. Our proposal overcomes both limitations and can be applied to a homogeneous WSN with resource-constrained nodes, with no need for multicast routing support. The analysis and simulation results clearly demonstrate that our scheme outperforms the previous well-known solutions.
Resumo:
The interlaminar fracture toughness in pure mode II (GIIc) of a Carbon-Fibre Reinforced Plastic (CFRP) composite is characterized experimentally and numerically in this work, using the End-Notched Flexure (ENF) fracture characterization test. The value of GIIc was extracted by a new data reduction scheme that avoids crack length measurement, named the Compliance-Based Beam Method (CBBM). This method eliminates the crack measurement errors, which can be non-negligible and affect the accuracy of the fracture energy calculations. Moreover, it accounts for Fracture Process Zone (FPZ) effects. A numerical study using the Finite Element Method (FEM) and a triangular cohesive damage model, implemented within interface finite elements and based on the indirect use of Fracture Mechanics, was performed to evaluate the suitability of the CBBM for obtaining GIIc. This was done by comparing the input values of GIIc in the numerical models with those resulting from the application of the CBBM to the numerical load-displacement (P-δ) curve. In this numerical study, the Compliance Calibration Method (CCM) was also used to extract GIIc for comparison purposes.
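The compliance-based idea behind the CBBM can be sketched with classical beam theory (the shear-deformation and FPZ corrections of the actual method are omitted here, and the specimen values below are illustrative assumptions). For an ENF specimen, beam theory gives the compliance C = (3a³ + 2L³)/(8·E·B·h³), which can be inverted to obtain an equivalent crack length from the measured compliance C = δ/P, so no crack measurement is needed; Irwin-Kies then yields GII = 9P²a²/(16·E·B²·h³).

```python
# Simplified beam-theory sketch of the compliance-based approach
# (the real CBBM adds shear and FPZ corrections). Values illustrative.
# E: flexural modulus, B: width, h: half-thickness, L: half-span, all SI.

def equivalent_crack_length(C, E, B, h, L):
    # invert C = (3 a^3 + 2 L^3) / (8 E B h^3) for a
    return ((8.0 * E * B * h**3 * C - 2.0 * L**3) / 3.0) ** (1.0 / 3.0)

def g_II(P, a_e, E, B, h):
    # Irwin-Kies with the beam-theory compliance derivative
    return 9.0 * P**2 * a_e**2 / (16.0 * E * B**2 * h**3)

# Illustrative CFRP-like ENF specimen
E, B, h, L = 120e9, 0.025, 0.0015, 0.05
a_true = 0.030                            # "real" crack length [m]
C = (3 * a_true**3 + 2 * L**3) / (8 * E * B * h**3)

a_e = equivalent_crack_length(C, E, B, h, L)
print(f"a_e = {a_e*1e3:.2f} mm")          # recovers 30.00 mm
print(f"G_II at P = 500 N: {g_II(500.0, a_e, E, B, h):.0f} J/m^2")
```

Because the equivalent crack length is computed from the measured compliance at every point of the P-δ curve, the scheme sidesteps visual crack monitoring entirely, which is the key practical advantage the abstract claims.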
Resumo:
Dynamically reconfigurable SRAM-based field-programmable gate arrays (FPGAs) enable the implementation of reconfigurable computing systems where several applications may be run simultaneously, sharing the available resources according to their own immediate functional requirements. To exclude malfunctioning due to faulty elements, the reliability of all FPGA resources must be guaranteed. Since resource allocation takes place asynchronously, an online structural test scheme is the only way of ensuring reliable system operation. On the other hand, this test scheme should not disturb the operation of the circuit, otherwise availability would be compromised. System performance is also influenced by the efficiency of the management strategies that must be able to dynamically allocate enough resources when requested by each application. As those resources are allocated and later released, many small free resource blocks are created, which are left unused due to performance and routing restrictions. To avoid wasting logic resources, the FPGA logic space must be defragmented regularly. This paper presents a non-intrusive active replication procedure that supports the proposed test methodology and the implementation of defragmentation strategies, assuring both the availability of resources and their perfect working condition, without disturbing system operation.
Resumo:
The mode III interlaminar fracture of carbon/epoxy laminates was evaluated with the edge crack torsion (ECT) test. Three-dimensional finite element analyses were performed in order to select two specimen geometries and an experimental data reduction scheme. Test results showed considerable non-linearity before the maximum load point and a significant R-curve effect. These features prevented an accurate definition of the initiation point. Nevertheless, analyses of the non-linearity zones showed two likely initiation points, corresponding to GIIIc values between 850 and 1100 J/m2 for both specimen geometries. Although both values are plausible, the range is too broad, showing the limitations of the ECT test and the need for further research.
Resumo:
This paper presents a new approach (MM-GAV-FBI) to the project scheduling problem with resource constraints and multiple execution modes per activity, known in the literature as the MRCPSP. Each project has a set of activities with defined technological precedences and a set of limited resources, and each activity may have more than one execution mode. Projects are scheduled using a Schedule Generation Scheme (SGS) integrated with a metaheuristic based on the genetic algorithm paradigm. Activity priorities are obtained from a genetic algorithm, using a chromosome representation based on random keys. The SGS generates non-delay schedules. After a solution is obtained, a local improvement step is applied. The goal of the approach is to find the best schedule, that is, the one with the shortest possible duration that satisfies the activity precedences and the resource constraints. The proposed approach is tested on a set of benchmark problems taken from the literature, and the computational results are compared with other approaches. The computational results confirm the good performance of the approach, not only in terms of solution quality but also in terms of running time.
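The interplay between random-key priorities and a Schedule Generation Scheme can be sketched on a tiny instance. This is a minimal serial SGS with a single mode and a single renewable resource, for brevity; the actual MM-GAV-FBI handles multiple modes and resources, generates non-delay schedules, and adds a local improvement step.

```python
# Minimal serial Schedule Generation Scheme driven by random-key
# priorities (single mode, single renewable resource - an
# illustration of the idea, not the MM-GAV-FBI algorithm itself).

def serial_sgs(durations, demands, preds, keys, capacity):
    """Schedule activities in priority order (higher key = earlier)."""
    n = len(durations)
    start, usage = {}, {}                 # usage[t] = resource units busy
    while len(start) < n:
        # eligible: unscheduled activities with all predecessors scheduled
        eligible = [a for a in range(n)
                    if a not in start and all(p in start for p in preds[a])]
        a = max(eligible, key=lambda i: keys[i])   # random-key priority rule
        est = max([start[p] + durations[p] for p in preds[a]], default=0)
        t = est
        while any(usage.get(u, 0) + demands[a] > capacity
                  for u in range(t, t + durations[a])):
            t += 1                        # slide right until resource-feasible
        for u in range(t, t + durations[a]):
            usage[u] = usage.get(u, 0) + demands[a]
        start[a] = t
    makespan = max(start[a] + durations[a] for a in range(n))
    return start, makespan

# toy instance: 4 activities, resource capacity 2
durations = [3, 2, 2, 1]
demands   = [1, 1, 2, 1]
preds     = [[], [0], [0], [1, 2]]
keys      = [0.9, 0.4, 0.7, 0.1]          # e.g. decoded from a GA chromosome
start, makespan = serial_sgs(durations, demands, preds, keys, 2)
print(start, makespan)
```

In the full approach, a genetic algorithm evolves the random-key vector so that the priorities fed to the SGS produce ever shorter makespans.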
Resumo:
Gamma radiation measurements were carried out in the vicinity of a coal-fired power plant located on the southwest coastline of Portugal. Two different gamma detectors were used to assess the environmental radiation within a circular area of 20 km centred on the coal plant: a scintillometer (SPP2 NF, Saphymo) and a high-purity germanium detector (HPGe, Canberra). Fifty urban and suburban measurement locations were established within the defined area and two measurement campaigns were carried out. The total gamma radiation ranged from 20.83 to 98.33 counts per second (c.p.s.) over both campaigns, and outdoor dose rates ranged from 77.65 to 366.51 nGy/h. Natural emitting nuclides from the U-238 and Th-232 decay series were identified, as well as the natural emitting nuclide K-40. The radionuclide concentrations from the uranium and thorium series determined by gamma spectrometry ranged from 0.93 to 73.68 Bq/kg, while the K-40 concentration ranged from 84.14 to 904.38 Bq/kg. The results were used primarily to characterize the variability in the measured environmental radiation and to determine the coal plant's influence on the measured radiation levels. The highest values were measured at two locations near the power plant and at locations between 6 and 20 km from the stacks, mainly in the prevailing wind direction. The results showed an increase, or at least an influence, from the coal-fired plant operations, both qualitatively and quantitatively.
Resumo:
OBJECTIVE: To evaluate the predictive value of genetic polymorphisms in the context of BCG immunotherapy outcome and to create a predictive profile that may allow discriminating the risk of recurrence. MATERIAL AND METHODS: In a dataset of 204 patients treated with BCG, we evaluated 42 genetic polymorphisms in 38 genes involved in the BCG mechanism of action, using Sequenom MassARRAY technology. Stepwise multivariate Cox regression was used for data mining. RESULTS: In agreement with previous studies, we observed that gender, age, tumor multiplicity and treatment scheme were associated with BCG failure. Using stepwise multivariate Cox regression analysis, we propose the first predictive profile of BCG immunotherapy outcome and a risk score based on polymorphisms in immune system molecules (SNPs in TNFA-1031T/C (rs1799964), IL2RA rs2104286 T/C, IL17A-197G/A (rs2275913), IL17RA-809A/G (rs4819554), IL18R1 rs3771171 T/C, ICAM1 K469E (rs5498), FASL-844T/C (rs763110) and TRAILR1-397T/G (rs79037040)) in association with clinicopathological variables. This risk score allows the categorization of patients into risk groups: patients in the Low Risk group have a 90% chance of successful treatment, whereas patients in the High Risk group have a 75% chance of recurrence after BCG treatment. CONCLUSION: We have established the first predictive score of BCG immunotherapy outcome combining clinicopathological characteristics and a panel of genetic polymorphisms. Further studies using an independent cohort are warranted. Moreover, the inclusion of other biomarkers may help to improve the proposed model.
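The general mechanics of such a score, summing Cox-regression-derived weights over a patient's genotypes and clinical features and thresholding into risk groups, can be sketched as follows. The weights, feature names, and cut-off below are invented for illustration; they are NOT the coefficients estimated in this study.

```python
# Illustrative sketch of a polymorphism-plus-clinical additive risk
# score. All weights and the cut-off are hypothetical placeholders,
# not the study's estimates.

RISK_WEIGHTS = {                 # hypothetical log-hazard-ratio weights
    "FASL-844 CC": 0.65,
    "IL17A-197 AA": 0.40,
    "multifocal tumor": 0.55,
    "maintenance BCG": -0.50,    # hypothetical protective factor
}

def risk_score(features):
    """Sum the weights of the risk features a patient carries."""
    return sum(RISK_WEIGHTS[f] for f in features if f in RISK_WEIGHTS)

def risk_group(score, cutoff=0.6):
    return "High Risk" if score >= cutoff else "Low Risk"

patient = ["FASL-844 CC", "multifocal tumor", "maintenance BCG"]
s = risk_score(patient)
print(round(s, 2), risk_group(s))   # 0.7 High Risk
```

In practice the weights would be the fitted multivariate Cox coefficients and the cut-off would be chosen to separate the recurrence curves of the resulting groups.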
Resumo:
The increasing and intensive integration of distributed energy resources into distribution systems requires adequate methodologies to ensure a secure operation according to the smart grid paradigm. In this context, SCADA (Supervisory Control and Data Acquisition) systems are an essential infrastructure. This paper presents a conceptual design of a communication and resources management scheme based on an intelligent SCADA with a decentralized, flexible, and intelligent approach, adaptive to the context (context awareness). The methodology is used to support the energy resource management considering all the involved costs, power flows, and electricity prices leading to the network reconfiguration. The methodology also addresses the definition of the information access permissions of each player to each resource. The paper includes a 33-bus network used in a case study that considers an intensive use of distributed energy resources in five distinct implemented operation contexts.
Resumo:
The European Union Emissions Trading Scheme (EU ETS) is a cornerstone of the European Union's policy to combat climate change and its key tool for reducing industrial greenhouse gas emissions cost-effectively. The purpose of the present work is to evaluate the influence of the CO2 opportunity cost on the Spanish wholesale electricity price. Our sample covers all of Phase II of the EU ETS and the first year of Phase III, from January 2008 to December 2013. A vector error correction model (VECM) is applied to estimate not only long-run equilibrium relations, but also short-run interactions between the electricity price and the fuel (natural gas and coal) and carbon prices. The four commodity prices are modeled as jointly endogenous variables, with air temperature and renewable energy as exogenous variables. We found a long-run relationship (cointegration) between the electricity price, the carbon price, and fuel prices. By estimating the dynamic pass-through of the carbon price into the electricity price for different periods of our sample, it is possible to observe the weakening of the link between carbon and electricity prices resulting from the collapse of CO2 prices, thereby compromising the efficacy of the system in reaching the proposed environmental goals. This conclusion is in line with the need to shape new policies within the framework of the EU ETS that prevent excessively low carbon prices over extended periods of time.
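The error-correction idea behind the VECM can be sketched compactly on simulated data: two cointegrated "price" series share a long-run relation, and deviations from it are corrected over time. The sketch below is a single-equation, Engle-Granger-style ECM for illustration only; the paper's model is a full multivariate VECM with exogenous temperature and renewables terms.

```python
# Compact sketch of the error-correction mechanism underlying a VECM
# (single-equation version on simulated data; illustrative only).

import numpy as np

rng = np.random.default_rng(0)
T = 2000
carbon = np.cumsum(rng.normal(size=T))         # random-walk carbon price
elec = 2.0 * carbon + rng.normal(size=T)       # cointegrated: elec ~ 2*carbon

# Step 1: long-run relation by OLS -> residual = disequilibrium
beta = np.linalg.lstsq(carbon[:, None], elec, rcond=None)[0][0]
ect = elec - beta * carbon                     # error-correction term

# Step 2: short-run dynamics
#   d_elec_t = alpha * ect_{t-1} + gamma * d_carbon_t + noise
d_elec = np.diff(elec)
d_carbon = np.diff(carbon)
X = np.column_stack([ect[:-1], d_carbon])
alpha, gamma = np.linalg.lstsq(X, d_elec, rcond=None)[0]

print(f"beta  (long-run pass-through)  ~ {beta:.2f}")   # near 2.0
print(f"alpha (speed of adjustment)    ~ {alpha:.2f}")  # negative: pulls back
print(f"gamma (short-run pass-through) ~ {gamma:.2f}")  # near 2.0
```

A negative, significant alpha is what signals cointegration in practice: deviations from the long-run carbon-electricity relation are progressively corrected, and a weakening pass-through shows up as a drift in the estimated coefficients across subsamples.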
Resumo:
In recent years, vehicular cloud computing (VCC) has emerged as a new technology used in a wide range of multimedia-based healthcare applications. In VCC, vehicles act as intelligent machines that collect and transfer healthcare data to local or global sites for storage and computation, since vehicles themselves have comparatively limited storage and computation power for handling multimedia files. However, due to dynamic changes in topology and the lack of centralized monitoring points, this information can be altered or misused. Such security breaches can have disastrous consequences, such as loss of life or financial fraud. To address these issues, a learning-automata-assisted distributive intrusion detection system based on clustering is designed. Although the proposed scheme can be applied in a number of domains, we take a multimedia-based healthcare application to illustrate it. In the proposed scheme, learning automata (LA) are assumed to be stationed on the vehicles; they take clustering decisions intelligently and select one member of the group as cluster-head. The cluster-heads then assist in the efficient storage and dissemination of information through a cloud-based infrastructure. To secure the proposed scheme from malicious activities, a standard cryptographic technique is used, in which each automaton learns from the environment and takes adaptive decisions to identify any malicious activity in the network. The stochastic environment in which an automaton performs its actions issues rewards and penalties, and the automaton updates its action probability vector after receiving the reinforcement signal. The proposed scheme was evaluated using extensive simulations in ns-2 with SUMO. The results obtained indicate that the proposed scheme improves the detection rate of malicious nodes by 10% compared with existing schemes.
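The reward-penalty update a learning automaton applies to its action probability vector can be sketched with the classical linear reward-penalty (L_RP) rule; the learning rates below are illustrative, and the abstract does not specify which update scheme the authors actually use.

```python
# Sketch of the classical linear reward-penalty (L_RP) update for a
# learning automaton's action probability vector (illustrative
# parameter values; not necessarily the update used in the paper).

def lrp_update(p, chosen, rewarded, a=0.1, b=0.1):
    """Return the updated action probability vector.

    p        -- current probabilities over r actions (sums to 1)
    chosen   -- index of the action the automaton just performed
    rewarded -- True if the environment returned a reward signal
    a, b     -- reward and penalty learning rates
    """
    r = len(p)
    q = list(p)
    for j in range(r):
        if rewarded:                      # reinforce the chosen action
            q[j] = p[j] + a * (1 - p[j]) if j == chosen else (1 - a) * p[j]
        else:                             # penalize it, spread mass to others
            q[j] = (1 - b) * p[j] if j == chosen else b / (r - 1) + (1 - b) * p[j]
    return q

p = [0.25, 0.25, 0.25, 0.25]
p = lrp_update(p, chosen=2, rewarded=True)     # action 2 reinforced
print([round(x, 3) for x in p])                # [0.225, 0.225, 0.325, 0.225]
```

Both branches keep the vector normalized, so repeated reinforcement signals steadily concentrate probability on the actions (here, clustering decisions) that the environment rewards.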