41 results for two-loop diagram

at Instituto Politécnico do Porto, Portugal


Relevance:

30.00%

Publisher:

Abstract:

This research and development work is founded on the concept of fuzzy logic control. Using the tools of the Matlab software, a controller based on fuzzy inference was developed that can control any kind of real physical system, regardless of its characteristics. Fuzzy control is a very particular type of control, since it allows numerical data to be used together with linguistic variables grounded in heuristic knowledge of the systems to be controlled. In this way it becomes possible to quantify, for example, whether a glass is “half full” or “half empty”, whether a person is “tall” or “short”, or whether it is “cold” or “very cold”. The PID controller is, without any doubt, the most widely used controller in systems control. Owing to its simple construction, its low implementation and maintenance costs and the results it achieves, it is the first option when a control loop is to be implemented in a given system. It is characterised by three tuning parameters, namely the proportional, integral and derivative components, which together allow effective tuning of any type of system. In order to automate the controller tuning process, and taking the best of fuzzy control and of PID control, the two controllers were combined; together, as will be seen, they produced results that meet the stated objectives. With the aid of Matlab's Simulink, the block diagram of the control system was developed, in which the fuzzy controller supervises the response of the PID controller, correcting it over the simulation time. The resulting controller is called the FuzzyPID controller. During the practical development of the work, the unit-step response of several systems was simulated. The systems studied are mostly real physical systems, representing mechanical, thermal, pneumatic, electrical and other systems, which can easily be described by first-, second- and higher-order transfer functions, with and without delay.
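
The original controller was built in Matlab/Simulink, which this listing does not reproduce. As a rough, hedged illustration of the supervisory arrangement described above (a fuzzy rule layer adjusting a PID loop), here is a minimal Python sketch; the plant, gains and membership breakpoints are all hypothetical:

```python
# Minimal sketch of a fuzzy-supervised PID loop on a first-order plant
# G(s) = 1/(tau*s + 1), discretised with an Euler step. All numbers are
# hypothetical; the original work used Matlab/Simulink, not Python.

def fuzzy_gain(error: float) -> float:
    """Crude fuzzy-style supervisor: scale the proportional action
    depending on whether |error| is 'small', 'medium' or 'large'."""
    e = abs(error)
    if e < 0.05:       # 'small'  -> relax the controller
        return 0.8
    if e < 0.5:        # 'medium' -> nominal gain
        return 1.0
    return 1.5         # 'large'  -> push harder

def simulate(kp=2.0, ki=1.0, kd=0.1, tau=1.0, dt=0.01, t_end=10.0):
    y, integ, prev_err = 0.0, 0.0, 1.0   # unit-step setpoint r = 1
    response = []
    for _ in range(int(t_end / dt)):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = fuzzy_gain(err) * kp * err + ki * integ + kd * deriv
        y += dt * (u - y) / tau          # Euler step of the plant
        prev_err = err
        response.append(y)
    return response

if __name__ == "__main__":
    print(f"output after 10 s: {simulate()[-1]:.3f}")  # ~1.000 (settled)
```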

Relevance:

20.00%

Publisher:

Abstract:

Int’l J. of Information and Communication Technology Education, 3(2), 1-14, April-June 2007

Relevance:

20.00%

Publisher:

Abstract:

The idiomatic expression “In Rome be a Roman” can be applied to leadership training and development as well. Leaders who act as role models inspire future leaders in their behaviour, attitudes and ways of thinking. Based on two examples of current leaders in the fields of Politics and Public Administration, I support the idea that exposure to role models during their training was decisive for their career paths and for their current activities as prominent figures in their profession. Issues such as how students should be prepared for community or national leadership, as well as for cross-cultural engagement, are raised here. The hypothesis of transculturalism and cross-cultural commitment as a factor of leadership is presented. Based on the current literature on leadership as well as on the presented case studies, I expect to open a debate on strategies for improving the cross-cultural awareness of leaders in training.

Relevance:

20.00%

Publisher:

Abstract:

Medium voltage (MV) load diagrams were defined through a knowledge discovery in databases process. Clustering techniques were used to support electric power retail market agents in obtaining specific knowledge of their customers' consumption habits. Each customer class resulting from the clustering operation is represented by its load diagram. The Two-step clustering algorithm and the WEACS approach, based on evidence accumulation clustering (EAC), were applied to electricity consumption data from a utility's customer database in order to form the customer classes and to find a set of representative consumption patterns. The WEACS approach is a clustering ensemble combination approach that uses subsampling and weights the partitions differently in the co-association matrix. As a complementary step, all the final data partitions produced by the different variations of the method are combined, and the Ward-linkage algorithm is used to obtain the final data partition. Experimental results showed that the WEACS approach led to better accuracy than many other clustering approaches; in particular, it separates the customer population better than the Two-step clustering algorithm.
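
The abstract does not detail the WEACS weighting scheme itself; the following is a minimal, hedged sketch of plain evidence accumulation clustering (EAC) with subsampling and a Ward-linkage consensus, the ingredients named above, using random toy data in place of real load diagrams:

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
X = rng.random((200, 96))   # toy stand-in: 200 "customers" x 96 readings

n, n_runs = X.shape[0], 30
coassoc = np.zeros((n, n))  # how often each pair lands in the same cluster
counts = np.zeros((n, n))   # how often each pair was subsampled together

for _ in range(n_runs):
    idx = rng.choice(n, size=int(0.8 * n), replace=False)     # subsampling
    labels = KMeans(n_clusters=int(rng.integers(5, 15)),
                    n_init=5).fit_predict(X[idx])
    same = labels[:, None] == labels[None, :]
    coassoc[np.ix_(idx, idx)] += same
    counts[np.ix_(idx, idx)] += 1

coassoc = np.divide(coassoc, counts,
                    out=np.zeros_like(coassoc), where=counts > 0)
np.fill_diagonal(coassoc, 1.0)
dist = 1.0 - coassoc                       # co-association -> dissimilarity
Z = linkage(squareform(dist, checks=False), method="ward")
classes = fcluster(Z, t=8, criterion="maxclust")  # final customer classes
print(np.bincount(classes))
```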

Relevance:

20.00%

Publisher:

Abstract:

Objectives: The purpose of this article is to identify differences between surveys using paper and online questionnaires. The author has extensive experience with survey-based research, including the limitations of postal and online questionnaires. Methods: Paper and online questionnaires were used in the physician studies carried out in 1995 (doctors graduated 1982-1991), 2000 (doctors graduated 1982-1996), 2005 (doctors graduated 1982-2001) and 2011 (doctors graduated 1977-2006), and in a study of 457 family doctors in 2000. The response rates were 64%, 68%, 64%, 49% and 73%, respectively. Results: The physician studies showed differences between the methods, connected with the use of a paper-based versus an online questionnaire and with the response rate. The online survey gave a lower response rate than the postal survey. The major advantages of the online survey were the short response time, the very low financial resource needs, and the fact that data were loaded directly into the data analysis software, saving the time and resources associated with the data entry process. Conclusions: This article helps researchers plan a study design and choose the right data collection method.

Relevance:

20.00%

Publisher:

Abstract:

The post-surgical period is often critical for infection acquisition. The combination of patient injury and environmental exposure through breached skin adds risk to pre-existing conditions such as drug use or depressed immunity. Several factors, such as the length of the hospital stay after surgery, the underlying disease, age, the condition of the immune system, hygiene policies, careless prophylactic drug administration and the physical conditions of the healthcare centre, may contribute to the acquisition of a nosocomial infection. A purulent wound can become complicated whenever antimicrobial therapy is compromised. In this pilot study, we analysed the Enterobacteriaceae strains, the most significant gram-negative rods that may occur in post-surgical skin and soft tissue infections (SSTI), presenting reduced β-lactam susceptibility, including those producing extended-spectrum β-lactamases (ESBL). There is little information in our country regarding the relationship between β-lactam susceptibility, ESBL and the development of resistant strains of microorganisms in SSTI. Our main results indicate that Escherichia coli and Klebsiella spp. are among the most frequent enterobacteria (46% and 30%, respectively), with ESBL production in 72% of the Enterobacteriaceae isolates from SSTI. Moreover, coinfection occurred extensively, mainly with Pseudomonas aeruginosa and methicillin-resistant Staphylococcus aureus (18% and 13%, respectively). These results suggest future research to explore whether and how these associations are involved in the development of antibiotic resistance.

Relevance:

20.00%

Publisher:

Abstract:

Introduction: Image resizing is a standard feature of Nuclear Medicine digital imaging. Upsampling is performed by manufacturers to better fit the acquired images to the display screen, and it is applied whenever the total number of pixels needs to be increased (or decreased). This paper aims to compare the “hqnx” and “nxSaI” magnification algorithms with two interpolation algorithms (“nearest neighbor” and “bicubic interpolation”) in image upsampling operations. Material and Methods: Three distinct Nuclear Medicine images were enlarged 2 and 4 times with the different digital image resizing algorithms (nearest neighbor, bicubic interpolation, nxSaI and hqnx). To evaluate the pixel changes between the different output images, 3D whole-image plot profiles and surface plots were used in addition to visual inspection of the 4x upsampled images. Results: In the 2x enlarged images the visual differences were not particularly noteworthy, although bicubic interpolation clearly presented the best results. In the 4x enlarged images the differences were significant, with the bicubic-interpolated images again presenting the best results. Hqnx-resized images presented better quality than the 4xSaI and nearest-neighbor interpolated images; however, their intense “halo effect” greatly degrades the definition and boundaries of the image contents. Conclusion: The hqnx and nxSaI algorithms were designed for images with clear edges, so their use on Nuclear Medicine images is clearly inadequate. Of the algorithms studied, bicubic interpolation seems the most suitable, and its ever-wider range of applications appears to confirm this, establishing it as an efficient algorithm across image types.
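
Of the four algorithms, the two interpolation baselines are standard library fare; a hedged sketch using SciPy (whose spline-based zoom approximates, but is not bit-identical to, classic nearest-neighbor and bicubic resampling; the image here is a random stand-in for a Nuclear Medicine matrix):

```python
import numpy as np
from scipy.ndimage import zoom

img = np.random.default_rng(1).random((64, 64))  # toy 64x64 "NM image"

nn_4x = zoom(img, 4, order=0)       # order 0: nearest neighbor
bicubic_4x = zoom(img, 4, order=3)  # order 3: cubic spline ("bicubic")

print(img.shape, nn_4x.shape, bicubic_4x.shape)  # (64, 64) -> (256, 256)
```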

Relevance:

20.00%

Publisher:

Abstract:

Introduction: Although relative uptake values are not the main objective of a 99mTc-DMSA scan, they are important quantitative information. In most dynamic renal scintigraphies, attenuation correction is essential for a reliable quantification result. In DMSA scans, however, the absence of significant background and the lower attenuation in pediatric patients mean that attenuation correction techniques are often not applied in practice. The geometric mean is the most common method, but it requires the acquisition of an additional anterior projection, which many NM departments do not acquire. This method and the attenuation factors proposed by Tonnesen were correlated with the absence of any attenuation correction procedure. Material and Methods: Images from 20 individuals (aged 3 ± 2 years) were used and the two attenuation correction methods applied. The mean acquisition time (post DMSA administration) was 3.5 ± 0.8 h. Results: The absence of attenuation correction showed a good correlation with both attenuation methods (r = 0.73 ± 0.11), and the mean difference in uptake values between the methods was 4 ± 3. The correlation was higher at lower ages. The two attenuation correction methods correlated more strongly with each other than with the “no attenuation correction” method (r = 0.82 ± 0.8), and the mean difference in uptake values was 2 ± 2. Conclusion: The decision not to apply any attenuation correction method can be justified by the minor differences verified in the relative kidney uptake values. Nevertheless, when an accurate relative kidney uptake value is required, an attenuation correction method should be used. The attenuation correction factors proposed by Tonnesen are easy to implement and are therefore a practical alternative, namely when the anterior projection needed for the geometric mean method is not acquired.
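
The geometric-mean method referred to above combines the anterior and posterior views per kidney; a minimal sketch (all counts hypothetical and assumed background-subtracted):

```python
import math

# Hypothetical background-subtracted kidney counts from the two views.
anterior = {"left": 1200.0, "right": 1100.0}
posterior = {"left": 4800.0, "right": 4500.0}

# Geometric mean per kidney: sqrt(anterior x posterior).
gm = {k: math.sqrt(anterior[k] * posterior[k]) for k in anterior}

total = sum(gm.values())
for kidney, counts in gm.items():
    print(f"{kidney}: relative uptake = {100.0 * counts / total:.1f}%")
```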

Relevance:

20.00%

Publisher:

Abstract:

The work carried out in this thesis was a hydrodynamic study of a bubble column with external liquid recirculation (CREL), which extended the existing knowledge of this type of column. Viscous Newtonian liquids were used for the study, namely aqueous glycerol solutions with viscosities between 0.007 and 0.522 Pa·s. For the range of injected air flow rates, 1.5×10^-5 to 1.35×10^-4 m^3/s, the air rose along the bubble column in the form of slug (Taylor) bubbles. The experiments showed that the flow regime of the liquid between slug bubbles ranged from laminar to transitional (1.9
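
The abstract is cut off at the numerical range above; the laminar-to-transition classification it refers to is conventionally based on a Reynolds number of the form Re = ρUD/μ. A toy computation, with every value hypothetical except the viscosity, which is the upper end of the range studied:

```python
def reynolds(rho: float, velocity: float, diameter: float, mu: float) -> float:
    """Pipe-flow Reynolds number Re = rho * U * D / mu."""
    return rho * velocity * diameter / mu

# Hypothetical: glycerol solution (density ~1250 kg/m^3) at the most
# viscous end studied (mu = 0.522 Pa.s), 32 mm column, 0.2 m/s liquid.
print(f"Re = {reynolds(1250.0, 0.2, 0.032, 0.522):.1f}")  # ~15: laminar
```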

Relevance:

20.00%

Publisher:

Abstract:

Two of the main concerns in open-pit mining today are cost reduction and maximum equipment profitability. The two aspects are interlinked, since making the equipment profitable directly reduces the costs of all operations inherent to open-pit mining and, consequently, the final production costs. Following this line of thought, we sought to understand the operation and the productivity of the equipment as a function of the different fracturing states of the rock mass. This study was carried out in a quarry in northern Portugal and complements other studies already performed, with the aim of defining blasting-pattern characteristics that ensure the greatest profitability of the operation. The study is based on determining the productivity of the wheel loader by measuring cycle times, i.e. the time the loader took to load, move and unload the material blasted in the various rounds. The productivity of the demolition hammer in fragmenting large blocks that would not fit directly into the primary crusher was also calculated. The crusher itself was studied as well, namely its jamming and crushing times, which we attempted to correlate with the various blasts, and its energy consumption was estimated using the Bond equation. Finally, a comparative study of energy consumption across the various phases of the open-pit operation was carried out. Geological-geotechnical surveys of discontinuity surfaces were performed using the scanline sampling technique on the rock mass surfaces in order to understand its fragmentation type and orientation.
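
The Bond equation mentioned for the primary crusher's energy estimate has a standard form, W = 10 Wi (1/√P80 − 1/√F80); a hedged sketch with hypothetical quarry values:

```python
import math

def bond_specific_energy(wi_kwh_t: float, f80_um: float, p80_um: float) -> float:
    """Bond's law: specific energy (kWh/t) to reduce a feed with 80%
    passing size f80_um (microns) to a product with 80% passing p80_um."""
    return 10.0 * wi_kwh_t * (1.0 / math.sqrt(p80_um) - 1.0 / math.sqrt(f80_um))

# Hypothetical values: work index ~15 kWh/t (hard granite),
# feed F80 = 400 mm, product P80 = 100 mm (both given in microns below).
print(f"{bond_specific_energy(15.0, 400_000, 100_000):.2f} kWh/t")  # ~0.24
```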

Relevance:

20.00%

Publisher:

Abstract:

The occurrence of OTA in fresh and packed wheat bread and in maize bread, and the degree of exposure through their consumption in two Portuguese populations, Porto and Coimbra, during the winter of 2007, were studied. One hundred and sixty-eight bread samples, 61 maize and 107 wheat, were analysed by liquid chromatography with fluorescence detection (LC-FD). The results showed that 84% of the samples were contaminated, with a maximum level of 3.85 ng/g (above the EU maximum limit of 3 ng/g). Fresh wheat bread presented higher levels than packed wheat bread. Moreover, the traditional maize bread, in either city, was consistently more contaminated than wheat bread: 0.25 vs 0.19 ng/g and 0.48 vs 0.34 ng/g for Porto and Coimbra, respectively. Avintes maize bread showed the highest mean contamination and the highest maximum levels. The higher estimated daily intake of OTA from both types of bread in the population of Coimbra compared to Porto reflects the higher average contamination of bread in that city.
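
The exposure calculation behind the last sentence is not spelled out in the abstract; the usual formulation is EDI = contamination × daily bread intake / body weight. A sketch with hypothetical consumption and body-weight figures and the Porto wheat-bread mean from the study:

```python
def estimated_daily_intake(ota_ng_g: float, bread_g_day: float, bw_kg: float) -> float:
    """Estimated daily intake of OTA, in ng per kg body weight per day."""
    return ota_ng_g * bread_g_day / bw_kg

# 0.19 ng/g (Porto wheat bread, from the study); 100 g/day and 70 kg
# are hypothetical consumption and body-weight assumptions.
print(f"{estimated_daily_intake(0.19, 100.0, 70.0):.2f} ng/kg bw/day")  # ~0.27
```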

Relevance:

20.00%

Publisher:

Abstract:

A preliminary version of this paper appeared in Proceedings of the 31st IEEE Real-Time Systems Symposium, 2010, pp. 239–248.

Relevance:

20.00%

Publisher:

Abstract:

Consider the problem of determining a task-to-processor assignment for a given collection of implicit-deadline sporadic tasks upon a multiprocessor platform with two distinct kinds of processors. We propose a polynomial-time approximation scheme (PTAS) for this problem. It offers the following guarantee: for a given task set and a given platform, if there exists a feasible task-to-processor assignment, then, given an input parameter ϵ, our PTAS succeeds, in polynomial time, in finding such a feasible task-to-processor assignment on a platform in which each processor is 1+3ϵ times faster. In our simulations, the PTAS outperforms the state-of-the-art PTAS [1] and, for the vast majority of task sets, requires a significantly smaller processor speedup than (its upper bound of) 1+3ϵ to successfully determine a feasible task-to-processor assignment.
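
The PTAS itself is not reproduced in the abstract. For intuition only, here is a naive first-fit baseline for the same two-type assignment problem; the utilizations are hypothetical and this is emphatically not the paper's algorithm:

```python
# First-fit assignment of implicit-deadline sporadic tasks to a two-type
# platform. Each task has a (possibly different) utilization on each
# processor type; under partitioned EDF a processor is feasible iff its
# total utilization is at most 1. Baseline heuristic only, not the PTAS.

def first_fit(tasks, processors):
    """tasks: list of (u_on_type1, u_on_type2); processors: list of types (1 or 2)."""
    load = [0.0] * len(processors)
    assignment = []
    for u1, u2 in tasks:
        for p, ptype in enumerate(processors):
            u = u1 if ptype == 1 else u2
            if load[p] + u <= 1.0:
                load[p] += u
                assignment.append(p)
                break
        else:
            return None  # first-fit failed; a feasible assignment may still exist
    return assignment

tasks = [(0.6, 0.3), (0.4, 0.8), (0.5, 0.5)]   # hypothetical utilizations
print(first_fit(tasks, processors=[1, 2]))     # -> [0, 0, 1]
```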

Relevance:

20.00%

Publisher:

Abstract:

Consider the problem of assigning real-time tasks on a heterogeneous multiprocessor platform comprising two different types of processors; such a platform is referred to as a two-type platform. We present two linearithmic time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a given task set, if there exists a feasible task-to-processor-type assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type, then (i) using SA, it is guaranteed to find such a feasible task-to-processor-type assignment, where the same restriction on task migration applies, but given a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a feasible task-to-processor assignment where tasks are not allowed to migrate between processors, but given a platform in which processors are 1+α times faster, where 0<α≤1. The parameter α is a property of the task set; it is the maximum utilization of any task, which is less than or equal to 1.

Relevance:

20.00%

Publisher:

Abstract:

Consider a single processor and a software system. The software system comprises components and interfaces: each component has an associated interface and comprises a set of constrained-deadline sporadic tasks. A scheduling algorithm (called the global scheduler) determines at each instant which component is active. The active component uses another scheduling algorithm (called the local scheduler) to determine which task is selected for execution on the processor. The interface of a component makes certain information about the component visible to other components; the interfaces of all components are used for schedulability analysis. We address the problem of generating an interface for a component based on the tasks inside the component. We desire to (i) incur only a small loss in schedulability analysis due to the interface and (ii) ensure that the amount of space (counted in bits) of the interface is small, since such an interface hides as many details of the component as possible. We present an algorithm for generating such an interface.
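
The abstract leaves the interface format unspecified; as a hedged illustration of the kind of per-component information such schedulability analysis consumes, here is the standard demand bound function of a constrained-deadline sporadic task set (task parameters hypothetical):

```python
import math

def dbf(tasks, t: float) -> float:
    """Demand bound function: worst-case execution demand of a
    constrained-deadline sporadic task set that must be completed in any
    time window of length t. Each task is (C, D, T) with C <= D <= T."""
    demand = 0.0
    for c, d, period in tasks:
        if t >= d:
            demand += (math.floor((t - d) / period) + 1) * c
    return demand

tasks = [(1.0, 4.0, 10.0), (2.0, 8.0, 20.0)]   # hypothetical (C, D, T)
for t in (4.0, 8.0, 14.0, 24.0):
    print(f"dbf({t}) = {dbf(tasks, t)}")
```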