Abstract:
A QuEChERS method for the extraction of ochratoxin A (OTA) from bread samples was evaluated. A 2³ factorial design was used to find the optimal QuEChERS parameters (extraction time, extraction solvent volume and sample mass). Extracts were analysed by LC with fluorescence detection. The optimal extraction conditions were: 5 g of sample, 15 mL of acetonitrile and 3 min of agitation. The extraction procedure was validated by systematic recovery experiments at three levels. The recoveries obtained ranged from 94.8% (at 1.0 μg kg⁻¹) to 96.6% (at 3.0 μg kg⁻¹). The limit of quantification of the method was 0.05 μg kg⁻¹. The optimised procedure was applied to 20 samples of different bread types ("Carcaça", "Broa de Milho", and "Broa de Avintes") widely consumed in Portugal. None of the samples exceeded the established European legal limit of 3 μg kg⁻¹.
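A minimal sketch of how the eight runs of a 2³ full factorial design over the three QuEChERS parameters could be enumerated. The high levels are taken from the reported optimum (5 g, 15 mL, 3 min); the low levels are illustrative assumptions, not values from the study:

import itertools

# Hypothetical low/high levels for each factor; only the high levels
# correspond to the optimum reported in the abstract (5 g, 15 mL, 3 min).
factors = {
    "sample_mass_g":       (2.0, 5.0),
    "solvent_volume_mL":   (10.0, 15.0),
    "extraction_time_min": (1.0, 3.0),
}

# Enumerate the 2^3 = 8 runs of the full factorial design in coded (-1/+1) form.
for coded in itertools.product((-1, +1), repeat=len(factors)):
    run = {name: levels[0] if c == -1 else levels[1]
           for c, (name, levels) in zip(coded, factors.items())}
    print(coded, run)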
Abstract:
This thesis presents the Fuzzy Monte Carlo Model for Transmission Power Systems Reliability studies (FMC-TRel) methodology, which is based on statistical failure and repair data of the transmission power system components and uses fuzzy-probabilistic modeling of system component outage parameters. Using statistical records makes it possible to develop the fuzzy membership functions of the system component outage parameters. The proposed hybrid method of fuzzy sets and Monte Carlo simulation, based on the fuzzy-probabilistic models, captures both the randomness and the fuzziness of component outage parameters. Once the system states are obtained, a network contingency analysis is performed to identify any overloading or voltage violations in the network. This is followed by a remedial action algorithm, based on Optimal Power Flow, to reschedule generation and alleviate constraint violations while avoiding any load curtailment if possible or, otherwise, minimizing the total load curtailment for the states identified by the contingency analysis. For the system states that cause load curtailment, an optimization approach is applied to reduce the probability of occurrence of these states while minimizing the costs of achieving that reduction. This methodology is of great importance for supporting the transmission system operator's decision making, namely in the identification of critical components and in the planning of future investments in the transmission power system. A case study based on the IEEE 24-bus Reliability Test System (RTS-1996) is presented to illustrate in detail the application of the proposed methodology.
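Not the FMC-TRel algorithm itself, but a minimal illustrative sketch of the underlying idea of combining fuzzy outage parameters with Monte Carlo state sampling: each component's outage probability is described here by a hypothetical triangular membership function, a crisp value is drawn from its support in each sample, and the component's up/down state is then sampled from it. All component names and numbers are made up:

import random

# Hypothetical triangular fuzzy outage probabilities (low, modal, high) per component.
components = {
    "line_1": (0.01, 0.02, 0.05),
    "line_2": (0.02, 0.04, 0.08),
    "transformer_1": (0.005, 0.01, 0.02),
}

def sample_system_state(rng):
    """Draw one system state: pick a crisp outage probability from each
    component's fuzzy support, then sample the component as up (True) or down."""
    state = {}
    for name, (low, mode, high) in components.items():
        p_out = rng.triangular(low, high, mode)   # crisp value from the fuzzy number
        state[name] = rng.random() >= p_out       # True = in service
    return state

rng = random.Random(42)
samples = [sample_system_state(rng) for _ in range(10000)]
# Crude estimate of how often line_1 is out of service across the sampled states.
print(sum(not s["line_1"] for s in samples) / len(samples))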
Abstract:
Modern real-time systems increasingly generate heavy and dynamic computational workloads, and it is becoming unrealistic to expect them to be implemented on uniprocessor systems. In fact, the shift from single-processor to multiprocessor systems can be seen, both in the general-purpose and in the embedded domains, as an energy-efficient way of improving application performance. At the same time, the proliferation of multiprocessor platforms has turned parallel programming into a topic of great interest, with dynamic parallelism rapidly gaining popularity as a programming model. The idea behind this model is to encourage programmers to expose every opportunity for parallelism by simply indicating potential parallel regions within their applications. All such annotations are treated by the system merely as hints: they may be ignored and replaced by equivalent sequential constructs by the language itself. How the computation is actually subdivided and mapped onto the various processors is therefore the responsibility of the compiler and of the underlying computing system. Removing this burden from the programmer considerably reduces programming complexity, which usually translates into higher productivity. However, unless the underlying scheduling mechanism is simple and fast enough to keep the overall overhead low, the benefits of generating such fine-grained parallelism remain merely hypothetical. From this scheduling perspective, algorithms that employ a work-stealing policy are increasingly popular, with proven efficiency in terms of time, space and communication requirements. These algorithms, however, do not take timing constraints into account, nor any other form of task prioritisation, which prevents them from being applied directly to real-time systems. Moreover, they are traditionally implemented in the language runtime, creating a two-level scheduling system in which the predictability essential to a real-time system cannot be ensured. This thesis describes how the work-stealing approach can be redesigned to meet real-time requirements while preserving the fundamental principles that have produced such good results. Very briefly, the single conventional task queue (deque) is replaced by a queue of deques, ordered by increasing task priority. On top of this we apply the well-known G-EDF dynamic scheduling algorithm and blend the rules of both, giving rise to our proposal: the RTWS scheduling algorithm. Taking advantage of the modularity offered by the Linux scheduler, RTWS is added as a new scheduling class in order to assess in practice whether the proposed algorithm is viable, that is, whether it provides the desired efficiency and schedulability. Modifying the Linux kernel is a difficult task, owing to the complexity of its internals and the strong interdependencies between its various subsystems. Nevertheless, one of the goals of this thesis was to make sure that RTWS is more than just an interesting concept.
A significant part of this document is therefore devoted to discussing the implementation of RTWS and to exposing problematic situations, many of them not considered in theory, such as the mismatch between different synchronisation mechanisms. The experimental results show that RTWS, compared with other practical work on dynamic scheduling of tasks with timing constraints, significantly reduces the scheduling overhead through efficient and scalable (at least up to 8 CPUs) control of migrations and context switches, while achieving good dynamic balancing of the system load at little cost. However, the evaluation revealed a flaw in the RTWS implementation: it gives up on stealing work too easily, which leads to idle periods on the CPU in question when overall system utilisation is low. Although the work focused on keeping the scheduling cost low and on achieving good data locality, system schedulability was never neglected. In fact, the proposed scheduling algorithm proved to be quite robust, missing no deadline in the experiments carried out. We can therefore state that some priority inversion, caused by the BAS stealing sub-policy, does not compromise the schedulability goals and even helps to reduce contention on the data structures. Even so, RTWS also supports a deterministic stealing sub-policy, PAS. The experimental evaluation, however, did not give a clear picture of the impact of one versus the other. Overall, we can nevertheless conclude that RTWS is a promising solution for the efficient scheduling of parallel tasks with timing constraints.
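Not the kernel-level RTWS implementation, but a minimal user-space sketch of the core data structure described above: the single work-stealing deque replaced by a deadline-ordered list of deques, with the owner popping from the earliest-deadline non-empty deque (G-EDF style) and a thief stealing from the opposite end. The class and method names and the deadline representation are illustrative assumptions:

from collections import deque
import bisect

class PriorityWorkQueue:
    """A queue of deques ordered by increasing absolute deadline (earliest first)."""

    def __init__(self):
        self._deadlines = []   # sorted list of absolute deadlines
        self._deques = {}      # deadline -> deque of tasks sharing that deadline

    def push(self, deadline, task):
        """Owner pushes a task onto the deque for its deadline."""
        if deadline not in self._deques:
            bisect.insort(self._deadlines, deadline)
            self._deques[deadline] = deque()
        self._deques[deadline].append(task)

    def pop_local(self):
        """Owner takes work from the earliest-deadline non-empty deque (LIFO end)."""
        for d in self._deadlines:
            if self._deques[d]:
                return d, self._deques[d].pop()
        return None

    def steal(self):
        """A thief steals from the earliest-deadline non-empty deque, opposite end."""
        for d in self._deadlines:
            if self._deques[d]:
                return d, self._deques[d].popleft()
        return None

# Usage sketch: two tasks with different deadlines.
q = PriorityWorkQueue()
q.push(10, "task_a")
q.push(5, "task_b")
print(q.pop_local())   # (5, 'task_b'): the earliest deadline is served first
print(q.steal())       # (10, 'task_a')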
Abstract:
A Box–Behnken factorial design coupled with response surface methodology was used to evaluate the effects of temperature, pH and initial concentration on the Cu(II) sorption process onto the marine macroalga Ascophyllum nodosum. The effect of the operating variables on metal uptake capacity was studied in a batch system and a mathematical model showing the influence of each variable and their interactions was obtained. The study ranges were 10–40 ºC for temperature, 3.0–5.0 for pH and 50–150 mg L⁻¹ for the initial Cu(II) concentration. Within these ranges, the biosorption capacity is only slightly dependent on temperature but increases markedly with pH and with the initial Cu(II) concentration. The uptake capacities predicted by the model are in good agreement with the experimental values. The maximum biosorption capacity of Cu(II) by A. nodosum is 70 mg g⁻¹ and corresponds to the following values of those variables: temperature = 40 ºC, pH = 5.0 and initial Cu(II) concentration = 150 mg L⁻¹.
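A minimal sketch, using NumPy and entirely made-up uptake data, of how a second-order response-surface model over temperature, pH and initial concentration could be fitted by least squares on a coded Box–Behnken layout. The design geometry is the standard three-factor Box–Behnken one; the response values below are illustrative, not the experimental data of the study:

import numpy as np

# Coded Box–Behnken design for three factors (edge midpoints plus one centre point).
coded = np.array([
    [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
    [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
    [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
    [0, 0, 0],
], dtype=float)

# Map coded levels to the study ranges: 10–40 °C, pH 3.0–5.0, 50–150 mg/L.
centre = np.array([25.0, 4.0, 100.0])
half_range = np.array([15.0, 1.0, 50.0])
X = centre + coded * half_range

# Made-up uptake values (mg/g), one per design point, for illustration only.
q = np.array([22, 24, 47, 50, 25, 27, 33, 36, 21, 46, 31, 66, 42], dtype=float)

def quadratic_terms(x):
    t, ph, c = x
    return [1.0, t, ph, c, t * t, ph * ph, c * c, t * ph, t * c, ph * c]

A = np.array([quadratic_terms(x) for x in X])
coeffs, *_ = np.linalg.lstsq(A, q, rcond=None)

# Predicted uptake at the reported optimum (40 °C, pH 5.0, 150 mg/L).
print(np.dot(quadratic_terms([40.0, 5.0, 150.0]), coeffs))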
Abstract:
A method for the determination of some pesticide residues in must and wine samples was developed using solid-phase microextraction (SPME) and gas chromatography with electron-capture detection (GC/ECD). The procedure requires only dilution as sample pre-treatment and is therefore simple, fast and solvent-free. Eight fungicides (vinclozolin, procymidone, iprodione, penconazole, fenarimol, folpet, nuarimol and hexaconazole), one insecticide (chlorpyriphos) and two acaricides (bromopropylate and tetradifon) can be quantified. Good linearity was observed for all the compounds in the range 5–100 µg/L. The reproducibility of the measurements was found to be acceptable (RSDs below 20%). Detection limits of 11 µg/L, on average, are well below the proposed maximum residue limits (MRLs) for these compounds in wine. The analytical method was applied to the determination of these compounds in Portuguese must and wine samples from the Demarcated Region of Alentejo, in which no residues could be detected.
Abstract:
Microwave-assisted extraction (MAE) of agar from Gracilaria vermiculophylla, produced in an integrated multitrophic aquaculture (IMTA) system in Ria de Aveiro (northwestern Portugal), was tested and optimized using response surface methodology. The influence of the MAE operational parameters (extraction time, temperature, solvent volume and stirring speed) on the physical and chemical properties of the agar (yield, gel strength, gelling and melting temperatures, as well as sulphate and 3,6-anhydro-L-galactose contents) was evaluated in a 2⁴ orthogonal composite design. The quality of the extracted agar compared favourably with that obtained by traditional extraction (2 h at 85 ºC) while drastically reducing extraction time, solvent consumption and waste disposal requirements. The optimum MAE results were: a yield of 14.4 ± 0.4%, a gel strength of 1331 ± 51 g/cm², a gelling temperature of 40.7 ± 0.2 ºC, a melting temperature of 93.1 ± 0.5 ºC, a sulphate content of 1.73 ± 0.13% and a 3,6-anhydro-L-galactose content of 39.4 ± 0.3%. Furthermore, this study suggests the feasibility of exploiting G. vermiculophylla grown in IMTA systems for agar production.
Abstract:
Serious games are starting to attain a higher profile as tools for learning in various contexts, particularly in areas such as education and training. Due to their characteristics, such as rules, behaviour simulation and feedback to the player's actions, serious games provide a favourable learning environment in which errors can occur without real-life penalty and students get instant feedback on challenges. These challenges are in accordance with the intended objectives and self-adapt and repeat according to the student's difficulty level. Through motivating and engaging environments, which serve as a basis for problem solving and for the simulation of different situations and contexts, serious games have great potential to help players develop professional skills. But how do we certify the acquired knowledge and skills? With this work we propose a methodology to establish a relationship between the game mechanics of serious games and an array of competences for certification, evaluating the applicability of various aspects of game design and development, such as the user interfaces and the gameplay, and obtaining learning outcomes within the game itself. Through the definition of game mechanics combined with the necessary pedagogical elements, the game will support the certification. This paper presents a matrix of generic skills, based on the European Qualifications Framework, and the definition of the game mechanics necessary for certification in a tour-guide training context. The certification matrix has as its reference axes skills, knowledge and competences, which describe what students should learn, understand and be able to do after they complete the learning process. Guide-interpreters welcome and accompany tourists on trips and visits to places of tourist interest and cultural heritage, such as museums, palaces and national monuments, where they provide a variety of information. Tour-guide certification requirements include skills and specific knowledge of foreign languages and of the History, Ethnology, Politics, Religion, Geography and Art of the territory in which the guide operates. These skills include communication, interpersonal relationships, motivation, organization and management. This certification process aims to validate the ability to plan and conduct guided tours in the territory, to demonstrate knowledge appropriate to the context and, finally, to act as a good group leader. After defining which competences are to be certified, the next step is to delineate the expected learning outcomes and to identify the game mechanics associated with them. The game mechanics, as methods invoked by agents for interaction with the game world, in combination with game elements/objects, allow multiple paths through which to explore the game environment and its educational process. Mechanics such as achievements, appointments, progression, reward schedules or status describe how a game can be designed to affect players in unprecedented ways. In order for the game to be able to certify tour guides, the design of the training game will incorporate a set of theoretical and practical tasks for acquiring skills and knowledge on various transversal themes. To this end, patterns of skills and abilities in acquiring different knowledge will be identified.
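A minimal sketch, with entirely hypothetical entries, of how a certification matrix along the three reference axes (skills, knowledge, competences) could be represented and linked to game mechanics such as progression or reward schedules; this is an illustration of the data structure, not the matrix defined in the paper:

from dataclasses import dataclass, field

@dataclass
class MatrixEntry:
    """One cell of a certification matrix for a tour-guide training game."""
    skill: str                  # e.g. communication, group management
    knowledge: str              # e.g. History or Geography of the territory
    competence: str             # observable outcome to certify
    game_mechanics: list = field(default_factory=list)   # mechanics that exercise it

# Hypothetical entries, for illustration only.
certification_matrix = [
    MatrixEntry(
        skill="communication",
        knowledge="History of the territory",
        competence="deliver an accurate guided tour of a national monument",
        game_mechanics=["progression", "achievements"],
    ),
    MatrixEntry(
        skill="group management",
        knowledge="tour logistics",
        competence="plan and conduct a guided tour within a time limit",
        game_mechanics=["appointments", "reward schedules"],
    ),
]

# A learner satisfies an entry once all of its mechanics report completion.
def is_certified(entry, completed_mechanics):
    return all(m in completed_mechanics for m in entry.game_mechanics)

print(is_certified(certification_matrix[0], {"progression", "achievements"}))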
Abstract:
The application of information technologies (especially the Internet, Web 2.0 and social tools) makes informal learning more visible. This kind of learning is not linked to an institution or to a period of time, but it is important enough to be taken into account. On the one hand, learners should be able to communicate to the institutions they are related to which skills they possess, whether these were achieved in a formal or an informal way. On the other hand, companies and educational institutions need a deeper knowledge of the competences of their staff. The TRAILER project provides a methodology, supported by a technological framework, to facilitate communication about informal learning between businesses, employees and learners. The paper presents the project and some of the work carried out: an exploratory analysis of how informal learning is considered and the technological framework proposed. While challenges remain in establishing the meaningfulness of technological engagement for employees and businesses, the continuing transformation of the social, technological and educational environment is likely to lead to greater emphasis on the effective exploitation of informal learning.
Abstract:
An analytical method using microwave-assisted extraction (MAE) and liquid chromatography (LC) with fluorescence detection (FD) for the determination of ochratoxin A (OTA) in bread samples is described. A 2⁴ orthogonal composite design coupled with response surface methodology was used to study the influence of the MAE parameters (extraction time, temperature, solvent volume, and stirring speed) in order to maximize OTA recovery. The optimized MAE conditions were the following: 25 mL of acetonitrile, 10 min of extraction at 80 °C, and maximum stirring speed. Validation of the overall methodology was performed by spiking assays at five levels (0.1–3.00 ng/g). The quantification limit was 0.005 ng/g. The established method was then applied to 64 bread samples (wheat, maize, and wheat/maize bread) collected in the Oporto region (Northern Portugal). OTA was detected in 84% of the samples, with a maximum value of 2.87 ng/g, below the European maximum limit of 3 ng/g established for OTA in cereal products.
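A minimal sketch of how the run list of a composite design for the four MAE factors could be generated: the 2⁴ factorial core plus axial and centre points, in coded units. The axial distance and the number of centre-point replicates below are assumptions for illustration, not the exact orthogonal design used in the study:

import itertools

factor_names = ["time_min", "temperature_C", "solvent_volume_mL", "stirring_speed"]
k = len(factor_names)
alpha = 2.0          # assumed axial distance (design-dependent in practice)
n_centre = 3         # assumed number of centre-point replicates

runs = []

# 2^4 factorial core in coded units.
for corner in itertools.product((-1.0, 1.0), repeat=k):
    runs.append(list(corner))

# Axial (star) points: one factor at +/- alpha, the others at the centre.
for i in range(k):
    for sign in (-alpha, +alpha):
        point = [0.0] * k
        point[i] = sign
        runs.append(point)

# Centre-point replicates.
runs.extend([[0.0] * k for _ in range(n_centre)])

print(len(runs), "coded runs")    # 16 + 8 + 3 = 27
for r in runs[:5]:
    print(dict(zip(factor_names, r)))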
Abstract:
Over the last few years, there has been growing concern about the presence of pharmaceuticals in the environment. The main objective of this study was to develop and validate an SPE method, using response surface methodology, for the determination of ibuprofen in different types of water samples. The influence of sample pH and sample volume on ibuprofen recovery was studied; each of these independent variables has a pronounced effect on the dependent variable (ibuprofen recovery). Good selectivity, extraction efficiency, and precision were achieved using a 600 mL sample volume with the pH adjusted to 2.2. LC with fluorescence detection was employed. The optimized method was applied to 20 water samples from the North and South of Portugal.
Abstract:
With the advent of Web 2.0, new kinds of tools became available which are no longer seen as novel but are widely used. For instance, according to Eurostat data, in 2010 32% of individuals aged 16 to 74 used the Internet to post messages to social media sites or instant messaging tools, ranging from 17% in Romania to 46% in Sweden (Eurostat, 2012). Web 2.0 applications have been used in technology-enhanced learning environments, and Learning 2.0 is a concept that has been used to describe the use of social media for learning. Many Learning 2.0 initiatives have been launched by educational and training institutions in Europe, and Web 2.0 applications have also been used for informal learning. Web 2.0 tools can be used in classrooms, virtual or not, not only to engage students but also to support collaborative activities. Many of these tools allow users to tag resources in order to organize them and facilitate their retrieval at a later date. The aim of this chapter is to describe how tagging has been used in systems that support formal or informal learning and to summarize the functionalities that are common to these systems. In addition, common and unusual tagging applications that have been used in some Learning Object Repositories are analysed.
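A minimal sketch, with hypothetical resource names, of the tagging idea described above: an inverted index mapping each tag to the learning resources labelled with it, so that tagged resources can be retrieved later by tag:

from collections import defaultdict

class TagIndex:
    """Inverted index from tags to learning resources (illustrative only)."""

    def __init__(self):
        self._by_tag = defaultdict(set)

    def tag(self, resource, *tags):
        for t in tags:
            self._by_tag[t.lower()].add(resource)

    def find(self, tag):
        """Return the resources labelled with a given tag."""
        return sorted(self._by_tag.get(tag.lower(), set()))

index = TagIndex()
index.tag("intro-to-statistics.pdf", "statistics", "beginner")
index.tag("anova-worked-examples.ipynb", "statistics", "ANOVA")
print(index.find("statistics"))   # both resources
print(index.find("anova"))        # only the notebook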
Abstract:
The development of new products or processes involves the creation, re-creation and integration of conceptual models from the related scientific and technical domains. In the context of collaborative networks of organisations (CNO) (e.g. a multi-partner international project) in particular, such developments can be seriously hindered by conceptual misunderstandings and misalignments, resulting, for example, from participants with different backgrounds or organisational cultures. The research described in this article addresses this problem by proposing a method, and the tools to support it, for the collaborative development of shared conceptualisations in the context of a collaborative network of organisations. The theoretical model is based on a socio-semantic perspective, while the method is inspired by conceptual integration theory from the field of cognitive semantics. The modelling environment is built upon a semantic wiki platform. The majority of the article is devoted to the development of an informal ontology in the context of a European R&D project, studied using action research. The case-study results validated the logical structure of the method and showed its utility.
Abstract:
In this paper we describe how to integrate the Internet Protocols (IP) into a typical hierarchical master-slave fieldbus network that supports a logical ring token-passing mechanism between master stations. The integration of the TCP/IP protocols into the fieldbus protocol raises a number of issues that must be addressed properly. In this paper we particularly address the issues related to the conveyance of IP fragments in fieldbus frames (fragmentation/de-fragmentation) and to how to support the symmetry inherent in the TCP/IP protocols in fieldbus slaves, which lack communication initiative.
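A minimal sketch, with a made-up frame payload size and a hypothetical two-byte fragment header, of the fragmentation/de-fragmentation idea: an IP datagram is split into fieldbus-sized fragments that carry an index and a last-fragment flag, and reassembled on the other side. This illustrates the general technique, not the encapsulation format defined in the paper:

MAX_FIELDBUS_PAYLOAD = 32            # assumed usable bytes per fieldbus frame
CHUNK = MAX_FIELDBUS_PAYLOAD - 2     # two bytes reserved for the fragment header

def fragment(datagram: bytes):
    """Split an IP datagram into fieldbus frames: [index][last-flag][payload]."""
    chunks = [datagram[i:i + CHUNK] for i in range(0, len(datagram), CHUNK)] or [b""]
    frames = []
    for idx, chunk in enumerate(chunks):
        last = 1 if idx == len(chunks) - 1 else 0
        frames.append(bytes([idx, last]) + chunk)
    return frames

def defragment(frames):
    """Reassemble the original datagram from a complete set of fragments."""
    ordered = sorted(frames, key=lambda f: f[0])
    assert ordered and ordered[-1][1] == 1, "missing last fragment"
    return b"".join(f[2:] for f in ordered)

datagram = bytes(range(100))             # stand-in for an IP packet
frames = fragment(datagram)
print(len(frames), "fieldbus frames")    # 100 bytes -> 4 frames of at most 32 bytes
assert defragment(frames) == datagram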
Abstract:
Profibus networks are widely used as the communication infrastructure for supporting distributed computer-controlled applications. Most of the time, these applications impose strict real-time requirements. Profibus-DP has gradually become the preferred Profibus application profile. It is usually implemented as a mono-master Profibus network and is optimised for speed and efficiency. The aim of this paper is to analyse the real-time behaviour of this class of Profibus networks. Importantly, we develop a new methodology for evaluating the worst-case message response time in systems where high-priority and cyclic low-priority Profibus traffic coexist. The proposed analysis constitutes a powerful tool for guaranteeing, prior to runtime, the real-time behaviour of a distributed computer-controlled system based on a Profibus network in which the real-time traffic is supported either by high-priority or by cyclic poll Profibus messages.
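The abstract does not reproduce the analysis itself; as background only, the sketch below implements the classical fixed-priority response-time recurrence R = C + B + sum(ceil(R/Tj) * Cj), in which a message suffers blocking B from lower-priority traffic already in transmission, with made-up message parameters. It is not the Profibus-specific worst-case analysis developed in the paper:

from math import ceil

def response_time_bound(C, B, higher_priority, max_iter=100):
    """Classical fixed-priority response-time recurrence.

    C: transmission time of the message under analysis
    B: worst-case blocking by a lower-priority (e.g. cyclic poll) message
    higher_priority: list of (Cj, Tj) pairs for higher-priority message streams
    Returns the fixed point R, or None if the iteration does not converge.
    """
    R = C + B
    for _ in range(max_iter):
        R_next = C + B + sum(ceil(R / Tj) * Cj for Cj, Tj in higher_priority)
        if R_next == R:
            return R
        R = R_next
    return None

# Made-up example: a 2 ms message, blocked at most 5 ms by a cyclic poll in
# progress, with two higher-priority streams given as (Cj, Tj) in ms.
print(response_time_bound(C=2, B=5, higher_priority=[(1, 10), (3, 25)]))   # -> 12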
Abstract:
Fieldbus communication networks aim to interconnect sensors, actuators and controllers within process control applications. Therefore, they constitute the foundation upon which real-time distributed computer-controlled systems can be implemented. P-NET is a fieldbus communication standard which uses a virtual token-passing medium-access-control mechanism. In this paper, pre-run-time schedulability conditions for supporting real-time traffic with P-NET networks are established. Essentially, formulae to evaluate the upper bound of the end-to-end communication delay of P-NET messages are provided. Using this upper bound, a feasibility test is then provided to check the timing requirements for accessing remote process variables. This paper also shows how P-NET network segmentation can significantly reduce the end-to-end communication delays for messages with stringent timing requirements.
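The formulae themselves are not given in the abstract; purely as an illustration of the kind of bound involved, the sketch below computes a simplified worst-case delay for a virtual token-passing bus in which each of n masters may use the medium for at most a fixed holding time per token visit, so a newly queued request waits at most (n - 1) holding times before its own transaction can start. The parameters and the formula are illustrative assumptions, not the P-NET analysis of the paper:

def end_to_end_delay_bound(n_masters, max_holding_time, transaction_time):
    """Simplified upper bound on the delay of one remote-variable access.

    Assumes a virtual token-passing bus in which each of the n masters may use
    the medium for at most max_holding_time per token visit. In the worst case
    the token has just left the requesting master, so it waits for every other
    master before starting its own request/response transaction.
    """
    token_rotation_wait = (n_masters - 1) * max_holding_time
    return token_rotation_wait + transaction_time

# Made-up example: 5 masters, 2 ms holding time each, 1.5 ms request/response.
print(end_to_end_delay_bound(n_masters=5, max_holding_time=2.0, transaction_time=1.5))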