54 results for cost comparison
Abstract:
Lunacloud is a cloud service provider with offices in Portugal, Spain, France and the UK that focuses on delivering reliable, elastic and low-cost cloud Infrastructure as a Service (IaaS) solutions. The company currently relies on a proprietary IaaS platform, the Parallels Automation for Cloud Infrastructure (PACI), and wishes to expand and integrate other IaaS solutions seamlessly, namely open source solutions. This is the challenge addressed in this thesis. This proposal, which was fostered by the Eurocloud Portugal Association, contributes to the promotion of interoperability and standardisation in Cloud Computing. The goal is to investigate, propose and develop an interoperable open source solution with standard interfaces for the integrated management of IaaS Cloud Computing resources, based on new as well as existing abstraction libraries or frameworks. The solution should provide both Web and application programming interfaces. The research conducted consisted of two surveys covering existing open source IaaS platforms and PACI (features and API) and open source IaaS abstraction solutions. The first study focused on the characteristics of the most popular open source IaaS platforms, namely OpenNebula, OpenStack, CloudStack and Eucalyptus, as well as PACI, and included a thorough inventory of the provided Application Programming Interfaces (API), i.e., the offered operations, followed by a comparison of these platforms in order to establish their similarities and dissimilarities. The second study, on existing open source interoperability solutions, included the analysis and comparison of existing abstraction libraries and frameworks. The approach proposed and adopted, which was supported by the conclusions of the surveys carried out, reuses an existing open source abstraction solution, the Apache Deltacloud framework.
Deltacloud relies on the development of software driver modules to interface with different IaaS platforms, officially provides and supports drivers for sixteen IaaS platforms, including OpenNebula and OpenStack, and allows the development of new provider drivers. The latter capability was used to develop a new Deltacloud driver for PACI. Furthermore, Deltacloud provides a Web dashboard and REpresentational State Transfer (REST) API interfaces. To evaluate the adopted solution, a test bed integrating OpenNebula, OpenStack and PACI nodes was assembled and deployed. The tests conducted involved elapsed time and data payload measurements via the Deltacloud framework as well as via the pre-existing IaaS platform APIs. The Deltacloud framework behaved as expected, i.e., it introduced additional delays but no substantial overheads. Both the Web and the REST interfaces were tested and showed identical measurements. The developed interoperable solution for the seamless integration and provision of IaaS resources from the PACI, OpenNebula and OpenStack IaaS platforms fulfils the specified requirements, i.e., it provides Lunacloud with the ability to expand the range of adopted IaaS platforms and offers a Web dashboard and REST API for their integrated management. The contributions of this work include the surveys and comparisons made, the selection of the abstraction framework and, last but not least, the PACI driver developed.
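The appeal of an abstraction layer like Deltacloud is that every back-end platform is reached through the same REST collections (instances, images, realms, and so on), so switching providers is mostly a matter of pointing the client at a different driver. As a minimal sketch, with a placeholder server address and credentials, a client request to list instances could be built like this (the collection names follow the Deltacloud REST API; the `format` query parameter is assumed here for content negotiation):

```python
import urllib.request

def deltacloud_url(host, collection, media_type="json"):
    """Build the URL for a Deltacloud REST collection (e.g. instances, images)."""
    return "http://%s/api/%s?format=%s" % (host, collection, media_type)

def list_instances(host, user, password):
    """Fetch the instance list from a Deltacloud server (placeholder credentials).

    Deltacloud forwards the HTTP Basic credentials to the back-end driver,
    so the same call works against OpenNebula, OpenStack or PACI drivers.
    """
    manager = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    manager.add_password(None, "http://%s/api" % host, user, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(manager))
    return opener.open(deltacloud_url(host, "instances")).read()

# Example (requires a running deltacloudd server, default port 3001):
# print(list_instances("localhost:3001", "user", "secret"))
```

Because the URL scheme is uniform across drivers, the same helper serves all three platforms in the test bed.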
Abstract:
In the current socioeconomic landscape, expense containment and cuts in the funding of resource-consuming secondary services are driving public institutions to reformulate their processes and methods, as they seek to maintain their citizens' quality of life through more efficient and economical programmes. The sustained growth of mobile technologies, together with the emergence of new human-computer interaction paradigms based on sensors and context-aware systems, has created business opportunities in the development of civic-oriented applications for individuals and companies, raising their awareness of citizen-oriented services. These business opportunities led the project team to develop an urban problem reporting platform for municipal entities, based on its geographic information system. The main goal of this research is the conception, design and implementation of a complete solution for reporting non-urgent urban problems, distinguished from competing offers by the ease with which citizens can report situations that affect their daily lives. To achieve this distinction, several studies were carried out to determine innovative features to implement, as well as all the basic functionality expected in this kind of system. These studies led to the implementation of techniques for manually outlining problem areas and for automatically recognising the type of problem reported in images, both developed within the scope of this project. For the correct implementation of the outlining and image recognition modules, state-of-the-art surveys of these areas were conducted, grounding the choice of methods and technologies to integrate into the project.
In this context, the various phases of the platform's development process are presented in detail, from the study and comparison of tools, methodologies and techniques for each of the concepts addressed, through the proposal of a resolution model, to the detailed description of the implemented algorithms. Finally, a performance evaluation of the developed algorithm/classifier pair is carried out by defining metrics that estimate the success or failure of the object classifier. The evaluation is based on a set of test images, collected manually from public problem-reporting platforms, comparing the results obtained by the algorithm with the expected results.
Abstract:
The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. The network cost allocation, traditionally used in transmission networks, should be adapted and used in the distribution networks considering the specifications of the connected resources. The main goal is to develop a fairer methodology trying to distribute the distribution network use costs to all players which are using the network in each period. In this paper, a model considering different types of costs (fixed, losses, and congestion costs) is proposed, comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of direct load control type, energy storage systems (ESS), and electric vehicles with the capability of discharging energy to the network, which is known as vehicle-to-grid (V2G). The proposed model includes three distinct phases of operation. The first phase of the model consists in an economic dispatch based on an AC optimal power flow (AC-OPF); in the second phase Kirschen's and Bialek's tracing algorithms are used and compared to evaluate the impact of each resource in the network. Finally, the MW-mile method is used in the third phase of the proposed model. A distribution network of 33 buses with large penetration of DER is used to illustrate the application of the proposed model.
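The MW-mile idea in the third phase can be sketched very simply: each player pays for each line in proportion to the capacity it uses on that line, weighted by the line's length (or cost). A toy illustration follows, with made-up line data and player flows; in the actual model these inputs come from the AC-OPF dispatch and the tracing phase:

```python
def mw_mile_allocation(lines, usage):
    """Allocate each line's cost to players in proportion to MW x length.

    lines: {line_id: (length_km, cost_per_km)}
    usage: {player: {line_id: MW}}  -- per-line use, e.g. from flow tracing
    """
    charges = {player: 0.0 for player in usage}
    for line_id, (length, cost_per_km) in lines.items():
        line_cost = length * cost_per_km
        # Total MW-mile burden placed on this line by all players.
        total = sum(flows.get(line_id, 0.0) * length for flows in usage.values())
        if total == 0:
            continue
        for player, flows in usage.items():
            charges[player] += line_cost * flows.get(line_id, 0.0) * length / total
    return charges

# Toy example: one 10 km line, player B uses three times the capacity of A,
# so B is allocated three quarters of the line's 20-unit cost.
charges = mw_mile_allocation(
    {"L1": (10.0, 2.0)},
    {"A": {"L1": 5.0}, "B": {"L1": 15.0}},
)
```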
Abstract:
The excellent mechanical properties of composite materials, combined with their low weight, make them among the most interesting materials in our technological society. The growing use of these materials and the excellence of the resulting outcomes mean that they are now used in complex, safety-critical structures, so machining them becomes necessary in order to allow parts to be joined. Drilling is the most frequent machining process. The machining of composites is based on the conventional methods used for metallic materials. The process must, however, be suitably adapted, both in terms of parameters and of the tools to be used. The characteristics of composite materials are quite particular, so when they are machined they may exhibit defects such as delamination, intralaminar cracks, fibre pull-out or damage by overheating. To detect this damage, visual inspection is sometimes not sufficient, and specific damage analysis processes are required. Some studies already exist whose scope was the production of quality holes in composites with minimal damage, although the available information still cannot be compared with that on the machining of metallic materials and alloys. There is thus still a long way to go before the degree of confidence in the use of these materials approaches that of metallic materials. The experimental work developed in this thesis was essentially based on the drilling of laminated plates and the subsequent analysis of the damage caused by this operation. Special attention was given to measuring the delamination caused by drilling and to the mechanical strength of the material after machining.
The materials used in this experimental work were carbon/epoxy composite plates with two different fibre orientations: unidirectional and cross-ply. Little information about their characteristics could be obtained from the supplier, so tests were carried out to determine their modulus of elasticity. Regarding their tensile strength, as already mentioned, the high strength of the material, combined with the limitations of the testing machine, did not allow conclusive values to be reached. Three different tool geometries were used: twist, Brad and Step. The tool materials were high-speed steel (HSS) and tungsten carbide for the twist drills with a 118° point angle, and tungsten carbide only for the Brad and Step drills. Diamond tools were not considered in this work because, although their good characteristics for machining composites are recognised, their high cost does not justify their choice, at least in an academic work such as this one. The advantages and disadvantages of each geometry and material used were evaluated, both with regard to delamination and to the mechanical strength of the tested specimens. To determine the delamination values, the X-ray technique was used. Existing knowledge of this process made it possible to define certain parameters (for example, the exposure time of the plates to the contrast liquid) that made the procedure for obtaining images of the drilled plates accessible. By importing these images into drawing software (in this case, AutoCad), it was possible to measure the delaminated areas and obtain values for the delamination factor of each hole. Once this process was completed, all the plates were subjected to bearing tests in order to evaluate how the machining parameters affected the mechanical strength of the material.
In summary, the objectives of this work are: to characterise the cutting conditions in composite materials, more specifically in carbon fibre reinforced plastics with an epoxy matrix (CFRP); to characterise the typical damage caused by drilling these materials; to develop non-destructive (X-ray) analysis for evaluating the damage caused by drilling; to study existing models based on linear elastic fracture mechanics (LEFM); and to define a set of ideal machining parameters that minimise the resulting damage, taking into account the results of the force tests, the non-destructive analysis and the comparison with existing, known damage models.
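A common way to quantify the drilling damage discussed above is the delamination factor. One widely used definition (Chen's) is the ratio between the maximum diameter of the delaminated zone and the nominal hole diameter, and an area-based variant converts the areas measured in the CAD software into equivalent diameters. A sketch with illustrative (made-up) measurements:

```python
import math

def delamination_factor(d_max, d_nominal):
    """Chen's delamination factor: Fd = Dmax / D0 (Fd = 1 means no damage)."""
    return d_max / d_nominal

def delamination_factor_from_areas(a_damage, a_nominal):
    """Area-based equivalent: convert circular areas to equivalent diameters."""
    return math.sqrt(a_damage / a_nominal)

# Illustrative values: a 6 mm delaminated zone around a 5 mm hole.
fd = delamination_factor(6.0, 5.0)
```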
Abstract:
BACKGROUND: The hospital environment has many occupational health risks that predispose healthcare workers to various kinds of work accidents. OBJECTIVE: This study aims to compare different methods for work accident investigation and to verify their suitability in the hospital environment. METHODS: For this purpose, we selected three types of accidents, related to needle sticks, worker falls, and inadequate effort/movement during the mobilisation of patients. A total of thirty accidents were analysed with six different work accident investigation methods. RESULTS: The results showed that organisational factors were the group of causes with the greatest impact on the three types of work accidents. CONCLUSIONS: The methods compared in this paper are applicable and appropriate for work accident investigation in hospitals. However, the Registration, Research and Analysis of Work Accidents (RIAAT) method proved to be an optimal technique to use in this context.
Abstract:
All over the world, the liberalization of electricity markets, which follows different paradigms, has created new challenges for those involved in this sector. In order to respond to these challenges, electric power systems underwent a significant restructuring of their mode of operation and planning. This restructuring resulted in a considerable increase in the competitiveness of the electric sector. In particular, the Ancillary Services (AS) market has been the target of constant renovation of its operation mode, as it is a market for the trading of services whose main objective is to ensure the operation of electric power systems with appropriate levels of stability, safety, quality, equity and competitiveness. Accordingly, with the increasing penetration of distributed energy resources, including distributed generation, demand response, storage units and electric vehicles, it is essential to develop new, smarter and hierarchical methods for the operation of electric power systems. As these resources are mostly connected to the distribution network, it is important to consider the introduction of this kind of resource into AS delivery in order to achieve greater reliability and cost efficiency in the operation of electric power systems. The main contribution of this work is the design and development of mechanisms and methodologies for the AS market and for the joint energy and AS market, considering different management entities for the transmission and distribution networks. The models developed in this work consider the most common AS in the liberalized market environment: Regulation Down, Regulation Up, Spinning Reserve and Non-Spinning Reserve.
The presented models consider different rules and modes of operation, such as the division of the market into network areas, which allows the congestion management of the interconnections between areas, or the ancillary service cascading process, which allows requirements for lower-quality AS to be met by higher-quality AS, ensuring a better economic performance of the market. A major contribution of this work is the development of an innovative market clearing methodology for the joint energy and AS market, able to ensure viable and feasible solutions in markets where there are technical constraints in the transmission network involving its division into areas or regions. The proposed method is based on the determination of Bialek topological factors and considers the contribution of the dispatch of all generation-increase services (energy, Regulation Up, Spinning and Non-Spinning reserves) to network congestion. The use of Bialek factors in each iteration of the proposed methodology makes it possible to limit the bids in the market while ensuring that the solution is feasible in any context of system operation. Another important contribution of this work is the modelling of the contribution of distributed energy resources to ancillary services. For this purpose, a Virtual Power Player (VPP) is considered, which aggregates, manages and interacts with distributed energy resources. The VPP manages all the aggregated agents and is able to supply AS to the system operator, with the main purpose of participating in the electricity market. In order to ensure their participation in the AS, the VPP should have a set of contracts with the agents that include diversified rules adapted to each kind of distributed resource. All the methodologies developed and implemented in this work have been integrated into the MASCEM simulator, a multi-agent-based simulator that allows the study of the complex operation of electricity markets.
The developed methodologies thus allow the simulator to cover more operation contexts of the present and future electricity market. In this way, this dissertation makes a major contribution to AS market simulation, based on models and mechanisms currently used in several real markets, as well as introducing innovative market clearing methodologies for the joint energy and AS market. This dissertation presents five case studies, each consisting of multiple scenarios. The first case study illustrates the AS market simulation considering several bids from market players. The simulation of the joint energy and ancillary services market is presented in the second case study. The third case study develops a comparison between the simulation of the joint market methodology, in which the players' bids for ancillary services are considered by network area, and a reference methodology. The fourth case study presents the simulation of the joint market methodology based on Bialek topological distribution factors, applied to a transmission network with 7 buses managed by a TSO. The last case study presents the simulation of a joint market model that considers the aggregation of small players by a VPP, as well as the complex contracts related to these entities. This case study comprises a distribution network with 33 buses managed by the VPP, which includes several kinds of distributed resources, such as photovoltaics, CHP, fuel cells, wind turbines, biomass, small hydro, municipal solid waste, demand response, and storage units.
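The cascading idea can be illustrated with a toy merit-order clearing in which capacity left over from a higher-quality service is allowed to serve the requirements of lower-quality ones. This is a deliberately simplified sketch, with made-up bids; the actual models in this work clear the market with network constraints and Bialek factors:

```python
def clear_with_cascading(requirements, bids):
    """Merit-order clearing with cascading of higher-quality reserve.

    requirements: dict ordered from highest- to lowest-quality service,
                  mapping service name -> MW required.
    bids: {service: [(price, quantity), ...]} offers for each service.
    Returns {service: (cleared_MW, cost)}.
    """
    cleared = {}
    pool = []  # unused capacity cascading down from higher-quality services
    for service, need in requirements.items():
        offers = sorted(pool + bids.get(service, []))  # merit order by price
        accepted, cost, pool = 0.0, 0.0, []
        for price, qty in offers:
            take = min(qty, need - accepted)
            accepted += take
            cost += take * price
            if qty > take:
                pool.append((price, qty - take))  # cascades to next service
        cleared[service] = (accepted, cost)
    return cleared

# Toy example: leftover Regulation Up capacity (5 MW at price 10) cascades
# down and displaces part of the more expensive Spinning Reserve offer.
result = clear_with_cascading(
    {"regulation_up": 10.0, "spinning": 10.0},
    {"regulation_up": [(10.0, 15.0)], "spinning": [(20.0, 10.0)]},
)
```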
Abstract:
Sulfadimethoxine (SDM) is one of the drugs often used in the aquaculture sector to prevent the spread of disease in freshwater fish aquaculture. Its spread through soil and surface water can contribute to an increase in bacterial resistance. It is therefore important to control this product in the environment. This work proposes a simple and low-cost potentiometric device to monitor the levels of SDM in aquaculture waters, thus avoiding its unnecessary release into the environment. The device combines a micropipette tip with a PVC membrane selective to SDM, prepared from an appropriate cocktail, and an inner reference solution. The membrane includes 1% of a porphyrin derivative acting as ionophore and a small amount of a lipophilic cationic additive (corresponding to 0.2% in molar ratio). The composition of the inner solution was optimised with regard to the kind and/or concentration of the primary ion, chelating agent and/or a specific interfering charged species, in different concentration ranges. Electrodes constructed with inner reference solutions of 1 × 10−8 mol/L SDM and 1 × 10−4 mol/L chromate ion showed the best analytical features. A near-Nernstian response was obtained, with a slope of −54.1 mV/decade and a remarkably low detection limit of 7.5 ng/mL (2.4 × 10−8 mol/L) when compared with other electrodes of the same type. The reproducibility, stability and response time are good, and even better than those obtained with liquid-contact ISEs. Recovery values of 98.9% were obtained from the analysis of aquaculture water samples.
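For context, the reported slope of −54.1 mV/decade can be compared with the theoretical Nernstian slope for a monovalent anion at 25 °C, −2.303RT/F ≈ −59.2 mV/decade, which is why the response is described as near-Nernstian:

```python
import math

# Physical constants (SI units)
R = 8.314462618   # J/(mol K), gas constant
F = 96485.332     # C/mol, Faraday constant
T = 298.15        # K (25 degrees C)
z = -1            # charge of the monovalent sulfadimethoxine anion

# Nernstian slope in mV per decade of activity: 2.303 * R * T / (z * F)
slope_mV = 1000 * math.log(10) * R * T / (z * F)
# About -59.2 mV/decade, against the measured -54.1 mV/decade.
```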
Abstract:
Currently, due to the widespread use of computers and the internet, students are trading libraries for the World Wide Web and laboratories for simulation programs. In most courses, simulators are made available to students and can be used to prove theoretical results or to test hardware or a product under development. Although this is an interesting solution (low cost, easy and fast to apply in course work), it has major disadvantages. As everything is currently being done with, or in, a computer, students are losing the feel for the real values of the magnitudes involved. For instance, in engineering studies, and mainly in the first years, students need to learn electronics, algorithmics, mathematics and physics. All of these areas can use numerical analysis software, simulation software or spreadsheets, and in the majority of cases the data used are either simulated or random numbers, but real data could be used instead. For example, if a course uses numerical analysis software and needs a dataset, the students can learn to manipulate arrays. Also, when using spreadsheets to build graphics, instead of using a random table, students could use a real dataset based, for instance, on the room temperature and its variation across the day. In this work we present a framework with a simple interface that allows it to be used by different courses in which computers support the teaching/learning process, giving students a more realistic feel by using real data. The framework is based on a set of low-cost sensors for different physical magnitudes, e.g. temperature, light and wind speed, which are either connected to a central server that the students access over an Ethernet protocol or connected directly to the student's computer/laptop. These sensors use the available communication ports, such as serial ports, parallel ports, Ethernet or Universal Serial Bus (USB).
Since a central server is used, students are encouraged to use the sensor values in their different courses and consequently in different types of software, such as numerical analysis tools, spreadsheets, or simply inside any programming language whenever a dataset is needed. To this end, small pieces of hardware were developed, each containing at least one sensor and using a different type of computer communication. As long as the sensors are attached to a server connected to the internet, these tools can also be shared between different schools. This allows sensors that are not available in a given school to be used by fetching the values from other places that share them. Another remark is that students in the more advanced years, with (theoretically) more know-how, can use courses related to electronics development to build new sensor modules and expand the framework further. The final solution is very interesting: low cost, simple to develop, and flexible, allowing the same materials to be used in several courses and bringing real-world data into the students' computer work.
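A framework like this needs only a very small wire format between the sensor boards and the server. As a hypothetical illustration (the `SENSOR value unit` line format below is invented for this sketch, not taken from the work itself), parsing a reading shared by the server could look like:

```python
def parse_reading(line):
    """Parse a hypothetical 'SENSOR value unit' line, e.g. 'TEMP 23.5 C'."""
    name, value, unit = line.strip().split()
    return {"sensor": name, "value": float(value), "unit": unit}

# A course exercise could then collect readings into a dataset for a
# spreadsheet or numerical analysis tool:
readings = [parse_reading(l) for l in ["TEMP 23.5 C", "WIND 4.2 m/s"]]
```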
Abstract:
This work presents an automatic calibration method for a vision-based external underwater ground-truth positioning system. These systems are a relevant tool for benchmarking and assessing the quality of research in underwater robotics applications. A stereo vision system can, in suitable environments such as test tanks or in clear water conditions, provide accurate positioning with low cost and flexible operation. In this work we present a two-step extrinsic camera parameter calibration procedure that reduces the setup time and provides accurate results. The proposed method uses a planar homography decomposition to determine the relative camera poses, and the determination of vanishing points of detected lines in the image to obtain the global pose of the stereo rig in the reference frame. This method was applied to our external vision-based ground-truth system at the INESC TEC/Robotics test tank. Results are presented in comparison with a precise calibration performed using points obtained from an accurate 3D LIDAR model of the environment.
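The vanishing-point step relies on a standard projective fact: in homogeneous image coordinates, the line through two points is their cross product, and the intersection of two lines is again their cross product, so the vanishing point of a pencil of parallel scene lines is the intersection of their images. A minimal sketch, with illustrative image points rather than the system's real detections:

```python
def cross(a, b):
    """Cross product of two homogeneous 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def line_through(p, q):
    """Homogeneous line through two image points (x, y) -> (a, b, c)."""
    return cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def intersection(l1, l2):
    """Intersection of two homogeneous lines, normalised to (x, y)."""
    x, y, w = cross(l1, l2)
    return (x / w, y / w)   # w == 0 would mean parallel image lines

# Two detected image lines, y = x and y = 1, meet at the point (1, 1);
# for images of parallel scene lines this point is the vanishing point.
vp = intersection(line_through((0, 0), (1, 1)), line_through((0, 1), (2, 1)))
```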
Abstract:
This work presents a low-cost RTK-GPS system for the localization of unmanned surface vehicles. The system is based on the use of standard low-cost L1-band receivers and on the RTKlib open source software library. Mission scenarios with multiple robotic vehicles are addressed, such as those envisioned in the ICARUS search and rescue case, where having a moving RTK base on a large USV and multiple smaller vehicles acting as rovers in a local communication network allows for high-quality local relative localization. The approach is validated in operational conditions, with results presented for the moving-base scenario. The system was implemented in the SWIFT USV, with the ROAZ autonomous surface vehicle acting as a moving base. This setup allows missions to be performed in a wider range of environments and applications, such as precise 3D environment modelling in confined areas and multiple-robot operations.
Abstract:
The high consumption of water, combined with the scarcity of this resource, has prompted the study of water reuse/recycling alternatives that reduce consumption and minimise industrial dependence on it. Monitoring and evaluating water consumption at the industrial level is essential to ensure the sustainable management of water resources, and this is the objective of this dissertation. The alternatives found in the industrial unit under study were the replacement of the sanitary equipment and the reuse of the treated effluent for washing and/or direct-contact cooling operations. Most of the sanitary equipment is not efficient, so its replacement by a lower-consumption system was proposed, which will allow savings of 30% in water consumption, corresponding to €12,149.37/year, with the return on investment estimated at 3 months. The industrial effluent at the inlet of the WWTP and at the different treatment stages (primary coagulation/flocculation treatment; secondary or biological treatment in an SBR; tertiary coagulation/flocculation treatment) was characterised by measuring temperature, pH and dissolved oxygen, and by determining colour, turbidity, total suspended solids (TSS), total nitrogen, chemical oxygen demand (COD), five-day biochemical oxygen demand (BOD5) and the BOD5/COD ratio. This characterisation made it possible to evaluate the raw industrial effluent, which is characterised by an alkaline pH (8.3 ± 1.7); low conductivity (451 ± 200.2 μS/cm); high turbidity (11,255 ± 8,812.8 FTU); high apparent colour (63,670 ± 42,293.4 PtCo) and true colour (33,621 ± 19,547.9 PtCo); high levels of COD (24,753 ± 11,806.7 mg/L O2), TSS (5,164 ± 3,845.5 mg/L) and total nitrogen (718 mg/L); and a low biodegradability index (BOD5/COD ratio of 1.4).
This study showed that the overall treatment efficiency was 82% for turbidity removal, 83% for apparent colour removal, 96% for true colour removal, 85% for COD removal and 30% for TSS removal. The removal efficiencies associated with the primary treatment, with regard to turbidity, apparent colour, COD and TSS, are lower than those reported in the literature for the same type of treatment of similar effluents. The removal efficiencies obtained in the secondary treatment (turbidity, apparent colour, COD and TSS) are lower than those of the primary treatment, so the first stage of the treatment process was optimised. In this optimisation study, the influence of five coagulants (aluminium sulphate, PAX XL-10, PAX 18, iron chloride, and the combination of PAX 18 with iron sulphate) and six flocculants (Superfloc A 150, Superfloc A 130, PA 1020, Ambifloc 560, Ambifloc C58 and Rifloc 54) on the physico-chemical treatment of the effluent was studied. PAX 18 and Ambifloc 560 UUJ showed the highest removal efficiencies (99.85% for colour, 99.87% for turbidity, 90.12% for COD and 99.87% for TSS). The cost associated with this treatment is €1.03/m3. Compared with the quality criteria in the ERSAR technical guide, only the COD parameter exceeds the limit; however, the value obtained makes it possible to reduce the costs associated with subsequent treatment to remove the COD remaining in the treated effluent.
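The removal efficiencies above follow directly from inlet and outlet concentrations, η = (Cin − Cout)/Cin × 100. A small sketch; the outlet value below is illustrative, chosen to reproduce the reported 82% turbidity removal from the measured mean inlet turbidity:

```python
def removal_efficiency(c_in, c_out):
    """Overall removal efficiency in percent: (Cin - Cout) / Cin * 100."""
    return (c_in - c_out) / c_in * 100.0

# Mean raw-effluent turbidity from the study; the outlet value is illustrative.
eta = removal_efficiency(11255.0, 2026.0)   # close to the reported 82 %
```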
Abstract:
This dissertation aimed to analyse the technical feasibility of using high-temperature conductors in MV overhead lines, to identify advantages, to analyse drawbacks, and to establish an average-cost comparison with conventional solutions. A real case from EDP Distribuição was studied, consisting of the need to increase the power transfer capacity of the 15 kV Espinho-Sanguedo overhead line. Two options were weighed: converting the single line into a double line with 160 mm2 aluminium-steel (AA) conductors, or the alternative, innovative solution of replacing the existing conductors with 182 mm2 ACCC high-temperature conductors. To this end, calculations were performed and a decision-support tool was created to validate them, with the aim of later applying it to medium-voltage overhead lines throughout the country so that, whenever necessary, a technical assessment can be carried out in a systematic and structured way. This work identifies the advantages, reports the drawbacks, and establishes an average-cost comparison between the use of high-temperature conductors and conventional solutions. Before the specific case of the Espinho-Sanguedo overhead line could be studied, it was necessary to deepen the state of the art regarding the comparison between the ACCC high-temperature cable and the conventional ACSR cable, the latter being the most widely used in MV overhead lines. High-temperature cables have brought innovations to power transmission, so a more detailed study of their construction was needed, highlighting their core made of a carbon fibre and glass fibre composite.
The advantages and disadvantages of the high-temperature cable were also analysed, as well as the situations where its application may be advantageous, in order to take advantage of its characteristics, notably high operating temperatures and reduced sags. To design a medium-voltage overhead line it is necessary to consider the legislation in force and the environmental and economic aspects, while respecting and guaranteeing the premises of the electrical and mechanical calculations. Economically, this type of cable (ACCC) is more expensive than conventional ones; however, the study showed that its technical implementation is advantageous in overhead lines with high power transfer capacity, especially where double lines or single lines with large cross-sections would otherwise be required. Due to their mechanical characteristics, these cables make it possible to improve the dimensioning of the lines, potentially reducing the number of supports to be installed and their required robustness, and making assembly easier. These advantages translate into lower environmental impacts and, above all, fewer constraints with the owners of the land where the supports are installed.
Abstract:
In the present work, the development of a genosensor for the event-specific detection of MON810 transgenic maize is proposed. Taking advantage of nanostructuration, a cost-effective three-dimensional electrode was fabricated, and a ternary monolayer containing a dithiol, a monothiol and the thiolated capture probe was optimised to minimise nonspecific signals. A sandwich assay format was selected as a way of precluding the inefficient hybridization associated with stable secondary target structures. A comparison between the analytical performance of the Au nanostructured electrodes and commercially available screen-printed electrodes highlighted the superior performance of the nanostructured ones. Finally, the genosensor was effectively applied to detect the transgenic sequence in real samples, showing its potential for future quantitative analysis.
Abstract:
Due to their detrimental effects on human health, scientific interest in ultrafine particles (UFP) has been increasing, but the available information is far from comprehensive. Children, who represent one of the most susceptible subpopulations, spend the majority of their time in schools and homes. Thus, the aims of this study are to (1) assess indoor levels of particle number concentrations (PNC) in the ultrafine and fine (20–1000 nm) range in school and home environments and (2) compare the respective indoor dose rates for 3- to 5-yr-old children. Indoor particle number concentrations in the 20–1000 nm range were consecutively measured over 56 d at two preschools (S1 and S2) and three homes (H1–H3) situated in Porto, Portugal. At both preschools, different indoor microenvironments, such as classrooms and canteens, were evaluated. The results showed that the total mean indoor PNC, as determined for all indoor microenvironments, was significantly higher at S1 than at S2. At the homes, indoor PNC levels, with means ranging between 1.09 × 10⁴ and 1.24 × 10⁴ particles/cm3, were 10–70% lower than the total indoor means of the preschools (1.32 × 10⁴ to 1.84 × 10⁴ particles/cm3). Nevertheless, the estimated particle dose rates were 1.3- to 2.1-fold higher at the homes than at the preschools, mainly due to the longer period of time spent at home. The daily activity patterns of 3- to 5-yr-old children significantly influenced their overall particle dose rates. Therefore, future studies focusing on the health effects of airborne pollutants need to account for children's exposure in different microenvironments, such as homes, schools and transportation modes, in order to obtain an accurate representation of children's overall exposure.
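The dose-rate comparison comes from combining the measured concentrations with an inhalation rate and the time spent in each microenvironment, dose ≈ C × IR × t. A sketch with illustrative exposure parameters (the inhalation rate and the hours are assumptions for the example, not values taken from the study):

```python
def inhaled_particles(pnc_per_cm3, inhalation_m3_per_h, hours):
    """Number of particles inhaled: concentration x inhalation rate x time."""
    cm3_per_m3 = 1e6
    return pnc_per_cm3 * cm3_per_m3 * inhalation_m3_per_h * hours

# PNC values near the reported means; inhalation rate and durations are
# illustrative assumptions for this sketch.
home_dose = inhaled_particles(1.16e4, 0.5, 15.0)
preschool_dose = inhaled_particles(1.58e4, 0.5, 8.0)
ratio = home_dose / preschool_dose   # longer time at home dominates
```

Even with a lower concentration at home, the longer residence time yields a higher home dose, which is the mechanism behind the reported 1.3- to 2.1-fold difference.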