918 results for Benefit cost analysis
Abstract:
On-chip debug (OCD) features are frequently available in modern microprocessors. Their contribution to shortening time-to-market justifies the industry investment in this area, where a number of competing or complementary proposals are available or under development, e.g. NEXUS, CJTAG, IJTAG. The controllability and observability features of OCD infrastructures offer a valuable toolbox that can be used well beyond the debugging arena, improving the return on investment by spreading its cost across a wider spectrum of application areas. This paper discusses the use of OCD features for validating fault-tolerant architectures, and in particular the efficiency of various fault injection methods provided by enhanced OCD infrastructures. The reference data for our comparative study was captured on a workbench comprising the 32-bit Freescale MPC-565 microprocessor, an iSYSTEM IC3000 debugger (iTracePro version) and the Winidea 2005 debugging package. All enhanced OCD infrastructures were implemented in VHDL and the results were obtained by simulation within the same fault injection environment. The focus of this paper is on the comparative analysis of the experimental results obtained for various OCD configurations and debugging scenarios.
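As a purely illustrative aside, the following minimal Python sketch shows the shape of the fault injection campaigns such OCD-based setups automate: run a golden copy of a workload, flip a single bit of its state (as a debugger would via debug access), re-run, and compare outputs. The toy workload and all names are hypothetical stand-ins for the MPC-565 workbench described above.

```python
import random

def workload(state):
    # Toy computation standing in for the target application.
    acc = 0
    for x in state:
        acc = (acc + x * x) % 65536
    return acc

def inject_bit_flip(state, word_idx, bit_idx):
    # Emulates what an OCD-based injector does through debug access:
    # halt, corrupt one bit of a memory word or register, resume.
    corrupted = list(state)
    corrupted[word_idx] ^= (1 << bit_idx)
    return corrupted

random.seed(0)
golden_state = [random.randrange(65536) for _ in range(64)]
golden_result = workload(golden_state)

detected = 0
runs = 1000
for _ in range(runs):
    w = random.randrange(len(golden_state))
    b = random.randrange(16)
    if workload(inject_bit_flip(golden_state, w, b)) != golden_result:
        detected += 1

print(f"{detected}/{runs} injected faults changed the output")
```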
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Environmental Engineering, specialization in Environmental Systems Management
Abstract:
Master's degree in Electrical and Computer Engineering - Specialization Area of Autonomous Systems
Abstract:
Final Master's project for obtaining the degree of Master in Mechanical Engineering
Abstract:
Wireless Body Area Network (WBAN) is the most convenient, cost-effective, accurate, and non-invasive technology for e-health monitoring. The performance of a WBAN may be disturbed when coexisting with other wireless networks. Accordingly, this paper provides a comprehensive study and in-depth analysis of coexistence issues and interference mitigation solutions in WBAN technologies. A thorough survey of state-of-the-art research on WBAN coexistence issues is conducted. The survey classifies, discusses, and compares the studies according to the parameters used to analyze the coexistence problem. Solutions suggested by the studies are then classified according to the techniques they follow, and their shortcomings are identified. Moreover, the coexistence problem in WBAN technologies is analyzed mathematically, and formulas are derived for the probability of successful channel access for different wireless technologies in the presence of an interfering network. Finally, extensive simulations are conducted using OPNET with several real-life scenarios to evaluate the impact of coexistence interference on different WBAN technologies. In particular, three main WBAN wireless technologies are considered: IEEE 802.15.6, IEEE 802.15.4, and low-power WiFi. The mathematical analysis and the simulation results are discussed, and the impact of an interfering network on the different wireless technologies is compared and analyzed. The results show that an interfering network (e.g., standard WiFi) affects the performance of a WBAN and may disrupt its operation. In addition, using low-power WiFi for WBANs is investigated and shown to be a feasible option compared to other wireless technologies.
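As an illustration of the kind of channel-access probability the abstract mentions (not the paper's actual derivation), the following Python sketch uses a simple slotted model in which each interferer occupies a slot independently of the others; p_tx and the duty cycles are hypothetical parameters.

```python
def p_success(p_tx, interferer_duty_cycles):
    """P(node transmits AND no interferer is active in the same slot),
    assuming independent slotted access."""
    p = p_tx
    for p_i in interferer_duty_cycles:
        p *= (1.0 - p_i)
    return p

# Example: a WBAN node transmitting in 10% of slots next to a WiFi
# interferer active 30% of the time and a second WBAN active 5% of the time.
print(p_success(0.10, [0.30, 0.05]))  # -> 0.0665
```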
Abstract:
Dissertation for obtaining the degree of Master in Civil Engineering
Abstract:
Hyperspectral imaging has become one of the main topics in remote sensing. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating data volumes of several GBs per flight. This high spectral resolution can be used for object detection and to discriminate between different objects based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks in hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-hungry, which compromises their use in applications under on-board constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for on-board data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses.
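For context, the following NumPy sketch illustrates the linear mixing model that SISAL inverts; with known endmembers, abundances follow from least squares, whereas SISAL's actual contribution is estimating the endmember matrix itself, without pure pixels, via its split augmented Lagrangian. All sizes and data here are synthetic.

```python
import numpy as np

# Linear mixing model: each pixel y = M @ a + noise, where the columns
# of M are endmember spectra and a holds non-negative abundances.
rng = np.random.default_rng(0)
bands, endmembers, pixels = 50, 3, 1000
M = rng.random((bands, endmembers))               # hypothetical endmembers
A = rng.dirichlet(np.ones(endmembers), pixels).T  # abundances sum to 1
Y = M @ A + 0.01 * rng.standard_normal((bands, pixels))

# With M known, abundances follow from least squares; SISAL's harder job
# is recovering M itself when no pure pixels exist in the scene.
A_hat, *_ = np.linalg.lstsq(M, Y, rcond=None)
print("mean abundance error:", np.abs(A_hat - A).mean())
```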
Abstract:
In the current economic context, the costs caused by accidents must be taken into account by all managers of organisations, particularly in the health sector. The economic analysis in this study therefore aims to draw attention to the economic impact of occupational accidents in a hospital setting and to motivate managers to consider the cost-benefit of prevention. Some costs are easily observed, such as the time lost on the day of the accident, both by the injured worker and by the co-workers who assist them, the expenses of a visit to the emergency service, production stoppage, training of replacement labour, substitution of workers, overtime payments, workers' recovery, wages paid to injured workers, administrative expenses, and the increase in the insurance premium, among others. Other costs are less evident and therefore hard to quantify, such as the deterioration of the company's image and the emotional impact on co-workers, which translates into productivity losses. The economic analysis was carried out by defining several variables and several cost categories belonging to the same domain. This project intends to analyse the global cost of occupational accidents from three distinct perspectives, variability, imputability and responsibility, so as to obtain the marginal cost due to the occurrence of a new accident, the amount of costs borne by companies, and the unit costs according to the nature and location of the injury.
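As an illustration of the kind of aggregation described (not the study's actual data), the following Python sketch groups hypothetical accident cost records by who bears the cost and by the nature and location of the injury.

```python
from collections import defaultdict

# All figures and categories below are hypothetical.
accidents = [
    {"cost": 420.0, "borne_by": "company", "nature": "cut", "location": "hand"},
    {"cost": 1300.0, "borne_by": "insurer", "nature": "fracture", "location": "arm"},
    {"cost": 250.0, "borne_by": "company", "nature": "cut", "location": "hand"},
]

by_bearer = defaultdict(float)   # costs grouped by who assumes them
by_injury = defaultdict(list)    # costs grouped by nature and location
for a in accidents:
    by_bearer[a["borne_by"]] += a["cost"]
    by_injury[(a["nature"], a["location"])].append(a["cost"])

print("costs borne by companies:", by_bearer["company"])
for key, costs in by_injury.items():
    print("unit cost", key, "=", sum(costs) / len(costs))
```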
Risk Acceptance in the Furniture Sector: Analysis of Acceptance Level and Relevant Influence Factors
Abstract:
Risk acceptance has been broadly discussed in relation to hazardous risk activities and/or technologies. A better understanding of risk acceptance in occupational settings is also important; however, studies on this topic are scarce. It seems important to understand the level of risk that stakeholders consider sufficiently low, how stakeholders form their opinion about risk, and why they adopt a certain attitude toward risk. Accordingly, the aim of this study is to examine risk acceptance in regard to occupational accidents in furniture industries. The safety climate analysis was conducted through the application of the Safety Climate in Wood Industries questionnaire. Judgments about risk acceptance, trust, risk perception, benefit perception, emotions, and moral values were measured. Several models were tested to explain occupational risk acceptance. The results showed that the level of risk acceptance decreased as the risk level increased. High-risk and death scenarios were assessed as unacceptable. Risk perception, emotions, and trust had an important influence on risk acceptance. Safety climate was correlated with risk acceptance and other variables that influence risk acceptance. These results are important for the risk assessment process in terms of defining risk acceptance criteria and strategies to reduce risks.
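As a purely illustrative sketch of the kind of explanatory model the study tests (the actual model specification is not given in the abstract), the following Python snippet fits a linear regression of risk acceptance on risk perception, emotions, and trust using synthetic data.

```python
import numpy as np

# Synthetic respondents; true coefficients are chosen for illustration only.
rng = np.random.default_rng(1)
n = 200
perception = rng.normal(size=n)
emotions = rng.normal(size=n)
trust = rng.normal(size=n)
acceptance = (-0.6 * perception - 0.3 * emotions + 0.4 * trust
              + 0.2 * rng.normal(size=n))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), perception, emotions, trust])
beta, *_ = np.linalg.lstsq(X, acceptance, rcond=None)
print("intercept and coefficients:", np.round(beta, 2))
```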
Abstract:
The last decade has witnessed a major shift towards the deployment of embedded applications on multi-core platforms. However, real-time applications have not been able to fully benefit from this transition, as the computational gains offered by multi-cores are often offset by performance degradation due to shared resources, such as main memory. To use multi-core platforms efficiently for real-time systems, it is hence essential to derive tight bounds on the interference incurred when accessing shared resources. Although there has been much recent work in this area, a remaining key problem is to address the diversity of memory arbiters in the analysis, so that it is applicable to a wide range of systems. This work handles diverse arbiters by proposing a general framework to compute the maximum interference caused by the shared memory bus and its impact on the execution time of the tasks running on the cores, considering different bus arbiters. Our novel approach clearly demarcates the arbiter-dependent and arbiter-independent stages in the analysis of these upper bounds. The arbiter-dependent phase takes the arbiter and the task memory-traffic pattern as inputs and produces a model of the availability of the bus to a given task. Then, based on the availability of the bus, the arbiter-independent phase determines the worst-case request-release scenario that maximizes the interference experienced by the tasks due to contention for the bus. We show that the framework addresses the diversity problem by applying it to a memory bus shared under a fixed-priority arbiter, a time-division multiplexing (TDM) arbiter, and an unspecified work-conserving arbiter, using applications from the MediaBench test suite. We also experimentally evaluate the quality of the analysis by comparing it with a state-of-the-art TDM analysis approach, consistently showing a considerable reduction in maximum interference.
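As a concrete, simplified instance of the arbiter-dependent stage (a textbook TDM bound, not the paper's exact framework), the following Python sketch bounds the bus interference suffered by a core that owns one slot of a TDM schedule.

```python
def tdm_worst_case_interference(n_requests, n_slots, slot_len):
    # A core owning one of n_slots slots (each slot_len cycles long) can,
    # in the worst case, just miss its slot and wait for the other
    # n_slots - 1 slots before each of its requests is served.
    per_request_wait = (n_slots - 1) * slot_len
    return n_requests * per_request_wait

# Example: a task issuing 120 memory requests on a 4-core TDM bus
# with 10-cycle slots (all parameters hypothetical).
print(tdm_worst_case_interference(120, 4, 10),
      "cycles of worst-case bus interference")
```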
Abstract:
Sulfadimethoxine (SDM) is a drug often used in the aquaculture sector to prevent the spread of disease in freshwater fish aquaculture. Its spread through soil and surface water can contribute to an increase in bacterial resistance. It is therefore important to control this product in the environment. This work proposes a simple and low-cost potentiometric device to monitor the levels of SDM in aquaculture waters, thus avoiding its unnecessary release into the environment. The device combines a micropipette tip with a PVC membrane selective to SDM, prepared from an appropriate cocktail, and an inner reference solution. The membrane includes 1% of a porphyrin derivative acting as ionophore and a small amount of a lipophilic cationic additive (corresponding to 0.2% in molar ratio). The composition of the inner solution was optimized with regard to the kind and/or concentration of primary ion, chelating agent and/or a specific interfering charged species, over different concentration ranges. Electrodes constructed with inner reference solutions of 1 × 10−8 mol/L SDM and 1 × 10−4 mol/L chromate ion showed the best analytical features. A near-Nernstian response was obtained, with a slope of −54.1 mV/decade and a detection limit of 7.5 ng/mL (2.4 × 10−8 mol/L) that is remarkably low compared with other electrodes of the same type. The reproducibility, stability and response time are good, and even better than those obtained with liquid-contact ISEs. Recovery values of 98.9% were obtained from the analysis of aquaculture water samples.
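As an illustration of how a near-Nernstian calibration such as the reported −54.1 mV/decade slope is used in practice, the following Python sketch maps a measured potential back to concentration via E = E0 + S·log10(c); the intercept E0 is a hypothetical calibration value, not one from the paper.

```python
S = -54.1    # mV/decade, slope reported in the abstract
E0 = -120.0  # mV, hypothetical calibration intercept

def concentration_from_potential(E_mV):
    # Invert the Nernstian relation E = E0 + S * log10(c).
    return 10 ** ((E_mV - E0) / S)

# A reading 290 mV above the intercept corresponds to roughly 4.4e-6 mol/L.
print(f"{concentration_from_potential(E0 + 290):.2e} mol/L")
```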
Abstract:
Currently, due to the widespread use of computers and the internet, students are trading libraries for the World Wide Web and laboratories for simulation programs. In most courses, simulators are made available to students and can be used to verify theoretical results or to test hardware/products under development. Although this is an attractive solution, being a low-cost, easy and fast way to carry out coursework, it has major disadvantages. As everything is currently done with/in a computer, students are losing the "feel" for the real values of physical magnitudes. For instance, in engineering studies, and mainly in the first years, students need to learn electronics, algorithmics, mathematics and physics. All of these areas can use numerical analysis software, simulation software or spreadsheets, and in the majority of cases the data used are either simulated or random numbers, but real data could be used instead. For example, if a course uses numerical analysis software and needs a dataset, the students can learn to manipulate arrays. Also, when using spreadsheets to build graphics, instead of using a random table, students could use a real dataset based, for instance, on the room temperature and its variation across the day. In this work we present a framework with a simple interface that allows it to be used by different courses in which computers are central to the teaching/learning process, in order to give students a more realistic feeling by using real data. The framework is based on a set of low-cost sensors for different physical magnitudes, e.g. temperature, light, wind speed, which are either connected to a central server that students access over an Ethernet protocol, or connected directly to the student's computer/laptop. These sensors use the available communication ports, such as serial ports, parallel ports, Ethernet or Universal Serial Bus (USB). Since a central server is used, students are encouraged to use sensor values in their different courses and consequently in different types of software, such as numerical analysis tools, spreadsheets, or simply inside any programming language whenever a dataset is needed. To this end, small pieces of hardware were developed, each containing at least one sensor and using different types of computer communication. As long as the sensors are attached to a server connected to the internet, these tools can also be shared between different schools: sensors that are not available at a given school can be used by fetching the values from other places that share them. Moreover, students in the more advanced years, with (theoretically) more know-how, can use the courses related to electronics development to build new sensor modules and expand the framework further. The final solution is very attractive: low cost, simple to develop, and flexible, since the same materials can be reused in several courses, bringing real-world data into students' computer work.
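As a minimal sketch of the central-server idea (the port number and the stand-in sensor are hypothetical), the following Python snippet exposes the latest sensor reading over TCP so that any course tool can fetch real data; a real deployment would replace read_sensor() with e.g. serial-port reads from a probe.

```python
import socket
import random

def read_sensor():
    # Placeholder for a real read from a serial/USB temperature probe.
    return 20.0 + random.uniform(-0.5, 0.5)

def serve(host="0.0.0.0", port=5050):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            # Send one reading per connection, then close.
            conn, _ = srv.accept()
            with conn:
                conn.sendall(f"{read_sensor():.2f}\n".encode())

if __name__ == "__main__":
    serve()  # students can fetch a reading with: nc <server> 5050
```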
Abstract:
4th International Conference on Climbing and Walking Robots - From Biology to Industrial Applications
Abstract:
This work aims to show that the application of energy conservation measures (ECM) can reduce the intensity of raw-material use in the construction of a building. More concretely, it can reduce the use of materials and equipment and, as a consequence, reduce the economic effort of the initial investment; it may subsequently also reduce energy use during the building's operating period. The application of ECM in the construction sector has become common practice in new buildings and in buildings undergoing major rehabilitation. This practice stems from the mandatory regulatory requirements on optimizing the energy performance of buildings and their technical systems, namely the RCCTE and the RSECE, since revoked by the REH and the RECS, respectively. In most cases, implementing ECM brings economic benefits to the building developer, since it often translates into the optimized sizing of the Heating, Ventilation and Air Conditioning (HVAC) systems. This optimization reduces the costs associated with the initial investment as well as the user's energy use, and hence the operating costs. However, the lack of quantification of the impacts on HVAC system sizing and on energy-use reduction, and the absence of a cost-benefit analysis of their application, can limit the interest in implementing them. This dissertation arises in this context, at the initiative of the Instituto Soldadura e Qualidade (ISQ), applied to the practical case of a building already constructed and owned by that company. The aim of this work is to assess the effective contribution of the ECM implemented in the design and construction phases, both to the optimization of the size of HVAC systems and equipment, via the reduction of energy needs, and to the reduction of energy use, thus enabling a cost-benefit evaluation. The case study is based on ECOTERMOLAB, the aforementioned building, acquired by ISQ to house a laboratory for training, research and development in the energy field. After its acquisition by ISQ, the building underwent several alterations/improvements, among them the implementation of ECM such as the application of thermal insulation to the opaque envelope (walls, floors and roofs), doubling of the single-glazed openings to give them better thermal characteristics, and the application of solar protections. ECM were also adopted for the HVAC systems, namely heat-recovery units in the fresh-air handling units (UTANs) and variable-speed drives in the water circulation pumps and in the UTAN air fans. ISQ wanted to determine whether the application of all the ECM effectively contributed to the sizing of smaller-capacity HVAC systems and equipment and, consequently, to a reduction in energy use. It then intended to assess the economic viability of applying all the ECM, estimating the initial extra cost and the time needed for the financial return on that investment. To achieve these objectives, a dynamic energy simulation of ECOTERMOLAB was carried out using the EnergyPlus program. First, a baseline situation of the building, without any ECM, was simulated.
Subsequently, each ECM application scenario was characterized, in order to assess its individual impact on the energy used by the HVAC systems. Finally, all the solutions were combined to assess the overall impact of all the ECM on the energy use of the HVAC systems, as well as on their sizing. The dynamic simulations yielded the heating and cooling needs, the energy used by the HVAC systems, and the circulated flows of heated and chilled water. With these values, the sizing of the HVAC equipment and components was estimated for ECOTERMOLAB with all the ECM applied and without them. From the difference in the acquisition costs of the respective equipment and the energy-savings values, a study of the economic viability of implementing the ECM in this building was carried out. This study concluded that the application of the ECM in ECOTERMOLAB led to a reduction in the size of most HVAC equipment and components. It also concluded that there was a decrease in the energy used by these systems and equipment for heating and cooling. Finally, it was concluded that the payback period of the initial extra cost, estimated at €37,822, is approximately eleven and a half years, for a net present value (NPV) of €8,061 and an internal rate of return (IRR) of 7.03%.
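As a rough cross-check of the figures above, the following Python sketch back-calculates the annual saving implied by the €37,822 extra cost and the ~11.5-year simple payback, and evaluates an NPV; the discount rate and horizon are assumptions chosen for illustration, not values taken from the dissertation.

```python
cost = 37_822.0               # EUR, initial extra cost from the abstract
annual_saving = cost / 11.5   # ~3,289 EUR/year implied by the simple payback

def npv(rate, cost, saving, years):
    # Standard discounted cash-flow NPV with constant annual savings.
    return -cost + sum(saving / (1 + rate) ** t for t in range(1, years + 1))

print(f"implied annual saving: {annual_saving:,.0f} EUR")
# Hypothetical 5% discount rate over a 20-year horizon:
print(f"NPV at 5% over 20 years: {npv(0.05, cost, annual_saving, 20):,.0f} EUR")
```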
Abstract:
The Azores archipelago is a zone with a vast cultural heritage, presenting a building stock mainly constructed in traditional stone masonry. It is known that this type of construction exhibits poor behaviour under seismic excitation; however, it is extensively used in seismically prone areas such as this one. The earthquake of 9 July 1998 was the most recent seismic event in the islands, leaving many traditional stone constructions severely damaged or totally destroyed. This scenario led the local government to an effort to improve the seismic resistance of these constructions, with the application of several reinforcement techniques. This work aims to study some of the reinforcement schemes most used after the 1998 earthquake, and to assess their effectiveness in mitigating the constructions' seismic vulnerability. A brief evaluation of the cost versus benefit of these retrofitting techniques is also made, seeking to identify those most suitable for each building typology. To this end, real structures with different geometrical and physical characteristics were analyzed, establishing a comparison between the seismic performance of reinforced and non-reinforced structures. The first section contains the analysis of a total of six reinforcement scenarios for each chosen building. Using the recorded 1998 earthquake accelerograms, a linear time-history analysis was performed for each reinforcement scenario. A comparison was then established between the maximum displacements, inter-storey drifts and maximum stresses obtained, in order to evaluate the global seismic response of each reinforced structure. In the second part of the work, examining the performance obtained in the previous section against the cost of implementing each reinforcement technique made it possible to draw conclusions about the viability of each reinforcement method, based on the book value of the buildings under study.
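As an illustration of one of the response metrics used above, the following NumPy sketch computes peak inter-storey drift ratios from floor displacement time histories; the displacement data, number of storeys and the 3 m storey height are synthetic placeholders, not values from the study.

```python
import numpy as np

# Synthetic floor displacement histories: rows = time steps, cols = storeys.
rng = np.random.default_rng(2)
time_steps, storeys, height = 500, 3, 3.0
disp = np.cumsum(rng.normal(scale=1e-4, size=(time_steps, storeys)), axis=0)

# Relative displacement between consecutive floors (ground floor fixed),
# divided by the storey height, gives the inter-storey drift ratio.
relative = np.diff(np.column_stack([np.zeros(time_steps), disp]), axis=1)
drift_ratio = np.abs(relative) / height
print("peak drift ratio per storey:", drift_ratio.max(axis=0))
```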