906 results for "Experimental performance metrics"
Abstract:
Commercial kitchens often leave a large carbon footprint, and categorising these buildings for benchmarking purposes is challenging. A new dataset of energy performance metrics from a leading industrial partner is presented. Electricity use is analysed using data from automated meter readings (AMR) for the purpose of benchmarking and discussed in terms of factors such as size and food output. From the analysed results, consumption is found to be almost double the previous sector estimate of 6480 million kWh per year. Recommendations are made to further improve the current benchmarks in order to attain robust, reliable and transparent figures, such as the introduction of normalised performance indicators that account for kitchen size (kWh per m²) and turnover (kWh per thousand pounds).
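As a rough illustration of how the two normalised performance indicators suggested above might be computed, here is a minimal sketch; the field names and figures are invented for illustration, not drawn from the dataset described in the abstract.

```python
# Hypothetical sketch of the two normalised performance indicators (NPIs)
# suggested above; all figures below are invented for illustration.

def npi_per_area(annual_kwh: float, kitchen_area_m2: float) -> float:
    """Annual electricity use normalised by kitchen floor area (kWh/m2)."""
    return annual_kwh / kitchen_area_m2

def npi_per_turnover(annual_kwh: float, turnover_gbp: float) -> float:
    """Annual electricity use per thousand pounds of turnover (kWh/k GBP)."""
    return annual_kwh / (turnover_gbp / 1000.0)

# Example with hypothetical figures for a single kitchen:
kitchen = {"annual_kwh": 150_000.0, "area_m2": 120.0, "turnover_gbp": 850_000.0}
print(npi_per_area(kitchen["annual_kwh"], kitchen["area_m2"]))          # kWh/m2
print(npi_per_turnover(kitchen["annual_kwh"], kitchen["turnover_gbp"]))  # kWh/k GBP
```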
Abstract:
This conference paper outlines the operation and some preliminary physics results from the GSI RISING active stopper. Data are presented from an experiment using combined isomer and beta-delayed gamma-ray spectroscopy to study the low-lying spectral and decay properties of heavy, neutron-rich nuclei around A ≈ 190, produced following the relativistic projectile fragmentation of a 208Pb primary beam. The response of the RISING active stopper detector is demonstrated both for the implantation of heavy secondary fragments and for the in-situ decay of beta particles. Beta-delayed gamma-ray spectroscopy following decays of the neutron-rich nucleus 194Re is presented to demonstrate the experimental performance of the set-up. The information inferred from excited states in the W and Os daughter nuclei is compared with results from Skyrme Hartree-Fock predictions of the evolution of nuclear shape.
Abstract:
In a global economy, manufacturers compete mainly on the cost efficiency of production, as the prices of raw materials are similar worldwide. Heavy industry has two big issues to deal with: on the one hand, there is a great deal of data that needs to be analysed effectively; on the other hand, making big improvements via investments in corporate structure or new machinery is neither economically nor physically viable. Machine learning offers a promising way for manufacturers to address both of these problems, as they are in an excellent position to employ learning techniques on their massive resource of historical production data. However, choosing a modelling strategy in this setting is far from trivial, and that is the objective of this article. The article investigates the characteristics of the most popular classifiers used in industry today: Support Vector Machines, Multilayer Perceptrons, Decision Trees, Random Forests, and the meta-algorithms Bagging and Boosting. Lessons from real-world implementations of these learners are provided, together with guidance on when different learners are expected to perform well. The importance of feature selection, and selection methods relevant in an industrial setting, are further investigated. Performance metrics are also discussed for completeness.
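A minimal sketch of the kind of classifier comparison discussed above, using scikit-learn with 5-fold cross-validation; the synthetic dataset and default hyperparameters are placeholders, not those of any real production line or of the article's experiments.

```python
# Minimal sketch: cross-validated comparison of the classifiers named above.
# Dataset and hyperparameters are placeholders, not the article's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Bagging": BaggingClassifier(random_state=0),
    "Boosting": AdaBoostClassifier(random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```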
Abstract:
As digital systems move away from traditional desktop setups, new interaction paradigms are emerging that better integrate with users' real-world surroundings and better support users' individual needs. While promising, these modern interaction paradigms also present new challenges, such as a lack of paradigm-specific tools to systematically evaluate and fully understand their use. This dissertation tackles this issue by framing empirical studies of three novel digital systems in embodied cognition, an exciting new perspective in cognitive science where the body and its interactions with the physical world take a central role in human cognition. This is achieved, first, by focusing the design of all these systems on tangible interaction, a contemporary interaction paradigm that emphasizes physical interaction; and second, by comprehensively studying user performance in these systems through a set of novel performance metrics grounded in epistemic actions, a relatively well-established and well-studied construct in the literature on embodied cognition. The first system presented in this dissertation is an augmented Four-in-a-row board game. Three versions of the game were developed, based on three different interaction paradigms (tangible, touch and mouse), and a repeated-measures study involving 36 participants measured the occurrence of three simple epistemic actions across these three interfaces. The results highlight the relevance of epistemic actions in such a task and suggest that the different interaction paradigms afford the instantiation of these actions in different ways. Additionally, the tangible version of the system supports the most rapid execution of these actions, providing novel quantitative insights into the real benefits of tangible systems. The second system presented in this dissertation is a tangible tabletop scheduling application. Two studies with single and paired users provide several insights into the impact of epistemic actions on the user experience when these are performed outside a system's sensing boundaries. These insights concern the form, size and location of ideal interface areas for such offline epistemic actions to occur, as well as how physical tokens can be designed to better support them. Finally, building on the results obtained to this point, the last study presented in this dissertation directly addresses the lack of empirical tools to formally evaluate tangible interaction. It presents a video-coding framework grounded in a systematic literature review of 78 papers, and evaluates its value as a metric through a 60-participant study performed across three different research laboratories. The results highlight the usefulness and power of epistemic actions as a performance metric for tangible systems. In sum, through the use of these novel metrics in each of the three studies presented, this dissertation provides a better understanding of the real impact and benefits of designing and developing systems that feature tangible interaction.
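To make the idea of epistemic actions as a performance metric concrete, here is a hypothetical sketch that aggregates a video-coded event log into per-participant frequency and duration measures; the event schema and action names are invented for illustration and are not the dissertation's actual coding framework.

```python
# Hypothetical sketch: turning a video-coded log of epistemic actions into
# simple per-participant metrics. Schema and action names are invented.
from collections import defaultdict

# (participant_id, action_type, start_s, end_s)
coded_events = [
    (1, "rearrange", 12.0, 14.5),
    (1, "point", 20.1, 20.9),
    (2, "rearrange", 5.3, 9.0),
]

stats = defaultdict(lambda: {"count": 0, "total_s": 0.0})
for pid, action, start, end in coded_events:
    stats[(pid, action)]["count"] += 1
    stats[(pid, action)]["total_s"] += end - start

for (pid, action), s in sorted(stats.items()):
    mean_dur = s["total_s"] / s["count"]
    print(f"P{pid} {action}: n={s['count']}, mean duration={mean_dur:.2f}s")
```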
Abstract:
The evolution of automation in recent years has made possible the continuous monitoring of industrial plant processes. With this advance, the amount of information that automation systems are subjected to has increased significantly. The alarms generated by monitoring equipment are a major contributor to this increase, and this equipment is usually deployed in industrial plants without a formal methodology, which increases the number of alarms generated, overloading the alarm system and therefore the operators of such plants. In this context, alarm management arises with the objective of defining a formal methodology for the installation of new equipment and for detecting problems in existing configurations. This thesis proposes a set of metrics for the evaluation of alarm systems already deployed, so that the health of a system can be identified by analysing the proposed indices and comparing them with the parameters defined in the technical norms of alarm management. In addition, the metrics can track alarm management work, verifying whether it is improving the quality of the alarm system. To validate the proposed metrics, data from actual process plants in the petrochemical industry were used.
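As a hedged sketch of one such health metric, the snippet below computes the average alarm rate from timestamped alarm records and compares it against a benchmark threshold; the six-alarms-per-operator-hour figure reflects commonly cited EEMUA 191 / ISA-18.2 guidance, and the timestamps are hypothetical (the thesis's actual set of metrics is not reproduced here).

```python
# Minimal sketch of one alarm-system health metric: average alarm rate,
# compared against a benchmark in the spirit of EEMUA 191 / ISA-18.2
# (about six alarms per operator-hour is a commonly cited upper bound).
# The timestamps below are hypothetical.
from datetime import datetime

alarm_timestamps = [
    datetime(2024, 1, 1, 8, 0), datetime(2024, 1, 1, 8, 7),
    datetime(2024, 1, 1, 8, 31), datetime(2024, 1, 1, 9, 45),
]

span_hours = (max(alarm_timestamps) - min(alarm_timestamps)).total_seconds() / 3600
rate_per_hour = len(alarm_timestamps) / span_hours

BENCHMARK_PER_HOUR = 6.0  # assumed steady-state target, per the norms above
status = "acceptable" if rate_per_hour <= BENCHMARK_PER_HOUR else "overloaded"
print(f"{rate_per_hour:.1f} alarms/hour -> {status}")
```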
Abstract:
The use of middleware technology in various types of systems, in order to abstract low-level details related to the distribution of application logic, is increasingly common. Among the many systems that can benefit from these components, we highlight distributed systems, where communication is needed between software components located on different physical machines. An important issue related to communication between distributed components is the provision of mechanisms for managing quality of service. This work presents a metamodel for modelling component-based middleware, in order to provide an application with the abstraction of communication between the components involved in a data stream, regardless of their location. Another feature of the metamodel is the possibility of self-adaptation of the communication mechanism, either by updating the values of its configuration parameters or by replacing it with another mechanism when the specified quality-of-service restrictions are not being met. To this end, the state of the communication is monitored (applying techniques such as a feedback control loop) and the related performance metrics are analysed. The Model-Driven Development paradigm was used to generate the implementation of a middleware that serves as a proof of concept of the metamodel, together with the configuration and reconfiguration policies related to the dynamic adaptation processes. In this sense, the metamodel associated with the communication configuration process was defined. The MDD application also comprises the definition of the following transformations: from the architectural model of the middleware to Java code, and from the configuration model to XML.
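A hypothetical sketch of the feedback control loop described above: monitor a QoS metric, first retune the current communication mechanism's parameters, then replace it if the restriction is still violated. All class, method and parameter names here are invented for illustration and do not correspond to the metamodel's actual elements.

```python
# Hypothetical sketch of the monitor/analyze/adapt loop described above.
# All names (Mechanism, measure_latency_ms, ...) are invented; they are
# not elements of the metamodel presented in this work.

class Mechanism:
    def __init__(self, name: str, buffer_size: int):
        self.name, self.buffer_size = name, buffer_size

    def measure_latency_ms(self) -> float:
        return 120.0  # stand-in for a real measurement probe

def control_loop(mechanism: Mechanism, fallback: Mechanism,
                 max_latency_ms: float) -> Mechanism:
    latency = mechanism.measure_latency_ms()       # monitor
    if latency <= max_latency_ms:                  # analyze
        return mechanism                           # QoS restriction met
    mechanism.buffer_size *= 2                     # adapt: reconfigure first
    if mechanism.measure_latency_ms() <= max_latency_ms:
        return mechanism
    return fallback                                # adapt: replace mechanism

active = control_loop(Mechanism("rtp-stream", 64), Mechanism("tcp-stream", 64), 100.0)
print(f"active mechanism: {active.name}")
```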
Abstract:
We report on operational experience with, and the experimental performance of, the SLD barrel Cherenkov Ring Imaging Detector during the 1992 and 1993 physics runs. The liquid (C6F14) and gas (C5F12) radiator recirculation systems have performed well, and the drift-gas supply system has operated successfully with TMAE for three years. Cherenkov rings have been observed from both the liquid and gas radiators. The number and angular resolution of Cherenkov photons have been measured and found to be close to design specifications.
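For context, the measured ring angle and photon count relate to the radiator properties through the standard Cherenkov relations, a textbook result not specific to this paper: a particle of velocity βc in a radiator of refractive index n emits photons at a characteristic angle θc, and the number of detected photons Nγ grows with the radiator length L and a detector-dependent quality factor N0.

```latex
% Standard Cherenkov relations (textbook results, not from this paper):
\cos\theta_c = \frac{1}{n\beta}, \qquad
N_\gamma \approx N_0 \, L \, \sin^2\theta_c
```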
Abstract:
The study of propagation loss in cities of the Amazon region involves an environment characterized by a tropical climate and densely wooded suburbs. Considering the importance of the 5.8 GHz ISM band, this dissertation presents a propagation model for this frequency band that incorporates the attenuation experienced by radio waves propagating in environments typical of Amazonian cities. To this end, received-power measurements were collected at 335 fixed clients distributed across 12 cities in the north of Brazil, all served by the digital inclusion program of the state of Pará, Navega Pará. Mobile measurements were also carried out on the campus of the Universidade Federal do Pará (UFPA). The dissertation also evaluates the performance of the proposed model against other models described in the literature (the SUI and COST231-Hata models) for fixed and mobile wireless networks. The performance metrics used were the RMS error and the standard deviation with respect to the measured data. The parameters of the proposed model are adjusted by the linear least-squares method, applied in two stages to reduce the uncertainty of the fitted parameters. The proposed model achieved an RMS error of 3.8 dB and a standard deviation of 2.3 dB, outperforming the other models, which produced RMS errors above 10 dB and standard deviations above 5 dB. The results obtained demonstrate its efficiency over other models for predicting losses in the 5.8 GHz band in fixed and mobile systems.
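A minimal sketch of the kind of linear least-squares fit described above, applied to a generic log-distance path-loss model; the distances and losses below are placeholders, and neither the dissertation's two-stage procedure nor its 5.8 GHz measurement data are reproduced here.

```python
# Minimal sketch: fitting a log-distance path-loss model
#   PL(d) = PL0 + 10 n log10(d / d0)
# by linear least squares, with RMS error as the performance metric.
# Distances and losses are placeholders, not the measured campaign data.
import numpy as np

d0 = 1.0                                     # reference distance (m)
d = np.array([10.0, 50.0, 100.0, 300.0])     # placeholder distances (m)
pl = np.array([70.0, 88.0, 95.0, 108.0])     # placeholder path loss (dB)

X = np.column_stack([np.ones_like(d), 10.0 * np.log10(d / d0)])
(pl0, n_exp), *_ = np.linalg.lstsq(X, pl, rcond=None)

residuals = pl - X @ np.array([pl0, n_exp])
rms = np.sqrt(np.mean(residuals**2))         # RMS error, as in the abstract
print(f"PL0={pl0:.1f} dB, n={n_exp:.2f}, RMS error={rms:.2f} dB")
```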
Abstract:
This work analyses the performance of triple-play applications over Power Line Communication technology, with a focus on quality of service and quality of experience. It presents results obtained in residential scenarios where, based on tests with VoIP calls, high-definition video transmissions and data, the use of this technology as a last mile proves to be a feasible solution. The concept of the home network, interconnecting every point of a house, represents a new direction in the definition of a global standard, in which data transmission over electrical wiring will be one of the most prominent technologies employed. The performance of the evaluated metrics, such as jitter, bandwidth, packet loss, PSNR, MOS, VQM and SSIM, and their correlations, is also shown.
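As a hedged sketch of two of the metrics listed above, the snippet below computes PSNR between a reference and a received frame, and a simple interarrival-jitter estimate from packet arrival times; the inputs are synthetic, whereas a real evaluation would use captured video frames and packet traces.

```python
# Sketch of two of the metrics above: PSNR for video quality and packet
# interarrival jitter. Inputs are synthetic, not from the actual testbed.
import numpy as np

def psnr(reference: np.ndarray, received: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((reference.astype(float) - received.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

def mean_jitter_ms(arrival_times_ms: np.ndarray) -> float:
    deltas = np.diff(arrival_times_ms)              # interarrival times
    return float(np.mean(np.abs(np.diff(deltas))))  # variation between them

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (480, 640), dtype=np.uint8)   # synthetic frame
recv = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, recv):.1f} dB")
print(f"jitter: {mean_jitter_ms(np.array([0.0, 20.1, 40.5, 59.8, 80.9])):.2f} ms")
```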
Abstract:
This thesis proposes the development of a planning strategy that combines load characterisation of a typical Digital TV application, extraction of a weight vector by means of belief networks, and multi-criteria decision making through analytical methods (TOPSIS and ELECTRE III), in order to support service providers in choosing a return-channel technology (ADSL2+, PLC, WiMAX or 3G), considering the typical load of an interactive Digital TV scenario under the ISDB-T standard. The proposed strategy comprises five steps: definition of the return channels and the performance metrics; measurement of the access technologies in real scenarios; simulation of the data in simulated environments; application of data-correlation techniques to generate the weight vector; and application of analytical decision-making methods to choose the best technology to deploy in a given scenario. The main result is a generic and flexible model, validated through a case study that ranked the evaluated technologies by preference.
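A compact sketch of the TOPSIS step of such a strategy, ranking the four return-channel technologies over weighted criteria; the decision matrix, weights and criteria below are invented placeholders, not the thesis's measured values or belief-network-derived weight vector.

```python
# Compact TOPSIS sketch for ranking the return-channel technologies above.
# Matrix, weights and criteria are invented placeholders.
import numpy as np

alternatives = ["ADSL2+", "PLC", "WiMAX", "3G"]
# columns: throughput (benefit), delay (cost), loss (cost) -- hypothetical
M = np.array([[8.0, 30.0, 0.5],
              [6.0, 25.0, 1.0],
              [10.0, 40.0, 2.0],
              [2.0, 90.0, 3.0]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, False])

R = M / np.linalg.norm(M, axis=0)            # vector normalisation
V = R * weights                              # weighted normalised matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus = np.linalg.norm(V - ideal, axis=1)   # distance to ideal solution
d_minus = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal
closeness = d_minus / (d_plus + d_minus)     # higher = closer to ideal

for name, c in sorted(zip(alternatives, closeness), key=lambda t: -t[1]):
    print(f"{name}: {c:.3f}")
```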
Abstract:
Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, which has allowed the birth of a new type of network: wireless sensor networks (WSNs). The main features of such networks are: the nodes can be positioned randomly over a given field with high density; each node operates both as a sensor (for the collection of environmental data) and as a transceiver (for the transmission of information towards the data-retrieval point); and the nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high-risk region, such as the area near a volcano; in a hospital, they could be used to monitor the physical condition of patients. For each of these possible application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability. This thesis investigates the use of WSNs in two possible scenarios and, for each of them, suggests a solution to the related problems in light of this trade-off.

The first scenario considers a network with a high number of nodes deployed in a given geographical area without detailed planning, which have to transmit data towards a coordinator node, named the sink, assumed to be located on board an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently towards a far receiver. Each node transmits a common shared message directly to the receiver on board the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle). We assume that the communication channels between the local nodes and the receiver are subject to fading and noise, so the receiver on board the UAV must be able to fuse the weak and noisy signals coherently to receive the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem. In particular, a spread spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes. The idea consists of the simultaneous transmission of the common message by the nodes and Rake reception at the fusion center. The proposed solution is motivated mainly by two goals: the need for simple nodes (to this end, the computational complexity is moved to the receiver on board the UAV) and the importance of guaranteeing high energy efficiency, thus increasing the network lifetime. The proposed scheme is analyzed in order to better understand the effectiveness of the approach. The performance metrics considered are both the theoretical limit on the maximum amount of data that can be collected by the receiver and the error probability with a given modulation scheme; since we deal with a WSN, both are evaluated taking into consideration the energy efficiency of the network.

The second scenario considers the use of a chain network for the detection of fires, using nodes that serve the double function of sensors and routers. The first function concerns the monitoring of a temperature parameter, which allows a local binary decision of target (fire) absent/present to be taken. The second is that each node receives the decision made by the previous node in the chain, compares it with the decision derived from its own observation of the phenomenon, and transmits the result to the next node. The chain ends at the sink node, which transmits the received decision to the user. In this network, the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical scenario of distributed detection. To obtain good performance, it is necessary to define fusion rules by which each node summarizes the local observations and the decisions of previous nodes into a final decision that is transmitted to the next node. WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application was implemented and tested in a six-month field experiment.
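A hypothetical sketch of serial (chain) decision fusion for the fire-detection scenario above: each node fuses the one-bit decision received from its predecessor with its own local binary decision and forwards the result. The OR rule and the temperature threshold below are common textbook choices for rare-event detection, not necessarily the fusion rules derived in the thesis.

```python
# Hypothetical sketch of serial (chain) decision fusion for fire detection.
# The OR rule and threshold are textbook placeholder choices, not the
# thesis's actual fusion rules.
THRESHOLD_C = 60.0  # assumed local decision threshold on temperature

def local_decision(temperature_c: float) -> bool:
    """Local binary decision: fire present (True) / absent (False)."""
    return temperature_c > THRESHOLD_C

def fuse(incoming: bool, local: bool) -> bool:
    """Combine predecessor's decision with the local one (one bit per link)."""
    return incoming or local

def run_chain(readings_c: list[float]) -> bool:
    decision = False                       # first node has no predecessor
    for t in readings_c:
        decision = fuse(decision, local_decision(t))
    return decision                        # delivered to the sink

print(run_chain([24.5, 25.1, 71.3, 26.0]))  # -> True (fire detected)
```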