914 results for Performance comparison
Abstract:
In this paper, dynamic simulation was used to compare the energy performance of three innovative HVAC systems: (A) mechanical ventilation with heat recovery (MVHR) and micro heat pump, (B) exhaust ventilation with exhaust air-to-water heat pump and ventilation radiators, and (C) exhaust ventilation with air-to-water heat pump and ventilation radiators, to a reference system: (D) exhaust ventilation with air-to-water heat pump and panel radiators. System A was modelled in MATLAB Simulink and systems B and C in TRNSYS 17. The reference system was modelled in both tools, for comparison between the two. All systems were tested with a model of a renovated single-family house for varying U-values, climates, infiltration and ventilation rates. It was found that A was the best system for lower heating demand, while for higher heating demand system B would be preferable. System C was better than the reference system, but not as good as A or B. The difference in energy consumption of the reference system was less than 2 kWh/(m²a) between Simulink and TRNSYS. This could be explained by the different ways of handling solar gains, but also by the fact that the TRNSYS systems supplied slightly more than the ideal heating demand.
Abstract:
The objective of this thesis is the analysis and study of the various access techniques for vehicular communications, in particular the C-V2X and WAVE protocols. The simulator used to study the performance of the two protocols is called LTEV2Vsim and was developed by the CNI IEIIT for the study of V2V (Vehicle-to-Vehicle) communications. The changes I made allowed me to study the I2V (Infrastructure-to-Vehicle) scenario in highway areas and, with the results obtained, I compared the two protocols under high and low vehicular density, relating the PRR (packet reception ratio) to the cell size (RAW, awareness range). The final comparison makes it possible to fully understand the achievable performance of the two protocols and highlights the need for a protocol that can meet the minimum necessary requirements.
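A minimal sketch of the kind of metric relationship the abstract describes: computing the PRR over receivers that fall inside a given awareness range (RAW). The data, the decay model and the function names below are hypothetical illustrations, not LTEV2Vsim internals.

```python
# Sketch: packet reception ratio (PRR) as a function of awareness range (RAW),
# from per-packet records of (Tx-Rx distance, received?). Synthetic data only.
import random

def prr(records, raw_m):
    """PRR = received / transmitted, counting only receivers within raw_m metres."""
    in_range = [(d, ok) for d, ok in records if d <= raw_m]
    if not in_range:
        return float("nan")
    return sum(ok for _, ok in in_range) / len(in_range)

random.seed(0)
records = []
for _ in range(5000):
    d = random.uniform(0, 1000)                                   # Tx-Rx distance [m]
    records.append((d, random.random() < max(0.0, 1 - d / 800)))  # received? (toy decay model)

for raw in (100, 300, 500, 700):
    print(f"RAW = {raw:4d} m  ->  PRR = {prr(records, raw):.3f}")
```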
Abstract:
The BR algorithm is a novel and efficient method for finding all eigenvalues of upper Hessenberg matrices and has never been applied to eigenanalysis for power system small signal stability. This paper analyzes the differences between the BR and QR algorithms, with a performance comparison in terms of CPU time based on stopping criteria and storage requirements. The BR algorithm uses accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce the computation time at a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing precision in solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in eigenanalysis tasks for 39-, 68-, 115-, 300-, and 600-bus systems. The experimental results suggest that the BR algorithm is the more efficient algorithm for large-scale power system small signal stability eigenanalysis.
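The BR algorithm is not available in standard numerical libraries, so only the QR side of the CPU-time comparison can be sketched here: reducing a matrix to upper Hessenberg form and timing a LAPACK QR-based eigenvalue solve. Matrix sizes and contents are illustrative, not the paper's power system matrices.

```python
# Sketch of the QR-side baseline for a CPU-time comparison on upper Hessenberg
# matrices; the BR algorithm itself is not provided by NumPy/SciPy.
import time
import numpy as np
from scipy.linalg import hessenberg, eigvals

rng = np.random.default_rng(1)

for n in (300, 600, 1200):
    A = rng.standard_normal((n, n))
    H = hessenberg(A)                 # reduce to upper Hessenberg form
    t0 = time.perf_counter()
    lam = eigvals(H)                  # QR iteration inside LAPACK
    dt = time.perf_counter() - t0
    print(f"n = {n:5d}: {len(lam)} eigenvalues in {dt:.3f} s")
```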
Abstract:
The objective of this master's thesis is to evaluate the optimum performance of a six-sectored hexagonal layout of a WCDMA (UMTS) network and to analyze the performance at the optimum point. Maximum coverage and maximum capacity are the main concerns of service providers, and achieving them economically is always a challenging task, because the optimum configuration of a network is the one that minimizes the number of sites required to provide a target service probability in the planning area, which in turn reduces the deployment cost. The optimum performance means the maximum cell area and the maximum cell capacity the network can provide at the maximum antenna height while satisfying the target service probability. The hexagonal layout has been proven to be the best layout for cell deployment. In this thesis work, two different configurations using six-sectored sites have been considered for the performance comparison. In the first configuration, each antenna is directed towards a corner of the hexagon, whereas in the second configuration each antenna is directed towards a side of the hexagon. The net difference between the configurations is a 30 degree rotation of the antenna directions. Only indoor users in a flat and smooth semi-urban environment have been considered for the simulation, with a traffic distribution of 100 Erl/km² for a 12.2 kbps speech service and a maximum mobile speed of 3 km/h. The simulation results indicate that similar performance can be achieved in both configurations: a maximum cell range of 947 m at an antenna height of 49.5 m when the antennas are directed towards the corners of the hexagon, and a cell range of 943.3 m at an antenna height of 54 m when the antennas are directed towards the sides of the hexagon. However, from the interference point of view, the first configuration provides better results. The simulation results also show that the network is coverage limited in both the uplink and downlink directions at the optimum point.
Abstract:
This paper presents a performance comparison between known propagation models through a least squares tuning algorithm for the 5.8 GHz frequency band. The studied environment is based on 12 cities located in the Amazon Region. After adjustments and simulations, the SUI model showed the smallest RMS error and standard deviation when compared with the COST231-Hata and ECC-33 models.
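A small sketch of what least-squares tuning and RMS-error ranking of a propagation model look like in practice. A simplified single-slope log-distance model and synthetic measurements are used here; the actual SUI, COST231-Hata and ECC-33 expressions are more elaborate.

```python
# Sketch: tune a log-distance path-loss model by ordinary least squares and
# report the RMS error used to rank models. Measurements are synthetic.
import numpy as np

rng = np.random.default_rng(2)
d = rng.uniform(50, 2000, size=400)                            # Tx-Rx distance [m]
measured = 40 + 37 * np.log10(d) + rng.normal(0, 6, d.size)    # path loss [dB]

# Model: PL(d) = A + 10 * n * log10(d); tune intercept A and exponent n.
X = np.column_stack([np.ones_like(d), 10 * np.log10(d)])
(A_hat, n_hat), *_ = np.linalg.lstsq(X, measured, rcond=None)

predicted = X @ np.array([A_hat, n_hat])
rms = np.sqrt(np.mean((measured - predicted) ** 2))
print(f"tuned A = {A_hat:.1f} dB, n = {n_hat:.2f}, RMS error = {rms:.2f} dB")
```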
Abstract:
Mass spectrometry (MS) data provide a promising strategy for biomarker discovery. For this purpose, the detection of relevant peak bins in MS data is currently under intense research. Data from mass spectrometry are challenging to analyze because of their high dimensionality and the generally low number of samples available. To tackle this problem, the scientific community is becoming increasingly interested in applying feature subset selection techniques based on specialized machine learning algorithms. In this paper, we present a performance comparison of several metaheuristics: best first (BF), genetic algorithm (GA), scatter search (SS) and variable neighborhood search (VNS). All of these algorithms, except for GA, are applied here for the first time to detect relevant peak bins in MS data. All these metaheuristic searches are embedded in two different filter and wrapper schemes coupled with Naive Bayes and SVM classifiers.
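To illustrate the wrapper scheme the abstract mentions, the sketch below runs a plain greedy forward selection scored by cross-validated Naive Bayes accuracy on a toy sklearn dataset. It is a simple stand-in for the BF/GA/SS/VNS metaheuristics, not the paper's implementation, and it does not use MS peak-bin data.

```python
# Sketch of a wrapper scheme for feature subset selection: greedy forward
# selection with cross-validated Naive Bayes accuracy as the subset score.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
selected, remaining = [], list(range(X.shape[1]))
best_score = 0.0

for _ in range(5):                                   # pick at most 5 features
    scores = {f: cross_val_score(GaussianNB(), X[:, selected + [f]], y, cv=5).mean()
              for f in remaining}
    f_best, s_best = max(scores.items(), key=lambda kv: kv[1])
    if s_best <= best_score:
        break                                        # no improvement: stop
    selected.append(f_best)
    remaining.remove(f_best)
    best_score = s_best
    print(f"added feature {f_best:2d}, CV accuracy = {best_score:.3f}")
```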
Abstract:
The main purpose of this work is to describe the case of an online Java programming course for engineering students to learn computer programming and to practice other non-technical abilities: online training, self-assessment, teamwork and use of foreign languages. It is important that students develop confidence and competence in these skills, which will be required later in their professional tasks and/or in other engineering courses (life-long learning). Furthermore, this paper presents the pedagogical methodology, the results drawn from this experience and an objective performance comparison with another conventional (face-to-face) Java course.
Abstract:
This work aims to evaluate the performance of MECID (Boundary Element Method with Direct Interpolation) in solving the integral term related to inertia in the Helmholtz equation, thereby allowing the eigenvalue problem to be modelled and the natural frequencies to be computed, comparing it with the results obtained by the FEM (Finite Element Method) generated by the classical Galerkin formulation. First, some problems governed by the Poisson equation are addressed, making it possible to begin the performance comparison between the numerical methods considered here. The problems solved apply to different and important areas of engineering, such as heat transfer, electromagnetism and particular elastic problems. In numerical terms, the difficulties of accurately approximating more complex distributions of loads, sources or sinks inside the domain are well known for any boundary technique. Nevertheless, this work shows that, despite such difficulties, the performance of the Boundary Element Method is superior, both in computing the basic variable and its derivative. To this end, two-dimensional problems are solved concerning elastic membranes, stresses in bars due to self-weight and the determination of natural frequencies in acoustic problems in closed domains, among others presented, using meshes with different degrees of refinement, as well as linear elements with radial basis functions for MECID and first-degree polynomial interpolation basis functions for the FEM. Performance curves are generated by computing the mean percentage error for each mesh, demonstrating the convergence and accuracy of each method. The results are also compared with analytical solutions, when available, for each example solved in this work.
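As a hedged illustration of the kind of analytical reference used to build such error curves, the sketch below evaluates the closed-form natural frequencies of a 2-D rectangular acoustic cavity with rigid walls, f_mn = (c/2) * sqrt((m/Lx)^2 + (n/Ly)^2), and computes a mean percentage error against placeholder "numerical" values. The dimensions, sound speed and numerical values are illustrative, not the thesis's test cases.

```python
# Sketch: analytical natural frequencies of a rigid-walled rectangular acoustic
# cavity, and a mean percentage error against placeholder computed values.
import math

c, Lx, Ly = 340.0, 1.0, 0.5          # sound speed [m/s], cavity sides [m]

def f_analytic(m, n):
    return (c / 2) * math.sqrt((m / Lx) ** 2 + (n / Ly) ** 2)

analytic = sorted(f_analytic(m, n) for m in range(4) for n in range(4) if m + n > 0)[:5]
numerical = [f * 1.01 for f in analytic]             # placeholder "computed" values

errors = [abs(fn - fa) / fa * 100 for fn, fa in zip(numerical, analytic)]
print("first five analytic frequencies [Hz]:", [round(f, 1) for f in analytic])
print("mean percentage error = %.2f %%" % (sum(errors) / len(errors)))
```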
Abstract:
Nowadays, the amount of data created daily far exceeds the most optimistic expectations set in the previous decade. These data come from very diverse sources and take many different forms. This new concept, known as Big Data, poses new and elaborate challenges for storage, processing and manipulation. Traditional storage systems are not the right solution for this problem. These challenges are among the most analysed and discussed computing topics of the moment. Several technologies have emerged with this new era, among which a new storage paradigm stands out: the NoSQL movement. This new storage philosophy aims to meet the storage and processing needs of these voluminous and heterogeneous data. Data warehouses are one of the most important components of Business Intelligence and are mostly used as a tool to support the decision-making processes carried out in an organisation's day-to-day activities. Their historical component implies that large volumes of data are stored, processed and analysed on top of their repositories. Some organisations are beginning to have problems managing and storing these large volumes of information, largely because of the storage structure that underlies them. Relational database management systems have, for several decades, been considered the primary method of storing information in a data warehouse. In fact, these systems are starting to prove unable to store and manage organisations' operational data, and their use in data warehouses is consequently less and less recommended. It is intrinsically interesting that relational databases are beginning to lose the battle against data volume at a time when a new storage paradigm is emerging precisely to master the large volumes inherent in Big Data. Even more interesting is the thought that these new NoSQL systems may possibly bring advantages to the data warehousing world. Thus, this master's work studies the feasibility and implications of adopting NoSQL databases in the context of data warehouses, in comparison with the traditional approach implemented on relational systems. To accomplish this task, several studies were carried out based on the relational system SQL Server 2014 and the NoSQL systems MongoDB and Cassandra. Several stages of the design and implementation of a data warehouse were compared across the three systems, and three distinct data warehouses were built, one on each system. All the research carried out in this work culminates in a comparison of the performance of queries run on the three systems.
Abstract:
In the context of the penetration of renewable energy in the electrical system, Portugal occupies a prominent position worldwide, largely due to wind power production. With an electrical system with a strong presence of renewable energy sources, new challenges arise, particularly for wind energy because of its unpredictability and volatility. Although unlimited, the wind resource cannot be stored, so models for forecasting the electric power output of wind farms are needed to allow good management of the system. This dissertation presents the contributions of research work on models for forecasting electric power from weather forecast values, namely predicted wind speed and direction. Two types of models were considered: parametric and non-parametric. The former are polynomial functions of various degrees and the sigmoid function; the latter are artificial neural networks. To estimate and validate the models, data collected over two years and three months at the Pico Alto wind farm, with an installed capacity of 6 MW, are used. To optimise the forecasting results, different classes of production profiles are considered, defined on the basis of four and eight wind directions, and the proposed models are fitted within each class. Results of a comparative analysis of the performance of the different models proposed for power forecasting are presented and discussed.
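As a rough illustration of the parametric model family mentioned above, the sketch below fits a generic sigmoid (logistic) power curve to synthetic wind-speed/power pairs by nonlinear least squares. Only the 6 MW rated power comes from the abstract; the data, parameter values and curve shape are illustrative, not the Pico Alto models.

```python
# Sketch: fit a sigmoid power curve P(v) = P_rated / (1 + exp(-k (v - v0)))
# to synthetic wind-speed/power data, then forecast power at a given speed.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(v, p_rated, k, v0):
    return p_rated / (1.0 + np.exp(-k * (v - v0)))

rng = np.random.default_rng(3)
v = rng.uniform(0, 25, 300)                                   # wind speed [m/s]
p = sigmoid(v, 6.0, 0.7, 9.0) + rng.normal(0, 0.2, v.size)    # power [MW] + noise

params, _ = curve_fit(sigmoid, v, p, p0=[6.0, 0.5, 8.0])
print("fitted p_rated = %.2f MW, k = %.2f, v0 = %.2f m/s" % tuple(params))
print("forecast at 12 m/s: %.2f MW" % sigmoid(12.0, *params))
```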
Abstract:
A new iterative algorithm based on the inexact-restoration (IR) approach combined with the filter strategy to solve nonlinear constrained optimization problems is presented. The high-level algorithm was suggested by Gonzaga et al. (SIAM J. Optim. 14:646–669, 2003) but not yet implemented; the internal algorithms were not proposed. The filter, a concept introduced by Fletcher and Leyffer (Math. Program. Ser. A 91:239–269, 2002), replaces the merit function, avoiding penalty parameter estimation and the difficulties related to nondifferentiability. In the IR approach, two independent phases are performed in each iteration: the feasibility phase and the optimality phase. The line search filter is combined with the first phase to generate a "more feasible" point, and it is then used in the optimality phase to reach an "optimal" point. Numerical experiments with a collection of AMPL problems and a performance comparison with IPOPT are provided.
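A minimal sketch of the filter idea referred to above: a trial pair (h, f) of constraint violation and objective value is accepted if no pair already in the filter dominates it (here with the usual small slack margins). This is only the acceptance test, with illustrative constants, not the full inexact-restoration algorithm or its internal phases.

```python
# Sketch of the filter acceptance rule used instead of a merit function.
GAMMA = 1e-5   # slack margin (illustrative value)

def acceptable(h, f, filter_pairs):
    """(h, f) is acceptable if, for every stored (hk, fk), it sufficiently
    improves either feasibility or the objective."""
    return all(h <= (1 - GAMMA) * hk or f <= fk - GAMMA * hk
               for hk, fk in filter_pairs)

def add_to_filter(h, f, filter_pairs):
    # keep only stored pairs not dominated by the new entry
    kept = [(hk, fk) for hk, fk in filter_pairs if hk < h or fk < f]
    kept.append((h, f))
    return kept

filt = [(0.5, 3.0), (0.1, 5.0)]
for trial in [(0.4, 4.8), (0.05, 4.0), (0.6, 3.5)]:
    ok = acceptable(*trial, filt)
    print(trial, "accepted" if ok else "rejected")
    if ok:
        filt = add_to_filter(*trial, filt)
```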
Abstract:
The classification of art painting images is a computer vision application that is growing considerably. The goal of this technology is to classify an art painting image automatically in terms of artistic style, technique used, or author. For this purpose, the image is analyzed by extracting visual features. Many articles related to these problems have been published, but in general the proposed solutions are focused on a very specific field. In particular, algorithms are tested using images at different resolutions, acquired under different illumination conditions. That complicates the performance comparison of the different methods. In this context, it would be very interesting to construct a public art image database, in order to compare all the existing algorithms under the same conditions. This paper presents a large art image database with the corresponding labels for the following characteristics: title, author, style and technique. Furthermore, a tool that manages this database has been developed, and it can be used to extract different visual features for any selected image. These data can be exported to a file in CSV format, allowing researchers to analyze the data with other tools. During the data collection, the tool stores the elapsed computation time. Thus, this tool also makes it possible to compare the efficiency, in computation time, of different mathematical procedures for extracting image data.
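A tiny sketch of that workflow under stated assumptions: extract one visual feature (a coarse RGB histogram), record the elapsed time, and export rows to CSV. Synthetic images stand in for database entries, and the column names are hypothetical, not the tool's actual schema.

```python
# Sketch: feature extraction + timing + CSV export for image database entries.
import csv, time
import numpy as np

def rgb_histogram(image, bins=8):
    """Concatenated per-channel histograms, normalised per channel."""
    feats = []
    for ch in range(3):
        h, _ = np.histogram(image[..., ch], bins=bins, range=(0, 255))
        feats.extend(h / h.sum())
    return feats

rng = np.random.default_rng(4)
rows = []
for title in ("painting_001", "painting_002"):
    img = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)  # stand-in image
    t0 = time.perf_counter()
    feats = rgb_histogram(img)
    elapsed = time.perf_counter() - t0
    rows.append([title, f"{elapsed:.4f}"] + [f"{v:.4f}" for v in feats])

with open("features.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["title", "elapsed_s"] + [f"f{i}" for i in range(24)])
    writer.writerows(rows)
print("wrote", len(rows), "rows to features.csv")
```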
Abstract:
This paper analyses the predictive ability of quantitative precipitation forecasts (QPF) and the so-called "poor-man" rainfall probabilistic forecasts (RPF). With this aim, the full set of warnings issued by the Meteorological Service of Catalonia (SMC) for potentially dangerous events due to severe precipitation has been analysed for the year 2008. For each of the 37 warnings, the QPFs obtained from the limited-area model MM5 have been verified against hourly precipitation data provided by the rain gauge network covering Catalonia (NE Spain), managed by the SMC. For a group of five selected case studies, a QPF comparison has been undertaken between the MM5 and COSMO-I7 limited-area models. Although MM5's predictive ability has been examined for these five cases by making use of satellite data, this paper only shows in detail the heavy precipitation event of 9–10 May 2008. Finally, the "poor-man" rainfall probabilistic forecasts (RPF) issued by the SMC at regional scale have also been tested against hourly precipitation observations. Verification results show that for long events (>24 h) MM5 tends to overestimate total precipitation, whereas for short events (≤24 h) the model tends instead to underestimate precipitation. The analysis of the five case studies concludes that most of MM5's QPF errors are mainly triggered by a very poor representation of some of its cloud microphysical species, particularly the cloud liquid water and, to a lesser degree, the water vapor. The models' performance comparison demonstrates that MM5 and COSMO-I7 are at the same level of QPF skill, at least for the intense-rainfall events dealt with in the five case studies, whilst the warnings based on RPF issued by the SMC have proven fairly correct when tested against hourly observed precipitation for 6-h intervals and at a small regional scale. Throughout this study, we have only dealt with (SMC-issued) warning episodes in order to analyse deterministic (MM5 and COSMO-I7) and probabilistic (SMC) rainfall forecasts; therefore we have not taken into account those episodes that might (or might not) have been missed by the official SMC warnings. Therefore, whenever we talk about "misses", it is always in relation to the deterministic LAMs' QPFs.
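A minimal sketch of the over/underestimation check described above: compare event-accumulated forecast precipitation with observed gauge totals and split the mean bias by event duration. The event records below are made up for illustration, not SMC, MM5 or COSMO-I7 data.

```python
# Sketch: mean forecast/observed bias for long (>24 h) vs short (<=24 h) events.
events = [
    # (duration_h, forecast_mm, observed_mm) -- made-up values
    (36, 95.0, 80.0),
    (48, 120.0, 100.0),
    (12, 30.0, 42.0),
    (18, 25.0, 33.0),
]

def mean_bias(subset):
    """Mean forecast/observed ratio; >1 means overestimation, <1 underestimation."""
    ratios = [f / o for _, f, o in subset]
    return sum(ratios) / len(ratios)

long_events = [e for e in events if e[0] > 24]
short_events = [e for e in events if e[0] <= 24]
print("long events  (>24 h): bias = %.2f" % mean_bias(long_events))
print("short events (<=24 h): bias = %.2f" % mean_bias(short_events))
```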
Abstract:
The goal of this work was to study and compare component-based software architectures (Microsoft .NET and J2EE). The purpose was to select a software architecture for a new neural-network-based career planning service. The work also examined how to create internationalised and localisable applications, and how well Web, Windows, mobile, speech and digital TV user interfaces suit the new career planning service. The research drew on the literature in the field and on the Microsoft and Sun Microsystems web sites. The performance comparison between the Microsoft Pet Shop and Sun Microsystems Java Pet Store sample applications was analysed. Based on the results of this analysis, the J2EE architecture is recommended for the career planning service. The proposed course of action for the new career planning service is a component-based system with Web, speech and digital TV user interfaces and personalised content. The system will be built as a five-phase project that includes pilot tests. Students, educational institutions and employers will be involved in the new career planning service, together with experts to specify the training data for the neural network. The service is based on an integrated database, so information produced in the different subsystems can be used throughout the career planning service.
Abstract:
The aim of this study is to determine whether passive value strategies yield abnormal risk-adjusted returns on the Finnish stock market. Does a portfolio of stocks selected for low valuation multiples earn more than a portfolio of stocks with high multiples? Are the alphas statistically significant? The study also examines whether the performance differences between the highest- and lowest-valuation portfolios are statistically significant. The multiples examined are the P/E, EV/EBIT and EV/EBITDA ratios, as well as the P/B and P/S ratios. In addition, two composite measures were studied: the product of the P/E and P/B ratios, and a valuation measure based on relative EV/EBITDA, P/B and P/S ratios. The data consist of companies publicly listed on the Finnish stock market from May 1991 to May 2006. The stocks were sorted into quintile portfolios according to the valuation level of each multiple. The portfolios were subsequently rebalanced at three- and five-year intervals. Finally, the monthly returns of the quintile portfolios were examined for each multiple over the whole study period. The results clearly show that the low-multiple quintile portfolios outperformed the high-multiple quintile portfolios. With three-year rebalancing, a value premium was clearly observable for the quintile portfolios formed on P/E, EV/EBITDA, P/B, P/S and the three-ratio composite. With five-year rebalancing, the same phenomenon was repeated for the quintile portfolios formed on EV/EBITDA. The best performer, both in absolute and risk-adjusted terms, was the lowest-valuation quintile portfolio introduced in this study, formed on the three-ratio composite with three-year rebalancing, which produced statistically highly significant positive alpha at a low level of risk.
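A small sketch of the portfolio construction step described above: sort stocks into quintiles by a valuation multiple (P/E here) and compare the subsequent returns of the lowest (value) and highest (glamour) quintiles. The data are randomly generated, not the 1991–2006 Finnish sample, so the printed "value premium" is illustrative only.

```python
# Sketch: quintile portfolios by P/E and the Q1 - Q5 return spread, on fake data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
n = 100
df = pd.DataFrame({
    "pe": rng.uniform(5, 40, n),
    "ret_3y": rng.normal(0.25, 0.30, n),     # hypothetical 3-year holding return
})

df["quintile"] = pd.qcut(df["pe"], 5, labels=[1, 2, 3, 4, 5])   # 1 = lowest P/E
by_q = df.groupby("quintile", observed=True)["ret_3y"].mean()
print(by_q)
print("value premium (Q1 - Q5): %.3f" % (by_q.loc[1] - by_q.loc[5]))
```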