502 results for BENCHMARKS


Relevance: 10.00%

Abstract:

This thesis deals with a hardware-accelerated Java virtual machine, named REALJava. The REALJava virtual machine is targeted at resource-constrained embedded systems. The goal is to attain increased computational performance with reduced power consumption. While these objectives are often seen as trade-offs, in this context both can be attained simultaneously by using dedicated hardware. The target level of computational performance for the REALJava virtual machine is initially set to match the currently available full custom ASIC Java processors. As a secondary goal, all components of the virtual machine are designed so that the resulting system can be scaled to support multiple co-processor cores. The virtual machine is designed using the hardware/software co-design paradigm. The partitioning between the two domains is flexible, allowing customization of the resulting system; for instance, floating point support can be omitted from the hardware in order to decrease the size of the co-processor core. The communication between the hardware and software domains is encapsulated into modules. This allows the REALJava virtual machine to be integrated into any system simply by redesigning the communication modules. Besides the virtual machine and the related co-processor architecture, several performance-enhancing techniques are presented. These include techniques related to instruction folding, stack handling, method invocation, constant loading, and control in the time domain. The REALJava virtual machine is prototyped using three different FPGA platforms. The original pipeline structure is modified to suit the FPGA environment. The performance of the resulting Java virtual machine is evaluated against existing Java solutions in the embedded systems field. The results show that the goals are attained, both in terms of computational performance and power consumption. The computational performance in particular is evaluated thoroughly, and the results show that REALJava is more than twice as fast as the fastest full custom ASIC Java processor. In addition to standard Java virtual machine benchmarks, several new Java applications are designed both to verify the results and to broaden the spectrum of the tests.
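One of the listed techniques, instruction folding, merges a short run of stack-oriented bytecodes into a single co-processor operation. The sketch below illustrates the general idea in Python; the opcode names, the folded pattern, and the folded-instruction form are illustrative assumptions, not REALJava's actual pipeline behavior.

```python
# Illustrative sketch of instruction folding: a peephole pass that rewrites
# a stack-machine run (load x, load y, add, store z) into one folded,
# register-style operation. Opcode names here are hypothetical.

def fold(bytecodes):
    """Collapse load-load-add-store runs into a single folded instruction."""
    folded, i = [], 0
    while i < len(bytecodes):
        window = bytecodes[i:i + 4]
        if [op for op, *_ in window] == ["iload", "iload", "iadd", "istore"]:
            (_, a), (_, b), _, (_, c) = window
            folded.append(("fadd_rrr", a, b, c))  # four stack ops become one
            i += 4
        else:
            folded.append(bytecodes[i])
            i += 1
    return folded

prog = [("iload", 1), ("iload", 2), ("iadd",), ("istore", 3), ("ireturn",)]
print(fold(prog))  # [('fadd_rrr', 1, 2, 3), ('ireturn',)]
```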

Relevance: 10.00%

Abstract:

A series of open source benchmarks for performance analysis of personal computers, with a focus on computational chemistry calculations, is presented. The results returned by these tests are discussed and correlated with the actual performance of a set of computers available for research in two computationally intensive fields of chemistry: quantum chemical and molecular simulation calculations.
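The correlation step described above can be sketched as follows; the benchmark scores and job runtimes are made-up placeholders, and the paper's actual test suite and statistics may differ.

```python
# Hypothetical illustration of the correlation step: relate synthetic
# benchmark scores to measured runtimes of a fixed chemistry job.
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

bench_score = [1.00, 1.35, 1.62, 2.10]   # per machine, higher = faster
job_runtime = [3600, 2700, 2300, 1750]   # seconds for a fixed QM job
print(f"r = {pearson(bench_score, job_runtime):.3f}")  # strongly negative
```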

Relevance: 10.00%

Abstract:

The purpose of this thesis is to examine the competitiveness of housing investment as a form of retirement saving. The comparison is quantitative, and the points of comparison are so-called traditional retirement savings vehicles, such as mutual fund investments, equity investments, pension insurance, and bank deposit accounts. The comparison is made with four different investor profiles. The profiles differ from each other in savings period and savings amount. In all cases the total savings amount is 36,000 euros. The starting point of the housing investment is a housing-company apartment, which is rented out for residential use. A bank loan is taken to purchase the apartment and is paid off with rental income together with the investor's own contribution, such that the monthly own contribution to the loan repayment equals the contribution made to the alternative, so-called traditional retirement savings product. The calculations in the thesis were made with the Excel spreadsheet program. The analysis focuses primarily on the final value of the investments after taxes and all expenses, the true annual rate of return, and the rate of return on total invested capital over the entire investment period.
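The headline quantity of the comparison, the true annual rate of return after taxes and expenses, can be illustrated with a minimal sketch that treats the 36,000-euro total as a lump sum; the final value and savings period below are assumed, and the thesis itself models periodic contributions in Excel.

```python
# Minimal sketch of the comparison's core metric: the average annual return
# that turns the total saved amount into the after-tax final value.
def annualized_return(total_saved, final_value, years):
    """Geometric average annual return over the savings period."""
    return (final_value / total_saved) ** (1 / years) - 1

total_saved = 36_000   # EUR, fixed across all profiles (from the text)
final_value = 58_000   # EUR after taxes and all expenses (assumed)
years = 20             # savings period of one profile (assumed)
print(f"{annualized_return(total_saved, final_value, years):.2%}")
```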

Relevance: 10.00%

Abstract:

Multiprocessing is a promising solution to meet the requirements of near-future applications. To get full benefit from parallel processing, a many-core system needs an efficient on-chip communication architecture. Network-on-Chip (NoC) is a general-purpose communication concept that offers high throughput, reduced power consumption, and keeps complexity in check through a regular composition of basic building blocks. This thesis presents power-efficient communication approaches for networked many-core systems. We address a range of issues important for designing power-efficient many-core systems at two different levels: the network level and the router level. From the network-level point of view, exploiting state-of-the-art concepts such as Globally Asynchronous Locally Synchronous (GALS), Voltage/Frequency Island (VFI), and 3D Network-on-Chip approaches may be a solution to the excessive power consumption of today's and future many-core systems. To this end, a low-cost 3D NoC architecture, based on high-speed GALS-based vertical channels, is proposed to mitigate the high peak temperatures, power densities, and area footprints of vertical interconnects in 3D ICs. To further exploit the negligible inter-layer distance of 3D ICs, we propose a novel hybridization scheme for inter-layer communication. In addition, an efficient adaptive routing algorithm is presented which enables congestion-aware and reliable communication for the hybridized NoC architecture. An integrated monitoring and management platform is also developed on top of this architecture in order to implement more scalable power optimization techniques. From the router-level perspective, four design styles for implementing power-efficient reconfigurable interfaces in VFI-based NoC systems are proposed. To enhance the utilization of virtual channel buffers and to manage their power consumption, a partial virtual channel sharing method for NoC routers is devised and implemented. Extensive experiments with synthetic and real benchmarks show significant power savings and mitigated hotspots with similar performance compared to the latest NoC architectures. The thesis concludes that carefully co-designed elements from different network levels enable considerable power savings for many-core systems.
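Congestion-aware adaptive routing of the kind mentioned above can be sketched minimally: at each hop, pick among the productive (minimal-path) output ports the one whose downstream buffer is least occupied. The port names, occupancy values, and selection rule below are generic illustrations of the concept, not the thesis's actual algorithm.

```python
# Minimal sketch of congestion-aware minimal adaptive routing on a 2D mesh.
# `occupancy` maps output ports to downstream buffer fill levels (assumed to
# come from a monitoring platform; the values here are made up).

def route(cur, dst, occupancy):
    """Pick the least-congested productive output port (X or Y direction)."""
    (cx, cy), (dx, dy) = cur, dst
    candidates = []
    if dx != cx:
        candidates.append("east" if dx > cx else "west")
    if dy != cy:
        candidates.append("north" if dy > cy else "south")
    if not candidates:
        return "local"  # packet has reached its destination router
    return min(candidates, key=occupancy.get)

occ = {"east": 3, "north": 1, "west": 0, "south": 2}
print(route((1, 1), (3, 2), occ))  # 'north': productive and less congested
```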

Relevance: 10.00%

Abstract:

Energy efficiency is one of the major objectives to be achieved in order to use the world's limited energy resources in a sustainable way. Since radiative heat transfer is the dominant heat transfer mechanism in most fossil fuel combustion systems, more accurate insight and models may improve the energy efficiency of newly designed combustion systems. The radiative properties of combustion gases are highly wavelength dependent, so better models for calculating them are much needed in the modeling of large-scale industrial combustion systems. With detailed knowledge of the spectral radiative properties of gases, the modeling of combustion processes in different applications can be more accurate. In order to propose a new method for effective non-gray modeling of radiative heat transfer in combustion systems, different models for the spectral properties of gases, including the SNBM, EWBM, and WSGGM, have been studied in this research. Building on this detailed analysis of the different approaches, the thesis presents new methods for gray and non-gray radiative heat transfer modeling in homogeneous and inhomogeneous H2O–CO2 mixtures at atmospheric pressure. The proposed method is able to support the modeling of a wide range of combustion systems, including the oxy-fired combustion scenario. The new methods are based on implementing pre-obtained correlations for the total emissivity and band absorption coefficient of H2O–CO2 mixtures at different temperatures, gas compositions, and optical path lengths. They can easily be used within any commercial CFD software for radiative heat transfer modeling, resulting in more accurate, simple, and fast calculations. The new methods were successfully used in CFD modeling by applying them to an industrial-scale backpass channel under oxy-fired conditions. The developed approaches are more accurate than other methods; moreover, they can provide a complete explanation and detailed analysis of radiative heat transfer in different systems under different combustion conditions. The methods were verified by applying them to several benchmarks, and they showed a good level of accuracy and computational speed compared to other methods. Furthermore, the implementation of the suggested banded approach in CFD software is straightforward.
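In the WSGGM, the total emissivity is a weighted sum over a few gray gases, eps = sum_i a_i(T) * (1 - exp(-k_i * p * L)). The sketch below shows the structure of such a correlation; the weight polynomials and absorption coefficients are placeholders, not the fitted H2O–CO2 coefficients developed in the thesis.

```python
# WSGG total emissivity sketch: eps = sum_i a_i(T) * (1 - exp(-k_i * pL)).
# The coefficients below are placeholders, not a fitted H2O-CO2 correlation.
import math

def wsgg_emissivity(T, pL, gases):
    """gases: list of (weight_fn, k); weight_fn gives a_i as a function of T."""
    return sum(a(T) * (1.0 - math.exp(-k * pL)) for a, k in gases)

gases = [
    (lambda T: 0.30 + 5e-5 * T, 0.4),   # gray gas 1 (hypothetical)
    (lambda T: 0.25 - 2e-5 * T, 7.0),   # gray gas 2 (hypothetical)
]
# T in kelvin, pL = partial-pressure path length in atm*m.
print(f"eps = {wsgg_emissivity(1400.0, 1.0, gases):.3f}")
```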

Relevance: 10.00%

Abstract:

The purpose of this study is to examine whether Corporate Social Responsibility (CSR) announcements of the three biggest American fast food companies (McDonald's, YUM! Brands, and Wendy's) have any effect on their stock returns as well as on the returns of the industry index (Dow Jones Restaurants and Bars). The period under consideration starts on 1 May 2001 and ends on 17 October 2013. The stock market reaction is tested with an event study utilizing the CAPM. The research employs the daily stock returns of the companies, the index, and the benchmarks (NASDAQ and NYSE). The test of combined announcements did not reveal any significant effect on the index or on McDonald's, but the stock returns of Wendy's and YUM! Brands reacted negatively. Moreover, the company-level analyses showed that McDonald's stock returns respond positively to its own CSR releases, YUM! Brands reacts negatively, and Wendy's shows no reaction. In addition, it was found that the competitors of the announcing company tend to react negatively to all the events. Furthermore, the division of the events into sustainability categories revealed a statistically significant negative reaction from the index, McDonald's, and YUM! Brands to social announcements, while only the index was positively affected by the economic and environmental CSR news releases.
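The event-study mechanics rest on CAPM abnormal returns, AR_t = R_t - (alpha + beta * Rm_t), with alpha and beta estimated over a pre-event window. A minimal sketch follows; all return series are made-up placeholders, not the study's data.

```python
# Sketch of a CAPM-based event study: AR_t = R_t - (alpha + beta * Rm_t).
import statistics

def capm_fit(stock, market):
    """OLS alpha and beta of stock returns on market returns."""
    beta = statistics.covariance(stock, market) / statistics.variance(market)
    alpha = statistics.fmean(stock) - beta * statistics.fmean(market)
    return alpha, beta

# Estimation window (placeholder daily returns), then a 3-day event window.
est_stock  = [0.010, -0.004, 0.006, 0.002, -0.008, 0.011]
est_market = [0.008, -0.002, 0.005, 0.001, -0.006, 0.009]
alpha, beta = capm_fit(est_stock, est_market)

event_stock, event_market = [-0.012, 0.003, 0.001], [0.002, 0.004, -0.001]
ars = [r - (alpha + beta * m) for r, m in zip(event_stock, event_market)]
print(f"CAR = {sum(ars):+.4f}")  # cumulative abnormal return over the window
```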

Relevance: 10.00%

Abstract:

A new area of machine learning research called deep learning has moved machine learning closer to one of its original goals: artificial intelligence and a general learning algorithm. The key idea is to pretrain models in a completely unsupervised way and then fine-tune them for the task at hand using supervised learning. In this thesis, a general introduction to deep learning models and algorithms is given, and these methods are applied to facial keypoint detection. The task is to predict the positions of 15 keypoints on grayscale face images. Each predicted keypoint is specified by an (x, y) real-valued pair in the space of pixel indices. In the experiments, we pretrained deep belief networks (DBNs) and then performed discriminative fine-tuning. We varied the depth and size of the architecture, tested both deterministic and sampled hidden activations, and measured the effect of additional unlabeled data on pretraining. The experimental results show that our model provides better results than publicly available benchmarks for the dataset.
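The building block of DBN pretraining is a restricted Boltzmann machine trained with contrastive divergence. A minimal CD-1 sketch in NumPy is given below; the layer sizes, learning rate, and training batch are illustrative assumptions, not the thesis's configuration.

```python
# Minimal sketch of one RBM layer trained with CD-1, the building block of
# DBN pretraining; sizes, learning rate, and data here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid, lr = 96 * 96, 500, 0.01        # e.g. flattened face images
W = rng.normal(0, 0.01, (n_vis, n_hid))
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)

def cd1_step(v0):
    """One contrastive-divergence (CD-1) update on a batch of visibles."""
    h0 = sigmoid(v0 @ W + b_hid)             # positive phase
    h_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(h_sample @ W.T + b_vis)     # reconstruction
    h1 = sigmoid(v1 @ W + b_hid)             # negative phase
    n = v0.shape[0]
    W[...] += lr * (v0.T @ h0 - v1.T @ h1) / n
    b_vis[...] += lr * (v0 - v1).mean(axis=0)
    b_hid[...] += lr * (h0 - h1).mean(axis=0)

batch = rng.random((32, n_vis))              # placeholder training batch
cd1_step(batch)   # repeat over epochs, stack RBMs, then fine-tune the DBN
```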

Relevance: 10.00%

Abstract:

Over time the demand for quantitative portfolio management has increased among financial institutions, but there is still a lack of practical tools. In 2008 the EDHEC Risk and Asset Management Research Centre conducted a survey of European investment practices. It revealed that the majority of asset and fund management companies, pension funds, and institutional investors do not use more sophisticated models to compensate for the flaws of Markowitz mean-variance portfolio optimization. Furthermore, tactical asset allocation managers employ a variety of methods to estimate the return and risk of assets, but also need sophisticated portfolio management models to outperform their benchmarks. Recent developments in portfolio management suggest that new innovations are slowly gaining ground, but they still need to be studied carefully. This thesis aims to provide a practical tactical asset allocation (TAA) application of the Black–Litterman (B–L) approach and an unbiased evaluation of the B–L model's qualities. The mean-variance framework, issues related to asset allocation decisions, and return forecasting are examined carefully to uncover issues affecting active portfolio management. European fixed income data is employed in an empirical study that tries to reveal whether a B–L model based TAA portfolio is able to outperform its strategic benchmark. The tactical asset allocation utilizes a Vector Autoregressive (VAR) model to create return forecasts from lagged values of asset classes as well as economic variables. The sample data (31.12.1999–31.12.2012) is divided in two: the in-sample data is used for calibrating a strategic portfolio, and the out-of-sample period for testing the tactical portfolio against the strategic benchmark. Results show that the B–L model based tactical asset allocation outperforms the benchmark portfolio in terms of risk-adjusted return and mean excess return. The VAR model is able to pick up changes in investor sentiment, and the B–L model adjusts portfolio weights in a controlled manner. The TAA portfolio shows promise especially in moderately shifting allocation to riskier assets while the market is turning bullish, but without overweighting investments with high beta. Based on the findings in the thesis, the Black–Litterman model offers a good platform for active asset managers to quantify their views on investments and implement their strategies. The B–L model shows potential and offers interesting research avenues. However, the success of tactical asset allocation is still highly dependent on the quality of the input estimates.
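The B–L posterior blends equilibrium returns with investor views. A sketch of the standard master formula follows; the three-asset inputs are placeholders, not the European fixed income data used in the study.

```python
# Sketch of the Black-Litterman posterior mean:
#   mu = inv(inv(tau*S) + P' inv(Om) P) @ (inv(tau*S) @ pi + P' inv(Om) @ q)
# All inputs below are illustrative three-asset placeholders.
import numpy as np

def black_litterman(pi, Sigma, P, q, Omega, tau=0.05):
    ts_inv = np.linalg.inv(tau * Sigma)
    om_inv = np.linalg.inv(Omega)
    A = ts_inv + P.T @ om_inv @ P
    b = ts_inv @ pi + P.T @ om_inv @ q
    return np.linalg.solve(A, b)

pi = np.array([0.02, 0.03, 0.04])     # equilibrium excess returns
Sigma = np.diag([0.01, 0.02, 0.03])   # covariance matrix (placeholder)
P = np.array([[1.0, -1.0, 0.0]])      # view: asset 1 outperforms asset 2
q = np.array([0.015])                 # by 1.5% (e.g. from a VAR forecast)
Omega = np.array([[0.0005]])          # uncertainty attached to the view
print(black_litterman(pi, Sigma, P, q, Omega))
```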

Relevance: 10.00%

Abstract:

Is the pace of the fall in inequality in Brazil acceptable? Evidence from the historical and international context. The following study uses two approaches to answer the question of whether inequality in Brazil is falling fast enough. The first is to compare the variation of the Gini coefficient in Brazil with what was observed in several countries that today belong to the OECD (United Kingdom, United States, Netherlands, Sweden, France, Norway, and Spain) while these same countries built their social welfare systems during the last century. The second approach is to calculate for how long Brazil must sustain the fall in the Gini coefficient to attain the same levels of inequality as three OECD countries that can be used as references: Mexico, the United States, and Canada. The data indicate that the Gini coefficient in Brazil is falling 0.7 points per year, which is faster than the pace of all the OECD countries analyzed while they built their welfare systems except Spain, whose Gini fell 0.9 points per year during the 1950s. The times needed to attain the various inequality benchmarks are 6 years to reach Mexican, 12 to reach United States, and 24 to reach Canadian inequality levels. The general conclusion is that the speed with which inequality is falling is adequate, but the challenge will be to keep inequality falling at the same rate for another two or three decades.
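The convergence horizons follow from simple arithmetic: years = (Gini_Brazil - Gini_target) / 0.7, using the annual fall reported in the study. In the sketch below, the Gini levels are hypothetical values chosen only to reproduce the reported 6/12/24-year horizons.

```python
# Arithmetic behind the convergence horizons: years = gap / rate, with the
# 0.7 Gini-point annual fall taken from the study. Levels are hypothetical.
RATE = 0.7  # Gini points per year

def years_to_reach(gini_brazil, gini_target, rate=RATE):
    return (gini_brazil - gini_target) / rate

# Illustrative levels chosen to reproduce the reported horizons (6/12/24 y).
brazil = 55.0
for country, target in [("Mexico", 50.8), ("United States", 46.6),
                        ("Canada", 38.2)]:
    print(f"{country}: {years_to_reach(brazil, target):.0f} years")
```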

Relevance: 10.00%

Abstract:

This study investigates instructors' perceptions of reading instruction and difficulties among Language Instruction for Newcomers to Canada (LINC) Level 1–3 learners. Statistics Canada reports that 60% of immigrants possess inadequate literacy skills. Newcomers are placed in classes using the Canadian Language Benchmarks, but large, mixed-level classes create little opportunity for individualized instruction, leading some clients to demonstrate little change in their reading benchmarks. Data were collected (via demographic questionnaires, semi-structured interviews, teaching plans, and field study notes) to create a case study of five LINC instructors' perceptions of why some clients do not progress through the LINC reading levels as expected and how their previous experiences relate to those within the LINC program. Qualitative analyses of the data revealed three primary themes: client/instructor background and classroom needs; reading strategies, methods, and challenges; and assessment expectations and progress, each containing a number of subthemes. A comparison between the themes and the literature yielded six areas for discussion: (a) some clients, specifically refugees, require more time to progress to higher benchmarks; (b) clients' level of prior education can be indicative of their literacy skills; (c) clients with literacy needs should be separated and placed into literacy-specific classes; (d) evidence-based approaches to reading instruction were not always evident in participants' responses, demonstrating a lack of knowledge about these approaches; (e) first language literacy influences second language reading acquisition through a transfer of skills; and (f) collaboration in the classroom supports learning by extending clients' capabilities. These points form the basis of recommendations about how reading instruction might be improved for such clients.

Relevance: 10.00%

Abstract:

This note develops general model-free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks. The procedures, which exploit the recent asymptotic distributional results in Barndorff-Nielsen and Shephard (2002a), are both easy to implement and highly accurate in empirically realistic situations. On properly accounting for the measurement errors in the volatility forecast evaluations reported in Andersen, Bollerslev, Diebold and Labys (2003), the adjustments result in markedly higher estimates for the true degree of return-volatility predictability.
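The realized volatility benchmark underlying the note is the sum of squared intraday returns over the trading day. A minimal sketch follows; the 5-minute return series is simulated, and the note's actual measurement-error adjustment procedures are not reproduced here.

```python
# Sketch of the realized-volatility benchmark: daily realized variance as
# the sum of squared intraday returns. The return series is simulated.
import math
import random

random.seed(1)
intraday_returns = [random.gauss(0, 0.001) for _ in range(288)]  # 5-min grid

rv = sum(r * r for r in intraday_returns)        # realized variance
print(f"daily realized vol = {math.sqrt(rv):.4%}")
```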

Relevance: 10.00%

Abstract:

Lung cancer has one of the highest incidence and fatality rates of all cancers diagnosed in Canada. Considering the severity of the prognosis and the symptoms of the disease, access to treatment as quickly as possible is essential. Despite the commitment of the federal and provincial governments to reduce wait times, benchmarks for wait times for cancer treatment have still not been established. Moreover, the reporting of wait-time indicators is not uniform across the provinces. One of the solutions proposed for reducing cancer treatment wait times is interdisciplinary teams. I completed an audit of the interdisciplinary lung cancer program at the Jewish General Hospital (JGH) from 2004 to 2007. The primary objectives of the study were: (1) to audit the performance of the interdisciplinary team at the JGH with respect to wait times for the critical intervals and patient subgroups; (2) to compare the wait times in the clinical trajectory of patients treated at the JGH with the existing benchmarks; (3) to determine the factors associated with longer delays in this population. A secondary objective of the study was to suggest measures to reduce wait times. The clinical service at the JGH was evaluated against the benchmarks proposed by the British Thoracic Society, Cancer Care Ontario, and the pan-Canadian benchmark for radiotherapy. JGH patients experienced a median delay of 9 days for the "ready to treat to first treatment" interval, and a median delay of 30 days for the interval between first contact with the hospital and first treatment. Patients over 65 years of age, patients with reduced physical capacity, and patients with limited-stage tumours were more at risk of missing the wait-time benchmarks.

Relevance: 10.00%

Abstract:

The full version of this thesis is available only for individual consultation at the Music Library of the Université de Montréal (http://www.bib.umontreal.ca/MU).

Relevance: 10.00%

Abstract:

Analyzing code makes it possible to verify its functionality, detect bugs, or improve its performance. Code analysis can be static or dynamic. Approaches combining the two analyses are better suited to industrial-scale applications, where using each approach individually cannot provide the desired results. Combined approaches apply dynamic analysis to determine the problem areas in the code and then perform a static analysis focused on the identified parts. However, existing dynamic analysis tools generate imprecise or incomplete data, or result in an unacceptable slowdown in execution time. In this work, we are interested in generating complete dynamic call graphs as well as other information needed to detect problem areas in code. To this end, we use dynamic Java bytecode instrumentation to extract information about call sites and object-creation sites and to build the program's dynamic call graph. We show that it is possible to dynamically profile a complete execution of an application with a non-trivial running time and to extract all of this information at a reasonable cost. Performance measurements of our profiler on three benchmark suites with diverse workloads show that the average profiling overhead lies between 2.01x and 6.42x. Our tool for generating complete dynamic call graphs, named dyko, is also an extensible platform for adding new instrumentation approaches. We tested a new technique for instrumenting object-creation sites, which adapts the modifications introduced by the instrumentation to the bytecode of each method. We also tested the impact of call-site resolution on the overall performance of the profiler.
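The core idea, recording caller–callee edges at instrumented call sites, can be illustrated in Python, where sys.setprofile stands in for the bytecode instrumentation; this is a concept sketch only, not the dyko tool or its Java instrumentation.

```python
# Concept illustration: build a dynamic call graph by intercepting call
# events, as the profiler does via Java bytecode instrumentation.
import sys
from collections import Counter

edges = Counter()

def tracer(frame, event, arg):
    """Record a caller -> callee edge on every Python 'call' event."""
    if event == "call" and frame.f_back is not None:
        caller = frame.f_back.f_code.co_name
        callee = frame.f_code.co_name
        edges[(caller, callee)] += 1

def helper():
    return sum(range(10))

def main():
    total = 0
    for _ in range(3):
        total += helper()
    return total

sys.setprofile(tracer)
main()
sys.setprofile(None)
for (caller, callee), n in sorted(edges.items()):
    print(f"{caller} -> {callee}  x{n}")   # e.g. main -> helper  x3
```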

Relevance: 10.00%

Abstract:

Understanding objects in object-oriented programs is an important task in program comprehension. JavaScript (JS) is a dynamic object-oriented language, and its dynamism makes understanding its source code very difficult. In this thesis, we focus on object analysis for JS programs. Our approach automatically builds an object graph, inspired by the UML class diagram, from a concrete execution of a JS program. The resulting graph shows the structure of the objects as well as the interactions between them. Our approach uses a source-code transformation to produce this information during execution. This transformation makes it possible to collect complete information about the objects created and to intercept all modifications to those objects. From this information, we apply several abstractions aimed at producing a more compact and intuitive representation of the objects. This approach is implemented in the JSTI tool. To evaluate the usefulness of the approach, we measured its performance as well as the degree of reduction achieved by the abstractions. We used the ten V8 reference benchmark programs for this comparison. The results show that JSTI is efficient enough to be used in practice, with an average slowdown of 14x. Moreover, for 9 of the 10 programs, the graphs are compact enough to be visualized. We also validated the approach qualitatively by manually inspecting the generated graphs. These graphs generally correspond very well to the expected result. Keywords: program analysis, dynamic analysis, JavaScript, profiling.
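The interception idea behind the source transformation can be sketched in Python: a hook on field writes accumulates type nodes and reference edges. This is a concept illustration only; JSTI's actual transformation, abstractions, and graph format differ.

```python
# Concept sketch (not JSTI itself): intercept field writes to accumulate an
# object graph of types (nodes) and reference relations (edges).
class Recorder:
    nodes, edges = set(), set()

    @classmethod
    def record(cls, obj, field, value):
        cls.nodes.add(type(obj).__name__)
        if not isinstance(value, (int, float, str, bool, type(None))):
            cls.nodes.add(type(value).__name__)
            cls.edges.add((type(obj).__name__, field, type(value).__name__))

class Traced:
    def __setattr__(self, name, value):
        Recorder.record(self, name, value)   # interception point
        super().__setattr__(name, value)

class Engine(Traced): pass

class Car(Traced):
    def __init__(self):
        self.engine = Engine()   # reference edge: Car --engine--> Engine
        self.doors = 4           # primitive field: node only, no edge

Car()
print(Recorder.nodes)   # e.g. {'Car', 'Engine'}
print(Recorder.edges)   # e.g. {('Car', 'engine', 'Engine')}
```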