502 results for Benchmarks
Abstract:
Introduction: In Portugal, as in other countries around the world, population ageing has been increasing as a result of multiple societal transformations. This new demographic scenario prompted supranational organizations to reflect on cities and their relationship with their older residents. From this reflection emerged the Age-Friendly Cities project, which provides benchmarks for assessing cities so that they can adapt their structures and services to their older citizens, thereby benefiting from the potential that older people represent for humanity. Objectives: The central aim of this study is to determine whether Coimbra is an age-friendly city. Methodology: The research is an exploratory qualitative study based on the methodological procedures of the Vancouver Protocol. The focus group was held in two sessions. Participants: Sixteen people were consulted, 15 (93.8%) of them women. The mean age was 79.88 years (SD = 10.658); most were widowed (7 = 43.8%), 7 (43.8%) had completed the 4th grade, and most classified themselves as lower middle class (7 = 43.8%). Results: Of the eight categories analysed, three ("outdoor spaces and buildings", "transportation" and "respect and social inclusion") were rated with both positive and negative aspects. "Community support and health services" was rated positively, while "housing", "social participation" and "communication and information" were rated negatively. The suggestions made concern a single topic, "outdoor spaces and buildings". Conclusions: If we share the view that an age-friendly city encourages active ageing, because it optimizes opportunities for participation in the urban environment and thereby improves the quality of life of people as they age, then the results obtained from this group of older people allow us to state that Coimbra needs to adapt to its older citizens. Only then can Coimbra become an age-friendly city. It should also be noted that the results must be interpreted in light of the socio-demographic profile of the older people interviewed.
Abstract:
Several studies have been undertaken or attempted by industry and academe to address the need for lodging industry carbon benchmarking. However, these studies have focused on normalizing resource use with the goal of rating or comparing all properties based on multivariate regression according to an industry-wide set of variables, with the result that data sets for analysis were limited. This approach is backward, because practical hotel industry benchmarking must first be undertaken within a specific location and segment. Therefore, the CHSB study's goal is to build a representative database providing raw benchmarks as a base for industry comparisons. These results are presented in the CHSB2016 Index, through which a user can obtain the range of benchmarks for energy consumption, water consumption, and greenhouse gas emissions for hotels within specific segments and geographic locations.
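To make the notion of a raw benchmark range concrete, here is a minimal sketch assuming a hypothetical property-level table with columns named segment, location and energy_kwh_per_m2 (these names and values are illustrative only, not the CHSB2016 schema): it groups properties by segment and location and reports percentile ranges rather than regression-normalized scores.

```python
# Minimal sketch (not the actual CHSB2016 Index): raw benchmark ranges per
# hotel segment and location from a hypothetical property-level dataset.
import pandas as pd

def benchmark_ranges(df: pd.DataFrame, metric: str) -> pd.DataFrame:
    """Return the 25th/50th/75th percentile of `metric` for each
    segment-location pair, i.e. a raw benchmark range rather than a
    regression-normalized score."""
    return (
        df.groupby(["segment", "location"])[metric]
          .quantile([0.25, 0.50, 0.75])
          .unstack()                      # one column per quantile
          .rename(columns={0.25: "p25", 0.50: "median", 0.75: "p75"})
    )

# Example usage with a toy dataset (values are illustrative only):
hotels = pd.DataFrame({
    "segment":  ["luxury", "luxury", "midscale", "midscale"],
    "location": ["London", "London", "London", "London"],
    "energy_kwh_per_m2": [310.0, 295.5, 180.2, 205.7],
})
print(benchmark_ranges(hotels, "energy_kwh_per_m2"))
```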
Abstract:
Our research explores several key moments in the metamorphoses of the political relationship to mortality, examined through the themes of prohibition, dignity, autonomy and otherness. We identify anchor points suited to nourishing current thinking in palliative medicine. We thus offer a philosophical inquiry, assessing, for us Westerners, the defining influences of Greco-Roman, Christian and modern thought. These foundations, which built our political world, gave rise to the emergence of palliative medicine. For this reason, we attempt to characterize and understand the new problems, in their political and ethical aspects, considered in light of contemporary forms of accompaniment of the dying. Our effort seeks to discern both aspirations and impasses. The study of the metamorphosis of these reference points reveals a dissociation that has deepened over time. Indeed, as our work of interpreting the foundations of these political questions progressed, it revealed: a universal recognition of the prohibition of homicide, but one showing a loss of the moral bond in favour of an amoral aim; unanimous endorsement of respect for dignity, but displaying confusion and an ostensible division between intrinsic and extrinsic conceptions; a peremptory affirmation of autonomy, but with a marked distancing in how the place of the other is considered; a range of human bonds recognized by all, but exacerbated in an artificial tension between individualism and altruism. Moreover, observing the distance and dislocation between the public and the private, between the demand for fraternity and the search for meaningful friendship, we considered the palliative community as a place of resistance to this threatening decomposition within the political community. At the end of the analysis, we established the concepts of "allonomy" and "ethical suspension". These are original contributions intended to give philosophy its full sapiential dimension in the service of palliative accompaniment.
Abstract:
This paper presents a three-dimensional, thermo-mechanical modelling approach to the cooling and solidification phases associated with the shape casting of metals, i.e. die, sand and investment casting. Novel vertex-based Finite Volume (FV) methods are described and employed with regard to the small-strain, non-linear Computational Solid Mechanics (CSM) capabilities required to model shape casting. The CSM capabilities include the non-linear material phenomena of creep and thermo-elasto-visco-plasticity at high temperatures and thermo-elasto-visco-plasticity at low temperatures, and also multi-body deformable contact, which can occur between the metal casting and the mould. The vertex-based FV methods, which can be readily applied to unstructured meshes, are included within a comprehensive FV modelling framework, PHYSICA. The additional heat transfer (by conduction and convection), filling, porosity and solidification algorithms within PHYSICA for the complete modelling of the shape casting process employ cell-centred FV methods. The thermo-mechanical coupling is performed in a staggered incremental fashion, which addresses the possible gap formation between the component and the mould, and is ultimately validated against a variety of shape casting benchmarks.
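The staggered incremental coupling described above can be illustrated with a small, self-contained toy: each increment performs a thermal step, then a (highly simplified) mechanical step, then updates the casting/mould gap that feeds back into the next thermal step. This is only a conceptual sketch on a 1D bar, not the PHYSICA formulation, and all parameter values are invented for illustration.

```python
# Toy staggered thermo-mechanical coupling on a 1D cooling bar (conceptual only).
import numpy as np

n, dx, dt = 50, 1.0e-3, 1.0e-2           # nodes, spacing [m], time step [s]
alpha, beta = 1.0e-5, 2.0e-5             # diffusivity [m^2/s], expansion [1/K]
T = np.full(n, 700.0)                     # casting temperature field [K]
T_mould, h_contact, h_gap = 400.0, 1.0, 0.2   # interface "conductances"
gap_open = False

for step in range(2000):
    # 1) Thermal increment: interior conduction plus interface cooling whose
    #    strength depends on whether a gap has opened at the mould face.
    h = h_gap if gap_open else h_contact
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[-1] += h * dt * (T_mould - T[-1])

    # 2) Mechanical increment (highly simplified): free thermal contraction
    #    of the bar relative to its initial state.
    contraction = beta * np.mean(700.0 - T) * n * dx

    # 3) Coupling update: once contraction exceeds a tolerance, the casting
    #    pulls away from the mould and an interface gap opens.
    gap_open = contraction > 1.0e-6

print(f"final mean T = {T.mean():.1f} K, gap open = {gap_open}")
```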
Abstract:
Over the last decade there has been a resumption of investment in Brazilian shipbuilding, which has resulted in an expansion and modernization of the productive capacity of national shipyards. National shipyards still need to reach a level of operational excellence comparable to that observed in the market-leading countries. This work presents practices adopted by successful foreign shipyards that could be implemented by Brazilian shipyards in order to make them globally competitive. To that end, a case study was carried out at a national shipyard, focused on surveying the technologies and processes in use at that shipyard and classifying them against worldwide best practices (benchmarks). A benchmarking method developed for shipbuilding was used in the present study. The work can serve as a source of information for making adjustments to improve production processes, reduce cycle times and make better use of labour. In this way, it can help establish the shipyard's current position and identify what is needed to make it internationally competitive.
Abstract:
Contemporary integrated circuits are designed and manufactured in a globalized environment, leading to concerns of piracy, overproduction and counterfeiting. One class of techniques to combat these threats is circuit obfuscation, which seeks to modify the gate-level (or structural) description of a circuit without affecting its functionality in order to increase the complexity and cost of reverse engineering. Most of the existing circuit obfuscation methods are based on the insertion of additional logic (called "key gates") or camouflaging existing gates in order to make it difficult for a malicious user to get the complete layout information without extensive computations to determine key-gate values. However, when the netlist or the circuit layout, although camouflaged, is available to the attacker, he/she can use advanced logic analysis and circuit simulation tools and Boolean SAT solvers to reveal the unknown gate-level information without exhaustively trying all the input vectors, thus bringing down the complexity of reverse engineering. To counter this problem, some 'provably secure' logic encryption algorithms that emphasize methodical selection of camouflaged gates have been proposed previously in the literature [1,2,3]. The contribution of this paper is the creation and simulation of a new layout obfuscation method that uses don't-care conditions. We also present a proof of concept of a new functional or logic obfuscation technique that not only conceals, but modifies the circuit functionality in addition to the gate-level description, and can be implemented automatically during the design process. Our layout obfuscation technique utilizes don't-care conditions (namely, Observability and Satisfiability Don't Cares) inherent in the circuit to camouflage selected gates and modify sub-circuit functionality while meeting the overall circuit specification. Here, camouflaging or obfuscating a gate means replacing the candidate gate by a 4×1 multiplexer which can be configured to perform all possible 2-input/1-output functions, as proposed by Bao et al. [4] (a minimal illustrative sketch of this primitive follows the reference list below). It is important to emphasize that our approach not only obfuscates but alters sub-circuit-level functionality in an attempt to make IP piracy difficult. The choice of gates to obfuscate determines the effort required to reverse-engineer or brute-force the design. As such, we propose a method of camouflaged gate selection based on the intersection of output logic cones. By choosing these candidate gates methodically, the complexity of reverse engineering can be made exponential, thus making it computationally very expensive to determine the true circuit functionality. We propose several heuristic algorithms to maximize the reverse engineering (RE) complexity based on don't-care-based obfuscation and methodical gate selection. Thus, the goal of protecting the design IP from malicious end-users is achieved. It also makes it significantly harder for rogue elements in the supply chain to use, copy or replicate the same design with different logic. We analyze the reverse engineering complexity by applying our obfuscation algorithm to ISCAS-85 benchmarks. Our experimental results indicate that significant reverse engineering complexity can be achieved at minimal design overhead (average area overhead for the proposed layout obfuscation methods is 5.51% and average delay overhead is about 7.732%). We discuss the strengths and limitations of our approach and suggest directions that may lead to improved logic encryption algorithms in the future. References: [1] R. Chakraborty and S. Bhunia, "HARPOON: An Obfuscation-Based SoC Design Methodology for Hardware Protection," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 28, no. 10, pp. 1493–1502, 2009. [2] J. A. Roy, F. Koushanfar, and I. L. Markov, "EPIC: Ending Piracy of Integrated Circuits," in Design, Automation and Test in Europe (DATE), 2008, pp. 1069–1074. [3] J. Rajendran, M. Sam, O. Sinanoglu, and R. Karri, "Security Analysis of Integrated Circuit Camouflaging," in ACM Conference on Computer and Communications Security (CCS), 2013. [4] B. Liu and B. Wang, "Embedded Reconfigurable Logic for ASIC Design Obfuscation against Supply Chain Attacks," in Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp. 1–6.
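The multiplexer-based camouflaging primitive cited above (Bao et al. [4]) can be illustrated with a short, self-contained sketch: a 4-to-1 multiplexer whose select lines are the original gate's two inputs and whose four data inputs hold configuration bits realizes any 2-input/1-output function, so the visible structure reveals nothing about which gate it replaced. The code below is purely illustrative and is not the paper's tooling.

```python
# Illustrative 4:1-MUX camouflaged gate: the 4-bit `config` is the truth
# table of the hidden function, indexed by the two original gate inputs.

def mux4x1(config, a, b):
    """4:1 MUX: `config` is a 4-bit truth table (bit index = 2*a + b)."""
    return (config >> (2 * a + b)) & 1

# Truth tables (bit i corresponds to inputs a, b with i = 2*a + b).
CONFIGS = {
    "AND":  0b1000,   # output 1 only when a=1, b=1 -> index 3
    "OR":   0b1110,
    "XOR":  0b0110,
    "NAND": 0b0111,
    "NOR":  0b0001,
}

# Reference implementations used to verify each configuration.
REFERENCE = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
}

for name, cfg in CONFIGS.items():
    assert all(mux4x1(cfg, a, b) == REFERENCE[name](a, b)
               for a in (0, 1) for b in (0, 1)), name
print("all camouflaged-gate configurations verified")
```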
Abstract:
This article reports a unique analysis of private engagements by an activist fund. It is based on data made available to us by Hermes, the fund manager owned by the British Telecom Pension Scheme, on engagements with management in companies targeted by its UK Focus Fund. In contrast with most previous studies of activism, we report that the fund executes shareholder activism predominantly through private interventions that would be unobservable in studies purely relying on public information. The fund substantially outperforms benchmarks and we estimate that abnormal returns are largely associated with engagements rather than stock picking.
Abstract:
Over a period of 50 years—between 1962 and 2012—three preeminent American piano competitions, the Van Cliburn International Piano Competition, the University of Maryland International Piano Competition/William Kapell International Piano Competition and the San Antonio International Piano Competition, commissioned for inclusion on their required performance lists 26 piano works, almost all by American composers. These compositions, works of sufficient artistic depth and technical sophistication to serve as rigorous benchmarks for competition finalists, constitute a unique segment of the contemporary American piano repertoire. Although a limited number of these pieces have found their way into the performance repertoire of concert artists, too many have not been performed since their premières in the final rounds of the competitions for which they were designed. Such should not be the case. Some of the composers in question are innovative titans of 20th-century American music—Samuel Barber, Aaron Copland, Leonard Bernstein, John Cage, John Corigliano, William Schuman, Joan Tower and Ned Rorem, to name just a few—and many of the pieces themselves, as historical touchstones, deserve careful examination. This study includes, in addition to an introductory overview of the three competitions, a survey of all 26 compositions and an analysis of their expressive characteristics, from the point of view of the performing pianist. Numerous musical examples support the analysis. Biographical information about the composers, along with descriptions of their overall musical styles, place these pieces in historical context. Analytical and technical comprehension of this distinctive and rarely performed corner of the modern classical piano world could be of inestimable value to professional pianists, piano pedagogues and music educators alike.
Abstract:
Applications are subject to a continuous evolution process with a profound impact on their underlying data model, hence requiring frequent updates to the applications' class structure and database structure as well. This twofold problem, schema evolution and instance adaptation, usually known as database evolution, is addressed in this thesis. Additionally, we address concurrency and error recovery problems with a novel meta-model and its aspect-oriented implementation. Modern object-oriented databases provide features that help programmers deal with object persistence, as well as all related problems such as database evolution, concurrency and error handling. In most systems there are transparent mechanisms to address these problems; nonetheless, the database evolution problem still requires some human intervention, which consumes much of programmers' and database administrators' work effort. Earlier research works have demonstrated that aspect-oriented programming (AOP) techniques enable the development of flexible and pluggable systems. In these earlier works, the schema evolution and the instance adaptation problems were addressed as database management concerns. However, none of this research was focused on orthogonal persistent systems. We argue that AOP techniques are well suited to address these problems in orthogonal persistent systems. Regarding concurrency and error recovery, earlier research showed that only syntactic obliviousness between the base program and aspects is possible. Our meta-model and framework follow an aspect-oriented approach focused on the object-oriented orthogonal persistent context. The proposed meta-model is characterized by its simplicity in order to achieve efficient and transparent database evolution mechanisms. Our meta-model supports multiple versions of a class structure by applying a class versioning strategy, thus enabling bidirectional application compatibility among versions of each class structure. That is to say, the database structure can be updated while earlier applications continue to work, as do later applications that know only the updated class structure. The specific characteristics of orthogonal persistent systems, as well as a metadata enrichment strategy within the application's source code, complete the inception of the meta-model and have motivated our research work. To test the feasibility of the approach, a prototype was developed. Our prototype is a framework that mediates the interaction between applications and the database, providing them with orthogonal persistence mechanisms. These mechanisms are introduced into applications as an aspect in the aspect-oriented sense. Objects do not require the extension of any super class, the implementation of an interface, nor a particular annotation. Parametric type classes are also correctly handled by our framework. However, classes that belong to the programming environment must not be handled as versionable due to restrictions imposed by the Java Virtual Machine. Regarding concurrency support, the framework provides the applications with a multithreaded environment which supports database transactions and error recovery. The framework keeps applications oblivious to the database evolution problem, as well as to persistence. Programmers can update the applications' class structure because the framework will produce a new version for it at the database metadata layer.
Using our XML-based pointcut/advice constructs, the framework's instance adaptation mechanism is extended, hence keeping the framework oblivious to this problem as well. The potential development gains provided by the prototype were benchmarked. In our case study, the results confirm that the mechanisms' transparency has positive repercussions on the programmer's productivity, simplifying the entire evolution process at the application and database levels. The meta-model itself was also benchmarked in terms of complexity and agility. Compared with other meta-models, it requires fewer meta-object modifications in each schema evolution step. Other types of tests were carried out in order to validate prototype and meta-model robustness. In order to perform these tests, we used a small-sized OO7 database due to its data model complexity. Since the developed prototype offers some features that were not observed in other known systems, performance benchmarks were not possible. However, the developed benchmark is now available to perform future performance comparisons with equivalent systems. In order to test our approach in a real-world scenario, we developed a proof-of-concept application. This application was developed without any persistence mechanisms. Using our framework and minor changes to the application's source code, we added these mechanisms. Furthermore, we tested the application in a schema evolution scenario. This real-world experience using our framework showed that applications remain oblivious to persistence and database evolution. In this case study, our framework proved to be a useful tool for programmers and database administrators. Performance issues and the single Java Virtual Machine concurrency model are the major limitations found in the framework.
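As a language-neutral illustration of class versioning with instance adaptation (the general idea behind the meta-model, not the thesis's Java/aspect-oriented framework itself), the following sketch registers adaptation functions between two hypothetical versions of a class structure so that applications built against either version can read instances stored under the other.

```python
# Toy class-versioning/instance-adaptation registry (illustrative only).
# Version 1 of a Person record: {"name": str}
# Version 2 adds a field:       {"name": str, "email": str}

ADAPTERS = {
    # (from_version, to_version) -> adaptation function
    (1, 2): lambda rec: {**rec, "email": ""},        # forward adaptation
    (2, 1): lambda rec: {"name": rec["name"]},       # backward adaptation
}

def adapt(record: dict, from_version: int, to_version: int) -> dict:
    """Adapt a stored instance between class-structure versions, one step at
    a time, so old and new applications can share the same database."""
    step = 1 if to_version > from_version else -1
    v = from_version
    while v != to_version:
        record = ADAPTERS[(v, v + step)](record)
        v += step
    return record

old_instance = {"name": "Ada"}                 # stored by a version-1 client
print(adapt(old_instance, 1, 2))               # -> {'name': 'Ada', 'email': ''}
new_instance = {"name": "Grace", "email": "g@x"}
print(adapt(new_instance, 2, 1))               # -> {'name': 'Grace'}
```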
Abstract:
Processors with large numbers of cores are becoming commonplace. In order to utilise the available resources in such systems, the programming paradigm has to move towards increased parallelism. However, increased parallelism does not necessarily lead to better performance. Parallel programming models have to provide not only flexible ways of defining parallel tasks, but also efficient methods to manage the created tasks. Moreover, in a general-purpose system, applications residing in the system compete for the shared resources. Thread and task scheduling in such a multiprogrammed multithreaded environment is a significant challenge. In this thesis, we introduce a new task-based parallel reduction model, called the Glasgow Parallel Reduction Machine (GPRM). Our main objective is to provide high performance while maintaining ease of programming. GPRM supports native parallelism; it provides a modular way of expressing parallel tasks and the communication patterns between them. Compiling a GPRM program results in an Intermediate Representation (IR) containing useful information about tasks and their dependencies, as well as the initial mapping information. This compile-time information helps reduce the overhead of runtime task scheduling and is key to high performance. Generally speaking, the granularity and the number of tasks are major factors in achieving high performance. These factors are even more important in the case of GPRM, as it is highly dependent on tasks, rather than threads. We use three basic benchmarks to provide a detailed comparison of GPRM with Intel OpenMP, Cilk Plus, and Threading Building Blocks (TBB) on the Intel Xeon Phi, and with GNU OpenMP on the Tilera TILEPro64. GPRM shows superior performance in almost all cases, merely by controlling the number of tasks. GPRM also provides a low-overhead mechanism, called "Global Sharing", which improves performance in multiprogramming situations. We use OpenMP, the most popular model for shared-memory parallel programming, as the main GPRM competitor for solving three well-known problems on both platforms: LU factorisation of Sparse Matrices, Image Convolution, and Linked List Processing. We focus on proposing solutions that best fit into GPRM's model of execution. GPRM outperforms OpenMP in all cases on the TILEPro64. On the Xeon Phi, our solution for the LU Factorisation results in notable performance improvement for sparse matrices with large numbers of small blocks. We investigate the overhead of GPRM's task creation and distribution for very short computations using the Image Convolution benchmark. We show that this overhead can be mitigated by combining smaller tasks into larger ones. As a result, GPRM can outperform OpenMP for convolving large 2D matrices on the Xeon Phi. Finally, we demonstrate that our parallel worksharing construct provides an efficient solution for Linked List processing and performs better than OpenMP implementations on the Xeon Phi. The results are very promising, as they verify that our parallel programming framework for manycore processors is flexible and scalable, and can provide high performance without sacrificing productivity.
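The granularity point above — that very short computations should be combined into larger tasks to amortize task-creation and scheduling overhead — can be illustrated with a generic chunked parallel map. This sketch is not GPRM; the kernel, chunk sizes and worker count are arbitrary, chosen only to contrast coarse and fine task granularity.

```python
# Chunked parallel map: one task covers many elements instead of one element
# per task. (Python threads illustrate the structure; they do not give real
# speedup for CPU-bound work because of the GIL.)
from concurrent.futures import ThreadPoolExecutor

def tiny_kernel(x: int) -> int:
    return x * x + 1                      # stand-in for a very short computation

def process_chunk(chunk):
    return [tiny_kernel(x) for x in chunk]

def chunked_parallel_map(data, chunk_size, workers=4):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(process_chunk, chunks))
    return [y for part in results for y in part]

data = list(range(100_000))
# Few, coarse tasks: low task-management overhead per element.
coarse = chunked_parallel_map(data, chunk_size=10_000)
# Many, fine tasks: same answer, but far more task-management overhead.
fine = chunked_parallel_map(data, chunk_size=10)
assert coarse == fine
```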
Abstract:
This PhD thesis contains three main chapters on macro-finance, with a focus on the term structure of interest rates and the applications of state-of-the-art Bayesian econometrics. Except for Chapter 1 and Chapter 5, which set out the general introduction and conclusion, each of the chapters can be considered as a standalone piece of work. In Chapter 2, we model and predict the term structure of US interest rates in a data-rich environment. We allow the model dimension and parameters to change over time, accounting for model uncertainty and sudden structural changes. The proposed time-varying parameter Nelson-Siegel Dynamic Model Averaging (DMA) predicts yields better than standard benchmarks. DMA performs better since it incorporates more macro-finance information during recessions. The proposed method allows us to estimate plausible real-time term premia, whose countercyclicality weakened during the financial crisis. Chapter 3 investigates global term structure dynamics using a Bayesian hierarchical factor model augmented with macroeconomic fundamentals. More than half of the variation in the bond yields of seven advanced economies is due to global co-movement. Our results suggest that global inflation is the most important factor among global macro fundamentals. Non-fundamental factors are essential in driving global co-movements, and are closely related to sentiment and economic uncertainty. Lastly, we analyze asymmetric spillovers in global bond markets connected to diverging monetary policies. Chapter 4 proposes a no-arbitrage framework of term structure modeling with learning and model uncertainty. The representative agent considers parameter instability, as well as the uncertainty in learning speed and model restrictions. The empirical evidence shows that, apart from observational variance, parameter instability is the dominant source of predictive variance when compared with uncertainty in learning speed or model restrictions. When accounting for ambiguity aversion, the out-of-sample predictability of excess returns implied by the learning model can be translated into significant and consistent economic gains over the Expectations Hypothesis benchmark.
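For reference, the standard dynamic Nelson-Siegel specification and the dynamic-model-averaging weight recursion that this line of work builds on can be written as follows; the exact time-varying-parameter specification used in the thesis may differ.

```latex
% Standard dynamic Nelson-Siegel yield-curve specification: the yield of
% maturity tau is driven by level, slope and curvature factors, with lambda
% governing the loading decay.
\[
  y_t(\tau) \;=\; \beta_{1,t}
  \;+\; \beta_{2,t}\,\frac{1 - e^{-\lambda \tau}}{\lambda \tau}
  \;+\; \beta_{3,t}\!\left(\frac{1 - e^{-\lambda \tau}}{\lambda \tau} - e^{-\lambda \tau}\right).
\]
% In dynamic model averaging, each candidate model k receives a probability
% that is updated with a forgetting factor alpha:
\[
  \pi_{t \mid t-1,\,k} \;=\;
  \frac{\pi_{t-1 \mid t-1,\,k}^{\alpha}}
       {\sum_{\ell} \pi_{t-1 \mid t-1,\,\ell}^{\alpha}}.
\]
```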
Abstract:
Master's in Management Control and Business (Mestrado em Controlo de Gestão e dos Negócios)