744 results for: Intangible assets. Dynamic capabilities. Performance of tourist destinations
Abstract:
Heating, ventilation, air conditioning and refrigeration (HVAC&R) systems account for more than 60% of the energy consumption of buildings in the UK. However, the effect of the variety of HVAC&R systems on building energy performance has not yet been taken into account within the existing building energy benchmarks. In addition, the existing building energy benchmarks are not able to assist decision-makers with HVAC&R system selection. This study attempts to overcome these two deficiencies through the performance characterisation of 36 HVAC&R systems based on the simultaneous dynamic simulation of a building and a variety of HVAC&R systems using TRNSYS software. To characterise the performance of HVAC&R systems, four criteria are considered: energy consumption, CO2 emissions, thermal comfort and indoor air quality. The results of the simulations show that all the studied systems are able to provide an acceptable level of indoor air quality and thermal comfort. However, the energy consumption and the amount of CO2 emissions vary. One of the significant outcomes of this study is that combined cooling, heating and power (CCHP) systems have the highest energy consumption with the lowest energy-related CO2 emissions among the studied HVAC&R systems.
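As a rough illustration of the screening described above, the following Python sketch filters hypothetical HVAC&R simulation outputs on the comfort and indoor-air-quality criteria and then ranks the remaining systems by energy-related CO2 emissions. All system names, thresholds and numbers are invented for the example and are not results from the study.

```python
from dataclasses import dataclass

@dataclass
class SystemResult:
    """Annual simulation outputs for one HVAC&R configuration (hypothetical units)."""
    name: str
    energy_kwh_m2: float      # annual energy use intensity
    co2_kg_m2: float          # annual energy-related CO2 emissions
    ppd_percent: float        # mean Predicted Percentage of Dissatisfied (comfort)
    co2_ppm_indoor: float     # mean indoor CO2 concentration (air-quality proxy)

def acceptable(r: SystemResult, ppd_limit=10.0, iaq_limit=1000.0) -> bool:
    """Keep only systems meeting the comfort and indoor-air-quality thresholds."""
    return r.ppd_percent <= ppd_limit and r.co2_ppm_indoor <= iaq_limit

# Illustrative values only -- not results from the paper.
results = [
    SystemResult("VAV + gas boiler", 142.0, 38.5, 8.2, 850.0),
    SystemResult("Ground-source heat pump", 118.0, 31.0, 7.9, 820.0),
    SystemResult("CCHP (trigeneration)", 171.0, 24.3, 8.5, 840.0),
]

feasible = [r for r in results if acceptable(r)]
# Rank the comfort/IAQ-compliant systems by energy-related CO2 emissions.
for r in sorted(feasible, key=lambda r: r.co2_kg_m2):
    print(f"{r.name:28s} energy={r.energy_kwh_m2:6.1f} kWh/m2  CO2={r.co2_kg_m2:5.1f} kg/m2")
```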
Abstract:
Thermochromic windows are able to modulate their transmittance in both the visible and the near-infrared range as a function of their temperature. As a consequence, they make it possible to control solar gains in summer, thus reducing the energy needs for space cooling. However, they may also reduce daylight availability, which increases the energy consumption for indoor artificial lighting. This paper investigates, by means of dynamic simulations, the application of thermochromic windows to an existing office building in terms of energy savings on an annual basis, while also focusing on the effects in terms of daylighting and thermal comfort. In particular, due attention is paid to daylight availability, described through illuminance maps and through the calculation of the daylight factor, which in several countries is subject to thresholds. The study considers both a commercially available thermochromic pane and a series of theoretical thermochromic glazings. The expected performance is compared to that of static clear and reflective insulating glass units. The simulations are repeated under different climatic conditions, showing that the overall energy savings compared to clear glazing can range from around 5% in cold climates to around 20% in warm climates, while not compromising daylight availability. Moreover, the role played by the transition temperature of the pane is examined, pointing out an optimal transition temperature that does not depend on the climatic conditions.
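A minimal sketch of the idealized behaviour the abstract relies on: a thermochromic pane whose solar transmittance moves from a clear-state to a tinted-state value around a transition temperature, and the resulting transmitted solar gain. The transmittance values, transition temperature and switching width below are assumptions for illustration, not measured data for the commercial pane studied.

```python
def solar_transmittance(pane_temp_c: float,
                        t_transition_c: float = 30.0,
                        tau_clear: float = 0.60,
                        tau_tinted: float = 0.25,
                        width_c: float = 5.0) -> float:
    """Idealized thermochromic behaviour: transmittance ramps from the clear-state
    to the tinted-state value across a band around the transition temperature.
    All parameter values are illustrative, not measured pane data."""
    if pane_temp_c <= t_transition_c - width_c / 2:
        return tau_clear
    if pane_temp_c >= t_transition_c + width_c / 2:
        return tau_tinted
    frac = (pane_temp_c - (t_transition_c - width_c / 2)) / width_c
    return tau_clear + frac * (tau_tinted - tau_clear)

def solar_gain_w(pane_temp_c: float, irradiance_w_m2: float, area_m2: float) -> float:
    """Transmitted solar gain through the glazing for one time step."""
    return solar_transmittance(pane_temp_c) * irradiance_w_m2 * area_m2

# Example: the same irradiance produces a smaller gain once the pane heats up.
print(solar_gain_w(20.0, 600.0, 4.0))   # clear state
print(solar_gain_w(40.0, 600.0, 4.0))   # tinted state
```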
Abstract:
We synthesize the literature on Chinese multinational enterprises (MNEs) and find that much of the prior research is based on as few as a dozen case studies of Chinese firms. These studies are so case-specific that they have led to a misplaced call for new theories to explain Chinese firms' internationalization. In an attempt to better relate theory to empirical evidence, we examine the largest 500 Chinese manufacturing firms. We aim to determine how many Chinese manufacturing firms are true MNEs by definition, and to examine their financial performance relative to global peers using the financial benchmarking method. We develop our theoretical perspectives from new internalization theory. We find that only 49 Chinese manufacturing firms are true MNEs, whereas the rest are purely domestic firms. Their performance is poor relative to global peers. Chinese MNEs have home-country-bound firm-specific advantages (FSAs), which are built upon home country-specific advantages (home CSAs). They have not yet developed advanced management capabilities through recombination with host CSAs. Essentially, they acquire foreign firms to increase their sales in the domestic market, but they fail to be competitive internationally and to achieve superior performance in overseas operations. Our findings have important strategic implications for managers, public policy makers, and academic research.
Abstract:
Cool materials are characterized by a high solar reflectance r, which reduces heat gains during daytime, and a high thermal emissivity ε, which enables them to dissipate at night the heat absorbed throughout the day. Although the concept of cool roofs, i.e. the application of cool materials to roof surfaces, has been well known in the US since the 1990s, most studies have focused on their performance in the residential and commercial sectors under various US climatic conditions, while only a few case studies have been analyzed in EU countries. The present work aims at analyzing the thermal benefits of their application to existing office buildings located in EU countries. Indeed, given their weight in the existing building stock, as well as the very low rate of new building construction, the retrofit of office buildings is a topic of great concern worldwide. After an in-depth characterization of the existing building stock in the EU, the book gives an insight into the roof energy balance obtained with different technological solutions, showing in which cases and to what extent cool roofs are preferable. A detailed description of the physical properties of cool materials and of their availability on the market provides a solid background for the parametric analysis, carried out by means of detailed numerical models, that aims at evaluating cool roof performance for various climates and office building configurations. With the help of dynamic simulations, the thermal behavior of office buildings representative of the existing EU building stock is assessed in terms of thermal comfort and energy needs for air conditioning. The results, which consider several variations of building features that may affect the resulting energy balance, show that cool roofs are an effective strategy for reducing overheating occurrences and thus improving thermal comfort in any climate. On the other hand, potential heating penalties due to the reduction in the incoming heat fluxes through the roof are taken into account, as well as the aging process of cool materials. Finally, an economic analysis of the best performing models shows the boundaries of their economic viability.
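The roof energy balance mentioned above can be sketched with a simple steady-state surface balance (absorbed solar = convection + long-wave radiation + conduction into the building), solved here by bisection for the outer surface temperature. The boundary conditions and the reflectance/emissivity values are illustrative assumptions, not the detailed models used in the book.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m2 K4)

def surface_balance(ts, solar, refl, emis, h_c, t_air, t_sky, u_roof, t_in):
    """Net flux at the outer roof surface (W/m2); zero at equilibrium:
    absorbed solar - convection - long-wave radiation - conduction indoors."""
    return ((1.0 - refl) * solar
            - h_c * (ts - t_air)
            - emis * SIGMA * (ts**4 - t_sky**4)
            - u_roof * (ts - t_in))

def roof_surface_temp(solar, refl, emis, h_c=15.0, t_air=303.0,
                      t_sky=288.0, u_roof=0.5, t_in=299.0):
    """Bisection solve for the steady-state outer surface temperature (K)."""
    lo, hi = 250.0, 400.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if surface_balance(mid, solar, refl, emis, h_c, t_air, t_sky, u_roof, t_in) > 0:
            lo = mid   # surface still absorbs more than it loses -> hotter
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative comparison: dark roof (r = 0.1) vs cool roof (r = 0.7), both with emissivity 0.9.
print(roof_surface_temp(800.0, 0.10, 0.90) - 273.15)  # surface temperature, deg C
print(roof_surface_temp(800.0, 0.70, 0.90) - 273.15)
```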
Abstract:
Objective To design, develop and set up a web-based system for enabling graphical visualization of the upper limb motor performance (ULMP) of Parkinson’s disease (PD) patients to clinicians. Background Sixty-five patients diagnosed with advanced PD used a test battery, implemented in a touch-screen handheld computer, in their home environment over the course of a 3-year clinical study. The test items consisted of objective measures of ULMP through a set of upper limb motor tests (finger tapping and spiral drawing). For the tapping tests, patients were asked to perform alternate tapping of two buttons as fast and accurately as possible, first using the right hand and then the left hand. The test duration was 20 seconds. For the spiral drawing test, patients traced a pre-drawn Archimedes spiral using the dominant hand, and the test was repeated 3 times per test occasion. In total, the study database consisted of symptom assessments from 10,079 test occasions. Methods Visualization of ULMP The web-based system is used by two neurologists for assessing the performance of PD patients during the motor tests collected over the course of the said study. The system employs animations, scatter plots and time series graphs to visualize the ULMP of patients to the neurologists. The performance during spiral tests is depicted by animating the three spiral drawings, allowing the neurologists to observe, in real time, accelerations, hesitations and sharp changes during the actual drawing process. The tapping performance is visualized by displaying different types of graphs. The information presented includes the distribution of taps over the two buttons, horizontal tap distance vs. time, vertical tap distance vs. time, and tapping reaction time over the test length. Assessments Different scales are utilized by the neurologists to assess the observed impairments. For the spiral drawing performance, the neurologists rated firstly the ‘impairment’ using a 0 (no impairment) – 10 (extremely severe) scale, secondly three kinematic properties (‘drawing speed’, ‘irregularity’ and ‘hesitation’) using a 0 (normal) – 4 (extremely severe) scale, and thirdly the probable ‘cause’ of the said impairment using 3 categories: Tremor, Bradykinesia/Rigidity and Dyskinesia. For the tapping performance, a 0 (normal) – 4 (extremely severe) scale is used for rating first four tapping properties (‘tapping speed’, ‘accuracy’, ‘fatigue’ and ‘arrhythmia’) and then the ‘global tapping severity’ (GTS). To achieve a common basis for assessment, one neurologist (DN) initially performed preliminary ratings by browsing through the database to collect and rate at least 20 samples of each GTS level and at least 33 samples of each ‘cause’ category. These preliminary ratings were then reviewed by the two neurologists (DN and PG) and used as templates for the subsequent ratings. In a separate track, the system randomly selected one test occasion per patient and visualized its items, i.e. tapping and spiral drawings, to the two neurologists. Statistical methods Inter-rater agreements were assessed using the weighted Kappa coefficient. The internal consistency of the properties of the tapping and spiral drawing tests was assessed using Cronbach’s α. A one-way ANOVA followed by Tukey’s multiple comparisons test was used to test whether the mean scores of the properties of the tapping and spiral drawing tests differed among GTS and ‘cause’ categories, respectively.
Results When rating tapping graphs, the inter-rater agreements (Kappa) were as follows: GTS (0.61), ‘tapping speed’ (0.89), ‘accuracy’ (0.66), ‘fatigue’ (0.57) and ‘arrhythmia’ (0.33). The poor inter-rater agreement when assessing ‘arrhythmia’ may result from the two raters attending to different features of the graphs. When rating animated spirals, the two raters had very good agreement when assessing the severity of spiral drawings, that is, ‘impairment’ (0.85) and ‘irregularity’ (0.72). However, there was poor agreement between the two raters when assessing ‘cause’ (0.38) and time-related properties such as ‘drawing speed’ (0.25) and ‘hesitation’ (0.21). The tapping properties, that is ‘tapping speed’, ‘accuracy’, ‘fatigue’ and ‘arrhythmia’, had satisfactory internal consistency, with a Cronbach’s α coefficient of 0.77. In general, the mean scores of the tapping properties worsened with increasing levels of GTS. The mean scores of the four properties were significantly different from each other, although only at certain GTS levels. In contrast to the tapping properties, the kinematic properties of spirals, that is ‘drawing speed’, ‘irregularity’ and ‘hesitation’, had questionable internal consistency, with a coefficient of 0.66. Bradykinetic spirals were associated with more impaired speed (mean = 83.7% worse, P < 0.001) and hesitation (mean = 77.8% worse, P < 0.001) than dyskinetic spirals. Both of these ‘cause’ categories had similar mean scores for ‘impairment’ and ‘irregularity’. Conclusions In contrast to current approaches used in clinical settings for the assessment of PD symptoms, this system enables clinicians to easily and realistically animate the ULMP of patients while the patients remain in their homes. Dynamic access to visualized motor tests may also be useful when observing and evaluating therapy-related complications such as under- and over-medication. In the future, we plan to use these manual ratings for developing and validating computer methods that automate the assessment of the ULMP of PD patients.
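A minimal sketch of the two agreement/consistency statistics named in the Methods, assuming quadratic weights for the weighted kappa (the abstract does not state the weighting scheme actually used) and toy ratings rather than the study data.

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_levels):
    """Weighted Cohen's kappa with quadratic weights for ordinal ratings 0..n_levels-1."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    obs = np.zeros((n_levels, n_levels))
    for i, j in zip(a, b):
        obs[i, j] += 1                      # observed agreement matrix
    n = obs.sum()
    expected = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / n
    idx = np.arange(n_levels)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_levels - 1) ** 2
    return 1.0 - (weights * obs).sum() / (weights * expected).sum()

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Toy ratings on a 0-4 scale, purely illustrative -- not data from the study.
rater1 = [0, 1, 2, 2, 3, 4, 1, 0, 2, 3]
rater2 = [0, 1, 1, 2, 3, 4, 2, 0, 2, 4]
print(quadratic_weighted_kappa(rater1, rater2, n_levels=5))

tapping_properties = np.array([[1, 1, 0, 1], [2, 3, 2, 2], [4, 4, 3, 4], [0, 1, 0, 0]])
print(cronbach_alpha(tapping_properties))
```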
Abstract:
Over the past few decades, the phenomenon of competitiveness, and the underlying competitive advantage thereof, has been analyzed in diverse ways, in terms of its sources (external and internal environment) and competitive management strategies, as well as different scopes (nations, economic sectors and organizations) and fields of study (economics and organizational theory). Moreover, competitiveness is a complex phenomenon, and this complexity is reflected in the many methods, approaches and frameworks developed over time to examine it, in the tourism sector as well as in other industries. In this research a framework for destinations is presented on the basis of dynamic capabilities. This is an important contribution to the literature, since previous studies in the tourism sector have not approached this relevant aspect of the competitive development of tourist destinations, i.e. development based on capabilities such as innovation, transformation and creation. The research presents a competitive evaluation of the dynamic capabilities of tourist destinations on the basis of 79 activities, distributed in eight categories. In addition, this work presents an empirical application of the framework to a sample of twenty Brazilian cities, considered by the Brazilian Ministry of Tourism as indicative tourist destinations for regional development. The results obtained from this evaluation were submitted to statistical reliability tests and demonstrated that the destinations possess heterogeneous levels of capability both between themselves (different levels of development in each category across cities) and within each destination (different levels of development across the categories of a single destination). In other words, heterogeneity exists not only between destinations but also from category to category within each one.
Abstract:
This work analyzes the development of dynamic capabilities in a context of institutional turbulence, different from the conditions under which this theoretical perspective is usually studied. A historical, processual case study is carried out, analyzing the emergence of dynamic capabilities in Brazilian banks through the development of banking technology between the 1960s and the 1990s. Building on the strategy literature that explains firms' competitive advantages through their resources, knowledge and dynamic capabilities, a framework is constructed and used to analyze several testimonies given for the book “Tecnologia bancária no Brasil: uma história de conquistas, uma visão de futuro” (FONSECA; MEIRELLES; DINIZ, 2010) and in interviews conducted for this work. The testimonies show that the banks invested heavily in technology from the financial reform of 1964 onwards, which began a sequence of periods, each with its own institutional characteristics. As conditions changed in each period, the banks also changed their computerization process. At first, projects were executed ad hoc, under the direct command of the banks' leaders. Over time, as the technology evolved, the technological infrastructure grew and institutional turbulence arose, the banks progressively developed partnerships among themselves and with local suppliers, decentralized their technology departments, became more flexible, strengthened corporate governance and adopted a series of routines for managing information technology, which led to the gradual development of the microfoundations of dynamic capabilities over these periods. In the mid-1990s, institutional stabilization and the opening of the economy to foreign competition placed the country in the conditions that the adopted theoretical perspective considers ideal for dynamic capabilities to be sources of competitive advantage. Brazilian banks proved to be prepared for this new phase, which is evidence that they had developed dynamic capabilities in the preceding decades, and part of that development can be attributed to the institutional turbulence they had faced.
Abstract:
We study the effects of a conditional transfers program on school enrollment and performance in Mexico. We provide a theoretical framework for analyzing the dynamic educational decision and process, including the endogeneity and uncertainty of performance (passing grades) and the effect of a conditional cash transfer program for children enrolled at school. Careful identification of the program impact within this model is studied. This framework is used to study the Mexican social program Progresa, in which a randomized experiment has been implemented, allowing us to identify the effect of the conditional cash transfer program on enrollment and performance at school. Using the rules of the conditional program, we can explain the different incentive effects provided. We also derive the formal identifying assumptions needed to provide consistent estimates of the average treatment effects on enrollment and performance at school. We estimate these effects empirically and find that Progresa always had a positive impact on school continuation, whereas for performance it had a positive impact at primary school but a negative one at secondary school, a possible consequence of disincentives due to the program's termination after the third year of secondary school.
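Because treatment was randomized, the average treatment effect on enrollment can be illustrated with a simple difference in group means. The sketch below uses simulated data and a plain difference-in-means estimator, which is only a stand-in for the dynamic structural model actually estimated in the paper.

```python
import numpy as np

def ate_difference_in_means(outcome, treated):
    """Average treatment effect under randomization: difference in group means,
    with a conventional large-sample standard error."""
    y = np.asarray(outcome, dtype=float)
    d = np.asarray(treated, dtype=bool)
    y1, y0 = y[d], y[~d]
    ate = y1.mean() - y0.mean()
    se = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))
    return ate, se

# Toy example: 1 = child re-enrolled next year, 0 = dropped out (hypothetical data).
rng = np.random.default_rng(0)
treated = rng.integers(0, 2, size=2000).astype(bool)
enrolled = (rng.random(2000) < np.where(treated, 0.80, 0.74)).astype(float)
ate, se = ate_difference_in_means(enrolled, treated)
print(f"ATE on enrollment: {ate:.3f} (SE {se:.3f})")
```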
Abstract:
State-owned enterprises are often regarded as crucial components of a country's economy. They are responsible for creating many jobs and provide essential services that require large capital investments. However, in countries with weak institutions, where politicians' accountability is limited and the management of state-owned enterprises' financial resources is subject to little control, officials are often tempted by corruption. Huge amounts of public funds are easily diverted, and money that should have been invested in capital expenditure, in repaying the company's debt or in increasing shareholder returns is used to increase the private wealth of individuals or to illegally finance political parties. Company performance suffers from these diversions, since part of the company's profits is not reinvested in the firm and managers' incentives become misaligned with shareholders' interests. Petrobras, the largest company in Latin America in terms of assets and annual revenues, went through an immense corruption scandal in 2014 and 2015, whose economic impact was considerable and which weakened many investors' confidence in Brazil in its aftermath. The scandal exposed an extensive corruption scheme in which contractors conspired to inflate the prices of construction contracts, with the approval of Petrobras executives who demanded in return personal gains or funds for the Workers' Party (PT). The exposure of the scandal in the Brazilian press had a major impact on Petrobras' credibility: the company's accounts were hiding huge irregularities, given that it had overpaid for construction contracts that were not priced at market value. Throughout this study, we use the example of Petrobras to illustrate how corruption within state-owned enterprises harms firm performance and how it affects the company's various stakeholders.
Abstract:
The research trend in harvesting energy from ambient vibration sources has moved from linear resonant generators to non-linear generators in order to improve on the performance of a linear generator, for example its relatively small bandwidth, its intolerance to mistuning and its suitability for low-frequency applications. This article presents experimental results to illustrate the dynamic behaviour of a dual-mode non-linear energy-harvesting device operating in hardening and bi-stable modes under harmonic excitation. The device is able to change from one mode to the other by altering the negative magnetic stiffness through adjustment of the separation gap between the magnets and the iron core. Results for the device operating in both modes are presented. They show that there is a larger bandwidth for the device operating in the hardening mode compared to the equivalent linear device. However, the maximum power transfer theory is less applicable to the hardening mode because the maximum power occurs at different frequencies, depending on the non-linearity and the damping in the system. The results for the bi-stable mode show that the device is insensitive to a range of excitation frequencies depending upon the input level, damping and non-linearity.
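The hardening and bi-stable behaviour described above is commonly modelled as a Duffing-type oscillator with linear-plus-cubic stiffness, where the sign of the linear term flips once the negative magnetic stiffness exceeds the mechanical stiffness. The sketch below integrates such a model for both signs; all parameter values are illustrative and are not identified from the tested device.

```python
import numpy as np
from scipy.integrate import solve_ivp

def duffing_rhs(t, state, m, c, k1, k3, force, omega):
    """Single-degree-of-freedom oscillator with linear + cubic stiffness under
    harmonic excitation: m*x'' + c*x' + k1*x + k3*x**3 = force*cos(omega*t).
    k1 > 0 with k3 > 0 gives a hardening spring; k1 < 0 with k3 > 0 gives a
    bi-stable (double-well) system, as when negative magnetic stiffness
    exceeds the mechanical stiffness."""
    x, v = state
    acc = (force * np.cos(omega * t) - c * v - k1 * x - k3 * x**3) / m
    return [v, acc]

def simulate(k1, label, omega=50.0):
    # Parameter values are illustrative, not taken from the tested device.
    sol = solve_ivp(duffing_rhs, (0.0, 10.0), [0.01, 0.0],
                    args=(0.01, 0.05, k1, 1.0e7, 0.5, omega),
                    max_step=1e-3, rtol=1e-8)
    x = sol.y[0]
    print(f"{label:10s} steady-state amplitude ~ {np.abs(x[len(x)//2:]).max():.4f} m")

simulate(k1=+100.0, label="hardening")
simulate(k1=-100.0, label="bi-stable")
```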
Abstract:
The D0 experiment at Fermilab's Tevatron will record several petabytes of data over the next five years in pursuing the goals of understanding nature and searching for the origin of mass. The computing resources required to analyze these data far exceed the capabilities of any one institution. Moreover, the widely scattered geographical distribution of D0 collaborators poses further serious difficulties for the optimal use of human and computing resources. These difficulties will be exacerbated in future high-energy physics experiments, such as those at the LHC. The computing grid has long been recognized as a solution to these problems. This technology is being made a more immediate reality for end users in D0 by developing a grid in the D0 Southern Analysis Region (DOSAR), the DOSAR-Grid, using available resources within the region and a home-grown local task manager, McFarm. We will present the architecture in which the DOSAR-Grid is implemented, the technologies used and the functionality of the grid, and the experience gained from operating the grid in simulation, reprocessing and data analysis for a currently running HEP experiment.
Abstract:
Extensive field tests were conducted using the UCD single wheel tester, employing three large radial-ply tractor tires in two different soils, four different soil conditions, two axle load levels, and three levels of tire inflation pressure, in order to quantify the benefits of using low/correct inflation pressures. During these tests slip, net traction, gross traction, and dynamic axle load were recorded. Furthermore, soil moisture content, cone index, and dry bulk density data were obtained at the test locations. The results of the analysis showed a significant increase in net traction and tractive efficiency when low/correct inflation pressure was used. The benefits of using low/correct pressure were higher in tilled soil conditions.
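For reference, tractive efficiency is commonly computed from the quantities recorded in such tests as the net-to-gross traction ratio multiplied by (1 − slip). A small sketch with made-up numbers, not measurements from the UCD single wheel tester:

```python
def slip(actual_speed, theoretical_speed):
    """Travel reduction (slip) of the driven wheel, 0..1."""
    return 1.0 - actual_speed / theoretical_speed

def tractive_efficiency(net_traction, gross_traction, wheel_slip):
    """Ratio of drawbar power to axle power: (NT/GT) * (1 - slip)."""
    return (net_traction / gross_traction) * (1.0 - wheel_slip)

# Illustrative comparison of two inflation pressures at the same dynamic axle load
# (numbers are invented for the example).
for label, nt, gt, s in [("correct/low pressure", 21.0, 25.0, 0.10),
                         ("over-inflated",        19.0, 25.0, 0.16)]:
    print(f"{label:22s} net/gross = {nt/gt:.2f}, slip = {s:.2f}, "
          f"tractive efficiency = {tractive_efficiency(nt, gt, s):.2f}")
```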
Abstract:
The timed-initiation paradigm developed by Ghez and colleagues (1997) has revealed two modes of motor planning: continuous and discrete. Continuous responding occurs when targets are separated by less than 60° of spatial angle, and discrete responding occurs when targets are separated by more than 60°. Although these two modes are thought to reflect the operation of separable strategic planning systems, a new theory of movement preparation, the Dynamic Field Theory, suggests that the two modes emerge flexibly from the same system. Experiment 1 replicated continuous and discrete performance using a task modified to allow a critical test of the single-system view. In Experiment 2, participants were allowed to correct their movements following movement initiation (the standard task does not allow corrections). Results showed continuous planning performance at both large and small target separations. These results are consistent with the proposal that the two modes reflect the time-dependent “preshaping” of a single planning system.
Generalizing the dynamic field theory of spatial cognition across real and developmental time scales
Abstract:
Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the Dynamic Field Theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks—the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity—generating novel, testable predictions—and generality—spanning multiple tasks across development with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective.
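A minimal sketch of the kind of one-dimensional dynamic neural field the DFT builds on: an Amari-style field with local excitation, broader inhibition and a localized input, integrated with an Euler scheme. All parameter values are illustrative assumptions, and the single field below is far simpler than the multi-field architecture used in the paper.

```python
import numpy as np

def interaction_kernel(x, exc_width=3.0, exc_amp=1.5, inh_amp=0.5):
    """Local excitation with broader (here: global) inhibition, a common
    lateral-interaction choice in dynamic field models (parameters illustrative)."""
    return exc_amp * np.exp(-x**2 / (2 * exc_width**2)) - inh_amp

def simulate_field(steps=400, n=181, tau=10.0, h=-5.0, seed=0):
    """Euler integration of a 1D Amari-style field:
    tau * du/dt = -u + h + S(x) + sum_x' w(x - x') f(u(x'))."""
    rng = np.random.default_rng(seed)
    x = np.arange(n, dtype=float)
    u = np.full(n, h)                                # resting level
    w = interaction_kernel(x[:, None] - x[None, :])  # interaction matrix
    target = 90.0
    stimulus = 6.0 * np.exp(-(x - target)**2 / (2 * 4.0**2))  # localized input
    for _ in range(steps):
        f_u = 1.0 / (1.0 + np.exp(-u))               # sigmoidal output
        du = -u + h + stimulus + w @ f_u + rng.normal(0.0, 0.1, n)
        u += du / tau
    return x, u

x, u = simulate_field()
print("peak location:", x[np.argmax(u)], "peak activation:", round(u.max(), 2))
```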