Abstract:
Modern software applications are becoming increasingly dependent on database management systems (DBMSs), which software developers usually treat as black boxes. For example, Object-Relational Mapping (ORM) is one of the most popular database abstraction approaches in use today. With ORM, objects in object-oriented languages are mapped to records in the database, and object manipulations are automatically translated into SQL queries. Thanks to this conceptual abstraction, developers do not need deep knowledge of databases; all too often, however, the abstraction leads to inefficient and incorrect database access code. This thesis therefore proposes a series of approaches to improve the performance of database-centric software applications implemented using ORM. Our approaches focus on troubleshooting and detecting inefficient database accesses (i.e., performance problems) in the source code, and we rank the detected problems by severity. We first conduct an empirical study on the maintenance of ORM code in both open-source and industrial applications. We find that ORM performance-related configurations are rarely tuned in practice, and that there is a need for tools to help improve and tune the performance of ORM-based applications. We therefore propose approaches along two dimensions: 1) helping developers write more performant ORM code; and 2) helping developers tune ORM configurations. To provide tooling support, we first propose static analysis approaches to detect performance anti-patterns in the source code, and we automatically rank the detected anti-pattern instances according to their performance impact. Our study finds that resolving the detected anti-patterns improves application performance by 34% on average.
We then discuss our experience and lessons learned when integrating our anti-pattern detection tool into industrial practice. We hope our experience can help improve the industrial adoption of future research tools. However, as static analysis approaches are prone to false positives and lack runtime information, we also propose dynamic analysis approaches to further help developers improve the performance of their database access code. We propose automated approaches to detect redundant data access anti-patterns in the database access code, and our study finds that resolving such redundant data access anti-patterns can improve application performance by an average of 17%. Finally, we propose an automated approach to tune performance-related ORM configurations using both static and dynamic analysis. Our study shows that our approach can help improve application throughput by 27--138%. Through our case studies on real-world applications, we show that all of our proposed approaches can provide valuable support to developers and help improve application performance significantly.
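The abstract above does not reproduce the thesis's detectors, but the class of problem it targets can be illustrated with a minimal, self-contained sketch. The `FakeORM` class and its methods below are hypothetical stand-ins (not any real ORM's API): they merely count how many SQL queries an ORM session would issue, contrasting a loop of per-record fetches (a classic redundant-access anti-pattern) with a single batched query.

```python
# Hypothetical sketch: the kind of "one query per object" anti-pattern
# that ORM abstraction can hide from developers. FakeORM is invented
# for illustration; it only counts the SQL queries it would issue.
class FakeORM:
    def __init__(self, rows):
        self.rows = rows
        self.queries_issued = 0

    def find_by_id(self, key):
        # Corresponds to one SELECT ... WHERE id = ? per call.
        self.queries_issued += 1
        return self.rows[key]

    def find_by_ids(self, keys):
        # Corresponds to a single SELECT ... WHERE id IN (...).
        self.queries_issued += 1
        return [self.rows[k] for k in keys]


rows = {i: f"record-{i}" for i in range(100)}

# Anti-pattern: fetching records one by one inside a loop issues N queries.
orm = FakeORM(rows)
slow = [orm.find_by_id(i) for i in range(100)]
n_plus_one = orm.queries_issued   # 100 queries for 100 records

# Resolved: one batched query returns the same data.
orm = FakeORM(rows)
fast = orm.find_by_ids(range(100))
batched = orm.queries_issued      # 1 query
```

Both versions return identical data; only the number of round-trips to the database differs, which is why such problems are invisible at the source-code level without the kind of analysis the thesis proposes.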
Abstract:
Social Enterprises (SEs) are typically micro and small businesses that trade in order to tackle social problems and to improve communities, people's life chances, and the environment; their importance to society and to economies is thus increasing. However, more understanding is still needed of how these organisations operate, perform, innovate and scale up. This knowledge is crucial for designing accurate strategies to strengthen the sector and increase its impact and coverage. Obtaining this understanding is the main driver of this paper, which adopts the theoretical lens of the Knowledge-Based View (KBV) to develop and empirically assess a novel model of knowledge management capabilities (KMCs) development that improves the performance of SEs. The empirical assessment consisted of a quantitative study with 432 owners and senior members of SEs in the UK, underpinned by 21 interviews. The findings demonstrate how particular organisational characteristics of SEs, the external conditions in which they operate, and informal knowledge management activities have created overall improvements in their performance of up to 20%, based on a year-to-year comparison, including innovation and the creation of social and environmental value. These findings elucidate new perspectives that can contribute not only to SEs and SE supporters, but also to other firms.
Abstract:
How can applications be deployed on the cloud to achieve maximum performance? This question is challenging to address given the wide variety of cloud Virtual Machines (VMs) with different performance capabilities. The research reported in this paper addresses it by proposing a six-step benchmarking methodology in which a user provides a set of weights indicating how important memory, local communication, computation and storage related operations are to an application. The user can provide either four abstract weights or eight fine-grained weights, based on their knowledge of the application. The weights, along with benchmarking data collected from the cloud, are used to generate two rankings: one based only on the performance of the VMs, and one that takes both performance and cost into account. The rankings are validated on three case study applications using two validation techniques. The case studies on a set of experimental VMs highlight that maximum performance can be achieved with the three top-ranked VMs, and that maximum performance in a cost-effective manner is achieved by at least one of the top three VMs ranked by the methodology.
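The two rankings described above can be sketched in a few lines. This is not the paper's actual methodology or data: the VM names, attribute scores, prices, and the four weights below are invented for illustration, and the score is assumed to be a simple weighted sum over the four abstract attribute groups the abstract names.

```python
# Illustrative sketch of weight-based VM ranking. All names and numbers
# are hypothetical; the paper's real benchmarks are not reproduced here.
weights = {"memory": 0.4, "local_comm": 0.1, "compute": 0.4, "storage": 0.1}

vms = {
    # per-VM attribute scores (higher is better), hourly price in USD
    "vm.small":  ({"memory": 0.5, "local_comm": 0.6, "compute": 0.4, "storage": 0.5}, 0.10),
    "vm.medium": ({"memory": 0.7, "local_comm": 0.7, "compute": 0.7, "storage": 0.6}, 0.25),
    "vm.large":  ({"memory": 0.9, "local_comm": 0.8, "compute": 0.9, "storage": 0.8}, 0.60),
}

def perf_score(scores):
    # Weighted sum over the four abstract attribute groups.
    return sum(weights[a] * scores[a] for a in weights)

# Ranking 1: performance only.
by_perf = sorted(vms, key=lambda v: perf_score(vms[v][0]), reverse=True)

# Ranking 2: performance per unit cost.
by_value = sorted(vms, key=lambda v: perf_score(vms[v][0]) / vms[v][1], reverse=True)
```

With these made-up numbers the largest VM tops the performance-only ranking while the cheapest tops the performance-per-cost ranking, which is exactly the trade-off the two rankings are meant to expose.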
Abstract:
A large class of computational problems is characterised by frequent synchronisation and computational requirements that change as a function of time. When such a problem is solved on a message-passing multiprocessor machine [5], the combination of these characteristics leads to system performance that deteriorates over time. As the communication performance of parallel hardware steadily improves, load balance becomes a dominant factor in obtaining high parallel efficiency. Performance can be improved by periodic redistribution of the computational load; however, redistribution can sometimes be very costly. We study the issue of deciding when to invoke a global load re-balancing mechanism. Such a decision policy must actively weigh the costs of remapping against the performance benefits, and should be general enough to apply automatically to a wide range of computations. This paper discusses a generic strategy for Dynamic Load Balancing (DLB) in unstructured mesh computational mechanics applications, intended to handle varying levels of load change throughout the run. The major issues involved in a generic dynamic load balancing scheme are investigated, together with techniques to automate the implementation of a dynamic load balancing mechanism within the Computer Aided Parallelisation Tools (CAPTools) environment, a semi-automatic tool for the parallelisation of mesh-based FORTRAN codes.
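The cost/benefit decision the abstract describes can be captured in a minimal sketch. This is not CAPTools or the paper's actual policy: it assumes, for illustration, that each step's duration is set by the most loaded processor, that a remap would restore a perfect balance, and that the one-off remap cost is known; all numbers are invented.

```python
# Hedged sketch of a "when to rebalance" decision policy: remap only
# when the predicted saving over the remaining steps outweighs the
# one-off cost of redistribution. Assumptions and numbers are illustrative.
def should_rebalance(per_proc_load, remap_cost, remaining_steps):
    """per_proc_load: work units per processor for one time step."""
    max_load = max(per_proc_load)
    avg_load = sum(per_proc_load) / len(per_proc_load)
    # Each step currently costs max_load; a perfect remap would cost avg_load.
    saving_per_step = max_load - avg_load
    return saving_per_step * remaining_steps > remap_cost

balanced = [10, 10, 10, 10]
skewed = [5, 5, 5, 25]

# A badly skewed load with many steps left justifies paying the remap cost...
assert should_rebalance(skewed, remap_cost=50.0, remaining_steps=10)
# ...but near the end of the run, or with no imbalance, it does not.
assert not should_rebalance(skewed, remap_cost=50.0, remaining_steps=2)
assert not should_rebalance(balanced, remap_cost=50.0, remaining_steps=100)
```

The point of such a policy is precisely the one the abstract makes: the same imbalance can be worth fixing early in a run and not worth fixing late in it.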
Abstract:
In many areas of simulation, a crucial component of efficient numerical computation is the use of solution-driven adaptive features: locally adapted meshing or re-meshing, and dynamically changing computational tasks. The full advantages of high-performance computing (HPC) technology can thus only be exploited when efficient parallel adaptive solvers are available. The resulting requirement for HPC software is dynamic load balancing, which for many mesh-based applications means dynamic mesh re-partitioning. The DRAMA project was initiated to address this issue, with a particular focus on the requirements of industrial Finite Element codes, although codes using Finite Volume formulations will also be able to make use of the project results.
Abstract:
As the complexity of parallel applications increases, the performance limitations resulting from computational load imbalance become dominant. Mapping the problem space to the processors of a parallel machine in a manner that balances the workload of each processor will typically reduce the run-time. In many cases the computation time required for a given calculation cannot be predetermined, even at run-time, and so a static partitioning of the problem yields poor performance. For problems in which the computational load across the discretisation is dynamic and inhomogeneous, for example multi-physics problems involving fluid and solid mechanics with phase changes, the workload of a static subdomain will change over the course of a computation and cannot be estimated beforehand. For such applications the mapping of load to processors must change dynamically at run-time in order to maintain reasonable efficiency. The issue of dynamic load balancing is examined in the context of PHYSICA, a three-dimensional unstructured-mesh multi-physics continuum mechanics computational modelling code.
Abstract:
The share of variable renewable energy in electricity generation has grown exponentially in recent decades, and with the heightened pursuit of environmental targets the trend is set to continue at an increased pace. The two most important resources, wind and insolation, both bear the burden of intermittency, creating a need for regulation and posing a threat to grid stability. One possibility for dealing with the imbalance between demand and generation is to store electricity temporarily, which was addressed in this thesis by implementing a dynamic model of adiabatic compressed air energy storage (CAES) with the Apros dynamic simulation software. Based on a literature review, the existing models were found insufficient for studying transient situations because of their simplifications, and despite its importance, the investigation of part-load operation has not previously been possible with satisfactory precision. As a key result of the thesis, the cycle efficiency at the design point was simulated to be 58.7%, which correlated well with the literature and was validated through analytical calculations. The performance at part load was validated against models reported in the literature, showing good correlation. By introducing wind resource and electricity demand data to the model, grid operation of CAES was studied. To enable dynamic operation, start-up and shutdown sequences were approximated in a dynamic environment for, as far as is known, the first time, and a user component for the compressor variable guide vanes (VGV) was implemented. Even in its current state, the modularly designed model offers a framework for numerous studies. The validity of the model is limited by the accuracy of the VGV correlations at part load, and the implementation of heat losses to the thermal energy storage is necessary to enable longer simulations. More extensive use of forecasts is an important target of development if the system operation is to be optimised in the future.
Abstract:
This research explores the business model (BM) evolution process of entrepreneurial companies and investigates the relationship between BM evolution and firm performance. It has recently been increasingly recognised that the innovative design (and re-design) of BMs is crucial to the performance of entrepreneurial firms, as BMs can be associated with superior value creation and competitive advantage. However, there has been limited theoretical and empirical evidence on the micro-mechanisms behind the BM evolution process and the entrepreneurial outcomes of BM evolution. This research seeks to fill that gap by opening up the ‘black box’ of the BM evolution process, exploring the micro-patterns that facilitate the continuous shaping, changing, and renewing of BMs, and examining how BM evolution creates and captures value in a dynamic manner. Drawing together the BM and strategic entrepreneurship literatures, this research seeks to understand: (1) how and why companies introduce BM innovations and imitations; (2) how BM innovations and imitations interplay as patterns in the BM evolution process; and (3) how BM evolution patterns affect firm performance. The research adopts a longitudinal multiple case study design focused on the emerging phenomenon of BM evolution. Twelve entrepreneurial firms in the Chinese Online Group Buying (OGB) industry were selected for their continuous and intensive development of BMs and their varying success rates in this highly competitive market. Two rounds of data collection were carried out between 2013 and 2014, generating 31 interviews with founders/co-founders and 5,034 pages of data in total. Following a three-stage research framework, the data analysis begins by mapping the BM evolution process of the twelve companies and classifying the changes in the BMs into innovations and imitations. The second stage moves down to the BM level, addressing BM evolution as a dynamic process by exploring how BM innovations and imitations unfold and interplay over time. The final stage focuses on the firm level, providing theoretical explanations for the effects of BM evolution patterns on firm performance. This research provides new insights into the nature of BM evolution by elaborating on the missing link between BM dynamics and firm performance. The findings identify four patterns of BM evolution that have different effects on a firm's short- and long-term performance. The research contributes to the BM literature by showing what the BM evolution process actually looks like. Moreover, it takes a step towards a process theory of the interplay between BM innovations and imitations, which addresses the role of companies' actions and, more importantly, their reactions to competitors. Insights are also given into how entrepreneurial companies achieve and sustain value creation and capture by successfully combining BM evolution patterns. Finally, the findings on BM evolution contribute to the strategic entrepreneurship literature by increasing our understanding of how companies compete in a more dynamic and complex environment. The study reveals that achieving superior firm performance is not a simple question of whether to innovate or imitate, but rather one of integrating innovation and imitation strategies over time. The study concludes with a discussion of the findings and their implications for theory and practice.
Abstract:
Cold in-place recycling (CIR) and cold central plant recycling (CCPR) of asphalt concrete (AC) and/or full-depth reclamation (FDR) of AC and aggregate base are faster and less costly rehabilitation alternatives to conventional reconstruction for structurally distressed pavements. This study examines 26 rehabilitation projects across the USA and Canada. Field cores from these projects were tested for dynamic modulus and repeated-load permanent deformation, and these structural characteristics were compared with reference values for hot mix asphalt (HMA). A rutting sensitivity analysis was performed on two rehabilitation scenarios with recycled and conventional HMA structural overlays in different climatic conditions using the Mechanistic-Empirical Pavement Design Guide (MEPDG). The cold-recycled scenarios exhibited performance similar to that of HMA overlays in most cases; the exceptions were cases with thin HMA wearing courses and/or very poor cold-recycled material quality. The overall conclusion is that properly designed CIR/FDR/CCPR cold-recycled materials are a viable alternative to virgin HMA materials.
Abstract:
Many firms from emerging markets have flocked to developed countries, at high cost, in the hope of acquiring strategic assets that are difficult to obtain in their home countries. Ample research has focused on the motivations and strategies of emerging country firms' (ECFs') internationalization, but few studies have explored their survival in advanced economies in the years after venturing abroad. Owing to the imprinting effect of home country institutions, which inhibits their development outside the home market, ECFs are inclined to hire executives with international backgrounds and to affiliate with worldwide organizations in order to link up with the global market, embrace multiple perspectives in strategic decisions, and absorb knowledge of foreign markets. However, the effects of such an orientation on survival remain largely unexplored. Motivated by this discussion, I explore ECFs' survival and stock performance in a developed country (the U.S.). Applying population ecology, signaling theory and institutional theory, the dissertation investigates the characteristics of ECFs that survived in the U.S., tests the impact of global orientation on their survival, and examines how global-oriented activities (i.e., joining the United Nations Global Compact) affect their stock performance. The dissertation is structured as three empirical essays. The first essay explores and compares the characteristics of ECFs and developed country firms (DCFs) that managed to survive in the U.S. The second essay proposes the concept of global orientation and tests its influence on ECFs' survival. Employing signaling theory and institutional theory, the third essay investigates stock market reactions to announcements of United Nations Global Compact (UNGC) participation. The dissertation serves to explore the survival of ECFs in the developed country (the U.S.) by comparison with DCFs, to enrich traditional theories by testing non-traditional arguments in the context of ECFs' foreign operations, and to better inform practitioners operating ECFs about ways of surviving in developed countries and improving stockholders' confidence in their future growth.
Abstract:
Thesis (Ph.D., Computing) -- Queen's University, 2016-09-30.
Abstract:
DUNE is a next-generation long-baseline neutrino oscillation experiment. It aims to measure the still unknown CP-violation phase $ \delta_{CP} $ and the sign of $ \Delta m_{13}^2 $, which defines the neutrino mass ordering. DUNE will exploit a Far Detector composed of four multi-kiloton LArTPCs and a Near Detector (ND) complex located close to the neutrino source at Fermilab. The SAND detector at the ND complex is designed to perform on-axis beam monitoring, constrain uncertainties in the oscillation analysis, and perform precision neutrino physics measurements. SAND includes a 0.6 T superconducting magnet, an electromagnetic calorimeter, a 1-ton liquid Argon detector - GRAIN - and a modular, low-density straw tube target tracker system. GRAIN is an innovative LAr detector in which neutrino interactions can be reconstructed using only the LAr scintillation light, imaged by an optical system based on Coded Aperture masks and lenses - a novel approach never used before in particle physics applications. In this thesis, a first evaluation of the track reconstruction and calorimetric capabilities of GRAIN was obtained with an optical system based on Coded Aperture cameras. A simulation of $\nu_\mu + Ar$ interactions with the energy spectrum expected at the future Fermilab Long-Baseline Neutrino Facility (LBNF) was performed. The performance of SAND, combining the information provided by all its sub-detectors, was evaluated on the selection of a $ \nu_\mu + Ar \to \mu^- + p + X $ sample and on the reconstruction of the neutrino energy.
Abstract:
Numerous types of acute respiratory failure are routinely treated with non-invasive ventilatory support (NIV). Its efficacy is well documented: NIV lowers intubation and death rates in various respiratory disorders. It can be delivered by means of face masks or head helmets. Currently, the scientific community's interest in NIV helmets is mostly focused on optimising the mixing between CO2 and clean air and on improving patient comfort. To this end, fluid dynamic analysis plays a particularly important role, and a two-pronged approach is frequently employed. On one hand, numerical simulations provide information about the entire flow field and can explore different geometries, but they require huge temporal and computational resources. Experiments, on the other hand, help to validate simulations and provide results with a much smaller time investment, and thus remain at the core of research in fluid dynamics. The aim of this thesis work was to develop a flow bench and to use it for the analysis of NIV helmets. A flow test bench and an instrumented mannequin were successfully designed, produced and put into use. Experiments were performed to characterise the helmet interface in terms of pressure drop and flow rate drop over different inlet flow rates and outlet pressure set points, and velocity measurements were performed by means of Particle Image Velocimetry (PIV). The pressure drop and flow rate characteristics from the experiments were compared with CFD data, and sufficient agreement was observed between the numerical and experimental results. The PIV studies permitted qualitative and quantitative comparisons with numerical simulation data and offered a clear picture of the internal flow behaviour, aiding the identification of coherent flow features.
Abstract:
The aim of this study was to evaluate the performance of the Centers for Dental Specialties (CDS) in Brazil and its association with sociodemographic indicators of the municipalities, structural variables of the services, and primary health care organization in the years 2004-2009. The study used secondary data on procedures performed in the CDS in the specialties of periodontics, endodontics, surgery and primary care. Bivariate analysis by the χ2 test was used to test the association between the dependent variable (performance of the CDS) and the independent variables, and a Poisson regression analysis was then performed. With regard to the overall achievement of targets, the performance of the majority of the CDS (69.25%) was considered poor/regular. The independent factors associated with poor/regular performance were: location in the Northeast, South and Southeast regions; lower Human Development Index (HDI); lower population density; and shorter time since implementation. HDI and population density are important for the performance of the CDS in Brazil. Similarly, the peculiarities of less populated areas, as well as regional location and time since service implementation, should be taken into account in the planning of these services.
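The bivariate step described above can be illustrated with a small, self-contained sketch of a χ2 test of association on a 2x2 table (CDS performance cross-tabulated against one dichotomised indicator). The counts below are invented for illustration only; the study's real data are not reproduced here.

```python
# Illustrative chi-squared statistic for a 2x2 contingency table,
# computed from scratch. Counts are hypothetical, not the study's data.
def chi_square_2x2(table):
    """table = [[a, b], [c, d]] of observed counts; returns the X^2 statistic."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: lower-HDI vs. higher-HDI municipalities (hypothetical split);
# columns: poor/regular vs. good CDS performance.
observed = [[60, 20],
            [25, 45]]
x2 = chi_square_2x2(observed)
# A statistic exceeding the 3.84 critical value (df=1, alpha=0.05) would
# lead the bivariate analysis to flag the indicator for the Poisson model.
```

In the study's two-step design, indicators flagged as associated at this stage would then enter the multivariable Poisson regression.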
Abstract:
cDNA arrays are a powerful tool for discovering gene expression patterns, and nylon arrays have the advantage that they can be re-used several times. A key issue in high-throughput gene expression analysis is sensitivity. In the case of nylon arrays, signal detection can be affected by the plastic bags used to keep the membranes humid. In this study, we evaluated the effect of five types of plastic on radioactive transmittance, on the number of genes with a signal above background, and on data variability. A polyethylene plastic bag 69 μm thick had a strong shielding effect that blocked 68.7% of the radioactive signal; this shielding decreased the number of detected genes and increased data variability. Thinner plastics gave better results: although plastics made from polyvinylidene chloride, polyvinyl chloride (both 13 μm thick) and polyethylene (29 and 7 μm thick) showed different levels of transmittance, they all performed similarly well. Polyvinylidene chloride and polyethylene 29 μm thick were the plastics of choice because of their easy handling. For other types of plastic, it is advisable to run a simple check on performance in order to obtain the maximum information from nylon cDNA arrays.