850 results for Constant Market Share
Abstract:
Hollywood has dominated the global film business since the First World War. Economic formulas used by governments to assess levels of industry dominance typically measure market share to establish the degree of industry concentration. The business literature reveals that a marketing orientation strongly correlates with superior market performance and that market leaders that possess a set of six superior marketing capabilities are able to continually outperform rival firms. This paper argues that the historical evidence shows that the Hollywood Majors have consistently outperformed rival firms and rival film industries in each of those six marketing capabilities, and that unless rivals develop a similarly integrated and cohesive strategic marketing management approach to the movie business and match the Major studios' superior capabilities, Hollywood's dominance will continue. This paper also proposes that in cyberspace, whilst the Internet does provide a channel that democratises film distribution, the flat landscape of the World Wide Web means that, in order to stand out from the clutter of millions of cyber-voices seeking attention, independent film companies need to possess superior strategic marketing management capabilities and develop effective e-marketing strategies to find a niche, attract a loyal online audience and prosper. However, mirroring a recent CIA report forecasting a multi-polar world economy, this paper also argues that potentially serious longer-term rivals are emerging and will increasingly take a larger slice of an expanding global box office as India, China and other major developing economies and their respective cultural channels grow and achieve economic parity with, or surpass, the advanced western economies. Thus, in terms of global market share over time, Hollywood's slice of the pie will comparatively diminish in an emerging multi-polar movie business.
Abstract:
Academic libraries have been acquiring ebooks for their collections for a number of years, but the uptake by some users has been curtailed by the limitations of screen reading on a traditional PC or laptop. Ebook readers promise to take us into a new phase of ebook development. Innovations include wireless connectivity, electronic paper, increased battery life, and customisable displays. The recent rapid take-up of ebook readers in the United States, particularly Amazon's Kindle, suggests that they may be about to gain mass-market acceptance. A bewildering number of ebook readers are being promoted by companies eager to gain market share. In addition, each month seems to bring a new ebook reader or a new model of an existing device. Library administrators are faced with the challenge of separating the hype from the reality and deciding when the time is right to invest in and support these new technologies. The Library at QUT, in particular the QUT Library Ebook Reference Group (ERG), has been closely following developments in ebooks and ebook reader technologies. During mid-2010, QUT Library undertook a trial of a range of ebook readers available to Australian consumers. Each of the ebook readers acquired was evaluated by members of the QUT Library ERG and two student focus groups. Major criteria for evaluation included usability, functionality, accessibility and compatibility with QUT Library's existing ebook collection. The two student focus groups evaluated the ebook readers mostly according to the criteria of usability and functionality. This paper will discuss these evaluations and outline a recommendation about which type (or types) of ebook readers could be acquired and made available for lending to students.
Abstract:
Earlier research developed theoretically-based aggregate metrics for technology strategy and used them to analyze California bridge construction firms (Hampson, 1993). Determinants of firm performance, including trend in contract awards, market share and contract awards per employee, were used as indicators of competitive performance. The results of this research were a series of refined theoretically-based measures for technology strategy and a demonstrated positive relationship between technology strategy and competitive performance within the bridge construction sector. This research showed that three technology strategy dimensions—competitive positioning, depth of technology strategy, and organizational fit—showed very strong correlation with the competitive performance indicators of absolute growth in contract awards and contract awards per employee. Both researchers and industry professionals need improved understanding of how technology affects results, and how to better target investments to improve competitive performance in particular industry sectors. This paper builds on the previous research findings by evaluating the strategic fit of firms' approach to technology with industry segment characteristics. It begins with a brief overview of the background regarding technology strategy. The major sections of the paper describe niches and firms in an example infrastructure construction market, analyze appropriate technology strategies, and describe managerial actions to implement these strategies and support the business objectives of the firm.
Abstract:
The effective implementation of an ISO 9001 Quality Management System (QMS) in construction companies requires a proper and full implementation of the system, allowing companies to improve the way they operate and thereby increase profitability and market share, produce innovative and sustainable construction products, or improve employee and customer satisfaction. In light of this, this paper discusses the current status of QMS implementation, particularly related to the twenty elements of ISO 9001, within the grade 7 (G-7) category of Indonesian construction companies. A survey involving 403 respondents from 77 companies was conducted to solicit an evaluation of the current implementation levels of the ISO 9001 elements. The survey findings indicated that a large percentage of the sector surveyed had 'not so fully implemented' the elements. Scrutiny of the data also identified elements that are 'minimally implemented', whilst none of the elements fell into the category of 'fully implemented'. Based on these findings, it is suggested that the G-7 contractors may need to fully commit to practicing control of customer-supplied product and statistical techniques in order to enhance the effective implementation of ISO 9001 elements and ensure better quality performance. These two elements are recognized as the least implemented of the quality elements.
Abstract:
Real estate, or property development, is considered one of the pillar industries of the Chinese economy. As a result of the opening up of the economy as well as the "macro-control" policy of the Central Chinese Government to moderate the frenetic pace of growth of the economy, the real estate industry has faced fierce competition and ongoing change. Real estate firms in China must improve their competitiveness in order to maintain market share or even survive in this brutally competitive environment. This study developed a methodology to evaluate the competitiveness of real estate developers in China and then used a case study to illustrate the effectiveness of the evaluation method. Four steps were taken to achieve this. The first step was to conduct a thorough literature review covering the characteristics of the real estate industry, theories about competitiveness and the competitive characteristics of real estate developers. Following this literature review, the competitive model was developed based on seven key competitive factors (the 'level 1') identified in the literature. They include: (1) financial competency; (2) market share; (3) management competency; (4) social responsibility; (5) organisational competency; (6) technological capabilities; and, (7) regional competitiveness. In the next step of the research, the competitive evaluation criteria (the 'level 2') under each of the competitive factors (the 'level 1') were established. Additionally, a set of competitive attributes (the 'level 3') was identified under each competitive criterion (the 'level 2'). These attributes were initially recognised during the literature review and then expanded upon through interviews with multidisciplinary experts and practitioners in various real estate-related industries. The final step in this research was to undertake a case study using the proposed evaluation method and attributes. Through the study of an actual real estate development company, the procedures and effectiveness of the evaluation method were illustrated and validated. Through the above steps, this research investigates and develops an analytical system for determining the corporate competitiveness of real estate developers in China. The analytical system is formulated to evaluate the "state of health" of the business from different competitive perspectives. The result of the empirical study illustrates that a systematic and structured evaluation can effectively assist developers in identifying their strengths and highlighting potential problems. This is very important for the development of an overall corporate strategy and supporting key strategic decisions. This study also provides some insights, analysis and suggestions for improving the competitiveness of real estate developers in China from different perspectives, including: management competency, organisational competency, technological capabilities, financial competency, market share, social responsibility and regional competitiveness. In the case study, problems were found in each of these areas, and they appear to be common in the industry. To address these problems and improve the competitiveness and effectiveness of Chinese real estate developers, a variety of suggestions are proposed. The findings of this research provide an insight into the factors that influence competitiveness in the Chinese real estate industry while also assisting practitioners to formulate strategies to improve their competitiveness.
References for studying the competitiveness of real estate developers in other countries are also provided.
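To illustrate how a three-level evaluation of this kind can be rolled up into a single competitiveness index, the sketch below uses a simple weighted-sum aggregation from attributes (level 3) to criteria (level 2) to factors (level 1). The factor names follow the abstract, but the weights, scores and the weighted-sum rule itself are illustrative assumptions rather than the study's actual method.

```python
# Sketch: weighted roll-up of a three-level competitiveness evaluation.
# The structure follows the abstract (factors -> criteria -> attributes); the weights,
# scores and simple weighted-sum aggregation are illustrative assumptions only.

def score_criterion(attributes):
    """Level 3 -> level 2: weighted average of attribute scores (0-10 scale)."""
    return sum(w * s for w, s in attributes) / sum(w for w, _ in attributes)

def score_factor(criteria):
    """Level 2 -> level 1: weighted average of criterion scores."""
    return sum(w * score_criterion(attrs) for w, attrs in criteria) / sum(w for w, _ in criteria)

def overall_competitiveness(factors):
    """Level 1 -> overall index: weighted average of factor scores."""
    total_weight = sum(w for w, _ in factors.values())
    return sum(w * score_factor(crit) for w, crit in factors.values()) / total_weight

# Each factor maps to (weight, [(criterion_weight, [(attribute_weight, attribute_score), ...]), ...]).
factors = {
    "financial competency":       (0.20, [(1.0, [(0.6, 7.5), (0.4, 6.0)])]),
    "market share":               (0.15, [(1.0, [(1.0, 5.5)])]),
    "management competency":      (0.15, [(1.0, [(0.5, 6.5), (0.5, 7.0)])]),
    "social responsibility":      (0.10, [(1.0, [(1.0, 6.0)])]),
    "organisational competency":  (0.15, [(1.0, [(1.0, 6.8)])]),
    "technological capabilities": (0.15, [(1.0, [(1.0, 5.0)])]),
    "regional competitiveness":   (0.10, [(1.0, [(1.0, 7.2)])]),
}
print(round(overall_competitiveness(factors), 2))  # a single "state of health" score
```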
Abstract:
Contamination of packaged foods due to micro-organisms entering through air leaks can cause serious public health issues and cost companies large amounts of money due to product recalls, compensation claims, consumer impact and subsequent loss of market share. The main source of contamination is leaks in packaging, which allow air, moisture and micro-organisms to enter the package. In the food processing and packaging industry worldwide, there is an increasing demand for cost-effective, state-of-the-art inspection technologies that are capable of reliably detecting leaky seals and delivering products at six-sigma. This project will develop new non-destructive testing technology using digital imaging and sensing combined with a differential vacuum technique to assess the seal integrity of food packages on a high-speed production line. The cost of leaky packages in Australian food industries is estimated at close to AUD $35 million per year. Flexible plastic packages are widely used, and are the least expensive form of retaining the quality of the product. These packets can be used to seal, and therefore maximise, the shelf life of both dry and moist products. The seals of food packages need to be airtight so that the food content is not contaminated through contact with micro-organisms that enter as a result of air leakage. Airtight seals also extend the shelf life of packaged foods, and manufacturers attempt to prevent food products with leaky seals being sold to consumers. There are many current NDT (non-destructive testing) methods of testing the seal of flexible packages, best suited to random sampling and laboratory purposes. The three most commonly used methods are vacuum/pressure decay, the bubble test, and helium leak detection. Although these methods can detect very fine leaks, they are limited by their high processing time and are not viable on a production line. Two non-destructive in-line packaging inspection machines are currently available and are discussed in the literature review. The detailed design and development of the High-Speed Sensing and Detection System (HSDS) is the fundamental requirement of this project and the future prototype and production unit. Successful laboratory testing was completed, and a methodical design procedure was needed for a successful concept. The mechanical tests confirmed the vacuum hypothesis and seal integrity with good, consistent results. Electrically, the testing also provided solid results, enabling the researcher to move the project forward with a certain amount of confidence. The laboratory design testing allowed the researcher to confirm theoretical assumptions before moving into the detailed design phase. Discussion of the development of the alternative concepts in both mechanical and electrical disciplines enabled the researcher to make an informed decision. Each major mechanical and electrical component is detailed through the research and design process. The design procedure methodically works through the various major functions from both a mechanical and an electrical perspective.
It opens up alternative ideas for the major components that, although sometimes not practical in this application, show that the researcher has exhausted all engineering and functionality options. Further concepts were then designed and developed for the entire HSDS unit based on previous practice and theory. It is envisaged that both the prototype and production versions of the HSDS would utilise standard, industry-available components, manufactured and distributed locally. Future research and testing of the prototype unit could result in a successful trial unit being incorporated into a working food processing production environment. Recommendations and future work are discussed, along with options in other food processing and packaging disciplines, and other areas in the non-food processing industry.
Abstract:
The reliability of urban passenger trains is a critical performance measure for passenger satisfaction and ultimately market share. A delay to one train in a peak period can have a severe effect on the schedule adherence of other trains. This paper presents an analytically based model to quantify the expected positive delay for individual passenger trains and track links in an urban rail network. The model specifically addresses direct delay to trains, knock-on delays to other trains, and delays at scheduled connections. A solution to the resultant system of equations is found using an iterative refinement algorithm. Model validation, which is carried out using a real-life suburban train network consisting of 157 trains, shows the model estimates to be on average within 8% of those obtained from a large-scale simulation. Also discussed is the application of the model to assess the consequences of increased scheduled slack time, as well as investment strategies designed to reduce delay.
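To make the solution approach concrete, the sketch below shows a fixed-point style iterative refinement for expected delays, assuming a deliberately simplified model in which each train's expected positive delay equals its direct delay plus a fraction of the expected delay of each train it follows. The toy network, the knock-on coefficient and all names are illustrative, not the paper's formulation.

```python
# Minimal sketch of an iterative refinement solver for expected train delays.
# Assumptions (not from the paper): each train i has a direct (exogenous) delay d[i]
# and inherits a fraction alpha of the expected delay of each train it follows
# on a shared track link or at a scheduled connection.

def solve_expected_delays(direct_delay, predecessors, alpha=0.4, tol=1e-6, max_iter=1000):
    """Fixed-point iteration: E[i] = d[i] + alpha * sum of E[j] over trains j that train i follows."""
    expected = dict(direct_delay)                     # initial guess: direct delays only
    for _ in range(max_iter):
        max_change = 0.0
        for train, direct in direct_delay.items():
            knock_on = alpha * sum(expected[j] for j in predecessors.get(train, []))
            new_value = direct + knock_on
            max_change = max(max_change, abs(new_value - expected[train]))
            expected[train] = new_value
        if max_change < tol:                          # converged
            break
    return expected

# Toy 3-train example: T2 follows T1 on a shared link, T3 meets T2 at a scheduled connection.
direct = {"T1": 2.0, "T2": 0.5, "T3": 0.0}            # minutes of direct delay
preds = {"T2": ["T1"], "T3": ["T2"]}
print(solve_expected_delays(direct, preds))
```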
Abstract:
Our daily lives become more and more dependent upon smartphones due to their increased capabilities. Smartphones are used in various ways, from payment systems to assisting the lives of elderly or disabled people. Security threats for these devices become increasingly dangerous since there is still a lack of proper security tools for protection. Android emerges as an open smartphone platform which allows modification even at the operating system level. Therefore, third-party developers have the opportunity to develop kernel-based low-level security tools, which is unusual for smartphone platforms. Android quickly gained popularity among smartphone developers and even beyond, since it is based on Java on top of "open" Linux, in comparison to former proprietary platforms with very restrictive SDKs and corresponding APIs. Symbian OS, for example, which holds the greatest market share among all smartphone OSs, was closing critical APIs to common developers and introduced application certification. This was done because this OS was the main target for smartphone malware in the past. In fact, more than 290 malware samples designed for Symbian OS appeared from July 2004 to July 2008. Android, in turn, promises to be completely open source. Together with the Linux-based smartphone OS OpenMoko, open smartphone platforms may attract malware writers to create malicious applications endangering critical smartphone applications and owners' privacy. In this work, we present our current results in analyzing the security of Android smartphones with a focus on its Linux side. Our results are not limited to Android; they are also applicable to Linux-based smartphones such as the OpenMoko Neo FreeRunner. Our contribution in this work is three-fold. First, we analyze the Android framework and the Linux kernel to check security functionalities. We survey well-accepted security mechanisms and tools which can increase device security. We describe how to adopt these security tools on the Android kernel and provide an analysis of their overhead in terms of resource usage. As open smartphones are released and may increase their market share similarly to Symbian, they may attract the attention of malware writers. Therefore, our second contribution focuses on malware detection techniques at the kernel level. We test the applicability of existing signature and intrusion detection methods in the Android environment. We focus on monitoring events at the kernel level; that is, identifying critical kernel, log file, file system and network activity events, and devising efficient mechanisms to monitor them in a resource-limited environment. Our third contribution involves initial results of our malware detection mechanism based on static function call analysis. We identified approximately 105 Executable and Linking Format (ELF) executables installed on the Linux side of Android. We perform a statistical analysis of the function calls used by these applications. The results of the analysis can be compared with those of newly installed applications to detect significant differences. Additionally, certain function calls indicate malicious activity. Therefore, we present a simple decision tree for deciding the suspiciousness of the corresponding application. Our results present a first step towards detecting malicious applications on Android-based devices.
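As a rough illustration of the static function-call analysis step, the sketch below extracts dynamic function symbols from ELF executables using the pyelftools library and applies a trivial scoring rule against a baseline profile. pyelftools is a real library, but the baseline binaries, the "suspicious call" list, the weights and the threshold are illustrative assumptions, not the feature set or decision tree used in the study.

```python
# Sketch: static function-call profiling of ELF executables (assumes pyelftools is installed).
import os
from collections import Counter
from elftools.elf.elffile import ELFFile

SUSPICIOUS_CALLS = {"execve", "fork", "socket", "connect"}   # illustrative list, not the paper's

def function_symbols(path):
    """Return the set of dynamic function symbols an ELF binary references."""
    with open(path, "rb") as f:
        elf = ELFFile(f)
        dynsym = elf.get_section_by_name(".dynsym")
        if dynsym is None:
            return set()
        return {sym.name for sym in dynsym.iter_symbols()
                if sym["st_info"]["type"] == "STT_FUNC" and sym.name}

def suspiciousness(path, baseline, threshold=5):
    """Toy decision rule: score calls that are absent from the baseline profile
    or appear in the suspicious list; flag the binary if the score exceeds a threshold."""
    calls = function_symbols(path)
    rare = [c for c in calls if baseline[c] == 0]
    flagged = calls & SUSPICIOUS_CALLS
    score = len(rare) + 2 * len(flagged)
    return score > threshold, score

# Usage example (paths are placeholders for binaries pulled from the target device).
baseline = Counter()
for binary in ["/system/bin/ls", "/system/bin/ping"]:
    if os.path.exists(binary):
        baseline.update(function_symbols(binary))

target = "/data/local/tmp/new_app"
if os.path.exists(target):
    print(suspiciousness(target, baseline))
```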
Abstract:
Our daily lives become more and more dependent upon smartphones due to their increased capabilities. Smartphones are used in various ways, e.g. for payment systems or assisting the lives of elderly or disabled people. Security threats for these devices become more and more dangerous since there is still a lack of proper security tools for protection. Android emerges as an open smartphone platform which allows modification even at the operating system level and where, for the first time, third-party developers have the opportunity to develop kernel-based low-level security tools. Android quickly gained popularity among smartphone developers and even beyond, since it is based on Java on top of "open" Linux, in comparison to former proprietary platforms with very restrictive SDKs and corresponding APIs. Symbian OS, which holds the greatest market share among all smartphone OSs, was even closing critical APIs to common developers and introduced application certification. This was done because this OS was the main target for smartphone malware in the past. In fact, more than 290 malware samples designed for Symbian OS appeared from July 2004 to July 2008. Android, in turn, promises to be completely open source. Together with the Linux-based smartphone OS OpenMoko, open smartphone platforms may attract malware writers to create malicious applications endangering critical smartphone applications and owners' privacy. Since signature-based approaches mainly detect known malware, anomaly-based approaches can be a valuable addition to these systems. They are based on mathematical algorithms that process data describing the state of a certain device. To obtain these data, a monitoring client is needed that extracts usable information (features) from the monitored system. Our approach follows a dual system for analyzing these features. On the one hand, functionality for light-weight on-device detection is provided; on the other hand, since most algorithms are resource-exhaustive, remote feature analysis is also provided. Having this dual system enables event-based detection that can react to the current detection need. In our ongoing research we aim to investigate the feasibility of light-weight on-device detection for certain occasions. On other occasions, whenever significant changes are detected on the device, the system can trigger remote detection with heavy-weight algorithms for better detection results. In the absence of the server, or as a supplementary approach, we also consider a collaborative scenario. Here, mobile devices sharing a common objective are enabled by a collaboration module to share information, such as intrusion detection data and results. This is based on an ad-hoc network mode that can be provided by the WiFi or Bluetooth adapter that nearly every smartphone possesses.
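A minimal sketch of the dual light-weight/heavy-weight arrangement described above is given below: a cheap on-device anomaly score is computed from a few monitored features, and only when it exceeds a threshold is a remote, heavier analysis triggered. The feature names, baseline values, weights, threshold and server endpoint are all invented for illustration and are not the authors' monitoring client.

```python
# Sketch of event-based dual detection: cheap on-device check, remote analysis on demand.
# All feature names, weights, the threshold and the endpoint below are illustrative assumptions.
import json
import urllib.request

BASELINE = {"syscalls_per_s": 120.0, "net_bytes_per_s": 4096.0, "battery_drain_pct_h": 3.0}
WEIGHTS = {"syscalls_per_s": 1.0, "net_bytes_per_s": 0.5, "battery_drain_pct_h": 2.0}
THRESHOLD = 3.0   # above this, escalate to heavy-weight remote detection

def lightweight_score(features):
    """Weighted relative deviation from the baseline profile (on-device, cheap)."""
    return sum(WEIGHTS[k] * abs(features[k] - BASELINE[k]) / BASELINE[k]
               for k in BASELINE if k in features)

def remote_analysis(features, url="http://detector.example.org/analyze"):
    """Placeholder for the resource-intensive server-side algorithms."""
    req = urllib.request.Request(url, data=json.dumps(features).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def check_device(features):
    score = lightweight_score(features)
    if score <= THRESHOLD:
        return {"verdict": "ok", "score": score}          # handled on-device
    return remote_analysis(features)                       # significant change: escalate

# Small deviation from the baseline, so this stays on-device.
print(check_device({"syscalls_per_s": 130.0, "net_bytes_per_s": 4500.0,
                    "battery_drain_pct_h": 3.2}))
```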
Abstract:
Despite its potential multiple contributions to sustainable policy objectives, urban transit is generally not widely used by the public in terms of its market share compared to that of automobiles, particularly in affluent societies with low-density urban forms like Australia. Transit service providers need to attract more people to transit by improving transit quality of service. The key to cost-effective transit service improvements lies in the accurate evaluation of policy proposals, taking into account their impacts on transit users. If transit providers knew what is more or less important to their customers, they could focus their efforts on optimising customer-oriented service. Policy interventions could also be specified to influence transit users' travel decisions, with targets of customer satisfaction and broader community welfare. This significance motivates the research into the relationship between urban transit quality of service and its user perception as well as behaviour. This research focused on two dimensions of transit users' travel behaviour: route choice and access arrival time choice. The study area chosen was a busy urban transit corridor linking the Brisbane central business district (CBD) and the St. Lucia campus of The University of Queensland (UQ). This multi-system corridor provided a 'natural experiment' for transit users between the CBD and UQ, as they can choose between busway 109 (with grade-separated exclusive right-of-way), ordinary on-street bus 412, and the linear fast ferry CityCat on the Brisbane River. The population of interest was defined as attendees at UQ who travelled from the CBD or from a suburb via the CBD. Two waves of internet-based self-completion questionnaire surveys were conducted to collect data on sampled passengers' perceptions of transit service quality and behaviour in using public transit in the study area. The first wave survey collected behaviour and attitude data on respondents' daily transit usage and their direct ratings of the importance of route-level transit service quality factors. A series of statistical analyses was conducted to examine the relationships between transit users' travel and personal characteristics and their transit usage characteristics. A factor-cluster segmentation procedure was applied to respondents' importance ratings on service quality variables regarding transit route preference to explore users' various perspectives on transit quality of service. Based on the perceptions of service quality collected from the second wave survey, a series of quality criteria of the transit routes under study was quantitatively measured, particularly travel time reliability in terms of schedule adherence. It was shown that mixed traffic conditions and peak-period effects can affect transit service reliability. Multinomial logit models of transit users' route choice were estimated using route-level service quality perceptions collected in the second wave survey. The relative importance of service quality factors was derived from the choice models' significant parameter estimates, such as access and egress times, seat availability, and the busway system. Interpretations of the parameter estimates were provided, particularly the equivalent in-vehicle time of access and egress times, and of busway in-vehicle time. Market segmentation by trip origin was applied to investigate the difference in magnitude between the parameter estimates of access and egress times. The significant costs of transfers in transit trips were highlighted.
These importance ratios were applied back to the quality perceptions collected as RP data to compare satisfaction levels across the service attributes and to generate an action relevance matrix to prioritise attributes for quality improvement. An empirical study of the relationship between average passenger waiting time and transit service characteristics was performed using the perceived service quality data. Passenger arrivals for services with long headways (over 15 minutes) were found to be clearly coordinated with the scheduled departure times of transit vehicles in order to reduce waiting time. This drove further investigations and modelling innovations in passengers' access arrival time choice and its relationships with transit service characteristics and average passenger waiting time. Specifically, original contributions were made in the formulation of expected waiting time, the analysis of risk-averse attitudes to missing a desired service run in passengers' access arrival time choice, and extensions of the utility function specification for modelling the passenger access arrival distribution, using complicated expected utility forms and non-linear probability weighting to explicitly accommodate the risk of missing an intended service and passengers' risk aversion. Discussion of this research's contributions to knowledge, its limitations, and recommendations for future research is provided in the concluding section of this thesis.
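To illustrate how route-choice estimates of this kind are typically interpreted, the sketch below computes multinomial logit choice probabilities for the three corridor options and derives "equivalent in-vehicle time" ratios from the access and egress time coefficients. The utility specification and every coefficient and travel-time value are illustrative assumptions, not the estimates reported in this thesis.

```python
# Sketch: multinomial logit route choice and equivalent in-vehicle time ratios.
# All coefficients and travel-time values below are illustrative, not the thesis estimates.
import math

BETA = {"in_vehicle": -0.05, "access": -0.12, "egress": -0.10, "busway": 0.6}

def utility(route):
    """Linear-in-parameters systematic utility V_r for one route."""
    return (BETA["in_vehicle"] * route["in_vehicle"] +
            BETA["access"] * route["access"] +
            BETA["egress"] * route["egress"] +
            BETA["busway"] * route["is_busway"])

def choice_probabilities(routes):
    """P(r) = exp(V_r) / sum_s exp(V_s) over the available routes."""
    expv = {name: math.exp(utility(r)) for name, r in routes.items()}
    total = sum(expv.values())
    return {name: v / total for name, v in expv.items()}

routes = {
    "busway_109": {"in_vehicle": 18, "access": 5, "egress": 4, "is_busway": 1},
    "bus_412":    {"in_vehicle": 25, "access": 3, "egress": 4, "is_busway": 0},
    "citycat":    {"in_vehicle": 22, "access": 8, "egress": 6, "is_busway": 0},
}
print(choice_probabilities(routes))

# Equivalent in-vehicle time: one minute of access (or egress) time is perceived
# as this many minutes of in-vehicle time.
print(BETA["access"] / BETA["in_vehicle"], BETA["egress"] / BETA["in_vehicle"])
```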
Abstract:
The export market for Australian wine continues to grow at a rapid rate, with imported wines also gaining market share in sales in Australia. It is estimated that over 60 per cent of all Australian wine is exported, while 12 per cent of wine consumed in Australia has overseas origins. In addition to understanding the size and direction (import or export) of the wine trade, the foreign locales also play an important role in any tax considerations. While the export market for Australian produced alcohol continues to grow, it is into the Asian market that the most significant inroads are occurring. Sales into China of bottled wine over $7.50 per litre recently overtook the volume sold to our traditional partners, the United States and Canada. It is becoming easier for even small to medium sized businesses to export their services or products overseas. However, it is vital for those businesses to understand the tax rules applying to any international transactions. Specifically, one of the first tax regimes that importers and exporters need to understand once they decide to establish a presence overseas is transfer pricing. These are the rules that govern the cross-border prices of goods, services and other transactions entered into between related parties. This paper is Part 2 of the seminar presented on transfer pricing and international tax issues which are particularly relevant to the wine industry. The predominant focus of Part 2 is to discuss four key areas likely to affect international expansion. First, the use of the available transfer pricing methodologies for international related party transactions is discussed. Second, the effects that double tax agreements will have on taking a business offshore are considered. Third, the risks associated with aggressive tax planning through tax information exchange agreements are reviewed. Finally, the paper predicts future 'trip-wires' and areas to 'watch out for' for practitioners dealing with clients operating in the international arena.
Abstract:
Purpose: The aim was to determine the extent of daily disposable contact lens prescribing worldwide and to characterise the associated demographics and fitting patterns. Methods: Up to 1,000 survey forms were sent to contact lens fitters in up to 40 countries between January and March every year for five consecutive years (2007 to 2011). Practitioners were asked to record data relating to the first 10 contact lens fits or refits performed after receiving the survey form. Survey data collected since 1996 were also analysed for seven nations to assess daily disposable lens fitting trends since that time. Results: Data were collected in relation to 97,289 soft lens fits, of which 23,445 (24.1 per cent) were with daily disposable lenses and 73,170 (75.9 per cent) were with reusable lenses. Daily disposable lens prescribing ranged from 0.6 per cent of all soft lenses in Nepal to 66.2 per cent in Qatar. Compared with reusable lens fittings, daily disposable lens fittings can be characterised as follows: older age (30.0 ± 12.5 versus 29.3 ± 12.3 years for reusable lenses); males are over-represented; a greater proportion of new fits versus refits; 85.9 per cent hydrogel; lower proportion of toric and presbyopia designs and a higher proportion of part-time wear. There has been a continuous increase in daily disposable lens prescribing between 1996 and 2011. The proportion of daily disposable lens fits (as a function of all soft lens fits) is positively related to the gross domestic product at purchasing power parity per capita (r² = 0.55, F = 46.8, p < 0.0001). Conclusions: The greater convenience and other benefits of daily disposable lenses have resulted in this modality capturing significant market share. The contact lens field appears to be heading toward a true single-use-only, disposable lens market.
Abstract:
The international tax system, designed a century ago, has not kept pace with the modern multinational entity, rendering it ineffective in taxing many modern businesses according to economic activity. One of those modern multinational entities is the multinational financial institution (MNFI). The recent global financial crisis provides a particularly relevant and significant example of the failure of the current system on a global scale. The modern MNFI is increasingly undertaking more globalised and complex trading operations. A primary reason for the globalisation of financial institutions is that they typically 'follow-the-customer' into jurisdictions where international capital and international investors are required. The International Monetary Fund (IMF) recently reported that from 1995 to 2009, foreign bank presence in developing countries grew by 122 per cent. The same study indicates that foreign banks have a 20 per cent market share in OECD countries and 50 per cent in emerging markets and developing countries. Hence, most significant is the fact that MNFIs are increasingly undertaking an intermediary role in developing economies, where they are financing core business activities such as mining and tourism. IMF analysis also suggests that in the future, foreign bank expansion will be greatest in emerging economies. The difficulties for developing countries in applying current international tax rules, especially the current traditional transfer pricing regime, are particularly acute in relation to MNFIs, which are the biggest users of tax havens and offshore finance. This paper investigates whether a unitary taxation approach which reflects economic reality would more easily and effectively ensure that the profits of MNFIs are taxed in the jurisdictions which give rise to those profits. It has previously been argued that the uniqueness of MNFIs results in a failure of the current system to accurately allocate profits and that unitary tax as an alternative could provide a sounder allocation model for international tax purposes. This paper goes a step further and examines the practicalities of the implementation of unitary taxation for MNFIs in terms of the key components of such a regime, along with their implications. This paper adopts a two-step approach in considering the implications of unitary taxation as a means of improved corporate tax coordination, which requires international acceptance and agreement. First, the definitional issues of the unitary MNFI are examined and, second, an appropriate allocation formula for this sector is investigated. To achieve this, the paper asks, first, how the financial sector should be defined for the purposes of unitary taxation and what should constitute a unitary business for that sector and, second, what is the 'best practice' model of an allocation formula for the purposes of the apportionment of the profits of the unitary business of a financial institution.
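For context on what an "allocation formula" looks like in a unitary regime, the generic three-factor formulary apportionment below shows how a jurisdiction's share of group profit is computed; the equal weighting of sales, payroll and assets is only the conventional textbook illustration, not the 'best practice' formula for financial institutions that the paper investigates.

```latex
% Generic formulary apportionment: profit allocated to jurisdiction j from the
% group's global unitary profit \Pi, using sales (S), payroll (P) and asset (A)
% factors. The choice and weighting of factors is precisely what the paper examines.
\[
\Pi_j \;=\; \Pi \left( w_S \frac{S_j}{S} + w_P \frac{P_j}{P} + w_A \frac{A_j}{A} \right),
\qquad w_S = w_P = w_A = \tfrac{1}{3}
\]
```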
Abstract:
This paper, by consideration of the issue of authorisation, addresses a very practical development in commerce. Online copyright infringement is now not only about unauthorised uses of cinematograph films but has filtered down to become more prevalent amongst small to medium enterprises (SMEs), as some competitors embrace online trading by aggressively, and often unlawfully, seeking market share. It is understandable that internet service providers (ISPs), as gatekeepers of internet traffic, may be considered as more than a conduit of contravening conduct but not a joint tortfeasor involved in a common design. Between those extremes lies the concept of authorisation in copyright, which has a long history in Australia since the Copyright Act 1905 (Cth). The text of s 101(1A) of the Copyright Act, in particular s 101(1A)(a) and (c), is derived from statements of Gibbs J in Moorhouse.
Abstract:
Missoni is a luxury Italian knitwear brand that partnered with Target in September 2011, releasing a large, one-off, mass-market collection that ranged from apparel to homewares. The collaboration received extensive media coverage and was consequently extremely sought after. The online sales site crashed within hours of opening, while shelves were cleared in stores minutes after trading began. Within hours, more than 40,000 items from the collection were posted for sale online at greatly inflated prices. Evaluation of the case study revealed that sales of the Missoni collection increased following the collaboration and that the value of the publicity generated was estimated at US$100 million. The lack of available stock, despite the enormous hype created, reinforced Missoni's luxury image. Missoni was able to gain massive awareness of the brand despite not employing any of its own communication channels in the promotion of the collaboration. However, the co-branded collaboration was distinctively Missoni, potentially inviting comparison and confusion with the signature line. Nevertheless, this study shows that co-branding strategies can offer a viable opportunity for luxury brands to increase their market share while maintaining their market position.