Abstract:
Aim and Structure of the Thesis: In the first article, I focus on the context in which Homo Economicus was constructed, i.e., the conception of economic actors as fully rational, informed, egocentric, and profit-maximizing. I argue that the Homo Economicus theory was developed in a specific societal context with specific (partly tacit) values and norms. These norms have implicitly influenced the behavior of economic actors and have framed the interpretation of Homo Economicus. However, several factors have weakened this implicit influence of broader societal values and norms on economic actors. The result is an unbridled interpretation and application of the values and norms of Homo Economicus in the business environment, and perhaps also in the broader society. In the second article, I show that the morality of many economic actors relies on isomorphism, i.e., the attempt to fit into a group by adopting the moral norms surrounding them. Consequently, if the norms prevailing in a specific group or context (such as a particular region or industry) change, actors with an 'isomorphism morality' can be expected to adapt their ethical thinking and their behavior accordingly, for 'better' or for 'worse'. The article further describes the process through which corporations could emancipate themselves from the ethical norms prevailing in the broader society and thereby develop an institution with its own norms and values. These norms mainly rely on mainstream business theories that praise the economic actor's self-interest and neglect moral reasoning. Moreover, because of isomorphism morality, many economic actors have changed their perception of ethics and have abandoned the values prevailing in the broader society in order to adopt those of economic theory. Finally, isomorphism morality also implies that these economic actors will change their morality again if the institutional context changes. The third article highlights the role and responsibility of business scholars in promoting systematic reflection on, and self-critique of, the business system, and develops alternative models to fill the moral void of the business institution and address its inherent legitimacy crisis. Indeed, the current business institution relies on assumptions such as scientific neutrality and specialization, which are at least partly challenged by two factors. First, the self-fulfilling prophecy gives scholars an important (even if sometimes undesired) normative influence over practical life. Second, the increasing complexity of today's (socio-political) world, and the interactions between the different elements constituting our society, call into question the strong specialization of science. For instance, economic theories are not unrelated to psychology or sociology, and economic actors influence socio-political structures and processes, e.g., through lobbying (Dobbs, 2006; Rondinelli, 2002) or through marketing, which changes not only the way we consume but, more generally, tries to instill a specific lifestyle (Cova, 2004; M. K. Hogg & Michell, 1996; McCracken, 1988; Muniz & O'Guinn, 2001). Consequently, business scholars are key actors in shaping both tomorrow's economic world and its broader context. A greater awareness of this influence might be a first step toward an increased feeling of civic responsibility and accountability for the models and theories developed or taught in business schools.
Abstract:
We are currently witnessing a worldwide diffusion of Information and Communication Technologies (ICT), even though it proceeds at different paces across nations (and even across regions of a single country), thereby creating a so-called "digital divide" on top of the many inequalities already present. This computing and technological revolution brings about numerous changes in social relationships and enables many applications designed to simplify everyday life. Amine Bekkouche examines e-government as an important consequence of ICT, much like electronic commerce. He first presents a synthesis of the main concepts of e-government together with an overview of the situation worldwide. He then considers e-government from the perspective of emerging countries, in particular through the illustration of a representative developing country. He proposes concrete solutions that take the education sector as their starting point, so as to foster the computer literacy of society and thereby help reduce the digital divide. He subsequently extends these proposals to other domains and formulates recommendations that facilitate their implementation. Finally, he concludes with perspectives that could constitute avenues for future research and enable the elaboration of development projects, through the appropriation of ICT, to improve the condition of the administered and, more broadly, of the citizen.
Abstract:
Structure of the Thesis: This thesis consists of five sections. Section 1 starts with the problem definition and the presentation of the objectives of this thesis. Section 2 presents the theoretical foundations of venture financing and a review of the main theories developed on venture investing. It includes a taxonomy of contractual clauses relevant in venture contracting and the conflicts they address, and presents some general observations on these clauses. Section 3 presents research findings on the analysis of a European VC's deal flow and investment screening in relation to prevailing market conditions. Section 4 presents an empirical study of a European VC's investment process and the criteria it uses to make its investments. It reports empirical findings on these investment criteria over time, across business cycles, and across investment types, and links them to the VC's subsequent performance. Finally, Section 5 presents an empirical comparison of the legal contracts signed between European and United States venture capitalists and the companies they finance, highlighting contracting practices in Europe and the United States.
Abstract:
In this thesis we present the design of a systematic, integrated, computer-based approach for detecting potential disruptions from an industry perspective. Following the design science paradigm, we iteratively develop several multi-actor, multi-criteria artifacts dedicated to environment scanning. The contributions of this thesis are both theoretical and practical. We demonstrate the successful use of multi-criteria decision-making methods for technology foresight. Furthermore, we illustrate the design of our artifacts using build-and-evaluate loops supported by a field study of the Swiss mobile payment industry. To increase the relevance of this study, we systematically interview key Swiss experts for each design iteration. As a result, our research provides a realistic picture of the current situation in the Swiss mobile payment market and reveals previously undiscovered weak signals of future trends. Finally, we suggest a generic design process for environment scanning.
Abstract:
There is a lack of dedicated tools for business model design at a strategic level. Yet in today's economy, the ability to quickly reinvent a company's business model is essential to staying competitive. This research focused on identifying the functionalities that a computer-aided design (CAD) tool needs in order to support the design of business models in a strategic context. Using the design science research methodology, a series of techniques and prototypes were designed and evaluated to offer solutions to the problem. The work is a collection of articles that can be grouped into three parts. The first part establishes the context: how the Business Model Canvas (BMC) is used to design business models, and in which ways CAD can contribute to the design activity. The second part builds on this by proposing new techniques and tools that support the elicitation, evaluation (assessment), and evolution of business model designs with CAD. These include features such as multi-color tagging to easily connect elements, rules to validate the coherence of business models, and features adapted to the business model proficiency level of the tool's users. A new way to describe and visualize multiple versions of a business model, and thereby help treat the business model as a dynamic object, was also researched. The third part explores extensions to the Business Model Canvas: an intermediary model that supports IT alignment by connecting the business model to the enterprise architecture, and a business model pattern for privacy in a mobile environment that uses privacy as a key value proposition. The prototyped techniques and the propositions for using CAD tools in business model design should allow commercial CAD developers to create tools better suited to the needs of practitioners.
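To make the idea of coherence rules concrete, the following minimal sketch (ours, not one of the thesis prototypes; all class and field names are hypothetical) shows how a CAD tool might encode one such rule over a Business Model Canvas structure in Python:

    from dataclasses import dataclass, field

    # Hypothetical canvas structure: not the thesis's data model.
    @dataclass
    class Canvas:
        segments: set = field(default_factory=set)
        propositions: dict = field(default_factory=dict)  # proposition -> segments served
        channels: dict = field(default_factory=dict)      # channel -> segments reached

    def incoherent_segments(canvas):
        # Coherence rule: every customer segment should be served by at least
        # one value proposition AND reached by at least one channel.
        served = set().union(set(), *canvas.propositions.values())
        reached = set().union(set(), *canvas.channels.values())
        return canvas.segments - (served & reached)

    canvas = Canvas(
        segments={"SMEs", "freelancers"},
        propositions={"invoicing app": {"SMEs", "freelancers"}},
        channels={"app store": {"freelancers"}},
    )
    print(incoherent_segments(canvas))  # {'SMEs'}: served, but reached by no channel

A real tool would attach a catalogue of such rules to the canvas editor and report violations continuously as the model evolves.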
Abstract:
In my thesis I present the findings of a multiple-case study on the CSR approach of three multinational companies, applying Basu and Palazzo's (2008) CSR-character as a process model of sensemaking, Suchman's (1995) framework on legitimation strategies, and Habermas's (1996) concept of deliberative democracy. The theoretical framework is based on the assumption of a postnational constellation (Habermas, 2001), which sends multinational companies onto a process of sensemaking (Weick, 1995) with regard to their responsibilities in a globalizing world. The major reason is that mainstream CSR concepts are based on the assumption of a liberal market economy embedded in a nation state, which does not fit the changing conditions for the legitimation of corporate behavior in a globalizing world. For the purpose of this study, I primarily looked at two research questions: (i) How can the CSR approach of a multinational corporation be systematized empirically? (ii) What is the impact of the changing conditions of the postnational constellation on the CSR approach of the studied multinational corporations? For the analysis, I adopted a holistic approach (Patton, 1980), combining elements of deductive and inductive theory-building methodology (Eisenhardt, 1989b; Eisenhardt & Graebner, 2007; Glaser & Strauss, 1967; Van de Ven, 1992) with rigorous qualitative data analysis. Primary data was collected through 90 semi-structured interviews, conducted in two rounds with executives and managers in three multinational companies and their respective stakeholders. Raw data originating from interview tapes, field notes, and contact sheets was processed, stored, and managed using the software program QSR NVIVO 7. In the analysis, I applied qualitative methods to strengthen the interpretative part, as well as quantitative methods to identify dominating dimensions and patterns. I found three different coping behaviors that provide insights into the corporate mindset. The results suggest that multinational corporations increasingly turn toward relational approaches to CSR in order to achieve moral legitimacy in formalized dialogical exchanges with their stakeholders, since legitimacy can no longer be derived from a national framework alone. I also looked at the degree to which they have reacted to the postnational constellation by assuming former state duties, and at the underlying reasoning. The findings indicate that CSR approaches are becoming increasingly comprehensive through the integration of political strategies that reflect the growing (self-)perception of multinational companies as political actors. Based on the results, I developed a model that relates the different dimensions of corporate responsibility to the discussion on deliberative democracy, global governance, and social innovation, in order to provide guidance for multinational companies in a postnational world. With my thesis, I contribute to management research by (i) delivering a comprehensive critique of the mainstream CSR literature and (ii) filling a gap in thorough qualitative research on CSR in a globalizing world by using the CSR-character as an empirical device; I also contribute to organizational studies by (iii) further advancing the deliberative view of the firm proposed by Scherer and Palazzo (2008).
Abstract:
Introduction: This dissertation consists of three essays in equilibrium asset pricing. The first chapter studies the asset pricing implications of a general equilibrium model in which real investment is reversible at a cost. Firms face higher costs in contracting than in expanding their capital stock and decide to invest when their productive capital is scarce relative to the overall capital of the economy. Positive shocks to the capital of the firm increase the size of the firm and reduce the value of growth options. As a result, the firm is burdened with more unproductive capital and its value declines relative to its accumulated capital. The optimal consumption policy alters the optimal allocation of resources and affects firm value, generating mean-reverting dynamics for market-to-book (M/B) ratios. The model (1) captures the convergence of price-to-book ratios, negative for growth stocks and positive for value stocks (firm migration), (2) generates deviations from the classic CAPM in line with the cross-sectional variation in expected stock returns, and (3) generates a non-monotone relationship between Tobin's q and conditional volatility, consistent with the empirical evidence. The second chapter studies a standard portfolio-choice problem with transaction costs and mean reversion in expected returns. In the presence of transaction costs, no matter how small, arbitrage activity does not necessarily equalize all riskless rates of return. When two such rates follow stochastic processes, it is not optimal to immediately arbitrage away any discrepancy that arises between them. The reason is that immediate arbitrage would induce a definite expenditure on transaction costs, whereas, without arbitrage intervention, there is some, perhaps sufficient, probability that the two interest rates will come back together without any costs having been incurred. Hence, one can surmise that in equilibrium the financial market will permit the coexistence of two riskless rates that are not equal to each other. For analogous reasons, randomly fluctuating expected rates of return on risky assets will be allowed to differ even after correction for risk, leading to important violations of the Capital Asset Pricing Model. The combination of randomness in expected rates of return and proportional transaction costs is a serious blow to existing frictionless pricing models. Finally, in the last chapter I propose a two-country, two-good general equilibrium economy with uncertainty about the fundamentals' growth rates to study the joint behavior of equity volatilities and correlations at the business cycle frequency. I assume that dividend growth rates jump from one state to another, and that the countries' switches may be correlated. The model is solved in closed form and analytical expressions for stock prices are reported. When calibrated to empirical data for the United States and the United Kingdom, the results show that, given the existing degree of synchronization across these business cycles, the model captures the historical patterns of stock return volatilities quite well. Moreover, I can explain the time behavior of the correlation, but only under the assumption of a global business cycle.
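As a purely illustrative sketch (the notation is ours, not the dissertation's), regime-switching dividend dynamics of the kind used in the last chapter can be written for country i as

    \frac{dD_t^i}{D_t^i} = \mu_{s_t^i}\,dt + \sigma_i\,dW_t^i, \qquad s_t^i \in \{h, \ell\},

where s_t^i is a two-state Markov chain governing country i's dividend growth rate, and the two countries' chains may switch states together with positive probability. This joint switching is what produces business-cycle synchronization and, in turn, time variation in return correlations.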
Abstract:
The motivation for this research originated in the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications owing to their significantly lower cost than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control, and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200'000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption system-on-chip (SoC) architecture. Unlike in the CISC world, RISC processor architecture is a separate industry from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choice thanks to hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open-source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-per-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply systems based on vertically integrated architectures consisting of proprietary software and proprietary, mainly microprocessor-based hardware. They enjoy admirable profitability on a very narrow customer base thanks to strong technology-enabled customer lock-in and customers' high risk exposure, as their production depends on the fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation, subject to the competition between the incumbents, first through research on cost-competitiveness efforts in the captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of global software support for complex systems. Third, we investigate the views of the industry's actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and conclude with our assessment of the possible routes industrial automation could take, given the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each focusing on maintaining its own proprietary solutions. The rise of de facto standards like the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created the new markets of personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and the related control-point and business model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
This thesis sits at the frontier of research in development economics and international trade, and aims to integrate the contributions of economic geography. The first chapter examines trade creation and trade diversion effects within regional agreements between developing countries, combining a gravity approach with a non-parametric estimation of trade effects. This analysis confirms a non-monotonic trade effect for six regional agreements covering Africa, Latin America and Asia (AFTA, CAN, CACM, ECOWAS, MERCOSUR and SADC) over the period 1960-1996. The agreements signed in the 1990s (AFTA, CAN, MERCOSUR and SADC) appear to have improved the welfare of their members, though with a variable impact on the rest of the world, whereas the older agreements (ECOWAS and CACM) suggest that the trade and welfare effects shrink and eventually vanish as the members' number of years of participation increases. The second chapter asks how geography affects South-South trade. It innovates over classical estimation methods by deriving a trade equation from the Armington assumption and incorporating a transport cost function that accounts for the specific features of the UEMOA countries. The estimates yield convincing effects for landlockedness and infrastructure: two landlocked UEMOA countries trade 92% less than any two other countries; crossing a transit country within the UEMOA area raises transport costs by 6%; and paving all of the Union's inter-state roads would induce three times more intra-UEMOA trade. Chapter 3 examines the persistence of development gaps within regional agreements between developing countries. It shows that the differentiated geography of the Southern countries belonging to an agreement makes the agreement's impact asymmetric across its members. The analysis uses a stylized model of three countries, two of which have concluded a regional agreement. Simulation results show that a better infrastructure endowment allows a member of the regional agreement to attract a larger industrial share as transport costs within the agreement fall, leading to unequal development among the members. If the levels of domestic transport infrastructure are harmonized across the member countries of the integration agreement, their industrial shares can converge, to the detriment of the countries remaining outside the union. Chapter 4 turns to urban economics, studying how the interaction between increasing returns and transport costs determines the location of activities and workers within a country or region. The model developed reproduces a stylized fact observed within US metropolitan areas: over a long period (1850-1990), urban centers and their peripheries became increasingly specialized, while the population of urban centers relative to their peripheries first grew and then declined.
This result carries over to a development context with a core region and a peripheral region: as the regions' accessibility improves, they specialize, and the main region, initially larger (in terms of number of workers), eventually shrinks to the same size as the peripheral region.
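As an illustrative sketch of the second chapter's approach (the notation and functional form are ours, not necessarily the thesis's), an Armington-based gravity specification with landlockedness and transit terms could take the form

    \ln X_{ij} = \alpha + \beta_1 \ln Y_i + \beta_2 \ln Y_j - \beta_3 \ln \tau_{ij} + \varepsilon_{ij}, \qquad \ln \tau_{ij} = \delta_1 \ln d_{ij} + \delta_2 L_{ij} + \delta_3 T_{ij},

where X_{ij} is bilateral trade, Y_i and Y_j are the partners' incomes, \tau_{ij} is the bilateral transport cost, d_{ij} is distance, L_{ij} indicates that both partners are landlocked, and T_{ij} counts the transit countries crossed. On such a reading, the reported 92% lower trade between two landlocked UEMOA countries would correspond to a combined landlocked effect of about \ln(1 - 0.92) \approx -2.5 in the trade equation.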
Abstract:
In this thesis, we study the behavioural aspects of agents interacting in queueing systems, using simulation models and experimental methodologies. Each period, customers must choose a service provider. The objective is to analyse the impact of the customers' and providers' decisions on the formation of queues. In a first setting, we consider customers with a certain degree of risk aversion. Based on their perception of the average waiting time and of its variability, they form an estimate of the upper bound of the waiting time at each provider. Each period, they choose the provider with the lowest such estimate. Our results indicate that there is no monotonic relationship between the degree of risk aversion and overall performance: a population of customers with an intermediate degree of risk aversion generally incurs a higher average waiting time than a population of risk-neutral or strongly risk-averse agents. Next, we incorporate the providers' decisions by allowing them to adjust their service capacity based on their perception of the average arrival rate. The results show that the customers' behaviour and the providers' decisions exhibit strong path dependence. We also show that the providers' decisions make the weighted average waiting time converge to the market's reference waiting time. Finally, a laboratory experiment in which subjects played the role of a service provider allowed us to conclude that capacity installation and dismantling delays significantly affect performance and the subjects' decisions. In particular, a provider's decisions are influenced by its order backlog, its currently available service capacity, and the capacity adjustment decisions it has taken but not yet implemented. - Queuing is a fact of life that we witness daily. We have all had the experience of waiting in line for some reason, and we also know that it is an annoying situation. As the adage says, "time is money"; this is perhaps the best way of stating what queuing problems mean for customers. Human beings are not very tolerant, but they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service. Studies of queueing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions. The individual behaviour of the agents involved in queueing systems and their decision-making processes have received little attention. Although this work has been useful for improving the efficiency of many queueing systems and for designing new processes in social and physical systems, it has only provided a limited ability to explain the behaviour observed in many real queues. In this dissertation we depart from this traditional research by analysing how the agents involved in the system make decisions, instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010).
We focus on studying behavioural aspects of queueing systems and incorporate this still underdeveloped framework into the operations management field. In the first chapter of this thesis we provide a general introduction to the area, as well as an overview of the results. In Chapters 2 and 3, we use Cellular Automata (CA) to model service systems where captive interacting customers must decide each period which facility to join for service. They base this decision on their expectations of sojourn times. Each period, customers use new information (their most recent experience and that of their best-performing neighbour) to form expectations of the sojourn time at the different facilities. Customers update their expectations using an adaptive expectations process to combine their memory and their new information. We label "conservative" those customers who give more weight to their memory than to the new information. In contrast, when they give more weight to new information, we call them "reactive". In Chapter 2, we consider customers with different degrees of risk-aversion who take uncertainty into account. They choose which facility to join based on an estimated upper bound of the sojourn time, which they compute using their perceptions of the average sojourn time and the level of uncertainty. We assume the same exogenous service capacity for all facilities, which remains constant throughout. We first analyse the collective behaviour generated by the customers' decisions. We show that the system achieves low weighted average sojourn times when the collective behaviour results in neighbourhoods of customers loyal to a facility and the customers are approximately equally split among all facilities. The lowest weighted average sojourn time is achieved when exactly the same number of customers patronises each facility, implying that none of them wishes to switch facility. In this case, the system has achieved the Nash equilibrium. We show that there is a non-monotonic relationship between the degree of risk-aversion and system performance. Customers with an intermediate degree of risk-aversion typically incur higher sojourn times; in particular, they rarely achieve the Nash equilibrium. Risk-neutral customers have the highest probability of achieving the Nash equilibrium. Chapter 3 considers a service system similar to the previous one, but with risk-neutral customers, and relaxes the assumption of exogenous service rates. In this sense, we model a queueing system with endogenous service rates by enabling managers to adjust the service capacity of the facilities. We assume that managers do so based on their perceptions of the arrival rates, and we use the same principle of adaptive expectations to model these perceptions. We consider service systems in which the managers' decisions take time to be implemented. Managers are characterised by a profile determined by the speed at which they update their perceptions, the speed at which they take decisions, and how coherently they account for their previous decisions that are still to be implemented when taking their next decision. We find that the managers' decisions exhibit strong path dependence: owing to the initial conditions of the model, the facilities of managers with identical profiles can evolve completely differently. In some cases the system becomes "locked in" to a monopoly or duopoly situation.
The competition between managers causes the weighted average sojourn time of the system to converge to the exogenous benchmark value which they use to estimate their desired capacity. Concerning the managers' profile, we found that the more conservative a manager is regarding new information, the larger the market share his facility achieves. Additionally, the faster he takes decisions, the higher the probability that he achieves a monopoly position. In Chapter 4 we consider a one-server queueing system with non-captive customers. We carry out an experiment aimed at analysing the way human subjects, taking on the role of the manager, decide in a laboratory setting on the capacity of a service facility. We adapt the model proposed by van Ackere et al. (2010). This model relaxes the assumption of a captive market and allows current customers to decide whether or not to use the facility. Additionally, the facility also has potential customers who do not currently patronise it, but might consider doing so in the future. We identify three groups of subjects whose decisions cause similar behavioural patterns. These groups are labelled: gradual investors, lumpy investors, and random investors. Using an autocorrelation analysis of the subjects' decisions, we illustrate that these decisions are positively correlated with the decisions taken one period earlier. Subsequently, we formulate a heuristic to model the decision rule used by subjects in the laboratory. We found that this decision rule fits very well those subjects who gradually adjust capacity, but it does not capture the behaviour of the subjects in the other two groups. In Chapter 5 we summarise the results and provide suggestions for further work. Our main contribution is the use of simulation and experimental methodologies to explain the collective behaviour generated by customers' and managers' decisions in queueing systems, as well as the analysis of the individual behaviour of these agents. In this way, we differ from the typical queueing literature, which focuses on optimising performance measures and analysing equilibrium solutions. Our work can be seen as a first step towards understanding the interaction between customer behaviour and the capacity adjustment process in queueing systems. This framework is still in its early stages, and accordingly there is large potential for further work spanning several research topics. Interesting extensions include incorporating other characteristics of queueing systems which affect the customers' experience (e.g. balking, reneging and jockeying); providing customers and managers with additional information for their decisions (e.g. service price, quality, customers' profile); analysing different decision rules; and studying other characteristics which determine the profile of customers and managers.
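As a minimal sketch of the mechanics described above (function names, parameters and data are ours, not the thesis's), the adaptive-expectations update and the risk-averse facility choice can be written in Python as:

    # Adaptive expectations: blend memory with the newest observation.
    # A weight near 0 is "conservative" (memory dominates); near 1 is "reactive".
    def update_expectation(old_estimate, observed, weight):
        return (1 - weight) * old_estimate + weight * observed

    # Risk-averse choice: minimise an estimated upper bound of the sojourn time,
    # mean + k * dispersion, where k = 0 corresponds to a risk-neutral customer.
    def choose_facility(mean_est, disp_est, k):
        upper_bound = {f: mean_est[f] + k * disp_est[f] for f in mean_est}
        return min(upper_bound, key=upper_bound.get)

    mean_est = {"A": 4.0, "B": 5.0}   # perceived average sojourn times (invented)
    disp_est = {"A": 3.0, "B": 0.5}   # perceived variability (invented)
    print(choose_facility(mean_est, disp_est, k=0.0))  # A: risk-neutral picks the lower mean
    print(choose_facility(mean_est, disp_est, k=1.0))  # B: risk-averse avoids the variable facility

Iterating these two steps over a population of interacting customers is what generates the collective dynamics studied in Chapters 2 and 3.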
Abstract:
Highly diverse radiolarian faunas of latest Maastrichtian to early Eocene age have been recovered from the low-latitude realm in order to contribute to the clarification of radiolarian taxonomy, to construct a zonation based on a discrete sequence of co-existence intervals of species ranging from the late Paleocene to the early Eocene, and to describe a rich low-latitude latest Cretaceous to late Paleocene fauna. 225 samples of late Paleocene to early Eocene age have been collected from ODP Leg 171B, Hole 1051A (Blake Nose), DSDP Leg 43, Site 384 (Northwest Atlantic) and DSDP Leg 10, Sites 86, 94, 95 and 96. The sequences consist mainly of pelagic oozes and chalks, with some clay and ash layers. A new imaging technique is devised to perform both transmitted light microscopy and SEM imaging on individual radiolarian specimens (in particular on topotypic material), with SEM imaging preceding transmitted light imaging. Radiolarians are adhered to a cover slip (using nail varnish) which is secured to a stub using conductive levers. Specimens are then photographed in low vacuum (40-50 Pa; 0.5 mbar), which enables charge neutralization by ionized molecules of the chamber atmosphere. Gold coating is thus avoided, which subsequently allows transmitted light imaging to follow: the conductive levers are unscrewed and the cover slip is simply overturned and mounted with Canada balsam. In an attempt towards a post-Haeckelian classification, the initial spicule (Entactinaria), micro- or macrosphere (Spumellaria), and initial spicule and cephalis (Nassellaria) have been studied by slicing Entactinaria and Spumellaria, and by tilting Nassellaria in the SEM chamber. A new genus of the family Coccodiscidae is erected and Spongatractus HAECKEL is relocated to the subfamily Axopruninae. The biochronology has been carried out using the Unitary Association method (Guex 1977, 1991). A database recording the occurrences of 112 species has been used to establish a succession of 22 Unitary Associations. Each association is correlated to chronostratigraphy via calcareous microfossils previously studied by other authors. The 22 UAs have been united into seven Unitary Association Zones (UAZones) (JP10-JE4). The established zones make it possible to distinguish additional subdivisions within the existing zonation. The low-latitude Paleocene radiolarian zonation established by Sanfilippo and Nigrini (1998a) is incomplete owing to the lack of radiolarian-bearing early Paleocene sediments. In order to contribute to the study of the sparsely known low-latitude early Paleocene faunas, 80 samples were taken from the highly siliceous Guayaquil Formation (Ecuador). The sequence consists of black cherts, shales, siliceous limestones and volcanic ash layers. The carbonate content increases up-section. Age control is supplied by sporadic occurrences of silicified planktonic foraminifera casts. One Cretaceous zone and seven Paleocene zones have been identified. The existing zonation for the South Pacific can be applied to the early Paleocene to earliest late Paleocene sequence, although certain marker species have significantly shorter ranges (notably Buryella foremanae and B. granulata). Despite missing marker species in the late Paleocene, the faunal distribution correlates reasonably well with the low-latitude zonation. An assemblage highly abundant in Lithomelissa, Lophophaena and Cycladophora in the upper RP6 zone (correlated by the presence of Pterocodon poculum, Circodiscus circularis, Pterocodon? sp. aff. P. tenellus and Stylotrochus nitidus) shows a close affinity to contemporaneous faunas reported from Site 1121, Campbell Plateau (Hollis, 2002). Coupled with a high diatom abundance (notably Aulacodiscus spp. and Arachnoidiscus spp.), these faunas are interpreted as reflecting a period of enhanced biosiliceous productivity during the late Paleocene. The youngest sample is devoid of radiolarians, diatoms and sponge spicules, yet contains many pyritized infaunal benthic foraminifera akin to the midway-type fauna. The presence of this fauna suggests deposition in a neritic environment. This contrasts with the inferred bathyal slope depositional environment of the older Paleocene sediments and suggests a shoaling of the depositional environment, which may be related to a coeval major accretionary event.
THESIS SUMMARY (FOR THE GENERAL PUBLIC): Radiolarians constitute the most diverse and most widespread group of marine plankton in the fossil record. A rapid evolutionary rate and considerable geographic variation among populations make radiolarians an unequalled research tool for biostratigraphy and paleoceanography. Nevertheless, before they can be used as working tools, it is essential to establish a solid taxonomic basis. The study of radiolarians may involve several extraction, observation and imaging techniques, depending on the degree of diagenetic alteration of the specimens.
The initial skeleton, whether an initial spicule (Entactinaria), a micro- or macrosphere (Spumellaria), or an initial spicule and cephalis (Nassellaria), is the most stable element over the course of evolution and should form the foundation of the systematics. Samples from low-latitude cores of the Deep Sea Drilling Project and the Ocean Drilling Program were studied. New imaging and sectioning techniques were developed on opal-preserved radiolarian topotypes in order to study features of their initial skeleton that were not visible in the original illustrations. Among other things, this helps compare specimens recrystallized in quartz, from accreted terranes, with the opal holotypes of the literature. The distribution of the studied species provided biostratigraphic data that were compiled using the Unitary Association method (Guex 1977, 1991). This is a deterministic mathematical model designed to exploit the entire assemblage rather than being confined to individual marker taxa. A sequence of 22 Unitary Associations was established for the interval from the late Paleocene to the early Eocene. Each Unitary Association was correlated to the absolute time scale by means of calcareous microfossils. The 22 UAs were combined into seven Unitary Association Zones (JP10-JE4). These zones make it possible to insert additional subdivisions into the current zonation. Early to middle Paleocene radiolarians are rare at low latitudes. The best known sections are located at high latitudes (New Zealand), and a few scattered assemblages have previously been reported from California, Ecuador and Russia. A 190-metre siliceous sequence in the Guayaquil Formation (Ecuador), extending from the late Maastrichtian to the late Paleocene, yielded relatively well-preserved faunas. The study of these faunas brought to light the first complete low-latitude radiolarian sequence in the early Paleocene. Eight zones ranging from the latest Cretaceous to the late Paleocene could be applied, and the presence of planktonic foraminifera provided several chronological tie points. In the late Paleocene, a rich assemblage with abundant diatoms and radiolarians, showing striking faunal similarities to high-latitude assemblages from New Zealand, records an episode of enhanced biosiliceous productivity during this period. Given that the tip of the South American continent and Antarctica were closer together during the Paleocene, this phenomenon can be explained by the transport, along the west coast of South America, of nutrient-rich waters from the Antarctic Ocean. Following this episode, the radiolarian record is interrupted. This may be associated with regional tectonic events that reduced the relative water depth, making the environment more favourable to the benthic foraminifera that are abundantly present in the youngest sample of the sequence.
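As a toy illustration of one ingredient of the Unitary Association method (the data below are invented, and the full method of Guex (1977, 1991) additionally orders the associations stratigraphically and resolves contradictions, which this sketch omits), the candidate sets of species that could all have coexisted are the maximal cliques of the species co-occurrence graph:

    from itertools import combinations

    # Invented toy data: sample -> species observed in it.
    samples = {
        "bed1": {"A", "B"},
        "bed2": {"B", "C"},
        "bed3": {"B", "C", "D"},
    }

    species = set().union(*samples.values())
    # Two species co-occur if they are found together in at least one sample.
    edges = {frozenset(pair) for obs in samples.values()
             for pair in combinations(obs, 2)}

    def maximal_cliques(r, p, x, out):
        """Bron-Kerbosch enumeration of maximal cliques (no pivoting)."""
        if not p and not x:
            out.append(r)
        for v in sorted(p):
            neighbours = {u for u in species if frozenset((u, v)) in edges}
            maximal_cliques(r | {v}, p & neighbours, x & neighbours, out)
            p = p - {v}
            x = x | {v}

    cliques = []
    maximal_cliques(set(), set(species), set(), cliques)
    print(cliques)  # e.g. [{'A', 'B'}, {'B', 'C', 'D'}]: candidate coexistence sets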