989 results for Empirical processes
Abstract:
The demand for more efficient manufacturing processes has been increasing in recent years. Cold forging is presented as a possible solution, because it allows the production of parts with a good surface finish and good mechanical properties. Nevertheless, the design of cold forming sequences is highly empirical and based on the designer's experience. Computational modeling of each forming stage by the finite element method can make sequence design faster and more efficient, reducing the use of conventional "trial and error" methods. In this study, a commercial general-purpose finite element package, ANSYS, was applied to model a forming operation. Models were developed to simulate the ring compression test and a basic forming operation (upsetting) that appears in most cold forging sequences. The simulated upsetting operation is one stage of the automotive starter parts manufacturing process. Experiments were performed to obtain the stress-strain curve of the material, the material flow during the simulated stage, and the required forming force. These experiments provided results used as numerical model input data and for validation of the model results. The comparison between experiments and numerical results confirms the potential of the developed methodology for die-filling prediction.
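The stress-strain curve mentioned above is typically converted from engineering to true quantities before being supplied as finite-element material input. A minimal sketch of the standard conversion (the sample data points are hypothetical, not from the study; compression data is sometimes handled via ln(h0/h) instead):

```python
import math

def true_stress_strain(eng_strain, eng_stress):
    """Convert engineering stress/strain to true stress/strain,
    assuming uniform deformation and constant volume."""
    true_strain = [math.log(1.0 + e) for e in eng_strain]
    true_stress = [s * (1.0 + e) for s, e in zip(eng_stress, eng_strain)]
    return true_strain, true_stress

# Hypothetical measurement points (dimensionless strain, stress in MPa)
eng_strain = [0.00, 0.05, 0.10, 0.20]
eng_stress = [0.0, 210.0, 240.0, 265.0]

ts, ss = true_stress_strain(eng_strain, eng_stress)
```

The resulting true stress-strain pairs would then be entered into the FE software's material model table.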
Abstract:
Global challenges, complexity and continuous uncertainty demand the development of leadership approaches, employees and multi-organisation constellations. Current leadership theories do not sufficiently address the needs of complex business environments. First of all, before successful leadership models can be applied in practice, leadership needs to shift from the industrial age to the knowledge era. Many leadership models still view leadership solely through the perspective of linear process thinking. In addition, there is not enough knowledge or experience in applying these newer models in practice. Leadership theories continue to be based on the assumption that leaders possess, or have access to, all the relevant knowledge and capabilities to decide future directions without external advice. In many companies, however, the workforce consists of skilled professionals whose work and related interfaces are so challenging that leaders cannot grasp all the linked viewpoints and cross-impacts alone. One of the main objectives of this study is to understand how practice-based innovation processes can support participants in organisations, and their stakeholders, in confronting various environments. Another aim is to find effective ways of recognising and reacting to diverse contexts, so that companies and other stakeholders are better able to link to knowledge flows and shared value creation processes in advancing joint value for their customers. The main research question of this dissertation is therefore how to enhance leadership in complex environments. The dissertation can, on the whole, be characterised as a qualitative multiple-case study. The research questions and objectives were investigated through six studies published in international scientific journals. The main methods applied were interviews, action research and a survey.
The empirical focus was on Finnish companies, and the research questions were examined in various organisations at the top levels (leaders and managers) and bottom levels (employees), in the context of collaboration between organisations and cooperation between the case companies and their client organisations. The emphasis of the analysis, however, is on the internal and external aspects of organisations as they play out in practice-based innovation processes. The results of this study suggest that the Cynefin framework, complexity leadership theory and transformational leadership represent theoretical models applicable to developing leadership through practice-based innovation. In and of themselves, they all support confronting contemporary challenges, but an implementable method for organisations may be constructed by assimilating them into practice-based innovation processes. Recognising diverse environments, their various contexts, and their roles in the activities and collaboration of organisations and their interest groups is ever more important to achieving better interaction, in which a strategic or formal status may be bypassed. In innovation processes, it is not necessarily the leader who possesses the essential knowledge; thus, it is the role of leadership to offer methods and arenas where different actors may generate advances. Enabling and supporting continuous interaction and integrated knowledge flows is of crucial importance for the emergence of innovations in the activities of organisations and various forms of collaboration. The main contribution of this dissertation relates to applying these new conceptual models in practice. Empirical evidence on the relevance of different leadership roles in practice-based innovation processes in Finnish companies is another valuable contribution. Finally, the dissertation sheds light on the significance of combining complexity science with leadership and innovation theories in research.
Abstract:
Cloud computing is a practically relevant paradigm in computing today. Testing is one of the distinct areas where cloud computing can be applied. This study addressed the applicability of cloud computing for testing within organizational and strategic contexts, focusing on issues related to the adoption, use and effects of cloud-based testing. The study applied empirical research methods. The data was collected through interviews with practitioners from 30 organizations and was analysed using the grounded theory method. The research process consisted of four phases. The first phase studied the definitions and perceptions related to cloud-based testing. The second phase observed cloud-based testing in real-life practice. The third phase analysed quality in the context of cloud application development. The fourth phase studied the applicability of cloud computing in the gaming industry. The results showed that cloud computing is relevant and applicable for testing and application development, as well as for other areas, e.g., game development. The research identified the benefits, challenges, requirements and effects of cloud-based testing, and formulated a roadmap and strategy for adopting cloud-based testing. The study also explored quality issues in cloud application development. As a special case, the research included a study on the applicability of cloud computing in game development. The results can be used by companies to enhance their processes for managing cloud-based testing, evaluating practical cloud-based testing work and assessing the appropriateness of cloud-based testing for specific testing needs.
Abstract:
The main purpose of the present doctoral thesis is to investigate subjective experiences and cognitive processes in four different types of altered states of consciousness: naturally occurring dreaming, cognitively induced hypnosis, pharmacologically induced sedation, and pathological psychosis. Both empirical and theoretical research is carried out, resulting in four empirical and four theoretical studies. The thesis begins with a review of the main concepts used in consciousness research, the most influential philosophical and neurobiological theories of subjective experience, the classification of altered states of consciousness, and the main empirical methods used to study consciousness alterations. Next, findings of the original studies are discussed, as follows. Phenomenal consciousness is found to be dissociable from responsiveness, as subjective experiences do occur in unresponsive states, including anaesthetic-induced sedation and natural sleep, as demonstrated by post-awakening subjective reports. Two new tools for the content analysis of subjective experiences and dreams are presented, focusing on the diversity, complexity and dynamics of phenomenal consciousness. In addition, a new experimental paradigm of serial awakenings from non-rapid eye movement sleep is introduced, which enables more rapid sampling of dream reports than has been available in previous studies. It is also suggested that lucid dreaming can be studied using transcranial brain stimulation techniques and systematic analysis of pre-lucid dreaming. For blind judges, dreams of psychotic patients appear to be indistinguishable from waking mentation reports collected from the same patients, which indicates a close resemblance of these states of mind. However, despite phenomenological similarities, dreaming should not be treated as a uniform research model of psychotic or intact consciousness. 
Rather, there seem to be multiple routes by which different states of consciousness can be associated. For instance, seemingly identical time perception distortions in different alterations of consciousness may have diverse underlying causes. It is also shown that altered states do not necessarily exhibit impaired cognitive processing compared to a baseline waking state of consciousness: a case study of time perception in a hypnotic virtuoso indicates more consistent perceptual timing under hypnosis than in the waking state. The thesis ends with a brief discussion of the most promising new perspectives for the study of alterations of consciousness.
Abstract:
One of the greatest conundrums of contemporary science is the relation between consciousness and brain activity, and one of the specific questions is how neural activity can generate vivid subjective experiences. Studies focusing on visual consciousness have become essential in solving the empirical questions of consciousness. The main aim of this thesis is to clarify the relation between visual consciousness and the neural and electrophysiological processes of the brain. By applying electroencephalography and functional magnetic resonance imaging-guided transcranial magnetic stimulation (TMS), we investigated the links between conscious perception and attention, the temporal evolution of visual consciousness during stimulus processing, the causal roles of the primary visual cortex (V1), visual area 2 (V2) and the lateral occipital cortex (LO) in the generation of visual consciousness, and the methodological issues concerning the accuracy of targeting TMS to V1. The results showed that the first effects of visual consciousness on electrophysiological responses (about 140 ms after stimulus onset) appeared earlier than the effects of selective attention, and also in the unattended condition, suggesting that visual consciousness and selective attention are two independent phenomena with distinct underlying neural mechanisms. In addition, while it is well known that V1 is necessary for visual awareness, the results of the present thesis suggest that the abutting visual area V2 is also a prerequisite for conscious perception. In our studies, activation in V2 was necessary for the conscious perception of a change in contrast for a shorter period of time than for more detailed conscious perception. We also found that TMS over LO suppressed the conscious perception of object shape when TMS was delivered in two distinct time windows, the latter corresponding with the timing of the ERPs related to the conscious perception of coherent object shape.
The result supports the view that LO is crucial in the conscious perception of object coherence and is likely to be directly involved in the generation of visual consciousness. Furthermore, we found that visual sensations, or phosphenes, elicited by TMS of V1 were brighter than identically induced phosphenes arising from V2. These findings demonstrate that V1 contributes more to the generation of the sensation of brightness than V2 does. The results also suggest that top-down activation from V2 to V1 is probably associated with phosphene generation. The results of the methodological study imply that when a commonly used landmark (2 cm above the inion) is used in targeting TMS to V1, the TMS-induced electric field is likely to be highest in dorsal V2. When V1 was targeted according to individual retinotopic data, the electric field was highest in V1 in only half of the participants. This result suggests that if the objective is to study the role of V1 with TMS methodology, at least functional maps of V1 and V2 should be applied, together with a computational model of the TMS-induced electric field in V1 and V2. Finally, the results of this thesis imply that different features of attention contribute differently to visual consciousness; thus, theoretical models of the relationship between visual consciousness and attention should acknowledge these differences. Future studies should also explore the possibility that visual consciousness consists of several processing stages, each with its own distinct underlying neural mechanisms.
Abstract:
Entrepreneurial marketing is a newly established term, and more specific studies are needed to understand the concept fully. SMEs have entrepreneurial marketing elements more visibly present in their marketing and therefore provide fruitful insights for this research. SME marketing has gained more recognition in recent years, and in some cases innovative characteristics can be identified despite constraints such as a lack of certain resources. The purpose of this research is to study entrepreneurial marketing characteristics and SME processes in order to widen understanding and gain more insight into entrepreneurial marketing. In addition, the planning and implementation of entrepreneurial marketing processes is examined in order to cover SME marketing activities fully. The research was conducted as a qualitative study, and data gathering was based on a semi-structured interview survey involving nine company interviews. Multiple-case research was used to analyse the data so that focus and clarity could be maintained in an organised manner. The case companies were chosen from different business fields so that more variation and insights could be identified. The empirical results suggest that the two examined processes, networking and word-of-mouth communication, are very important for the case companies, which supports previous research. The entrepreneurial marketing characteristics varied, however: some were more visible and recognisable than others. Examining the processes more closely, the companies did not fully understand that networking or word-of-mouth marketing could be used as efficiently as other conventional marketing methods.
Abstract:
The purpose of this thesis is to find out how customer co-creation activities are managed in Finnish high-tech SMEs by understanding managers' views on relevant issues. According to theory, issues such as firm size, customer knowledge implementation, lead customers, the fuzzy front end of product/service development, and reluctance to engage in customer co-creation are some of the field's focal issues. The views of 145 Finnish SME managers on these issues were gathered as empirical evidence through an online questionnaire and analyzed with SPSS statistics software. The results show, firstly, that Finnish SME managers are aware of the issues associated with customer co-creation and are able to manage them actively. Additionally, managers performed well with regard to collaborating with lead customers and implemented customer knowledge evenly across the various stages of their new product and service development processes. Intellectual property rights emerged as an obstacle deterring managers from engaging in co-creation. The results suggest that in practice managers would do well to look for more opportunities to implement customer knowledge in the early and late stages of new product and service development, and to search actively for lead customers.
Abstract:
The thesis aims to build a coherent view and understanding of the innovation process and organizational technology adoption in Finnish bio-economy companies, with a focus on innovations of a disruptive nature. Disruptive innovations are exceptional; hence, in order to create generalizations and a unified view of the subject, the perspective also covers less radical innovations. Other interests of the thesis are how ideas are discovered and generated, and how the nature of the innovation and the size of the company affect technology adoption and the innovation process. The data was collected by interviewing six small and six large Finnish bio-economy companies. The results suggest that companies, regardless of size, consider innovation a core asset in competitive markets. Organizations want to be considered innovators and early adopters, yet these ambitions are limited by certain, mainly resource-based, factors. In addition, the industry, scalability, and Finland's geographical location when seeking funding pose certain challenges. The innovation process may be considered relatively similar whether the idea or technology stems from an internal or an external source, suggesting that the technology adoption process can in fact be linked to innovation process theories. Thus the thesis introduces a new theoretical model which, based on the results of the study and the theories of technology adoption and the innovation process, aims to characterize how ideas and technology from both external and internal sources develop into innovations. In large bio-economy companies the innovation process is most often similar to, or a modified version of, the stage-gate model, while small companies generally have less structured processes. Nevertheless, the more disruptive the innovation, the less it fits structured processes. This implies that disruptive innovation cannot be put into a single mould but is instead processed case by case.
Abstract:
Crystal properties, product quality and particle size are determined by the operating conditions of the crystallization process. Thus, in order to obtain the desired end products, the crystallization process should be effectively controlled on the basis of reliable kinetic information, which can be provided by powerful analytical tools such as Raman spectrometry and thermal analysis. The present research work studied various crystallization processes: reactive crystallization, anti-solvent precipitation and evaporation crystallization. The goal of the work was to understand more comprehensively the fundamentals, phenomena and applications of crystallization, and to establish proper methods to control particle size distribution, especially for three-phase gas-liquid-solid crystallization systems. As part of the solid-liquid equilibrium studies in this work, prediction of KCl solubility in the MgCl2-KCl-H2O system was studied theoretically. Additionally, a solubility prediction model based on the Pitzer thermodynamic model was investigated using solubility measurements of potassium dihydrogen phosphate in the presence of non-electrolyte organic substances in aqueous solutions. The prediction model helps to extend literature data and offers an easy and economical way to choose a solvent for anti-solvent precipitation. Using experimental and modern analytical methods, the precipitation kinetics and mass transfer in reactive crystallization of magnesium carbonate hydrates from magnesium hydroxide slurry and CO2 gas were systematically investigated. The results gave deeper insight into gas-liquid-solid interactions and the mechanisms of this heterogeneous crystallization process. The research approach developed can provide theoretical guidance and act as a useful reference to promote the development of gas-liquid reactive crystallization.
Gas-liquid mass transfer during absorption in the presence of solid particles in a stirred tank was investigated in order to understand how particles of different sizes interact with gas bubbles. Based on the obtained volumetric mass transfer coefficient values, it was found that the influence of small particles on gas-liquid mass transfer cannot be ignored, since there are interactions between bubbles and particles. Raman spectrometry was successfully applied to liquid and solids analysis in semi-batch anti-solvent precipitation and evaporation crystallization. Real-time information such as supersaturation, the formation of precipitates and the identification of crystal polymorphs could be obtained by Raman spectrometry. The solubility prediction models, the monitoring methods for precipitation and the empirical model for absorption developed in this study, together with the methodologies used, give valuable information on aspects of industrial crystallization. Furthermore, Raman analysis was seen to be a potential control method for various crystallization processes.
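The volumetric mass transfer coefficient mentioned above is commonly estimated from dynamic absorption data using the model dC/dt = kLa (C* - C), which integrates to a log-linear form. A minimal sketch with synthetic concentration readings (the data, units and saturation value are illustrative assumptions, not from the study):

```python
import math

def fit_kla(times, conc, c_star):
    """Estimate kLa by linear regression of ln(C* - C) on time:
    ln(C* - C(t)) = ln(C* - C0) - kLa * t."""
    y = [math.log(c_star - c) for c in conc]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(y) / n
    num = sum((t - t_mean) * (yi - y_mean) for t, yi in zip(times, y))
    den = sum((t - t_mean) ** 2 for t in times)
    return -num / den  # kLa, in 1/(time unit of `times`)

# Synthetic data generated from C(t) = C* (1 - exp(-kLa t)) with kLa = 0.02 1/s
times = [0, 30, 60, 90, 120]              # s
conc = [8.0 * (1 - math.exp(-0.02 * t)) for t in times]  # mg/L
kla = fit_kla(times, conc, c_star=8.0)
```

In practice the saturation concentration C* and the raw C(t) readings would come from the absorption experiment itself, and the regression would be run on the measured points.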
Abstract:
Software quality has become an important research subject, not only in the Information and Communication Technology spheres but also in other industries at large where software is applied. Software quality is not a happenstance; it is defined, planned and built into the software product throughout the Software Development Life Cycle. The research objective of this study is to investigate the roles of the human and organizational factors that influence software quality construction. The study employs Straussian grounded theory. The empirical data was collected from 13 software companies and includes 40 interviews. The results of the study suggest that tools, infrastructure and other resources have a positive impact on software quality, but the human factors involved in the software development processes determine the quality of the products developed. Development methods, on the other hand, were found to have little effect on software quality. The research suggests that software quality construction is an information-intensive process in which organizational structures, mode of operation and information flow within the company variably affect software quality. The results also suggest that software development managers influence the productivity of developers and the quality of the software products. Several challenges of software testing that affect software quality are also brought to light. The findings of this research are expected to benefit the academic community and software practitioners by providing insight into the issues pertaining to software quality construction undertakings.
Abstract:
The purpose of this thesis is to find out how an outbound logistics process can be improved by reducing unnecessary waste in a globally dispersed make-to-order (MTO) supply chain. The research problem was posed by a multinational corporation (MNC) that aims to find a solution for reducing unnecessary waste in its outbound logistics process. The focus is on customized products that are delivered via sea transportation. A theoretical framework for improving outbound logistics processes in a globally dispersed MTO supply chain was created based on business process management (BPM), Porter's value chain theory, value stream mapping and the current reality tree (CRT). The empirical research followed a constructive approach, owing to its ability to address a practical problem and improve existing practices. The data was collected from ten semi-structured interviews and three non-participant observations. By analysing the data and applying the theoretical framework, five types of waste were detected in the process, deriving from six root causes. A practical solution was constructed to reduce waste in the process by combining the existing literature with ideas arising from the empirical data. The results of this thesis suggest that an MNC with a globally dispersed MTO supply chain can improve its outbound logistics process by applying activities that enhance internal and external integration, collaboration and coordination, and increase the predictability of the process. This research has practical relevance both for the case company and for other MNCs with globally dispersed MTO supply chains that aim to improve their outbound logistics processes. It contributes to BPM and CRT research by providing evidence of their applicability in a new context.
Abstract:
With advances in information technology, economic and financial time-series data are increasingly available. However, when standard time-series techniques are used, this wealth of information comes with the problem of dimensionality. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis. This technique has grown increasingly popular in economics since the 1990s. Given the availability of data and computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, given the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique, such as VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help with impulse response analysis? Finally, can factor analysis be applied to random parameters? For example, are there only a small number of sources of the time instability of coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modelling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA.
This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the last chapter is to impose a factor structure on the time-varying parameters and to show that there is a small number of sources of this instability. The first article analyses the transmission of monetary policy in Canada using the factor-augmented vector autoregressive (FAVAR) model. Previous VAR-based studies have found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for properly identifying the transmission of monetary policy and helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse response functions for every indicator in the data set, producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately increases credit spreads, lowers the value of Treasury bills and causes a recession. These shocks have a significant effect on measures of real activity, price indices, leading indicators and financial indicators. Unlike other studies, our identification procedure for the structural shock does not require timing restrictions between financial and macroeconomic factors.
Moreover, it gives an interpretation of the factors without restricting their estimation. In the third article we study the relationship between the VARMA and factor representations of stochastic vector processes and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot simultaneously follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, combining these two parameter-reduction methods. The model is applied in two forecasting exercises using US and Canadian data from Boivin, Giannoni and Stevanovic (2010, 2009) respectively. The results show that the VARMA component helps to better forecast the major macroeconomic aggregates relative to standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors yields coherent and precise results on the effects and transmission of monetary policy in the United States. Unlike the FAVAR model employed in the earlier study, where 510 VAR coefficients had to be estimated, we produce similar results with only 84 parameters in the dynamic factor process. The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment using the structural FAVARMA model.
Within the theoretical framework of the financial accelerator developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component seems to capture the important dimensions of the cyclical fluctuations of the Canadian economy. Variance decomposition analysis reveals that this credit shock has a significant effect on various sectors of real activity, price indices, leading indicators and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, approximated here by the US market. Finally, given the identification procedure for the structural shocks, we find economically interpretable factors. The behaviour of economic agents and of the economic environment may vary over time (e.g., changes in monetary policy strategies, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of time variability in the coefficients is probably very small, and we produce the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach, proposed in Stevanovic (2010), is applied within a standard VAR model with random coefficients (TVP-VAR).
We find that a single factor explains most of the variability of the VAR coefficients, while the shock-volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out on data including the recent financial crisis. The procedure now suggests two factors, and the behaviour of the coefficients exhibits an important change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only five dynamic factors govern the time instability of nearly 700 coefficients.
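The Factor-TVP idea, that a large panel of time-varying coefficients is driven by very few common sources of variation, can be illustrated with a toy simulation: coefficient paths generated from one common random-walk factor, recovered by principal components. This is a minimal sketch of the intuition, not the article's estimation procedure; all names and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_coef = 200, 40          # time periods, number of "VAR coefficients"

# One common random-walk factor drives all coefficient paths (Factor-TVP idea)
f = np.cumsum(rng.normal(size=T))
loadings = rng.normal(size=n_coef)
coefs = np.outer(f, loadings) + 0.1 * rng.normal(size=(T, n_coef))

# Principal components on the demeaned panel of coefficient paths
X = coefs - coefs.mean(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)
share = s**2 / np.sum(s**2)
print(f"variance explained by first factor: {share[0]:.2%}")
```

As in the article's empirical finding for the TVP-VAR, the first factor accounts for the bulk of the coefficients' time variation, and the remaining components look like idiosyncratic noise.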
Resumo:
This article examines the processes of professional role clarification during the integration of a specialized nurse practitioner into primary-care teams in Quebec.
Resumo:
Female genital pain is a prevalent condition that can disrupt the psychosexual and relational well-being of affected women and their romantic partners. Despite the intimate context in which the pain can be elicited (i.e., during sexual intercourse), interpersonal correlates of genital pain and sexuality have not been widely studied in comparison to other psychosocial factors. This review describes several prevailing theoretical models explaining the role of the partner in female genital pain: the operant learning model, cognitive-behavioral and communal coping models, and intimacy models. The review includes a discussion of empirical research on the interpersonal and partner correlates of female genital pain and the impact of genital pain on partners’ psychosexual adjustment. Together, this research highlights a potential reciprocal interaction between both partners’ experiences of female genital pain. The direction of future theoretical, methodological, and clinical research is discussed with regard to the potential to enhance understanding of the highly interpersonal context of female genital pain.
Resumo:
In the decades since Schumpeter’s influential writings, economists have examined the role of innovation in particular industries at both the firm and the industry level. Researchers describe innovations as the main trigger of industry dynamics, while policy makers argue that research and education are directly linked to economic growth and welfare; research and education are therefore an important objective of public policy. Firms and public research are regarded as the main actors in the creation of new knowledge, which is ultimately brought to the market through innovations, and policy makers in turn support innovation. Both groups, policy makers and researchers, agree that innovation plays a central role, yet researchers still neglect the role that public policy plays in industrial dynamics. The main objective of this work is therefore to learn more about the interdependencies of innovation, policy and public research in industrial dynamics. The overarching research question of this dissertation is whether patterns of industry evolution, from evolution to co-evolution, can be analyzed on the basis of empirical studies of the role of innovation, policy and public research in industrial dynamics. The work starts with a hypothesis-based investigation of traditional approaches to industrial dynamics: a test of a basic assumption of the core models of industrial dynamics and an analysis of evolutionary patterns, using an industry driven by public policy as an example. It then moves to a more explorative approach, investigating co-evolutionary processes. The underlying research questions include the following: Do large firms have a size advantage attributable to cost spreading? Do firms that plan to grow have more innovations? What role does public policy play in the evolutionary patterns of an industry?
Are the same evolutionary patterns observable as those described in industry life cycle (ILC) theories? And is it possible to observe regional co-evolutionary processes of science, innovation and industry evolution? Based on two empirical contexts, the laser and the photovoltaic industry, this dissertation tries to answer these questions and combines an evolutionary with a co-evolutionary approach. The first chapter introduces the topic and the fields on which this dissertation builds. The second chapter provides a new test of the Cohen and Klepper (1996) model of cost spreading, which explains the relationship between innovation, firm size and R&D, using the example of the photovoltaic industry in Germany. First, it analyzes whether the cost-spreading mechanism explains size advantages in this industry; this relates to the assumption that the incentive to invest in R&D increases with ex-ante output. It further investigates whether firms that plan to grow engage in more innovative activity. The results indicate that cost spreading does explain size advantages in this industry and that growth plans lead to a higher level of innovative activity. Moreover, the role public policy plays in industry evolution has not been conclusively analyzed in the field of industrial dynamics. In the German case, the introduction of demand-inducing policy instruments stimulated market and industry growth. While this policy immediately accelerated market volume, its effect on industry evolution is more ambiguous. Chapter three therefore analyzes this relationship within a model of industry evolution in which demand-inducing policies are discussed as a possible trigger of development. The findings suggest that these instruments can have the same effect as a technical advance in fostering the growth of an industry and its shakeout.
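The cost-spreading logic behind the size-advantage hypothesis can be made concrete with a toy calculation: a process innovation costs a fixed R&D outlay and lowers unit cost by a fixed amount, so its payoff scales with the output over which it is applied. The numbers below are invented for illustration and are not drawn from the Cohen and Klepper (1996) model or the dissertation's data.

```python
# Toy illustration of cost spreading: the same innovation is worth more
# to a firm with larger ex-ante output, because the fixed R&D cost is
# spread over more units.
rd_cost = 100.0          # fixed cost of the R&D project (hypothetical)
unit_cost_saving = 0.5   # cost reduction per unit of output (hypothetical)

def net_return(output):
    """Net payoff of adopting the innovation for a firm with given output."""
    return unit_cost_saving * output - rd_cost

small_firm, large_firm = 150, 1000
print(net_return(small_firm))   # -25.0: the project does not pay off
print(net_return(large_firm))   # 400.0: profitable for the large firm
```

Under these assumptions only the larger firm finds the R&D project worthwhile, which is the mechanism the chapter tests against photovoltaic-industry data.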
The fourth chapter explores the regional co-evolution of firm population size, private-sector patenting and public research in the empirical context of German laser research and manufacturing over more than 40 years, from the emergence of the industry to the mid-2000s. The qualitative and quantitative evidence suggests a co-evolutionary process of mutual interdependence rather than a unidirectional effect of public research on private-sector activities. Chapter five concludes with a summary, the contributions of this work, its implications, and an outlook on possible further research.