Abstract:
This paper analyzes the profile of Spanish young innovative companies (YICs) and the determinants of innovation and imitation strategies. The results for an extensive sample of 2,221 Spanish firms studied during the period 2004–2010 show that YICs are found in all sectors, although they are more concentrated in high-tech sectors and, in particular, in knowledge-intensive services (KIS): three of every four YICs are involved in KIS. Our results highlight that financial and knowledge barriers strongly limit the capacity of young, small firms to innovate and to become YICs, whereas market barriers are not obstacles to becoming a YIC. Public funding, in particular from the European Union, makes it easier for a new firm to become a YIC. In addition, YICs are more likely to innovate than mature firms, although they are more susceptible to sectoral and territorial factors. YICs make more dynamic use of innovation and imitation strategies when they operate in high-tech industries and are based in science parks located close to universities. Keywords: innovation strategies, public innovation policies, barriers to innovation, multinomial probit model. JEL Codes: D01, D22, L60, L80, O31
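The multinomial probit model named in the keywords can be illustrated with a small simulation. The sketch below is not the paper's specification: the covariates (an intercept, a financial-barrier index, and an EU-funding dummy), the error correlation, and all coefficient values are hypothetical, chosen only to show how correlated latent utilities translate into strategy-choice probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coefficients for two strategies measured against a
# "neither" base category (utility normalized to 0). Covariates:
# intercept, financial-barrier index, EU-funding dummy.
beta_innovate = np.array([0.8, -0.5, 0.3])
beta_imitate = np.array([0.4, -0.2, 0.1])

def choice_probabilities(x, n_draws=100_000):
    """Simulate multinomial-probit choice probabilities for covariates x:
    draw correlated normal errors for the latent utilities and count how
    often each alternative comes out on top."""
    # assumed error correlation between the two non-base alternatives
    L = np.linalg.cholesky(np.array([[1.0, 0.5], [0.5, 1.0]]))
    eps = rng.standard_normal((n_draws, 2)) @ L.T
    u = np.column_stack([
        np.zeros(n_draws),                 # "neither" (base)
        beta_innovate @ x + eps[:, 0],     # "innovate"
        beta_imitate @ x + eps[:, 1],      # "imitate"
    ])
    counts = np.bincount(u.argmax(axis=1), minlength=3)
    return dict(zip(["neither", "innovate", "imitate"], counts / n_draws))

# e.g. a firm facing a low financial-barrier index that holds EU funding
probs = choice_probabilities(np.array([1.0, 0.2, 1.0]))
print(probs)
```

Because the multinomial probit has no closed-form choice probabilities, simulation of this kind is also how such models are estimated in practice (via simulated maximum likelihood).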
Abstract:
Geophysical techniques can help to bridge the inherent gap in spatial resolution and range of coverage that plagues classical hydrological methods. This has led to the emergence of the new and rapidly growing field of hydrogeophysics. Given the differing sensitivities of various geophysical techniques to hydrologically relevant parameters and their inherent trade-off between resolution and range, the fundamental usefulness of multi-method hydrogeophysical surveys for reducing uncertainties in data analysis and interpretation is widely accepted. A major challenge arising from such endeavors is the quantitative integration of the resulting vast and diverse database in order to obtain a unified model of the probed subsurface region that is internally consistent with all available data. To address this problem, we have developed a strategy for hydrogeophysical data integration based on Monte-Carlo-type conditional stochastic simulation that we consider to be particularly suitable for local-scale studies characterized by high-resolution and high-quality datasets. Monte-Carlo-based optimization techniques are flexible and versatile, can account for a wide variety of data and constraints of differing resolution and hardness, and thus have the potential to provide, in a geostatistical sense, highly detailed and realistic models of the pertinent target parameter distributions. Compared to more conventional approaches of this kind, our approach provides significant advancements in the way that the larger-scale deterministic information resolved by the hydrogeophysical data can be accounted for, which represents an inherently problematic, and as yet unresolved, aspect of Monte-Carlo-type conditional simulation techniques. We present the results of applying our algorithm to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure.
Our procedure is first tested on pertinent synthetic data and then applied to corresponding field data collected at the Boise Hydrogeophysical Research Site near Boise, Idaho, USA.
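The conditional stochastic simulation described here can be illustrated in miniature. The sketch below draws Gaussian realizations of a 1D porosity profile conditioned on a few "log" observations, using an exponential covariance model; the grid, variogram parameters, and porosity values are all hypothetical, and the authors' actual algorithm handles far richer constraints (e.g., the crosshole georadar tomograms).

```python
import numpy as np

rng = np.random.default_rng(1)

def cov(h, sill=1.0, corr_len=5.0):
    """Exponential covariance model (assumed variogram parameters)."""
    return sill * np.exp(-np.abs(h) / corr_len)

# Simulation grid and conditioning "porosity log" data (hypothetical, in %)
x_grid = np.arange(0.0, 20.0, 0.5)
x_obs = np.array([2.0, 8.0, 15.0])
z_obs = np.array([22.0, 18.0, 25.0])
mean = 20.0

# Covariances: obs-obs, grid-obs, grid-grid
C_oo = cov(x_obs[:, None] - x_obs[None, :])
C_go = cov(x_grid[:, None] - x_obs[None, :])
C_gg = cov(x_grid[:, None] - x_grid[None, :])

# Conditional (simple-kriging) mean and covariance
W = np.linalg.solve(C_oo, C_go.T).T          # kriging weights
mu_c = mean + W @ (z_obs - mean)
C_c = C_gg - W @ C_go.T

# Draw conditional realizations; each honors the data at the log locations
# (a tiny diagonal jitter keeps the Cholesky factorization stable)
L = np.linalg.cholesky(C_c + 1e-8 * np.eye(len(x_grid)))
realizations = mu_c + (L @ rng.standard_normal((len(x_grid), 4))).T
```

Each row of `realizations` passes exactly through the conditioning values while varying freely, within the assumed covariance structure, away from them; an ensemble of such draws is what allows uncertainty in the porosity field to be quantified.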
Abstract:
The aim of this study was to investigate the effect of combined pressure/temperature treatments (200, 400 and 600 MPa, at 20 and 40 °C) on key physical and chemical characteristics of white cabbage (Brassica oleracea L. var. capitata alba). Thermal treatment (blanching) was also investigated and compared with high-pressure processing (HPP). HPP at 400 MPa and 20–40 °C caused significantly larger colour changes compared to any other pressure or thermal treatment. All pressure treatments induced a softening effect, whereas blanching did not significantly alter texture. Both blanching and pressure treatments resulted in a reduction in the levels of ascorbic acid, an effect that was less pronounced for blanching and for HPP at 600 MPa and 20–40 °C. HPP at 600 MPa resulted in significantly higher total phenol content, total antioxidant capacity and total isothiocyanate content compared to blanching. In summary, the colour and texture of white cabbage were better preserved by blanching. However, HPP at 600 MPa resulted in significantly higher levels of phytochemical compounds. The results of this study suggest that HPP may represent an attractive technology for processing vegetable-based food products that better maintains important aspects related to the content of health-promoting compounds. This may be of particular relevance to the food industry sector involved in the development of convenient novel food products with excellent functional properties.
Abstract:
Plasma catecholamines provide a reliable biomarker of sympathetic activity. The low circulating concentrations of catecholamines and analytical interferences require tedious sample preparation and long chromatographic runs to ensure their accurate quantification by HPLC with electrochemical detection. Published or commercially available methods relying on solid-phase extraction technology lack sensitivity or require derivatization of catecholamines with hazardous reagents prior to tandem mass spectrometry (MS) analysis. Here, we manufactured a novel 96-well microplate device specifically designed to extract plasma catecholamines prior to their quantification by a new and highly sensitive ultraperformance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method. Processing time, which included sample purification on activated aluminum oxide and elution, is less than 1 h per 96-well microplate. The UPLC-MS/MS analysis run time is 2.0 min per sample. This UPLC-MS/MS method does not require a derivatization step, reduces the turnaround time tenfold compared to conventional methods used for routine application, and allows catecholamine quantification in reduced plasma sample volumes (50-250 μL, e.g., from children and mice).
Abstract:
Among the variety of road users and vehicle types that travel on U.S. public roadways, slow moving vehicles (SMVs) present unique safety and operations issues. SMVs include vehicles that cannot maintain a speed of 25 mph, such as large farm equipment, construction vehicles, or horse-drawn buggies. Though the number of crashes involving SMVs is relatively small, SMV crashes tend to be severe. Additionally, SMVs can be encountered regularly on non-Interstate/non-expressway public roadways, but motorists may not be accustomed to these vehicles. This project was designed to improve transportation safety for SMVs on Iowa's public roadway system. This report includes a literature review that summarizes SMV statistics and laws across the United States, a crash study based on three years of Iowa SMV crash data, and recommendations from the SMV community.
Abstract:
The capabilities of a high-resolution (HR), accurate-mass spectrometer (Exactive-MS) operating in full-scan MS mode were investigated for the quantitative LC/MS analysis of drugs in patients' plasma samples. A mass resolution of 50,000 (FWHM) at m/z 200 and a mass extraction window of 5 ppm around the theoretical m/z of each analyte were used to construct chromatograms for quantitation. The quantitative performance of the Exactive-MS was compared with that of a triple quadrupole mass spectrometer (TQ-MS), TSQ Quantum Discovery or Quantum Ultra, operating in the conventional selected reaction monitoring (SRM) mode. The study covered 17 therapeutic drugs: 8 antifungal agents (anidulafungin, caspofungin, fluconazole, itraconazole, hydroxyitraconazole, posaconazole, voriconazole and voriconazole-N-oxide), 4 immunosuppressants (ciclosporine, everolimus, sirolimus and tacrolimus) and 5 protein kinase inhibitors (dasatinib, imatinib, nilotinib, sorafenib and sunitinib). The quantitative results obtained with HR-MS acquisition show detection specificity, assay precision, accuracy, linearity and sensitivity comparable to SRM acquisition. Importantly, HR-MS offers several benefits over TQ-MS technology: no SRM optimization, time saving when transferring an analysis from one MS to another, more complete information about what is in the samples, and easier troubleshooting. Our work demonstrates that U/HPLC coupled to Exactive HR-MS delivers results comparable to TQ-MS in routine quantitative drug analyses. Considering the advantages of HR-MS, these results suggest that, in the near future, there should be a shift in how routine quantitative analyses of small molecules, particularly therapeutic drugs, are performed.
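The 5 ppm mass extraction window is simple to compute: the absolute m/z tolerance scales with the theoretical m/z of the analyte. A minimal sketch, using an illustrative (not compound-specific) m/z value:

```python
def ppm_window(mz_theoretical, ppm=5.0):
    """Return the (low, high) m/z bounds of a ±ppm extraction window."""
    delta = mz_theoretical * ppm / 1e6
    return mz_theoretical - delta, mz_theoretical + delta

def extract_chromatogram(scans, mz_theoretical, ppm=5.0):
    """Build an extracted-ion chromatogram from full-scan HR-MS data.
    `scans` is a list of (retention_time, [(mz, intensity), ...]) tuples;
    only centroids inside the ppm window contribute to each time point."""
    lo, hi = ppm_window(mz_theoretical, ppm)
    return [(rt, sum(i for mz, i in peaks if lo <= mz <= hi))
            for rt, peaks in scans]

# Illustrative m/z only; a real assay would use each analyte's
# theoretical [M+H]+ m/z here.
lo, hi = ppm_window(306.1, 5.0)
print(f"window: {lo:.5f}-{hi:.5f} m/z")
```

This narrow, mass-proportional window is what gives full-scan HR-MS quantitation a selectivity comparable to SRM without per-compound transition optimization.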
Abstract:
This Phase I report describes a preliminary evaluation of a new compaction monitoring system developed by Caterpillar, Inc. (CAT), for use as a quality control and quality assurance (QC/QA) tool during earthwork construction operations. The CAT compaction monitoring system consists of an instrumented roller with sensors that monitor machine power output in response to changes in soil-machine interaction, fitted with a global positioning system (GPS) to track roller location in real time. Three pilot tests were conducted using CAT's compaction monitoring technology. Two of the sites were located in Peoria, Illinois, at the Caterpillar facilities. The third project was an actual earthwork grading project in West Des Moines, Iowa. Typical construction operations for all tests included the following steps: (1) aerate/till existing soil; (2) moisture condition soil with a water truck (if too dry); (3) remix; (4) blade to a level surface; and (5) compact soil using the CAT CP-533E roller instrumented with the compaction monitoring sensors and display screen. Test strips varied in loose lift thickness, water content, and length. The results of the study show that it is possible to evaluate soil compaction with relatively good accuracy using machine energy as an indicator, with the advantage of 100% coverage and results in real time. Additional field trials are necessary, however, to expand the range of correlations to other soil types, different roller configurations, roller speeds, lift thicknesses, and water contents. Further, with increased use of this technology, new QC/QA guidelines grounded in statistical analysis will need to be developed. Results from Phase I revealed that the CAT compaction monitoring method holds considerable promise as a QC/QA tool but that additional testing is necessary to prove its validity under a wide range of field conditions.
The Phase II work plan involves establishing a Technical Advisory Committee, developing a better understanding of the algorithms used, performing further testing in a controlled environment, testing on project sites in the Midwest, and developing QC/QA procedures.
Abstract:
Of the approximately 25,000 bridges in Iowa, 28% are classified as structurally deficient, functionally obsolete, or both. The state of Iowa thus follows the national trend of an aging infrastructure in dire need of repair or replacement with a relatively limited funding base. Therefore, there is a need to develop new materials with properties that may lead to longer life spans and reduced life-cycle costs. In addition, new methods for determining the condition of structures are needed to monitor the structures effectively and identify when the useful life of the structure has expired or other maintenance is needed. High-performance steel (HPS) has emerged as a material with enhanced weldability, weathering capabilities, and fracture toughness compared to conventional structural steels. In 2004, the Iowa Department of Transportation opened Iowa's first HPS girder bridge, the East 12th Street Bridge over I-235 in Des Moines, Iowa. The objective of this project was to evaluate HPS as a viable option for use in Iowa bridges with a continuous structural health monitoring (SHM) system. The scope of the project included documenting the construction of the East 12th Street Bridge and concurrently developing a remote, continuous SHM system using fiber-optic sensing technology to evaluate the structural performance of the bridge. The SHM system included bridge evaluation parameters, similar to design parameters used by bridge engineers, for evaluating the structure. Through the successful completion of this project, a baseline of bridge performance was established that can be used for continued long-term monitoring of the structure. In general, the structural performance of the HPS bridge exceeded the design parameters and is performing well. Although some problems were encountered with the SHM system, the system functions well and recommendations for improving the system have been made.
Abstract:
The current means and methods of verifying that high-strength bolts have been properly tightened are laborious and time-consuming. In some cases, the techniques require special equipment and, in other cases, the verification itself may be somewhat subjective. While some commercially available verification techniques do exist, these options still have limitations and can be costly. The main objectives of this project were to explore high-strength bolt-tightening and verification techniques and to investigate the feasibility of developing and implementing new alternatives. A literature search and a survey of state departments of transportation (DOTs) were conducted to collect information on various bolt-tightening techniques, providing an understanding of both available and under-development techniques. During the literature review, the requirements for materials, inspection, and installation methods outlined in the Research Council on Structural Connections specification were also reviewed and summarized. To guide the search for new alternatives and technology development, a working group meeting was held at the Iowa State University Institute for Transportation on October 12, 2015. During the meeting, topics central to the research were discussed with Iowa DOT engineers and other professionals with relevant experience.
Abstract:
Summary This dissertation explores how stakeholder dialogue influences corporate processes, and speculates about the potential of this phenomenon - particularly with actors, like non-governmental organizations (NGOs) and other representatives of civil society, which have received growing attention against a backdrop of increasing globalisation and which have often been cast in an adversarial light by firms - as a source of learning and a spark for innovation in the firm. The study is set within the context of the introduction of genetically-modified organisms (GMOs) in Europe. Its significance lies in the fact that scientific developments and new technologies are being generated at an unprecedented rate in an era where civil society is becoming more informed, more reflexive, and more active in facilitating or blocking such new developments, which could have the potential to trigger widespread changes in economies, attitudes, and lifestyles, and to address global problems like poverty, hunger, climate change, and environmental degradation. In the 1990s, companies using biotechnology to develop and offer novel products began to experience increasing pressure from civil society to disclose information about the risks associated with the use of biotechnology and GMOs, in particular. Although no harmful effects for humans or the environment have been factually demonstrated even to date (2008), this technology remains highly contested and its introduction in Europe catalysed major companies to invest significant financial and human resources in stakeholder dialogue. A relatively new phenomenon at the time, with little theoretical backing, dialogue was seen to reflect a move towards greater engagement with stakeholders, commonly defined as those "individuals or groups with which business interacts who have a 'stake', or vested interest in the firm" (Carroll, 1993:22) with whom firms are seen to be inextricably embedded (Andriof & Waddock, 2002).
Regarding the organisation of this dissertation, Chapter 1 (Introduction) describes the context of the study and elaborates its significance for academics and business practitioners as an empirical work embedded in a sector at the heart of the debate on corporate social responsibility (CSR). Chapter 2 (Literature Review) traces the roots and evolution of CSR, drawing on Stakeholder Theory, Institutional Theory, Resource Dependence Theory, and Organisational Learning to establish what has already been developed in the literature regarding the stakeholder concept, motivations for engagement with stakeholders, the corporate response to external constituencies, and outcomes for the firm in terms of organisational learning and change. I used this review of the literature to guide my inquiry and to develop the key constructs through which I viewed the empirical data that was gathered. In this respect, concepts related to how the firm views itself (as a victim, follower, leader), how stakeholders are viewed (as a source of pressure and/or threat; as an asset: current and future), corporate responses (in the form of buffering, bridging, boundary redefinition), and types of organisational learning (single-loop, double-loop, triple-loop) and change (first order, second order, third order) were particularly important in building the key constructs of the conceptual model that emerged from the analysis of the data. Chapter 3 (Methodology) describes the methodology that was used to conduct the study, affirms the appropriateness of the case study method in addressing the research question, and describes the procedures for collecting and analysing the data. Data collection took place in two phases, extending from August 1999 to October 2000 and from May to December 2001, which functioned as 'snapshots' in time of the three companies under study.
The data was systematically analysed and coded using ATLAS/ti, a qualitative data analysis tool, which enabled me to sort, organise, and reduce the data into a manageable form. Chapter 4 (Data Analysis) contains the three cases that were developed (anonymised as Pioneer, Helvetica, and Viking). Each case is presented in its entirety (constituting a 'within-case' analysis), followed by a 'cross-case' analysis, backed up by extensive verbatim evidence. Chapter 5 presents the research findings, outlines the study's limitations, describes managerial implications, and offers suggestions for where more research could elaborate the conceptual model developed through this study, as well as suggestions for additional research in areas where managerial implications were outlined. References and Appendices are included at the end. This dissertation results in the construction and description of a conceptual model, grounded in the empirical data and tied to existing literature, which portrays a set of elements and relationships deemed important for understanding the impact of stakeholder engagement for firms in terms of organisational learning and change. This model suggests that corporate perceptions about the nature of stakeholders influence the perceived value of stakeholder contributions. When stakeholders are primarily viewed as a source of pressure or threat, firms tend to adopt a reactive/defensive posture in an effort to manage stakeholders and protect the firm from sources of outside pressure, behaviour consistent with Resource Dependence Theory, which suggests that firms try to gain control over external threats by focussing on the relevant stakeholders on whom they depend for critical resources, and try to reverse the control potentially exerted by external constituencies by trying to influence and manipulate these valuable stakeholders.
In situations where stakeholders are viewed as a current strategic asset, firms tend to adopt a proactive/offensive posture in an effort to tap stakeholder contributions and connect the organisation to its environment, behaviour consistent with Institutional Theory, which suggests that firms try to ensure the continuing license to operate by internalising external expectations. In instances where stakeholders are viewed as a source of future value, firms tend to adopt an interactive/innovative posture in an effort to redefine or widen the embedded system and bring stakeholders into systems of innovation and feedback, behaviour consistent with the literature on Organisational Learning, which suggests that firms can learn how to optimise their performance as they develop systems and structures that are more adaptable and responsive to change. The conceptual model moreover suggests that the perceived value of stakeholder contribution drives corporate aims for engagement, which can be usefully categorised as dialogue intentions spanning a continuum running from low-level to high-level to very-high-level. This study suggests that activities aimed at disarming critical stakeholders ('manipulation'), providing guidance and correcting misinformation ('education'), being transparent about corporate activities and policies ('information'), alleviating stakeholder concerns ('placation'), and accessing stakeholder opinion ('consultation') represent low-level dialogue intentions and are experienced by stakeholders as asymmetrical, persuasive, compliance-gaining activities that are not in line with 'true' dialogue. This study also finds evidence that activities aimed at redistributing power ('partnership'), involving stakeholders in internal corporate processes ('participation'), and demonstrating corporate responsibility ('stewardship') reflect high-level dialogue intentions.
This study additionally finds evidence that building and sustaining high-quality, trusted relationships which can meaningfully influence organisational policies inclines a firm towards the type of interactive, proactive processes that underpin the development of sustainable corporate strategies. Dialogue intentions are related to the type of corporate response: low-level intentions can lead to buffering strategies; high-level intentions can underpin bridging strategies; very-high-level intentions can incline a firm towards boundary redefinition. The nature of the corporate response (which encapsulates a firm's posture towards stakeholders, demonstrated by the level of dialogue intention and the firm's strategy for dealing with stakeholders) favours the type of learning and change experienced by the organisation. This study indicates that buffering strategies, where the firm attempts to protect itself against external influences and carry out its existing strategy, typically lead to single-loop learning, whereby the firm learns how to perform better within its existing paradigm and, at most, improves the performance of the established system - an outcome associated with first-order change. Bridging responses, where the firm adapts organisational activities to meet external expectations, typically lead a firm to acquire new behavioural capacities characteristic of double-loop learning, whereby insights and understanding are uncovered that are fundamentally different from existing knowledge and where stakeholders are brought into problem-solving conversations that enable them to influence corporate decision-making to address shortcomings in the system - an outcome associated with second-order change.
Boundary redefinition suggests that the firm engages in triple-loop learning, where the firm changes relations with stakeholders in profound ways, considers problems from a whole-system perspective, examines the deep structures that sustain the system, and produces innovation to address chronic problems and develop new opportunities - an outcome associated with third-order change. This study supports earlier theoretical and empirical studies (e.g., Weick's (1979, 1985) work on self-enactment; Maitlis & Lawrence's (2007), Maitlis' (2005), and Weick et al.'s (2005) work on sensegiving and sensemaking in organisations; Brickson's (2005, 2007) and Scott & Lane's (2000) work on organisational identity orientation), which indicate that corporate self-perception is a key underlying factor driving the dynamics of organisational learning and change. Such theorising has important implications for managerial practice; namely, that a company which perceives itself as a 'victim' may be highly inclined to view stakeholders as a source of negative influence, and would therefore be potentially unable to benefit from the positive influence of engagement. Such a self-perception can blind the firm from seeing stakeholders in a more positive, contributing light, which suggests that such firms may not be inclined to embrace external sources of innovation and learning, as they are focussed on protecting the firm against disturbing environmental influences (through buffering), and remain more likely to perform better within an existing paradigm (single-loop learning). By contrast, a company that perceives itself as a 'leader' may be highly inclined to view stakeholders as a source of positive influence.
On the downside, such a firm might have difficulty distinguishing when stakeholder contributions are less pertinent, as it is deliberately more open to elements in its operating environment (including stakeholders) as potential sources of learning and change, and is oriented towards creating space for fundamental change (through boundary redefinition), opening issues to entirely new ways of thinking and addressing issues from a whole-system perspective. A significant implication of this study is that potentially only those companies that see themselves as leaders are ultimately able to tap the innovation potential of stakeholder dialogue.
Abstract:
The rotational speed of high-speed electric machines exceeds 15,000 rpm. These machines are compact in size relative to their power rating. As a consequence, the heat fluxes are high and the adequacy of cooling becomes an important design criterion. In high-speed machines, the air gap between the stator and rotor is a narrow flow channel. The cooling air is produced with a fan and the flow is then directed to the air gap. The flow in the gap does not provide sufficient cooling for the stator end windings, and therefore additional cooling is required. This study investigates the heat transfer and flow fields around the coil end windings when cooling jets are used. As a result, an innovative new assembly is introduced for the cooling jets, with the benefits of fewer hot spots, a lower pressure drop, and hence a lower power requirement for the cooling fan. The information gained can also be applied to improve the cooling of electric machines through geometry modifications. The objective of the research is to determine the locations of the hot spots and to find the induced pressure losses with different jet alternatives. Several possibilities for arranging the extra cooling are considered. In the suggested approach, cooling is provided by a row of air jets. The air jets have three main tasks: to cool the coils effectively by direct impingement jets, to increase and cool down the flow that enters the coil end space through the air gap, and to ensure the correct distribution of the flow by forming an air curtain with additional jets. One important aim of this study is to arrange the cooling jets in such a manner that hot spots can be avoided to a wide extent. This enables higher power density in high-speed motors. This cooling system can also be applied to ordinary electric machines when efficient cooling is needed. The numerical calculations have been performed using commercial Computational Fluid Dynamics software.
Two geometries have been generated: cylindrical for the studied machine and Cartesian for the experimental model. The main parameters include the positions, arrangements and number of jets, the jet diameters, and the jet velocities. The investigated cases have been tested with two widely used turbulence models and a computational grid of over 500,000 cells. The experimental tests have been made using a simplified model of the end-winding space with cooling jets. In the experiments, an emphasis has been placed on flow visualisation. The computational analysis shows good agreement with the experimental results. Modelling of the cooling jet arrangement also enables a better understanding of the complex system of heat transfer in the end-winding space.
Abstract:
Antibody display technology (ADT), such as phage display (PD), has substantially improved the production of monoclonal antibodies (mAbs) and Ab fragments by bypassing several limitations associated with the traditional approach of hybridoma technology. In the current study, we capitalized on PD technology to produce a high-affinity single chain variable fragment (scFv) against tumor necrosis factor-alpha (TNF-α), which is a potent pro-inflammatory cytokine and plays an important role in various inflammatory diseases and malignancies. To pursue production of scFv antibody fragments against human TNF-α, we performed five rounds of biopanning using stepwise decreased amounts of TNF-α (1 to 0.1 μg), a semi-synthetic phage antibody library (Tomlinson I + J) and TG1 cells. Antibody clones were isolated and selected through enzyme-linked immunosorbent assay (ELISA) screening. The selected scFv antibody fragments were further characterized by means of ELISA, PCR, restriction fragment length polymorphism (RFLP) and Western blot analyses as well as fluorescence microscopy and flow cytometry. Based upon binding affinity to TNF-α, 15 clones were selected out of 50 positive clones enriched from PD in vitro selection. The selected scFvs displayed high specificity and binding affinity, with Kd values in the nM range, to human TNF-α. Immunofluorescence analysis revealed significant binding of the selected scFv antibody fragments to Raji B lymphoblasts. The effectiveness of the selected scFv fragments was further validated by flow cytometry analysis in lipopolysaccharide (LPS)-treated mouse fibroblast L929 cells. Based upon these findings, we propose the selected fully human anti-TNF-α scFv antibody fragments as potential immunotherapy agents that may be translated into preclinical/clinical applications.
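Estimating a Kd from binding measurements typically means fitting a one-site saturation model, B = Bmax*L/(Kd + L). The sketch below fits synthetic, illustrative data (not the study's measurements) with a one-dimensional grid search, exploiting the fact that Bmax has a closed-form least-squares solution for any fixed Kd:

```python
import numpy as np

# Synthetic ELISA-style binding data: signal vs. free TNF-α concentration
# (nM). Values are illustrative, generated from a one-site model with
# Kd = 2 nM, not measurements from the study.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])   # nM
true_kd, true_bmax = 2.0, 1.0
signal = true_bmax * conc / (true_kd + conc)

def fit_one_site(conc, signal, kd_grid=np.logspace(-2, 2, 2001)):
    """Grid-search Kd; for each candidate Kd, Bmax has a closed-form
    least-squares solution, so the search is one-dimensional."""
    best = (np.inf, None, None)
    for kd in kd_grid:
        basis = conc / (kd + conc)                  # saturation shape
        bmax = (basis @ signal) / (basis @ basis)   # LS solution for Bmax
        sse = np.sum((signal - bmax * basis) ** 2)
        if sse < best[0]:
            best = (sse, kd, bmax)
    return best[1], best[2]

kd_hat, bmax_hat = fit_one_site(conc, signal)
print(f"Kd = {kd_hat:.2f} nM, Bmax = {bmax_hat:.2f}")
```

With real, noisy titration data the same fit is usually done with nonlinear least squares; the grid search shown here is just a dependency-free way to make the one-site model concrete.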
Abstract:
Even though research on innovation in services has expanded remarkably, especially during the past two decades, there is still a need to increase understanding of the special characteristics of service innovation. In addition to studying innovation in service companies and industries, research has also recently focused more on services in innovation, as the significance of so-called knowledge-intensive business services (KIBS) for the competitive edge of their clients, other companies, regions and even nations has been demonstrated in several previous studies. This study focuses on technology-based KIBS firms, and the technology and engineering consulting (TEC) sector in particular. These firms have multiple roles in innovation systems, and thus there is also a need for in-depth studies that increase knowledge about the types and dimensions of service innovations as well as the underlying mechanisms and procedures which make the innovations successful. The main aim of this study is to generate new knowledge in the fragmented research field of service innovation management by recognizing the different types of innovations in TEC services and some of the enablers of and barriers to innovation capacity in the field, especially from the knowledge management perspective. The study also aims to shed light on some of the existing routines and new constructions needed for enhancing service innovation and knowledge processing activities in KIBS companies of the TEC sector. The main sources of data in this research include literature reviews and public data sources, and a qualitative research approach with exploratory case studies conducted with the help of interviews at technology consulting companies in Singapore in 2006. These complement the qualitative interview data gathered previously in Finland during a larger research project in the years 2004-2005. The data is also supplemented by a survey conducted in Singapore.
The respondents for the survey by Tan (2007) were technology consulting companies operating in the Singapore region. The purpose of the quantitative part of the study was to validate and further examine specific aspects, such as the influence of knowledge management activities on innovativeness and the different types of service innovations in which the technology consultancies are involved. Singapore is known as a South-east Asian knowledge hub and is thus a significant research area where several multinational knowledge-intensive service firms operate. Typically, the service innovations identified in the studied TEC firms combined several dimensions of innovation. In addition to technological aspects, innovations were, for instance, related to new client interfaces and service delivery processes. The main enablers of and barriers to innovation seem to be partly similar in Singaporean firms as compared to the earlier study of Finnish TEC firms. The empirical studies also brought forth the significance of various sources of knowledge and knowledge processing activities as the main driving forces of service innovation in technology-related KIBS firms. A framework was also developed to study the effect of knowledge processing capabilities, as well as some moderators, on the innovativeness of TEC firms. Efficient knowledge acquisition and environmental dynamism, in particular, seem to influence the innovativeness of TEC firms positively. The results of the study also contribute to the present service innovation literature by focusing more on 'innovation within KIBS' rather than 'innovation through KIBS', which has been the typical viewpoint stressed in the previous literature. Additionally, the study provides several possibilities for further research.
Resumo:
The building industry has a particular interest in using clinching as a joining method for frame constructions in light-frame housing. Normally, many clinch joints are required when joining frames. In order to maximise the strength of the complete assembly, each clinch joint must be as sound as possible. Experimental testing is the main means of optimising a particular clinch joint. This includes shear strength testing and visual observation of joint cross-sections. The manufacturers of clinching equipment normally perform such experimental trials. Finite element analysis can also be used to optimise the tool geometry and the process parameter X, which represents the thickness of the base of the joint. However, such procedures require dedicated software, a skilled operator, and test specimens in order to verify the finite element model. In addition, with current technology several hours' computing time may be necessary. The objective of the study was to develop a simple calculation procedure for rapidly establishing an optimum value of the parameter X for a given tool combination. It should be possible to use the procedure on a daily basis, without stringent demands on the skill of the operator or the equipment. It is also desirable that the procedure significantly decrease the number of shear strength tests required for verification. The experimental work involved tests to obtain an understanding of the behaviour of the sheets during clinching. The most notable observation concerned the stage of the process in which the upper sheet was initially bent, after which the deformation mechanism changed to shearing and elongation. The amount of deformation was measured relative to the original location of the upper sheet and characterised as the C-measure. By understanding in detail the behaviour of the upper sheet, it was possible to estimate a bending line function for the surface of the upper sheet.
A procedure was developed that makes it possible to estimate the process parameter X for each tool combination with a fixed die. The procedure is based on equating the volume of material on the punch side with the volume of the die. Detailed information concerning the behaviour of material on the punch side is required, and it is assumed that the volume of the die does not change during the process. The procedure was applied to shear strength testing of a sample material. The sample material was continuously hot-dip zinc-coated high-strength constructional steel, with a nominal thickness of 1.0 mm. The minimum Rp0.2 proof stress was 637 N/mm². Such material has not yet been used extensively in light-frame housing, and little has been published on clinching of the material. The performance of the material is therefore of particular interest. Companies that use clinching on a daily basis stand to gain the greatest benefit from the procedure. By understanding the behaviour of the sheets in different cases, it is possible to use the data at an early stage for adjusting and optimising the process. In particular, the functionality of common tools can be increased, since it is possible to characterise the complete range of existing tools. The study increases and broadens the amount of basic information concerning the clinching process. New approaches and points of view are presented and used for generating new knowledge.
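The volume-balance idea behind the procedure can be sketched in a few lines. The cylindrical punch and die geometry below is purely an illustrative assumption (the abstract does not give the actual tool geometries), as are the function name and the example numbers; the sketch only shows how X follows once the displaced punch-side volume is equated with the fixed die volume.

```python
import math


def estimate_x(punch_radius_mm, sheet_thickness_mm, die_volume_mm3):
    """Toy volume balance for a clinch joint with a fixed die.

    Illustrative assumption: the punch displaces a cylindrical slug of
    material from the two sheets (total thickness 2*t), leaving a joint
    bottom of thickness X, so the volume pushed into the die is
    pi * r**2 * (2*t - X). Because the die volume is taken as constant
    during the process, setting the two volumes equal and solving for X
    gives the estimate below.
    """
    return 2 * sheet_thickness_mm - die_volume_mm3 / (math.pi * punch_radius_mm ** 2)


# Example with made-up numbers: 1.0 mm sheets (the nominal thickness of the
# study's sample material), a hypothetical 2.0 mm punch radius, and a die
# volume chosen so that half of the doubled sheet thickness remains as X.
x = estimate_x(2.0, 1.0, math.pi * 4.0)
print(round(x, 3))  # -> 1.0
```

With real tool data the same balance would be evaluated with the measured punch-side material distribution (via the bending line function) instead of an idealised cylinder.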
Resumo:
The patent system was created for the purpose of promoting innovation by granting inventors a legally defined right to exclude others in return for public disclosure. Today, patents are being applied for and granted in greater numbers than ever, particularly in new areas such as biotechnology and information and communications technology (ICT), in which research and development (R&D) investments are also high. At the same time, the patent system has been heavily criticized. It has been claimed that it discourages rather than encourages the introduction of new products and processes, particularly in areas that develop quickly, lack a one-product-one-patent correlation, and in which the emergence of patent thickets is characteristic. A further concern, which is particularly acute in the U.S., is the granting of so-called 'bad patents', i.e. patents that do not factually fulfil the patentability criteria. From the perspective of technology-intensive companies, patents could, irrespective of the above, be described as the most significant intellectual property right (IPR), having the potential to be used to protect products and processes from imitation, to limit competitors' freedom-to-operate, to provide such freedom to the company in question, and to exchange ideas with others. In fact, patents define the boundaries of ownership in relation to certain technologies. They may be sold or licensed on their own, or they may be components of all sorts of technology acquisition and licensing arrangements. Moreover, with the possibility of patenting business-method inventions in the U.S., patents are becoming increasingly important for companies basing their businesses on services. The value of a patent depends on the value of the invention it claims and on how that invention is commercialized. Thus, most patents are worth very little, and most inventions are not worth patenting: it may be possible to protect them in other ways, and the costs of protection may exceed the benefits.
Moreover, instead of making all inventions proprietary and seeking to appropriate as high returns on investment as possible through patent enforcement, it is sometimes better to allow some of them to be disseminated freely in order to maximize market penetration. In fact, the ideology of openness is well established in the software sector, which has been the breeding ground for the open-source movement, for instance. Furthermore, industries such as ICT that benefit from network effects do not shun the idea of setting open standards or opening up their proprietary interfaces to allow everyone to design products and services that are interoperable with theirs. The problem is that even though patents do not, strictly speaking, prevent access to protected technologies, they have the potential of doing so, and conflicts of interest are not rare. The primary aim of this dissertation is to increase understanding of the dynamics and controversies of the U.S. and European patent systems, with a focus on the ICT sector. The study consists of three parts. The first part introduces the research topic and the overall results of the dissertation. The second part comprises a publication in which academic, political, legal and business developments concerning software and business-method patents are investigated and contentious areas are identified. The third part examines the problems with patents and open standards, both of which carry significant economic weight in the ICT sector. Here, the focus is on so-called submarine patents, i.e. patents that remain unnoticed during the standardization process and then emerge after the standard has been set. The factors that contribute to the problems are documented, and the practical and juridical options for alleviating them are assessed. In total, the dissertation provides a good overview of the challenges and pressures for change that the patent system is facing, and of how these challenges are reflected in standard setting.