Abstract:
The marketplace of the twenty-first century will demand that manufacturing assume a crucial role in a new competitive field. Two potential resources in the area of manufacturing are advanced manufacturing technology (AMT) and empowered employees. Surveys in Finland have shown the need to invest in new AMT in the Finnish sheet metal industry in the 1990s. In this drive the focus has been on hard technology, and less attention has been paid to the utilization of human resources. In many manufacturing companies an appreciable portion of the attainable profit is wasted due to poor quality of planning and workmanship. This thesis inspects the production error distribution in the production flow of sheet metal part based constructions. The objective of the thesis is to analyze the origins of production errors in the production flow of sheet metal based constructions. Employee empowerment is also investigated in theory, and its role in reducing the overall number of production errors is discussed. This study is most relevant to the sheet metal part fabricating industry, which produces sheet metal part based constructions for the electronics and telecommunication industry. The study concentrates on the manufacturing function of a company and is based on a field study carried out in five Finnish case factories. In each case factory studied, the work phases most prone to production errors were detected. It can be assumed that most production errors arise in manually operated work phases and in mass production work phases. However, no common pattern for the production error distribution in the production flow can be found in the collected data. The most important finding was nevertheless that most of the production errors in each case factory studied belong to the 'human activity based errors' category. This result indicates that most of the problems in the production flow are related to employees or to work organization.
Development activities must therefore be focused on the development of employee skills or of work organization. Employee empowerment provides the right tools and methods to achieve this.
Abstract:
1. Introduction "The one that has compiled ... a database, the collection, securing the validity or presentation of which has required an essential investment, has the sole right to control the content over the whole work or over either a qualitatively or quantitatively substantial part of the work both by means of reproduction and by making them available to the public", Finnish Copyright Act, section 49.1 These are the laconic words that implemented the much-awaited and hotly debated European Community Directive on the legal protection of databases,2 the EDD, into Finnish copyright legislation in 1998. Now, in the year 2005, more than half a decade after the domestic implementation, the proper meaning and construction of the convoluted qualitative criteria that the current legislation employs as a prerequisite for database protection remain uncertain, both in Finland and within the European Union. Further, this opaque Pan-European instrument has the potential to bring about a number of far-reaching economic and cultural ramifications, which have remained largely uncharted or unobserved. Thus the task of understanding this particular and currently peculiarly European new intellectual property regime is twofold: first, to understand the mechanics and functioning of the EDD and second, to realise the potential and risks inherent in the new legislation in its economic, cultural and societal dimensions. 2.
Subject-matter of the study: basic issues The first part of the task mentioned above is straightforward: questions such as what is meant by the key concepts triggering the functioning of the EDD, such as the presentation of independent information, what constitutes an essential investment in acquiring data, and when the reproduction of a given database reaches either qualitatively or quantitatively the threshold of substantiality before the right-holder of a database can avail himself of the remedies provided by the statutory framework, remain unclear and call for careful analysis. As for the second task, it is already obvious that the practical importance of the legal protection provided by the database right is increasing rapidly. The accelerating transformation of information into digital form is an existing fact, not merely a foreshadowing of things to come. To take a simple example, the digitisation of a map, traditionally in paper format and protected by copyright, can provide the consumer markedly easier and faster access to the wanted material, and the price can be, depending on the current state of the marketplace, lower than that of the traditional form, or even free where public lending libraries provide access to the information online. This also makes it possible for authors and publishers to make available and sell their products in markedly larger, international markets while production and distribution costs can be kept to a minimum thanks to the new electronic production, marketing and distribution mechanisms, to mention a few. The troublesome side, for authors and publishers, is the vastly enhanced potential for illegal copying by electronic means, producing numerous virtually identical copies at speed.
The fear of illegal copying can lead to stark technical protection that in turn can dampen the demand for information goods and services and, furthermore, efficiently hamper the right of access to materials lawfully available in electronic form, and thus weaken the possibility of access to information, education and the cultural heritage of a nation or nations, a condition precedent for a functioning democracy. 3. Particular issues in Digital Economy and Information Networks All that is said above applies a fortiori to databases. As a result of the ubiquity of the Internet and the pending breakthrough of the Mobile Internet, peer-to-peer networks, and Local and Wide Local Area Networks, a rapidly increasing amount of information not protected by traditional copyright, such as various lists, catalogues and tables,3 previously protected in part by the old section 49 of the Finnish Copyright Act, is available free or for consideration on the Internet; by the same token, importantly, numerous databases are collected in order to enable the marketing, tendering and selling of products and services in the above-mentioned networks. Databases and the information embedded therein constitute a pivotal element in virtually any commercial operation, including product and service development, scientific research and education. A poignant but not immediately obvious example of this is a database consisting of the physical coordinates of a certain selected group of customers, used for marketing purposes through cellular phones, laptops and various handheld or vehicle-based devices connected online. These practical needs call for answers to the plethora of questions already outlined above: Has the collection and securing of the validity of this information required an essential input? What qualifies as a quantitatively or qualitatively significant investment?
According to the Directive, a database comprises works, information and other independent materials which are arranged in a systematic or methodical way and are individually accessible by electronic or other means. Under what circumstances, then, are the materials regarded as arranged in a systematic or methodical way? Only once the protected elements of a database are established does the question concerning the scope of protection become acute. In the digital context, the traditional notions of reproduction and making available to the public of digital materials seem to fit ill, or lead to interpretations that are at variance with the analogous domain as regards the lawful and unlawful uses of information. This may well interfere with or rework the way in which commercial and other operators have to establish themselves and function in the existing value networks of information products and services. 4. International sphere After the expiry of the implementation period for the European Community Directive on the legal protection of databases, the goals of the Directive must have been consolidated into the domestic legislation of the current twenty-five Member States of the European Union. On the one hand, these fundamental questions readily imply that the problems related to the correct construction of the Directive underlying the domestic legislation transcend national boundaries. On the other hand, the disputes arising on account of the implementation and interpretation of the Directive at the European level attract significance domestically. Consequently, guidelines on the correct interpretation of the Directive, importing practical, business-oriented solutions, may well have application at the European level. This underlines the exigency of a thorough analysis of the implications of the meaning and potential scope of database protection in Finland and the European Union.
This position has to be contrasted with the larger, international sphere, which in early 2005 differs markedly from the European Union stance, directly having a negative effect on international trade, particularly in digital content. A particular case in point is the USA, a database producer primus inter pares, which does not, at least yet, have a Sui Generis database regime or its kin, while both the political and academic discourse on the matter abounds. 5. The objectives of the study The above-mentioned background, with its several open issues, calls for a detailed study of the following questions: -What is a database-at-law, and when is a database protected by intellectual property rights, particularly by the European database regime? What is the international situation? -How is a database protected, and what is its relation to other intellectual property regimes, particularly in the digital context? -What opportunities and threats does the current protection present to creators, users and society as a whole, including the commercial and cultural implications? -The difficult question of the relation between database protection and the protection of factual information as such. 6. Disposition The study, in purporting to analyse and cast light on the questions above, is divided into three main parts. The first part has the purpose of introducing the political and rational background and the subsequent legislative evolution of European database protection, reflected against the international backdrop on the issue. An introduction to databases, originally a vehicle of modern computing and information and communication technology, is also incorporated. The second part sets out the chosen and existing two-tier model of database protection, reviewing both its copyright and Sui Generis right facets in detail, together with the emergent application of the machinery in real-life societal and particularly commercial contexts.
Furthermore, a general outline of copyright, relevant in the context of copyright databases, is provided. For purposes of further comparison, a chapter on the precursor of the Sui Generis database right, the Nordic catalogue rule, also ensues. The third and final part analyses the positive and negative impact of the database protection system and attempts to scrutinize its implications further into the future, with some caveats and tentative recommendations, in particular as regards the convoluted issue concerning the IPR protection of information per se, a new tenet in the domain of copyright and related rights.
Abstract:
This thesis studies gray-level distance transforms, particularly the Distance Transform on Curved Space (DTOCS). The transform is produced by calculating distances on a gray-level surface. The DTOCS is improved by defining more accurate local distances and by developing a faster transformation algorithm. The Optimal DTOCS enhances the locally Euclidean Weighted DTOCS (WDTOCS) with local distance coefficients that minimize the maximum error from the Euclidean distance in the image plane and produce more accurate global distance values. Convergence properties of the traditional mask operation, or sequential local transformation, and of the ordered propagation approach are analyzed and compared to the new efficient priority pixel queue algorithm. The Route DTOCS algorithm developed in this work can be used to find and visualize shortest routes between two points, or two point sets, along a varying height surface. In a digital image, several paths can share the same minimal length, and the Route DTOCS visualizes them all. A single optimal path can be extracted from the route set using a simple backtracking algorithm. A new extension of the priority pixel queue algorithm produces the nearest neighbor transform, or Voronoi or Dirichlet tessellation, simultaneously with the distance map. The transformation divides the image into regions so that each pixel belongs to the region surrounding the reference point that is nearest according to the distance definition used. Applications and application ideas for the DTOCS and its extensions are presented, including obstacle avoidance, image compression and surface roughness evaluation.
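The priority pixel queue approach described above is essentially Dijkstra-style propagation over the gray-level surface. The following is a minimal sketch, assuming the basic DTOCS local distance of 1 + |gray-level difference| between 4-neighbours; the optimized WDTOCS coefficients and the route visualization of the thesis are not reproduced here.

```python
import heapq

def gray_level_distance_transform(image, seeds):
    """Distance map from seed pixels over a gray-level surface.

    Local distance between 4-neighbours p and q is taken as
    1 + |image[p] - image[q]| (basic DTOCS definition).
    """
    rows, cols = len(image), len(image[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    heap = []
    for r, c in seeds:
        dist[r][c] = 0
        heapq.heappush(heap, (0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue  # stale queue entry, already improved
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1 + abs(image[r][c] - image[nr][nc])
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist
```

With a flat image this reduces to the ordinary city-block distance transform; gray-level "hills" lengthen paths that climb over them, so the shortest route bends around them, as in the Route DTOCS.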
Abstract:
This thesis examines the coordination of the systems development process in a contemporary software producing organization. The thesis consists of a series of empirical studies in which the actions, conceptions and artifacts of practitioners are analyzed using a theory-building case study research approach. The three phases of the thesis provide empirical observations on different aspects of systems development. The first phase examines the role of architecture in coordination and cost estimation in a multi-site environment. The second phase involves two studies on the evolving requirement understanding process and how to measure it. The third phase summarizes the first two phases and concentrates on the role of methods and how practitioners work with them. All the phases provide evidence that current systems development method approaches are too naïve in dealing with the complexity of the real world. In practice, development is influenced by opportunity and other contingent factors. The systems development process is not coordinated using the phases and tasks defined in methods as a universal mechanism for managing the process, as most method approaches assume. Instead, the studies suggest that managing the systems development process happens through coordinating development activities, using methods as tools. These studies contribute to systems development methods by emphasizing support for communication and collaboration between systems development participants. Methods should not describe development activities and phases at a detailed level, but should provide higher level guidance for practitioners on how to act in different systems development environments.
Abstract:
Globalization and new information technologies mean that organizations have to face world-wide competition in rapidly transforming, unpredictable environments, and thus the ability to constantly generate novel and improved products, services and processes has become quintessential for organizational success. Performance in turbulent environments is, above all, influenced by the organization's capability for renewal. Renewal capability consists of the ability of the organization to replicate, adapt, develop and change its assets, capabilities and strategies. An organization with a high renewal capability can sustain its current success factors while at the same time building new strengths for the future. This capability does not only mean that the organization is able to respond to today's challenges and to keep up with the changes in its environment, but also that it can act as a forerunner by creating innovations, at both the tactical and strategic levels of operation, and thereby change the rules of the market. However, even though it is widely agreed that the dynamic capability for continuous learning, development and renewal is a major source of competitive advantage, there is no widely shared view on how organizational renewal capability should be defined, and the field is characterized by a plethora of concepts and definitions. Furthermore, there is a lack of methods for systematically assessing organizational renewal capability. The dissertation aims to bridge these gaps in the existing research by constructing an integrative theoretical framework for organizational renewal capability and by presenting a method for modeling and measuring this capability. The viability of the measurement tool is demonstrated in several contexts, and the framework is also applied to assess renewal in inter-organizational networks.
In this dissertation, organizational renewal capability is examined by drawing on three complementary theoretical perspectives: knowledge management, strategic management and intellectual capital. The knowledge management perspective considers knowledge as inherently social and activity-based, and focuses on the organizational processes associated with its application and development. Within this framework, organizational renewal capability is understood as the capacity for flexible knowledge integration and creation. The strategic management perspective, on the other hand, approaches knowledge in organizations from the standpoint of its implications for the creation of competitive advantage. In this approach, organizational renewal is framed as a dynamic capability of firms. The intellectual capital perspective is focused on exploring how intangible assets can be measured, reported and communicated. From this vantage point, renewal capability is comprehended as the dynamic dimension of intellectual capital, which consists of the capability to maintain, modify and create knowledge assets. Each of the perspectives significantly contributes to the understanding of organizational renewal capability, and the integrative approach presented in this dissertation contributes to the individual perspectives as well as to the understanding of organizational renewal capability as a whole.
Abstract:
Technological progress has made a huge amount of data available at increasing spatial and spectral resolutions. Therefore, the compression of hyperspectral data is an area of active research. In some fields, the original quality of a hyperspectral image cannot be compromised, and in these cases lossless compression is mandatory. The main goal of this thesis is to provide improved methods for the lossless compression of hyperspectral images. Both prediction- and transform-based methods are studied. Two kinds of prediction-based methods are studied. In the first method, the spectra of a hyperspectral image are first clustered and an optimized linear predictor is calculated for each cluster. In the second prediction method, the linear prediction coefficients are not fixed but are recalculated for each pixel. A parallel implementation of the above-mentioned linear prediction method is also presented. Two transform-based methods are also presented. Vector Quantization (VQ) was used together with a new coding of the residual image. In addition, we have developed a new back end for a compression method utilizing Principal Component Analysis (PCA) and Integer Wavelet Transform (IWT). The performance of the compression methods is compared to that of other compression methods. The results show that the proposed linear prediction methods outperform the previous methods. In addition, a novel fast exact nearest-neighbor search method is developed. The search method is used to speed up the Linde-Buzo-Gray (LBG) clustering method.
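To illustrate the prediction-based idea, one can predict each spectral band from the previous one with a least-squares coefficient and store only integer residuals; compression is lossless as long as the coefficient and residuals are both stored. This is a hypothetical one-band sketch, far simpler than the clustered multi-band predictors of the thesis.

```python
def predict_band(prev_band, band):
    """Fit band ≈ a * prev_band by least squares and return
    (coefficient, integer residuals of the rounded prediction)."""
    num = sum(p * b for p, b in zip(prev_band, band))
    den = sum(p * p for p in prev_band)
    a = num / den if den else 0.0
    residuals = [b - round(a * p) for p, b in zip(prev_band, band)]
    return a, residuals

def reconstruct_band(prev_band, a, residuals):
    """Exactly invert predict_band: prediction plus stored residual."""
    return [round(a * p) + r for p, r in zip(prev_band, residuals)]
```

A good predictor makes the residuals small and tightly distributed around zero, so an entropy coder can represent them in far fewer bits than the raw band.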
Abstract:
Software engineering is criticized as not being engineering or a 'well-developed' science at all. Software engineers seem not to know exactly how long their projects will last, what they will cost, and whether the software will work properly after release. Measurements have to be taken in software projects to improve this situation. It is of limited use to only collect metrics afterwards. The values of the relevant metrics have to be predicted, too. The predictions (i.e. estimates) form the basis for proper project management. One of the most painful problems in software projects is effort estimation. It has a clear and central effect on other project attributes like cost and schedule, and on product attributes like size and quality. Effort estimation can be used for several purposes. In this thesis only effort estimation in software projects for project management purposes is discussed. There is a short introduction to measurement issues, and some metrics relevant in the estimation context are presented. Effort estimation methods are covered quite broadly. The main new contribution of this thesis is the new estimation model that has been created. It makes use of the basic concepts of Function Point Analysis, but avoids the problems and pitfalls found in that method. It is relatively easy to use and learn. Effort estimation accuracy has improved significantly after taking this model into use. A major innovation related to the new estimation model is the identified need for hierarchical software size measurement. The author of this thesis has developed a three-level solution for the estimation model. All currently used size metrics are static in nature, but this new proposed metric is dynamic. It makes use of the increased understanding of the nature of the work as specification and design work proceeds. It thus 'grows up' along with the software project. The development of the effort estimation model is not possible without gathering and analyzing history data.
However, there are many problems with data in software engineering. A major roadblock is the amount and quality of the data available. This thesis shows some useful techniques that have been successful in gathering and analyzing the data needed. An estimation process is needed to ensure that methods are used in a proper way, that estimates are stored, reported and analyzed properly, and that they are used for project management activities. A higher-level mechanism called a measurement framework is also introduced briefly. The purpose of the framework is to define and maintain a measurement or estimation process. Without a proper framework, the estimation capability of an organization declines. It requires effort even to maintain an achieved level of estimation accuracy. Estimation results in several successive releases are analyzed. It is clearly seen that the new estimation model works and that the estimation improvement actions have been successful. The calibration of the hierarchical model is a critical activity. An example is shown to shed more light on the calibration and the model itself. There are also remarks about the sensitivity of the model. Finally, an example of usage is shown.
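Estimation accuracy across successive releases is commonly summarized with a statistic such as the mean magnitude of relative error (MMRE). The thesis does not name its exact accuracy measure, so treat the following as a generic sketch of how such an analysis can be computed.

```python
def mmre(actuals, estimates):
    """Mean Magnitude of Relative Error over paired
    actual and estimated effort values (actuals must be nonzero)."""
    errors = [abs(a - e) / a for a, e in zip(actuals, estimates)]
    return sum(errors) / len(errors)
```

Tracking this value release by release makes it visible whether calibration and other improvement actions are actually paying off.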
Abstract:
High dynamic performance of an electric motor is a fundamental prerequisite in motion control applications, also known as servo drives. Recent developments in the field of microprocessors and power electronics have enabled ever faster movements with an electric motor. In such a dynamically demanding application, the dimensioning of the motor differs substantially from industrial motor design, where desirable characteristics are, for example, high efficiency, a high power factor, and a low price. In motion control, instead, characteristics such as high overloading capability, high-speed operation, high torque density and low inertia are required. The thesis investigates how the dimensioning of a high-performance servomotor differs from the dimensioning of industrial motors. The two most common servomotor types are examined: an induction motor and a permanent magnet synchronous motor. The suitability of these two motor types in dynamically demanding servo applications is assessed, and the design aspects that optimize the servo characteristics of the motors are analyzed. Operating characteristics of a high performance motor are studied, and some methods for improvement are suggested. The main focus is on the induction machine, which is frequently compared to the permanent magnet synchronous motor. A 4 kW prototype induction motor was designed and manufactured for the verification of the simulation results in laboratory conditions. A dynamic simulation model for estimating the thermal behaviour of the induction motor in servo applications was also constructed. The accuracy of the model was improved by coupling it with the electromagnetic motor model in order to take into account the variations in the motor's electromagnetic characteristics due to temperature rise.
Abstract:
The future of high technology welded constructions will be characterised by higher strength materials and improved weld quality with respect to fatigue resistance. The expected implementation of high quality, high strength steel welds will require that more attention be given to the issues of crack initiation and mechanical mismatching. Experiments and finite element analyses were performed within the framework of continuum damage mechanics to investigate the effect of mismatching of welded joints on void nucleation and coalescence during monotonic loading. It was found that the damage of under-matched joints mainly occurred in the sandwich layer, and the damage resistance of the joints decreases with the decrease of the sandwich layer width. The damage of over-matched joints mainly occurred in the base metal adjacent to the sandwich layer, and the damage resistance of the joints increases with the decrease of the sandwich layer width. The mechanisms of initiation of the micro voids/cracks were found to be cracking of the inclusions or the embrittled second phase, and debonding of the inclusions from the matrix. Experimental fatigue crack growth rate testing showed that the fatigue life of under-matched central crack panel specimens is longer than that of over-matched and even-matched specimens. Further investigation by elastic-plastic finite element analysis indicated that fatigue crack closure, which originated from the inhomogeneous yielding adjacent to the crack tip, played an important role in the fatigue crack propagation. The applicability of the J integral concept to the mismatched specimens with crack extension under cyclic loading was assessed. The concept of fatigue class used by the International Institute of Welding was introduced in the parametric numerical analysis of several welded joints.
The effect of weld geometry and load condition on the fatigue strength of ferrite-pearlite steel joints was systematically evaluated based on linear elastic fracture mechanics. Joint types included lap joints, angle joints and butt joints. Various combinations of tensile and bending loads were considered during the evaluation, with the emphasis focused on the existence of both root and toe cracks. For a lap joint with a small lack of penetration, a reasonably large weld leg and a smaller flank angle were recommended for engineering practice in order to achieve higher fatigue strength. It was found that the fatigue strength of the angle joint depended strongly on the location and orientation of the pre-existing crack-like welding defects, even if the joint was welded with full penetration. It is commonly believed that double sided butt welds can have significantly higher fatigue strength than single sided welds, but fatigue crack initiation and propagation can originate from the weld root if the welding procedure results in partial penetration. It is clearly shown that the fatigue strength of the butt joint can be improved remarkably by ensuring full penetration. Nevertheless, increasing the fatigue strength of a butt joint by increasing the size of the weld is an uneconomical alternative.
Abstract:
Electric motors driven by adjustable-frequency converters may produce periodic excitation forces that can cause torque and speed ripple. Interaction with the driven mechanical system may cause undesirable vibrations that affect the system performance and lifetime. Direct drives in sensitive applications, such as elevators or paper machines, emphasize the importance of smooth torque production. This thesis analyses the non-idealities of frequency converters that produce speed and torque ripple in electric drives. The origin of low order harmonics in speed and torque is examined. It is shown how different types of current measurement error affect the torque. As the application environment, the direct torque control (DTC) method is applied to permanent magnet synchronous machines (PMSM). A simulation model for analysing the effect of frequency converter non-idealities on the performance of electric drives is created. The model makes it possible to identify potential problems causing torque vibrations and possibly damaging oscillations in electrically driven machine systems. The model can be coupled with separate simulation software for complex mechanical loads. Furthermore, the simulation model of the frequency converter's control algorithm can be applied to control a real frequency converter. A commercial frequency converter with standard software, a permanent magnet axial flux synchronous motor and a DC motor as the load are used to detect the effect of current measurement errors on load torque. A method to reduce the speed and torque ripple by compensating the current measurement errors is introduced. The method is based on analysing the amplitude of a selected harmonic component of speed as a function of time and selecting a suitable compensation alternative for the current error. The speed can be either measured or estimated, so the compensation method is applicable also to speed sensorless drives.
The proposed compensation method is tested with a laboratory drive, which consists of commercial frequency converter hardware with self-made software and a prototype PMSM. The speed and torque ripple of the test drive are reduced by applying the compensation method. In addition to direct torque controlled PMSM drives, the compensation method can also be applied to other motor types and control methods.
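Tracking the amplitude of one selected harmonic of the measured or estimated speed, as the compensation method requires, can be done with a single-bin DFT. The following generic sketch assumes a uniformly sampled signal over an integer number of fundamental periods; the thesis's own analysis may well differ.

```python
import math

def harmonic_amplitude(samples, k):
    """Amplitude of the k-th harmonic of a real sampled signal,
    computed from a single DFT bin (assumes len(samples) covers
    an integer number of fundamental periods)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    return 2 * math.hypot(re, im) / n
```

Evaluating this repeatedly over a sliding window gives the harmonic amplitude as a function of time, which is the quantity the compensation alternatives are selected against.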
Abstract:
Position sensitive particle detectors are needed in high energy physics research. This thesis describes the development of fabrication processes and characterization techniques for silicon microstrip detectors used in the search for elementary particles at the European center for nuclear research, CERN. The detectors give an electrical signal along a particle's trajectory after a collision in the particle accelerator. The trajectories give information about the nature of the particle in the struggle to reveal the structure of matter and the universe. Detectors made of semiconductors have a better position resolution than conventional wire chamber detectors. Silicon is overwhelmingly used as the detector material because of its low cost and standard use in the integrated circuit industry. After a short spreadsheet analysis of the basic building block of radiation detectors, the pn junction, the operation of a silicon radiation detector is discussed in general. The microstrip detector is then introduced and the detailed structure of a double-sided ac-coupled strip detector revealed. The fabrication aspects of strip detectors are discussed, starting from the process development and general principles and ending with the description of the double-sided ac-coupled strip detector process. Recombination and generation lifetime measurements in radiation detectors are discussed briefly. The results of electrical tests, i.e. measurements of the leakage currents and bias resistors, are displayed. The beam test setups and the results, the signal to noise ratio and the position accuracy, are then described. It was found in earlier research that heavy irradiation changes the properties of radiation detectors dramatically. A scanning electron microscope method was developed to measure the electric potential and field inside irradiated detectors to see how a high radiation fluence changes them. The method and the most important results are discussed briefly.
Abstract:
The objective of industrial crystallization is to obtain a crystalline product with the desired crystal size distribution, mean crystal size, crystal shape, purity, and polymorphic and pseudopolymorphic form. Effective control of product quality requires an understanding of the thermodynamics of the crystallizing system and of the effects of operating parameters on the crystalline product properties. Therefore, reliable in-line information about crystal properties and about supersaturation, the driving force of crystallization, would be very advantageous. Advanced techniques, such as Raman spectroscopy, attenuated total reflection Fourier transform infrared (ATR FTIR) spectroscopy, and in-line imaging, offer great potential for obtaining reliable information during crystallization, and thus a better understanding of the fundamental mechanisms involved (nucleation and crystal growth). In the present work, the relative stability of anhydrate and dihydrate carbamazepine in mixed solvents containing water and ethanol was investigated. The kinetics of the solvent-mediated phase transformation of the anhydrate to the hydrate in the mixed solvents was studied using an in-line Raman immersion probe. The effects of the operating parameters, in terms of solvent composition, temperature and the use of certain additives, on the phase transformation kinetics were explored. Comparison of the off-line measured solute concentration with the solid-phase composition measured by in-line Raman spectroscopy allowed the identification of the fundamental processes during the phase transformation. The effects of thermodynamic and kinetic factors on the anhydrate/hydrate phase of carbamazepine crystals during cooling crystallization were also investigated, as was the effect of certain additives on the batch cooling crystallization of potassium dihydrogen phosphate (KDP).
The crystal growth rate of a given crystal face was determined from images taken with an in-line video microscope. An in-line image processing method was developed to characterize the size and shape of the crystals. ATR FTIR spectroscopy and a laser reflection particle size analyzer were used to study the effects of cooling modes and seeding parameters on the final crystal size distribution of an organic compound, C15. Based on the results, operating conditions were proposed that improve the product properties in terms of increased mean crystal size and narrower size distribution.
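The quality targets above, a larger mean crystal size and a narrower distribution, can be quantified directly from image-derived crystal sizes. A minimal sketch, with hypothetical size data standing in for measured values:

```python
import statistics

def csd_summary(sizes_um):
    """Mean crystal size and coefficient of variation (CV) of a
    crystal size distribution, e.g. from in-line image analysis.
    A narrower distribution gives a smaller CV."""
    mean = statistics.mean(sizes_um)
    cv = statistics.pstdev(sizes_um) / mean
    return mean, cv

# Hypothetical crystal sizes [um] under two cooling modes
natural = [40, 55, 70, 90, 120, 150]
controlled = [95, 100, 105, 110, 115, 120]

m1, cv1 = csd_summary(natural)
m2, cv2 = csd_summary(controlled)
# here the controlled mode gives a larger mean and a smaller CV
```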
Abstract:
The topic of this study is a company's strategic choices, i.e. the perspectives and means by which a company operates in order to survive in competition. In particular, the study examined how competence, and more generally company characteristics and practices not tied to physical products, relate to company success. In addition, the management practices and methods applied in the companies were studied in relation to the strategic choices. The target industry was the bakery industry, which, measured by the number of establishments, is the largest single sector of the Finnish food industry. The study was carried out as a questionnaire survey answered by 90 Finnish bakery companies employing at most 49 people. Using cluster analysis, the companies were divided into four groups, particularly with respect to dynamism, innovativeness and cost-efficiency. The results provide empirical evidence that innovativeness and dynamism, as strategic choices of small and medium-sized bakery companies, give a company greater readiness to develop its operations if its operating environment, especially the competitive environment, changes. Management practices and methods, especially those of accounting, were found to be significant factors through which strategic choices affect company success. These practices and methods can therefore be regarded as an important resource of the company and a manifestation of its competence.
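The clustering step described above can be sketched in miniature. The snippet below runs a plain k-means grouping on hypothetical firm scores for dynamism, innovativeness and cost-efficiency; the study's actual clustering method, variables and data are not reproduced here.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: group firms described by numeric scores
    (e.g. dynamism, innovativeness, cost-efficiency) into k clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each firm to its nearest cluster center
            i = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[i].append(p)
        # move each center to the mean of its assigned firms
        centers = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Hypothetical survey scores: (dynamism, innovativeness, cost-efficiency)
firms = [(5, 5, 2), (4, 5, 1), (1, 1, 5), (2, 1, 4), (5, 1, 1), (1, 5, 5)]
centers, clusters = kmeans(firms, k=2)
```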
Abstract:
When designing a classification system, the aim is to build a system that solves the problem domain under study as accurately as possible. In pattern recognition, the core of a recognition system is the classifier. The application field of classification is very wide: a classifier is needed, for example, in pattern recognition systems, of which image processing is a good example. Accurate classification is also much needed in medicine. For example, diagnosing a patient's symptoms requires a classifier that can infer from measurement results, as accurately as possible, whether the patient has a given condition or not. In this doctoral thesis, a classifier based on similarity measures was developed, and its performance was examined, among other things, on medical data sets in which the classification task is to identify the nature of the patient's condition. An advantage of the presented classifier is its simple structure, which makes it easy both to implement and to understand. Another advantage is its accuracy: the classifier can be made to classify many different problems very accurately. This is especially important in medicine, where even a small improvement in classification accuracy is very valuable. Several different measures of similarity were studied in the thesis. The measures also have several parameters, for which values suited to the particular classification problem can be sought. This optimization of the parameters to the problem domain can be performed, for example, with evolutionary algorithms; in this work a genetic algorithm and a differential evolution algorithm were used. A further advantage of the classifier is its flexibility: the similarity measure is easy to replace if it does not suit the problem domain, and optimizing the parameters of the different measures can improve the results considerably. The results can be improved further by applying different preprocessing methods before classification.
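A minimal sketch of such a similarity-based classifier, assuming a Minkowski-type similarity measure and a nearest-prototype decision rule. The prototypes, data and the exhaustive search over the exponent below are illustrative stand-ins; the thesis studies several measures and optimizes their parameters with a genetic algorithm and differential evolution.

```python
def minkowski_sim(x, y, p):
    """Similarity derived from the Minkowski distance with exponent p;
    one parameterized family of measures, chosen here for illustration."""
    d = sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)
    return 1.0 / (1.0 + d)

def classify(sample, prototypes, p):
    """Assign the class whose prototype is most similar to the sample."""
    return max(prototypes, key=lambda c: minkowski_sim(sample, prototypes[c], p))

def accuracy(data, prototypes, p):
    return sum(classify(x, prototypes, p) == y for x, y in data) / len(data)

# Hypothetical two-class medical data: (measurements, label)
prototypes = {"healthy": (0.2, 0.3), "symptom": (0.8, 0.7)}
data = [((0.1, 0.2), "healthy"), ((0.9, 0.8), "symptom"), ((0.3, 0.4), "healthy")]

# Crude stand-in for the evolutionary optimization: pick the exponent p
# that maximizes training accuracy.
best_p = max((0.5, 1.0, 2.0, 4.0), key=lambda p: accuracy(data, prototypes, p))
```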
Abstract:
In this thesis, different parameters influencing the critical flux in protein ultrafiltration, and membrane fouling, were studied. Short reviews of proteins, cross-flow ultrafiltration, flux decline and critical flux, and the basic theory of Partial Least Squares (PLS) analysis are given at the beginning. The experiments were mainly performed using dilute solutions of globular proteins, commercial polymeric membranes and laboratory-scale apparatus. Fouling was studied by flux, streaming potential and FTIR-ATR measurements. The critical flux was evaluated by different kinds of stepwise procedures, using both constant-pressure and constant-flux methods. The critical flux was affected by transmembrane pressure, flow velocity, protein concentration, membrane hydrophobicity, and protein and membrane charges. Generally, the lowest critical fluxes were obtained at the isoelectric point of the protein and the highest in the presence of electrostatic repulsion between the membrane surface and the protein molecules. In the laminar flow regime the critical flux increased with flow velocity, but no longer above this regime. An increase in concentration decreased the critical flux. Hydrophobic membranes showed fouling under all charge conditions and, furthermore, even at very low transmembrane pressures, especially at the beginning of the experiment. Fouling of these membranes was attributed to protein adsorption through hydrophobic interactions. The hydrophilic membranes used suffered more from reversible fouling and concentration polarisation than from irreversible fouling; they became fouled at higher transmembrane pressures because of pore blocking. This thesis presents some new aspects of critical flux that are important for the ultrafiltration and fractionation of proteins.
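The stepwise, constant-pressure procedure mentioned above can be sketched as follows: below the critical flux, flux rises linearly with transmembrane pressure (TMP); above it, fouling makes each step yield less flux than the linear trend predicts. The deviation tolerance and the stepping data below are illustrative, not values from the thesis.

```python
def critical_flux(tmp_steps, flux_steps, tolerance=0.10):
    """Locate the critical flux from a pressure-stepping experiment.
    Returns the last measured flux still within `tolerance` of the
    linear (non-fouling) trend extrapolated from the first step."""
    permeability = flux_steps[0] / tmp_steps[0]  # slope of the linear regime
    critical = flux_steps[0]
    for tmp, flux in zip(tmp_steps[1:], flux_steps[1:]):
        expected = permeability * tmp
        if flux < (1.0 - tolerance) * expected:
            break  # deviation from linearity: critical flux exceeded
        critical = flux
    return critical

# Hypothetical stepping data: TMP [bar] and measured flux [L/(m^2 h)]
tmps = [0.2, 0.4, 0.6, 0.8, 1.0]
fluxes = [20.0, 40.0, 59.0, 68.0, 70.0]
jc = critical_flux(tmps, fluxes)  # -> 59.0
```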