101 results for level-sets
Abstract:
The thesis investigated effective knowledge management in the research and development network of a global forest industry company. The goal was to build a description of R&D content management using the knowledge management software employed by the target company. First, the concepts of knowledge and knowledge management were examined on the basis of the literature. Based on this review, a process model was presented with which knowledge can be managed effectively in a company. Next, the requirements that knowledge management places on information technology, and the role of information technology in the process model, were analysed. The network's requirements for knowledge management were determined by interviewing key persons in the company. Based on the interviews, the system had to support the work of virtual project teams effectively, enable knowledge sharing between mills, and support the management of the content entered into the system. First, the structure and access rights of the system's user interface were modified to meet the needs of the network. The structure provides a workspace for project teams and areas for knowledge sharing between the mills. For content management, a category scheme, a profiled portal and predefined searches were developed in the system. The developed model makes the work of project teams more efficient, enables the exploitation of existing knowledge at mill level, and facilitates the monitoring of R&D activities. As further measures, the thesis proposes integrating the system with the mills' operational control systems and introducing the software as a mill-level project management tool. The aim of these proposals is to ensure both effective knowledge sharing between the mills and effective knowledge management at mill level.
Abstract:
This Master's thesis examines threaded programming at the upper hierarchy level of parallel programming, focusing in particular on hyper-threading technology. The thesis examines the advantages and disadvantages of hyper-threading and its effects on parallel algorithms. The goal was to understand the hyper-threading implementation of the Intel Pentium 4 processor and to enable its exploitation where it brings a performance benefit. Performance data was collected and analysed by running a large set of benchmarks under different conditions (memory handling, compiler settings, environment variables, etc.). Two types of algorithms were examined: matrix operations and sorting. These applications have a regular memory access pattern, which is a double-edged sword: it is an advantage in arithmetic-logical processing, but on the other hand it degrades memory performance. The reason is that modern processors have very good raw performance when processing regular data, while the memory architecture is limited by cache sizes and various buffers. When the problem size exceeds a certain limit, the actual performance can drop to a fraction of the peak performance.
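A minimal sketch of the kind of effect described above (this is an illustrative harness, not the thesis's benchmark code): timing row-major versus column-major traversal of the same matrix shows how the memory access pattern alone, rather than the arithmetic performed, can change measured performance.

```python
import time

def traversal_times(n=512):
    """Time row-major vs. column-major summation of an n x n matrix.

    Illustrative only: absolute numbers depend on the cache hierarchy
    and the interpreter; the point is that access order alone can
    change the measured performance.
    """
    a = [[float(i * n + j) for j in range(n)] for i in range(n)]

    t0 = time.perf_counter()
    s = 0.0
    for i in range(n):            # row-major: the inner loop walks one row
        for j in range(n):
            s += a[i][j]
    t1 = time.perf_counter()

    s = 0.0
    for j in range(n):            # column-major: the inner loop jumps between rows
        for i in range(n):
            s += a[i][j]
    t2 = time.perf_counter()

    return t1 - t0, t2 - t1
```

Running such a harness over increasing n is one way to locate the problem size at which performance falls off the cache cliff.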
Abstract:
Stora Enso uses at its mills the Fenix enterprise resource planning system created by TietoEnator. Fenix is a complex system that includes, among other things, a production management module used to create production plans for paper machines. The Fenix project has spawned, as a by-product, the PartnerWeb project, whose purpose is to publish some of Fenix's services on the Internet. The target group consists mainly of Stora Enso's largest customers, its partners. The goal of this thesis is to build, at a conceptual level, working applications for the Internet and WAP environments for the production plan section of PartnerWeb. A further goal is to study what requirements an application published in such an environment imposes, and in what form and what information should be presented to the partners. The approach is strongly security-oriented, owing to the importance of Fenix to Stora Enso. As a result of the work, conceptual-level user interfaces for the Internet and WAP environments were created, along with a secure architecture. Work on the applications continues, with the aim of building, on top of the created user interfaces, working applications that use Fenix services.
Abstract:
The goal of this thesis is to examine operational purchasing, which as a whole covers determining a well-timed ordering rhythm and balanced order quantities, and matching the incoming flow of goods to sales or consumption. The study also considers the role of inventories and the key performance indicators of efficient inventory control from the purchasing perspective. The study includes a description of the pharmaceutical supply chain, because it differs significantly from other industries. This thesis is a synthesizing literature study. In the empirical part, the inventory control of the case company is first analysed from the purchasing perspective on the basis of an ABC analysis. Purchasing is then analysed in more detail for selected suppliers. Finally, an optimal ordering rhythm, order quantities and a purchasing budget are drawn up for the products of one supplier. ABC analysis can be used as a tool for determining how the material flows of different products should be controlled from the purchasing point of view. The analysis is based on concentrating resources where the return is greatest.
Abstract:
The goal of the thesis was to examine the factors affecting the forecasting accuracy of innovation diffusion models. The thesis forecast the diffusion of mobile phone subscriptions with a logistic model in three European countries: Finland, France and Greece. The theoretical part focused on forecasting the diffusion of innovations with diffusion models, with particular emphasis on the forecasting ability of the models and their usability in different situations. The empirical part concentrated on forecasting with a logistic diffusion model calibrated with time series aggregated in different ways. The resulting forecasts were examined to determine the effects of the level of data aggregation. The research design was empirical: the forecasting accuracy of the logistic diffusion model was studied while varying the aggregation level of the sample data. The data fed into the diffusion model can be collected monthly and per operator without affecting forecasting accuracy. The data must, however, include the inflection point of the diffusion curve, i.e. the point of peak long-term demand.
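The logistic diffusion model used in such forecasting has a simple closed form; a sketch follows (the parameter names are mine, not the thesis's notation):

```python
import math

def logistic_adopters(t, m, b, t0):
    """Cumulative adopters at time t under logistic diffusion.

    m  : market potential (saturation level)
    b  : growth-rate parameter
    t0 : inflection point -- the time of peak per-period demand,
         where cumulative adoption reaches m / 2
    """
    return m / (1.0 + math.exp(-b * (t - t0)))
```

The inflection point t0 is exactly the point of peak long-term demand referred to above: a calibration series that stops short of it leaves the saturation level m poorly identified, which degrades the forecast.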
Abstract:
As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the tight constraints of embedded systems on, for example, size, power consumption and price, but also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general-purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity, and greatly simplified instruction decoding. For this M.Sc. (Tech.) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written with SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hw/sw codesign and simulation, and an extendable library of automatically configured reusable hardware blocks.
Other topics covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. As a test case for the environment, a simulation model of a processor for TCP/IP packet validation was designed and tested.
Abstract:
Adolescence is an important time for acquiring a high peak bone mass. Physical activity is known to be beneficial to bone development, whereas the effect of estrogen-progestin contraceptives (EPC) is still controversial. Altogether 142 adolescent women (52 gymnasts, 46 runners, and 42 controls) participated in this study, which is based on two 7-year (n = 142), one 6-year (n = 140) and one 4-year (n = 122) follow-ups. Information on physical activity, menstrual history, sexual maturation, nutrition, living habits and health status was obtained through questionnaires and interviews. The bone mineral density (BMD) and content (BMC) of the lumbar spine (LS) and femoral neck (FN) were measured by dual-energy X-ray absorptiometry. Calcaneal sonographic measurements were also made. The physical activity of the athletes participating in this study decreased after the 3-year follow-up. High-impact exercise was beneficial to bones: LS and FN BMC was higher in gymnasts than in controls during the follow-up. Reduction in physical activity had negative effects on bone mass. LS and FN BMC increased less in the group that had reduced their physical activity by more than 50% than in those continuing at the previous level (1.69 g, p = 0.021; 0.14 g, p = 0.015, respectively). The amount of physical activity was the only significant parameter accounting for the calcaneal sonography measurements at the 6-year follow-up (11.3%), and a reduced activity level was associated with lower sonographic values. Long-term low-dose EPC use seemed to prevent normal bone mass acquisition: there was a significant trend towards a smaller increase in LS and FN BMC among long-term EPC users. In conclusion, this study confirms that high-impact exercise is beneficial to bones and that the benefits are partly maintained, at least for 4 years, even after a clear reduction in training level. Continued exercise is needed to retain all acquired benefits.
The bone mass gained and maintained can possibly be maximized in adolescence by implementing high-impact exercise for youngsters. The peak bone mass of the young women participating in the study may be reached before the age of 20. Use of low-dose EPCs seems to suppress normal bone mass acquisition.
Abstract:
The importance of the regional level in research has risen in the last few decades, and a vast literature in the fields of, for instance, evolutionary and institutional economics, network theories, innovation and learning systems, as well as sociology, has focused on regional-level questions. Recently, policy makers and regional actors have also begun to pay increasing attention to the knowledge economy and its needs in general, and to the connectivity and support structures of regional clusters in particular. Nowadays knowledge is generally considered the most important source of competitive advantage, but even the most specialised forms of knowledge are becoming a short-lived resource, for example due to the accelerating pace of technological change. This emphasizes the need for foresight activities at the national, regional and organizational levels, and for the integration of foresight and innovation activities. In a regional setting this development poses great challenges, especially in regions that have no university and thus usually very limited resources for research activities. The research problem of this dissertation is likewise related to the need to better incorporate the information produced by the foresight process into regional practice-based innovation processes. This dissertation is a constructive case study, the case being the Lahti region and the network-facilitating innovation policy adopted in that region. The dissertation consists of a summary and five articles; during the research process, a construct, i.e. a conceptual model for solving this real-life problem, was developed. It is also being implemented as part of the network-facilitating innovation policy in the Lahti region.
Abstract:
This thesis deals with distance transforms, a fundamental tool in image processing and computer vision. Two new distance transforms for gray-level images are presented, and as a new application, distance transforms are applied to gray-level image compression. Both new transforms extend the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (the DTOCS) and a real-valued distance transform (the EDTOCS) on gray-level images. Both the DTOCS and the EDTOCS require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image that defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map, in which the weights are not constant but are the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally; it is shown to be independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
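The binary-image two-pass scheme that the DTOCS and EDTOCS extend can be sketched as follows. This is the plain chessboard distance transform (forward and backward raster passes); the gray-level weighting that distinguishes the DTOCS is omitted:

```python
INF = 10**9

def chessboard_dt(binary):
    """Two-pass chessboard distance transform on a binary image
    (1 = seed pixel with distance 0, 0 = pixel whose distance to the
    nearest seed is to be computed). This is the classic binary-image
    scheme that the DTOCS generalizes to gray-level images."""
    h, w = len(binary), len(binary[0])
    d = [[0 if binary[y][x] else INF for x in range(w)] for y in range(h)]
    # forward pass: scan top-left to bottom-right, look at the
    # neighbors already visited in this scan order
    for y in range(h):
        for x in range(w):
            for dy, dx in ((-1, -1), (-1, 0), (-1, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + 1)
    # backward pass: scan bottom-right to top-left with the mirrored mask
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            for dy, dx in ((1, 1), (1, 0), (1, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + 1)
    return d
```

In the DTOCS the constant step cost of 1 is replaced by a cost derived from the gray-value difference between neighboring pixels, which is why several forward/backward rounds may be needed on complicated images.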
Abstract:
The article describes some concrete problems encountered when writing a two-level model of Mari morphology. Mari is an agglutinative Finno-Ugric language spoken in Russia by about 600,000 people. The work was begun in the 1980s on the basis of K. Koskenniemi's Two-Level Morphology (1983), but in the latest stage R. Beesley's and L. Karttunen's Finite State Morphology (2003) was used. Many of the problems described in the article concern the inexplicitness of the rules in Mari grammars and the lack of information about the exact distribution of some suffixes, e.g. enclitics. Mari grammars usually give complete paradigms for a few unproblematic verb stems, whereas the difficult or unclear forms of certain verbs are only superficially discussed. Another example of phenomena that are poorly described in grammars is the way suffixes with an initial sibilant combine with stems ending in a sibilant. The help of informants and searches in electronic corpora were used to overcome such difficulties in developing the two-level model of Mari. Variation in the order of plural markers, case suffixes and possessive suffixes is a typical feature of Mari. The morphotactic rules constructed for Mari declensional forms tend to be recursive, and their productivity must be limited by some technical device, such as filters. In the present model, certain plural markers were treated like nouns. The positional and functional versatility of the possessive suffixes can be regarded as the most challenging phenomenon in attempts to formalize Mari morphology. The Cyrillic orthography used in the model also caused problems. For instance, a Cyrillic letter may represent a sequence of two sounds, the first being part of the word stem while the other belongs to a suffix. In some cases, letters for voiced consonants are also generalized to represent voiceless consonants.
Such orthographical conventions distance a morphological model based on orthography from the actual (morpho)phonological processes in the language.
Abstract:
Neuropeptide Y (NPY) is a widely expressed neurotransmitter in the central and peripheral nervous systems. A thymidine 1128 to cytosine substitution in the signal sequence of preproNPY results in a single amino acid change in which a leucine is changed to a proline. This L7P change leads to a conformational change of the signal sequence, which can affect the intracellular processing of NPY. The L7P polymorphism was originally associated with higher total and LDL cholesterol levels in obese subjects. It has also been associated with several other physiological and pathophysiological responses, such as atherosclerosis and type 2 diabetes. However, the changes at the cellular level due to the preproNPY signal sequence L7P polymorphism were not known. The aims of the current thesis were to study the effects of the [p.L7]+[p.L7] and [p.L7]+[p.P7] genotypes in primary cultured and genotyped human umbilical vein endothelial cells (HUVEC), in neuroblastoma (SK-N-BE(2)) cells and in fibroblast (CHO-K1) cells. The putative effects of the L7P polymorphism on proliferation, apoptosis and LDL and nitric oxide metabolism were also investigated. In the course of the studies, a fragment of NPY targeted to mitochondria was found; for this putative mitochondrial NPY fragment, the aim was to study its translational preferences and mobility. The intracellular distribution of NPY was found to differ between the [p.L7]+[p.L7] and [p.L7]+[p.P7] genotypes: NPY immunoreactivity was prominent in the [p.L7]+[p.P7] cells, while proNPY immunoreactivity was prominent in the [p.L7]+[p.L7] cells. In the proliferation experiments there was a difference between early-passage and late-passage (aged) cells of the [p.L7]+[p.L7] genotype: proliferation was increased in the aged cells. NPY increased the growth of cells with the [p.L7]+[p.P7] genotype.
Apoptosis did not seem to differ between the genotypes, but in aged cells with the [p.L7]+[p.L7] genotype LDL uptake was found to be elevated. Furthermore, the genotype seemed to have a strong effect on nitric oxide metabolism. The results indicated that the mobility of the NPY protein inside the cells was increased with the P7-containing constructs. The existence of the mitochondria-targeted NPY fragment was verified, and its translational preferences were shown to depend on the origin of the cells: cells of neuronal origin preferred the translation of mature NPY (NPY1-36), whereas non-neuronal cells translated both NPY and the mitochondrial fragment of NPY. The mobility of the mitochondrial fragment was found to be minimal; its functionality remains to be investigated. The L7P polymorphism in preproNPY causes a series of intracellular changes. These changes may contribute to the state of cellular senescence and vascular tone, and may lead to endothelial dysfunction and even to increased susceptibility to diseases such as atherosclerosis and type 2 diabetes.
Abstract:
The basic goal of this study is to extend old ways, and propose new ways, to generate knapsack sets suitable for use in public key cryptography. The knapsack problem and its cryptographic use are reviewed in the introductory chapter. Terminology is based on common cryptographic vocabulary; for example, solving the knapsack problem (here a subset sum problem) is termed decipherment. Chapter 1 also reviews the most famous knapsack cryptosystem, the Merkle-Hellman system. It is based on a superincreasing knapsack and uses modular multiplication as a trapdoor transformation. The insecurity caused by these two properties exemplifies the two general categories of attacks against knapsack systems, and these categories provide the motivation for Chapters 2 and 4. Chapter 2 discusses the density of a knapsack and the dangers of having a low density. Chapter 3 interrupts the more abstract treatment for a while by showing examples of small injective knapsacks and extrapolating conjectures on some characteristics of larger knapsacks, especially their density and number. The most common trapdoor technique, modular multiplication, is likely to cause insecurity, but as argued in Chapter 4, it is difficult to find any other simple trapdoor techniques. This discussion also provides a basis for the introduction of various categories of non-injectivity in Chapter 5. Besides general ideas on the non-injectivity of knapsack systems, Chapter 5 introduces and evaluates several ways to construct such systems, most notably the "exceptional blocks" in superincreasing knapsacks and the use of "too small" a modulus in the modular multiplication used as a trapdoor technique. The author believes that non-injectivity is the most promising direction for the development of knapsack cryptosystems. Chapter 6 modifies two well-known knapsack schemes, the Merkle-Hellman multiplicative trapdoor knapsack and the Graham-Shamir knapsack.
The main interest is in aspects other than non-injectivity, although that is also exploited. At the end of the chapter, constructions proposed by Desmedt et al. are presented to serve as a comparison for the developments of the subsequent three chapters. Chapter 7 provides a general framework for the iterative construction of injective knapsacks from smaller knapsacks, together with a simple example, the "three elements" system. In Chapters 8 and 9 the general framework is put into practice in two different ways. Modularly injective small knapsacks are used in Chapter 8 to construct a large knapsack, which is called the congruential knapsack. The addends of a subset sum can be found by decrementing the sum iteratively, using each of the small knapsacks and their moduli in turn. The construction is also generalized to the non-injective case, which can lead to especially good density without complicating the deciphering process too much. Chapter 9 presents three related ways to realize the general framework of Chapter 7. The main idea is to join iteratively small knapsacks, each element of which satisfies the superincreasing condition. As a whole, none of these systems need become superincreasing, though the density develops no better than that. The new knapsack systems are injective, but they can be deciphered with the same searching method as the non-injective knapsacks with the "exceptional blocks" of Chapter 5. The final Chapter 10 first reviews the Chor-Rivest knapsack system, which has withstood all cryptanalytic attacks. A couple of modifications to the use of this system are presented in order to further increase the security or to make the construction easier. The latter goal is pursued by reducing the size of the Chor-Rivest knapsack embedded in the modified system.
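For reference, the Merkle-Hellman scheme reviewed in Chapter 1 can be condensed to a few lines. The parameters below are toy values, far too small for any real security:

```python
def mh_keypair(private=(2, 3, 7, 14, 30, 57, 120, 251), m=491, w=41):
    """Toy Merkle-Hellman key pair: `private` is a superincreasing
    knapsack, m exceeds its sum, and gcd(w, m) = 1. The public knapsack
    is the private one disguised by modular multiplication (the
    trapdoor transformation)."""
    public = [(w * a) % m for a in private]
    return public, (private, m, w)

def mh_encrypt(bits, public):
    # ciphertext = subset sum of the public knapsack selected by the bits
    return sum(b * p for b, p in zip(bits, public))

def mh_decrypt(c, private, m, w):
    s = (c * pow(w, -1, m)) % m       # undo the trapdoor transformation
    bits = []
    for a in reversed(private):       # greedy decipherment: works because
        if s >= a:                    # the private knapsack is superincreasing
            bits.append(1)
            s -= a
        else:
            bits.append(0)
    return list(reversed(bits))
```

The two properties that make decipherment easy here, the superincreasing structure and the modular multiplication, are exactly the two sources of insecurity that the thesis uses to organize its categories of attacks.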
Abstract:
The productivity, quality and cost efficiency of welding work are critical for the metal industry today. Welding processes must become more effective, which can be achieved through mechanization and automation. Such systems are always expensive and have to pay back the investment, so it is very important to optimize the needed intelligence, and thereby the needed automation level, so that a company gets the best profit. This intelligence and automation level was earlier classified in several different ways that are not useful for optimizing the automation or mechanization of welding. In this study the intelligence of a welding system is defined in a new way, in terms of enabling the welding system to produce a weld of sufficient quality. A new way is developed to classify and select the internal intelligence level a welding system needs to produce the weld efficiently. This classification covers the possible need for human work and its effect on the weld and its quality, but does not exclude any welding processes or methods. A completely new way is also developed to calculate the optimal intelligence level needed in welding. The target of this optimization is the best possible productivity and quality together with an economically optimized solution for several different cases. The new optimization method is based on the product type, economical productivity, the batch size of products, quality and criteria of usage. Intelligence classification and optimization have never before been based on the product being made. It is now possible to find the type of welding system best suited to welding different types of products. This calculation process is a universal way to optimize the needed automation or mechanization level when improving the productivity of welding. This study helps industry to improve the productivity, quality and cost efficiency of welding workshops.
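As a purely hypothetical illustration of the kind of multi-criteria selection described above (the criteria names, weights and scores below are invented for the example; they are not the thesis's actual optimization model):

```python
def best_automation_level(systems, weights):
    """Pick the welding-system automation/intelligence level with the
    highest weighted score. `systems` maps a level name to per-criterion
    scores; `weights` encodes the case at hand (batch size, quality
    requirements, etc.). All numbers here are illustrative only."""
    def score(criteria):
        return sum(weights[k] * criteria[k] for k in weights)
    return max(systems, key=lambda name: score(systems[name]))

# Invented example: a large-batch, quality-critical product weights
# productivity and quality heavily relative to investment cost.
systems = {
    "manual":     {"productivity": 2, "quality": 3, "investment": 5},
    "mechanized": {"productivity": 4, "quality": 4, "investment": 3},
    "automated":  {"productivity": 5, "quality": 5, "investment": 1},
}
weights = {"productivity": 0.5, "quality": 0.4, "investment": 0.1}
```

Changing the weights, e.g. emphasizing investment cost for small one-off batches, shifts the optimum toward less automated solutions, which is the trade-off the study's optimization captures.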