18 results for Speech processing systems.

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance: 80.00%

Abstract:

The information system infrastructure of many present-day companies has evolved into a heterogeneous environment in which systems supplied by several different vendors run on various operating system and hardware platforms. To manage such a heterogeneous environment, a company needs a centralized repository that stores information about the system environment in use and its components. For this purpose, Microsoft brought the Active Directory 2000 directory service to market in 1999. In a heterogeneous environment, user authentication and authorization is very demanding. In the worst case, a user may have dozens of username-password combinations for the company's various information systems. In addition, user-specific access rights must be maintained in every information system. From the point of view of both the user and the administrator, such a scenario is a nightmare. This Master's thesis surveys the possibilities for centralizing the authentication and authorization of Oracle database users in the Active Directory directory service. The thesis examines commercial off-the-shelf solutions suited for this purpose and investigates the possibilities of implementing a custom solution using the programming interfaces available in the environment.
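As a rough sketch of the centralized directory authentication and authorization the thesis investigates, the following Python example binds to a directory over LDAP with the user's credentials and reads the user's group memberships using the ldap3 library; the host name, domain, base DN, and group handling are illustrative assumptions, not details from the thesis or from any specific commercial product.

```python
# Hedged sketch: authenticate a user against a directory service over LDAP and
# fetch group memberships that application code could map to database roles.
# All names (host, domain, base DN) are hypothetical example values.
from ldap3 import Server, Connection, ALL, SUBTREE

def authenticate(username: str, password: str):
    """Return the user's group DNs on a successful bind, or None on failure."""
    server = Server("ldaps://ad.example.local", get_info=ALL)
    user = f"{username}@example.local"          # userPrincipalName-style login
    conn = Connection(server, user=user, password=password)
    if not conn.bind():
        return None                             # directory rejected the credentials

    conn.search(search_base="DC=example,DC=local",
                search_filter=f"(userPrincipalName={user})",
                search_scope=SUBTREE,
                attributes=["memberOf"])
    groups = list(conn.entries[0].memberOf.values) if conn.entries else []
    conn.unbind()
    return groups

if __name__ == "__main__":
    print(authenticate("testuser", "example-password"))
```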

Relevance: 80.00%

Abstract:

This Master's thesis investigates development opportunities for the information systems related to the planning and maintenance of the district heating and natural gas networks of Hämeenlinnan Energia. The existing systems are reviewed, and a need to renew the network calculation and maintenance software is identified. The candidates for a new network calculation program are narrowed down to Komartek's Flowra 32 and Process Vision's Lämpö Nexus; for maintenance, the candidates are Komartek's Lämpökunto and Kunnondata's Visual Maint. Based on the software comparisons, the thesis proposes Lämpö Nexus as the new network calculation program and Visual Maint as the maintenance program for Hämeenlinnan Energia. Deployment plans are presented for the software acquisitions. For the existing information systems, alternatives are presented for increasing the benefit obtained from the software.

Relevance: 80.00%

Abstract:

Transactions are the foundation of many modern IT services. A single transaction can be seen as a unit of work that the processing system must carry out. Transaction processing aims to keep the system in a known and consistent state. This is achieved by ensuring that each transaction either succeeds or fails in its entirety. Transaction processing systems have grown, and the number of concurrently processed transactions has risen as services increasingly move into networks. At the same time, developing and maintaining these systems becomes more difficult, so developers need better tools for monitoring the system. Transaction monitoring aims to follow the internal operation of the system at the granularity of an individual transaction or sub-transaction. Depending on the implementation, the developer can observe the system either in real time or afterwards, based on trace data stored during execution. This thesis presents the design process of a transaction monitoring component, together with its design solutions, which enables observing the execution of transactions, storing the trace data, and examining the results afterwards. The work was carried out as part of the Syncware software platform of Syncron Tech Oy.
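The all-or-nothing behaviour described above can be illustrated with a small, self-contained sketch using Python's built-in sqlite3 module; the account table and the transfer operation are invented for the example and are unrelated to the Syncware platform.

```python
# Sketch of atomic transaction handling: the unit of work either commits in
# full or is rolled back in full, keeping the database in a consistent state.
import sqlite3

def transfer(conn: sqlite3.Connection, src: int, dst: int, amount: int) -> None:
    # The connection context manager commits on success and rolls back if any
    # statement raises, so a failed transfer leaves no partial changes behind.
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, dst))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
conn.commit()
transfer(conn, 1, 2, 30)
print(conn.execute("SELECT id, balance FROM accounts").fetchall())  # [(1, 70), (2, 80)]
```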

Relevance: 80.00%

Abstract:

The objective of this study is to find out how sales management can be optimally supported with business information and knowledge. The first chapters of the study focus on theoretical literature about sales planning, sales steering, business intelligence, and knowledge management. The empirical part of the study is a case study for which the material was collected through interviews with selected people of the company. The findings from the interviews were analyzed, and possible suggestions for solving the problems were made. The case study revealed that sales management requires a multitude of metrics and reports to steer sales in the desired direction. The information sources can be internal or external, and the optimal solution for satisfying the information needs is a combination of both. Simple information should be turned into knowledge by merging the intellectual assets with the information from the firm's transaction processing systems, in order to promote organizational learning and effective decision-making.

Relevance: 80.00%

Abstract:

Novel word learning has rarely been studied in people with aphasia (PWA), although it can provide a relatively pure measure of their learning potential and thereby contribute to the development of effective aphasia treatment methods. The main aim of the present thesis was to explore the capacity of PWA for associative learning of word–referent pairings and the cognitive-linguistic factors related to it. More specifically, the thesis examined learning and long-term maintenance of the learned pairings, the role of lexical-semantic abilities in learning, as well as acquisition of phonological versus semantic information in associative novel word learning. Furthermore, the effect of modality on associative novel word learning and the neural underpinnings of successful learning were explored. The learning experiments utilized the Ancient Farming Equipment (AFE) paradigm, which employs drawings of unfamiliar referents and their unfamiliar names. Case studies of Finnish- and English-speaking people with chronic aphasia (n = 6) were conducted in the investigation. The learning results of PWA were compared to those of healthy control participants, and active production of the novel words and their semantic definitions were used as learning outcome measures. PWA learned novel word–novel referent pairings, but the variation between individuals was very wide, from more modest outcomes (Studies I–II) up to levels on a par with healthy individuals (Studies III–IV). In incidental learning of semantic definitions, none of the PWA reached the performance level of the healthy control participants. Some PWA maintained part of the learning outcomes up to months post-training, and one individual showed full maintenance of the novel words at six months post-training (Study IV). Intact lexical-semantic processing skills promoted learning in PWA (Studies I–II), but poor phonological short-term memory capacity did not rule out novel word learning. In two PWA with successful learning and long-term maintenance of novel word–novel referent pairings, learning relied on orthographic input, while auditory input led to significantly inferior learning outcomes (Studies III–IV). In one of these individuals, this previously undetected modality-specific learning ability was successfully translated into training with familiar but inaccessible everyday words (Study IV). Functional magnetic resonance imaging revealed that this individual had a disconnected dorsal speech processing pathway in the left hemisphere, but a right-hemispheric neural network mediated successful novel word learning via reading. Finally, the results of Study III suggested that the cognitive-linguistic profile may not always predict the optimal learning channel for an individual with aphasia. Small-scale learning probes therefore seem useful in revealing functional learning channels in post-stroke aphasia.

Relevance: 30.00%

Abstract:

Producing operational data for end users with analytical examination in mind causes problems for many companies. This Master's thesis aims to solve this problem at Teleste Oyj. The work is divided into three main chapters. Chapter 2 clarifies the concept of On-Line Analytical Processing (OLAP). Chapter 3 introduces a few OLAP product vendors and their architectures, typical application areas, and issues to be considered when deploying OLAP. Chapter 4 presents the actual solution. The technical architecture plays a significant role in the structure of the solution; here, Microsoft's data warehousing framework has been applied. As Chapter 4 progresses, transaction processing data is turned into information and further into knowledge for the end users. End users are equipped with an efficient, real-time analysis tool in a multidimensional environment. Although inventory turnover is taken as the example application in the work, the work does not attempt to find an optimal level for Teleste's inventories. Nevertheless, some improvement suggestions are mentioned.
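As a rough illustration of the multidimensional, OLAP-style analysis referred to above, the following sketch rolls transaction-level rows up into a product-by-month view with pandas and derives an inventory turnover figure; the column names and numbers are fabricated for the example and do not describe Teleste's data or the Microsoft framework used in the thesis.

```python
# OLAP-style aggregation sketch: pivot transaction-level data into a
# product-by-month slice and compute inventory turnover from it.
import pandas as pd

rows = pd.DataFrame({
    "product":   ["A", "A", "B", "B", "A", "B"],
    "month":     ["2023-01", "2023-02", "2023-01", "2023-02", "2023-02", "2023-02"],
    "cogs":      [100, 120, 80, 90, 60, 40],   # cost of goods sold per row
    "avg_stock": [50, 50, 40, 45, 50, 45],     # average inventory value per row
})

cube = pd.pivot_table(rows, index="product", columns="month",
                      values=["cogs", "avg_stock"],
                      aggfunc={"cogs": "sum", "avg_stock": "mean"})

# Inventory turnover = cost of goods sold / average inventory held.
turnover = cube["cogs"] / cube["avg_stock"]
print(turnover.round(2))
```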

Relevance: 30.00%

Abstract:

Industrial applications nowadays increasingly require real-time data processing. Reliability is one of the most important properties of a system capable of real-time data processing. To achieve it, both the hardware and the software must be tested. The main goal of this thesis is hardware testing and hardware testability, because a reliable hardware platform is the foundation for future real-time systems. The thesis presents the design of a processor board suited for digital signal processing. The processor board is intended for predictive condition monitoring of electrical machines. The latest Design for Testability (DFT) methods are introduced and applied in the design of the processor board together with older methods. Experiences and observations on the applicability of the methods are reported at the end of the work. The aim of the work is to develop a sub-component for a web-based monitoring system that has been developed at the Department of Electrical Engineering at Lappeenranta University of Technology.
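The kind of signal processing such a board is meant to run, spectral analysis of a measured signal for predictive condition monitoring, can be sketched in a few lines of NumPy; the sampling rate, the synthetic signal, and the 237 Hz "fault" component are arbitrary example values, not parameters from the thesis.

```python
# Sketch of the DSP task behind predictive condition monitoring: compute the
# amplitude spectrum of a measured signal and flag unusually strong components.
import numpy as np

fs = 10_000                      # sampling rate [Hz], example value
t = np.arange(0, 1.0, 1 / fs)    # one second of samples
# Synthetic measurement: 50 Hz supply component plus a weak 237 Hz "fault" tone.
signal = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 237 * t)
signal += 0.01 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Report peaks above a simple threshold, ignoring the 50 Hz fundamental.
threshold = 0.01
for f, a in zip(freqs, spectrum):
    if a > threshold and f > 60:
        print(f"possible fault component at {f:.1f} Hz (amplitude {a:.3f})")
```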

Relevance: 30.00%

Abstract:

The flow of information within the modern information society has increased rapidly over the last decade. The major part of this information flow relies on the individual's ability to handle text or speech input. For the majority of us it presents no problems, but there are some individuals who would benefit from other means of conveying information, e.g. signed information flow. During the last decades, new results from various disciplines have all pointed towards a common background and processing for sign and speech, and this was one of the key issues that I wanted to investigate further in this thesis. The basis of this thesis is firmly within speech research, and that is why I wanted to design analogous test batteries for widely used speech perception tests for signers: to find out whether the results for signers would be the same as in speakers' perception tests. One of the key findings within biology, and more precisely in its effects on speech and communication research, is the mirror neuron system. That finding has enabled us to form new theories about the evolution of communication, and it all seems to converge on the hypothesis that all communication has a common core within humans. In this thesis speech and sign are discussed as equal and analogical counterparts of communication, and all research methods used in speech are modified for sign. Both speech and sign are thus investigated using similar test batteries. Furthermore, both production and perception of speech and sign are studied separately. An additional framework for studying production is given by gesture research using cry sounds. Results of cry sound research are then compared to results from children acquiring sign language. These results show that individuality manifests itself from very early on in human development. Articulation in adults, both in speech and sign, is studied from two perspectives: normal production and re-learning production when the apparatus has been changed. Normal production is studied both in speech and sign, and the effects of changed articulation are studied with regard to speech. Both these studies are done using carrier sentences. Furthermore, sign production is studied by giving the informants the possibility for spontaneous speech. The production data from the signing informants is also used as the basis for input in the sign synthesis stimuli used in the sign perception test battery. Speech and sign perception were studied using the informants' answers in forced-choice identification and discrimination tasks. These answers were then compared across language modalities. Three different informant groups participated in the sign perception tests: native signers, sign language interpreters, and Finnish adults with no knowledge of any signed language. This gave a chance to investigate which of the characteristics found in the results were due to the language per se and which were due to the change in modality itself. As the analogous test batteries yielded similar results over different informant groups, some common threads of results could be observed. Starting from very early on in acquiring speech and sign, the results were highly individual. However, the results were the same within one individual when the same test was repeated. This individuality of results manifested along the same patterns across different language modalities and, on some occasions, across language groups.
As both modalities yield similar answers to analogous study questions, this has led us to provide methods for basic input for sign language applications, i.e. signing avatars. This has also given us answers to questions on the precision of the animation and intelligibility for the users: what are the parameters that govern the intelligibility of synthesised speech or sign, and how precise must the animation or synthetic speech be in order for it to be intelligible. The results also give additional support to the well-known fact that intelligibility is not the same as naturalness. In some cases, as shown within the sign perception test battery design, naturalness decreases intelligibility. This also has to be taken into consideration when designing applications. All in all, results from each of the test batteries, be they for signers or speakers, yield strikingly similar patterns, which would indicate yet further support for a common core for all human communication. Thus, we can modify and deepen the phonetic framework models for human communication based on the knowledge obtained from the results of the test batteries within this thesis.
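A small sketch of how the forced-choice identification answers mentioned above can be scored and compared across informant groups; the group labels, stimuli, and responses are fabricated placeholders, not data from the test batteries.

```python
# Sketch: score forced-choice identification responses per informant group and
# compare accuracy across groups (e.g. native signers vs. non-signers).
from collections import defaultdict

# (group, presented stimulus, chosen response) -- fabricated example data
responses = [
    ("native",      "sign_A", "sign_A"),
    ("native",      "sign_B", "sign_B"),
    ("interpreter", "sign_A", "sign_A"),
    ("interpreter", "sign_B", "sign_A"),
    ("naive",       "sign_A", "sign_B"),
    ("naive",       "sign_B", "sign_B"),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, stimulus, answer in responses:
    total[group] += 1
    correct[group] += int(answer == stimulus)

for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%} correct identification")
```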

Relevance: 30.00%

Abstract:

Multiprocessing is a promising solution to meet the requirements of near-future applications. To get the full benefit from parallel processing, a many-core system needs an efficient on-chip communication architecture. Network-on-Chip (NoC) is a general-purpose communication concept that offers high throughput, reduces power consumption, and keeps complexity in check through a regular composition of basic building blocks. This thesis presents power-efficient communication approaches for networked many-core systems. We address a range of issues important for designing power-efficient many-core systems at two different levels: the network level and the router level. From the network-level point of view, exploiting state-of-the-art concepts such as Globally Asynchronous Locally Synchronous (GALS), Voltage/Frequency Island (VFI), and 3D Networks-on-Chip approaches may be a solution to the excessive power consumption demanded by today's and future many-core systems. To this end, a low-cost 3D NoC architecture, based on high-speed GALS-based vertical channels, is proposed to mitigate the high peak temperatures, power densities, and area footprints of vertical interconnects in 3D ICs. To further exploit the beneficial feature of a negligible inter-layer distance in 3D ICs, we propose a novel hybridization scheme for inter-layer communication. In addition, an efficient adaptive routing algorithm is presented which enables congestion-aware and reliable communication for the hybridized NoC architecture. An integrated monitoring and management platform on top of this architecture is also developed in order to implement more scalable power optimization techniques. From the router-level perspective, four design styles for implementing power-efficient reconfigurable interfaces in VFI-based NoC systems are proposed. To enhance the utilization of virtual channel buffers and to manage their power consumption, a partial virtual channel sharing method for NoC routers is devised and implemented. Extensive experiments with synthetic and real benchmarks show significant power savings and mitigated hotspots with similar performance compared to the latest NoC architectures. The thesis concludes that carefully co-designed elements from different network levels enable considerable power savings for many-core systems.
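To make the idea of congestion-aware adaptive routing more concrete, here is a toy sketch for a 2D mesh NoC that, at each hop, chooses among the minimal-path output directions the one whose downstream buffer is least occupied; it assumes buffer-occupancy figures are available at the router and is not the routing algorithm proposed in the thesis.

```python
# Toy sketch of congestion-aware minimal adaptive routing on a 2D mesh:
# at each router, choose among the productive directions (those that reduce
# the distance to the destination) the one whose downstream buffer is emptiest.
from typing import Dict, Tuple

Coord = Tuple[int, int]

def route_step(cur: Coord, dst: Coord, occupancy: Dict[str, int]) -> str:
    """Return the output port ('N', 'S', 'E', 'W' or 'LOCAL') for one hop.
    `occupancy` maps each port to the fill level of the neighbouring buffer."""
    x, y = cur
    dx, dy = dst[0] - x, dst[1] - y
    candidates = []
    if dx > 0: candidates.append("E")
    if dx < 0: candidates.append("W")
    if dy > 0: candidates.append("N")
    if dy < 0: candidates.append("S")
    if not candidates:
        return "LOCAL"           # packet has arrived
    # Adaptivity: among minimal directions, prefer the least congested link.
    return min(candidates, key=lambda port: occupancy[port])

# Example: two productive directions, East link more congested than North.
print(route_step((1, 1), (3, 3), {"N": 2, "S": 0, "E": 5, "W": 1}))  # -> 'N'
```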

Relevance: 30.00%

Abstract:

The purpose of the study is to examine and increase knowledge of customer knowledge processing in a B2B context from a sales perspective. Further objectives include identifying possible inhibiting and enabling factors in each phase of the process. The theoretical framework is based on customer knowledge management literature. The study is a qualitative study in which the research method utilized is a case study. The empirical part was implemented in a case company by conducting in-depth interviews with the company's value-selling champions located internationally. The context was the maintenance business. Altogether 17 interviews were conducted. The empirical findings indicate that customer knowledge processing has not been clearly defined within the maintenance business line. The main inhibiting factors in acquiring customer knowledge are lack of time and the vast amount of customer knowledge received. The enabling factors recognized are good customer relationships and sales representatives' communication skills. Internal dissemination of knowledge is mainly inhibited by lack of time and by restrictions in customer relationship management systems. Enabling factors are the composition of the sales team and updated customer knowledge. Utilization is inhibited by a lack of goals for using the customer knowledge and by the low quality of the knowledge. Moreover, customer knowledge is neither systematically updated nor analysed. Management of customer knowledge is based on the CRM system. As an implication of the study, it is suggested that the case company define customer knowledge processing in order to support the maintenance business process.

Relevance: 30.00%

Abstract:

The three alpha2-adrenoceptor (alpha2-AR) subtypes belong to the G protein-coupled receptor superfamily and represent potential drug targets. These receptors have many vital physiological functions, but their actions are complex and often oppose each other. Current research is therefore driven towards discovering drugs that selectively interact with a specific subtype. Cell model systems can be used to evaluate a chemical compound's activity in complex biological systems. The aim of this thesis was to optimize and validate cell-based model systems and assays to investigate alpha2-ARs as drug targets. The use of immortalized cell lines as model systems is firmly established but poses several problems, since the protein of interest is expressed in a foreign environment, and thus essential components of receptor regulation or signaling cascades might be missing. Careful cell model validation is thus required; this was exemplified by three different approaches. In cells heterologously expressing alpha2A-ARs, it was noted that the transfection technique affected the test outcome; false negative adenylyl cyclase test results were produced unless a cell population expressing receptors in a homogeneous fashion was used. Recombinant alpha2C-ARs in non-neuronal cells were retained inside the cells and not expressed in the cell membrane, complicating investigation of this receptor subtype. Receptor expression enhancing proteins (REEPs) were found to be neuronal-specific adapter proteins that regulate the processing of the alpha2C-AR, resulting in an increased level of total receptor expression. Current trends call for the use of primary cells endogenously expressing the receptor of interest; therefore, primary human vascular smooth muscle cells (SMC) expressing alpha2-ARs were tested in a functional assay monitoring contractility with a myosin light chain phosphorylation assay. However, these cells were not compatible with this assay due to the loss of differentiation. A rat aortic SMC line transfected to express the human alpha2B-AR was adapted for the assay, and it was found that the alpha2-AR agonist dexmedetomidine evoked myosin light chain phosphorylation in this model.

Relevance: 30.00%

Abstract:

Advancements in IC processing technology have led to the innovation and growth happening in the consumer electronics sector and to the evolution of the IT infrastructure supporting this exponential growth. One of the most difficult obstacles to this growth is the removal of the large amount of heat generated by the processing and communicating nodes of the system. The scaling down of technology and the increase in power density have a direct and consequential effect on the rise in temperature. This has resulted in increased cooling budgets, and it affects both the lifetime reliability and the performance of the system. Hence, reducing on-chip temperatures has become a major design concern for modern microprocessors. This dissertation addresses the thermal challenges at different levels for both 2D planar and 3D stacked systems. It proposes a self-timed thermal monitoring strategy based on the liberal use of on-chip thermal sensors, which makes use of noise-variation-tolerant and leakage-current-based thermal sensing for monitoring purposes. In order to study thermal management issues from the early design stages, accurate thermal modeling and analysis at design time is essential. In this regard, the spatial temperature profile of the global Cu nanowire for on-chip interconnects has been analyzed. The dissertation presents a 3D thermal model of a multicore system in order to investigate the effects of hotspots and the placement of silicon die layers on the thermal performance of a modern flip-chip package. For a 3D stacked system, the primary design goal is to maximise performance within the given power and thermal envelopes. Hence, a thermally efficient routing strategy for 3D NoC-Bus hybrid architectures has been proposed to mitigate on-chip temperatures by herding most of the switching activity to the die which is closer to the heat sink. Finally, an exploration of various thermal-aware placement approaches for both 2D and 3D stacked systems has been presented. Various thermal models have been developed and thermal control metrics have been extracted. An efficient thermal-aware application mapping algorithm for a 2D NoC has been presented. It has been shown that the proposed mapping algorithm reduces the effective area reeling under high temperatures when compared to the state of the art.
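A toy illustration of thermal-aware task mapping in the spirit described above: the most power-hungry tasks are placed first, each on the free tile whose neighbourhood currently carries the least power, so hot tasks are not stacked next to each other; the greedy heuristic and the power figures are assumptions for the sketch, not the algorithm evaluated in the dissertation.

```python
# Greedy sketch of thermal-aware mapping on a small 2D mesh: place the most
# power-hungry tasks first, each on the free tile whose neighbourhood has the
# least power already assigned, so hotspots are not clustered together.
from itertools import product

MESH = 3                                    # 3x3 mesh of tiles
tasks = {"t0": 2.5, "t1": 2.0, "t2": 1.8, "t3": 0.7, "t4": 0.5}  # power [W]

tile_power = {tile: 0.0 for tile in product(range(MESH), range(MESH))}
mapping = {}

def neighbourhood_power(tile):
    """Sum of power already placed on the tile and its 8 mesh neighbours."""
    x, y = tile
    return sum(p for (tx, ty), p in tile_power.items()
               if abs(tx - x) <= 1 and abs(ty - y) <= 1)

for task, power in sorted(tasks.items(), key=lambda kv: kv[1], reverse=True):
    free = [t for t in tile_power if t not in mapping.values()]
    best = min(free, key=neighbourhood_power)   # coolest neighbourhood first
    mapping[task] = best
    tile_power[best] += power

print(mapping)
```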

Relevance: 30.00%

Abstract:

Personalized nanomedicine has been shown to provide advantages over traditional clinical imaging, diagnosis, and conventional medical treatment. Using nanoparticles can enhance and sharpen clinical targeting and imaging, guiding the agents exactly to the place in the body that is the goal of treatment. At the same time, one can reduce the side effects that usually occur in the parts of the body that are not targets of treatment. Nanoparticles are of a size that can penetrate into cells. Their surface functionalization offers a way to increase their sensitivity when detecting target molecules. In addition, it increases the potential for flexibility in particle design, their therapeutic function, and variation possibilities in diagnostics. Mesoporous nanoparticles of amorphous silica have attractive physical and chemical characteristics such as particle morphology, controllable pore size, and high surface area and pore volume. Additionally, the surface functionalization of silica nanoparticles is relatively straightforward, which enables optimization of the interaction between the particles and the biological system. The main goal of this study was to prepare traceable and targetable silica nanoparticles for medical applications, with a special focus on particle dispersion stability, biocompatibility, and targeting capabilities. Nanoparticle properties are highly particle-size dependent, and a good dispersion stability is a prerequisite for active therapeutic and diagnostic agents. The study showed that traceable streptavidin-conjugated silica nanoparticles which exhibit a good dispersibility could be obtained by a suitable choice of surface functionalization route. Theranostic nanoparticles should exhibit sufficient hydrolytic stability to effectively carry the medicine to the target cells, after which they should disintegrate and dissolve. Furthermore, the surface groups should stay at the particle surface until the particle has been internalized by the cell, in order to optimize cell specificity. Model particles with fluorescently labeled regions were tested in vitro using light microscopy and image processing technology, which allowed a detailed study of the disintegration and dissolution process. The study showed that nanoparticles degrade more slowly outside the cell than inside it. The main advantage of theranostic agents is their successful targeting in vitro and in vivo. Non-porous nanoparticles using monoclonal antibodies as guiding ligands were tested in vitro in order to follow their targeting ability and internalization. In addition to the targeting, which was found to be successful, a specific internalization route for the particles could be detected. In the last part of the study, the objective was to clarify the feasibility of traceable mesoporous silica nanoparticles, loaded with a hydrophobic cancer drug, being applied for targeted drug delivery in vitro and in vivo. The particles were provided with a small molecular targeting ligand. In the study, a significantly higher therapeutic effect could be achieved with the nanoparticles than with the free drug. The nanoparticles were biocompatible and stayed in the tumor for a longer time than the free drug did, before being eliminated by renal excretion. Overall, the results showed that mesoporous silica nanoparticles are biocompatible, biodegradable drug carriers and that cell specificity can be achieved both in vitro and in vivo.

Relevance: 30.00%

Abstract:

Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been to raise the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications. With their computational power, these platforms are likely to be used in various application domains: from home-use electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration results in an increased probability of various faults and in the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.

Relevance: 30.00%

Abstract:

Since cellulose is a linear macromolecule, it can be used as a material for regenerated cellulose fiber products, e.g. in textile fiber or film manufacturing. Cellulose is not thermoformable; thus the manufacturing of these regenerated fibers is mainly possible through dissolution processes preceding the regeneration process. However, the dissolution of cellulose in common solvents is hindered by inter- and intra-molecular hydrogen bonds in the cellulose chains and by relatively high crystallinity. Interestingly, at subzero temperatures relatively dilute sodium hydroxide solutions can be used to dissolve cellulose to a certain extent. The objective of this work was to investigate the factors that may govern the solubility of cellulose in aqueous NaOH and the stability of the resulting solutions. Cellulose-NaOH solutions have a tendency to form a gel over time and at elevated temperature, which creates challenges for further processing. The main target of this work was to achieve high solubility of cellulose in aqueous NaOH without excessively compromising the solution stability. The literature survey gives an overview of cellulose dissolution and reviews the possible factors contributing to the solubility and solution properties of cellulose in aqueous NaOH. Furthermore, the concept of solution rheology is discussed. In the experimental part, the focus was on the characterization of the materials used and on the properties of the prepared solutions, mainly concentrating on cellulose solubility and solution stability.