53 results for fuzzy based evaluation method
Abstract:
Regional innovation is a complex phenomenon that often resides in the field of mutual interaction between local actors. It has therefore traditionally been considered difficult to measure. This work applied Data Envelopment Analysis (DEA), a method that has previously proven successful in cases where the relationships between the measured inputs and outputs are not obvious. A conceptual model of the inputs and outputs of regional innovation was created, and on its basis a set of 12 statistical indicators was selected. Using Eurostat as the data source, source data for eight of the variables was obtained at the regional level, and the indicator set was supplemented with one national-level variable. The evaluation was ultimately carried out for 45 European regions. The focus of the study was to assess the suitability of the DEA method for measuring an innovation system, as the method had not previously been applied to such a case. The first results showed generally excessively high efficiency scores. Corrective measures to improve the discriminatory power were introduced and applied, after which more realistic results and a ranking of the evaluated regions were obtained. The DEA method was found to be a powerful and interesting tool for developing evaluation practices and innovation policy, provided that the data availability problems can be solved and the model itself refined.
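To make the evaluation approach concrete, the following is a minimal input-oriented CCR DEA sketch in Python. The data, the three-unit setup, and the indicator names are illustrative assumptions, not the thesis's 12-indicator Eurostat model.

```python
# A minimal input-oriented CCR DEA sketch (hypothetical data, not the
# thesis's 12-indicator model), using scipy's linear programming solver.
import numpy as np
from scipy.optimize import linprog

# Rows = inputs/outputs, columns = decision-making units (e.g. regions).
X = np.array([[20., 30., 40.],    # input 1 (e.g. R&D expenditure)
              [10., 15., 12.]])   # input 2 (e.g. researchers)
Y = np.array([[50., 80., 60.]])   # output (e.g. patent applications)

n_units = X.shape[1]

def ccr_efficiency(o: int) -> float:
    """Input-oriented CCR efficiency of unit o: minimize theta subject to
    X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0."""
    c = np.zeros(1 + n_units)
    c[0] = 1.0                                    # minimize theta
    # Input constraints:  X @ lam - theta * x_o <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(X.shape[0])
    # Output constraints: -Y @ lam <= -y_o
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n_units)
    return res.fun  # theta = 1.0 means the unit lies on the frontier

for o in range(n_units):
    print(f"unit {o}: efficiency = {ccr_efficiency(o):.3f}")
```

The excessively high scores mentioned above are a known DEA symptom when the number of indicators is large relative to the number of units, which is why discrimination-improving measures were needed.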
Abstract:
The aim of this Master's thesis was to examine what kinds of changes a new SAP-based system causes in the procurement processes of a forest industry company and in the work of its buyers. The situation was also examined as a business process re-engineering project. The study was a qualitative case study whose sources were interviews and process descriptions. The procurement processes have been standardized and described in detail, because the system is to be rolled out gradually across all of the company's sites. The theoretical part covered global sourcing, different order types, electronic business, and the implementation of redesigned business processes together with the associated challenges. The company aims to develop its procurement and to exploit the economies of scale its size provides; the new system is a significant part of this development work. Based on the interviews, the new system is welcomed and carries high expectations. Implementing the system will be a challenging task, because it has many users and end users will enter more transactions into the system than before, in the form of order initiations and delivery call-offs. The role of mill buyers will change: as routine ordering decreases, they will act as links between the centralized procurement organization and the mill, sharing information in both directions.
Abstract:
The changing business environment demands that chemical industrial processes be designed such that they enable the attainment of multi-objective requirements and the enhancement of innovative design activities. The requirements and key issues for conceptual process synthesis have changed and are no longer those of conventional process design; there is an increased emphasis on innovative research to develop new concepts, novel techniques and processes. A central issue, how to enhance the creativity of the design process, requires further research into methodologies. The thesis presents a conflict-based methodology for conceptual process synthesis. The motivation of the work is to support decision-making in design and synthesis and to enhance the creativity of design activities. It deals with the multi-objective requirements and combinatorially complex nature of process synthesis. The work is carried out based on a new concept and design paradigm adapted from the Theory of Inventive Problem Solving (TRIZ) methodology. TRIZ is claimed to be a 'systematic creativity' framework thanks to its knowledge-based and evolutionary-directed nature. The conflict concept, when applied to process synthesis, throws new light on design problems and activities. The conflict model is proposed as a way of describing design problems and handling design information. The design tasks are represented as groups of conflicts, and a conflict table is built as the design tool. The general design paradigm is formulated to handle conflicts in both the early and detailed design stages. The methodology developed reflects the conflict nature of process design and synthesis. The method is implemented and verified through case studies of distillation system design, reactor/separator network design and waste minimization. Handling the various levels of conflicts evolves possible design alternatives in a systematic procedure, which consists of establishing an efficient and compact solution space for the detailed design stage. The approach also provides the information to bridge the gap between the application of qualitative knowledge in the early stage and quantitative techniques in the detailed design stage. Enhancement of creativity is realized through the better understanding of the design problems gained from the conflict concept and through the improvement in engineering design practice via the systematic nature of the approach.
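As an informal illustration of the conflict-table idea described above, the sketch below encodes a few invented design conflicts and candidate resolution directions as a Python lookup table. The entries and the lookup function are hypothetical examples, not taken from the thesis.

```python
# An illustrative (hypothetical) conflict table in the spirit of the
# TRIZ-adapted methodology: each design conflict pairs an improving
# objective with a degrading one and lists candidate resolution
# directions. All entries are invented examples.
conflict_table = {
    ("purity", "energy consumption"): [
        "thermally coupled distillation",
        "heat integration between columns",
    ],
    ("conversion", "reactor volume"): [
        "recycle unconverted feed",
        "distributed side feeding",
    ],
    ("waste minimization", "capital cost"): [
        "mass-exchange network synthesis",
    ],
}

def resolutions(improving: str, degrading: str):
    """Look up candidate directions for resolving a design conflict."""
    return conflict_table.get((improving, degrading), ["no entry yet"])

print(resolutions("purity", "energy consumption"))
```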
Abstract:
Requirements-related issues have been found to be the third most important risk factor in software projects and the biggest reason for software project failures. This is not a surprise, since requirements engineering (RE) practices have been reported deficient in more than 75% of all enterprises. A problem analysis of small and low-maturity software organizations revealed two central reasons for not starting process improvement efforts: lack of resources and uncertainty about process improvement effort paybacks. In the constructive part of the study a basic RE method, BaRE, was developed to provide an easy-to-adopt way to introduce basic systematic RE practices in small and low-maturity organizations. Based on the diffusion of innovations literature, thirteen desirable characteristics were identified for the solution, and the method was implemented in five key components: requirements document template, requirements development practices, requirements management practices, tool support for requirements management, and training. The empirical evaluation of the BaRE method was conducted in three industrial case studies. In this evaluation, two companies established a completely new RE infrastructure following the suggested practices, while the third company continued requirements document template development based on the provided template and used it extensively in practice. The real benefits of the adoption of the method were visible in the companies within four to six months from the start of the evaluation project, and the two small companies in the project completed their improvement efforts with an input equal to about one person-month. The data collected in the case studies indicates that the companies implemented the new practices with few adaptations and little effort. Thus it can be concluded that the constructed BaRE method is indeed easy to adopt and can help introduce basic systematic RE practices in small organizations.
Abstract:
The application of forced unsteady-state reactors to the selective catalytic reduction of nitrogen oxides (NOx) with ammonia (NH3) is sustained by the fact that favorable temperature and composition distributions, which cannot be achieved in any steady-state regime, can be obtained by means of unsteady-state operation. In the normal mode of operation the low exothermicity of the selective catalytic reduction (SCR) reaction (usually carried out in the range of 280-350°C) is not enough to sustain the chemical reaction by itself. Normal operation therefore usually requires a supply of supplementary heat, increasing the overall process operation cost. Through forced unsteady-state operation, the main advantage that can be obtained when exothermic reactions take place is the possibility of trapping, beside the ammonia, the moving heat wave inside the catalytic bed. Unsteady-state operation enables the exploitation of the thermal storage capacity of the catalytic bed, which acts as a regenerative heat exchanger allowing auto-thermal behaviour when the adiabatic temperature rise is low. Finding the optimum reactor configuration, employing the most suitable operation model and identifying the reactor behavior are highly important steps in configuring a proper device for industrial applications. The Reverse Flow Reactor (RFR), a forced unsteady-state reactor, corresponds to the above-mentioned characteristics and may be employed as an efficient device for the treatment of dilute pollutant mixtures. Beside its advantages, the main disadvantage of the RFR is the 'wash out' phenomenon: emissions of unconverted reactants at every switch of the flow direction. As a consequence, our attention was focused on finding an alternative reactor configuration to the RFR that is not affected by uncontrollable emissions of unconverted reactants. In this respect the Reactor Network (RN) was investigated. Its configuration consists of several reactors connected in a closed sequence, simulating a moving bed by changing the reactant feeding position. In the RN the flow direction is maintained, ensuring uniform catalyst exploitation, and at the same time the 'wash out' phenomenon is eliminated. The simulated moving bed (SMB) can operate in transient mode, giving practically constant exit concentration and high conversion levels. The main advantage of reactor network operation is the possibility of obtaining auto-thermal behavior with nearly uniform catalyst utilization. However, the reactor network presents only a small range of switching times which allow the reactor to reach and maintain an ignited state. Even so, a proper study of the complex behavior of the RN may give the information necessary to overcome the difficulties that can appear in RN operation. The complexity of unsteady-state reactors arises from the fact that these reactor types are characterized by short contact times and complex interaction between heat and mass transport phenomena. Such interactions can give rise to remarkably complex dynamic behavior characterized by spatio-temporal patterns, chaotic changes in concentration and traveling waves of heat or chemical reactivity. The main efforts of current research concern the improvement of contact modalities between reactants, the possibility of thermal wave storage inside the reactor and the improvement of the kinetic activity of the catalyst used.
Attention to the above-mentioned aspects is important when higher activity, even at low feed temperatures, and low emissions of unconverted reactants are the main operating concerns. Also, the prediction of the reactor pseudo-steady or steady-state performance (regarding conversion, selectivity and thermal behavior) and the dynamic reactor response during exploitation are important aspects in finding the optimal control strategy for forced unsteady-state catalytic tubular reactors. The design of an adapted reactor requires knowledge of the influence of its operating conditions on the overall process performance and a precise evaluation of the operating parameter range for which a sustained dynamic behavior is obtained. An a priori estimation of the system parameters results in a diminution of the computational effort, since the convergence of unsteady-state reactor systems usually requires integration over hundreds of cycles, depending on the initial guess of the parameter values. The investigation of various operation models and thermal transfer strategies gives reliable means to obtain recuperative and regenerative devices which are capable of maintaining auto-thermal behavior in the case of low exothermic reactions. In the present research work a gradual analysis of the SCR of NOx with ammonia in forced unsteady-state reactors was carried out. The investigation covers the presentation of the general problems related to the effect of noxious emissions on the environment, the analysis of the catalyst types suitable for the process, the mathematical approach for modeling and finding the system solutions, and the experimental investigation of the device found to be most suitable for the present process. In order to gain information about forced unsteady-state reactor design, operation, important system parameters and their values, mathematical description, mathematical methods for solving systems of partial differential equations and other specific aspects, in a fast and easy way, a case-based reasoning (CBR) approach was used. This approach, using the experience of past similar problems and their adapted solutions, may provide a method for gaining information and solutions for new problems related to forced unsteady-state reactor technology. As a consequence, a CBR system was implemented and a corresponding tool was developed. Further on, dropping the hypothesis of isothermal operation, the feasibility of the SCR of NOx with ammonia in the RFR and in the RN with variable feeding position was investigated by means of numerical simulation. The hypothesis of non-isothermal operation was taken into account because, in our opinion, if a commercial catalyst is considered it is not possible to modify its chemical activity and adsorptive capacity to improve the operation, but it is possible to change the operating regime. In order to identify the most suitable device for the unsteady-state reduction of NOx with ammonia, from the perspective of recuperative and regenerative devices, a comparative analysis of the performance of the two devices was carried out. The assumption of isothermal conditions at the beginning of the forced unsteady-state investigation allowed a simplification of the analysis, enabling a focus on the impact of the conditions and mode of operation on the dynamic features caused by the trapping of one reactant in the reactor, without considering the impact of the thermal effect on overall reactor performance.
The non-isothermal system approach was investigated in order to point out the important influence of the thermal effect on overall reactor performance, studying the possibility of using the RFR and RN as recuperative and regenerative devices and the possibility of achieving sustained auto-thermal behavior for the low exothermic SCR of NOx with ammonia with low-temperature gas feeding. Beside the influence of the thermal effect, the influence of the principal operating parameters, such as switching time, inlet flow rate and initial catalyst temperature, was stressed. This analysis is important not only because it allows a comparison between the two devices and optimisation of the operation, but also because the switching time is the main operating parameter: an appropriate choice of this parameter enables the fulfilment of the process constraints. The level of conversion achieved, the more uniform temperature profiles, the uniformity of catalyst exploitation and the much simpler mode of operation established the RN as a much more suitable device for SCR of NOx with ammonia, both in normal operation and from the perspective of control strategy implementation. Simplified theoretical models were also proposed to describe the performance of forced unsteady-state reactors and to estimate their internal temperature and concentration profiles. The general idea was to extend the study of catalytic reactor dynamics to perspectives that had not yet been analyzed. The experimental investigation of the RN revealed good agreement between the data obtained by model simulation and those obtained experimentally.
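The heat-trapping effect behind the RFR can be conveyed with a heavily simplified numerical sketch: a single advected temperature front in a 1D bed whose flow direction is reversed periodically. All parameter values are illustrative, and the reaction kinetics, mass balance and real SCR chemistry of the thesis are deliberately omitted.

```python
# A heavily simplified 1D sketch of the flow-reversal idea (not the
# thesis's SCR model): a temperature front advected through a catalytic
# bed whose flow direction is switched periodically, illustrating how
# the hot zone stays inside the bed instead of being blown out one end.
import numpy as np

n_cells  = 200          # spatial cells along a 1 m bed
dz       = 0.005        # cell size [m]
u        = 0.01         # effective thermal-front velocity [m/s]
dt       = 0.05         # time step [s]; u*dt/dz = 0.1 satisfies CFL
t_switch = 60.0         # flow-reversal period [s]
T_feed   = 300.0        # cold feed temperature [K]
T = np.full(n_cells, 600.0)   # preheated (ignited) bed

def step(T, forward: bool):
    """One explicit upwind advection step; the feed enters at the
    upstream end determined by the current flow direction."""
    c = u * dt / dz
    T_new = T.copy()
    if forward:
        T_new[1:] -= c * (T[1:] - T[:-1])
        T_new[0]  -= c * (T[0] - T_feed)
    else:
        T_new[:-1] -= c * (T[:-1] - T[1:])
        T_new[-1]  -= c * (T[-1] - T_feed)
    return T_new

t = 0.0
while t < 600.0:
    forward = int(t // t_switch) % 2 == 0   # reverse flow every t_switch
    T = step(T, forward)
    t += dt

# With reversals, the thermal wave oscillates inside the bed; without
# them it would simply be pushed out of the downstream end.
print(f"max bed temperature after 600 s: {T.max():.0f} K")
```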
Abstract:
The objective of this thesis is to examine which method of entering foreign markets would best suit the case company. All common international market entry methods are presented, along with their advantages and disadvantages. After assessing the commissioning company's resources, expectations and requirements, it is concluded that a cooperative market entry is the most viable option. The best-suited partner company is then selected from a pre-screened group of candidates, and its cooperation compatibility with the case company is tested. The compatibility between the companies is evaluated by analyzing them through interviews and the theories presented in the thesis. The compatibility is found to be good, covering 71 percent of the analyzed items. Twenty-nine percent of the items are found to be ones where the mutual understanding between the companies does not meet the commissioning company's minimum requirements. These items will be used as the basis for planning further negotiations to launch the cooperation.
Abstract:
Virtual screening is a central technique in drug discovery today. Millions of molecules can be tested in silico with the aim of selecting only the most promising ones and testing them experimentally. The topic of this thesis is ligand-based virtual screening tools, which take existing active molecules as the starting point for finding new drug candidates. One goal of this thesis was to build a model that gives the probability that two molecules are biologically similar as a function of one or more chemical similarity scores. Another important goal was to evaluate how well different ligand-based virtual screening tools are able to distinguish active molecules from inactive ones. One more criterion set for the virtual screening tools was their applicability to scaffold-hopping, i.e. finding new active chemotypes. In the first part of the work, a link was defined between the abstract chemical similarity score given by a screening tool and the probability that the two molecules are biologically similar. These results help to decide objectively which virtual screening hits to test experimentally. The work also resulted in a new type of data fusion method for using two or more tools. In the second part, five ligand-based virtual screening tools were evaluated and their performance was found to be generally poor. Three reasons for this were proposed: false negatives in the benchmark sets, active molecules that do not share the binding mode, and activity cliffs. In the third part of the study, a novel visualization and quantification method is presented for evaluating the scaffold-hopping ability of virtual screening tools.
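The idea of mapping a raw similarity score to a probability of biological similarity, and of fusing the outputs of several tools, can be sketched as follows. The logistic link and the independence-based fusion rule are generic textbook choices standing in for the thesis's actual model, and the training data here are synthetic.

```python
# A minimal sketch (not the thesis's actual model): calibrate a raw
# similarity score to P(biologically similar) with logistic regression,
# then fuse two tools' probabilities under a naive independence
# assumption. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: similarity scores for molecule pairs and
# binary labels (1 = biologically similar, 0 = not).
scores = rng.uniform(0.0, 1.0, size=(1000, 1))
labels = (rng.uniform(size=1000) < scores[:, 0] ** 2).astype(int)

model = LogisticRegression().fit(scores, labels)

def p_similar(score: float) -> float:
    """P(biologically similar | similarity score) for one tool."""
    return model.predict_proba([[score]])[0, 1]

def fuse(p1: float, p2: float) -> float:
    """Bayes fusion of two tools' probabilities, assuming the tools
    err independently (an assumption, not the thesis's method)."""
    odds = (p1 / (1 - p1)) * (p2 / (1 - p2))
    return odds / (1 + odds)

print(p_similar(0.9), fuse(p_similar(0.9), 0.7))
```

A calibrated probability of this kind gives an objective cut-off for deciding which hits are worth the cost of experimental testing.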
Abstract:
The age-old adage goes that nothing in this world lasts but change, and this generation has indeed seen changes that are unprecedented. Business managers do not have the luxury of going with the flow: they have to plan ahead and devise strategies that will meet the changing conditions, however stormy the weather seems to be. This demand raises the question of whether there is something a manager or planner can do to circumvent the eye of the storm in the future. Intuitively, one can either run the risk of something happening without preparing, or one can try to prepare oneself. Preparing by planning for each eventuality and contingency would be impractical and prohibitively expensive, so one needs to develop foreknowledge, or foresight, past the horizon of the present and the immediate future. The research mission in this study is to support strategic technology management by designing an effective and efficient scenario method that brings foresight to practicing managers. The design science framework guides this study in developing and evaluating the IDEAS method, an electronically mediated scenario method that is specifically designed to be effective and accessible. The design is based on the state of the art in scenario planning, and the product is a technology-based artifact to solve the foresight problem. This study demonstrates the utility, quality and efficacy of the artifact through a multi-method empirical evaluation, first by experimental testing and then through two case studies. The construction of the artifact is rigorously documented, both as justification knowledge and as principles of form and function on the general level, and later through the description and evaluation of instantiations. The design contributes both to practice and to the foundations of design. The IDEAS method contributes to the state of the art in scenario planning by offering a lightweight and intuitive scenario method for resource-constrained applications. Additionally, the study contributes to the foundations and methods of design science by forging a clear design science framework which is followed rigorously. To summarize, the IDEAS method is offered for strategic technology management, with a confident belief that it will enable gaining foresight and aid its users in choosing trajectories past the gales of creative destruction and off to a brighter future.
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure, and then incrementally extends the program in steps of adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover. Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be efficiently detected. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that are not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem with the aid of the tool into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula.
Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early, practically oriented programming courses. Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
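The invariant-based workflow described in this abstract can be conveyed informally with runtime assertions standing in for theorem-prover proofs: state the invariant first, then check that each extension of the code re-establishes it. The Python sketch below is only an analogy to the Socos/PVS workflow, using the sorting example mentioned above.

```python
# An informal illustration (Python asserts, not Socos/PVS proofs) of the
# invariant-based workflow: the invariant is written down first, and the
# code is extended step by step while re-checking it after each step.
def sorted_prefix(a, k):
    """Invariant: a[0:k] is sorted."""
    return all(a[i] <= a[i + 1] for i in range(k - 1))

def insertion_sort(a):
    a = list(a)
    assert sorted_prefix(a, 1)            # invariant holds initially
    for k in range(1, len(a)):
        x, i = a[k], k
        while i > 0 and a[i - 1] > x:     # shift larger elements right
            a[i] = a[i - 1]
            i -= 1
        a[i] = x
        assert sorted_prefix(a, k + 1)    # invariant re-established
    return a

assert insertion_sort([3, 1, 2]) == [1, 2, 3]
```

In the actual method, these checks are discharged once and for all as verification conditions by the prover, rather than being tested at run time.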
Abstract:
Investment decision-making on far-reaching innovation ideas is one of the key challenges practitioners and academics face in the field of innovation management. However, management practices and theories rely strongly on evaluation systems that do not fit this setting well. These systems and practices normally cannot capture the value of future opportunities under high uncertainty because they ignore the firm's potential for growth and flexibility. Real options theory and options-based methods have been offered as a solution to facilitate decision-making on highly uncertain investment objects. Much of the uncertainty inherent in these investment objects is attributable to unknown future events. In this setting, real options theory and methods have faced some challenges. First, the theory and its applications have largely been limited to market-priced real assets. Second, the options perspective has not proved as useful as anticipated because the tools it offers are perceived to be too complicated for managerial use. Third, there are challenges related to the type of uncertainty existing real options methods can handle: they are primarily limited to parametric uncertainty. Nevertheless, the theory is considered promising in the context of far-reaching and strategically important innovation ideas. The objective of this dissertation is to clarify the potential of options-based methodology in the identification of innovation opportunities. The constructive research approach gives new insights into the development potential of real options theory under non-parametric and close-to-radical uncertainty. The distinction between real options and strategic options is presented as an explanans for the discovered limitations of the theory. The findings offer managers a new means of assessing future innovation ideas based on the frameworks constructed during the course of the study.
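For readers unfamiliar with options-based valuation, a standard binomial-lattice example of an option to defer an investment is sketched below. This is the textbook real options calculation, not one of the frameworks constructed in the dissertation, and all figures are invented.

```python
# A standard binomial-lattice sketch of a real option to defer an
# investment (textbook real options, not the dissertation's frameworks).
# All numbers are illustrative.
V0, I = 100.0, 110.0       # present value of project cash flows, cost
u, d  = 1.4, 0.7           # up / down multipliers per period
r     = 0.05               # risk-free rate per period
n     = 3                  # periods until the opportunity expires

q = (1 + r - d) / (u - d)  # risk-neutral probability of an up move

# Option value at expiry: invest only if the project is then worth it.
values = [max(V0 * u**j * d**(n - j) - I, 0.0) for j in range(n + 1)]

# Roll back through the lattice, keeping the option alive.
for step in range(n, 0, -1):
    values = [(q * values[j + 1] + (1 - q) * values[j]) / (1 + r)
              for j in range(step)]

print(f"option value: {values[0]:.2f}  vs  invest-now NPV: {V0 - I:.2f}")
```

The point of the example is the comparison in the last line: the project has a negative NPV if undertaken immediately, yet the option to wait is valuable, which is exactly the flexibility conventional evaluation systems ignore.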
Abstract:
The drug discovery process is facing new challenges in the evaluation of lead compounds as the number of newly synthesized compounds increases. The potential of test compounds is most frequently assayed through the binding of the test compound to the target molecule or receptor, or by measuring functional secondary effects caused by the test compound in target model cells, tissues or organisms. Modern homogeneous high-throughput screening (HTS) assays for purified estrogen receptors (ER) utilize various luminescence-based detection methods. Fluorescence polarization (FP) is a standard method for ER ligand binding assays. It was used to demonstrate the performance of two-photon excitation of fluorescence (TPFE) versus the conventional one-photon excitation method. As a result, the TPFE method showed improved dynamics, was found to be comparable with the conventional method, and held potential for efficient miniaturization. Other luminescence-based ER assays utilize energy transfer from a long-lifetime luminescent label, e.g. lanthanide chelates (Eu, Tb), to a prompt luminescent label, the signal being read in time-resolved mode. As an alternative to this method, a new single-label (Eu) time-resolved detection method was developed, based on the quenching of the label by a soluble quencher molecule when it is displaced from the receptor into the solution phase by an unlabeled competing ligand. The new method was compared with the standard FP method: it yielded comparable results and was found to have a significantly higher signal-to-background ratio than FP. Cell-based functional assays for determining the extent of cell surface adhesion molecule (CAM) expression, combined with microscopy analysis of the target molecules, would provide improved information content compared to an expression-level assay alone. In this work, an immune response was simulated by exposing endothelial cells to cytokine stimulation, and the resulting increase in the level of adhesion molecule expression was analyzed on fixed cells by means of immunocytochemistry, utilizing specific long-lifetime luminophore-labeled antibodies against chosen adhesion molecules. Results showed that the method was capable of use in a multi-parametric assay of the protein expression levels of several CAMs simultaneously, combined with analysis of the cellular localization of the chosen adhesion molecules through time-resolved luminescence microscopy inspection.
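For reference, fluorescence polarization is computed from the emission intensities measured parallel and perpendicular to the excitation polarization plane. This is the standard textbook definition rather than anything specific to the thesis:

```latex
% Standard definition of fluorescence polarization: I_parallel and
% I_perp are emission intensities measured parallel and perpendicular
% to the excitation polarization plane.
P = \frac{I_{\parallel} - I_{\perp}}{I_{\parallel} + I_{\perp}}
```

Binding of a small labeled ligand to a large receptor slows its rotation and raises P, which is why FP reports on ligand binding directly in solution.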
Abstract:
Aims: This study was carried out to investigate the usefulness of acoustic rhinometry in the evaluation of intranasal dimensions in children. The aim was to define reference values for school children. In addition, the role of the VAS scale in the subjective evaluation of nasal obstruction in children was studied. Materials and methods: Measurements were done with Acoustic Rhinometry A1. The values of special interest were the minimal cross-sectional area (MCA) and the anterior volume of the nose (VOL). The data for reference values included 124 voluntary school children with no permanent nasal symptoms, aged between 7 and 14 years. Data were collected at baseline and after decongestion of the nose; the VAS scale was filled in before the measurements. The subjects in the follow-up study (n=74, aged between 1 and 12 years) were receiving an intranasal spray of insulin or placebo. Nasal symptoms were recorded and acoustic rhinometry was measured at each control visit. Results: In school children, the mean total MCA was 0.752 cm² (SD 0.165) and the mean total VOL was 4.00 cm³ (SD 0.63) at baseline. After decongestion, a significant increase in the mean TMCA and the mean TVOL was found. A correlation was found between TMCA and age, and between TVOL and the height of the child. There was no difference between boys and girls. A correlation was found between unilateral acoustic values and VAS at baseline, but not after decongestion. No difference was found in acoustic values or symptoms between the insulin and placebo groups in the two-year follow-up study. Conclusions: Acoustic rhinometry is a suitable objective method for examining intranasal dimensions in children. It is easy to perform and well tolerated. Reference values for children between 7 and 14 years were established.