16 results for denis mizne
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Esko Rahikainen and Timo Kaitaro
Abstract:
Numerical computation of a viscid, heat-conducting transonic flow over a generic commercial rocket profile with a symmetric oversized nose part was carried out. It has been shown that at zero angle of attack, for some free-stream velocity values, the flow pattern loses its symmetry. This results in a non-uniform pressure distribution on the rocket surface in the angular direction, which may produce additional oscillating stress on the rocket. It has also been found that the obtained non-symmetric flow patterns are stable with respect to small velocity perturbations.
Abstract:
Magnetic field dependences of the Hall coefficient and magnetoresistivity were investigated in classical and quantizing magnetic fields in p-Bi2Te3 crystals heavily doped with Sn and grown by the Czochralski method. The magnetic field was parallel to the trigonal axis C3. The Shubnikov-de Haas effect and quantum oscillations of the Hall coefficient were measured at temperatures of 4.2 K and 11 K. On the basis of the magnetic field dependence of the Hall coefficient, a method of estimating the Hall factor and Hall mobility using the Drabble-Wolf six-ellipsoid model is proposed. New evidence for the existence of a narrow band of Sn impurity states is presented. This band is partly filled by electrons and overlaps with the valence states of the light holes. Parameters of the impurity states were obtained: their energy ESn ≈ 15 meV, a band broadening << k0T, and a localization radius of the impurity state R ≈ 30 Å.
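The abstract does not spell out how the Hall factor is extracted from the field dependence; a standard textbook relation, stated here only as an illustrative assumption and not necessarily the method of the thesis, uses the weak-field and strong-field limits of the Hall coefficient for a single carrier type:

$$ R_H(B \to 0) = \frac{r_H}{p e}, \qquad R_H(B \to \infty) \to \frac{1}{p e}, \qquad \text{so} \quad r_H \approx \frac{R_H(B \to 0)}{R_H(B \to \infty)}, \quad \mu_H = r_H\,\mu . $$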
Abstract:
In this study, a prototype of a system for measuring the dimensions of a concrete element was developed. The system enables the measurement of a three-dimensional object. A stereo-vision-based object measurement method was also developed. The prototype was tested and the results proved reliable. The study also reviews and compares other approaches and existing systems for three-dimensional object measurement used by Finnish companies in this field.
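The abstract does not give the measurement geometry; as an illustrative sketch (generic symbols, not taken from the thesis), a rectified stereo pair with focal length f, baseline b and disparity d recovers a point's depth by triangulation:

$$ Z = \frac{f\,b}{d}, \qquad X = \frac{(x - c_x)\,Z}{f}, \qquad Y = \frac{(y - c_y)\,Z}{f}, $$

where (c_x, c_y) is the principal point and (x, y) are the pixel coordinates in the left image.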
Abstract:
This thesis presents the extension of a modified matrix-geometric technique to a more general queue. The queueing system consists of several queues with limited capacities. The work also studies PH-type (phase-type) distributions when they are split. The structure corresponding to the resulting Markov chain contains independent matrices with a QBD (quasi-birth-death) structure. Certain finite state spaces are also treated. Presenting the solution in matrix-geometric form, by modifying the matrix-geometric solution, is the main result of this thesis.
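For context, the standard matrix-geometric setting the abstract refers to (generic QBD notation, not the thesis's own) is the following: for a level-independent QBD with upward, local and downward transition blocks A0, A1, A2, the stationary probability vectors satisfy π_{n+1} = π_n R, where R is the minimal nonnegative solution of

$$ A_0 + R A_1 + R^2 A_2 = 0, $$

usually obtained by the fixed-point iteration $R_{k+1} = -\bigl(A_0 + R_k^2 A_2\bigr) A_1^{-1}$.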
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web; hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in searchers' results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web carried out so far are predominantly based on studies of deep web sites in English. One can therefore expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Owing to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that search interfaces to the web databases of interest have already been discovered and are known to query systems. Such assumptions, however, rarely hold, largely because of the sheer scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. The automation of querying and retrieving data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
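To illustrate what querying a web database through its search interface involves, here is a minimal Python sketch assuming the requests and BeautifulSoup libraries. It is not the I-Crawler or the thesis's form query language, and the URL and field names are hypothetical.

```python
# Minimal sketch of querying a web database through its HTML search form.
# NOT the I-Crawler; the URL and field values used below are hypothetical.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def extract_form(page_url: str):
    """Return (action_url, method, default_fields) for the first form on a page."""
    html = requests.get(page_url, timeout=10).text
    form = BeautifulSoup(html, "html.parser").find("form")
    action = urljoin(page_url, form.get("action") or page_url)
    method = (form.get("method") or "get").lower()
    fields = {}
    for inp in form.find_all(["input", "select"]):
        name = inp.get("name")
        if name:
            fields[name] = inp.get("value", "")   # keep pre-filled defaults
    return action, method, fields

def query(page_url: str, user_values: dict) -> str:
    """Fill the form with user-supplied values and fetch the result page."""
    action, method, fields = extract_form(page_url)
    fields.update(user_values)
    if method == "post":
        return requests.post(action, data=fields, timeout=10).text
    return requests.get(action, params=fields, timeout=10).text

# Hypothetical usage: search a book database for a title keyword.
# results_html = query("http://example.org/search.html", {"title": "deep web"})
```

A real system, as the thesis notes, additionally has to handle field labels, client-side scripts, non-HTML searchable forms and result-page extraction, all of which this sketch ignores.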
Abstract:
This thesis focuses on statistical analysis methods and proposes the use of Bayesian inference to extract the information contained in experimental data by estimating the parameters of an Ebola model. The model is a system of differential equations describing the behavior and dynamics of Ebola. Two data sets (onset and death data) were both used to estimate the parameters, which had not been done by previous researchers (Chowell, 2004). To be able to use both data sets, a new version of the model was built. The model parameters were estimated and then used to calculate the basic reproduction number and to study the disease-free equilibrium. The parameter estimates were useful for determining how well the model fits the data and how good the estimates are, in terms of the information they provide about the possible relationships between variables. The solution showed that the Ebola model fits the observed onset data at 98.95% and the observed death data at 93.6%. Since Bayesian inference cannot be performed analytically, the Markov chain Monte Carlo approach was used to generate samples from the posterior distribution over the parameters. The samples were used to check the accuracy of the model and other characteristics of the target posteriors.
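As a minimal sketch of the MCMC step described above, here is a random-walk Metropolis sampler with a Gaussian sum-of-squares likelihood around an SEIR-type ODE; the model right-hand side, priors, step size and data are illustrative placeholders, not the thesis's actual Ebola model or data.

```python
# Minimal random-walk Metropolis sketch for fitting an epidemic ODE to case data.
# The model, prior, proposal step and data below are illustrative placeholders only.
import numpy as np
from scipy.integrate import solve_ivp

def seir_rhs(t, y, beta, gamma, kappa):
    S, E, I, R = y
    N = S + E + I + R
    return [-beta * S * I / N,
            beta * S * I / N - kappa * E,
            kappa * E - gamma * I,
            gamma * I]

def simulate(theta, t_obs, y0):
    beta, gamma, kappa = theta
    sol = solve_ivp(seir_rhs, (t_obs[0], t_obs[-1]), y0,
                    t_eval=t_obs, args=(beta, gamma, kappa))
    return sol.y[2]                                  # infectious compartment I(t)

def log_post(theta, t_obs, data, y0, sigma=5.0):
    if np.any(theta <= 0):                           # flat prior on positive parameters
        return -np.inf
    resid = data - simulate(theta, t_obs, y0)
    return -0.5 * np.sum(resid**2) / sigma**2        # Gaussian (sum-of-squares) likelihood

def metropolis(theta0, t_obs, data, y0, n_iter=5000, step=0.02):
    rng = np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    chain = [theta.copy()]
    lp = log_post(theta, t_obs, data, y0)
    for _ in range(n_iter):
        prop = theta + step * rng.normal(size=theta.size)
        lp_prop = log_post(prop, t_obs, data, y0)
        if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Hypothetical usage with placeholder data:
# t_obs = np.arange(0.0, 60.0); data = ...observed daily case counts...
# chain = metropolis([0.3, 0.1, 0.2], t_obs, data, y0=[1e6, 0, 1, 0])
```

The resulting chain can then be used, as in the thesis, to propagate parameter uncertainty into derived quantities such as the basic reproduction number.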
Abstract:
Book review
Abstract:
This work is devoted to the study of the dynamical and structural properties of dendrimers. Different approaches were used: analytical theory, computer simulation, and experimental NMR studies. A theory of the relaxation spectrum of dendrimer macromolecules was developed. The relaxation processes manifested in the local orientational mobility of dendrimer macromolecules were identified and studied in detail. The theoretical results and conclusions were used in experimental studies of carbosilane dendrimers.
Abstract:
Communication is the most wonderful of all things, wrote the American philosopher John Dewey in 1925. The accelerating development of communication technology since then is perhaps even more wonderful. The subject of this study is the development of communication technology and the media transformation it has brought about. The most important manifestation of this development is the Internet, which perhaps best reflects the essence of the information society. The thesis examines the Internet's role as a public arena for discussion, its copyright questions, its significance for mass media, and social media from the perspective of communication theories. The first research problem is to assess these features through the media-society theories classified by the Dutch media scholar Denis McQuail. The second research problem, and the empirical part of the study, is the public image of the WikiLeaks leak site, examined through the attitudes of the print media. The first research problem is addressed through a comprehensive treatment of the theoretical literature, covering the most significant historical theories of communication research and their paradigms, and assessing various Internet phenomena on the basis of these theories and the most recent scientific articles. The second research problem is addressed through a simple semiotic analysis of 54 editorials and columns from European and North American newspapers and periodicals. The theoretical part of the thesis concludes that the old media-society theories still explain fairly well the change the Internet has brought about in both personal communication and mass communication. For social media, for example, the symbolic interactionism paradigm of social constructionism seems to have explanatory power. In the empirical part, the initial assumption of the thesis is that left-wing and left-liberal publications take a more positive stance toward the leak site than liberal-conservative and value-conservative publications. The analysis of the material shows this initial assumption to be mostly accurate.
Abstract:
This study combines several projects related to flows in vessels with complex shapes representing different chemical apparatuses. Three major cases were studied. The first one is a two-phase plate reactor with a complex structure of intersecting microchannels engraved on one plate, which is covered by another, plain plate. The second case is a tubular microreactor, consisting of two subcases. The first subcase is a multi-channel, two-component commercial micromixer (slit interdigital) used to mix two liquid reagents before they enter the reactor. The second subcase is a micro-tube, in which the distribution of the heat generated by the reaction was studied. The third case is a conventionally packed column. Here, however, flow, reactions and mass transfer were not modeled. Instead, the research focused on how to describe mathematically the realistic geometry of the column packing, which is rather random and cannot be created using conventional computer-aided design or engineering (CAD/CAE) methods.

Several modeling approaches were used to describe the performance of the processes in the considered vessels. Computational fluid dynamics (CFD) was used to describe the details of the flow in the plate microreactor and the micromixer. A space-averaged mass transfer model based on Fick's law was used to describe the exchange of species through the gas-liquid interface in the microreactor. This model utilized data, namely the values of the interfacial area, obtained from the corresponding CFD model. A standard heat transfer model was used to find the heat distribution in the micro-tube. To generate the column packing, an additional multibody dynamics model was implemented. An auxiliary simulation was carried out to determine the position and orientation of every packing element in the column. These data were then exported into a CAD system to generate the desired geometry, which could further be used for CFD simulations.

The results demonstrated that the CFD model of the microreactor predicted the flow pattern well and agreed with experiments. The mass transfer model made it possible to estimate the mass transfer coefficient. Modeling of the second case showed that the flow in the micromixer and the heat transfer in the tube could be excluded from the larger model that describes the chemical kinetics in the reactor. The results of the third case demonstrated that the auxiliary simulation could successfully generate complex random packings not only for the column but also for other similar cases.
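The space-averaged mass transfer model is not written out in the abstract; a standard Fick's-law film-model form it presumably resembles (generic notation, stated here as an assumption) is

$$ \dot n = k_L\, a\, V \left( c^{*} - c_L \right), $$

where $k_L$ is the liquid-side mass transfer coefficient, $a$ the interfacial area per unit volume (here the value supplied by the CFD model), $V$ the liquid volume, $c^{*}$ the interfacial equilibrium concentration and $c_L$ the bulk liquid concentration.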
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
This thesis concerns the analysis of epidemic models. We adopt the Bayesian paradigm and develop suitable Markov chain Monte Carlo (MCMC) algorithms. This is done by considering an Ebola outbreak in the Democratic Republic of Congo (former Zaïre) in 1995 as a case of SEIR epidemic models. We model the Ebola epidemic deterministically using ODEs and stochastically through SDEs in order to take into account a possible bias in each compartment. Since the model has unknown parameters, we use different methods to estimate them, such as least squares, maximum likelihood and MCMC. The motivation for choosing MCMC over the other methods in this thesis is its ability to tackle complicated nonlinear problems with a large number of parameters.

First, in the deterministic Ebola model, we compute the likelihood function by the sum-of-squared-residuals method and estimate the parameters using the LSQ and MCMC methods. We sample the parameters and then use them to calculate the basic reproduction number and to study the disease-free equilibrium. From the chain sampled from the posterior, we run convergence diagnostics and confirm the viability of the model. The results show that the Ebola model fits the observed onset data with high precision, and all the unknown model parameters are well identified.

Second, we convert the ODE model into an SDE Ebola model. We compute the likelihood function using the extended Kalman filter (EKF) and estimate the parameters again. The motivation for using the SDE formulation here is to consider the impact of modelling errors; moreover, the EKF approach allows us to formulate a filtered likelihood for the parameters of such a stochastic model. We use the MCMC procedure to obtain the posterior distributions of the parameters of the drift and diffusion parts of the SDE Ebola model. In this thesis, we analyse two cases: (1) the model error covariance matrix of the dynamic noise is close to zero, i.e. only a small amount of stochasticity is added to the model; the results are then similar to those obtained from the deterministic Ebola model, even though the methods of computing the likelihood function differ; (2) the model error covariance matrix is different from zero, i.e. considerable stochasticity is introduced into the Ebola model, which accounts for the situation where we know that the model is not exact. As a result, we obtain parameter posteriors with larger variances. Consequently, the model predictions show larger uncertainties, in accordance with the assumption of an incomplete model.
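The filtered likelihood mentioned above can be written, in generic prediction-error form (standard Kalman-filter notation rather than the thesis's own), as

$$ -2\,\log L(\theta) = \sum_{k} \Bigl[ \log\det S_k(\theta) + v_k(\theta)^{\mathsf T}\, S_k(\theta)^{-1}\, v_k(\theta) \Bigr] + \mathrm{const}, $$

where $v_k = y_k - H \hat{x}_{k|k-1}$ is the filter innovation and $S_k = H P_{k|k-1} H^{\mathsf T} + R_k$ its covariance, both produced by the EKF prediction/update sweep for a given parameter vector $\theta$; MCMC then explores this likelihood in the same way as in the deterministic case.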