15 results for New applications

in Helda - Digital Repository of the University of Helsinki


Relevance:

70.00%

Publisher:

Abstract:

Mesoscale weather phenomena, such as the sea breeze circulation or lake effect snow bands, are typically too large to be observed at one point, yet too small to be caught in a traditional network of weather stations. Hence, the weather radar is one of the best tools for observing, analyzing and understanding their behavior and development. A weather radar network is a complex system with many structural and technical features to be tuned, from the location of each radar to the number of pulses averaged in the signal processing. These design parameters have no universal optimal values; their selection depends on the nature of the weather phenomena to be monitored as well as on the applications for which the data will be used. The priorities and critical values differ for forest fire forecasting, aviation weather service or the planning of snow ploughing, to name a few radar-based applications. The main objective of the work performed within this thesis has been to combine knowledge of the technical properties of radar systems with our understanding of weather conditions, in order to produce better applications able to efficiently support decision making in weather- and safety-related service duties for modern society in northern conditions. When a new application is developed, it must be tested against ground truth. Two new verification approaches for radar-based hail estimates are introduced in this thesis. For mesoscale applications, finding a representative reference can be challenging, since these phenomena are by definition difficult to catch with surface observations. Hence, almost any valuable information that can be distilled from unconventional data sources, such as newspapers and holiday snapshots, is welcome. However, obtaining estimates of data quality is as important as obtaining the data itself, as is judging to what extent two disparate information sources can be compared.
The presented new applications do not rely on radar data alone, but ingest information from auxiliary sources such as temperature fields. The author concludes that in the future the radar will continue to be a key source of data and information, especially when used effectively together with other meteorological data.
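Verification of radar-based estimates against ground truth is usually summarized with categorical skill scores computed from a 2x2 contingency table. The thesis does not specify which scores its new verification approaches use, so the sketch below assumes a standard hits/misses/false-alarms setup and computes three common measures for illustration.

```python
def contingency_scores(radar_hail, ground_truth):
    """Standard 2x2 contingency-table verification scores.

    radar_hail, ground_truth: equal-length sequences of booleans
    (hail detected by radar / hail confirmed by the reference source).
    """
    pairs = list(zip(radar_hail, ground_truth))
    hits = sum(r and g for r, g in pairs)
    misses = sum((not r) and g for r, g in pairs)
    false_alarms = sum(r and (not g) for r, g in pairs)
    # probability of detection, false alarm ratio, critical success index
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else float("nan")
    return {"POD": pod, "FAR": far, "CSI": csi}

# Example: six radar hail detections compared against ground-truth reports
scores = contingency_scores([True, True, False, True, False, False],
                            [True, False, False, True, True, False])
# 2 hits, 1 miss, 1 false alarm -> POD = 2/3, FAR = 1/3, CSI = 0.5
```

The same scores apply whether the reference comes from surface observations or from unconventional sources such as newspaper reports, although in the latter case the quality of the reference itself must also be assessed.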

Relevance:

60.00%

Publisher:

Abstract:

Miniaturized analytical devices, such as the heated nebulizer (HN) microchips studied in this work, are of increasing interest owing to benefits like faster operation, better performance, and lower cost relative to conventional systems. HN microchips are microfabricated devices that vaporize liquid and mix it with gas. They are used with low liquid flow rates, typically a few µL/min, and have previously been utilized as ion sources for mass spectrometry (MS). Conventional ion sources are seldom feasible at such low flow rates. In this work, HN chips were developed further and new applications were introduced. First, a new method for the thermal and fluidic characterization of HN microchips was developed and used to study the chips. The thermal behavior of the chips was also studied by temperature measurements and infrared imaging. An HN chip was applied to the analysis of crude oil, an extremely complex sample, by microchip atmospheric pressure photoionization (APPI) high resolution mass spectrometry. With the chip, the sample flow rate could be reduced significantly without loss of performance and with greatly reduced contamination of the MS instrument. Thanks to its suitability for high temperatures, microchip APPI provided efficient vaporization of nonvolatile compounds in crude oil. The first microchip version of sonic spray ionization (SSI) was presented. Ionization was achieved by applying only high (sonic) speed nebulizer gas to an HN microchip. SSI significantly broadens the range of analytes ionizable with the HN chips, from small stable molecules to labile biomolecules. The analytical performance of the microchip SSI source was confirmed to be acceptable. The HN microchips were also used to connect gas chromatography (GC) and capillary liquid chromatography (LC) to MS, using APPI for ionization.
Microchip APPI allows efficient ionization of both polar and nonpolar compounds, whereas with the most popular technique, electrospray ionization (ESI), only polar and ionic molecules are ionized efficiently. The combination of GC with MS showed that, with HN microchips, GCs can easily be used with MS instruments designed for LC-MS. The presented analytical methods showed good performance. The first integrated LC-HN microchip was developed and presented: a single microdevice containing structures for a packed LC column and a heated nebulizer. Nonpolar and polar analytes were efficiently ionized by APPI. Ionization of both nonpolar and polar analytes is not possible with previously presented chips for LC-MS, since they rely on ESI. The preliminary quantitative performance of the new chip was evaluated, and the chip was also demonstrated with optical detection. A new ambient ionization technique for mass spectrometry, desorption atmospheric pressure photoionization (DAPPI), was presented. The DAPPI technique is based on an HN microchip providing desorption of analytes from a surface. Photons from a photoionization lamp ionize the analytes via gas-phase chemical reactions, and the ions are directed into the MS. Rapid analysis of pharmaceuticals from tablets was successfully demonstrated as an application of DAPPI.

Relevance:

60.00%

Publisher:

Abstract:

With the recent increase in interest in service-oriented architectures (SOA) and Web services, developing applications with the Web services paradigm has become feasible. Web services are self-describing, platform-independent computational elements. New applications can be assembled from a set of previously created Web services, which are composed together into a service that uses its components to perform a certain task. This is the idea of service composition. To bring service composition to a mobile phone, I have created Interactive Service Composer for mobile phones. With Interactive Service Composer, the user is able to build service compositions on a mobile phone, consisting of Web services or services available from the phone itself. The service compositions are reusable and can be saved in the phone's memory. Previously saved compositions can also be used in new compositions. While developing applications for mobile phones has been possible for some time, the usability of the solutions is not on par with desktop development. When developing for mobile phones, the developer has to consider design decisions more carefully. Given the limited processing power and memory, applications cannot perform as well as on desktop PCs. On the other hand, this does not remove the appeal of developing applications for mobile devices.

Relevance:

60.00%

Publisher:

Abstract:

Diagnostic radiology represents the largest man-made contribution to population radiation doses in Europe. To keep the ratio of diagnostic benefit to radiation risk as high as possible, it is important to understand the quantitative relationship between the patient radiation dose and the various factors which affect the dose, such as the scan parameters, scan mode, and patient size. Paediatric patients have a higher probability of late radiation effects, since longer life expectancy is combined with the higher radiation sensitivity of developing organs. Experience with particular paediatric examinations may be very limited, and paediatric acquisition protocols may not be optimised. The purpose of this thesis was to enhance and compare different dosimetric protocols, to promote the establishment of paediatric diagnostic reference levels (DRLs), and to provide new data on patient doses for optimisation purposes in computed tomography (with new applications for dental imaging) and in paediatric radiography. Large variations in radiation exposure in paediatric skull, sinus, chest, pelvic and abdominal radiography examinations were discovered in patient dose surveys. There were variations between different hospitals and examination rooms, between different-sized patients, and between imaging techniques, emphasising the need for harmonisation of the examination protocols. For computed tomography, a correction coefficient which takes individual patient size into account in patient dosimetry was created. The presented patient size correction method can be used for both adult and paediatric purposes. Dental cone beam CT scanners provided adequate image quality for dentomaxillofacial examinations while delivering considerably smaller effective doses to the patient compared to multislice CT. However, large dose differences between cone beam CT scanners were not explained by differences in image quality, which indicated a lack of optimisation.
For paediatric radiography, a graphical method was created for setting the diagnostic reference levels in chest examinations, and the DRLs were given as a function of patient projection thickness. Paediatric DRLs were also given for sinus radiography. The detailed information about the patient data, exposure parameters and procedures provides tools for reducing patient doses in paediatric radiography. The mean tissue doses presented for paediatric radiography enable future risk assessments. The calculated effective doses can be used for comparing different diagnostic procedures, as well as for comparing the use of similar technologies and procedures in different hospitals and countries.

Relevance:

60.00%

Publisher:

Abstract:

The forest industry generates large amounts of surplus material each year, such as bark and branches. This surplus is mainly used for energy production, but new applications are needed. Bark has been found to be a potential source of many bioactive compounds, which could be used, for example, in the pharmaceutical and chemical industries and in pest control in agriculture, forestry and horticulture. This study is part of the ForestSpeCs project, funded by the European Union, whose aim is to explore alternative uses for surplus materials from the forest industry. Extracts from the bark of ten industrially important northern tree species (Abies nephrolepis, Betula pendula, Larix decidua, L. gmelinii, L. sibirica, Picea abies, P. ajanensis, P. pumila, Pinus sylvestris, Populus tremula) were tested for their suitability as feeding deterrents on larvae of the large white butterfly (Pieris brassicae L.) and the cotton leafworm (Spodoptera littoralis Boisduval), and in part on the mustard leaf beetle (Phaedon cochleariae Fabricius) and the alder leaf beetle (Agelastica alni L.). The extracts were prepared in cooperation with the project's research groups or independently, using various methods. The tests were carried out under laboratory conditions using a leaf-disc choice bioassay, with both crude extracts and individual compounds separated from them. Feeding deterrence indices (FDI) were calculated from the measurements. Based on the results, almost all of the tested extracts affected the feeding behavior of the target insects at least to some degree. Slightly more than half of the 46 extracts tested on the large white butterfly caused over 50% feeding deterrence, i.e. the larvae preferred control leaves over extract-treated ones. On the cotton leafworm, only seven of the 56 tested extracts caused over 50% feeding deterrence, and three extracts significantly increased the consumption of treated discs. Larvae and adults of the alder leaf beetle particularly avoided leaves treated with abietic acid.
The extracts tested on the mustard leaf beetle also showed promise. Biologically active compounds can be extracted from the bark of the tested tree species, but finding the right concentrations and effective extraction methods for pest control requires further research. The quality and quantity of the compounds in bark vary with many factors, such as environment and genetics. Insect tolerance also varies considerably between species, and there are differences even between individuals. Nevertheless, deterrents prepared from the extracts could in future be incorporated, for example, into integrated pest management alongside other methods.
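The feeding deterrence index (FDI) used in choice bioassays like the one above is commonly computed as FDI = 100 * (C - T) / (C + T), where C and T are the amounts eaten from the control and treated leaf discs. The abstract does not state the exact formulation used in the study, so the function below is an illustrative sketch of this common form.

```python
def feeding_deterrence_index(control_eaten, treated_eaten):
    """Feeding deterrence index (FDI) for a leaf-disc choice bioassay.

    control_eaten, treated_eaten: leaf area (or mass) consumed from the
    control and extract-treated discs. FDI = 100 * (C - T) / (C + T):
    +100 means complete deterrence, 0 no preference, and negative values
    indicate feeding stimulation (as seen with three of the extracts).
    """
    total = control_eaten + treated_eaten
    if total == 0:
        raise ValueError("no feeding observed on either disc")
    return 100.0 * (control_eaten - treated_eaten) / total

# Example: larvae ate 80 mm^2 of the control disc and 20 mm^2 of the treated one
fdi = feeding_deterrence_index(80.0, 20.0)  # FDI = 60.0, a clear deterrent effect
```

An FDI above 50 on this scale corresponds to the "over 50% feeding deterrence" threshold reported for the extracts.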

Relevance:

30.00%

Publisher:

Abstract:

NMR spectroscopy enables the study of biomolecules, from peptides and carbohydrates to proteins, at atomic resolution. The technique uniquely allows for structure determination of molecules in the solution state. It also gives insights into dynamics and intermolecular interactions important for determining biological function. Detailed molecular information is entangled in the nuclear spin states, and can be extracted by pulse sequences designed to measure the desired molecular parameters. Advancement of pulse sequence methodology therefore plays a key role in the development of biomolecular NMR spectroscopy. A range of novel pulse sequences for solution-state NMR spectroscopy are presented in this thesis. The pulse sequences are described in relation to the molecular information they provide, and the experiments represent several advances in NMR spectroscopy, with particular emphasis on applications for proteins. Some of the novel methods focus on methyl-containing amino acids, which are pivotal for structure determination. Methyl-specific assignment schemes are introduced for increasing the size range of 13C,15N-labeled proteins amenable to structure determination without resorting to more elaborate labeling schemes. Furthermore, cost-effective means are presented for monitoring amide and methyl correlations simultaneously. Residual dipolar couplings can be applied for structure refinement as well as for studying dynamics. Accurate methods for measuring residual dipolar couplings in small proteins are devised, along with special techniques applicable when proteins require high-pH or high-temperature solvent conditions. Finally, a new technique is demonstrated to diminish strong-coupling-induced artifacts in HMBC, a routine experiment for establishing long-range correlations in unlabeled molecules. The presented experiments facilitate structural studies of biomolecules by NMR spectroscopy.

Relevance:

30.00%

Publisher:

Abstract:

Even though cellulose is the most abundant polymer on Earth, its utilisation has some limitations regarding its efficient use in the production of bio-based materials. It is quite clear from statistics that only a relatively small fraction of cellulose is used for the production of commodity materials and chemicals. This fact was the driving force in our research into understanding, designing, synthesising and finding new alternative applications for this well-known but underused biomaterial. This thesis focuses on developing advanced materials and products from cellulose using novel approaches. The aim of this study was to investigate and explore the versatility of cellulose as a starting material for the synthesis of cellulose-based materials, to introduce new synthetic methods for cellulose modification, and to widen the already existing synthetic approaches. Due to the insolubility of cellulose in organic solvents and in water, ionic liquids were applied extensively as the reaction media in the modification reactions. Cellulose derivatives were designed and fine-tuned to obtain the desired properties. This was done by altering the inherent hydrogen bond network through the introduction of different substituents. These substituents either prevented spontaneous formation of hydrogen bonding completely or created new interactions between the cellulose chains, enabling spontaneous self-assembly into supramolecular structures. It was also demonstrated that the material properties of cellulose can be modified even at a low degree of substitution, as shown when highly hydrophobic films and aerogels were prepared from fatty acid derivatives of nanocellulose. Development towards advanced cellulose-based materials was demonstrated by synthesising chlorophyll-cellulose derivatives that showed potential in photocurrent generation systems. In addition, liquid crystalline cellulose derivatives prepared in this study were shown to function as UV absorbers in paper.

Relevance:

30.00%

Publisher:

Abstract:

The future use of genetically modified (GM) plants in food, feed and biomass production requires careful consideration of possible risks related to the unintended spread of transgenes into new habitats. This may occur via introgression of the transgene into conventional genotypes, due to cross-pollination, and via the invasion of GM plants into new habitats. Assessment of possible environmental impacts of GM plants requires estimation of the level of gene flow from a GM population. Furthermore, management measures for reducing gene flow from GM populations are needed in order to prevent possible unwanted effects of transgenes on ecosystems. This work develops modeling tools for estimating gene flow from GM plant populations in boreal environments and for investigating the mechanisms of the gene flow process. To describe the spatial dimensions of gene flow, dispersal models are developed for the local and regional scale spread of pollen grains and seeds, with special emphasis on wind dispersal. This study provides tools for describing cross-pollination between GM and conventional populations and for estimating the levels of transgenic contamination of conventional crops. For perennial populations, a modeling framework describing the dynamics of plants and genotypes is developed in order to estimate the gene flow process over a sequence of years. The dispersal of airborne pollen and seeds cannot be easily controlled, and small amounts of these particles are likely to disperse over long distances. Wind dispersal processes are highly stochastic due to variation in atmospheric conditions, so there may be considerable variation between individual dispersal patterns. This, in turn, is reflected in the large variation in annual levels of cross-pollination between GM and conventional populations.
Even though land-use practices affect the average levels of cross-pollination between GM and conventional fields, the level of transgenic contamination of a conventional crop remains highly stochastic. The demographic effects of a transgene influence the establishment of transgenic plants amongst conventional genotypes of the same species. If the transgene gives a plant a considerable fitness advantage in comparison to conventional genotypes, the spread of transgenes to conventional populations can be strongly increased. In such cases, dominance of the transgene considerably increases gene flow from GM to conventional populations, due to the enhanced fitness of heterozygous hybrids. The fitness of GM plants in conventional populations can be reduced by linking the selectively favoured primary transgene to a disfavoured mitigation transgene. Recombination between these transgenes is a major risk related to this technique, especially because it tends to take place amongst the conventional genotypes and thus promotes the establishment of invasive transgenic plants in conventional populations.
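The stochastic, heavy-tailed nature of wind dispersal described above can be illustrated with a simple simulation. The exponential kernel and the mean dispersal distance below are assumptions chosen for illustration, not the dispersal models developed in the thesis.

```python
import math
import random

def exponential_kernel(distance, mean_dist):
    """1-D exponential dispersal kernel: density of pollen landing at
    `distance` from the source, with mean dispersal distance `mean_dist`."""
    return math.exp(-distance / mean_dist) / mean_dist

def simulate_dispersal_distances(n, mean_dist, rng=random.Random(42)):
    """Draw n stochastic dispersal distances from the exponential kernel.
    Note the heavy right tail: a small fraction of grains travels far
    beyond the mean, which is why long-distance gene flow is hard to stop."""
    return [rng.expovariate(1.0 / mean_dist) for _ in range(n)]

distances = simulate_dispersal_distances(10_000, mean_dist=50.0)  # metres
long_range = sum(d > 200.0 for d in distances) / len(distances)
# theoretically exp(-200/50) = exp(-4), i.e. about 1.8 % of grains pass 200 m
```

Repeating such draws year by year, with atmospheric conditions varying the kernel parameters, reproduces the large annual variation in cross-pollination levels noted in the abstract.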

Relevance:

30.00%

Publisher:

Abstract:

Evolutionary genetics incorporates traditional population genetics, studies of the origins of genetic variation by mutation and recombination, and the molecular evolution of genomes. Among the primary forces with the potential to affect genetic variation within and among populations, including those that may lead to adaptation and speciation, are genetic drift, gene flow, mutation and natural selection. The main challenges in understanding the genetic basis of evolutionary change are to distinguish the adaptive selection forces that have produced existing DNA sequence variants and to identify the nucleotide differences responsible for the observed phenotypic variation. To understand the effects of these forces, interpretation of gene sequence variation has been the principal basis of many evolutionary genetic studies. The main aim of this thesis was to assess different forms of teleost gene sequence polymorphisms in evolutionary genetic studies of Atlantic salmon (Salmo salar) and other species. Firstly, the extent of Darwinian adaptive evolution affecting the coding regions of the growth hormone (GH) gene during teleost evolution was investigated based on sequence data in public databases. Secondly, a target gene approach was used to identify within-population variation in the growth hormone 1 (GH1) gene in salmon. Then, a new strategy for single nucleotide polymorphism (SNP) discovery in salmonid fishes was introduced, and, finally, the usefulness of a limited number of SNP markers as molecular tools in several applications of population genetics in Atlantic salmon was assessed. This thesis showed that gene sequences in databases can be utilized to perform comparative studies of molecular evolution, and some putative evidence of Darwinian selection during teleost GH evolution was presented.
In addition, existing sequence data were exploited to investigate GH1 gene variation within Atlantic salmon populations throughout the species' range. Purifying selection is suggested to be the predominant evolutionary force controlling the genetic variation of this gene in salmon, and some support for gene flow between continents was also observed. The novel approach to SNP discovery in species with duplicated genome fragments introduced here proved to be an effective method, and it may have several applications in evolutionary genetics of different species, for example when developing gene-targeted markers to investigate quantitative genetic variation. The thesis also demonstrated that only a few SNPs produced signals highly similar to those of microsatellite markers in some of the population genetic analyses. This may have useful applications when estimating genetic diversity in genes with a potential role in ecological and conservation issues, or when using hard biological samples in genetic studies, as SNPs can be applied to relatively highly degraded DNA.

Relevance:

30.00%

Publisher:

Abstract:

The description of quarks and gluons using the theory of quantum chromodynamics (QCD) has been known for a long time. Nevertheless, many fundamental questions in QCD remain unanswered, mainly due to problems in solving the theory at low energies, where it is strongly interacting. AdS/CFT is a duality between a specific string theory and a conformal field theory. The duality provides new tools for solving the conformal field theory in the strong coupling regime. There is also some evidence that, using the duality, one can gain at least a qualitative understanding of how QCD behaves at strong coupling. In this thesis, we try to address some issues related to QCD and heavy ion collisions, applying the duality in various ways.

Relevance:

30.00%

Publisher:

Abstract:

This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of static and dynamic probit models in forecasting U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictors. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially those with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are examined with simulation experiments. Results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2-4 to the case of a bivariate model.
A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained, and the bivariate model is found to outperform the univariate models in terms of predictive power.
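The autoregressive probit structure discussed above can be sketched as a simulation. The recursion below follows the general dynamic form pi_t = omega + alpha*pi_{t-1} + delta*y_{t-1} + beta*x_t with P(y_t = 1) = Phi(pi_t); the coefficient values and the exogenous predictor are illustrative assumptions, not estimates from the thesis.

```python
import math
import random

def norm_cdf(x):
    """Standard normal CDF Phi(x), via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def simulate_autoregressive_probit(T, omega, alpha, delta, beta, x,
                                   rng=random.Random(0)):
    """Simulate a dynamic probit model with an autoregressive linear index:
        pi_t = omega + alpha * pi_{t-1} + delta * y_{t-1} + beta * x_t
        P(y_t = 1) = Phi(pi_t)
    where y_t is the binary state (e.g. recession indicator) and x_t an
    exogenous financial predictor. Illustrative sketch only.
    """
    pi_prev, y_prev, ys = 0.0, 0, []
    for t in range(T):
        pi_t = omega + alpha * pi_prev + delta * y_prev + beta * x[t]
        y_t = 1 if rng.random() < norm_cdf(pi_t) else 0
        ys.append(y_t)
        pi_prev, y_prev = pi_t, y_t
    return ys

# Illustrative run: a persistent index driven by one slowly varying predictor
x = [math.sin(0.05 * t) for t in range(400)]
y = simulate_autoregressive_probit(400, omega=-1.0, alpha=0.7, delta=0.5,
                                   beta=1.0, x=x)
recession_share = sum(y) / len(y)
```

The autoregressive term alpha * pi_{t-1} is what makes the linear index persistent, which is the key feature distinguishing these dynamic models from the static probit.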

Relevance:

30.00%

Publisher:

Abstract:

This report has been written as part of the E-ruralnet project, which addresses e-learning as a means of enhancing lifelong learning opportunities in rural areas, with emphasis on SMEs, micro-enterprises, the self-employed and persons seeking employment. E-ruralnet is a European network project part-funded by the European Commission in the context of the Lifelong Learning Programme, Transversal projects-ICT. This report aims to address two issues identified as requiring attention in the previous Observatory study: firstly, access to e-learning for rural areas that lack adequate ICT infrastructure; and secondly, new learning approaches introduced through new interactive ICT tools such as Web 2.0, wikis and podcasts. The possibility of using alternative technology in addition to computers is examined (mobile telephones, DVDs), as well as new approaches to learning (simulation, serious games). The first part of the report examines existing literature on e-learning and what e-learning is all about. Institutional users, learners and instructors/teachers are each looked at separately. We then turn to the implementation of e-learning from the organizational point of view and focus on quality issues related to e-learning. The report includes a separate chapter on e-learning from the rural perspective, since most of Europe is, geographically speaking, rural, and the population in those areas is the one that could most benefit from the possibilities introduced by e-learning development. The section titled "Alternative media", in accordance with the project terminology, looks at standalone technology that is of particular use to rural areas without a proper internet connection. It also evaluates the use of new tools and media in e-learning and takes a look at m-learning. Finally, the use of games, serious games and simulations in learning is considered. Practical examples and cases are displayed in boxes to facilitate pleasant reading.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this dissertation is to model economic variables with a mixture autoregressive (MAR) model. The MAR model is a generalization of the linear autoregressive (AR) model. It consists of K linear autoregressive components; at any given point in time, one of these components is randomly selected to generate a new observation for the time series. The mixing probability can be constant over time or a direct function of some observable variable. Many economic time series have properties that cannot be described by linear and stationary time series models, and a nonlinear autoregressive model such as the MAR model can be a plausible alternative for these series. In this dissertation the MAR model is used to model stock market bubbles and the relationship between inflation and the interest rate. In the case of the inflation rate, we arrived at a MAR model in which the inflation process is less mean-reverting during high inflation than during normal inflation. The interest rate moves one-for-one with expected inflation. We use data from the Livingston survey as a proxy for inflation expectations and find that survey inflation expectations are not perfectly rational. According to our results, information stickiness plays an important role in expectation formation. We also found that survey participants have a tendency to underestimate inflation. A MAR model is also used to model stock market bubbles and crashes. This model has two regimes: the bubble regime and the error correction regime. In the error correction regime the price depends on a fundamental factor, the price-dividend ratio, while in the bubble regime the price is independent of fundamentals. In this model a stock market crash is usually caused by a regime switch from the bubble regime to the error correction regime. According to our empirical results, bubbles are related to low inflation.
Our model also implies that bubbles influence the investment return distribution in both the short and the long run.
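The regime-switching mechanism of the MAR model described above can be sketched in a few lines: at each time step one of K AR(1) components is drawn at random to generate the next observation. The two-regime parameters below (a fast mean-reverting "normal" component and a highly persistent one) are illustrative assumptions, not the estimates from the dissertation.

```python
import random

def simulate_mar(T, components, weights, rng=random.Random(1)):
    """Simulate a K-component mixture autoregressive (MAR) process.

    components: list of (intercept, ar_coefficient, noise_std) triples;
    at every time step one AR(1) component is drawn at random, with the
    given constant mixing weights, to generate the next observation.
    """
    y = [0.0]
    for _ in range(T - 1):
        c, phi, sigma = rng.choices(components, weights=weights)[0]
        y.append(c + phi * y[-1] + rng.gauss(0.0, sigma))
    return y

# Two regimes: strong mean reversion vs. high persistence (bubble-like)
series = simulate_mar(
    500,
    components=[(0.0, 0.3, 1.0),    # "normal" regime: fast mean reversion
                (0.5, 0.95, 1.0)],  # persistent regime: slow mean reversion
    weights=[0.8, 0.2],
)
```

Making the mixing weights a function of an observable variable, rather than constants as here, gives the time-varying variant mentioned in the abstract.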

Relevance:

30.00%

Publisher:

Abstract:

Inelastic x-ray scattering spectroscopy is a versatile experimental technique for probing the electronic structure of materials. It provides a wealth of information on the sample's atomic-scale structure, but extracting this information from the experimental data can be challenging because there is no direct relation between the structure and the measured spectrum. Theoretical calculations can bridge this gap by explaining the structural origins of the spectral features. Reliable methods for modeling inelastic x-ray scattering require accurate electronic structure calculations. This work presents the development and implementation of new schemes for modeling the inelastic scattering of x-rays from non-periodic systems. The methods are based on density functional theory and are applicable for a wide variety of molecular materials. Applications are presented in this work for amorphous silicon monoxide and several gas phase systems. Valuable new information on their structure and properties could be extracted with the combination of experimental and computational methods.

Relevance:

30.00%

Publisher:

Abstract:

In this thesis the current status and some open problems of noncommutative quantum field theory are reviewed. The introduction aims to put these theories in their proper context as a part of the larger program to model the properties of quantized space-time. Throughout the thesis, special focus is put on the role of noncommutative time and how its nonlocal nature presents us with problems. Applications in scalar field theories as well as in gauge field theories are presented. The infinite nonlocality of space-time introduced by the noncommutative coordinate operators leads to interesting structure and new physics. High energy and low energy scales are mixed, causality and unitarity are threatened and in gauge theory the tools for model building are drastically reduced. As a case study in noncommutative gauge theory, the Dirac quantization condition of magnetic monopoles is examined with the conclusion that, at least in perturbation theory, it cannot be fulfilled in noncommutative space.