Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolution of the state of the atmosphere in time can be predicted numerically by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach scales of 1–4 km operationally. This requires fewer approximations in the model equations, a more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, the schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive at producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent to both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is caused by an inertial oscillation mechanism when the large-scale flow is from the south-east or the west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment with a 7.7 km grid size is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is very important, especially if the inner meso-scale model domain is small.
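For orientation, the inertial oscillation mechanism invoked above can be summarised with the standard textbook equation for the ageostrophic wind (quoted here for context; the formula is not taken from the thesis). Writing the ageostrophic wind as a complex number V_a = u_a + i v_a:

```latex
\frac{dV_a}{dt} = -\,i f\, V_a
\quad\Longrightarrow\quad
V_a(t) = V_a(0)\, e^{-ift},
\qquad T = \frac{2\pi}{f}.
```

At the latitude of the Gulf of Finland (about 60°N), f ≈ 1.26 × 10⁻⁴ s⁻¹, giving an inertial period T of roughly 14 hours, so an oscillation triggered by the morning reduction of friction over the smooth sea surface can peak as a jet in the afternoon.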
Abstract:
Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is carried out mostly using megavoltage beams from linear accelerators. Tumor eradication and normal tissue complications correlate with the dose absorbed in tissues. This dependence is normally steep, and it is crucial that the actual dose within the patient corresponds accurately to the planned dose. All factors in an RT procedure contain uncertainties, requiring strict quality assurance. From a hospital physicist's point of view, technical quality control (QC), dose calculations and methods for verifying the correct treatment location are the most important subjects. The most important factor in technical QC is verifying that the radiation production of an accelerator, called the output, is within narrow acceptable limits. The output measurements are carried out according to a locally chosen dosimetric QC program defining the measurement time interval and action levels. Dose calculation algorithms need to be configured for the accelerators using measured beam data. The uncertainty of such data sets the limits for the best achievable calculation accuracy. All these dosimetric measurements require considerable experience, are laborious, take up resources needed for treatments and are prone to several random and systematic sources of error. Appropriate verification of the treatment location is more important in intensity modulated radiation therapy (IMRT) than in conventional RT. This is due to the steep dose gradients produced within or close to healthy tissues located only a few millimetres from the targeted volume. The thesis concentrated on investigating the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data and the effect of positional errors on the dose received by the major salivary glands in head and neck IMRT. A method was developed for estimating the effect of using different dosimetric QC programs on the overall uncertainty of dose. Data were provided to facilitate the choice of a sufficient QC program. The method takes into account local output stability and the reproducibility of the dosimetric QC measurements. A method based on model fitting of the QC measurement results was proposed for estimating both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and suggestions were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, and a sufficient accuracy level was estimated for the beam data. A method based on the use of reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head and neck IMRT when the function of the major salivary glands is to be spared; these criteria are based on the dose response obtained for the glands. Random measurement errors could be reduced, enabling the lowering of action levels and the prolongation of the measurement time interval from 1 month to as long as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, suggestions and criteria was found to help avoid maximal dose errors of up to about 8%. In addition, their use may make the strictest recommended overall dose accuracy level of 3% (1 SD) achievable.
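A minimal sketch of the kind of model fitting described above for estimating output stability and measurement reproducibility (the linear drift model, the numbers and the variable names are illustrative assumptions, not the thesis's actual formulation):

```python
import numpy as np

# Hypothetical monthly accelerator output QC results, expressed as
# percentage deviation from the nominal dose (illustrative values only).
days = np.array([0, 30, 61, 91, 122, 152, 183, 213, 244, 274, 305, 335])
output_dev = np.array([0.1, 0.3, 0.2, 0.5, 0.4, 0.7, 0.6, 0.9, 0.8, 1.1, 1.0, 1.2])

# Fit a linear drift model: deviation = a * t + b.
a, b = np.polyfit(days, output_dev, deg=1)

# The slope estimates the local output drift; the residual standard
# deviation estimates the reproducibility of the QC measurement itself.
residuals = output_dev - (a * days + b)
reproducibility_sd = residuals.std(ddof=2)  # two fitted parameters

print(f"Output drift: {a * 30.0:.2f} %/month")
print(f"Measurement reproducibility (1 SD): {reproducibility_sd:.2f} %")
```

Separating the systematic drift from the random scatter in this way is what allows action levels and the measurement interval to be tuned without losing dose accuracy.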
Abstract:
This thesis consists of an introduction, four research articles and an appendix. The thesis studies the relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) can be thought of as the algebraic classification of some basic objects in these models; it has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods for studying the random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, what would be a plausible relation between SLEs and conformal field theory. The first article studies multiple SLEs, that is, several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of a multiple SLE may form different topological configurations, "pure geometries". We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The most well known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas. The fourth article states the results of applications of the Virasoro structure to the open questions of SLE reversibility and duality. Proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support to both reversibility and duality.
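For readers unfamiliar with the acronym, the chordal Loewner evolution underlying SLE is defined by the standard equation (quoted here for orientation, not from the thesis):

```latex
\partial_t g_t(z) = \frac{2}{g_t(z) - W_t},
\qquad g_0(z) = z,
\qquad W_t = \sqrt{\kappa}\, B_t,
```

where B_t is a standard Brownian motion and the random curve is traced out by the points at which g_t ceases to be well defined. Variants such as SLE(kappa, rho) modify the driving process W_t by a drift determined by marked boundary points.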
Abstract:
This thesis introduces a practice-theoretical approach to understanding customer value formation, to be used in the field of service marketing and management. In contrast to current studies that try to understand value formation by analysing customers as independent actors and thinkers, it is suggested in this work that customer value formation can be better understood by analysing how value is formed in the practices and contexts of the customers. The theoretical approach developed in this thesis is applied in an empirical study of family cruises. The theoretical analysis in this thesis results in a new approach to understanding customer value formation. Customer value is, according to this new approach, something that is formed in practice, meaning that value is formed in constellations of the customer and contextual elements such as tools, physical spaces and contextually embedded images and know-how. This view differs from current views that tend to see value as subjectively created, co-created, perceived or experienced by the customer. The new approach has implications for how we view customer value, but also for the methods and techniques we can use to understand customer value in empirical studies. It is also suggested that services could in fact be reconceptualised as practices. According to the stance presented in this thesis, the empirical analysis of customer value should not focus on individual customers, but should instead take the contextual entity of practices as its unit of analysis. Therefore, ethnography is chosen as the method for exploring how customer value is formed in practice in the case of family cruises on a specific cruise vessel. The researcher studied six families, as well as the context of the cruise vessel, with various techniques including non-participant observation, participant observation and interviews in order to create an ethnographic understanding of the practices carried out on board. Twenty-one different practices are reported and discussed in order to provide the insight into customer value formation needed as input for service development.
Abstract:
Irritable bowel syndrome (IBS) is a common multifactorial functional intestinal disorder, the pathogenesis of which is not completely understood. Increasing scientific evidence suggests that microbes are involved in the onset and maintenance of IBS symptoms. The microbiota of the human gastrointestinal (GI) tract constitutes a massive and complex ecosystem consisting mainly of obligate anaerobic microorganisms, making the use of culture-based methods demanding and prone to misinterpretation. To overcome these drawbacks, an extensive panel of species- and group-specific assays for the accurate quantification of bacteria from fecal samples with real-time PCR was developed, optimized, and validated. As a result, the target bacteria were detectable at a minimum concentration of approximately 10 000 bacterial genomes per gram of fecal sample, which corresponds to the sensitivity to detect 0.000001% subpopulations of the total fecal microbiota. The real-time PCR panel, covering both commensal and pathogenic microorganisms, was used to compare the intestinal microbiota of patients suffering from IBS with that of a healthy control group devoid of GI symptoms. Both the IBS and control groups showed considerable individual variation in gut microbiota composition. Sorting of the IBS patients according to symptom subtype (diarrhea-predominant, constipation-predominant, and alternating type) revealed that lower amounts of Lactobacillus spp. were present in the samples of diarrhea-predominant IBS patients, whereas constipation-predominant IBS patients carried increased amounts of Veillonella spp. In the screening of intestinal pathogens, 17% of IBS samples tested positive for Staphylococcus aureus, whereas no positive cases were discovered among the healthy controls. Furthermore, the methodology was applied to monitor the effects of a multispecies probiotic supplementation on the GI microbiota of IBS sufferers. In the placebo-controlled, double-blind probiotic intervention trial of IBS patients, each supplemented probiotic strain was detected in the fecal samples. The intestinal microbiota remained stable during the trial, except for Bifidobacterium spp., which increased in the placebo group and decreased in the probiotic group. The combination of assays developed and applied in this thesis has an overall coverage of 300-400 known bacterial species, along with a number of as-yet-unknown phylotypes. Hence, it provides a good means for studying the intestinal microbiota, irrespective of the intestinal condition and health status. In particular, it allows the screening and identification of microbes putatively associated with IBS. The alterations in the gut microbiota discovered here support the hypothesis that microbes are likely to contribute to the pathophysiology of IBS. The central question is whether the microbiota changes described represent the cause of, rather than the effect of, disturbed gut physiology. Therefore, more studies are needed to determine the role and importance of individual microbial species or groups in IBS. In addition, it is essential that the microbial alterations observed in this study be confirmed using a larger set of IBS samples of different subtypes, preferably from various geographical locations.
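The two sensitivity figures quoted above are mutually consistent if one assumes the common literature estimate of roughly 10¹² bacteria per gram of feces (that density value is a rule of thumb, not a number from the thesis):

```latex
\frac{10^{4}\ \text{genomes/g}}{10^{12}\ \text{bacteria/g}} = 10^{-8} = 0.000001\,\%.
```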
Abstract:
The Hodgkin and Huxley (HH) model of the action potential has become a central paradigm of neuroscience. Despite its ability to predict action potentials with remarkable accuracy, it fails to explain several biophysical findings related to the initiation and propagation of the nerve impulse. The isentropic heat release and the optical phenomena demonstrated by various experiments suggest that the action potential is accompanied by a transient phase change in the axonal membrane. In this study a method was developed for preparing a giant axon from the crayfish abdominal cord in order to study the molecular mechanisms of the action potential simultaneously by electrophysiological and optical methods. An alternative setup using a single-cell culture of an Aplysia sensory neuron is also presented. In addition to the description of the method, preliminary results are presented on the effect of phloretin, a compound that lowers the membrane dipole potential, on the excitability of the crayfish giant axon.
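For context, the membrane equation of the HH model referred to above, in its standard textbook form (not specific to this thesis), is:

```latex
C_m \frac{dV}{dt} =
-\,\bar{g}_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}})
-\,\bar{g}_{\mathrm{K}}\, n^4\,(V - E_{\mathrm{K}})
-\,\bar{g}_{L}\,(V - E_{L})
+ I_{\mathrm{ext}},
```

where the gating variables m, h and n obey first-order kinetics with voltage-dependent rate coefficients. The model is purely electrical, which is why it is silent on the heat release and the membrane phase changes discussed in the abstract.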
Abstract:
Metabolomics is a rapidly growing research field that studies the response of biological systems to environmental factors, disease states and genetic modifications. It aims at measuring the complete set of endogenous metabolites, i.e. the metabolome, in a biological sample such as plasma or cells. Because metabolites are the intermediates and end products of biochemical reactions, metabolite compositions and metabolite levels in biological samples can provide a wealth of information on ongoing processes in a living system. Due to the complexity of the metabolome, metabolomic analysis poses a challenge to analytical chemistry. Adequate sample preparation is critical to accurate and reproducible analysis, and the analytical techniques must have high resolution and sensitivity to allow the detection of as many metabolites as possible. Furthermore, as the information contained in the metabolome is immense, the data sets collected in metabolomic studies are very large. In order to extract the relevant information from such large data sets, efficient data processing and multivariate data analysis methods are needed. In the research presented in this thesis, metabolomics was used to study mechanisms of polymeric gene delivery to retinal pigment epithelial (RPE) cells. The aim of the study was to detect differences in metabolomic fingerprints between transfected cells and non-transfected controls, and thereafter to identify the metabolites responsible for the discrimination. The plasmid pCMV-β was introduced into RPE cells using the vector polyethyleneimine (PEI). The samples were analyzed using high performance liquid chromatography (HPLC) and ultra performance liquid chromatography (UPLC) coupled to a triple quadrupole (QqQ) mass spectrometer (MS). The software MZmine was used for raw data processing, and principal component analysis (PCA) was used for statistical data analysis. The results revealed differences in metabolomic fingerprints between transfected cells and non-transfected controls. However, reliable fingerprinting data could not be obtained because of low analysis repeatability. Therefore, no attempts were made to identify the metabolites responsible for the discrimination between sample groups. The repeatability and accuracy of the analyses can be improved by protocol optimization; in this study, however, optimization of the analytical methods was hindered by the very small number of samples available for analysis. In conclusion, this study demonstrates that obtaining reliable fingerprinting data is technically demanding, and the protocols need to be thoroughly optimized in order to reach the goal of gaining information on the mechanisms of gene delivery.
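A minimal sketch of the fingerprinting step described above (the feature matrix, group labels and component count are illustrative assumptions; in the study the input would be the peak table exported from MZmine):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical peak-intensity matrix: rows = samples, columns = aligned
# LC-MS features; the values here are randomly generated placeholders.
rng = np.random.default_rng(0)
X = rng.lognormal(mean=10.0, sigma=1.0, size=(12, 500))
labels = ["transfected"] * 6 + ["control"] * 6

# Log-transform and autoscale the features, then project the samples
# onto the first two principal components.
X_scaled = StandardScaler().fit_transform(np.log(X))
scores = PCA(n_components=2).fit_transform(X_scaled)

# Group separation in the scores indicates differing fingerprints.
for label, (pc1, pc2) in zip(labels, scores):
    print(f"{label:12s} PC1={pc1:7.2f} PC2={pc2:7.2f}")
```

With poor analysis repeatability, within-group scatter in such a scores plot swamps the between-group separation, which is precisely the problem the abstract reports.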
Abstract:
In the context of health care, information technology (IT) has an important role in the operational infrastructure, ranging from business management to patient care. An essential part of the system is medication management in inpatient and outpatient care. Community pharmacists' strategy has been to extend practice responsibilities beyond dispensing towards patient care services, but few studies have evaluated the strategic development of IT systems to support this vision. The objectives of this study were to assess and compare independent Finnish community pharmacy owners' and staff pharmacists' priorities concerning the content and structure of the next generation of community pharmacy IT systems; to explore international experts' visions and strategic views on IT development needs in relation to the services provided in community pharmacies; to identify IT innovations facilitating patient care services and to evaluate their development and implementation processes; and to assess community pharmacists' readiness to adopt innovations. The study applied both qualitative and quantitative methods. Qualitative personal interviews of 14 experts in community pharmacy services and related IT from eight countries were conducted, along with a national survey of Finnish community pharmacy owners (mail survey, response rate 53%, n=308) and of a representative sample of staff pharmacists (online survey, response rate 22%, n=373). Finnish independent community pharmacy owners gave priority to logistical functions, but also to those related to medication information and patient care. The managers and staff pharmacists had different views of the importance of IT features, reflecting their different professional duties in the community pharmacy; this indicates the need to involve different occupational groups in planning the new IT systems for community pharmacies. A majority of the international experts shared the vision of community pharmacy adopting a patient care orientation, supported by IT-based documentation, new technological solutions, access to information, and shared patient data. Community pharmacy IT innovations were rare, which is paradoxical because owners' and staff pharmacists' perception of their own innovativeness was high. Community pharmacy IT system development processes usually had not undergone systematic needs-assessment research beforehand or evaluation after implementation, and they were most often coordinated by national governments without subsequent commercialization. In particular, community pharmacy IT development lacks research, organization, leadership and user involvement in the process. Those responsible for IT development in the community pharmacy sector should create long-term IT development strategies that are in line with community pharmacy service development strategies. This could provide systematic guidance for future projects, ensuring that potential innovations are based on a sufficient understanding of the pharmacy practice problems they are intended to solve; that research and the development of innovations have strong leadership, so that community pharmacists' potential innovativeness is put to use; and that professional needs and strategic priorities are considered even if the development process is led by those outside the profession.
Abstract:
This thesis studies the effect of income inequality on economic growth. This is done by analyzing panel data from several countries, with both short and long time dimensions. Two of the chapters study the direct effect of inequality on growth, and one chapter also looks at a possible indirect effect by assessing the effect of inequality on savings. In Chapter two, the effect of inequality on growth is studied using a panel of 70 countries and the new EHII2008 inequality measure. The chapter addresses two problems that panel econometric studies on the economic effects of inequality have recently encountered: the comparability problem associated with the commonly used Deininger and Squire's Gini index, and the problem of estimating group-related elasticities in panel data. In this study, a simple way to 'bypass' the vagueness related to the use of parametric methods for estimating group-related parameters is presented. The idea is to estimate the group-related elasticities implicitly, using a set of group-related instrumental variables. The estimation results with the new data and method indicate that the relationship between income inequality and growth is likely to be non-linear. Chapter three uses the EHII2.1 inequality measure and a panel of annual time series observations from 38 countries to test the existence of long-run equilibrium relation(s) between inequality and the level of GDP. Panel unit root tests indicate that both the logarithmic EHII2.1 inequality measure and the logarithmic GDP per capita series are I(1) nonstationary processes. They are also found to be cointegrated of order one, which implies that there is a long-run equilibrium relation between them. The long-run growth elasticity of inequality is found to be negative in the middle-income and rich economies, but the results for poor economies are inconclusive. In Chapter four, macroeconomic data on nine developed economies, spanning four decades starting from 1960, are used to study the effect of changes in the top income share on national and private savings. The income share of the top 1% of the population is used as a proxy for the distribution of income. The effect of inequality on private savings is found to be positive in the Nordic and Central European countries, but for the Anglo-Saxon countries the direction of the effect (positive vs. negative) remains somewhat ambiguous. Inequality is found to have an effect on national savings only in the Nordic countries, where it is positive.
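The cointegration result in Chapter three can be summarised in equation form (the notation is chosen here for illustration rather than taken from the thesis):

```latex
\log \mathrm{GDP}_{it} = \alpha_i + \beta \log \mathrm{EHII}_{it} + \varepsilon_{it},
```

where both logarithmic series are I(1) but the residual ε_it is stationary, so the two variables move together in the long run; β is then the long-run elasticity, estimated to be negative for the middle-income and rich economies.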