Abstract:
Background: The ESTRO Health Economics in Radiation Oncology (HERO) project has the overall aim of developing a knowledge base of the provision of radiotherapy in Europe and building a model for health economic evaluation of radiation treatments at the European level. The first milestone was to assess the availability of radiotherapy resources within Europe. This paper presents the personnel data collected in the ESTRO HERO database. Materials and methods: An 84-item questionnaire was sent out to European countries through their national scientific and professional radiotherapy societies. The current report includes a detailed analysis of radiotherapy staffing (questionnaire items 47–60), analysed in relation to the annual number of treatment courses and the socio-economic status of the countries. The analysis was conducted between February and July 2014, and is based on validated responses from 24 of the 40 European countries defined by the European Cancer Observatory (ECO). Results: A large variation between countries was found for most parameters studied. Averages and ranges for personnel numbers per million inhabitants are 12.8 (2.5–30.9) for radiation oncologists, 7.6 (0–19.7) for medical physicists, 3.5 (0–12.6) for dosimetrists, 26.6 (1.9–78) for RTTs and 14.8 (0.4–61.0) for radiotherapy nurses. The combined average per million inhabitants is 9.8 for physicists and dosimetrists and 36.9 for RTTs and nurses. Radiation oncologists on average treat 208.9 courses per year (range: 99.9–348.8), physicists and dosimetrists jointly 303.3 courses (range: 85–757.7), and RTTs and nurses 76.8 (range: 25.7–156.8). In countries with higher GNI per capita, all personnel categories treat fewer courses per annum than in less affluent countries. This relationship is most evident for RTTs and nurses. Different clusters of countries can be distinguished on the basis of available personnel resources and socio-economic status. Conclusions: The average personnel figures in Europe are now consistent with, or even more favourable than, the QUARTS recommendations, probably reflecting a combination of better availability as such and the current use of more complex treatments than a decade ago. A considerable variation in available personnel and delivered courses per year nevertheless persists between the highest and lowest staffing levels. This reflects not only the variation in cancer incidence and socio-economic determinants, but also the stage of technology adoption, treatment complexity, and the different professional roles and responsibilities within each country. Our data underpin the need for accurate prediction models and long-term education and training programmes.
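As a minimal illustration of how the staffing indicators above are derived (a hypothetical sketch, not part of the HERO analysis; all input numbers are invented), the per-million and courses-per-staff figures follow directly from raw national counts:

    # Minimal sketch (hypothetical numbers): derive the staffing indicators
    # reported above from raw national counts.

    def per_million(staff_count: float, population: float) -> float:
        """Personnel per million inhabitants."""
        return staff_count / (population / 1e6)

    def courses_per_staff(annual_courses: float, staff_count: float) -> float:
        """Average treatment courses handled per staff member per year."""
        return annual_courses / staff_count

    population = 5_400_000       # hypothetical country population
    radiation_oncologists = 69   # hypothetical headcount
    annual_courses = 14_400      # hypothetical treatment courses per year

    print(per_million(radiation_oncologists, population))            # ~12.8 per million
    print(courses_per_staff(annual_courses, radiation_oncologists))  # ~208.7 courses/year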
Abstract:
Electron transport in a self-consistent potential along a ballistic two-terminal conductor has been investigated. We have derived general formulas which describe the nonlinear current-voltage characteristics, differential conductance, and low-frequency current and voltage noise assuming an arbitrary distribution function and correlation properties of injected electrons. The analytical results have been obtained for a wide range of biases: from equilibrium to high values beyond the linear-response regime. The particular case of a three-dimensional Fermi-Dirac injection has been analyzed. We show that the Coulomb correlations are manifested in the negative excess voltage noise, i.e., the voltage fluctuations under high-field transport conditions can be less than in equilibrium.
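As a notational sketch (not the paper's derivation), the "negative excess voltage noise" statement can be written as the low-frequency voltage noise spectral density under bias falling below its equilibrium value:

    % Excess voltage noise relative to equilibrium (illustrative notation)
    \[
      \Delta S_V(V) \;=\; S_V(V) \,-\, S_V(0), \qquad
      \Delta S_V(V) < 0 \quad \text{under high-field transport with Coulomb correlations.}
    \]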
Abstract:
Social, technological, and economic time series are punctuated by events which are usually assumed to be random, albeit with some hierarchical structure. It is well known that the interevent statistics observed in these contexts differ from the Poissonian profile in being long-tailed, with resting and active periods interwoven. Understanding the mechanisms that generate such statistics has therefore become a central issue. The approach we present is taken from the continuous-time random-walk formalism and represents an analytical alternative to the models of nontrivial priority that have recently been proposed. Our analysis also goes one step further by looking at the multifractal structure of the interevent times of human decisions. Here we analyze the intertransaction time intervals of several financial markets. We observe that the empirical data exhibit a subtle multifractal behavior. Our model explains this structure by taking the pausing-time density in the form of a superstatistics, where the integral kernel quantifies the heterogeneous nature of the executed tasks. A stretched exponential kernel provides a multifractal profile valid over a certain limited range. A suggested heuristic analytical profile is capable of covering a broader region.
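A generic superstatistical pausing-time density consistent with this description (an illustrative form; the paper's exact kernel and normalisation may differ) is an exponential waiting-time law averaged over a kernel that encodes task heterogeneity, with the stretched exponential choice for the kernel:

    % Superstatistical pausing-time density (generic illustrative form)
    \[
      \psi(t) \;=\; \int_0^{\infty} f(\lambda)\,\lambda\, e^{-\lambda t}\, d\lambda ,
      \qquad
      f(\lambda) \propto \exp\!\left[-(\lambda/\lambda_0)^{\beta}\right],
      \quad 0 < \beta < 1 .
    \]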
Abstract:
In general, models of ecological systems can be broadly categorized as 'top-down' or 'bottom-up' models, based on the hierarchical level at which the model processes are formulated. The structure of a top-down, also known as phenomenological, population model can be interpreted in terms of population characteristics, but it typically lacks an interpretation on a more basic level. In contrast, bottom-up, also known as mechanistic, population models are derived from assumptions and processes on a more basic level, which allows interpretation of the model parameters in terms of individual behavior. Both approaches, phenomenological and mechanistic modelling, have their advantages and disadvantages in different situations. However, mechanistically derived models may be better at capturing the properties of the system at hand, and thus give more accurate predictions. In particular, when models are used for evolutionary studies, mechanistic models are more appropriate, since natural selection takes place on the individual level, and in mechanistic models the direct connection between model parameters and individual properties has already been established. The purpose of this thesis is twofold. Firstly, a systematic way to derive mechanistic discrete-time population models is presented. The derivation is based on combining explicitly modelled, continuous processes on the individual level within a reproductive period with a discrete-time maturation process between reproductive periods. Secondly, as an example of how evolutionary studies can be carried out in mechanistic models, the evolution of the timing of reproduction is investigated. Thus, these two lines of research, the derivation of mechanistic population models and evolutionary studies, complement each other.
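A minimal Python sketch of the derivation pattern described here (illustrative parameters and functional forms, not a model from the thesis): a continuous, individual-level mortality process is integrated within the reproductive period and combined with a discrete reproduction step between periods:

    # Minimal sketch (illustrative, not from the thesis): a mechanistic
    # discrete-time population model obtained by integrating an explicit
    # within-season mortality process and applying a discrete reproduction
    # step between seasons.

    def within_season_survival(n0: float, mu: float, c: float,
                               season_length: float, steps: int = 1000) -> float:
        """Euler integration of dN/dt = -(mu + c*N) * N over one season:
        baseline mortality mu plus density-dependent mortality c*N."""
        dt = season_length / steps
        n = n0
        for _ in range(steps):
            n += -(mu + c * n) * n * dt
        return n

    def next_generation(n: float, births_per_adult: float = 4.0,
                        mu: float = 0.5, c: float = 0.01,
                        season_length: float = 1.0) -> float:
        """One discrete-time step: survive the season, then reproduce."""
        survivors = within_season_survival(n, mu, c, season_length)
        return births_per_adult * survivors

    # Iterating the map yields Beverton-Holt-like dynamics whose parameters
    # (mu, c, births_per_adult) remain interpretable at the individual level.
    n = 10.0
    for t in range(20):
        n = next_generation(n)
    print(round(n, 2))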
Abstract:
Both the competitive environment and the internal structure of an industrial organization are typically included in descriptions of the firm's strategic management processes, but less attention has been paid to the interdependence between these views. This research therefore focuses on explaining the particular conditions of an industry change which lead managers to realign the firm with its environment in order to generate competitive advantage. The research question that directs the development of the theoretical framework is: Why do firms outsource some of their functions? The three general stages of the analysis are related to the following research topics: (i) understanding the forces that shape the industry, (ii) estimating the impact of transforming customer preferences, rivalry, and changing capability bases on the relevance of existing assets and activities and on the emergence of new business models, and (iii) developing optional structures for future value chains and understanding the general boundaries for market emergence. The defined research setting contributes to the managerial research questions “Why do firms reorganize their value chains?” and “Why and how are decisions made?” Combining Transaction Cost Economics (TCE) and the Resource-Based View (RBV) within an integrated framework makes it possible to evaluate two dimensions of a company’s resources, namely their strategic value and transferability. The final restructuring decision is made on the basis of an analysis of the actual business potential of the outsourcing, where benefits and risks are evaluated. The firm focuses on the risk of opportunism, hold-up problems, pricing, the prospects of reaching a complete contract, and finally the direct benefits and risks for financial performance. The supplier analyzes the business potential of an activity outside the specific customer, the amount of customer-specific investments, the service provider’s competitive position, its ability to achieve revenue gains in generic segments, and its long-term dependence on the customer.
Abstract:
Genetic diversity is one of the levels of biodiversity that the World Conservation Union (IUCN) has recognized as being important to preserve, because genetic diversity is fundamental to the future evolution and adaptive flexibility of a species in responding to the inherently dynamic nature of the natural world. Therefore, the key to maintaining biodiversity and healthy ecosystems is to identify, monitor and maintain locally adapted populations, along with their unique gene pools, upon which future adaptation depends. Conservation genetics thus deals with the genetic factors that affect extinction risk and the genetic management regimes required to minimize that risk. The conservation of exploited species, such as salmonid fishes, is particularly challenging due to the conflicts between different interest groups. In this thesis, I conduct a series of conservation genetic studies on primarily Finnish populations of two salmonid fish species (European grayling, Thymallus thymallus, and lake-run brown trout, Salmo trutta), which are popular recreational game fishes in Finland. The general aim of these studies was to apply and develop population genetic approaches to assist the conservation and sustainable harvest of these populations. The approaches applied included: i) the characterization of population genetic structure at national and local scales; ii) the identification of management units and the prioritization of populations for conservation based on the evolutionary forces shaping indigenous gene pools; iii) the detection of population declines and the testing of the assumptions underlying these tests; and iv) the evaluation of the contribution of natural populations to a mixed stock fishery. Based on microsatellite analyses, clear genetic structuring of exploited Finnish grayling and brown trout populations was detected at both national and local scales. Finnish grayling clustered into three genetically distinct groups, corresponding to the northern, Baltic and south-eastern geographic areas of Finland. The genetic differentiation among and within population groups of grayling ranged from moderate to high levels. Such strong genetic structuring, combined with low genetic diversity, strongly indicates that genetic drift plays a major role in the evolution of grayling populations. Further analyses of European grayling covering the majority of the species’ distribution range indicated a strong global footprint of population decline. Using a coalescent approach, the beginning of the population reduction was dated to 1,000–10,000 years ago (ca. 200–2,000 generations). Forward simulations demonstrated that bottleneck footprints measured using the M ratio can persist within small populations much longer than previously anticipated in the face of low levels of gene flow. In contrast to the M ratio, two alternative methods for genetic bottleneck detection identified recent bottlenecks in six grayling populations that warrant future monitoring. Consistent with the predominant role of random genetic drift, the effective population size (Ne) estimates of all grayling populations were very low, with the majority of Ne estimates below 50. Taken together, the highly structured local populations, limited gene flow and small Ne of grayling populations indicate that grayling populations are vulnerable to overexploitation and, hence, that monitoring and careful management using precautionary principles are required not only in Finland but throughout Europe.
Population genetic analyses of lake-run brown trout populations in the Inari basin (northernmost Finland) revealed a hierarchical population structure in which individual populations clustered into three population groups largely corresponding to different geographic regions of the basin. As in my earlier work with European grayling, the genetic differentiation among and within population groups of lake-run brown trout was relatively high. Such strong differentiation indicated that the power to determine the relative contribution of populations in mixed fisheries should be relatively high. Consistent with these expectations, high accuracy and precision were observed in mixed stock analysis (MSA) simulations. Application of MSA to indigenous fish caught in the Inari basin identified altogether twelve populations that contributed significantly to mixed stock fisheries, with the Ivalojoki river system being the major contributor (70%) to the total catch. When the contribution of wild trout populations to the fisheries was evaluated regionally, geographically nearby populations were the main contributors to the local catches. MSA also revealed a clear separation between the lower and upper reaches of the Ivalojoki river system: in contrast to the lower reaches of the Ivalojoki river, which contributed considerably to the catch, populations from the upper reaches of the river system (>140 km from the river mouth) did not contribute significantly to the fishery. This could be related to the available habitat size, but might also be associated with a resident-type life history and the increased cost of migration. The studies in my thesis highlight the importance of dense sampling and wide population coverage at the scale being studied, and also demonstrate the importance of critically evaluating the underlying assumptions of the population genetic models and methods used. These results have important implications for the conservation and sustainable fisheries management of Finnish populations of European grayling and of brown trout in the Inari basin.
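For concreteness, the M ratio used above for bottleneck detection is conventionally computed per microsatellite locus as M = k/(r + 1) (Garza–Williamson). A minimal Python sketch with hypothetical allele data follows; the thesis's actual pipeline, thresholds and bias corrections are omitted:

    # Minimal sketch (hypothetical data, not the thesis's pipeline): the
    # Garza-Williamson M ratio for one microsatellite locus, M = k / (r + 1),
    # where k is the number of distinct alleles observed and r is the allelic
    # size range in repeat units. Values well below ~0.68 are conventionally
    # read as bottleneck footprints.

    def m_ratio(allele_sizes_bp: list[int], repeat_length_bp: int = 2) -> float:
        sizes = sorted(set(allele_sizes_bp))
        k = len(sizes)                                   # number of distinct alleles
        r = (sizes[-1] - sizes[0]) // repeat_length_bp   # size range in repeat units
        return k / (r + 1)

    # Hypothetical dinucleotide locus: gaps in the allele ladder lower M.
    print(m_ratio([120, 122, 126, 134, 136]))  # 5 alleles over 8 repeat units -> ~0.56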
Abstract:
The purpose of this thesis is to develop an environment, or network, that enables effective collaborative product structure management among stakeholders in each unit, throughout the entire product lifecycle, as part of product data management. The thesis approaches the problem through framework models: three distinct models are depicted to support collaborative product structure management, namely an organization model, a process model and a product model. In the organization model, the formation of the product data management system (eDSTAT) key user network is specified. In the process model, development is based on the case company's product development matrix. In the product model framework, product model management, product knowledge management and design knowledge management are defined as development tools, and collaboration is based on web-based product structure management. Collaborative management is executed using all of these approaches. A case study from an actual project at the case company is presented as an implementation, to verify the models' applicability. A computer-assisted design tool and the web-based product structure manager were used as the tools of this collaboration, with the support of the key user. The current PDM system, eDSTAT, is used as a piloting case for the key user role. As a result of this development, the role of the key user as a collaboration channel is defined and established. The key user is able to provide one-on-one support for elevator projects. Management activities are also improved through the application of a process workflow that follows criteria for each project milestone. The development demonstrates the effectiveness of product structure management in the product lifecycle and an improved production process, achieved by eliminating barriers (e.g. by improving two-way communication) between the design and production phases. The key user role is applicable on a global scale within the company.
Abstract:
This study presents an automatic, computer-aided analytical method called Comparison Structure Analysis (CSA), which can be applied to different dimensions of music. The aim of CSA is first and foremost practical: to produce dynamic and understandable representations of musical properties by evaluating the prevalence of a chosen musical data structure through a musical piece. Such a comparison structure may refer to a mathematical vector, a set, a matrix or another type of data structure, and even a combination of data structures. CSA depends on an abstract, systematic segmentation that allows for a statistical or mathematical survey of the data. To choose a comparison structure is to tune the apparatus to be sensitive to an exclusive set of musical properties. CSA sits somewhere between traditional music analysis and computer-aided music information retrieval (MIR). Theoretically defined musical entities, such as pitch-class sets, set-classes and particular rhythm patterns, are detected in compositions using pattern extraction and pattern comparison algorithms that are typical within the field of MIR. In principle, the idea of comparison structure analysis can be applied to any time-series type data and, in the music analytical context, to polyphonic as well as homophonic music. Tonal trends, set-class similarities, invertible counterpoints, voice-leading similarities, short-term modulations, rhythmic similarities and multiparametric changes in musical texture were studied. Since CSA allows for a highly accurate classification of compositions, its methods may be applicable to symbolic music information retrieval as well. The strength of CSA lies especially in the possibility of comparing observations concerning different musical parameters and of combining such comparisons with statistical and perhaps other music analytical methods. The results of CSA depend on the adequacy of the similarity measure. New similarity measures for tonal stability, rhythmic similarity and set-class similarity were proposed. The most advanced results were attained by employing automated function generation (comparable to so-called genetic programming) to search for an optimal model for set-class similarity measurements. However, the results of CSA seem to agree strongly regardless of the type of similarity function employed in the analysis.
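A schematic Python sketch of the comparison-structure idea (not the CSA software; the window, hop and target set are hypothetical): segment the piece, then score each segment against a chosen comparison structure, here a pitch-class set:

    # Schematic sketch of the comparison-structure idea (not the CSA software):
    # segment a piece, then score each segment against a chosen comparison
    # structure -- here, overlap with a target pitch-class set.

    TARGET_PC_SET = {0, 4, 7}  # hypothetical comparison structure: a C major triad

    def segment_score(segment_pitches: list[int]) -> float:
        """Fraction of the segment's pitch classes contained in the target set."""
        pcs = {p % 12 for p in segment_pitches}
        return len(pcs & TARGET_PC_SET) / len(pcs) if pcs else 0.0

    def comparison_curve(pitches: list[int], window: int = 8, hop: int = 4) -> list[float]:
        """Prevalence of the comparison structure across the piece."""
        return [segment_score(pitches[i:i + window])
                for i in range(0, max(1, len(pitches) - window + 1), hop)]

    # Hypothetical MIDI pitch stream; the resulting curve is the kind of
    # dynamic representation CSA aims to produce.
    melody = [60, 64, 67, 72, 62, 65, 69, 74, 60, 64, 67, 71]
    print(comparison_curve(melody))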
Abstract:
Alpha2-adrenoceptors: structure and ligand binding properties at the molecular level. The mouse is the most frequently used animal model in biomedical research, but the use of zebrafish as a model organism to mimic human diseases is on the increase. It is therefore considered important to understand their pharmacological differences from humans also at the molecular level. The zebrafish Alpha2-adrenoceptors were expressed in mammalian cells, and the binding affinities of 20 diverse ligands were determined and compared to the corresponding human receptors. The pharmacological properties of the human and zebrafish Alpha2-adrenoceptors were found to be quite well conserved. Receptor models based on the crystal structures of bovine rhodopsin and the human Beta2-adrenoceptor revealed that most structural differences between the paralogous and orthologous Alpha2-adrenoceptors were located within the second extracellular loop (XL2). Reciprocal mutations were generated in the mouse and human Alpha2-adrenoceptors. Ligand binding experiments revealed that substitutions in XL2 reversed the binding profiles of the human and mouse Alpha2-adrenoceptors for yohimbine, rauwolscine and RS-79948-197, providing evidence for a role of XL2 in the determination of species-specific ligand binding. Previous mutagenesis studies had not been able to explain the subtype preference of several large Alpha2-adrenoceptor antagonists. We prepared chimaeric Alpha2-adrenoceptors in which the first transmembrane (TM1) domain was exchanged between the three human Alpha2-adrenoceptor subtypes. The binding affinities of spiperone, spiroxatrine and chlorpromazine were observed to be significantly improved by TM1 substitutions in the Alpha2A-adrenoceptor. Docking simulations indicated that indirect effects, such as allosteric modulation, are more likely to be involved in this phenomenon than specific side-chain interactions between ligands and receptors.
Abstract:
The condensation rate has to be high in the pressure suppression pool systems of Boiling Water Reactors (BWR) in order for them to fulfill their safety function. The phenomena associated with such a high direct contact condensation (DCC) rate turn out to be very challenging to analyse, either with experiments or with numerical simulations. In this thesis, the suppression pool experiments carried out in the POOLEX facility of Lappeenranta University of Technology were simulated. Two different condensation modes were modelled using the two-phase CFD codes NEPTUNE CFD and TransAT. The DCC models applied were those typically used for separated flows in channels; their applicability to the rapidly condensing flow in the condensation pool context had not been tested earlier. A low Reynolds number case was the first to be simulated. The POOLEX experiment STB-31 was operated near the boundary between the 'quasi-steady oscillatory interface condensation' mode and the 'condensation within the blowdown pipe' mode. The condensation models of Lakehal et al. and Coste & Laviéville predicted the condensation rate quite accurately, while the other tested models overestimated it. It was possible to get the direct phase change solution to settle near the measured values, but a very fine calculation grid was needed. Secondly, a high Reynolds number case corresponding to the 'chugging' mode was simulated. The POOLEX experiment STB-28 was chosen because various standard and high-speed video samples of bubbles were recorded during it. In order to extract numerical information from the video material, a pattern recognition procedure was programmed. The bubble size distributions and the frequencies of chugging were calculated with this procedure. With the statistical data on bubble sizes and the temporal data on bubble/jet appearance, it was possible to compare the condensation rates between the experiment and the CFD simulations. In the chugging simulations, a spherically curvilinear calculation grid at the blowdown pipe exit improved convergence and decreased the required cell count. A compressible flow solver with complete steam tables was beneficial for the numerical success of the simulations. The Hughes-Duffey model and, to some extent, the Coste & Laviéville model produced realistic chugging behavior. The initial level of the steam/water interface was an important factor in determining the initiation of chugging. If the interface was initialized with a water level high enough inside the blowdown pipe, the vigorous penetration of a water plug into the pool created a turbulent wake which triggered self-sustaining chugging. A 3D simulation with a suitable DCC model produced qualitatively very realistic shapes of the chugging bubbles and jets. A comparative FFT analysis of the bubble size data and the pool bottom pressure data gave useful information for distinguishing the eigenmodes of chugging, bubbling, and pool structure oscillations.
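The comparative FFT analysis mentioned at the end can be illustrated with a minimal NumPy sketch (synthetic signal, not POOLEX data): the dominant spectral peak of a pool-bottom pressure record identifies the chugging frequency:

    # Minimal sketch (synthetic data, not POOLEX measurements): estimating a
    # dominant chugging frequency from a pool-bottom pressure record via FFT,
    # in the spirit of the comparative spectral analysis described above.
    import numpy as np

    fs = 1000.0                       # sampling rate [Hz] (assumed)
    t = np.arange(0, 20.0, 1.0 / fs)  # 20 s record
    # Synthetic pressure signal: a ~1.5 Hz chugging component, a higher
    # structural mode, and measurement noise.
    p = (2.0 * np.sin(2 * np.pi * 1.5 * t)
         + 0.5 * np.sin(2 * np.pi * 12.0 * t)
         + 0.3 * np.random.randn(t.size))

    spectrum = np.abs(np.fft.rfft(p - p.mean()))   # remove DC, take magnitude
    freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
    print(f"dominant frequency: {freqs[spectrum.argmax()]:.2f} Hz")  # ~1.50 Hz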
Abstract:
Physical activity (PA) is an important field of healthcare research internationally and within Finland. As technology devices and services penetrate deeper into society, studying their usefulness for PA becomes vital. We started this research by reviewing literature comprising two hundred research articles, which have found technology to significantly improve an individual's motivation and ability to achieve officially recommended levels of physical activity, such as 10,000 steps a day tracked with the help of pedometers. Physical activity recommendations require sustained encouragement and consistent performance in order to achieve long-term benefits. We surveyed, within the city of Turku, how motivation levels and thirty-three other criteria encompassing technology awareness, adoption and usage attitudes are affected. Our aim was to identify the factors responsible for achieving consistent growth in activity levels within individuals and focus groups, as well as to determine the causes of failures and to collect user experience feedback. The survey results were quite interesting and contain valuable information for this field. While the focus groups confirmed the theory established by the past studies in our literature review, the results also support our research proposition that ICT tools and services have provided, and can further add, benefits and value to individuals in tracking and maintaining their activity levels consistently over longer durations. This thesis includes two new models which describe technology and physical activity adoption patterns based on four easy-to-evaluate criteria, thereby helping healthcare providers to recommend improvements and address issues with a simple rule-based approach. This research work provides vital clues on technology-based healthcare objectives and the achievement of standard PA recommendations by people within Turku and nearby regions.
Abstract:
In this work, mathematical programming models for the structural and operational optimisation of energy systems are developed and applied to a selection of energy technology problems. The studied cases are taken from industrial processes and from large regional energy distribution systems. The models are based on Mixed Integer Linear Programming (MILP), Mixed Integer Non-Linear Programming (MINLP) and on a hybrid approach combining Non-Linear Programming (NLP) and Genetic Algorithms (GA). The optimisation of the structure and operation of energy systems in urban regions is treated in the work. Firstly, distributed energy systems (DES) with different energy conversion units and annual variations of consumer heating and electricity demands are considered. Secondly, district cooling systems (DCS) with cooling demands for a large number of consumers are studied from a long-term planning perspective, given predictions of consumer cooling demand development in a region. The work also comprises the development of applications for heat recovery systems (HRS), where the dryer section HRS of a paper machine is taken as an illustrative example. The heat sources in these systems are moist air streams. Models are developed for different types of equipment price functions. The approach is based on partitioning the overall temperature range of the system into a number of temperature intervals in order to take into account the strong nonlinearities due to condensation in the heat recovery exchangers. The influence of parameter variations on the solutions of heat recovery systems is analysed, firstly by varying cost factors and secondly by varying process parameters. Point-optimal solutions obtained with a fixed-parameter approach are compared to robust solutions with given parameter variation ranges. The work also studies enhanced utilisation of excess heat in heat recovery systems with impingement drying, electricity generation with low-grade excess heat, and the use of absorption heat transformers to elevate a stream temperature above the excess heat temperature.
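As a minimal illustration of the MILP pattern underlying such models (hypothetical units, costs and demands; not the thesis formulations), structural build decisions and operational dispatch can be optimised jointly, e.g. with the PuLP library:

    # Illustrative MILP sketch (hypothetical data, not the thesis models):
    # structural choice (build / do not build) plus operation (dispatch) of
    # energy conversion units, the basic pattern behind DES/DCS optimisation.
    import pulp

    units = {           # name: (capacity [MW], fixed cost, variable cost)
        "chp":    (30.0, 900.0, 20.0),
        "boiler": (50.0, 400.0, 35.0),
    }
    demand = {"winter": 45.0, "summer": 12.0}   # heat demand per period [MW]

    prob = pulp.LpProblem("des_structure_and_operation", pulp.LpMinimize)
    build = {u: pulp.LpVariable(f"build_{u}", cat="Binary") for u in units}
    q = {(u, p): pulp.LpVariable(f"q_{u}_{p}", lowBound=0)
         for u in units for p in demand}

    # Objective: fixed investment costs plus operating costs.
    prob += (pulp.lpSum(units[u][1] * build[u] for u in units)
             + pulp.lpSum(units[u][2] * q[u, p] for u in units for p in demand))

    for p in demand:                      # meet demand in every period
        prob += pulp.lpSum(q[u, p] for u in units) >= demand[p]
    for u in units:                       # dispatch only from built units
        for p in demand:
            prob += q[u, p] <= units[u][0] * build[u]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({u: int(build[u].value()) for u in units})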
Abstract:
This paper discusses existing military capability models and proposes a comprehensive capability meta-model (CCMM) which unites the existing capability models into an integrated and hierarchical whole. The Zachman Framework for Enterprise Architecture is used as the structure for the CCMM. The CCMM takes into account the abstraction level, the primary area of application, the stakeholders, the intrinsic process, and the life cycle considerations of each existing capability model, and shows how the models relate to each other. The validity of the CCMM was verified through a survey of subject matter experts. The results suggest that the CCMM is of practical value to various capability stakeholders in many ways, such as by helping to improve communication between the different capability communities.
Abstract:
The purpose of this study is to examine and explain the impact of firm growth on capital structure decision-making in research and development (R&D) intensive companies. Many studies claim that R&D has a pivotal impact on capital structure decisions, but corporate finance theories have often failed to explain the observed patterns. As sales growth is an important concept and objective for R&D firms, it is logical to assume that it plays a vital role in capital structure decisions. This study applies a nomothetic research approach. The theoretical part employs formal conceptual analysis in order to develop the propositions that are tested with empirical data. The empirical part consists of an analysis of three companies; the data are obtained from annual reports over the period 2003–2008. The companies operate in the IT or ICT industry and are publicly listed. The method for analyzing the case data is based on financial indicators obtained from the financials of the case companies. These economic indicators describe the capital structure and the financial decision-making of the firms. The method is thus quantitative, yet this study extends the analysis beyond the indicators by addressing the question of what lies behind them, thereby combining aspects of quantitative and qualitative analysis. The firms examined in this study seem to prefer internal finance during growth. However, external finance seems to be a catalyst for sales growth. The firms strongly prefer equity financing: in growth, the share of equity in total capital either increases or stays at a constant level. Over the period 2003–2008, the firms were frequently involved in equity-related transactions and in short-term debt. Short-term debt was used as a substitute for long-term debt and equity. The case firms also adjusted their capital structure; these adjustments were carried out with short-term debt or equity. The case data also provide support for the growth signal theory developed in this study. Based on the economic indicators, it can be argued that equity investors are 'attracted' to growing R&D firms, because growth helps investors perceive the true type of the firm. The findings of this study are best explained by the trade-off theory and the pecking order theory, which are considered mainstream corporate finance theories. Little support is found for the implications of the signaling theory and the market timing theory.