Abstract:
Systems biology is a new, rapidly developing, multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology: comprehending the function of complex biological systems. Systems biology combines methods originating from scientific disciplines such as molecular biology, chemistry, the engineering sciences, mathematics, computer science and systems theory. Unlike "traditional" biology, systems biology focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, concurrency, and many others. The very terminology of systems biology is "foreign" to "traditional" biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods originating in computer science and mathematics for the construction and analysis of computational models in systems biology. In particular, the research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton. The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to these case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as complex systems in general. The full range of developed and applied modelling techniques and model analysis methodologies constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussion of their potentials and limitations point to the difficulties and challenges one encounters in computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
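As a concrete illustration of the kind of mechanistic model meant here, the sketch below integrates a toy mass-action ODE system; the species, reactions and rate constants are hypothetical stand-ins, not the thesis's actual heat shock response or filament assembly models.

```python
# Illustrative sketch of an ODE model of the kind used in systems biology:
# a toy mass-action scheme (NOT the thesis's model; species, reactions and
# rate constants here are hypothetical).
from scipy.integrate import solve_ivp

k_on, k_off, k_deg = 0.5, 0.1, 0.05  # hypothetical rate constants

def rhs(t, y):
    """dy/dt for a toy scheme: protein P + chaperone C <-> complex PC,
    with first-order degradation of the complex releasing the chaperone."""
    P, C, PC = y
    v_bind = k_on * P * C    # mass-action binding
    v_unbind = k_off * PC    # dissociation
    v_deg = k_deg * PC       # degradation of the complex
    return [-v_bind + v_unbind,
            -v_bind + v_unbind + v_deg,  # chaperone recycled on degradation
             v_bind - v_unbind - v_deg]

sol = solve_ivp(rhs, (0.0, 100.0), y0=[1.0, 1.0, 0.0])
print(sol.y[:, -1])  # concentrations at t = 100
```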
Abstract:
This book is dedicated to celebrating the 60th birthday of Professor Rainer Huopalahti. Professor Rainer "Repe" Huopalahti has had, and in fact is still enjoying, a distinguished career in the analysis of food and food-related flavor compounds. One will find it hard to make any progress in this particular field without a valid and innovative sample handling technique, and this is a field in which Professor Huopalahti has made great contributions. The title and the front cover of this book honor Professor Huopalahti's early steps in science. His PhD thesis, published in 1985, is entitled "Composition and content of aroma compounds in the dill herb, Anethum graveolens L., affected by different factors". At that time, the thesis introduced new technology being applied to sample handling and analysis of the flavoring compounds of dill. Sample handling is an essential task in just about every analysis. If one is working with minor compounds in a sample or trying to detect trace levels of the analytes, one of the aims of sample handling may be to increase the sensitivity of the analytical method. On the other hand, if one is working with a challenging matrix such as the kind found in biological samples, one of the aims is to increase the selectivity. However, quite often the aim is to increase both the selectivity and the sensitivity. This book provides good and representative examples of the necessity of valid sample handling and of the role of sample handling in the analytical method. The contributors to the book are leading Finnish scientists in the field of organic instrumental analytical chemistry. Some of them are also Repe's personal friends and former students from the University of Turku, Department of Biochemistry and Food Chemistry. Importantly, the authors all know Repe in one way or another and are well aware of his achievements in the field of analytical chemistry. The editorial team had a great time during the planning phase and during the "hard work editorial phase" of the book. For example, we came up with many ideas on how to publish the book. After many long discussions, we decided to have a limited edition as an "old school hard cover book" – and to acknowledge more modern ways of disseminating knowledge by publishing an internet version of the book on the webpages of the University of Turku. Downloading the book from the webpage for personal use is free of charge. We believe and hope that the book will be read with great interest by scientists working in the fascinating field of organic instrumental analytical chemistry. We decided to publish our book in English for two main reasons. First, we believe that in the near future, more and more teaching in Finnish universities will be delivered in English. To facilitate this process and encourage students to develop good language skills, we decided to publish the book in English. Secondly, we believe that the book will also interest scientists outside Finland – particularly in the other member states of the European Union. The editorial team thanks all the authors for their willingness to contribute to this book – and to adhere to the very strict schedule. We also want to thank the various individuals and enterprises who financially supported the book project. Without that support, it would not have been possible to publish the hardcover book.
Abstract:
Atherosclerosis is a life-long vascular inflammatory disease and the leading cause of death in Finland and other western societies. The development of atherosclerotic plaques is progressive; they form when lipids begin to accumulate in the vessel wall. This accumulation triggers the migration of inflammatory cells, which is a hallmark of vascular inflammation. Often, a plaque will become unstable, forming a vulnerable plaque that may rupture, causing thrombosis and, in the worst case, myocardial infarction or stroke. Identification of these vulnerable plaques before they rupture could save lives. At present, no appropriate non-invasive method for their identification exists in the clinic. The aim of this thesis was to evaluate novel positron emission tomography (PET) probes for the detection of vulnerable atherosclerotic plaques and to characterize two mouse models of atherosclerosis. These studies were performed using ex vivo and in vivo imaging modalities. The vulnerability of atherosclerotic plaques was evaluated as the expression of active inflammatory cells, namely macrophages. Age and the duration of the high-fat diet had a drastic impact on the development of atherosclerotic plaques in mice. In imaging of atherosclerosis, 6-month-old mice kept on a high-fat diet for 4 months showed mature, metabolically active atherosclerotic plaques. [18F]FDG and 68Ga accumulated in areas representative of vulnerable plaques. However, the slow clearance of 68Ga limits its use for plaque imaging. The newly synthesized [68Ga]DOTA-RGD and [18F]EF5 tracers demonstrated efficient uptake in plaques compared to the healthy vessel wall, but the pharmacokinetic properties of these tracers were not optimal in the models used. In conclusion, these studies resulted in the identification of new strategies for the assessment of plaque stability and of mouse models of atherosclerosis that could be used for plaque imaging. Of the probe panel used, [18F]FDG was the best tracer for plaque imaging. However, further studies are warranted to clarify the applicability of [18F]EF5 and [68Ga]DOTA-RGD for imaging of atherosclerosis with other experimental models.
Abstract:
The melanocortin peptides, including the melanocyte-stimulating hormones α-, β- and γ-MSH, are derived from the precursor peptide proopiomelanocortin and mediate their biological actions via five different melanocortin receptors, named MC1 to MC5. Melanocortins have been implicated in the central regulation of energy balance and cardiovascular functions, but their local effects in the vasculature, via as yet unidentified sites of action, and their therapeutic potential in major vascular pathologies remain unclear. Therefore, the main aim of this thesis was to characterise the role of melanocortins in circulatory regulation, and to investigate whether targeting the melanocortin system by pharmacological means could translate into therapeutic benefits in the treatment of cardiovascular diseases such as hypertension. In experiments designed to elucidate the local effects of α-MSH on vascular tone, it was found that α-MSH improved blood vessel relaxation via a nitric oxide (NO)-dependent mechanism without directly contracting or relaxing blood vessels. Furthermore, α-MSH was shown to regulate the expression and function of endothelial NO synthase in cultured human endothelial cells via melanocortin 1 receptors. In keeping with this vascular protective role, pharmacological treatment of mice with α-MSH analogues showed therapeutic efficacy in conditions associated with vascular dysfunction such as obesity. Furthermore, α-MSH analogues elicited marked diuretic and natriuretic responses, which, together with their vascular effects, seemed to provide protection against sodium retention and blood pressure elevation in experimental models of hypertension. In conclusion, the present results identify novel effects of melanocortins in the local control of vascular function, pointing to the potential future use of melanocortin analogues in the treatment of cardiovascular pathologies.
Abstract:
This paper discusses existing military capability models and proposes a comprehensive capability meta-model (CCMM) which unites the existing capability models into an integrated and hierarchical whole. The Zachman Framework for Enterprise Architecture is used as a structure for the CCMM. The CCMM takes into account the abstraction level, the primary area of application, stakeholders, intrinsic process, and life cycle considerations of each existing capability model, and shows how the models relate to each other. The validity of the CCMM was verified through a survey of subject matter experts. The results suggest that the CCMM is of practical value to various capability stakeholders in many ways, such as helping to improve communication between the different capability communities.
Abstract:
Views on poverty alleviation have shifted from seeing the poor as victims or as potential consumers to seeing them as gainers. Social businesses include microfinancing and microfranchising, which engage people at the bottom of the pyramid using business instead of charity. There are, however, social business firms that do not fit the existing social business model theory. These firms provide markets to poor producers and mix traditional, local craftsmanship with western design. Social business models evolve faster than the academic literature can study them, and this study contributes to filling that gap. The purpose of this Master's thesis is to develop the concept of social business as a poverty alleviation method in developing countries. It also aims 1) to describe the means for poverty alleviation in developing countries; 2) to introduce microbusiness as a social business model; and 3) to examine the challenges of microbusinesses. A qualitative case study is used as the research strategy and theme interviews as the data collection method. The empirical data is gathered from four interviews with Finnish or Finnish-owned firms that employ microbusiness – Mifuko, Tensira, Mangomaa and Tikau – and is supported with secondary data, including articles on the case companies. The results show that microbusiness is a valid new social business model that aims at poverty alleviation by engaging the poor at the bottom of the pyramid. It is possible to map the value proposition, value constellation, and economic and social profit equations of the case firms. Two major types of firms emerge from the results: the first consists of design-oriented firms that emphasize the quality and design of the products, and the second consists of bazaar-like firms whose product portfolio is less sophisticated and who promote the stories of the products rather than the design. All microbusiness firms provide markets, promote traditional handicrafts, form close relationships with their producers, and aim at enhancing lives through their businesses. Attitudes towards social businesses are sometimes negative, but this is changing for the better. In conclusion, microbusiness answers two different needs at the same time – consumers' need for ethical products and the social needs of the producers – but the social need is the ultimate reason why the entrepreneurs started their businesses. Microbusiness continues as a poverty alleviation tool that sees the poor as gainers; by providing them steady employment, microbusiness increases the poor's self-esteem and enables them to make a better living. The academic literature has not been able to offer enough alternative business models to cover all social businesses; the current study contributes to this by concluding that microbusiness is another social business model.
Abstract:
To manage foreign operations, companies must often send their employees on international assignments. Repatriating these expatriates can be difficult if they have been forgotten during their posting and their new experiences are not utilised. In addition to possible difficulties in organisational repatriation, the returnee can suffer from readjustment problems after a lengthy stay abroad has changed their habits and even identity. This thesis examines the repatriation experience of Finnish assignees returning from Russia. The purpose of the study is to understand how the repatriation experience influences their readjustment to work in Finland. This experience is influenced by many factors, including personal and situational changes, the repatriation process, job and organisational factors, and the individual's motives. The theoretical background of the study is founded on two models of repatriation adjustment. A refined, holistic theoretical framework for the study is created. It describes the formation of the repatriation experience and its importance for readjustment to work and retention. The qualitative research approach is suitable for this thesis, which examines the returnees' personal experiences and feelings: a qualitative case study aims to explain the phenomenon in depth and comprehensively. The data was collected in summer 2013 through semi-standardised interviews with eight Finnish repatriates who had returned from Russia within the last two years. The data was analysed by structuring the interview transcripts using template analysis. The results supported the earlier literature and suggest that re-entry remains a challenging phase for both the individual and the company. For some, adjusting to a new job was difficult for various reasons. The repatriates underwent personal change and development and felt it was for the better. Many repatriates criticised the company's repatriation process upon return. Finding a suitable return job was not straightforward; instead, the returnees had to be active in finding a new position. Many assignees had only modest career-related motives regarding the assignment and had realistic expectations about the return. Therefore, they were not especially surprised or dissatisfied when they were not actively offered positions or support by the company. The significance of motives stood out even more than the theory predicted. As predicted, motives are linked to the expectations of employees. Moreover, if employees are motivated to remain in the company, they can partly tolerate a negative repatriation experience. Despite the complexity of the return and readjustment, the assignment as a whole was seen as a rewarding experience by all participants.
Abstract:
Netnography has been studied from various aspects (e.g. definitions of netnography, applications of netnography, conducting procedure…) within different industrial contexts. Besides, there are many studies and much research on new product development from various perspectives, such as new product development models, management of new product development projects, or the interaction between customers and new product design. However, the connection and interaction between netnography and new product development have not yet been studied. This opens opportunities for the writer to study and explore unrevealed issues regarding the application of netnography in new product development. In terms of the relation between netnography and new product development, numerous matters need to be explored; for instance, the process of applying netnography so that it benefits new product development, the degree of involvement of netnography in the new product development process, or the elimination of useless information from netnography so that only crucial data is utilized. In this thesis, the writer focuses on exploring how netnography is applied in the new product development process, and what benefits netnography can contribute to the success of the project. The aims of this study are to understand how netnography is conducted for new product development purposes, and to analyse the contributions of netnography in the new product development process. To do so, a case study strategy is conducted with three case studies. The case studies were chosen based on many different criteria in order to select the most relevant cases. Eventually, the writer selected three case studies: the sunless tanning product project (HYVE), Listerine (NetBase), and Nivea's co-creation and netnography in black and white deodorant. The case study strategy applied in this thesis includes four steps: case selection, data collection, case study analysis, and generating the research outcomes from the analysis. This study of the contributions of netnography in the new product development process may be useful for readers in many ways. It offers fundamental knowledge of the netnography market research method and a basic understanding of the new product development process. Additionally, it emphasizes the differences between netnography and other market research methods in order to explain why many companies and market research agencies have recently utilized netnography in their market research projects. Furthermore, it highlights the contributions of netnography in the new product development process in order to indicate the importance of netnography in developing new products. Thus, the potential readers of the study include students, marketers, researchers, product developers, and business managers.
Abstract:
The goal of this thesis is to analyze the strengths and weaknesses of solar PV business models and to point out the key factors that affect the efficiency of a business model; the results are expected to help in creating new business strategies. Case study research methodology is chosen as the theoretical background to structure the design of the thesis, indicating how to choose the right research method and how to conduct a case study. The business model canvas is adopted as the tool for analyzing the case studies of SolarCity and Sungevity. The results are presented through a comparison between the case studies. Solar services and products, customer acquisition cost, intellectual resources and powerful sales channels are identified as the major factors for the TPO model.
Abstract:
The main goal of this study is to create a seamless chain of actions and a more detailed structure for the front end of innovation (FEI), in order to increase front end performance and ultimately to influence the renewal of companies. The main goal is achieved through a new concept: an integrated model of the early activities of FEI leading to the discovery of new elements of opportunities and the identification of new business and growth areas. The procedure offers one possible solution to a dynamic strategy formation process in the innovation development cycle. In this study the front end of innovation is positioned between strategy reviews and concept creation, with the needed procedures, tools, and frameworks. The starting point of the study is that the origins of innovation are not well enough understood. The study focuses attention on the early activities of FEI. These first activities are conceptualized in order to find successful innovation initiatives and strategic renewal agendas. A seamless chain of activities resulting in faster and more precise identification of the opportunities and growth areas available on markets and inside companies is needed. Three case studies were conducted in order to study company views on the available theory and to identify the first practical experiences and procedures at the beginning of the front end of innovation. Successful innovation requires a focus on renewal in both internal and external directions, and these should be carefully balanced for best results. Instead of an inside-out mode of action, the studied companies have a strong outside-in thinking mode and mainly co-develop their innovation initiatives in close proximity with customers; i.e., successful companies are an integral part of their customers' business and success. Companies have tailor-made innovation processes combined with their way of working, linked to their business goals and to the priorities of actual transformation needs. The result of this study is a new modular FEI platform which companies can configure against their actual business needs and drivers. This platform includes new elements of FEI, documented in an architecture that presents how the system components work together. The system is a conceptual approach drawing on theories of emergent strategy formation, opportunity identification and creation, the interpretation-analysis-experimentation triad, and present FEI theories. The platform includes new features compared to current models of FEI. It allows managers to better understand the importance of FEI in the whole innovation development stage, and FEI as a phase and procedure for discovering and implementing emergent strategy. An adaptable company rethinks and redirects strategy proactively from time to time. Different parts of the business model are changed to remove identified obstacles to growth and renewal, which opens avenues to find the right reforms for renewal.
Abstract:
Linguistic modelling is a rather new branch of mathematics that is still undergoing rapid development. It is closely related to fuzzy set theory and fuzzy logic, but knowledge and experience from other fields of mathematics, as well as from other fields of science including linguistics and the behavioral sciences, is also necessary to build appropriate mathematical models. The topic has received considerable attention as it provides tools for the mathematical representation of the most common means of human communication: natural language. Adding a natural language level to mathematical models can provide an interface between the mathematical representation of the modelled system and the user of the model, one that is sufficiently easy to use and understand yet conveys all the information necessary to avoid misinterpretations. It is, however, not a trivial task, and the link between the linguistic and computational levels of such models has to be established and maintained properly during the whole modelling process. In this thesis, we focus on the relationship between the linguistic and the mathematical level of decision support models. We discuss several important issues concerning the mathematical representation of the meaning of linguistic expressions, their transformation into the language of mathematics, and the retranslation of mathematical outputs back into natural language. In the first part of the thesis, our view of linguistic modelling for decision support is presented, and the main guidelines for building linguistic models for real-life decision support, which form the basis of our modelling methodology, are outlined. From the theoretical point of view, the issues of representing the meaning of linguistic terms, computing with these representations, and retranslating the results back to the linguistic level (linguistic approximation) are studied in this part of the thesis. We focus on the reasonability of operations with the meanings of linguistic terms, the correspondence between the linguistic and mathematical levels of the models, and the proper presentation of appropriate outputs. We also discuss several issues concerning the ethical aspects of decision support, particularly the loss of meaning due to the transformation of mathematical outputs into natural language and the issue of responsibility for the final decisions. In the second part, several case studies of real-life problems are presented. These provide background, necessary context and motivation for the mathematical results and models presented in this part. A linguistic decision support model for disaster management is presented here, formulated as a fuzzy linear programming problem, and a heuristic solution to it is proposed. Uncertainty of outputs, expert knowledge concerning disaster response practice, and the necessity of obtaining outputs that are easy to interpret (and available in very short time) are reflected in the design of the model. Saaty's analytic hierarchy process (AHP) is considered in two case studies: first in the context of the evaluation of works of art, where a weak consistency condition is introduced and an adaptation of AHP for large matrices of preference intensities is presented. The second AHP case study deals with the fuzzified version of AHP and its use for evaluation purposes, particularly the integration of peer review into the evaluation of R&D outputs.
In the context of HR management, we present a fuzzy rule based evaluation model (academic faculty evaluation is considered) constructed to provide outputs that do not require linguistic approximation and are easily transformed into graphical information. This is achieved by designing a specific form of fuzzy inference. Finally, the last case study is from the area of the humanities: psychological diagnostics is considered, and a linguistic fuzzy model for the interpretation of the outputs of multidimensional questionnaires is suggested. The issue of the quality of data in mathematical classification models is also studied here. A modification of the receiver operating characteristics (ROC) method is presented to reflect the variable quality of data instances in the validation set during classifier performance assessment. Twelve publications on which the author participated are appended as the third part of this thesis. These summarize the mathematical results and provide closer insight into the practical applications considered in the second part of the thesis.
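As an illustration of what "linguistic approximation" means in practice, the sketch below represents a linguistic scale by triangular fuzzy sets and retranslates a numeric model output into the best-fitting term; the term names and breakpoints are hypothetical, not those used in the thesis models.

```python
# Illustrative linguistic approximation with triangular fuzzy sets
# (term names and breakpoints are hypothetical, not from the thesis).
def triangular(a, b, c):
    """Return the membership function of a triangular fuzzy number (a, b, c)."""
    def mu(x):
        if x < a or x > c:
            return 0.0
        if x == b:
            return 1.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)
    return mu

# A linguistic scale for an evaluation score on [0, 1].
terms = {
    "poor":      triangular(0.0, 0.0, 0.4),
    "average":   triangular(0.2, 0.5, 0.8),
    "excellent": triangular(0.6, 1.0, 1.0),
}

def linguistic_approximation(x):
    """Retranslate a numeric output into the term with the highest membership."""
    return max(terms, key=lambda term: terms[term](x))

print(linguistic_approximation(0.72))  # "excellent" (0.30 vs 0.27 for "average")
```

The retranslation step is where meaning can be lost: 0.72 is reported as "excellent" even though its membership in "average" is nearly as high, which is exactly the kind of issue the thesis's discussion of linguistic approximation and responsibility addresses.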
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied in many different application areas, such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to the computation of the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form; hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, the extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on the available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear, non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; for instance, an inappropriate choice of importance distribution can lead to the failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ convergence of the particle filter with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, parameter estimation can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, an MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution. In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends highly on the chosen proposal distribution. A commonly used proposal distribution is Gaussian; in this kind of proposal, the covariance matrix must be well tuned, and adaptive MCMC methods can be used to tune it. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
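Purely as an illustration of the filtering method discussed above, the sketch below implements a bootstrap particle filter for a standard univariate nonlinear growth model; the dynamics, noise levels, and multinomial resampling are generic textbook choices, not the models or filters developed in the thesis.

```python
# Minimal bootstrap particle filter sketch for a 1-D nonlinear, non-Gaussian
# state space model (the model and noise levels are illustrative).
import numpy as np

rng = np.random.default_rng(0)

def f(x):            # state transition mean (hypothetical dynamics)
    return 0.5 * x + 25.0 * x / (1.0 + x**2)

def transition(x):   # bootstrap choice: importance distribution = prior
    return f(x) + rng.normal(0.0, 1.0, size=x.shape)

def log_likelihood(y, x):  # Gaussian observation noise, variance 0.5
    return -0.5 * (y - x**2 / 20.0) ** 2 / 0.5

def particle_filter(ys, n_particles=1000):
    x = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in ys:
        x = transition(x)                   # propagate particles
        logw = log_likelihood(y, x)         # weight by the likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))         # filtering mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)
        x = x[idx]                          # multinomial resampling
    return np.array(means)

# Synthetic run: observations simulated from the same (hypothetical) model.
xs = [0.0]
for _ in range(50):
    xs.append(f(xs[-1]) + rng.normal())
ys = np.array([x**2 / 20.0 + rng.normal(0.0, np.sqrt(0.5)) for x in xs[1:]])
print(particle_filter(ys)[:5])
```

With the bootstrap choice the importance distribution equals the transition prior; more informed proposals reduce weight degeneracy, which is exactly why the choice of importance distribution matters for the convergence results studied in the thesis.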
Abstract:
The purpose of this Master's thesis was to study business model development in the Finnish newspaper industry during the next ten years through scenario planning. The objective was to see how the business models will develop amidst the many changes in the industry, what factors are affecting the change, what the implications of these changes are for the players in the industry, and how the Finnish newspaper companies should evolve in order to succeed in the future. In this thesis business model change is studied based on all the elements of business models, as it was discovered that the industry too often focuses on changes in only a few of those elements, and a broader view can provide valuable information for the companies. The results revealed that the industry will be affected by many changes during the next ten years. Scenario planning provides a good tool for analyzing this change and for developing valuable options for businesses. After conducting a series of interviews and identifying the forces affecting the change, four different scenarios were developed, centered on the role that newspapers will take and the level at which they will provide content in the future. These scenarios indicated that there is a variety of options in how the business models may develop, and that companies should start making decisions proactively in order to succeed. As the business model elements are interdependent, changes made in other elements will affect the whole model, making these decisions about the role and level of content important for the companies. In the future, it is likely that the Finnish newspaper industry will include many different kinds of business models, some of which may be drastically different from the current ones and some of which may still be similar, but take better account of the new kind of media environment.
Abstract:
Alzheimer's disease (AD) is the most common form of dementia. Characteristic changes in an AD brain are the formation of β-amyloid protein (Aβ) plaques and neurofibrillary tangles, though other alterations in the brain have also been connected to AD. No cure is available for AD, and it is one of the leading causes of death among the elderly in developed countries. Liposomes are biocompatible and biodegradable spherical phospholipid bilayer vesicles that can enclose various compounds. Several functional groups can be attached to the surface of liposomes in order to achieve long-circulating, target-specific liposomes. Liposomes can be utilized as drug carriers and as vehicles for imaging agents. Positron emission tomography (PET) is a non-invasive imaging method for studying biological processes in living organisms. In this study, using nucleophilic 18F-labeling synthesis, various synthesis approaches and leaving groups for novel PET imaging tracers were developed to target AD pathology in the brain. The tracers were the thioflavin derivative [18F]flutemetamol, the curcumin derivative [18F]treg-curcumin, and functionalized [18F]nanoliposomes, all of which target Aβ in the AD brain. These tracers were evaluated using transgenic AD mouse models. In addition, an 18F-labeling synthesis was developed for a tracer targeting the S1P3 receptor. The chosen 18F-fluorination strategy had an effect on the radiochemical yield and specific activity of the tracers. [18F]Treg-curcumin and functionalized [18F]nanoliposomes had low uptake in the AD mouse brain, whereas [18F]flutemetamol exhibited the appropriate properties for preclinical Aβ imaging. All of these tracers can be utilized in studies of the pathology and treatment of AD and related diseases.
Abstract:
A new area of machine learning research called deep learning has moved machine learning closer to one of its original goals: artificial intelligence and a general learning algorithm. The key idea is to pretrain models in a completely unsupervised way and then fine-tune them for the task at hand using supervised learning. In this thesis, a general introduction to deep learning models and algorithms is given, and these methods are applied to facial keypoint detection. The task is to predict the positions of 15 keypoints on grayscale face images. Each predicted keypoint is specified by an (x, y) real-valued pair in the space of pixel indices. In the experiments, we pretrained deep belief networks (DBNs) and then performed discriminative fine-tuning. We varied the depth and size of the architecture, tested both deterministic and sampled hidden activations, and examined the effect of additional unlabeled data on pretraining. The experimental results show that our model provides better results than the publicly available benchmarks for the dataset.
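To make the pretraining idea concrete, the sketch below implements one greedy layer of DBN pretraining: a Bernoulli restricted Boltzmann machine trained with a single step of contrastive divergence (CD-1). The layer sizes, learning rate, and random stand-in batch are illustrative assumptions, not the thesis's actual configuration.

```python
# One greedy layer of DBN pretraining: a Bernoulli RBM trained with CD-1.
# Shapes and hyperparameters are illustrative, not those used in the thesis.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(v0, W, b, c, lr=0.1):
    """One CD-1 gradient step on a mini-batch v0 of binary visible vectors."""
    ph0 = sigmoid(v0 @ W + c)                    # P(h=1 | v0), positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                  # reconstruction P(v=1 | h0)
    ph1 = sigmoid(pv1 @ W + c)                   # negative phase
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n     # positive - negative statistics
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)

# Toy usage with a hypothetical batch standing in for binarised face images.
n_visible, n_hidden = 96 * 96, 500
W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
b = np.zeros(n_visible)
c = np.zeros(n_hidden)
batch = (rng.random((32, n_visible)) > 0.5).astype(float)
cd1_update(batch, W, b, c)
```

In a full DBN pipeline, each trained layer's hidden activations become the next layer's input, and the stacked weights then initialise a feed-forward network whose linear output layer (30 units for 15 (x, y) pairs) is fine-tuned with a regression loss.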