890 results for 780000 - Non-Oriented Research
Abstract:
Despite decades of research, therapeutic advances in non-small cell lung cancer (NSCLC) have progressed at a painstakingly slow rate, with few improvements in standard surgical resection for early-stage disease and chemotherapy or radiotherapy for patients with advanced disease. In the past 18 months, however, we seem to have reached an inflexion point: therapeutic advances centred on improvements in the understanding of patient selection, surgery undertaken through smaller incisions, identification of candidate mutations accompanied by the development of targeted anticancer treatments with a focus on personalised medicine, improvements to radiotherapy technology, the emergence of radiofrequency ablation (RFA), and last but by no means least, the recognition of palliative care as a therapeutic modality in its own right. The contributors to this review are a distinguished international panel of experts who highlight recent advances in each of the major disciplines.
Abstract:
Breakthrough technologies that now enable the sequencing of individual genomes will irreversibly modify the way diseases are diagnosed, predicted, prevented and treated. For these technologies to reach their full potential requires, upstream, access to high-quality biomedical data and samples from large numbers of properly informed and consenting individuals and, downstream, the possibility to transform the emerging knowledge into clinical utility. The Lausanne Institutional Biobank was designed as an integrated, highly versatile infrastructure to harness the power of these emerging technologies, catalyse the discovery and development of innovative therapeutics and biomarkers, and advance the field of personalised medicine. Described here are its rationale, design and governance, as well as parallel initiatives which have been launched locally to address the societal, ethical and technological issues associated with this new bio-resource. Since January 2013, inpatients admitted to the Lausanne CHUV University Hospital have been systematically invited to provide a general consent for the use of their biomedical data and samples for research, to complete a standardised questionnaire, to donate a 10-ml sample of blood for future DNA extraction and to be re-contacted for future clinical trials. Over the first 18 months of operation, 14,459 patients were contacted, and 11,051 agreed to participate in the study. This initial 18-month experience illustrates that a systematic hospital-based biobank is feasible; it shows strong engagement in research from the patient population in this university hospital setting, and the need for a broad, integrated approach if the future of medicine is to reach its full potential.
Abstract:
BACKGROUND: Non-communicable diseases (NCDs) are increasing worldwide. We hypothesize that environmental factors (including social adversity, diet, lack of physical activity and pollution) can become "embedded" in the biology of humans. We also hypothesize that this "embedding" partly occurs through epigenetic changes, i.e., durable changes in gene expression patterns. Our concern is that once such factors have a foundation in human biology, they can affect human health (including NCDs) over a long period of time and across generations. OBJECTIVES: To analyze how worldwide changes in movements of goods, persons and lifestyles (globalization) may affect the "epigenetic landscape" of populations and, through this, have an impact on NCDs. We provide examples of such changes and effects by discussing the potential epigenetic impact of socio-economic status, migration and diet, as well as the impact of environmental factors influencing trends in age at puberty. DISCUSSION: The study of durable changes in epigenetic patterns has the potential to influence policy and practice; for example, by enabling stratification of populations into those who could particularly benefit from early interventions to prevent NCDs, or by demonstrating mechanisms through which environmental factors influence disease risk, thus providing compelling evidence for policy makers, companies and civil society at large. The current debate on the '25 × 25 strategy', a goal of a 25% reduction in relative mortality from NCDs by 2025, makes the proposed approach even more timely. CONCLUSIONS: Epigenetic modifications related to globalization may contribute crucially to explaining current and future patterns of NCDs, and thus deserve attention from environmental researchers, public health experts, policy makers, and concerned citizens.
Abstract:
Bio-binders can be utilized as asphalt modifiers, extenders, and replacements for conventional asphalt in bituminous binders. The rheology results of Phase I of this project showed that the bio-binders tested performed similarly to conventional asphalt, except at low temperatures. Phase II of this project addresses this shortcoming and evaluates the Superpave performance of laboratory mixes produced with the enhanced bio-binders. The main objective of this research was to develop a bio-binder capable of replacing conventional asphalt in flexible pavements by incorporating ground tire rubber (GTR) into bio-oil derived from the fast pyrolysis of agricultural and forestry residues. The chemical compatibility of the new bio-binder with GTR was assessed, and the low-temperature performance of the bio-binders was enhanced by the use of GTR. The newly developed binder, which consisted of 80 percent conventional binder and 20 percent rubber-modified bio-oil (85 percent bio-oil with 15 percent GTR), was used to produce mixes at two different air void contents, 4 and 7 percent. The laboratory performance test results showed that the newly developed bio-binder mixes perform as well as or better than conventional asphalt mixes for fatigue cracking, rutting resistance, moisture sensitivity, and low-temperature cracking. These results need to be validated in field projects in order to demonstrate adequate performance for this innovative and sustainable technology for flexible pavements.
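As a small worked example of how the nested blend percentages above combine, here is a minimal sketch (the function name is hypothetical; the values are the ones quoted in the abstract) that expands them into overall mass fractions:

```python
# Minimal sketch: the final binder is 80% conventional asphalt and 20%
# rubber-modified bio-oil, where the modified bio-oil is itself 85% bio-oil
# and 15% ground tire rubber (GTR) by mass (values from the abstract).

def overall_fractions(conventional=0.80, modified=0.20, bio_oil=0.85, gtr=0.15):
    """Expand the nested blend percentages into overall mass fractions."""
    return {
        "conventional asphalt": conventional,
        "bio-oil": modified * bio_oil,   # 0.20 * 0.85 = 0.17
        "GTR": modified * gtr,           # 0.20 * 0.15 = 0.03
    }

fractions = overall_fractions()
assert abs(sum(fractions.values()) - 1.0) < 1e-9
print(fractions)  # ≈ {'conventional asphalt': 0.8, 'bio-oil': 0.17, 'GTR': 0.03}
```

Expanding the percentages this way makes clear that GTR amounts to only about 3 percent of the final binder by mass.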
Abstract:
Abstract: This thesis is devoted to the analysis, modelling and visualisation of spatially referenced environmental data using machine learning algorithms. Machine learning can be considered, broadly, as a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted to environmental data and spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, non-linear, robust and efficient modelling tools. They can solve classification, regression and probability density modelling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to geographical coordinates. Moreover, they are well suited to implementation as decision-support tools for environmental questions ranging from pattern recognition through modelling and prediction to automatic mapping. Their efficiency is comparable to geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software tools for the environmental sciences. The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organised maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are treated both according to the traditional geostatistical approach, with experimental variography, and according to the principles of machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations; it detects the presence of spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the application of the k-nearest-neighbours method, which is very simple and has excellent interpretation and visualisation properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, on which the GRNN significantly outperformed all other methods, particularly in emergency situations. The thesis consists of four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools: Machine Learning Office. This collection was developed over the last 15 years and has been used in teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to create a user-friendly, easy-to-use interface. Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modelling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it mainly concerns the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modelling tools. They can find solutions to classification, regression, and probability density modelling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modelling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis.
In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations; it helps to detect the presence of spatial patterns, at least those described by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for carrying out fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, the classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
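At its core, the GRNN highlighted above is Nadaraya-Watson kernel regression: each prediction is a Gaussian-kernel weighted average of the training targets, governed by a single smoothing parameter. A minimal NumPy sketch of the idea on toy spatial data (the data, field and sigma value are illustrative, not from the thesis):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """GRNN / Nadaraya-Watson kernel regression: each prediction is a
    Gaussian-weighted average of training targets; sigma is the single
    smoothing parameter, typically tuned by cross-validation."""
    # Squared Euclidean distances between every query and training point
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # kernel weights
    return (w @ y_train) / w.sum(axis=1)   # weighted average of targets

# Toy spatial interpolation: 200 scattered samples of a smooth noisy field
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))      # (x, y) coordinates
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
grid = np.array([[5.0, 5.0], [1.0, 2.0]])  # two query locations
print(grnn_predict(X, y, grid, sigma=0.5))
```

The appeal for automatic mapping is that only sigma needs tuning, which can be done without user interaction; extra geo-features are handled simply by adding columns to X.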
Abstract:
The use of quantum dots (QDs) in the area of fingermark detection is currently receiving a lot of attention in the forensic literature. Most research efforts have been devoted to cadmium telluride (CdTe) quantum dots, often applied as powders to the surfaces of interest. Both the use of cadmium and the nanometre size of these particles raise important health and safety issues. This paper proposes to replace CdTe QDs with zinc sulphide QDs doped with copper (ZnS:Cu) to address these issues. Copper-doped zinc sulphide QDs were successfully synthesized, characterized in terms of size and optical properties, and optimized for the detection of impressions left in blood, where CdTe QDs had proved to be efficient. Effectiveness of detection was assessed in comparison with CdTe QDs and Acid Yellow 7 (AY7, an effective blood reagent), using two series of depletive blood fingermarks from four donors prepared on four non-porous substrates, i.e. glass, transparent polypropylene, black polyethylene and aluminium foil. The marks were cut in half and processed separately with both reagents, leading to two comparison series (ZnS:Cu vs. CdTe, and ZnS:Cu vs. AY7). ZnS:Cu proved to be better than AY7 and at least as efficient as CdTe on most substrates. Consequently, copper-doped ZnS QDs constitute a valid substitute for cadmium-based QDs to detect blood marks on non-porous substrates and offer a safer alternative for routine use.
Abstract:
The institutional regimes framework has previously been applied to the institutional conditions that support or hinder the sustainability of housing stocks. This resource-based approach identifies the actors across different sectors that have an interest in housing, how they use housing, the mechanisms affecting their use (public policy, use rights, contracts, etc.) and the effects of their uses on the sustainability of housing within the context of the built environment. The potential of the institutional regimes framework is explored for its suitability to the many considerations of housing resilience. By identifying all the goods and services offered by the resource 'housing stock', researchers and decision-makers could improve the resilience of housing by better accounting for the ecosystem services used by housing, decreasing the vulnerability of housing to disturbances, and maximizing recovery and reorganization following a disturbance. The institutional regimes framework is found to be a promising tool for addressing housing resilience. Further questions are raised for translating this conceptual framework into a practical application underpinned with empirical data.
Abstract:
Diffusion MRI has evolved into an important clinical diagnostic and research tool. Although clinical routine mainly uses diffusion-weighted and tensor imaging approaches, Q-ball imaging and diffusion spectrum imaging techniques have become more widely available. They are frequently used in research-oriented investigations, in particular those aiming at measuring brain network connectivity. In this work, we assess the dependency of connectivity measurements on various diffusion encoding schemes in combination with appropriate data modeling. We process and compare the structural connection matrices computed from several diffusion encoding schemes, including diffusion tensor imaging, q-ball imaging and high angular resolution schemes such as diffusion spectrum imaging, with a publicly available processing pipeline for the reconstruction, tracking and visualization of diffusion MR imaging data. The results indicate that the high angular resolution schemes maximize the number of obtained connections when identical processing strategies are applied to the different diffusion schemes. Compared with conventional diffusion tensor imaging, the added connectivity is mainly found for pathways in the 50-100 mm range, corresponding to neighboring association fibers and long-range associative, striatal and commissural fiber pathways. The analysis of the major associative fiber tracts of the brain reveals striking differences between the applied diffusion schemes. More complex data modeling techniques (beyond the tensor model) are recommended 1) if the tracts of interest run through large fiber crossings such as the centrum semi-ovale, or 2) if non-dominant fiber populations, e.g. the neighboring association fibers, are the subject of investigation. An important finding of the study is that, because the ground-truth sensitivity and specificity are not known, results arising from different strategies in data reconstruction and/or tracking remain difficult to compare.
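The comparison above rests on counting streamlines per region pair. As a hedged illustration of that bookkeeping (the helper and labels are hypothetical, not the pipeline used in the study), a structural connection matrix can be accumulated from streamline endpoint labels like this:

```python
import numpy as np

def connection_matrix(streamlines, n_regions):
    """Build a structural connectivity matrix by counting the streamlines
    whose two endpoints fall in a given pair of cortical regions.
    `streamlines` is a list of (start_region, end_region) label pairs."""
    C = np.zeros((n_regions, n_regions), dtype=int)
    for a, b in streamlines:
        C[a, b] += 1
        C[b, a] += 1  # undirected connections: keep the matrix symmetric
    return C

# Toy example: 5 regions, a handful of tracked fibers
fibers = [(0, 1), (0, 1), (1, 3), (2, 4), (4, 2)]
C = connection_matrix(fibers, n_regions=5)
print(C)
print("non-zero connections:", (np.triu(C, 1) > 0).sum())
```

Under this counting, a scheme that resolves more crossings yields more endpoint pairs and hence more non-zero entries, which is the sense in which high angular resolution schemes "maximize the number of obtained connections".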
Abstract:
Summary: This dissertation explores how stakeholder dialogue influences corporate processes, and speculates about the potential of this phenomenon - particularly with actors, like non-governmental organizations (NGOs) and other representatives of civil society, which have received growing attention against a backdrop of increasing globalisation and which have often been cast in an adversarial light by firms - as a source of learning and a spark for innovation in the firm. The study is set within the context of the introduction of genetically-modified organisms (GMOs) in Europe. Its significance lies in the fact that scientific developments and new technologies are being generated at an unprecedented rate in an era where civil society is becoming more informed, more reflexive, and more active in facilitating or blocking such new developments, which could have the potential to trigger widespread changes in economies, attitudes, and lifestyles, and address global problems like poverty, hunger, climate change, and environmental degradation. In the 1990s, companies using biotechnology to develop and offer novel products began to experience increasing pressure from civil society to disclose information about the risks associated with the use of biotechnology and GMOs in particular. Although no harmful effects for humans or the environment have been factually demonstrated even to date (2008), this technology remains highly contested, and its introduction in Europe catalysed major companies to invest significant financial and human resources in stakeholder dialogue. A relatively new phenomenon at the time, with little theoretical backing, dialogue was seen to reflect a move towards greater engagement with stakeholders, commonly defined as those "individuals or groups with which business interacts who have a 'stake', or vested interest in the firm" (Carroll, 1993:22), with whom firms are seen to be inextricably embedded (Andriof & Waddock, 2002). Regarding the organisation of this dissertation, Chapter 1 (Introduction) describes the context of the study and elaborates its significance for academics and business practitioners as an empirical work embedded in a sector at the heart of the debate on corporate social responsibility (CSR). Chapter 2 (Literature Review) traces the roots and evolution of CSR, drawing on Stakeholder Theory, Institutional Theory, Resource Dependence Theory, and Organisational Learning to establish what has already been developed in the literature regarding the stakeholder concept, motivations for engagement with stakeholders, the corporate response to external constituencies, and outcomes for the firm in terms of organisational learning and change. I used this review of the literature to guide my inquiry and to develop the key constructs through which I viewed the empirical data that was gathered. In this respect, concepts related to how the firm views itself (as a victim, follower, leader), how stakeholders are viewed (as a source of pressure and/or threat; as an asset: current and future), corporate responses (in the form of buffering, bridging, boundary redefinition), and types of organisational learning (single-loop, double-loop, triple-loop) and change (first order, second order, third order) were particularly important in building the key constructs of the conceptual model that emerged from the analysis of the data.
Chapter 3 (Methodology) describes the methodology that was used to conduct the study, affirms the appropriateness of the case study method in addressing the research question, and describes the procedures for collecting and analysing the data. Data collection took place in two phases - extending from August 1999 to October 2000, and from May to December 2001 - which functioned as 'snapshots' in time of the three companies under study. The data was systematically analysed and coded using ATLAS/ti, a qualitative data analysis tool, which enabled me to sort, organise, and reduce the data into a manageable form. Chapter 4 (Data Analysis) contains the three cases that were developed (anonymised as Pioneer, Helvetica, and Viking). Each case is presented in its entirety (constituting a 'within-case' analysis), followed by a 'cross-case' analysis, backed up by extensive verbatim evidence. Chapter 5 presents the research findings, outlines the study's limitations, describes managerial implications, and offers suggestions for where more research could elaborate the conceptual model developed through this study, as well as suggestions for additional research in areas where managerial implications were outlined. References and Appendices are included at the end. This dissertation results in the construction and description of a conceptual model, grounded in the empirical data and tied to existing literature, which portrays a set of elements and relationships deemed important for understanding the impact of stakeholder engagement for firms in terms of organisational learning and change. This model suggests that corporate perceptions about the nature of stakeholders influence the perceived value of stakeholder contributions. When stakeholders are primarily viewed as a source of pressure or threat, firms tend to adopt a reactive/defensive posture in an effort to manage stakeholders and protect the firm from sources of outside pressure - behaviour consistent with Resource Dependence Theory, which suggests that firms try to gain control over external threats by focussing on the relevant stakeholders on whom they depend for critical resources, and try to reverse the control potentially exerted by external constituencies by trying to influence and manipulate these valuable stakeholders. In situations where stakeholders are viewed as a current strategic asset, firms tend to adopt a proactive/offensive posture in an effort to tap stakeholder contributions and connect the organisation to its environment - behaviour consistent with Institutional Theory, which suggests that firms try to ensure their continuing licence to operate by internalising external expectations. In instances where stakeholders are viewed as a source of future value, firms tend to adopt an interactive/innovative posture in an effort to reduce or widen the embedded system and bring stakeholders into systems of innovation and feedback - behaviour consistent with the literature on Organisational Learning, which suggests that firms can learn how to optimize their performance as they develop systems and structures that are more adaptable and responsive to change. The conceptual model moreover suggests that the perceived value of stakeholder contributions drives corporate aims for engagement, which can be usefully categorised as dialogue intentions spanning a continuum running from low-level to high-level to very-high-level.
This study suggests that activities aimed at disarming critical stakeholders ('manipulation'), providing guidance and correcting misinformation ('education'), being transparent about corporate activities and policies ('information'), alleviating stakeholder concerns ('placation'), and accessing stakeholder opinion ('consultation') represent low-level dialogue intentions and are experienced by stakeholders as asymmetrical, persuasive, compliance-gaining activities that are not in line with 'true' dialogue. This study also finds evidence that activities aimed at redistributing power ('partnership'), involving stakeholders in internal corporate processes ('participation'), and demonstrating corporate responsibility ('stewardship') reflect high-level dialogue intentions. This study additionally finds evidence that building and sustaining high-quality, trusted relationships which can meaningfully influence organisational policies inclines a firm towards the type of interactive, proactive processes that underpin the development of sustainable corporate strategies. Dialogue intentions are related to the type of corporate response: low-level intentions can lead to buffering strategies; high-level intentions can underpin bridging strategies; very-high-level intentions can incline a firm towards boundary redefinition. The nature of the corporate response (which encapsulates a firm's posture towards stakeholders, demonstrated by the level of dialogue intention and the firm's strategy for dealing with stakeholders) favours the type of learning and change experienced by the organisation. This study indicates that buffering strategies, where the firm attempts to protect itself against external influences and carry out its existing strategy, typically lead to single-loop learning, whereby the firm learns how to perform better within its existing paradigm and, at most, improves the performance of the established system - an outcome associated with first-order change. Bridging responses, where the firm adapts organisational activities to meet external expectations, typically lead a firm to acquire new behavioural capacities characteristic of double-loop learning, whereby insights and understanding are uncovered that are fundamentally different from existing knowledge and where stakeholders are brought into problem-solving conversations that enable them to influence corporate decision-making to address shortcomings in the system - an outcome associated with second-order change. Boundary redefinition suggests that the firm engages in triple-loop learning, where the firm changes relations with stakeholders in profound ways, considers problems from a whole-system perspective, and examines the deep structures that sustain the system, producing innovation to address chronic problems and develop new opportunities - an outcome associated with third-order change. This study supports earlier theoretical and empirical studies {e.g. Weick's (1979, 1985) work on self-enactment; Maitlis & Lawrence's (2007), Maitlis' (2005) and Weick et al.'s (2005) work on sensegiving and sensemaking in organisations; Brickson's (2005, 2007) and Scott & Lane's (2000) work on organisational identity orientation}, which indicate that corporate self-perception is a key underlying factor driving the dynamics of organisational learning and change.
Such theorizing has important implications for managerial practice; namely, that a company which perceives itself as a 'victim' may be highly inclined to view stakeholders as a source of negative influence, and would therefore be potentially unable to benefit from the positive influence of engagement. Such a self-perception can blind the firm from seeing stakeholders in a more positive, contributing light, which suggests that such firms may not be inclined to embrace external sources of innovation and learning, as they are focussed on protecting the firm against disturbing environmental influences (through buffering), and remain more likely to perform better within an existing paradigm (single-loop learning). By contrast, a company that perceives itself as a 'leader' may be highly inclined to view stakeholders as a source of positive influence. On the downside, such a firm might have difficulty distinguishing when stakeholder contributions are less pertinent, as it is deliberately more open to elements in its operating environment (including stakeholders) as potential sources of learning and change, and is oriented towards creating space for fundamental change (through boundary redefinition), opening issues to entirely new ways of thinking and addressing issues from a whole-system perspective. A significant implication of this study is that potentially only those companies that see themselves as leaders are ultimately able to tap the innovation potential of stakeholder dialogue.
Abstract:
In this work, a graphical remote-operation user interface was implemented for the mobile robot used for research at the Laboratory of Computer Science of the Department of Information Technology at Lappeenranta University of Technology. The motivation for the work is extensibility, which the existing closed-source user interface cannot offer. The most essential part of the work is the architectural separation, implemented with object-oriented programming techniques, of the robot's data model from its graphical presentation. In addition, the theory of remote user interfaces for mobile robots is briefly reviewed, along with the suitability of WLAN technology for implementing the connection between the robot and the user interface.
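The model/presentation separation described above is commonly realised with the observer pattern; a minimal sketch of that separation (all class and method names are hypothetical, not the thesis code):

```python
# Minimal observer-pattern sketch: the data model knows nothing about how it
# is displayed; views subscribe and are notified when the state changes.

class RobotModel:
    """Holds robot state; independent of any graphical presentation."""
    def __init__(self):
        self._observers = []
        self.pose = (0.0, 0.0, 0.0)  # x, y, heading

    def subscribe(self, callback):
        self._observers.append(callback)

    def update_pose(self, pose):
        self.pose = pose
        for notify in self._observers:  # push the change to all views
            notify(pose)

class ConsoleView:
    """One possible presentation; a GUI widget would subscribe the same way."""
    def __init__(self, model):
        model.subscribe(self.render)

    def render(self, pose):
        print(f"robot at x={pose[0]:.2f} y={pose[1]:.2f} heading={pose[2]:.2f}")

model = RobotModel()
view = ConsoleView(model)
model.update_pose((1.5, 2.0, 0.3))  # the view refreshes via its subscription
```

The payoff for extensibility is that new presentations (a map widget, a network feed over WLAN) can be added by subscribing to the same model, without touching the model code.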
Analysing the competitive advantage of an Internet-based marketing research company starting in Finland
Abstract:
The purpose of this master's thesis was to analyse the competitive advantage of an Internet-based marketing research company from a competitive-strategy perspective. First, the Internet panel was compared with the most widely used marketing research method, the telephone interview. Secondly, fourteen potential clients were interviewed personally. The intention was to find out what the potential clients think about Zapera Finland Ltd and what kind of competitive strategy could be chosen considering costs, product differentiation, competition, research method, segmentation of the business line and substitution. Finally, the interviews were analysed and some strategic suggestions were made based on the competitive advantage(s). The conclusion was that Zapera Finland Ltd can choose a competitive strategy based on both cost advantage and product differentiation in a narrow competitive scope.
Abstract:
Open innovation and the efficient exploitation of innovations are becoming important parts of companies' R&D processes. The purpose of this master's thesis is to create a framework for the more efficient management, within a research organisation, of technologies that do not belong to the company's core business. The constructive framework is built on theories of intangible asset management and portfolio management. In addition, the work defines tools and techniques for evaluating surplus technologies. The new surplus-technology portfolio can be exploited as a search engine, an idea bank, a communication tool or a technology marketplace. Its management consists of documenting the information in the system, evaluating the technologies, and updating and maintaining the portfolio.
Abstract:
BACKGROUND: Workers with persistent disabilities after orthopaedic trauma may need occupational rehabilitation. Despite various risk profiles for non-return-to-work (non-RTW), no predictive model is available. Moreover, injured workers may have various origins (immigrant workers), which may affect either their return to work or their eligibility for research purposes. The aim of this study was to develop and validate a predictive model that estimates the likelihood of non-RTW after occupational rehabilitation using predictors that do not rely on the worker's background. METHODS: Prospective cohort study (3177 participants: native (51%) and immigrant workers (49%)) with two samples: a) a development sample with patients from 2004 to 2007 for the Full and Reduced Models; b) an external validation of the Reduced Model with patients from 2008 to March 2010. We collected patients' data and biopsychosocial complexity with an observer-rated interview (INTERMED). Non-RTW was assessed two years after discharge from rehabilitation. Discrimination was assessed by the area under the receiver operating characteristic curve (AUC) and calibration was evaluated with a calibration plot. The model was reduced with random forests. RESULTS: At 2 years, non-RTW status was known for 2462 patients (77.5% of the total sample). The prevalence of non-RTW was 50%. The full model (36 items) and the reduced model (19 items) had acceptable discrimination performance (AUC 0.75, 95% CI 0.72 to 0.78 and 0.74, 95% CI 0.71 to 0.76, respectively) and good calibration. For the validation model, the discrimination performance was acceptable (AUC 0.73; 95% CI 0.70 to 0.77) and calibration was also adequate. CONCLUSIONS: Non-RTW may be predicted with a simple model constructed with variables independent of the patient's education and language fluency. This model is useful for all kinds of trauma in order to adjust for case mix, and it is applicable to vulnerable populations such as immigrant workers.
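As a hedged sketch of the two reported validation checks, discrimination (AUC) and calibration, here is how they can be computed with scikit-learn on synthetic stand-in data (the data, the 19-predictor shape and the use of a random forest as the classifier are illustrative assumptions, not the study's actual variables or modelling pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the cohort: 19 predictors, binary non-RTW outcome
rng = np.random.default_rng(0)
X = rng.standard_normal((2462, 19))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(2462) > 0).astype(int)

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_dev, y_dev)

p = model.predict_proba(X_val)[:, 1]
print("AUC:", roc_auc_score(y_val, p))             # discrimination
frac_pos, mean_pred = calibration_curve(y_val, p, n_bins=10)
print("calibration bins (observed vs predicted):")
print(np.c_[frac_pos, mean_pred])                  # well calibrated: columns match
```

An AUC around 0.73-0.75, as reported, means a randomly chosen non-RTW patient receives a higher predicted risk than a randomly chosen RTW patient about three times out of four; the calibration bins check that predicted probabilities match observed frequencies.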
Abstract:
Despite its importance in our everyday lives, some properties of water remain unexplained. The study of the interactions between water and organic particles occupies research groups all over the world and is far from finished. In my work I have tried to understand, at the molecular level, these interactions that are important for life. For this I used a simple model of water to describe aqueous solutions of different particles. Recently, liquid water has been described as a structure formed by a random network of hydrogen bonds. When a hydrophobic particle is introduced into this structure at low temperature, some hydrogen bonds are broken, which is energetically unfavourable. The water molecules then arrange themselves around the particle, forming a cage that allows even stronger hydrogen bonds (between water molecules) to be recovered: the particles are then soluble in water. At higher temperatures, the thermal agitation of the molecules becomes important and breaks the hydrogen bonds. The dissolution of the particles then becomes energetically unfavourable, and the particles separate from the water, forming aggregates that minimise their surface exposed to water. At very high temperature, however, entropic effects become so strong that the particles mix again with the water molecules. Using a model based on these changes in the hydrogen-bond structure, I was able to reproduce the main phenomena linked to hydrophobicity. I found a two-phase coexistence region between the lower and upper critical solution temperatures, in which the hydrophobic particles aggregate. Outside this region, the particles are dissolved in water. I showed that the hydrophobic interaction is described by a model that takes into account only the changes in the structure of liquid water in the presence of a hydrophobic particle, rather than direct interactions between the particles. Encouraged by these promising results, I studied aqueous solutions of hydrophobic particles in the presence of kosmotropic and chaotropic co-solvents. These are substances that stabilise or destabilise the aggregates of hydrophobic particles. The presence of these substances can be included in the model by describing their effect on the structure of water. I was able to reproduce the elevated concentration of chaotropic co-solvents in the immediate vicinity of the particle, and the inverse effect for kosmotropic co-solvents. This change in co-solvent concentration near hydrophobic particles is the main cause of its effect on the solubility of the hydrophobic particles. I showed that the adapted model correctly predicts the implicit effects of co-solvents on the many-body interactions between hydrophobic particles. Furthermore, I extended the model to the description of amphiphilic particles such as lipids. I found the formation of different types of micelle depending on the distribution of hydrophobic regions on the particle surface. Hydrophobicity also remains a controversial subject in protein science. I defined a new hydrophobicity scale for the amino acids that form proteins, based on their water-exposed surfaces in native proteins.
This scale allows a better comparison between experiments and theoretical results. The model developed in my work thus contributes to a better understanding of aqueous solutions of hydrophobic particles. I believe that the analytical and numerical results obtained partly clarify the physical processes underlying the hydrophobic interaction.

Despite the importance of water in our daily lives, some of its properties remain unexplained. Indeed, the interactions of water with organic particles are investigated in research groups all over the world, but controversy still surrounds many aspects of their description. In my work I have tried to understand these interactions on a molecular level using both analytical and numerical methods. Recent investigations describe liquid water as a random network formed by hydrogen bonds. The insertion of a hydrophobic particle at low temperature breaks some of the hydrogen bonds, which is energetically unfavorable. The water molecules, however, rearrange in a cage-like structure around the solute particle. Even stronger hydrogen bonds are formed between water molecules, and thus the solute particles are soluble. At higher temperatures, this strict ordering is disrupted by thermal movements, and the dissolution of particles becomes unfavorable. They minimize their exposed surface to water by aggregating. At even higher temperatures, entropy effects become dominant and water and solute particles mix again. Using a model based on these changes in water structure I have reproduced the essential phenomena connected to hydrophobicity. These include an upper and a lower critical solution temperature, which define the temperature and density ranges in which aggregation occurs. Outside of this region the solute particles are soluble in water. Because I was able to demonstrate that the simple mixture model implicitly contains many-body interactions between the solute molecules, I believe the study contributes an important advance in the qualitative understanding of the hydrophobic effect. I have also studied the aggregation of hydrophobic particles in aqueous solutions in the presence of cosolvents. Here I have demonstrated that the important features of the destabilizing effect of chaotropic cosolvents on hydrophobic aggregates may be described within the same two-state model, with adaptations to focus on the ability of such substances to alter the structure of water. The relevant phenomena include a significant enhancement of the solubility of non-polar solute particles and preferential binding of chaotropic substances to solute molecules. In a similar fashion, I have analyzed the stabilizing effect of kosmotropic cosolvents in these solutions. Including the ability of kosmotropic substances to enhance the structure of liquid water leads to reduced solubility, a larger aggregation regime and the preferential exclusion of the cosolvent from the hydration shell of hydrophobic solute particles. I have further adapted the MLG model to include the solvation of amphiphilic solute particles in water by allowing different distributions of hydrophobic regions at the molecular surface; I have found aggregation of the amphiphiles and the formation of various types of micelle as a function of the hydrophobicity pattern. I have demonstrated that certain features of micelle formation may be reproduced by the adapted model to describe alterations of water structure near different surface regions of the dissolved amphiphiles.
Hydrophobicity remains a controversial quantity also in protein science. Based on the surface exposure of the 20 amino acids in native proteins, I have defined a new hydrophobicity scale, which may lead to an improvement in the comparison of experimental data with the results from theoretical HP models. Overall, I have shown that the primary features of the hydrophobic interaction in aqueous solutions may be captured within a model which focuses on alterations in water structure around non-polar solute particles. The results obtained within this model may illuminate the processes underlying the hydrophobic interaction.

Life on our planet began in water and could not exist without it: the cells of animals and plants contain up to 95% water. Despite its importance in our everyday lives, some properties of water remain unexplained. In particular, the study of the interactions between water and organic particles occupies research groups all over the world and is far from finished. In my work I have tried to understand, at the molecular level, these interactions that are important for life. For this I used a simple model of water to describe aqueous solutions of different particles. Although water is generally a good solvent, a large group of molecules, called hydrophobic molecules (from the Greek "hydro" = "water" and "phobia" = "fear"), are not easily soluble in water. These hydrophobic particles try to avoid contact with water, and therefore form aggregates to minimize their surface exposed to water. This force between the particles is called the hydrophobic interaction, and the physical mechanisms that lead to these interactions are not well understood at present. In my study I described the effect of hydrophobic particles on liquid water. The objective was to clarify the mechanism of the hydrophobic interaction, which is fundamental for the formation of membranes and the functioning of the biological processes in our body. Recently, liquid water has been described as a random network formed by hydrogen bonds. Introducing a hydrophobic particle into this structure destroys some hydrogen bonds, while the water molecules arrange themselves around the particle, forming a cage that allows even stronger hydrogen bonds (between water molecules) to be recovered: the particles are then soluble in water. At higher temperatures, the thermal agitation of the molecules becomes important and breaks the cage structure around the hydrophobic particles. The dissolution of the particles then becomes unfavorable, and the particles separate from the water, forming two phases. At very high temperature, the thermal motions in the system become so strong that the particles mix again with the water molecules. Using a model that describes the system in terms of restructuring in liquid water, I was able to reproduce the physical phenomena linked to hydrophobicity. I showed that the hydrophobic interactions between several particles can be expressed in a model that takes into account only the hydrogen bonds between the water molecules. Encouraged by these promising results, I included in my model substances frequently used to stabilize or destabilize aqueous solutions of hydrophobic particles.
I was able to reproduce the effects due to the presence of these substances. In addition, I could describe the formation of micelles by amphiphilic particles such as lipids, whose surface is partly hydrophobic and partly hydrophilic ("hydro-phile" = "water-loving"), as well as the folding of proteins driven by hydrophobicity, which guarantees the correct functioning of the biological processes in our body. In my future studies I will continue to investigate aqueous solutions of different particles using the techniques acquired during my thesis work, trying to understand the physical properties of the liquid most important to our life: water.
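To make the two-state picture concrete, here is a minimal numerical sketch (illustrative energies and degeneracies, units with k_B = 1; not the fitted MLG model of the thesis) of how shell water next to a hydrophobic particle can be favourable at low temperature yet costly at higher temperature:

```python
import numpy as np

# Two-state picture: each water cell is either ordered (strong hydrogen bonds,
# few configurations) or disordered (weaker bonds, many configurations).
# Shell water next to a hydrophobic particle has an even stronger ordered
# state but even fewer configurations. All parameters are illustrative.

def cell_free_energy(E_ord, q_ord, E_dis, q_dis, T):
    """Free energy of a two-state cell, f = -T ln(q_ord e^{-E_ord/T} + q_dis e^{-E_dis/T})."""
    return -T * np.log(q_ord * np.exp(-E_ord / T) + q_dis * np.exp(-E_dis / T))

T = np.linspace(0.05, 2.0, 400)
f_bulk = cell_free_energy(E_ord=-2.0, q_ord=10.0, E_dis=-1.0, q_dis=40.0, T=T)
f_shell = cell_free_energy(E_ord=-2.4, q_ord=3.0, E_dis=-0.8, q_dis=40.0, T=T)

df = f_shell - f_bulk  # < 0: shell water favourable (soluble); > 0: costly (aggregation)
crossing = np.where(np.diff(np.sign(df)) != 0)[0]
print("solvation penalty changes sign near T =", T[crossing])
```

In this toy version the solvation penalty changes sign at a single crossover temperature, giving the low-temperature solubility and the onset of aggregation; the re-mixing at very high temperature described in the abstract is an entropy effect of the solute particles themselves, which this single-cell sketch does not include.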