Abstract:
PURPOSE: To improve the risk stratification of patients with rhabdomyosarcoma (RMS) through the use of clinical and molecular biologic data. PATIENTS AND METHODS: Two independent gene-expression profiling data sets, of 124 and 101 patients with RMS, were used to derive prognostic gene signatures in a meta-analysis. These and a previously published metagene signature were evaluated using cross-validation analyses. A combined clinical and molecular risk-stratification scheme that incorporated the PAX3/FOXO1 fusion gene status was derived from 287 patients with RMS and evaluated. RESULTS: We showed that our prognostic gene-expression signature and the previously published one performed well, with reproducible and significant effects. However, their effect was reduced when cross-validated or tested in independent data, and they did not add prognostic information beyond the fusion gene status, which is simpler to assay. Among nonmetastatic patients, PAX3/FOXO1-positive patients had a significantly poorer outcome than both alveolar-negative and PAX7/FOXO1-positive patients. Furthermore, a new clinicomolecular risk score that incorporated fusion gene status (negative, PAX3/FOXO1 positive, or PAX7/FOXO1 positive), Intergroup Rhabdomyosarcoma Study TNM stage, and age showed a significant increase in performance over the current risk-stratification scheme. CONCLUSION: Gene signatures can improve the current stratification of patients with RMS but will require complex assays to be developed and extensive validation before clinical application. Most of their prognostic value was encapsulated by the fusion gene status. A continuous risk score derived from the combination of clinical parameters with the presence or absence of PAX3/FOXO1 represents a robust approach to improving current risk-adapted therapy for RMS.
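As an illustration of how such a continuous clinicomolecular risk score can be formed, the sketch below computes the linear predictor of a Cox-type model from fusion-gene status, IRS TNM stage and an age indicator. All coefficient values and the covariate coding are hypothetical placeholders for illustration, not the model published in the study.

    # Illustrative sketch only: a continuous risk score as the linear
    # predictor of a Cox-type model. All weights below are hypothetical,
    # not the coefficients estimated in the study.
    BETA = {
        "pax3_foxo1": 1.1,  # PAX3/FOXO1-positive indicator (assumed weight)
        "pax7_foxo1": 0.2,  # PAX7/FOXO1-positive indicator (assumed weight)
        "tnm_stage":  0.4,  # per IRS TNM stage (assumed weight)
        "age_gt10":   0.5,  # age > 10 years indicator (assumed weight)
    }

    def risk_score(**covariates):
        """Linear predictor: higher score = poorer predicted outcome."""
        return sum(BETA[name] * value for name, value in covariates.items())

    # Fusion-negative, stage 1, young vs PAX3/FOXO1-positive, stage 4, older:
    print(risk_score(pax3_foxo1=0, pax7_foxo1=0, tnm_stage=1, age_gt10=0))
    print(risk_score(pax3_foxo1=1, pax7_foxo1=0, tnm_stage=4, age_gt10=1))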
Abstract:
VALOSADE is a research project of Professor Anita Lukka's VALORE research team at Lappeenranta University of Technology, and it is part of the ELO technology program of Tekes. SMILE is one of four subprojects of VALOSADE. The SMILE study focuses on the case of a company network composed of small and micro-sized mechanical maintenance service providers and forest industry companies as large-scale customers. The basic theme of the SMILE study is communication and ebusiness in supply and demand networks. The aim of the study is to develop an ebusiness strategy, an ebusiness model and e-processes among the local SME service providers and, on the other hand, between the local service provider network and the forest industry customers in a maintenance and operations service business. A literature review, interviews and benchmarking are used as research methods in this qualitative case study. The first SMILE report, 'Ebusiness between Global Company and Its Local SME Supplier Network', created background for the SMILE study by examining general trends of ebusiness in the supply chains and networks of different industries. This second phase of the study concentrates on the case network background, such as business relationships, information systems and business objectives; the core processes in the maintenance and operations service network; development needs in communication among the network participants; and ICT solutions that respond to needs in a changing environment. In the theory part of the report, different ebusiness models and frameworks are introduced. These models and frameworks are compared with the empirical case data, and from that analysis the recommendations for the development of the network information system are derived. In process industries such as the forest industry, it is crucial to achieve a high level of operational efficiency and reliability, which places great requirements on maintenance and operations. Therefore, partnerships or strategic alliances are needed between the network participants. In partnerships and alliances, deep communication is important, and therefore the information systems in the network are also critical. Communication, coordination and collaboration will increase in the case network in the future, because network resources must be optimised to improve the competitive capability of the forest industry customers and the efficiency of their service providers. At present, ebusiness systems are not common in this maintenance network. A network information system between the forest industry customers and their local service providers is actually the only genuine network information system in this total network; however, its utilisation has been quite insignificant, and the current system does not add enough value either to the customers or to the local service providers. At present, the network information system is an infomediary that shares static information with the network partners. It should instead be a transaction intermediary, which integrates the internal processes of the network companies; a network information system, which provides common standardised processes for the local service providers; and an infomediary, which shares static and dynamic information at the right time, with the right partner, at the right cost, in the right format and of the right quality. This study provides recommendations on how to develop this system in the future to add value to the network companies.
Ebusiness scenarios, vision, objectives, strategies, application architecture, ebusiness model, core processes and development strategy must be considered when the network information system is developed in the next step. The core processes in the case network are demand/capacity management, customer/supplier relationship management, service delivery management, knowledge management and cash flow management. Most benefits from ebusiness solutions come from making operational-level processes electronic, such as service delivery management and cash flow management.
Abstract:
Conservation biology is commonly associated with the protection of small, endangered populations. Nevertheless, large or potentially large populations may also need management to prevent the negative effects of overpopulation. As there are both qualitative and quantitative differences between protecting small populations and controlling large ones, distinct models and methods are needed. The aim of this work was to develop predictive models of large population dynamics, as well as software tools to estimate the parameters of these models and to test management scenarios. The Alpine ibex (Capra ibex ibex) - which has expanded spectacularly since its reintroduction in Switzerland at the beginning of the 20th century - was used as the paradigm species. This task was achieved in three steps. First, a local population dynamics model was developed specifically for the ibex: the underlying age- and sex-structured model is based on a Leslie matrix, to which density dependence, environmental stochasticity and regulatory culling were added. This model was implemented in a management-support software package - named SIM-Ibex - allowing census data maintenance, automated parameter estimation, and the tuning and simulation of culling strategies. However, population dynamics are driven not only by demographic factors but also by dispersal and the colonisation of new areas, so habitat suitability and dispersal obstacles also had to be modelled. A software package - named Biomapper - was therefore developed. Its central module is based on the Ecological Niche Factor Analysis (ENFA), whose principle is to compute marginality and specialisation factors of the ecological niche from a set of environmental predictors and species presence data. All Biomapper modules are linked to Geographic Information Systems (GIS); they cover all operations of data importation, predictor preparation, ENFA and habitat suitability map computation, and validation and further processing of the results; one module also allows the mapping of dispersal barriers and corridors. The application domain of ENFA was then explored by means of a simulated (virtual) species distribution. It was compared with a commonly used habitat suitability method, the Generalised Linear Model (GLM), and proved particularly well suited to spreading or cryptic species. Demographic and landscape information was finally merged into a global model. To cope with landscape realism and the technical constraints of modelling large populations, a cellular automaton approach was chosen: the study area is modelled by a lattice of hexagonal cells, each characterised by a few fixed properties - a carrying capacity and six impermeability rates quantifying exchanges between adjacent cells - and one variable, the population density. The latter varies according to local reproduction, survival and dispersal dynamics, under the influence of density dependence and stochasticity. A program - named HexaSpace - was developed to fulfil two functions: 1° calibrating the automaton on the basis of population dynamics models (e.g., computed by SIM-Ibex) and a habitat suitability map (e.g., computed by Biomapper); 2° running simulations. It allows the study of the spread of an invading species across a complex landscape made of areas of varying suitability and dispersal barriers. This model was applied to the history of the ibex reintroduction in the Bernese Alps (Switzerland). SIM-Ibex is now used by governmental wildlife managers to prepare and verify culling plans. Biomapper has been applied to several species (both plants and animals) around the world. Likewise, whilst HexaSpace was originally designed for terrestrial animal species, it could easily be extended to model plant propagation or the dispersal of flying animals. Because these programs were designed to build complex, realistic models from raw data, and because they offer an intuitive user interface, they lend themselves to many applications in conservation biology. Moreover, theoretical questions in the fields of population and landscape ecology might also be addressed by these approaches.
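To make the modelling approach concrete, here is a minimal sketch of one yearly step of a density-dependent, stochastic Leslie-matrix projection with culling, in the spirit of the model described above. The three age classes, vital rates, carrying capacity, noise level and culling rate are illustrative assumptions, not the SIM-Ibex parameterisation.

    # Minimal sketch of a stochastic, density-dependent Leslie-matrix step
    # with culling (illustrative rates; not the SIM-Ibex parameterisation).
    import numpy as np

    rng = np.random.default_rng(0)
    fecundity = np.array([0.0, 0.4, 0.8])   # offspring per female, by age class
    survival = np.array([0.7, 0.85])        # survival age 0->1 and age 1->2
    K = 500.0                               # carrying capacity (assumed)

    def project(n, cull=0.0):
        """One yearly step: density-dependent births, survival with ageing,
        multiplicative environmental noise, then proportional culling."""
        dd = max(0.0, 1.0 - n.sum() / K)    # density dependence acts on births
        L = np.zeros((3, 3))
        L[0, :] = fecundity * dd            # reproduction row
        L[1, 0] = survival[0]               # ageing with survival
        L[2, 1] = survival[1]
        n = L @ n
        n *= rng.lognormal(0.0, 0.1)        # environmental stochasticity
        return n * (1.0 - cull)

    n = np.array([100.0, 80.0, 60.0])       # initial age structure
    for year in range(5):
        n = project(n, cull=0.05)
        print(year + 1, n.round(1))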
Abstract:
Due to intense international competition, demanding and sophisticated customers, and diverse, transformative technological change, organizations need to renew their products and services by allocating resources to research and development (R&D). Managing R&D is complex but vital for many organizations to survive in a dynamic, turbulent environment. Thus, the increased interest among decision-makers in finding the right performance measures for R&D is understandable. The measures or evaluation methods of R&D performance can be utilized for multiple purposes: for strategic control, for justifying the existence of R&D, for providing information and improving activities, as well as for the purposes of motivating and benchmarking. Earlier research in the field of R&D performance analysis has generally focused either on the activities and the relevant factors and dimensions - e.g. strategic perspectives, purposes of measurement, levels of analysis, types of R&D or phases of the R&D process - prior to the selection of R&D performance measures, or on proposed principles or actual implementations of the selection or design processes of R&D performance measures or measurement systems. This study aims at integrating the consideration of the essential factors and dimensions of R&D performance analysis into developed selection processes of R&D measures that have been applied in real-world organizations. The earlier models for corporate performance measurement found in the literature are to some extent adaptable to the development of measurement systems and the selection of measures in R&D activities. However, it is necessary to emphasize the special aspects of measuring R&D performance that make the development of new approaches, especially for R&D performance measure selection, necessary. First, the special characteristics of R&D - such as the long time lag between inputs and outcomes, as well as the overall complexity and difficult coordination of activities - give rise to R&D performance analysis problems, such as the need for more systematic, objective, balanced and multidimensional approaches to R&D measure selection, as well as the incompatibility of R&D measurement systems with other corporate measurement systems and vice versa. Second, the above-mentioned characteristics and challenges underline the significance of the influencing factors and dimensions that need to be recognized in order to derive the selection criteria for measures and choose the right R&D metrics, which is the most crucial step in the measurement system development process. The main purpose of this study is to support the management and control of the research and development activities of organizations by increasing the understanding of R&D performance analysis, clarifying the main factors related to the selection of R&D measures, and providing novel types of approaches and methods for systematizing the whole strategy- and business-based selection and development process of R&D indicators. The final aim of the research is to support management in their R&D decision making with suitable, systematically chosen measures or evaluation methods of R&D performance. Thus, the emphasis in most sub-areas of the present research has been on promoting the selection and development process of R&D indicators with the help of different tools and decision support systems; i.e., the research has normative features, providing guidelines through novel types of approaches.
The gathering of data and the conduct of case studies in metal and electronics industry companies, in the information and communications technology (ICT) sector, and in non-profit organizations helped to formulate a comprehensive picture of the main challenges of R&D performance analysis in different organizations, which is essential, as the recognition of the most important problem areas is a crucial element in the constructive research approach utilized in this study. Multiple practical benefits regarding the defined problem areas could be found in the various constructed approaches presented in this dissertation: 1) the selection of R&D measures became more systematic compared with the empirical analysis, as it was common that no systematic approaches had been utilized in the studied organizations earlier; 2) the evaluation methods or measures of R&D chosen with the help of the developed approaches can be utilized more directly in decision-making, because of the thorough consideration of the purpose of measurement, as well as other dimensions of measurement; 3) more balance in the set of R&D measures was desired and gained through the holistic approaches to the selection processes; and 4) more objectivity was gained by organizing the selection processes, as the earlier systems were considered subjective in many organizations. Scientifically, this dissertation aims to contribute to the present body of knowledge of R&D performance analysis by facilitating the handling of the versatility and challenges of R&D performance analysis, as well as the factors and dimensions influencing the selection of R&D performance measures, and by integrating these aspects into the developed novel types of approaches, methods and tools in the selection processes of R&D measures applied in real-world organizations. Throughout the research, the handling of the versatility and challenges of R&D performance analysis, as well as the factors and dimensions influencing R&D performance measure selection, is strongly integrated with the constructed approaches. Thus, the research meets the above-mentioned purposes and objectives of the dissertation from both the scientific and the practical point of view.
Abstract:
The direct torque control (DTC) has become an accepted vector control method beside the current vector control. The DTC was first applied to asynchronous machines, and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned. Therefore, the effect of the motor parameters is analysed taking the control principle into account. Based on the analysis, a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct torque controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of current and voltage models, and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed, and an error detection and compensation method is presented. The dynamic performance of a previously presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for estimating the initial angle of the rotor is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements to a model. The model is nonlinear with respect to the rotor angle, and therefore a nonlinear least-squares optimization method is needed in the procedure. A commonly used current vector control scheme is minimum current control. In the DTC, the stator flux linkage reference is usually kept constant; achieving the minimum current requires controlling the reference. An on-line method that minimizes the current by controlling the stator flux linkage reference is presented. The control of the reference above the base speed is also considered. A new flux linkage estimator is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved. An adaptive correction is used in the same way as in the estimation of the controller's stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
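As an aside on the initial-angle estimation step, a common way to pose this problem is to fit the directional inductance measurements to a saliency model such as L(theta) = L0 + L2*cos(2*(theta - theta_r)), which is nonlinear in the rotor angle theta_r. The sketch below does this with a nonlinear least-squares routine; the model form, parameter values and noise level are assumptions for illustration, not the thesis' exact formulation (note also that the cos-2-theta form leaves a 180-degree polarity ambiguity that a practical method must resolve separately).

    # Illustrative sketch: estimating the initial rotor angle theta_r from
    # inductance measured in several directions, using the assumed saliency
    # model L(theta) = L0 + L2*cos(2*(theta - theta_r)).
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)
    L0_true, L2_true, theta_r_true = 5.0e-3, -1.0e-3, 0.7  # H, H, rad (assumed)

    theta = np.linspace(0.0, np.pi, 12)                  # measurement directions
    L_meas = (L0_true + L2_true * np.cos(2.0 * (theta - theta_r_true))
              + rng.normal(0.0, 2e-5, theta.size))       # noisy measurements

    def residuals(p):
        L0, L2, theta_r = p
        return L0 + L2 * np.cos(2.0 * (theta - theta_r)) - L_meas

    fit = least_squares(residuals, x0=[4e-3, -5e-4, 0.0])
    print("estimated rotor angle [rad]:", fit.x[2])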
Abstract:
OBJECTIVE: To evaluate the variability of bond strength test results of adhesive systems (AS) and to correlate the results with clinical parameters from clinical studies investigating cervical restorations. MATERIALS AND METHODS: Regarding the clinical studies, the internal database that had previously been used for a meta-analysis on cervical restorations was updated with clinical studies published between 2008 and 2012 by searching the PubMed and SCOPUS databases. PubMed and the International Association for Dental Research abstracts online were searched for laboratory studies on microtensile, macrotensile and macroshear bond strength tests. The inclusion criteria were (1) dentin, (2) testing of at least four adhesive systems, (3) the same diameter of composite and (4) 24 h of water storage prior to testing. The clinical outcome variables were retention loss, marginal discoloration, detectable margins, and a clinical index comprising the three parameters by weighting them. Linear mixed models that included a random study effect were calculated for both the laboratory and the clinical studies. The variability was assessed by calculating a ratio of variances, dividing the variance among the estimated bonding effects obtained in the linear mixed models by the sum of all variance components estimated in these models. RESULTS: Thirty-two laboratory studies fulfilled the inclusion criteria, comprising 183 experiments. Of those, 86 used the microtensile test, evaluating 22 adhesive systems (AS); 27 used the macrotensile test with 17 AS; and 70 used the macroshear test with 24 AS. For 28 AS, results from clinical studies were available. Microtensile and macrotensile results were moderately correlated (Spearman rho = 0.66, p = 0.007), as were microtensile and macroshear results (Spearman rho = 0.51, p = 0.03), but macroshear and macrotensile were not (Spearman rho = 0.34, p = 0.22). The effect of the adhesive system was significant for microtensile and macroshear (p < 0.001) but not for macrotensile. The effect of the adhesive system could explain 36% of the variability of the microtensile test, 27% of the macrotensile and 33% of the macroshear test. For the clinical trials, about 49% of the variability of retained restorations could be explained by the adhesive system. With respect to the correlation between bond strength tests and clinical parameters, only a moderate correlation between micro- and macrotensile test results and marginal discoloration was demonstrated; no correlation between these tests and retention loss or marginal integrity was shown. The correlation improved when more studies were included compared with assessing only one study. SIGNIFICANCE: The high variability of bond strength test results highlights the need to establish individual acceptance levels for a given test institute. The weak correlation of bond strength test results with clinical parameters leads to the conclusion that one should not rely solely on bond strength tests to predict the clinical performance of an adhesive system, but should also conduct other laboratory tests, such as tests of the marginal adaptation of fillings in extracted teeth and of the retention loss of restorations in non-retentive cavities after artificial aging.
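For reference, the variance ratio described above can be written out explicitly (notation assumed here for illustration): rho = sigma^2_AS / (sigma^2_AS + sigma^2_study + sigma^2_residual), where sigma^2_AS is the variance among the estimated adhesive-system effects and the denominator sums all variance components of the linear mixed model. The reported 36% for the microtensile test thus means that the adhesive system accounts for 36% of the total estimated variance, with study and residual effects accounting for the rest.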
Abstract:
PURPOSE: To determine whether a mono-, bi- or tri-exponential model best fits the intravoxel incoherent motion (IVIM) diffusion-weighted imaging (DWI) signal of normal livers. MATERIALS AND METHODS: The pilot and validation studies were conducted in 38 and 36 patients with normal livers, respectively. The DWI sequence was performed using single-shot echoplanar imaging with 11 (pilot study) and 16 (validation study) b values. In each study, data from all patients were used to model the IVIM signal of normal liver. Diffusion coefficients (Di ± standard deviations) and their fractions (fi ± standard deviations) were determined from each model. The models were compared using the extra sum-of-squares test and information criteria. RESULTS: The tri-exponential model provided a better fit than both the bi- and mono-exponential models. The tri-exponential IVIM model determined three diffusion compartments: a slow (D1 = 1.35 ± 0.03 × 10⁻³ mm²/s; f1 = 72.7 ± 0.9%), a fast (D2 = 26.50 ± 2.49 × 10⁻³ mm²/s; f2 = 13.7 ± 0.6%) and a very fast (D3 = 404.00 ± 43.7 × 10⁻³ mm²/s; f3 = 13.5 ± 0.8%) diffusion compartment [results from the validation study]. The very fast compartment contributed to the IVIM signal only for b values ≤ 15 s/mm². CONCLUSION: The tri-exponential model provided the best fit for IVIM signal decay in the liver over the 0-800 s/mm² range. In IVIM analysis of normal liver, a third very fast (pseudo)diffusion component might be relevant. KEY POINTS: • For normal liver, the tri-exponential IVIM model might be superior to the bi-exponential one. • A very fast compartment (D = 404.00 ± 43.7 × 10⁻³ mm²/s; f = 13.5 ± 0.8%) is determined from the tri-exponential model. • This compartment contributes to the IVIM signal only for b ≤ 15 s/mm².
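In the usual IVIM notation (assumed here; the paper's symbols may differ slightly), the tri-exponential model above is

    S(b)/S_0 = f_1 e^{-b D_1} + f_2 e^{-b D_2} + f_3 e^{-b D_3},  with  f_1 + f_2 + f_3 = 1.

With D_3 ≈ 404 × 10⁻³ mm²/s, the third term e^{-b D_3} has already decayed to about 0.002 at b = 15 s/mm², which is why the very fast compartment contributes to the signal only at the lowest b values.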
Abstract:
BACKGROUND. So far, few studies have focused on the last steps of drug-use trajectories. Heroin has been described as a final stage, but the non-medical use of prescription opioids (NMUPO) is often associated with heroin use. There is, however, no consensus yet about which one precedes the other. AIMS. The objective of this study was to test, using a prospective design, which of these two substances was likely to be induced by the other. MATERIAL AND METHODS. We used data from the Swiss Longitudinal Cohort Study on Substance Use Risk Factors (C-SURF) to assess exposure to heroin and NMUPO at two time points (N = 5,041). Cross-lagged panel models provided evidence regarding prospective pathways between heroin and NMUPO. Power analyses provided evidence about significance and clinical relevance. RESULTS. Results showed that heroin use predicted later NMUPO use (β = 1.217, p < 0.001) and that the reverse pathway was non-significant (β = 0.240, p = 0.233). Heroin use seems to be an important determinant, causing a 150% risk increase for NMUPO use at follow-up, whereas NMUPO use at baseline increases the risk of heroin use at follow-up by a mere, non-significant 20%. CONCLUSIONS. Thus, heroin users were more likely to move to NMUPO than non-heroin users, whereas NMUPO users were not likely to move to heroin use. The pathway of substance use seemed to include first heroin use, then NMUPO use.
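In a standard two-wave cross-lagged panel model (notation assumed here for illustration), the two reported coefficients are the cross-lagged paths in

    NMUPO_{t2}  = a_1 · NMUPO_{t1}  + β_1 · Heroin_{t1} + e_1
    Heroin_{t2} = a_2 · Heroin_{t1} + β_2 · NMUPO_{t1}  + e_2,

where a_1 and a_2 are the autoregressive stability paths. The significant β_1 = 1.217 corresponds to the heroin-to-NMUPO pathway and the non-significant β_2 = 0.240 to the reverse one.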
Abstract:
This thesis surveys temporal and stochastic software reliability models and examines a few of the models in practice. The theoretical part covers the key definitions and metrics used in describing and assessing software reliability, as well as the descriptions of the models themselves. Two groups of software reliability models are presented. The first group consists of risk-based models. The second group comprises models based on the 'seeding' and significance of faults. The empirical part contains the descriptions and results of the experiments. The experiments were carried out using three models from the first group: the Jelinski-Moranda model, the first geometric model and a simple exponential model. The purpose of the experiments was to study how the distribution of the input data affects the performance of the models and how sensitive the models are to changes in the amount of input data. The Jelinski-Moranda model proved the most sensitive to the distribution, owing to convergence problems, while the first geometric model proved the most sensitive to changes in the amount of data.
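For context, the Jelinski-Moranda model mentioned above assumes N initial faults and a constant per-fault hazard phi, so the failure rate between the (i-1)-th and i-th failures is lambda_i = phi*(N - i + 1). A minimal maximum-likelihood fit from inter-failure times might look like the sketch below; the data are hypothetical and this is not the thesis' implementation.

    # Minimal sketch: maximum-likelihood fit of the Jelinski-Moranda model,
    # where the hazard between failures i-1 and i is phi * (N - i + 1).
    # The inter-failure times below are hypothetical.
    import numpy as np
    from scipy.optimize import minimize

    t = np.array([7.0, 11.0, 18.0, 26.0, 40.0, 55.0, 79.0, 110.0])
    n = t.size

    def neg_log_lik(params):
        N, phi = params
        if N <= n or phi <= 0.0:
            return np.inf                      # keep the search feasible
        i = np.arange(1, n + 1)
        lam = phi * (N - i + 1)                # hazard before the i-th failure
        return -(np.log(lam) - lam * t).sum()  # exponential inter-failure times

    fit = minimize(neg_log_lik, x0=[n + 5.0, 0.001], method="Nelder-Mead")
    N_hat, phi_hat = fit.x
    print(f"estimated total faults N = {N_hat:.1f}, phi = {phi_hat:.5f}")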
Abstract:
The aim of this study was to examine the value network and business models of wireless Internet services. The study was qualitative in nature and used a constructive case study as its strategy. The example service was the Treasure Hunters mobile phone game. The study consisted of a theoretical and an empirical part. The theoretical part conceptually linked innovation, business models and the value network, and laid the foundation for developing business models. The empirical part first focused on creating business models based on the developed innovations; finally, the aim was to define the value network required to deliver the service. An innovation session, interviews and a questionnaire survey were used as research methods. Based on the results, several business concepts were formed, as well as a description of a basic value network model for wireless games. The conclusion was that wireless services require a value network composed of several actors in order to be realized.
Abstract:
BACKGROUND: The recent large randomized controlled trial of glutamine and antioxidant supplementation suggested that high-dose glutamine is associated with increased mortality in critically ill patients with multiorgan failure. The objectives of the present analyses were to reevaluate the effect of supplementation after controlling for baseline covariates and to identify potentially important subgroup effects. MATERIALS AND METHODS: This study was a post hoc analysis of a prospective factorial 2 × 2 randomized trial conducted in 40 intensive care units in North America and Europe. In total, 1223 mechanically ventilated adult patients with multiorgan failure were randomized to receive glutamine, antioxidants, both glutamine and antioxidants, or placebo administered separately from artificial nutrition. We compared each of the 3 active treatment arms (glutamine alone, antioxidants alone, and glutamine + antioxidants) with placebo on 28-day mortality. Post hoc, treatment effects were examined within subgroups defined by baseline patient characteristics. Logistic regression was used to estimate treatment effects within subgroups after adjustment for baseline covariates and to identify treatment-by-subgroup interactions (effect modification). RESULTS: The 28-day mortality rates in the placebo, glutamine, antioxidant, and combination arms were 25%, 32%, 29%, and 33%, respectively. After adjusting for prespecified baseline covariates, the adjusted odds ratio of 28-day mortality vs placebo was 1.5 (95% confidence interval, 1.0-2.1, P = .05), 1.2 (0.8-1.8, P = .40), and 1.4 (0.9-2.0, P = .09) for the glutamine, antioxidant, and glutamine plus antioxidant arms, respectively. In the post hoc subgroup analysis, both glutamine and antioxidants appeared most harmful in patients with baseline renal dysfunction. No subgroups suggested reduced mortality with supplements. CONCLUSIONS: After adjustment for baseline covariates, early provision of high-dose glutamine administered separately from artificial nutrition was not beneficial and may be associated with increased mortality in critically ill patients with multiorgan failure. For both glutamine and antioxidants, the greatest potential for harm was observed in patients with multiorgan failure that included renal dysfunction upon study enrollment.
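As an illustration of the analysis style described above (not the study's code or data), a treatment-by-subgroup interaction in an adjusted logistic regression can be examined as below; the variable names, covariate set, effect sizes and synthetic data are all assumptions.

    # Illustrative sketch with synthetic data: adjusted 28-day mortality and
    # a treatment-by-subgroup (renal dysfunction) interaction via logistic
    # regression. Names and effect sizes are assumed, not the study's.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 1200
    df = pd.DataFrame({
        "glutamine": rng.integers(0, 2, n),     # randomized arm indicator
        "renal_dysfn": rng.integers(0, 2, n),   # baseline subgroup
        "apache2": rng.normal(22.0, 6.0, n),    # baseline severity covariate
    })
    logit = (-1.5 + 0.4 * df.glutamine + 0.6 * df.renal_dysfn
             + 0.5 * df.glutamine * df.renal_dysfn
             + 0.02 * (df.apache2 - 22.0))
    df["died_28d"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    model = smf.logit("died_28d ~ glutamine * renal_dysfn + apache2",
                      data=df).fit()
    print(np.exp(model.params))  # odds ratios; interaction = effect modification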
Abstract:
The development of research methods requires a systematic review of their status. This study focuses on the use of Hierarchical Linear Modeling methods in psychiatric research. The evaluation covers 207 documents published up to 2007 and indexed in the ISI Web of Knowledge databases; the analyses focus on the 194 articles in the sample. Bibliometric methods are used to describe the publication patterns. Results indicate a growing interest in applying the models and an establishment of the methods after 2000. Both Lotka's and Bradford's distributions fit the data.
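For reference, the two bibliometric regularities mentioned are: Lotka's law, under which the number of authors who publish n papers falls off roughly as y_n = C / n^2 (so highly productive authors are rare), and Bradford's law, under which journals ranked by article yield split into zones producing roughly equal numbers of articles, with zone sizes growing as 1 : k : k^2.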
Abstract:
OBJECTIVE: Studies suggest that smoking may be a risk factor for the development of microvascular complications such as diabetic peripheral neuropathy (DPN). The objective of this study was to assess the relationship between smoking and DPN in persons with type 1 or type 2 diabetes. RESEARCH DESIGN AND METHODS: A systematic review of the PubMed, Embase, and Cochrane clinical trials databases was conducted for the period from January 1966 to November 2014 for cohort, cross-sectional and case-control studies that assessed the relationship between smoking and DPN. Separate meta-analyses for prospective cohort studies and case-control or cross-sectional studies were performed using random effects models. RESULTS: Thirty-eight studies (10 prospective cohort and 28 cross-sectional) were included. The prospective cohort studies included 5558 participants without DPN at baseline. During follow-up ranging from 2 to 10 years, 1550 cases of DPN occurred. The pooled unadjusted odds ratio (OR) of developing DPN associated with smoking was 1.26 (95% CI 0.86-1.85; I² = 74%; evidence grade: low strength). Stratified analyses of the prospective studies revealed that studies of higher quality and with better levels of adjustment and longer follow-up showed a significant positive association between smoking and DPN, with less heterogeneity. The cross-sectional studies included 27,594 participants. The pooled OR of DPN associated with smoking was 1.42 (95% CI 1.21-1.65; I² = 65%; evidence grade: low strength). There was no evidence of publication bias. CONCLUSIONS: Smoking may be associated with an increased risk of DPN in persons with diabetes. Further studies are needed to test whether this association is causal and whether smoking cessation reduces the risk of DPN in adults with diabetes.
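As an illustration of the random-effects pooling used for such odds ratios (with synthetic per-study estimates, not the reviewed data), a DerSimonian-Laird computation looks like this:

    # Minimal sketch of DerSimonian-Laird random-effects pooling of log odds
    # ratios. Study estimates below are hypothetical, not the reviewed data.
    import numpy as np

    log_or = np.log(np.array([1.10, 1.55, 0.92, 1.40, 1.25]))  # per study
    se = np.array([0.20, 0.25, 0.30, 0.15, 0.22])              # standard errors

    w = 1.0 / se**2                                     # fixed-effect weights
    y_fe = np.sum(w * log_or) / w.sum()
    q = np.sum(w * (log_or - y_fe)**2)                  # Cochran's Q
    dfree = log_or.size - 1
    c = w.sum() - np.sum(w**2) / w.sum()
    tau2 = max(0.0, (q - dfree) / c)                    # between-study variance
    i2 = max(0.0, (q - dfree) / q) * 100.0              # I^2 heterogeneity

    w_re = 1.0 / (se**2 + tau2)                         # random-effects weights
    pooled = np.sum(w_re * log_or) / w_re.sum()
    se_pooled = np.sqrt(1.0 / w_re.sum())
    print(f"pooled OR = {np.exp(pooled):.2f} "
          f"(95% CI {np.exp(pooled - 1.96 * se_pooled):.2f}-"
          f"{np.exp(pooled + 1.96 * se_pooled):.2f}), I2 = {i2:.0f}%")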
Abstract:
Three-dimensional reconstruction of reservoir analogues can be improved by combining data from different geophysical methods. Ground Penetrating Radar (GPR) and Electrical Resistivity Tomography (ERT) data are valuable tools, since they provide subsurface information on the internal architecture and facies distribution of sedimentary rock bodies, enabling the upgrading of depositional models and the reconstruction of heterogeneity. The Lower Eocene Roda Sandstone is a well-known deltaic complex, widely studied as a reservoir analogue, that displays a series of sandstone wedges with a general NE to SW progradational trend. To provide a better understanding of the internal heterogeneity of a 10 m-thick progradational delta-front sandstone unit, 3D GPR data were acquired. In addition, common-midpoint (CMP) measurements of the subsoil velocity of the sandstone, test profiles with antennas of different frequencies (25, 50 and 100 MHz), and topographic data for subsequent correction of the geophysical data were also obtained. Three ERT profiles were also acquired to further constrain the GPR analysis. These geophysical results illustrate the geometry of reservoir-analogue heterogeneities, both depositional and diagenetic in nature, improving and complementing previous outcrop-derived data. GPR interpretation using radar stratigraphy principles and attribute analysis provided: 1) the three-dimensional geometry of the major stratigraphic surfaces that define four units in the GPR prism; and 2) images of the internal architecture of these units, together with a statistical study of azimuths and dips, useful for a quick determination of paleocurrent directions. These results were used to define the depositional architecture of the progradational sandbody, which shows an arrangement in very-high-frequency sequences characterized by clockwise paleocurrent variations and a decrease in sedimentary flow, similar to those observed at a greater scale in the same system. This high-frequency sequential arrangement has been attributed to the autocyclic dynamics of a supply-dominated delta front where fluvial and tidal currents are in competition. The resistivity models improved the visualization of reservoir quality associated with cement distribution caused by depositional and early diagenetic processes related to the development of transgressive and regressive systems tracts in high-frequency sequences.
Abstract:
The use of private funding and management in airports is on an increasing trend. The literature has not paid enough attention to mixed management models in this industry, although many European airports take the form of mixed public-private companies, where ownership is shared between the public and private sectors. We examine the determinants of the degree of private participation in the European airport sector. Drawing on a sample of the 100 largest European airports, we estimate a multivariate equation in order to determine the role of airport characteristics, fiscal variables, and political factors in the extent of private involvement. Our results confirm the alignment between public and private interests in partially privatized airports. Fiscal constraints and market attractiveness promote private participation. Integrated governance models and a high share of network carriers prevent the presence of private ownership, while the degree of private participation appears to be pragmatic rather than ideological.