954 results for Developed applications
Abstract:
The Universitat Oberta de Catalunya (Open University of Catalonia, UOC) is an online university that makes extensive use of information and communication technologies to provide education. Ever since its establishment in 1995, the UOC has developed and tested methodologies and technological support services to meet the educational challenges posed by its student community and its teaching and management staff. The know-how it has acquired in doing so is the basis on which it has created the Open Apps platform, which is designed to provide access to open source technical applications, information on successful learning and teaching experiences, resources and other solutions, all in a single environment. Open Apps is an open, online catalogue, the content of which is available to all students for learning purposes, all IT professionals for downloading and all teachers for reusing. To contribute to the transfer of knowledge, experience and technology, each of the platform's apps comes with full documentation, plus information on cases in which it has been used and related tools. It is hoped that such transfer will lead to the growth of an external partner network, and that this, in turn, will result in improvements to the applications and teaching/learning practices, and in greater scope for collaboration. Open Apps is a strategic project that has arisen from the UOC's commitment to the open access movement and to giving knowledge and technology back to society, as well as its firm belief that sustainability depends on communities of interest.
Abstract:
The implementation of new imaging techniques in the daily practice of the radiation oncologist has been a major advance of the last ten years, allowing therapeutic intervals and locoregional control of the disease to be optimized while limiting side effects. Among these techniques, positron emission tomography (PET) gives the clinician access to data on tumoral biological mechanisms while retaining the morphological information of the computed tomography (CT) scan. Hybrid PET/CT has recently been developed, and numerous studies have aimed at optimizing its use in treatment planning, in evaluating treatment response and in assessing prognostic value. The choice of radiotracer (according to the type of cancer and the biological mechanism studied) and the various methods of tumoral delineation require regular updating to optimize practice. Throughout this article we propose an exhaustive review of research published (or in press) up to December 2011, as a user guide to PET/CT in all aspects of modern radiotherapy, from diagnosis to follow-up: biopsy guidance, optimization of treatment planning and dosimetry, evaluation of tumor response and prognostic value, and follow-up with early discrimination of recurrence from tumoral necrosis. For didactic purposes, each of these aspects is approached by primary tumor location and illustrated with representative images. The current contribution of PET/CT and its development perspectives are described to offer the radiation oncologist a clear and up-to-date overview of this expanding domain.
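The review surveys tumoral delineation methods without fixing a single algorithm; one of the simplest and most widely cited approaches is fixed-threshold segmentation at a set fraction of SUVmax. Below is a minimal sketch under that assumption, with hypothetical function names and toy data (real input would come from a DICOM PET series):

```python
import numpy as np

def threshold_delineation(suv: np.ndarray, fraction: float = 0.4) -> np.ndarray:
    """Segment a PET uptake volume by a fixed fraction of SUVmax.

    A voxel is assigned to the tumour volume if its SUV exceeds
    fraction * SUVmax; fractions of 0.40-0.50 are the values most
    often quoted for fixed-threshold methods.
    """
    suv_max = suv.max()
    return suv >= fraction * suv_max

# Toy 3-D uptake map standing in for a real PET series.
rng = np.random.default_rng(0)
volume = rng.gamma(shape=2.0, scale=1.0, size=(16, 16, 8))
mask = threshold_delineation(volume, fraction=0.4)
print(f"Delineated {mask.sum()} of {mask.size} voxels")
```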
Abstract:
Many biological specimens do not arrange themselves in the ordered assemblies (tubular or flat 2D crystals) required for electron crystallography, nor in the perfectly ordered 3D crystals required for X-ray diffraction; many others are simply too large to be approached by NMR spectroscopy. Single-particle analysis by cryo-electron microscopy has therefore become a progressively more important technique for determining the structure of large isolated macromolecules. Nevertheless, the low signal-to-noise ratio (SNR) and the high electron-beam sensitivity of biological samples observed in their native state remain two of the main resolution-limiting factors. Cryo-negative staining is a recently developed technique that allows biological samples to be studied in the electron microscope at low temperature, in the vitrified state, but in the presence of a stain (ammonium molybdate). The present work investigates the advantages of this novel technique and shows that it can overcome most of the problems encountered in cryo-electron microscopy of vitrified native suspensions of biological particles. Specimens are faithfully represented, with an SNR ten times higher than for unstained samples, and comparison of multiple-exposure series of stained and unstained samples shows that beam damage is considerably reduced. The results also demonstrate that cryo-negative staining is capable of high-resolution analysis of biological macromolecules: the vitrified stain solution surrounding the sample does not block access to the internal features (i.e., the secondary structure) of a protein. This finding is of direct interest for the structural biologist trying to combine electron microscopy and X-ray data. Finally, several application examples demonstrate the advantages of this newly developed electron microscopy technique.
Abstract:
The present dissertation is devoted to a systematic approach to the development of abatement methods for organic toxic and refractory pollutants by chemical decomposition in aqueous and gaseous phases. The systematic approach outlines the basic scenario of chemical decomposition process applications, with a step-by-step approximation to the most effective result and a predictable outcome for the full-scale application, confirmed by successful experience. The strategy includes the following steps: chemistry studies; reaction kinetic studies in interaction with mass transfer processes under different control parameters; contact equipment design and studies; mathematical description of the process for modelling and simulation; integration of the processes into a treatment technology and its optimisation; and treatment plant design. The main idea of the systematic approach to introducing an oxidation process is the search for the most effective combination of the chemical reaction and the treatment device in which the reaction is supposed to take place. Under this strategy, knowledge of the reaction pathways, products, stoichiometry and kinetics is fundamental and, unfortunately, often unavailable beforehand; research on the chemistry of novel treatment methods therefore nowadays comprises a substantial part of the effort. Chemical decomposition methods in the aqueous phase include oxidation by ozonation, ozone-associated methods (O3/H2O2, O3/UV, O3/TiO2), the Fenton reagent (H2O2/Fe2+/3+) and photocatalytic oxidation (PCO). In the gaseous phase, PCO and catalytic hydrolysis over zero-valent iron are developed. The experimental studies within the described methodology involve aqueous-phase oxidation of natural organic matter (NOM) in potable water, phenolic and aromatic amino compounds, ethylene glycol and its derivatives as de-icing agents, and the oxygenated motor fuel additive methyl tert-butyl ether (MTBE) in leachates and polluted groundwater. Gas-phase chemical decomposition includes PCO of volatile organic compounds and dechlorination of chlorinated methane derivatives. The results of the research are summarised in fifteen attachments (publications and papers submitted for publication or under preparation).
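As an illustration of the kinetic step of the strategy (not the dissertation's actual models), the sketch below integrates a generic second-order ozonation rate law, d[M]/dt = -k[O3][M], with the dissolved ozone concentration held constant by continuous sparging; all numerical values are hypothetical:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative second-order rate law for ozonation of a pollutant M:
#   d[M]/dt = -k * [O3] * [M]
# with [O3] assumed constant (continuous sparging) -- a simplification.
k = 2.0e3      # rate constant, 1/(M*s)  -- hypothetical value
O3 = 1.0e-4    # dissolved ozone, M      -- hypothetical value

def rate(t, y):
    (M,) = y
    return [-k * O3 * M]

sol = solve_ivp(rate, t_span=(0.0, 60.0), y0=[1.0e-3], dense_output=True)
for ti in np.linspace(0.0, 60.0, 7):
    print(f"t = {ti:5.1f} s   [M] = {sol.sol(ti)[0]:.2e} M")
```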
Abstract:
During the last few years the need for new motor types has grown, since both high efficiency and accurate dynamic performance are demanded in industrial applications. For this reason, new effective control systems such as direct torque control (DTC) have been developed. Permanent magnet synchronous motors (PMSM) are well suited to new adjustable-speed AC inverter drives, because their efficiency and power factor do not depend on pole pair number and speed to the same extent as in induction motors. An induction motor (IM) with a mechanical gearbox can therefore often be replaced with a direct PM motor drive: space and costs are saved, efficiency increases and the cost of maintenance decreases. This thesis deals with the design criteria, analytical calculation and analysis of the permanent magnet synchronous motor for both sinusoidal and rectangular air-gap flux density. It is examined how the air-gap flux, flux densities, inductances and torque can be estimated analytically for salient-pole and non-salient-pole motors. By means of analytical calculations, the optimal construction has been sought for machines rotating at relatively low speeds of 300 rpm to 600 rpm, which are suitable speeds e.g. in the pulp and paper industry. The calculations are verified by finite element calculations and by measurements on a prototype motor. The prototype is a 45 kW, 600 rpm PMSM with buried V-magnets, a very appropriate construction for high-torque motors with high performance. With the purpose-built prototype machine it is possible not only to verify the analytical calculations but also to show whether the 600 rpm PMSM can replace a 1500 rpm IM with a gear. It can also be tested whether the outer dimensions of the PMSM may be the same as for the IM and whether the PMSM can in that case produce a 2.5-fold torque, in consequence of which it may be possible to achieve the same power. The thesis also considers how to design a permanent magnet synchronous motor for relatively low-speed applications that require high motor torque and efficiency as well as bearable costs of permanent magnet materials. It is shown how the selection of different parameters affects the motor properties. Key words: permanent magnet synchronous motor, PMSM, surface magnets, buried magnets
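The thesis's analytical machinery is not reproduced here, but the standard dq-frame torque equation for a PMSM shows how saliency (L_d differing from L_q), as in buried V-magnet rotors, adds a reluctance term on top of the magnet torque. A minimal sketch with hypothetical parameters, chosen only to be loosely in the range of a 45 kW, 600 rpm machine:

```python
def pmsm_torque(p: int, psi_pm: float, L_d: float, L_q: float,
                i_d: float, i_q: float) -> float:
    """Electromagnetic torque of a PMSM in the rotor dq frame.

    T = (3/2) * p * (psi_pm * i_q + (L_d - L_q) * i_d * i_q)

    The first term is the magnet (synchronous) torque; the second is
    the reluctance torque, nonzero only for salient machines
    (L_d != L_q), e.g. buried V-magnet rotors.
    """
    return 1.5 * p * (psi_pm * i_q + (L_d - L_q) * i_d * i_q)

# Hypothetical parameters; a negative i_d exploits the reluctance torque.
T = pmsm_torque(p=5, psi_pm=0.9, L_d=8e-3, L_q=14e-3, i_d=-40.0, i_q=120.0)
print(f"torque = {T:.0f} Nm")
```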
Abstract:
Applications developed for mobile devices are nowadays in widespread use. Mobile applications typically offer the user a fixed, predefined functionality and cannot adapt to a changing usage environment. If an application were aware of its environment and of changes in it, it could offer the user features suited to the situation. Distributed, context-aware applications, however, require a considerably more complex architecture than conventional applications. This thesis presents a software architecture for distributed, context-aware applications. The work builds on the mobile-application architecture developed in the CAPNET research project at the University of Oulu, and aims to address the shortcomings that emerged during the development and testing of the CAPNET architecture: for example, the specification of the architecture's components should be made more precise, and the components should be divided into horizontal layers according to their properties and platform dependence. The thesis reviews existing technologies that support the development of distributed, context-aware systems and analyses their suitability for the CAPNET architecture. It presents the CAPNET architecture and proposes a new architecture with a layered division of the components, in which the components and the structure of the system are specified and modelled with UML. The result is an architecture specification that divides the components of the current architecture into layers, with clearly and precisely defined component interfaces. The work also gives the project team a good starting point for designing and implementing the new architecture.
Abstract:
This study developed a prototype system for measuring the dimensions of concrete elements. The system makes it possible to measure a three-dimensional object, and a stereo-vision-based object measurement was also developed. The prototype was tested and the results proved reliable. The study also surveys and compares other approaches and existing systems for three-dimensional object measurement used by Finnish companies in this field.
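The abstract does not detail the stereo measurement, but for a rectified stereo pair the underlying relation is triangulation: depth Z = f*B/d, with focal length f (in pixels), baseline B and disparity d. A minimal sketch with hypothetical calibration values:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth of a point seen by a rectified stereo pair.

    Z = f * B / d, where f is the focal length in pixels, B the
    camera baseline in metres and d the disparity in pixels.
    """
    return focal_px * baseline_m / disparity_px

# Hypothetical calibration: 1200 px focal length, 0.25 m baseline.
for d in (8.0, 16.0, 32.0):
    z = depth_from_disparity(1200.0, 0.25, d)
    print(f"disparity {d:5.1f} px -> depth {z:.2f} m")
```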
Abstract:
So far, e-business applications have in most cases been usable only over a wired connection. New wireless technologies, which have developed rapidly over the last few years, make it possible to use these applications regardless of time and place. The aim of this thesis was to study the use and benefits of wireless e-business applications in the information and communications industry. The work examines this evolutionary step of information technology from the viewpoint of an individual company, restricted to applications used in its own operations. The study builds a framework covering the evolution of mobility, the effects and benefits of new information technology, and wireless e-business applications in particular. This framework is then used to analyse current use in the companies studied. The conclusions of the study, concerning both current and future use, are based on the framework, the analysis of current use, and the interviews conducted.
Abstract:
The convergence of e-business and mobility, together with the accelerating pace of technological innovation, has generated interest in wireless business solutions. The aim of this thesis was to study the evaluation and development process of wireless e-business applications, focusing on wireless tracking of the paper industry supply chain. The study defines wireless e-business, describes the different application areas of wireless technology, and discusses the strategic and technological dimensions of the application evaluation and development process. The work builds a framework for examining the significance of wireless technologies in logistics. The most significant result of the study is a process model for evaluating and developing applications; a wireless application developed with the model proved useful for supply chain management.
Abstract:
This work presents new, efficient Markov chain Monte Carlo (MCMC) simulation methods for statistical analysis in various modelling applications. When using MCMC methods, the model is simulated repeatedly to explore the probability distribution describing the uncertainties in model parameters and predictions. In adaptive MCMC methods based on the Metropolis-Hastings algorithm, the proposal distribution needed by the algorithm learns from the target distribution as the simulation proceeds. Adaptive MCMC methods have been the subject of intensive research lately, as they open the way to essentially easier use of the methodology; the lack of user-friendly computer programs has been a main obstacle to wider acceptance of the methods. This work provides two new adaptive MCMC methods: DRAM and AARJ. The DRAM method has been built especially to work in high-dimensional and non-linear problems. The AARJ method is an extension of DRAM to model selection problems, where the mathematical formulation of the model is uncertain and we want to fit several different models to the same observations simultaneously. The methods were developed with the needs of modelling applications typical of the environmental sciences in mind, and the development work was pursued alongside several application projects. The applications presented in this work are: a wintertime oxygen concentration model for Lake Tuusulanjärvi and adaptive control of the aerator; a nutrition model for Lake Pyhäjärvi and lake management planning; and validation of the algorithms of the GOMOS ozone remote sensing instrument on board the Envisat satellite of the European Space Agency, together with a study of the effects of aerosol model selection on the GOMOS algorithm.
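The abstract does not reproduce the algorithms; as background, DRAM combines delayed rejection with the adaptive Metropolis idea of tuning the proposal covariance from the chain's own history. A minimal sketch of the adaptation step only (the delayed-rejection stages of DRAM are omitted), run on a toy 2-D Gaussian target:

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_iter=5000, adapt_start=500, eps=1e-8):
    """Minimal adaptive Metropolis sampler: the proposal covariance is
    re-estimated from the chain history (DRAM adds delayed-rejection
    stages on top of this adaptation, omitted here for brevity)."""
    rng = np.random.default_rng(1)
    x = np.asarray(x0, dtype=float)
    d = x.size
    s_d = 2.38**2 / d                          # standard AM scaling factor
    chain = np.empty((n_iter, d))
    lp = log_post(x)
    cov = np.eye(d)                            # initial proposal covariance
    for i in range(n_iter):
        if i >= adapt_start:                   # learn from the history so far
            cov = s_d * (np.cov(chain[:i].T) + eps * np.eye(d))
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy target: a strongly correlated 2-D Gaussian.
P = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
chain = adaptive_metropolis(lambda x: -0.5 * (x @ P @ x), x0=[3.0, -3.0])
print("posterior mean estimate:", chain[2000:].mean(axis=0))
```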
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations; their main limitation is the computational cost of performing complex flow simulations for each realization. In the first part of the thesis, this issue is explored in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), the flow response of each realization must be evaluated. Due to computational constraints, state-of-the-art methods use approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble; the complex and computationally heavy flow model is then run only for this subset, on the basis of which inference is made. Our objective is to increase the performance of this approach by using all of the available information, not solely the subset of exact responses. Error models are proposed to correct the approximate responses following a machine learning approach: for the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known, and this information is used to construct an error model that corrects the ensemble of approximate responses and predicts the expected responses of the exact model. The proposed methodology uses all the available information without perceptible additional computational cost and makes the uncertainty propagation more accurate and more robust. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, its dimensionality must be reduced. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty, and the individual correction of each proxy response leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only for uncertainty propagation but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy coupled to an error model provides the approximate response for the two-stage MCMC set-up; we demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to a classical one-stage MCMC implementation. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy, such that, as new flow simulations are performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, in which the methodology is applied to a problem of saline intrusion in a coastal aquifer.
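Below is a schematic sketch of the two-stage acceptance rule described above, with cheap stand-in functions in place of the proxy-plus-error-model and the exact flow model: the screening stage spends the expensive evaluation only on proposals the proxy already favours, while the second-stage ratio keeps the exact posterior as the sampling target.

```python
import numpy as np

def two_stage_mcmc(log_post_exact, log_post_proxy, x0, step=0.5, n_iter=2000):
    """Two-stage Metropolis with a symmetric random-walk proposal:
    stage 1 screens proposals with the cheap proxy posterior, and the
    stage-2 ratio corrects for the proxy error, so the chain still
    targets the exact posterior."""
    rng = np.random.default_rng(2)
    x = np.asarray(x0, dtype=float)
    lpe, lpp = log_post_exact(x), log_post_proxy(x)
    chain, exact_calls = [x.copy()], 1
    for _ in range(n_iter):
        y = x + step * rng.standard_normal(x.shape)
        lpp_y = log_post_proxy(y)
        # Stage 1: cheap screening -- most bad proposals die here.
        if np.log(rng.random()) < lpp_y - lpp:
            lpe_y = log_post_exact(y)          # expensive call, survivors only
            exact_calls += 1
            # Stage 2: exactness-restoring correction ratio.
            if np.log(rng.random()) < (lpe_y - lpe) - (lpp_y - lpp):
                x, lpe, lpp = y, lpe_y, lpp_y
        chain.append(x.copy())
    return np.array(chain), exact_calls

# Cheap stand-ins: an "exact" Gaussian posterior and a deliberately
# biased proxy of it (in the thesis, a flow proxy plus error model).
exact = lambda x: -0.5 * np.sum((x - 1.0) ** 2)
proxy = lambda x: -0.5 * np.sum((x - 1.2) ** 2)
chain, n_exact = two_stage_mcmc(exact, proxy, x0=np.zeros(2))
print(f"exact-model evaluations: {n_exact} for {len(chain) - 1} proposals")
```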
Abstract:
Very large molecular systems can be calculated with the so-called CNDOL approximate Hamiltonians, which have been developed by avoiding oversimplifications and using only a priori parameters and formulas from the simpler NDO methods. A new diagonal monoelectronic term, named CNDOL/21, shows great consistency and easier SCF convergence when used together with an appropriate function for charge repulsion energies derived from traditional formulas. It is possible to obtain reliable a priori molecular orbitals and electron excitation properties after configuration interaction of singly excited determinants, maintaining interpretative possibilities even though the Hamiltonian is simplified. Tests with some unequivocal gas-phase maxima of simple molecules (benzene, furfural, acetaldehyde, hexyl alcohol, methyl amine, 2,5-dimethyl-2,4-hexadiene, and ethyl sulfide) ratify the general quality of this approach in comparison with other methods. Calculations of large systems, such as porphine in the gas phase and a model of the complete retinal binding pocket in rhodopsin with 622 basis functions on 280 atoms at the quantum mechanical level, show reliability, yielding a first allowed transition at 483 nm, very close to the known experimental value of 500 nm for the "dark state." In this very important case, our model assigns a central role in this excitation to a charge transfer from the neighboring Glu(-) counterion to the retinaldehyde polyene chain. Tests with gas-phase maxima of some important molecules corroborate the reliability of CNDOL/2 Hamiltonians.
Abstract:
In this thesis, programmatic, application-layer means for better energy efficiency in the VoIP application domain are studied. The work concentrates on optimizations suitable for VoIP implementations utilizing SIP and IEEE 802.11 technologies. Because energy-saving optimizations can affect perceived call quality, energy-saving means are studied together with the factors affecting perceived call quality. The thesis gives a general view of the topic and, based on theory, proposes adaptive optimization schemes for dynamically controlling the application's operation. A runtime quality model, capable of being integrated into the optimization schemes, is developed for VoIP call quality estimation. Based on the proposed optimization schemes, power consumption measurements are made to determine the achievable advantages. The measurement results show that a reduction in power consumption can be achieved with the help of adaptive optimization schemes.
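The abstract does not specify the runtime quality model; a common basis for such estimators is the ITU-T G.107 E-model, which maps an R-factor to an estimated mean opinion score (MOS). The sketch below uses the standard R-to-MOS conversion, but the delay and loss penalty terms are simplified placeholders, not the thesis's model:

```python
def r_to_mos(r: float) -> float:
    """ITU-T G.107 mapping from E-model R-factor to estimated MOS."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

def estimate_mos(delay_ms: float, loss_pct: float) -> float:
    """Rough runtime MOS estimate from one-way delay and packet loss.

    Starts from the default maximum R (93.2) and subtracts a widely
    used simplified delay impairment; the linear loss penalty is a
    placeholder (it is codec-dependent in reality)."""
    r = 93.2
    r -= 0.024 * delay_ms + max(0.0, 0.11 * (delay_ms - 177.3))
    r -= 2.5 * loss_pct
    return r_to_mos(r)

print(f"MOS at 50 ms / 0% loss:  {estimate_mos(50, 0.0):.2f}")
print(f"MOS at 250 ms / 2% loss: {estimate_mos(250, 2.0):.2f}")
```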
Abstract:
The purpose of this dissertation is to analyse older consumers' adoption of information and communication technology (ICT) innovations, assess the effect of aging-related characteristics, and evaluate older consumers' willingness to apply these technologies in health care services. The topic is important because the population in Finland (as in other welfare states) is aging, which offers an opportunity for marketers but also threatens society with increasing health care costs. Innovation adoption has been studied from several aspects in both organizational and consumer research, and several theories have been developed to predict consumer responses to innovation. The present dissertation carefully reviews previous research and takes a closer look at the theory of planned behaviour, the technology acceptance model and the diffusion of innovations perspective. It is suggested here that these theories can be combined and complemented to predict the adoption of ICT innovations among aging consumers, taking aging-related personal characteristics into account. In fact, very few studies in innovation research have concentrated on aging consumers, so there was a clear need for the present research. ICT in the health care context has been studied mainly from the organizational point of view; if the technology is applied to communication between the individual end-user and the service provider, however, the end-user cannot be overlooked. The dissertation uses empirical evidence from a survey targeted at 55-79 year old people in one city in South Karelia. The empirical analysis of the research model was mainly based on structural equation modelling, which has been found very useful for estimating causal relationships. The tested models were designed to predict the adoption stage of personal computers and mobile phones, and the adoption intention of future health services that apply these devices for communication. The dissertation succeeded in modelling the adoption behaviour of mobile phones and PCs as well as the adoption intentions of future services. Perceived health status and three components behind it (depression, functional ability, and cognitive ability) were found to influence technology anxiety: better health leads to less anxiety. The effect of age was assessed as a control variable in order to compare it with the health characteristics; age influenced technology perceptions, but to a lesser extent than health. The analyses suggest that the major determinant of current technology adoption is perceived behavioural control, together with technology anxiety, which indirectly inhibits adoption through perceived control. For future service intentions, the key issue is perceived usefulness, which needs to be highlighted when new services are launched. Besides usefulness, the perceived reliability of the online service is important and affects intentions indirectly. To conclude, older consumers' adoption behaviour is influenced by health status and age, but also by perceptions of anxiety and behavioural control. Launching new types of health services for aging consumers is possible once the service is perceived as reliable and useful.
Abstract:
The level of health care in Russia is mostly still below western standards, but it has lately been developing quite positively. Many ICT solutions (telemedicine applications) have been developed for health care in Finland, but since the domestic market is so small, expansion to foreign markets is necessary to make Finnish R&D projects more profitable. Telemedicine applications are not yet widely used in Russia, but since the health care system is going through rapid changes, leapfrog effects can be expected and new modern applications and technologies will be implemented. This will open numerous business opportunities for Finnish technology developers. This thesis aims to provide a first evaluation of the market and an outlook on the health care system and the telemedicine applications already utilized in Russia. The results of this study can be used to focus further research ultimately aiming at technology implementation. The study showed that there is potential for many types of telemedicine solutions, e.g. electronic patient records and home monitoring systems, although further research in this field is still needed.