935 results for Enterprise application integration (Computer systems)


Relevance:

100.00%

Publisher:

Abstract:

A fire detection system based on a wireless sensor network whose nodes sample the ambient temperature and send the data to a PC.
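As a rough illustration of the scheme this abstract describes, the sketch below shows a node loop and a PC-side collector. The transport (UDP over IP), JSON payloads, addresses, the alarm threshold and the read_temperature() driver are all assumptions; the abstract does not specify them.

```python
# Minimal sketch: sensor nodes sample ambient temperature and report it to a PC,
# which raises a fire alarm above a threshold. Transport, payload format and the
# sensor driver are assumptions not stated in the abstract.
import json
import socket
import time

PC_ADDRESS = ("192.168.1.10", 5005)   # hypothetical collector address
SAMPLE_PERIOD_S = 5.0
ALARM_THRESHOLD_C = 60.0

def read_temperature() -> float:
    """Placeholder for the node's temperature sensor driver."""
    raise NotImplementedError

def sensor_node(node_id: str) -> None:
    """Periodically sample the ambient temperature and send it to the PC."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        reading = {"node": node_id, "temp_c": read_temperature(), "ts": time.time()}
        sock.sendto(json.dumps(reading).encode("utf-8"), PC_ADDRESS)
        time.sleep(SAMPLE_PERIOD_S)

def collector() -> None:
    """Run on the PC: receive readings and flag possible fires."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PC_ADDRESS[1]))
    while True:
        data, _ = sock.recvfrom(1024)
        reading = json.loads(data)
        if reading["temp_c"] >= ALARM_THRESHOLD_C:
            print(f"FIRE ALARM: node {reading['node']} reports {reading['temp_c']:.1f} °C")
```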

Relevance:

100.00%

Publisher:

Abstract:

The ultimate goal of this project is to define an interface that allows any embedded device capable of connecting to a data network to describe the actuators and sensors it provides and make them available to a user or to another system. As a proof of concept, a mobile-phone application will be designed and implemented that uses this interface to control any reachable device that implements the specification. A device that can be controlled through the proposed framework will also be designed and built.
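To make the idea concrete, here is a hypothetical sketch of the kind of self-description interface such a device might expose. The descriptor format, field names and dispatch logic are illustrative assumptions, not the specification designed in the project.

```python
# Illustrative sketch: an embedded device advertises its sensors and actuators in a
# machine-readable descriptor that a phone app (or another system) can query and act on.
# Descriptor layout and dispatch are hypothetical.
from typing import Any, Callable, Dict

DEVICE_DESCRIPTOR: Dict[str, Any] = {
    "device": "demo-board",
    "sensors": [{"name": "temperature", "unit": "celsius"}],
    "actuators": [{"name": "led", "commands": ["on", "off"]}],
}

# Hypothetical hardware bindings that the real firmware would provide.
SENSOR_READERS: Dict[str, Callable[[], float]] = {}
ACTUATOR_HANDLERS: Dict[str, Callable[[str], None]] = {}

def describe() -> Dict[str, Any]:
    """Return the device's self-description to a connecting client."""
    return DEVICE_DESCRIPTOR

def invoke(actuator: str, command: str) -> None:
    """Dispatch a command from a client to the named actuator."""
    spec = next((a for a in DEVICE_DESCRIPTOR["actuators"] if a["name"] == actuator), None)
    if spec is None or command not in spec["commands"]:
        raise ValueError(f"unsupported request: {actuator}/{command}")
    ACTUATOR_HANDLERS[actuator](command)
```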

Relevance:

100.00%

Publisher:

Abstract:

Introduction: Our goal was to examine the web content and the technical information that pest control services make available to users through their webpages. Method: A total of 70 webpages of biocide services in the province of Málaga (Spain) were analyzed. We used 15 evaluation indicators grouped into 5 parameters covering the service provider's data; the reliability of the information and services; the accuracy of content and writing style; technical resources; and interaction with users. Sectoral legislation, official records of products and deliveries, standards and technical guides were used as test instruments. Results: Companies showed a remarkable degree of awareness in the implementation and use of new technologies. Negative aspects that can undermine users' confidence were identified, relating to the reliability of the information and to deficiencies in the description of the service portfolio and the companies' credentials. The integration and use of collaborative 2.0 platforms was poorly developed and underexploited. Discussion: Users' trust can be improved by acting on the aspects that affect the reliability of the information provided on the web.

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: The skin patch test is the gold standard method for diagnosing contact allergy. Although it has been used for more than 100 years, the patch test procedure is performed with variability around the world. A number of factors can influence test results, namely the quality of the reagents used, the timing of the application, the patch test series (allergens/haptens) used for testing, the appropriate interpretation of the skin reactions, and the evaluation of the patient's benefit. METHODS: We performed an Internet-based survey with 38 questions covering the educational background of respondents, patch test methods and interpretation. The questionnaire was distributed among all representatives of national member societies of the World Allergy Organization (WAO) and the WAO Junior Members Group. RESULTS: One hundred sixty-nine completed surveys were received from 47 countries. The majority of participants had more than 5 years of clinical practice (61%) and routinely carried out patch tests (70%). Both allergists and dermatologists were responsible for carrying out the patch tests. We observed the use of many different guidelines regardless of geographical distribution. The use of home-made preparations was indicated by 47% of participants, and 73% of respondents performed 2 or 3 readings. Most respondents indicated having had patients with adverse reactions, including erythroderma (12%); however, only 30% completed a consent form before conducting the patch test. DISCUSSION: The heterogeneity of patch test practices may be influenced by the level of awareness of clinical guidelines, different training backgrounds, accessibility of various types of devices, the patch test series (allergens/haptens) used for testing, the type of clinical practice (public or private, clinical or research-based institution), infrastructure availability, financial/commercial implications and regulations, among other factors. CONCLUSION: There is a lack of worldwide homogeneity in patch test procedures, which raises concerns about the need for standardization and harmonization of this important diagnostic procedure.

Relevance:

100.00%

Publisher:

Abstract:

THESIS ABSTRACT: Low-temperature thermochronology relies on the application of radioisotopic systems whose closure temperatures are below the temperatures at which the dated phases are formed. In that sense, the results are interpreted as "cooling ages" in contrast to "formation ages". Owing to the low closure temperatures, it is possible to reconstruct exhumation and cooling paths of rocks during their residence at shallow levels of the crust, i.e. within the first ~10 km of depth. Processes occurring at these shallow depths, such as final exhumation, faulting and relief formation, are fundamental for the evolution of mountain belts. This thesis aims at reconstructing the tectono-thermal history of the Aar massif in the Central Swiss Alps by means of zircon (U-Th)/He, apatite (U-Th)/He and apatite fission track thermochronology. The strategy involved acquisition of a large number of samples from a wide range of elevations in the deeply incised Lötschen valley and the nearby NEAT tunnel. This unique location allowed us to precisely constrain the timing, amount and mechanisms of exhumation of the main orographic feature of the Central Alps, to evaluate the role of topography in the thermochronological record, and to test the impact of hydrothermal activity. Samples were collected from altitudes ranging between 650 and 3930 m and were grouped into five vertical profiles on the surface and one horizontal profile in the tunnel. Where possible, all three radiometric systems were applied to each sample. Zircon (U-Th)/He ages range from 5.1 to 9.4 Ma and are generally positively correlated with altitude. Age-elevation plots reveal a distinct break in slope, which translates into an exhumation rate increasing from ~0.4 to ~3 km/Ma at 6 Ma. This acceleration is independently confirmed by increased cooling rates on the order of 100°C/Ma, constrained from the age differences between the zircon (U-Th)/He ages and those of the remaining systems. Apatite fission track data also plot on a steep age-elevation curve, indicating rapid exhumation until the end of the Miocene. The 6 Ma event is interpreted as reflecting tectonically driven uplift of the Aar massif. The late Miocene timing implies that the increase of precipitation in the Pliocene did not trigger rapid exhumation in the Aar massif. The Messinian salinity crisis in the Mediterranean could not have directly intensified erosion of the Aar, but the associated erosional output from the entire Alps may have tapered the orogenic wedge and caused reactivation of thrusting in the Aar massif. The high exhumation rates in the Messinian were followed by a decrease to ~1.3 km/Ma, as evidenced by ~8 km of exhumation during the last 6 Ma. The slowing of exhumation is also apparent from apatite (U-Th)/He age-elevation data in the northern part of the Lötschen valley, where ages plot on a ~0.5 km/Ma line and range from 2.4 to 6.4 Ma. However, the apatite (U-Th)/He and fission track data from the NEAT tunnel indicate a perturbation of the record. The apatite ages are youngest under the axis of the valley, in contrast to the expected pattern, where they would be youngest in the deepest sections of the tunnel due to heat advection into the ridges. The valley, however, developed in relatively soft schists, while the ridges are built of solid granitoids. In line with hydrological observations from the tunnel, we suggest that the relatively permeable rocks under the valley floor served as conduits for geothermal fluids that caused reheating, leading to partial helium loss and fission track annealing in apatites.
In consequence, apatite ages from the lowermost samples are too young and the calculated exhumation rates may underestimate the true values. This study demonstrated that high-density sampling is indispensable to provide meaningful thermochronological data in the Alpine setting. The multi-system approach allows verifying the plausibility of the data and highlighting sources of perturbation.
THESIS SUMMARY: Low-temperature thermochronology relies on radiometric systems whose closure temperature is well below the crystallization temperature of the mineral. The results are therefore interpreted as cooling ages, which differ from the formation ages obtained with other dating systems. Thanks to the low closure temperatures, it is possible to reconstruct the cooling and exhumation paths of rocks during their residence in the shallow crust (down to ~10 km). The processes at play at these shallow depths, such as final exhumation, fracturing and faulting, and relief formation, are fundamental to the evolution of mountain belts. In recent years it has become clear that the thermochronological record in orogens can be influenced by relief and reset, after initial cooling, by heat advection linked to the circulation of geothermal fluids. The objective of this thesis is to reconstruct the tectono-thermal history of the Aar massif in the Central Swiss Alps using three thermochronometers: zircon (U-Th)/He, apatite (U-Th)/He and apatite fission tracks. To reach this objective, we collected a large number of samples from different altitudes in the deeply incised Lötschental and from the NEAT tunnel. This sampling strategy allowed us to precisely constrain the chronology, amounts and mechanisms of exhumation of this part of the Central Alps, to evaluate the role of topography on the thermochronological record, and to test the impact of hydrothermal activity on the geochronometers. Samples were taken at altitudes between 650 and 3930 m along five vertical profiles at the surface and one in the tunnel. Where possible, the three radiometric systems were applied to each sample. Zircon (U-Th)/He ages range from 5.1 to 9.4 Ma and correlate positively with altitude. Age-elevation plots show a clear break in slope, reflecting an increase in exhumation rate from 0.4 to 3 km/Ma at 6 Ma. This acceleration of exhumation is confirmed by cooling rates on the order of 100°C/Ma obtained from the differences between the zircon ages and those of the other geochronological systems. The apatite fission track data also indicate rapid exhumation until the end of the Miocene. We interpret this 6 Ma event as reflecting tectonic uplift of the Aar massif. Its late Miocene timing implies that the increase in precipitation in the Pliocene did not trigger this rapid exhumation of the Aar massif. The Messinian salinity crisis of the Mediterranean Sea could not have had a direct effect on the erosion of the Aar massif, but the erosion associated with this event may have reduced the Alpine orogenic wedge and caused reactivation of thrusting in the Aar massif. The rapid Miocene exhumation was followed by a decrease in exhumation rates over the last 6 Ma (down to 1.3 km/Ma). However, the apatite (U-Th)/He ages and the apatite fission track data from the tunnel samples record a perturbation of the pattern described above. The apatite ages are noticeably younger under the valley axis than the expected age profile; one would in fact expect younger ages under the deepest parts of the tunnel because of heat advection into the valley flanks. The valley is carved into schists, whereas its flanks consist of harder granitoids. In agreement with hydrological observations in the tunnel, we suggest that the high permeability of the rocks under the valley axis allowed the infiltration of geothermal fluids, which reheated the rocks. This reheating would have induced helium loss and fission track annealing in the apatites, resulting in rejuvenated apatite ages and an underestimation of exhumation rates under the valley axis. This study demonstrates the need for dense, precise sampling in order to obtain high-quality thermochronological data in the Alpine context. The multi-system approach allowed us to check the consistency of the acquired data and to identify possible sources of error in thermochronological studies.
SUMMARY FOR THE GENERAL PUBLIC: During an orogeny, rocks go through a cycle comprising subduction, deformation, metamorphism and, finally, a return to the surface (or exhumation). Exhumation results from deformation within the collision zone, which leads to shortening and thickening of the rock pile and translates into uplift of the rocks, creation of topography and erosion. Since erosion acts as a scraper on the upper part of the edifice, attempts have been made to correlate episodes of rapid exhumation with periods of intense erosion driven by climate change. Knowing precisely when and where these episodes occurred is of key importance for any reconstruction of the evolution of a mountain belt. These constraints are obtained by tracing changes in rock temperature through time, which gives the cooling rate. The moment at which the rocks cooled through a given temperature is constrained by radiometric dating techniques. These methods rely on the decay of radioactive isotopes, such as uranium and potassium, both abundant in the rocks of the Earth's crust. The products of this decay are not retained in the host minerals until the rock has cooled below a so-called 'closure' temperature, specific to each dating system. For example, the radioactive decay of uranium and thorium atoms produces helium atoms, which escape from a zircon crystal at temperatures above 200°C. By measuring the parent uranium content and the accumulated helium, and knowing the decay rate, it is possible to calculate when the sampled rock passed below 200°C. If the geothermal gradient is known, closure temperatures can be converted into depths (e.g. 200°C ≈ 7 km) and the cooling rate into an exhumation rate. Moreover, by dating vertically spaced samples with a radiometric system, the exhumation rate of the sampled section can be constrained directly from the age differences between neighbouring samples. In the Swiss Alps, the Aar massif forms a major orographic structure. With altitudes above 4000 m and a spectacular relief of more than 2000 m, the massif dominates the central part of the mountain belt. The rocks exposed at the surface today were buried more than 10 km deep 20 Ma ago, but the present topography of the Aar massif seems to have developed mainly through active uplift over the last few million years, that is, since the late Neogene. This period includes an abrupt climate change that affected Europe about 5 Ma ago and brought heavy precipitation, most likely increasing erosion and accelerating the exhumation of the Alps. In this study we used the zircon (U-Th)/He dating system, whose closure temperature of 200°C is low enough to characterize late Neogene/Pliocene exhumation. The samples come from the Lötschental and from the world's deepest railway tunnel (NEAT), located in the western part of the Aar massif. Taken together, these samples span 3000 m of elevation and ages from 5.1 to 9.4 Ma. The higher-altitude (and therefore older) samples document an exhumation rate of 0.4 km/Ma until 6 Ma ago, whereas the lowest samples have similar ages ranging from 6 to 5.4 Ma, yielding a rate of up to 3 km/Ma. These data show a dramatic acceleration of the exhumation of the Aar massif at 6 Ma. The late Miocene exhumation of the massif therefore predates the Pliocene climate change. However, during the salinity crisis of 6-5.3 Ma ago (Messinian), the level of the Mediterranean Sea dropped by 3 km. Such a lowering of the erosional base level may have accelerated the exhumation of the Alps, but the southern Alpine basin was too far from the Aar massif to influence its erosion. We conclude that (U-Th)/He dating makes it possible to precisely constrain the chronology and exhumation of the Aar massif. Regarding the tectonics-versus-erosion duality, we suggest that, in the case of the Aar massif, tectonics predominates.
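The quantitative reasoning in the summary can be written out in a few standard relations (textbook forms, not equations quoted from the thesis). Here the decay constants λ, the surface temperature T_s and the geothermal gradient value are illustrative assumptions.

```latex
% (U-Th)/He ingrowth equation: the cooling age t is solved from the measured
% parent (U, Th) and daughter (4He) contents (standard textbook form).
{}^{4}\mathrm{He} = 8\,{}^{238}\mathrm{U}\,(e^{\lambda_{238}t}-1)
                  + 7\,{}^{235}\mathrm{U}\,(e^{\lambda_{235}t}-1)
                  + 6\,{}^{232}\mathrm{Th}\,(e^{\lambda_{232}t}-1)

% Closure temperature converted to closure depth with an assumed geothermal
% gradient (the summary's "200 degC ~ 7 km"), using illustrative values.
z_c \;\approx\; \frac{T_c - T_s}{\partial T/\partial z}
    \;\approx\; \frac{200\,^{\circ}\mathrm{C} - 10\,^{\circ}\mathrm{C}}{27\,^{\circ}\mathrm{C\,km^{-1}}}
    \;\approx\; 7\ \mathrm{km}

% Apparent exhumation rate from the slope of the age-elevation relationship:
% neighbouring samples separated in elevation by \Delta z with age difference \Delta t.
\dot{e} \;\approx\; \frac{\Delta z}{\Delta t}
```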

Relevance:

100.00%

Publisher:

Abstract:

This report describes results from a study evaluating stringless paving using a combination of global positioning and laser technologies. CMI and Geologic Computer Systems developed this technology and successfully implemented it on earthmoving and grading construction projects; concrete paving is a new application area. Fred Carlson Co. agreed to test the stringless paving technology on two challenging concrete paving projects located in Washington County, Iowa, during the summer of 2003. The research team from Iowa State University monitored guidance and elevation conformance to the original design, employing a combination of physical depth checks, surface location and elevation surveys, concrete yield checks, and a physical survey of the control stakes and string line elevations. A final check on the profile of the pavement surface was made with the Iowa Department of Transportation Light Weight Surface Analyzer (LISA). Because of the speed of paving and the rapid changes in terrain, the laser technology was abandoned for this project, and control of guidance and elevation on the slip-form paver was moved entirely from string line to global positioning system (GPS) control. The evaluation was a success, and the results indicate that GPS control is feasible and approaches the desired goals of guidance and profile control when three-dimensional design models are used. Further enhancements are needed in the physical features of the slip-form paver's oil system controls and in the computer program for controlling elevation.
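The core idea of stringless grade control, replacing the string line with a lookup into the three-dimensional design model at the paver's current GPS position, can be sketched as follows. The data layout and interpolation are deliberate simplifications, not the CMI / Geologic Computer Systems control algorithm.

```python
# Hypothetical sketch: interpolate the design elevation at the paver's current
# station and output the elevation correction that a string line would otherwise provide.
from bisect import bisect_left
from typing import List, Tuple

# Design model reduced to (station_m, design_elevation_m) pairs along the alignment.
DesignProfile = List[Tuple[float, float]]

def design_elevation(profile: DesignProfile, station: float) -> float:
    """Linearly interpolate the design elevation at a given station."""
    stations = [s for s, _ in profile]
    i = bisect_left(stations, station)
    if i == 0:
        return profile[0][1]
    if i == len(profile):
        return profile[-1][1]
    (s0, z0), (s1, z1) = profile[i - 1], profile[i]
    return z0 + (z1 - z0) * (station - s0) / (s1 - s0)

def elevation_correction(profile: DesignProfile, station: float, measured_z: float) -> float:
    """Positive value -> raise the pan; negative -> lower it."""
    return design_elevation(profile, station) - measured_z

# Example: paver GPS reports station 125.0 m at elevation 231.48 m.
profile = [(100.0, 231.20), (150.0, 231.80)]
print(round(elevation_correction(profile, 125.0, 231.48), 3))  # 0.02 m
```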

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Maintaining therapeutic concentrations of drugs with a narrow therapeutic window is a complex task. Several computer systems have been designed to help doctors determine the optimum drug dosage. Significant improvements in health care could be achieved if computer advice improved health outcomes and could be implemented in routine practice in a cost-effective fashion. This is an updated version of an earlier Cochrane systematic review, by Walton et al., published in 2001. OBJECTIVES: To assess whether computerised advice on drug dosage has beneficial effects on the process or outcome of health care. SEARCH STRATEGY: We searched the Cochrane Effective Practice and Organisation of Care Group specialised register (June 1996 to December 2006), MEDLINE (1966 to December 2006) and EMBASE (1980 to December 2006), hand searched the journal Therapeutic Drug Monitoring (1979 to March 2007) and the Journal of the American Medical Informatics Association (1996 to March 2007), and checked reference lists from primary articles. SELECTION CRITERIA: Randomized controlled trials, controlled trials, controlled before-and-after studies and interrupted time series analyses of computerized advice on drug dosage were included. The participants were health professionals responsible for patient care. The outcomes were any objectively measured change in the behaviour of the health care provider (such as changes in the dose of drug used) and any change in the health of patients resulting from computerized advice (such as adverse reactions to drugs). DATA COLLECTION AND ANALYSIS: Two reviewers independently extracted data and assessed study quality. MAIN RESULTS: Twenty-six comparisons (23 articles) were included (compared to fifteen comparisons in the original review), covering a wide range of drugs in inpatient and outpatient settings. Interventions usually targeted doctors, although some studies attempted to influence prescriptions by pharmacists and nurses. Although all studies used reliable outcome measures, their quality was generally low. Computerized advice for drug dosage gave significant benefits by: (1) increasing the initial dose (standardised mean difference 1.12, 95% CI 0.33 to 1.92); (2) increasing serum concentrations (standardised mean difference 1.12, 95% CI 0.43 to 1.82); (3) reducing the time to therapeutic stabilisation (standardised mean difference -0.55, 95% CI -1.03 to -0.08); (4) reducing the risk of toxic drug levels (rate ratio 0.45, 95% CI 0.30 to 0.70); and (5) reducing the length of hospital stay (standardised mean difference -0.35, 95% CI -0.52 to -0.17). AUTHORS' CONCLUSIONS: This review suggests that computerized advice for drug dosage has some benefits: it increased the initial dose of drug, increased serum drug concentrations and led to more rapid therapeutic control. It also reduced the risk of toxic drug levels and the length of time spent in hospital. However, it had no effect on adverse reactions. In addition, there was no evidence that particular decision support technical features (such as integration into a computerized physician order entry system) or aspects of the organization of care (such as the setting) could optimise the effect of computerised advice.
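For reference, the standardised mean differences quoted above are the usual pooled-standard-deviation effect sizes, where the means, standard deviations and group sizes refer to the advice and control groups of each comparison:

```latex
% Standardised mean difference for a computerized-advice group vs. a control group;
% the review reports these as "standardised mean difference (95% CI)".
\mathrm{SMD} \;=\; \frac{\bar{x}_{\text{advice}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} \;=\; \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
```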

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: Before a patient can be connected to a mechanical ventilator, the controls of the apparatus need to be set appropriately. Today this is done by the intensive care professional. With the advent of closed-loop controlled mechanical ventilation, methods will be needed to select appropriate start-up settings automatically. The objective of our study was to test such a computerized method, which could eventually be used as a start-up procedure (first 5-10 minutes of ventilation) for closed-loop controlled ventilation. DESIGN: Prospective study. SETTINGS: ICUs in two adult hospitals and one children's hospital. PATIENTS: 25 critically ill adult patients (age ≥ 15 y) and 17 critically ill children, selected at random, were studied. INTERVENTIONS: To simulate 'initial connection', the patients were disconnected from their ventilator and transiently connected to a modified Hamilton AMADEUS ventilator for at most one minute. During that time they were ventilated with a fixed and standardized breath pattern (Test Breaths) based on pressure-controlled synchronized intermittent mandatory ventilation (PCSIMV). MEASUREMENTS AND MAIN RESULTS: Airway flow, airway pressure and instantaneous CO2 concentration (using a mainstream CO2 analyzer) were measured at the mouth during application of the Test Breaths. Test Breaths were analyzed in terms of tidal volume, expiratory time constant and series dead space. From these data an initial ventilation pattern consisting of respiratory frequency and tidal volume was calculated. This ventilation pattern was compared with the one measured prior to the onset of the study using a two-tailed paired t-test, and additionally with a conventional method for setting up ventilators. The computer-proposed ventilation pattern did not differ significantly from the actual pattern (p > 0.05), while the conventional method did. However, the scatter was large, and in 6 cases deviations in minute ventilation of more than 50% were observed. CONCLUSIONS: The analysis of standardized Test Breaths allows automatic determination of an initial ventilation pattern for intubated ICU patients. While this pattern does not seem to be superior to the one chosen by the conventional method, it is derived fully automatically and without the need for manual patient data entry such as weight or height. This makes the method potentially useful as a start-up procedure for closed-loop controlled ventilation.
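A highly simplified, hypothetical sketch of this kind of rule-based start-up calculation is shown below. The scaling constants and the use of series dead space as a patient-size proxy are illustrative assumptions, not the algorithm used in the study.

```python
# Hypothetical sketch: derive an initial ventilation pattern (rate, tidal volume)
# from quantities measured during the standardized Test Breaths, with no manual
# patient data entry. Constants and rules are illustrative only.

def initial_pattern(tidal_volume_ml: float,
                    exp_time_constant_s: float,
                    series_dead_space_ml: float) -> dict:
    """Propose frequency (breaths/min) and tidal volume (ml) from Test Breath analysis."""
    # Assumption: series dead space scales with patient size, so use it to set
    # a target alveolar minute ventilation (purely illustrative scaling).
    target_alv_minute_vent_ml = 60.0 * series_dead_space_ml          # e.g. 150 ml -> 9 l/min
    # Assumption: keep the measured tidal volume, but allow at least three
    # expiratory time constants per breath to avoid gas trapping.
    max_rate = 60.0 / (4.0 * exp_time_constant_s)                    # inspiratory time ~ 1 tau
    rate = min(target_alv_minute_vent_ml / max(tidal_volume_ml - series_dead_space_ml, 1.0),
               max_rate)
    return {"rate_per_min": round(rate, 1), "tidal_volume_ml": round(tidal_volume_ml)}

# Example with plausible adult values: VT 500 ml, tau 0.7 s, dead space 150 ml.
print(initial_pattern(500.0, 0.7, 150.0))
```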

Relevance:

100.00%

Publisher:

Abstract:

The motivation for this research originated in the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications because they cost significantly less than their predecessors, the mainframes. Industrial automation later developed its own vertically integrated hardware and software to address the application needs of uninterrupted operation, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption system-on-chip (SoC) architecture. Unlike CISC processors, RISC processor architecture is a separate industry from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choice through hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other kind of computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominant closed operating-system platforms, into steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, together with the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions that SoC and related software platforms caused in the ICT industries.
Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability on a very narrow customer base, thanks to strong technology-enabled customer lock-in and customers' high risk exposure, since production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately the Internet of Things (IoT) and Weightless networks, a new standard leveraging frequency channels formerly occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will build for the industrial automation market to face, in due course, an architecture disruption driven by these new trends. This thesis examines the current state of industrial automation and the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second by studying process re-engineering in the case of global software support for complex systems. Third, we investigate the views of the industry's actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and conclude with our assessment of the possible routes along which industrial automation could advance, taking into account the looming rise of the Internet of Things (IoT) and Weightless networks. Industrial automation is an industry dominated by a handful of global players, each focused on maintaining its own proprietary solutions. The rise of de facto standards such as the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, created the new markets of personal computers, smartphones and tablets, and will eventually also affect industrial automation through game-changing commoditization and the related changes in control points and business models. This trend will inevitably continue, but the transition to commoditized industrial automation will not happen in the near future.

Relevance:

100.00%

Publisher:

Abstract:

Systems based on service-oriented architecture can be developed using several alternative technologies. The technologies best suited for implementation are the various collections of standards that support combining systems with differing implementations through standards-based interfaces; such collections therefore support the development of large service systems composed of several separate parts. This thesis examines what service-oriented architecture is and what components and technologies are needed to implement a service system based on it. The goal is to present service-oriented architecture and the related technologies and to design, at a high level, an integration platform that uses these technologies to connect users with several service providers. Because the resulting report is a technology survey, the work does not design the system to be implemented in detail; it only lays the groundwork for the system design and presents the related architectural and implementation options. An Internet-based system built on service-oriented architecture can rely on direct network connections or, alternatively, on various types of middleware. Depending on their type, middleware applications make it possible to implement various additional system features and support several of the most common service-oriented technologies. Based on the technology survey, the Web services architecture and its related methods are best suited as building blocks of the system to be designed. The detailed implementation will depend on detailed requirements to be defined later and on the chosen middleware implementation.

Relevance:

100.00%

Publisher:

Abstract:

With the growth of electronic commerce, the need to integrate the information systems of independent companies has multiplied in recent years. Companies have realized that a comprehensive integration solution aimed at automating the order-delivery chain can yield substantial cost savings and revenue growth. As a rule, however, companies proceed more slowly, first integrating smaller functions of their business information systems. Encouraged by positive experiences, they are then ready to extend the automation of electronic commerce to other functions as well. This thesis focuses on examining different approaches to implementing business-to-business integration and on analyzing the business and technical implications of the different options. The work was carried out in cooperation with UPM-Kymmene Wood Oy, whose goal was to obtain a thorough understanding of business-to-business integration and to deepen its knowledge of how third parties offering integration services operate, of the services they provide, and of the applicability of those services to a company operating in the wood products industry. The practical part presents in more detail the results of the research carried out on the basis of meetings with operators offering integration services and of the material they supplied, including detailed descriptions of the services that enable business-to-business integration.

Relevance:

100.00%

Publisher:

Abstract:

This thesis examines the role of an enterprise portal in an organization's knowledge management. To address the research problem, a framework is created in which the theories of enterprise portals and knowledge management are linked. In the empirical part of the work, the framework serves as the basis for an enterprise portal built for the case company. The qualitative study comprises a theoretical part and an empirical part based on participatory case research. The body of the work is formed by a dialogue between two opposing schools of knowledge management: the view grounded in information technology and the view grounded in strategic management. A workable knowledge management model must include both aspects. Every organization needs functionality for managing information, so managing explicit knowledge with information systems is a cornerstone of successful knowledge management. This basic infrastructure can be extended with knowledge management methods based on managing tacit knowledge. The solution proposed in this work for integrating these two views, the 'hard' information-technology emphasis and the 'soft' human perspective, is the enterprise portal. The enterprise portal framework used in the work is built on three main functionalities: content management, collaboration features and business intelligence. The work demonstrates the connection between the framework and basic knowledge management models such as the knowledge management process model and knowledge environments. The enterprise portal can thus serve not only in implementing individual knowledge management tools but also in creating a knowledge management strategy, providing a platform or 'catalyst' for comprehensive knowledge management.

Relevance:

100.00%

Publisher:

Abstract:

This thesis introduces the Web Services concept and application integration, and implements support for the Web Services architecture in a service that transfers data from an ERP system to the company's customer. The task of the service is to handle XML-based messaging between the company and its business partners. The theoretical part of the work covers application integration and its subfields, the Web Services concept and standards, and competing technologies. In the practical part, support for SOAP messages, together with WSDL descriptions of the services, is implemented in a Java-based system that sends a paper-industry company's order and delivery data as XML. The work examines the suitability of the Web Services architecture for the ERP system and the ease of its implementation. The conclusion is that the Web Services architecture is an interesting technology that facilitates integration in many ways. However, because of gaps in the architecture's standards and their early versions, the technology is found to be in many respects insufficient and too immature for integrating companies' critical systems.
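As a rough illustration of the messaging described here (the real implementation is Java-based and driven by WSDL documents not shown in the abstract), the following sketch wraps an XML order document in a SOAP 1.1 envelope and posts it to a partner endpoint. The endpoint URL, SOAP action and payload schema are hypothetical.

```python
# Minimal sketch of SOAP messaging for an order/delivery document.
# Endpoint, action and payload fields are hypothetical placeholders.
import urllib.request

SOAP_ENDPOINT = "https://partner.example.com/orders"   # hypothetical
SOAP_ACTION = "urn:example:SubmitOrder"                  # hypothetical

def soap_envelope(order_xml: str) -> bytes:
    """Wrap an XML business document in a SOAP 1.1 envelope."""
    envelope = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soapenv:Body>" + order_xml + "</soapenv:Body>"
        "</soapenv:Envelope>"
    )
    return envelope.encode("utf-8")

def submit_order(order_xml: str) -> str:
    """POST the SOAP message to the partner endpoint and return the response body."""
    request = urllib.request.Request(
        SOAP_ENDPOINT,
        data=soap_envelope(order_xml),
        headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": SOAP_ACTION},
    )
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8")

# Example payload (hypothetical schema); submit_order(order) would send it.
order = '<Order xmlns="urn:example:orders"><Id>12345</Id><Product>plywood</Product></Order>'
```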

Relevance:

100.00%

Publisher:

Abstract:

Software security has recently taken on an increasingly important role. Software must be designed from the outset with security in mind. Ease of use should not take precedence over security, nor should failure to read the instructions lead to a loss of security. An important part of software security is also the legal use of the software; how illegal use is prevented, however, is very difficult to address in current systems. The purpose of this work was to examine the Intellitel Communications Oy messaging gateway, Intellitel Messaging Gateway, from the perspective of product security, to find any flaws in it, and to fix them.

Relevance:

100.00%

Publisher:

Abstract:

The advent of the Internet had a great impact on distance education, and e-learning has rapidly become a killer application. Education institutions worldwide are taking advantage of the available technology in order to bring education to a growing audience. Every day, more and more people use e-learning systems, environments and contents for both training and learning. E-learning promotes education among people who, for various reasons, could not otherwise have access to it: people who cannot travel, people with very little free time, people with disabilities, and so on. As e-learning systems grow and more people access them, it is necessary, when designing virtual environments, to consider the diverse needs and characteristics of different users. This allows building systems that people can use easily, efficiently and effectively, where the learning process leads to a good user experience and becomes a good learning experience.