56 results for J900 Others in Technology
Abstract:
Belt conveyors are used, among other places, in power plant and pulp mill environments to transport solid fuel or wood chips over long distances. Long belt conveyors are usually installed inside conveyor bridges built from steel trusses. The conveyor bridge thus serves as the frame for the belt conveyor and the walkway, and it protects the conveyor and the transported material from the weather. In this master's thesis, a design tool was developed for rapidly dimensioning the members of a truss-structured belt conveyor bridge and determining the mass of the structure. The calculation program is implemented as a Microsoft Excel spreadsheet. The cross-section of the bridge can be a closed symmetric section or an open section with a cantilevered walkway. The truss members are RHS tubes. The structure can be subjected to distributed loads, point loads, and earthquake loads derived from these on the basis of an equivalent static force. The thesis reviews the theories applied in the design tool. The calculation of internal forces is based on solving the bridge blocks as simply supported beams and on the beam analogy, in which the member forces of the truss are solved by treating the truss as a beam. The calculation of member resistances is based on the standard SFS-EN 1993-1-1, which is part of the Eurocode structural design standards. In addition, the thesis covers the design of truss joints according to SFS-EN 1993-1-8. The functionality of the design tool was tested by analysing an example bridge both with the developed design template and with the Autodesk Robot Professional 2013 FE analysis software. Based on the results, the design tool can be used at least for quick checks and self-weight estimation at the tender stage, but also for final design if a conservative design is accepted.
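The beam analogy mentioned above can be sketched in a few lines: a simply supported truss span under a uniform load is treated as a beam, the chord forces follow from the bending moment divided by the truss height, and the diagonal forces from the shear force divided by the sine of the diagonal inclination. A minimal illustration, with all load values and dimensions invented for the example (not taken from the thesis tool):

```python
import math

def beam_analogy_forces(q, L, h, theta_deg, x):
    """Member forces of a simply supported truss span treated as a beam.

    q: uniform line load (kN/m), L: span (m), h: truss height (m),
    theta_deg: inclination of the diagonals from the chord (degrees),
    x: position along the span (m).
    """
    V = q * L / 2 - q * x              # shear force at x
    M = q * L * x / 2 - q * x ** 2 / 2  # bending moment at x
    chord_force = M / h                 # axial force in the chords
    diagonal_force = V / math.sin(math.radians(theta_deg))
    return chord_force, diagonal_force

# At midspan the moment peaks at q*L^2/8 and the shear is zero:
chord, diag = beam_analogy_forces(q=5.0, L=24.0, h=3.0, theta_deg=45.0, x=12.0)
# chord == 120.0 kN, diag == 0.0
```

A real tool would of course evaluate these forces at every panel point and check each member against the Eurocode resistance formulas; the sketch only shows the analogy itself.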
Abstract:
The study examines the role of cultural heritage as a component of a family company's corporate culture, identity and corporate image, and how historical consciousness guides the company's operations from the perspective of local impact. At the same time, the study investigates how the expansion and globalisation of business affect operations at the local level and how the company uses its cultural heritage in managing change. The study is a historical case study. The example case is A. Ahlström Osakeyhtiö and the Noormarkku ironworks in Western Finland, which has been owned by the company since the 1870s. Extending from the end of the 19th century to the 2000s, the study is based mainly on archival sources. It combines the approaches of cultural heritage studies, business research and historical research, and in examining local-level activity it also takes on features of research into factory communities. In this study, the family company's cultural heritage is taken to include, in addition to tangible cultural heritage and representations of the past, the traditions related to the company's continuity, ownership, home locality and corporate social responsibility. The study shows that history and traditions can be a resource for a company but also a hindrance. The family-company tradition has endured despite business and organisational changes, but the role of the family within the company has changed: family management has given way to family ownership. Ahlström is a typical example of how companies use history to emphasise the continuity, reliability, credibility and innovativeness of their operations. The company has built its historical corporate image through, among other things, museums, exhibitions and publications. The company's relationship with its historical home locality, Noormarkku, has been exceptionally close. The Noormarkku ironworks is an example of how places connected with a company's past can become symbolic sites of memory.
Historical consciousness creates and maintains local responsibility, which has manifested itself in different ways in different eras. The functions of the ironworks have been redeveloped as old activities have come to an end. In the 2000s, the company has exploited its cultural heritage as a business resource, with a view to developing local business and tourism. In international research, the active use of history in a company's strategic planning is described by the concept of history management. In Finland, the topic has so far received little study. The study is thus an opening for discussions on companies as preservers and users of cultural heritage.
Abstract:
The future of privacy in the information age is a highly debated topic. In particular, new and emerging technologies such as ICTs and cognitive technologies are seen as threats to privacy. This thesis explores images of the future of privacy among non-experts within the time frame from the present until the year 2050. The aims of the study are to conceptualise privacy as a social and dynamic phenomenon, to understand how privacy is conceptualised among citizens and to analyse ideal-typical images of the future of privacy using the causal layered analysis method. The theoretical background of the thesis combines critical futures studies and critical realism, and the empirical material is drawn from three focus group sessions held in spring 2012 as part of the PRACTIS project. From a critical realist perspective, privacy is conceptualised as a social institution which creates and maintains boundaries between normative circles and preserves the social freedom of individuals. Privacy changes when actors with particular interests engage in technology-enabled practices which challenge current privacy norms. The thesis adopts a position of technological realism as opposed to determinism or neutralism. In the empirical part, the focus group participants are divided into four clusters based on differences in privacy conceptions and perceived threats and solutions. The clusters are fundamentalists, pragmatists, individualists and collectivists. Correspondingly, four ideal-typical images of the future are composed: ‘drift to low privacy’, ‘continuity and benign evolution’, ‘privatised privacy and an uncertain future’, and ‘responsible future or moral decline’. The images are analysed using the four layers of causal layered analysis: litany, system, worldview and myth. Each image has its strengths and weaknesses. The individualistic images tend to be fatalistic in character while the collectivistic images are somewhat utopian. 
In addition, the images have two common weaknesses: lack of recognition of ongoing developments and simplistic conceptions of privacy based on a dichotomy between the individual and society. The thesis argues for a dialectical understanding of futures as present images of the future and as outcomes of real processes and mechanisms. The first steps in promoting desirable futures are the awareness of privacy as a social institution, the awareness of current images of the future, including their assumptions and weaknesses, and an attitude of responsibility where futures are seen as the consequences of present choices.
Abstract:
In a market where companies similar in size and resources compete, it is challenging to gain any advantage over the others. In order to stay afloat, a company needs the capability to perform with fewer resources and yet provide better service. The development of efficient processes that cut costs and improve performance is therefore crucial. As the business expands, processes become complicated, and large amounts of data need to be managed and available on request. Companies use different tools to store and manage data and thereby support production and transactions. In the modern business world, the most widely used tool for this purpose is the ERP (Enterprise Resource Planning) system. The focus of this research is on how competitive advantage can be achieved by implementing a proprietary ERP system: a system created in-house and tailor-made to match and align with the company's business needs and processes. The market is full of ERP software, but choosing the right product is a big challenge. Identifying the key features that need improvement in processes and data management, choosing the right ERP, implementing it, and the follow-up form a long and expensive journey that companies undergo. Some companies prefer to invest in a ready-made package bought from a vendor and adjust it to their own business needs, while others focus on creating their own system with in-house IT capabilities. This research uses a case company, and the author tries to identify and analyse why the organisation in question decided to pursue the development of a proprietary ERP system, how it was implemented, and whether it has been successful. The main conclusion and recommendation of this research is that companies should know their core capabilities and constraints before choosing and implementing an ERP system. Knowledge of the factors that affect the outcome of a system change is important for making the right decisions at the strategic level and implementing them at the operational level.
The project in the case company has lasted longer than anticipated. However, it has been reported that projects are often delayed and completed over budget also when a ready-made product is bought from a vendor. Overall, the case company's implementation of its proprietary ERP has been successful, judging both by business performance figures and by how employees use the system. In terms of future research, a study that statistically compares the ROI of the two approaches, buying a ready-made product versus creating one's own ERP, would be beneficial.
Abstract:
In this thesis, the suitability of different trackers for finger tracking in high-speed videos was studied. The tracked finger trajectories were post-processed and analysed using various filtering and smoothing methods, and the position derivatives of the trajectories, speed and acceleration, were extracted for the purposes of hand motion analysis. Overall, two methods, Kernelized Correlation Filters and Spatio-Temporal Context Learning tracking, performed better than the others in the tests. Both achieved high accuracy on the selected high-speed videos and also allowed real-time processing, being able to process over 500 frames per second. In addition, the results showed that different filtering methods can be applied to produce more appropriate velocity and acceleration curves from the tracking data; Local Regression filtering and the Unscented Kalman Smoother gave the best results in the tests. Overall, the results show that the tracking and filtering methods studied are suitable for high-speed hand tracking and trajectory-data post-processing.
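The post-processing step described above, smoothing a tracked trajectory and differentiating it into speed and acceleration, can be sketched with a simple moving average and central finite differences. This is a deliberately simplified stand-in: the thesis itself uses more sophisticated methods such as Local Regression filtering and the Unscented Kalman Smoother, and the trajectory below is synthetic.

```python
def moving_average(xs, window=5):
    """Simple box filter; a stand-in for the smoothers used in practice."""
    half = window // 2
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - half), min(len(xs), i + half + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

def derivative(xs, dt):
    """Central finite differences, one-sided at the endpoints."""
    n = len(xs)
    out = []
    for i in range(n):
        if i == 0:
            out.append((xs[1] - xs[0]) / dt)
        elif i == n - 1:
            out.append((xs[-1] - xs[-2]) / dt)
        else:
            out.append((xs[i + 1] - xs[i - 1]) / (2 * dt))
    return out

# A 500 fps high-speed video gives dt = 1/500 s between frames.
dt = 1.0 / 500.0
positions = [0.002 * i for i in range(100)]  # synthetic uniform motion, 1 m/s
speed = derivative(moving_average(positions), dt)   # ~1.0 m/s in the interior
accel = derivative(speed, dt)                       # ~0.0 m/s^2 in the interior
```

On real tracking data the smoothing step matters much more than here: differentiation amplifies measurement noise, which is why the thesis compares dedicated filters for the velocity and acceleration curves.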
Abstract:
Advances in technology have provided new ways of using entertainment and game technology to foster human interaction. Games and playing have always been an important part of people’s everyday lives. Traditionally, human-computer interaction (HCI) research was seen as a psychological cognitive science focused on human factors, with engineering sciences as the computer science part of it. Although cognitive science has made significant progress over the past decade, the influence of people’s emotions on design is increasingly important, especially when the primary goal is to challenge and entertain users (Norman 2002). Game developers have explored the key issues in game design and identified that the driving force in the success of games is user experience. User-centered design integrates knowledge of users’ activity practices, needs, and preferences into the design process. Geocaching is a location-based treasure hunt game created by a community of players. Players use GPS (Global Positioning System) technology to find “treasures” and create their own geocaches; the game evolves as players invent new caches and use their imagination in creating them. This doctoral dissertation explores the user experience of geocaching and its applications in tourism and education. Globally, based on the Geocaching.com webpage, geocaching has been played in about 180 countries and there are more than 10 million registered geocachers worldwide (Geocaching.com, 25.11.2014). This dissertation develops and presents an interaction model called the GameFlow Experience model that can be used to support the design of treasure hunt applications in tourism and education contexts. The GameFlow Model presents and clarifies various experiences; it places such experiences in a real-life context, offers desirable design targets to be utilized in service design, and offers a perspective to consider when evaluating the success of adventure game concepts.
User-centered game design has adapted human-factors research from mainstream computing science. For many years, the user-centered design approach has been an important research field in software development. Research has focused on user-centered design in software development such as office programs, but the same ideas and theories that reflect the needs of user-centered research are now also being applied to game design (Charles et al. 2005). For several years, we have seen a growing interest in user experience design. Digital games are experience providers, and game developers need tools to better understand the user experience related to the products and services they have created. This thesis aims to present what the user experience is in geocaching and treasure hunt games and how it can be used to develop new treasure hunt concepts. Engineers, designers, and researchers should have a clear understanding of what user experience is, what its parts are, and most importantly, how user satisfaction can be influenced. In addition, we need to understand how users interact with electronic products and with people, and how different elements combine to shape their experiences. This doctoral dissertation represents pioneering work on the user experience of geocaching and treasure hunt games in the context of tourism and education. The research also provides a model for game developers who are planning treasure hunt concepts.
Abstract:
The financial industry has recently encountered many changes in its business environment. Increased regulation together with growing competition is forcing commercial banks to rethink their business models. In order to maintain profitability in the new environment, banks are focusing more on activities that yield noninterest income, a shift away from the traditional intermediation function of banks. This study asks whether the shift from traditional income-yielding activities to more innovative noninterest activities is justified in terms of profitability and risk in the Nordics. It also asks whether diversification within the noninterest income categories has an impact on profitability and risk, and whether certain categories of noninterest income are better than others in these terms. The results show that diversification between interest and noninterest activities and an increase in the share of noninterest income have a negative impact on risk-adjusted returns and on the risk profile. They also show that further diversification within the noninterest income categories has a negative impact on risk-adjusted profitability and risk, while an increase in the share of the commission and fee income category of total noninterest income has a positive impact. The results are consistent and in line with previous research (De Young & Roland, 2001; Stiroh, 2004). They provide useful information to banks and help them better evaluate the outcomes of different income diversification strategies.
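Studies of income diversification of this kind commonly operationalise diversification with a Herfindahl-type concentration index over income shares, and risk-adjusted profitability with a Sharpe-style ratio (mean return over its standard deviation). A minimal sketch of both measures; the income categories and figures below are invented for illustration, not the study's data:

```python
from statistics import mean, stdev

def diversification(shares):
    """1 minus the Herfindahl index of income shares:
    0 = fully concentrated, approaching 1 - 1/n for an even n-way split."""
    total = sum(shares)
    return 1.0 - sum((s / total) ** 2 for s in shares)

def risk_adjusted_return(returns):
    """Sharpe-style ratio: mean return over its sample standard deviation."""
    return mean(returns) / stdev(returns)

# Illustrative split over, e.g., net interest, commission/fee and trading income:
even_split = diversification([1.0, 1.0, 1.0])        # 1 - 3*(1/3)^2 = 2/3
concentrated = diversification([0.9, 0.05, 0.05])    # much lower
roa_series = [0.010, 0.012, 0.008, 0.011, 0.009]     # illustrative yearly ROA
rar = risk_adjusted_return(roa_series)
```

Comparing such ratios across banks with different income mixes is essentially the regression setup used in this strand of literature.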
Abstract:
This bachelor's thesis examines how the business-cycle fluctuations caused by the financial crisis that escalated in 2008 affected the performance of technology companies listed on Nasdaq OMX Helsinki. The main objective of the thesis is to determine which kinds of listed technology companies the financial crisis affected the most. The analysis was carried out with quantitative methods, using cluster analysis and the Kruskal-Wallis test.
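The Kruskal-Wallis test used in the analysis compares more than two groups on ranks rather than raw values, which suits performance figures that need not be normally distributed. A minimal standard-library sketch of the H statistic with tie-averaged ranks (in practice one would use scipy.stats.kruskal; the three groups below are made-up performance figures, not the thesis data):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic, using average ranks for tied values."""
    pooled = sorted(v for g in groups for v in g)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of 1-based ranks i+1..j
        i = j
    n = len(pooled)
    h = 0.0
    for g in groups:
        rank_sum = sum(ranks[v] for v in g)
        h += rank_sum ** 2 / len(g)
    return 12.0 / (n * (n + 1)) * h - 3.0 * (n + 1)

# Three hypothetical clusters of firms, e.g. ROA (%) during the crisis:
h = kruskal_wallis_h([1.2, 1.5, 1.1], [3.4, 3.1, 3.8], [2.0, 2.2, 1.9])
# h == 7.2, compared against a chi-squared distribution with k-1 = 2 dof
```

A large H (relative to the chi-squared critical value) indicates that at least one cluster's performance distribution differs from the others, which is exactly the question the thesis poses about its clusters.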
Abstract:
The vast majority of people in contemporary society own a mobile phone, which has led to a dramatic rise in the number of networked computers in recent years. Security issues have followed the same trend, and nearly everyone is now affected by them. How could the situation be improved? For software engineers, an obvious answer is to build computer software with security in mind. A problem with this is how to define secure software, or how to measure security. This thesis divides the problem into three research questions. First, how can we measure the security of software? Second, what types of tools are available for measuring security? And finally, what do these tools reveal about the security of software? Measuring tools of this kind are commonly called metrics. The thesis takes the perspective of software engineers in the software design phase. The focus on the design phase means that code-level semantics and programming-language specifics are not discussed in this work; organisational policy, management issues and the software development process are also out of scope. The first two research problems were studied through a literature review, the third through a case study. The target of the case study was a Java-based email server, Apache James, whose changelog, security issue reports and source code were available. The research revealed that there is a consensus on the terminology of software security. Security verification activities are commonly divided into evaluation and assurance. The focus of this work was on assurance, which means verifying one's own work. There are 34 metrics available for security measurement, of which five are evaluation metrics and 29 are assurance metrics. We found, however, that the general quality of these metrics was not good.
Only three metrics in the design category passed the inspection criteria and could be used in the case study. The metrics claim to give quantitative information on the security of software, but in practice they were limited to comparing different versions of the same software. Apart from being merely relative, the metrics were unable to detect security issues or point out problems in the design, and interpreting their results was difficult. In conclusion, the general state of software security metrics leaves a lot to be desired. The metrics studied had both theoretical and practical issues and are not suitable for daily engineering workflows. They do, however, provide a basis for further research, since they point out the areas where security metrics need to improve if verification of security at the design stage is desired.
Abstract:
This bachelor's thesis deals with the analysis of the financial statements of professional service firms using the techniques of financial competitor analysis. The aim of the thesis was to examine how six professional service firms sought to improve or maintain their market position in the years 2011–2014. Financial statement analysis was used as the research method. The thesis is a financial competitor analysis commissioned by a case company. The theoretical part of the study deals with financial competitor analysis as part of strategic management accounting; the theoretical framework is built around the techniques of financial competitor analysis and the empirical studies concerning them. The empirical part was divided into three sections, the first two of which screened the set of companies defined by the case company using a few financial ratios. The main focus of the analysis was the financial competitor analysis carried out on six companies. Based on the theory, cost structures, competitive position and financial statement information were chosen as the objects of the financial competitor analysis. The period under review was interesting: in 2011–2013 the trend in revenues was steeply declining, but in 2014 the general picture turned upward. In the more detailed analysis, ratios of growth and profitability stood out in the conclusions. The results showed that the companies that best survived the collapse in revenues had a broad earnings base and a flexible cost structure. The companies that best maintained their profitability had made substantial savings in personnel costs, which accounted for the largest share of costs in the whole data set. The companies with the strongest solvency did not make drastic cuts to their cost structures but tolerated weak profitability longer than the others.
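Screening companies by financial ratios, as described above, boils down to a few standard ratios computed from financial statement lines. A hedged sketch of the kind of ratios involved (profitability, cost structure, solvency); the figures and the firm are invented for illustration, not taken from the thesis data:

```python
def operating_margin(operating_profit, revenue):
    """Profitability: operating profit as a share of revenue."""
    return operating_profit / revenue

def personnel_cost_ratio(personnel_costs, revenue):
    """Cost structure: personnel costs as a share of revenue; the thesis
    found personnel costs to be the largest cost item in its data."""
    return personnel_costs / revenue

def equity_ratio(equity, total_assets):
    """Solvency: the share of assets financed with equity."""
    return equity / total_assets

# A hypothetical professional service firm (figures in kEUR):
revenue, op_profit = 4_000.0, 320.0
personnel, equity, assets = 2_600.0, 900.0, 2_250.0
ratios = {
    "operating_margin": operating_margin(op_profit, revenue),          # 0.08
    "personnel_cost_ratio": personnel_cost_ratio(personnel, revenue),  # 0.65
    "equity_ratio": equity_ratio(equity, assets),                      # 0.40
}
```

Computing such ratios for each firm and year, and then comparing their levels and trends across competitors, is the core of the screening and competitor-analysis steps the abstract describes.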
Abstract:
The study focuses on five lower secondary school pupils’ daily use of their one-to-one computers, the overall aim being to investigate literacy in this form of computing. Theoretically, the study is rooted in the New Literacy tradition with an ecological perspective, in combination with socio-semiotic theory in a multimodal perspective. New Literacy in the ecological perspective focuses on literacy practices and place/space and on the links between them. Literacy is viewed as socially based, in specific situations and in recurring social practices. Socio-semiotic theory embodying the multimodal perspective is used for the text analysis. The methodology is known as socio-semiotic ethnography. The ethnographic methods encompass just over two years of fieldwork with participant observation of the five participants’ computing activities at home, at school and elsewhere. The participants, one boy and two girls from the Blue (Anemone) School and two girls from the White (Anemone) School, were chosen to reflect a broad spectrum in terms of sociocultural and socioeconomic background. The study shows the existence of variation, both broad and deep, in the way digital literacy features in the participants’ one-to-one computing. These variations are associated with experience in relation to the home, the living environment, place, personal qualities and school. The more varied computer usage of the Blue School participants is connected with the interests they developed in their homes and living environments and with the computing practices undertaken in school. Their more varied usage of the computer is reflected in their broader digital literacy repertoires and their greater number and variety of digital literacy abilities. The Blue School participants’ text production is more multifaceted, covers a wider range of subjects and displays a broader palette of semiotic resources.
It also combines more text types, and the texts are generally longer than those of the White School participants. The Blue School girls have developed a text culture that is close to that of the school. In their case, there is clear linkage between school-initiated and self-initiated computing activities, while the other participants do not have the same opportunities to link and integrate self-initiated computing activities into the school context. It also becomes clear that the Blue School girls can relate and adapt their texts to different communicative practices and recipients. In addition, the study shows that the Blue School girls have some degree of scope in their school practice as a result of incorporating into it certain communicative practices that they have developed in non-school contexts. Quite contrary to the hopes expressed that one-to-one computing would reduce digital inequality, it has increased between these participants. Whether the same or similar results apply in a larger perspective, on a more structural level, is a question that this study cannot answer; it can only draw attention to the need to investigate the matter. The study shows in a variety of ways that the White School participants do not have the same opportunity to develop their digital literacy as the Blue School participants. From an equity perspective, schools have a compensatory task to perform. It is abundantly clear from the study that investing in one-to-one projects is not enough to combat digital inequality and achieve the digitisation goals established for school education. Alongside their investments in technology, schools need to develop didactics that legitimise and compensate for the different circumstances of different pupils. The compensatory role of schools in this connection is important not only for the present participants but also for the community at large, in that it can help to secure a cohesive, open and democratic society.