918 results for Software testing. Problem-oriented programming. Teaching methodology


Relevance:

30.00%

Publisher:

Abstract:

One of the most crucial tasks for a company offering a software product is to decide which new features should be implemented in the product's forthcoming versions. Yet existing studies show that this is also a task with which many companies struggle. The problem has been described as ambiguous and changing: there are better or worse solutions, but no optimal one, and the criteria determining the success of a solution keep changing due to continuously evolving competition, technologies and market needs. This thesis seeks to gain a deeper understanding of the challenges that companies have reportedly faced in determining the requirements for their forthcoming product versions. To this end, product management related activities are explored in seven companies. Following a grounded theory approach, the thesis conducts four iterations of data analysis, where each iteration goes beyond the previous one. The thesis results in a theory proposal intended to 1) describe the essential characteristics of organizations' product management challenges, 2) explain the origins of the perceived challenges and 3) suggest strategies to alleviate the perceived challenges. The thesis concludes that current product management approaches are becoming inadequate for dealing with challenges that have multiple and conflicting interpretations, different value orientations, unclear goals, contradictions and paradoxes. This inadequacy will continue to increase until current beliefs and assumptions about the product management challenges are questioned and a new paradigm for dealing with them is adopted.

Relevance:

30.00%

Publisher:

Abstract:

This master's thesis specifies a method for optimizing the production of a biofuel-fired power plant during operation. The specification work is part of the further development project of MW Power's MultiPower CHP power plant concept. From among various existing optimization approaches, a method suited to the purpose is selected, based on a plant model and a cost function, and its results are fed into the automation system in the form of setpoints for the PID controllers. The plant's energy and mass balances are calculated from process measurements, and their results are used as input data for the next optimization instant. The objective function of the optimization is a cost function whose terms are the revenues and costs arising from operating the power plant. The process is optimized, taking into account the limit values given to the controllers, so that the total margin is maximized. As the plant accumulates operating time and historical data, process optimization can be accelerated by statistically searching the historical data for a moment whose conditions correspond to the current situation. The margin of that historical moment is compared with the margin obtained from optimizing the cost function, and the setpoints computed by the method yielding the better margin are taken into use for process control. If neither the cost function calculation nor the search based on historical data yields an improving margin, the setpoints they compute are not taken into use. Instead, the optimum is sought with a deterministic optimization algorithm that searches the neighborhood of the current operating point for controller setpoints yielding a better margin. The control system can also be implemented in a predictive manner. In the practical part of the work, the power plant model is created with two different modeling programs, one describing the operation of the boiler and the other the operation of the power plant process. The process values obtained from the modeling are used as input data in calculating the operating margin. The margin is calculated from the cost function. The largest revenues relate to the sale of electricity and heat and to production subsidies, while the largest costs relate to repayment of the investment and to fuel purchases. A sensitivity analysis is performed on the cost function, following how the margin changes as the technical values of the process are varied. The results are compared with those of verification measurements performed at a reference power plant, and it is observed that the results are not fully consistent. The differences stem both from shortcomings in the modeling and from the relatively short observation periods of the measurements. The practical implementation of the automated optimization system is initiated by specifying the optimization approach to be adopted, the associated control loops and the required input data. The project will continue with programming, testing and tuning the system in a real power plant environment, and later with the implementation of predictive control.
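
The selection logic described above, comparing setpoints from a cost-function optimization against the setpoints of the most similar historical moment and only applying the alternative that improves the margin, can be sketched as follows. This is an illustrative sketch and not the thesis implementation; the margin model, variable names and the similarity measure (outdoor temperature) are all assumptions.

    # Hypothetical sketch of the margin-based setpoint selection (not from the thesis).

    def margin(setpoints, prices):
        # Toy cost function: revenue from heat and power minus fuel cost.
        return (setpoints["heat"] * prices["heat"]
                + setpoints["power"] * prices["power"]
                - setpoints["fuel"] * prices["fuel"])

    def optimize_setpoints(prices, bounds):
        # Stand-in for the model-based optimization; here a coarse grid search.
        best = None
        for fuel in range(bounds["fuel_min"], bounds["fuel_max"] + 1):
            candidate = {"fuel": fuel, "heat": 0.6 * fuel, "power": 0.25 * fuel}
            if best is None or margin(candidate, prices) > margin(best, prices):
                best = candidate
        return best

    def closest_historical(history, current):
        # Pick the logged moment whose conditions best match the current ones.
        return min(history,
                   key=lambda h: abs(h["outdoor_temp"] - current["outdoor_temp"]))

    def select_setpoints(current, history, prices, bounds):
        optimized = optimize_setpoints(prices, bounds)
        historical = closest_historical(history, current)["setpoints"]
        best = max([optimized, historical], key=lambda s: margin(s, prices))
        # Only take the new setpoints into use if they improve the current margin.
        if margin(best, prices) > margin(current["setpoints"], prices):
            return best
        return None

    history = [{"outdoor_temp": -5, "setpoints": {"fuel": 20, "heat": 12.0, "power": 5.0}}]
    current = {"outdoor_temp": -4, "setpoints": {"fuel": 18, "heat": 10.8, "power": 4.5}}
    prices  = {"heat": 30.0, "power": 45.0, "fuel": 20.0}
    bounds  = {"fuel_min": 10, "fuel_max": 25}
    print(select_setpoints(current, history, prices, bounds))

If neither candidate improves on the current margin, the function returns None, mirroring the abstract's rule that non-improving setpoints are not taken into use.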

Relevance:

30.00%

Publisher:

Abstract:

An option is a financial contract that gives its holder the right (but not the obligation) to sell or buy something (for example a share) to or from the seller of the option at a certain price at a specified time in the future. The seller of the option commits to agreeing to this future transaction if the option holder later decides to exercise the option. The seller of the option thus takes on the risk that the future transaction the option holder can force him to carry out turns out to be unfavourable for him. The question of how the seller can protect himself against this risk leads to interesting optimization problems, where the goal is to find an optimal hedging strategy under certain given conditions. Such optimization problems have been studied extensively in financial mathematics. The thesis "The knapsack problem approach in solving partial hedging problems of options" introduces a further viewpoint to this discussion: in a relatively simple (finite and complete) market model, certain partial hedging problems can be described as so-called knapsack problems. The latter are well known within a branch of mathematics called operations research. The thesis shows how hedging problems that have previously been solved in other ways can alternatively be solved with methods developed for knapsack problems. The procedure is also applied to entirely new hedging problems in connection with so-called American options.
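
To make the referenced optimization problem concrete, the following is a minimal 0/1 knapsack solver. It only illustrates the combinatorial problem the thesis maps partial hedging onto; the financial formulation used in the thesis is not reproduced here, and the suggested hedging interpretation in the comments is a simplification.

    # Classic 0/1 knapsack dynamic program: best[c] = maximal value within capacity c.
    def knapsack(values, weights, capacity):
        best = [0] * (capacity + 1)
        for v, w in zip(values, weights):
            for c in range(capacity, w - 1, -1):   # iterate downwards for 0/1 items
                best[c] = max(best[c], best[c - w] + v)
        return best[capacity]

    # In a partial-hedging reading, an "item" could correspond to covering one
    # market scenario (value = probability covered, weight = hedging cost) under
    # a capital budget; this mapping is illustrative only.
    print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220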

Relevance:

30.00%

Publisher:

Abstract:

This thesis describes an approach to overcoming the complexity of software product management (SPM). It consists of several studies that investigate the activities and roles in product management, as well as issues related to the adoption of SPM. The thesis focuses on organizations that have started adopting SPM but have faced difficulties due to its complexity and fuzziness, and it suggests frameworks for overcoming these challenges using the principles of decomposition and iterative improvement. The research process consisted of three phases, each of which provided complementary results and empirical observations on the problem of overcoming the complexity of SPM. Overall, product management processes and practices in 13 companies were studied and analysed. Moreover, additional data was collected with a survey conducted worldwide. The collected data were analysed using grounded theory (GT) to identify possible ways to overcome the complexity of SPM. Complementary research methods, such as elements of the Theory of Constraints, were used for deeper data analysis. The results of the thesis indicate that decomposing SPM activities according to the specific characteristics of companies and roles is a useful approach for simplifying existing SPM frameworks. Companies would benefit from the results by adopting SPM activities more efficiently and effectively, and by spending fewer resources on the adoption through concentrating on the most important SPM activities.

Relevance:

30.00%

Publisher:

Abstract:

This work presents a formulation of frictional contact between elastic bodies. This is a nonlinear problem due to the unilateral constraints (non-penetration of the bodies) and to friction. The solution can be found using optimization concepts, modelling the problem as a constrained minimization problem. The Finite Element Method is used to construct the approximation spaces. The minimization problem takes the total potential energy of the elastic bodies as the objective function; the non-penetration conditions are represented by inequality constraints, and equality constraints are used to handle friction. Because there are two friction conditions (stick and slip), specific equality constraints are present or absent according to the current condition. Since the Coulomb friction condition depends on the normal and tangential contact stresses related to the constraints of the problem, a condition-dependent constrained minimization problem is obtained. An Augmented Lagrangian Method for constrained minimization is employed to solve this problem. When applied to a contact problem, this method yields Lagrange multipliers with the physical meaning of contact forces, which makes it possible to check the friction condition at each iteration. These concepts make it possible to devise a computational scheme that leads to good numerical results.
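
The augmented Lagrangian idea mentioned above can be sketched on a one-dimensional toy problem. This is not the contact formulation of the thesis; the objective, constraint and parameters below are invented stand-ins, chosen only so that the multiplier update and its "contact force" interpretation are visible.

    # Minimal augmented-Lagrangian sketch for an inequality-constrained problem.
    # Toy problem: minimize f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0.
    # Known solution: x* = 1 with multiplier lambda* = 2.

    def f(x):                     # stand-in for the total potential energy
        return (x - 2.0) ** 2

    def g(x):                     # non-penetration-style inequality constraint, g(x) <= 0
        return x - 1.0

    def augmented_lagrangian(x, lam, rho):
        # Standard form for inequality constraints.
        return f(x) + (max(0.0, lam + rho * g(x)) ** 2 - lam ** 2) / (2.0 * rho)

    def inner_minimize(lam, rho, x0, steps=2000, lr=1e-3):
        # Crude gradient descent on the augmented Lagrangian (finite differences).
        x, h = x0, 1e-6
        for _ in range(steps):
            grad = (augmented_lagrangian(x + h, lam, rho)
                    - augmented_lagrangian(x - h, lam, rho)) / (2 * h)
            x -= lr * grad
        return x

    x, lam, rho = 0.0, 0.0, 10.0
    for _ in range(20):                       # outer multiplier updates
        x = inner_minimize(lam, rho, x)
        lam = max(0.0, lam + rho * g(x))      # multiplier plays the role of a contact force

    print(round(x, 3), round(lam, 3))         # approximately 1.0 and 2.0

The multiplier converging to a positive value signals an active constraint, which is the analogue of checking the contact condition at each iteration in the thesis.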

Relevance:

30.00%

Publisher:

Abstract:

Software plays an important role in our society and economy. Software development is an intricate process comprising many different tasks: gathering requirements, designing new solutions that fulfil these requirements, and implementing these designs in a programming language as a working system. As a consequence, the development of high-quality software is a core problem in software engineering. This thesis focuses on the validation of software designs. The analysis of designs is of great importance, since errors originating in designs may appear in the final system, and it is considered economical to rectify problems as early in the software development process as possible. Practitioners often create and visualize designs using modeling languages, one of the more popular being the Unified Modeling Language (UML). The analysis of designs can be done manually, but for large systems there is a need for mechanisms that analyze the designs automatically. In this thesis, we propose an automatic approach to analyzing UML-based designs using logic reasoners. The approach first translates the UML-based designs into a language understandable by reasoners, in the form of logic facts, and then shows how to use logic reasoners to infer the logical consequences of these facts. We have implemented the proposed translations in a tool that can be used with any standard-compliant UML modeling tool. Moreover, we evaluate the proposed approach by automatically validating hundreds of UML-based designs, comprising thousands of model elements, available in an online model repository. The proposed approach is limited in scope, but it is fully automatic and does not require any expertise in logic languages from the user. We exemplify the approach with two applications: the validation of domain-specific languages and the validation of web service interfaces.
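
The following toy sketch is meant only to convey the flavour of turning a design into logic facts and inferring a consequence from them; the actual thesis tool translates UML models into a reasoner's input language, which this hand-rolled Python stand-in does not reproduce, and the example classes are invented.

    # Hypothetical sketch: a tiny UML class-diagram fragment flattened into
    # logic-style facts, with one consequence (a generalization cycle) inferred.
    facts = {
        ("class", "Vehicle"), ("class", "Car"), ("class", "SportsCar"),
        ("generalization", "Car", "Vehicle"),
        ("generalization", "SportsCar", "Car"),
        ("generalization", "Vehicle", "SportsCar"),   # deliberate modeling error
    }

    def superclasses(cls, facts):
        # Transitive closure of the generalization facts.
        seen, frontier = set(), {cls}
        while frontier:
            nxt = {fact[2] for fact in facts
                   if fact[0] == "generalization" and fact[1] in frontier} - seen
            seen |= nxt
            frontier = nxt
        return seen

    for (kind, name) in sorted(f for f in facts if f[0] == "class"):
        if name in superclasses(name, facts):
            print(f"inconsistent design: {name} is (indirectly) its own superclass")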

Relevance:

30.00%

Publisher:

Abstract:

An empirical study was conducted in the area of software engineering to examine the relationships between development, testing and intended software quality. International standards served as the starting point of the study. For the analysis, a round of interviews was conducted and transcribed. It was found that interaction between humans is critical, especially in transferring knowledge and the standards' processes. The standards are communicated through interaction, and learning processes are involved before compliance is reached. One of the results was that testing is the key to sufficient quality. The outcome was that successful interaction, sufficient testing and compliance with the standards, combined with good motivation, may provide the most repeatable intended quality.

Relevance:

30.00%

Publisher:

Abstract:

A web service is a software system that provides a machine-processable interface to other machines over the network using different Internet protocols. Web services are increasingly used in industry to automate different tasks and to offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke a CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource be independent of the previous ones, which facilitates scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfil the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable and stateful REST web services. The main contribution of this thesis is a novel model-driven methodology for designing behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on which methods can be invoked on a service and on the pre- and postconditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature tools that are continuously evolving. We use UML class diagrams and UML state machine diagrams, with additional design constraints, to provide resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which helps in requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and in other elements of the software development environment by tracing the unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; it is required for authenticating service requests by authorized actors, since not all types of users have access to all resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is the consistency analysis of the behavioral REST interfaces. To overcome the inconsistency problem and design errors in our service models, we use semantic technologies. The REST interfaces are represented in the web ontology language OWL 2, so that they can be part of the semantic web. These interfaces are checked with OWL 2 reasoners for unsatisfiable concepts, which would result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners.
The third contribution of this thesis is the verification and validation of REST web services. We use model checking techniques with the UPPAAL model checker for this purpose. Timed automata are generated from the UML-based service design models with our transformation tool and are verified for basic properties such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach: test cases are generated from the UPPAAL timed automata and, using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace unfulfilled service goals back to faults in the design models. A final contribution of the thesis is an implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and postconditions. The preconditions of a method constrain the user to invoke the stateful REST service under the right conditions, and the postconditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former example presents a simple explanation of the approach, and the latter worked example shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
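
A hand-written impression of what a behavioral REST interface skeleton with pre- and postconditions could look like is given below. The actual code generated by the thesis tool, as well as the class name, resource path and conditions used here, may differ; this is a sketch of the idea only.

    # Hypothetical sketch of a stateful REST resource with method pre/postconditions.
    class BookingResource:
        def __init__(self):
            self.state = "available"          # behavioral model: available -> booked

        def put(self, booking):
            # Precondition: the client may only invoke this method in the right state.
            assert self.state == "available", "precondition violated: room not available"
            self.state = "booked"
            self.booking = booking
            # Postcondition: constrains the implementer to realize the right effect.
            assert self.state == "booked" and self.booking == booking
            return {"status": 201, "resource": "/bookings/1"}   # illustrative URI

    resource = BookingResource()
    print(resource.put({"guest": "Alice", "nights": 2}))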

Relevance:

30.00%

Publisher:

Abstract:

Adapting and scaling up agile concepts, which are characterized by iterative, self-directed, customer-value-focused methods, may not be a simple endeavor. This thesis concentrates on studying the challenges in a large-scale agile software development transformation in order to enhance understanding of, and bring insight into, the underlying factors behind such emerging challenges. The topic is approached through the concepts of agility and of the different methods compared to traditional plan-driven processes, complex adaptive systems theory, and the impact of organizational culture on agile transformation efforts. The empirical part was conducted as a qualitative case study. The internationally operating software development case organization had a year of experience of an agile transformation effort, during which it had also undergone organizational realignment efforts. The primary data collection was conducted through semi-structured interviews supported by participatory observation. As a result, the identified challenges were categorized under four broad themes: organizational, management, team dynamics and process related. The identified challenges indicate that agility is a multifaceted concept. Agile practices may bring visibility to issues, many of which are embedded in the organizational culture or in the management style. Viewing software development as a complex adaptive system could facilitate understanding of the underpinning philosophy and eventually solving the issues: interactions are more important than processes, and solving a complex problem, such as developing novel software, requires constant feedback and adaptation to changing requirements. Furthermore, an agile implementation seems to be unique in nature, and the agents engaged in the interaction are pivotal to achieving agility. If agility is not a strategic choice for the whole organization, additional issues may arise due to different ways of working in different parts of the organization. Lastly, detailed suggestions to mitigate the challenges of the case organization are provided.

Relevance:

30.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

30.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel: it describes an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect scheduling of the application, while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
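
The core dataflow notions used above (nodes that communicate only through queues and fire when their firing rule is satisfied) can be illustrated with a small Python stand-in. This is not RVC-CAL and not the thesis toolchain; the actor class, the two example nodes and the naive dynamic scheduler are invented for illustration.

    # Illustrative dataflow-actor sketch with queues as the only communication.
    from collections import deque

    class Actor:
        def __init__(self, fn, needed):
            self.inbox, self.outbox = deque(), deque()
            self.fn, self.needed = fn, needed

        def can_fire(self):                    # firing rule: enough input tokens
            return len(self.inbox) >= self.needed

        def fire(self):                        # consume inputs, produce an output
            args = [self.inbox.popleft() for _ in range(self.needed)]
            self.outbox.append(self.fn(*args))

    # Two-node graph: a doubler feeding an adder (the deques are the edges).
    double = Actor(lambda x: 2 * x, needed=1)
    add    = Actor(lambda a, b: a + b, needed=2)

    double.inbox.extend([1, 2, 3, 4])
    actors = [double, add]                     # naive dynamic scheduler
    while any(a.can_fire() for a in actors):
        for a in actors:
            if a.can_fire():
                a.fire()
        add.inbox.extend(double.outbox)        # move tokens along the edge
        double.outbox.clear()

    print(list(add.outbox))                    # [6, 14], i.e. (2+4) and (6+8)

A quasi-static scheduler, as discussed in the abstract, would replace the run-time can_fire checks with precomputed static firing sequences wherever the analysis shows they are safe.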

Relevance:

30.00%

Publisher:

Abstract:

Context: Game development has become increasingly important in the software industry, but this importance has not affected the way software engineering approaches and methodologies address the differences they have with game development. Similarly, software engineering does not fully support sustainability practices, causing this element to often not be considered, or even known, as a requirement for a development lifecycle. Goal: The aim of this thesis is to study the way in which games are developed, the sustainability aspects involved, and the relevant concerns regarding migration processes. Method: A quantitative study was conducted, gathering 33 answers from game professionals on four continents, in administrative (25%) and technically oriented (75%) positions. Results: Three trends were observed: 1) agile process models are used, 2) there are major concerns about mobile development and digital marketing, and 3) there are minor concerns about eco-impact elements and certain development phases such as testing and crunch-time development. Conclusion: Traditional software engineering would require a major change in its processes and models to fit modern agile development, game development approaches and sustainability requirements.

Relevance:

30.00%

Publisher:

Abstract:

Emerging technologies have recently challenged libraries to reconsider their role as a mere mediator between collections, researchers, and wider audiences (Sula, 2013), and libraries, especially nationwide institutions like national libraries, have not always managed to face the challenge (Nygren et al., 2014). In the Digitization Project of Kindred Languages, the National Library of Finland has become a node that connects the partners, enabling them to interact and work towards shared goals and objectives. In this paper, I draw a picture of the crowdsourcing methods that have been established during the project to support both linguistic research and lingual diversity. The National Library of Finland has been executing the Digitization Project of Kindred Languages since 2012. The project seeks to digitize and publish approximately 1,200 monograph titles and more than 100 newspaper titles in various, and in some cases endangered, Uralic languages. Once the digitization has been completed in 2015, the Fenno-Ugrica online collection will consist of 110,000 monograph pages and around 90,000 newspaper pages to which all users will have open access regardless of their place of residence. The majority of the digitized literature was originally published in the 1920s and 1930s in the Soviet Union, during the genesis and consolidation period of the literary languages. This was the era when many Uralic languages were converted into media of popular education, enlightenment, and dissemination of information pertinent to the developing political agenda of the Soviet state. The 'deluge' of popular literature in the 1920s to 1930s suddenly challenged the lexical and orthographic norms of the limited ecclesiastical publications from the 1880s onward. Newspapers were now written in orthographies and in word forms that the locals would understand. Textbooks were written to address the separate needs of both adults and children. New concepts were introduced into the language. This was the beginning of a renaissance and a period of enlightenment (Rueter, 2013). The linguistically oriented population can also find writings to their delight, especially lexical items specific to a given publication and orthographically documented specifics of phonetics. The project is financially supported by the Kone Foundation in Helsinki and is part of the Foundation's Language Programme. One of the key objectives of the Kone Foundation Language Programme is to support a culture of openness and interaction in linguistic research, but also to promote citizen science as a tool for the participation of the language community in research. In addition to sharing this aspiration, our objective within the Language Programme is to make sure that old and new corpora in Uralic languages are made available for the open and interactive use of the academic community as well as the language societies. Wordlists are available in 17 languages, but without tokenization, lemmatization, and so on. This approach was verified with the scholars, and we consider the wordlists as raw data for linguists. Our data is used for creating morphological analyzers and online dictionaries at the Helsinki and Tromsø universities, for instance. In order to reach these targets, we will produce not only the digitized materials but also development tools for them, to support linguistic research and citizen science. The Digitization Project of Kindred Languages is thus linked with research on language technology.
The mission is to improve the usage and usability of the digitized content. During the project, we have developed methods that refine the raw data for further use, especially in linguistic research. How does the library meet objectives that appear to be beyond its traditional playground? The written materials from this period are a gold mine, so how could we retrieve these hidden treasures of languages from a stack that contains more than 200,000 pages of literature in various Uralic languages? The problem is that the machine-encoded text (OCR) often contains too many mistakes to be used as such in research, so the mistakes in the OCRed texts must be corrected. For enhancing the OCRed texts, the National Library of Finland developed an open-source OCR editor that enables the editing of machine-encoded text for the benefit of linguistic research. It was necessary to implement this tool, since these rare and peripheral prints often include characters that have since fallen out of use; such characters are sadly neglected by modern OCR software developers, but they belong to the historical context of the kindred languages and are thus an essential part of the linguistic heritage (van Hemel, 2014). Our crowdsourcing tool is essentially an editor of the ALTO XML format. It consists of a back-end for managing users, permissions, and files, communicating through a REST API with a front-end interface, that is, the actual editor for correcting the OCRed text. The enhanced XML files can be retrieved from the Fenno-Ugrica collection for further purposes. Could the crowd do this work to support academic research? The challenge in crowdsourcing lies in its nature. The targets in traditional crowdsourcing have often been split into several microtasks that do not require any special skills from the anonymous people, a faceless crowd. This way of crowdsourcing may produce quantitative results, but from the researchers' point of view there is a danger that the needs of linguists are not necessarily met. A notable downside is also the lack of a shared goal or of social affinity; there is no reward in the traditional methods of crowdsourcing (de Boer et al., 2012). There has also been criticism that digital humanities makes the humanities too data-driven and oriented towards quantitative methods, losing the values of critical qualitative methods (Fish, 2012). On top of that, the downsides of traditional crowdsourcing become more evident once you leave the Anglophone world. Our potential crowd is geographically scattered across Russia. This crowd is linguistically heterogeneous, speaking 17 different languages. In many cases the languages are close to extinction or longing for revitalization, and the native speakers do not always have Internet access, so an open call for crowdsourcing would not have produced satisfactory results for linguists. Thus, one has to identify carefully the potential niches needed to complete the tasks. When using the help of a crowd in a project that aims to support both linguistic research and the survival of endangered languages, the approach has to be a different one. In nichesourcing, the tasks are distributed amongst a small crowd of citizen scientists (communities). Although communities provide smaller pools to draw resources from, their specific richness in skill is suited for complex tasks with the high-quality product expectations found in nichesourcing. Communities have a purpose and identity, and their regular interaction engenders social trust and reputation.
These communities can correspond to research needs more precisely (de Boer et al., 2012). Instead of repetitive and rather trivial tasks, we try to utilize the knowledge and skills of citizen scientists to produce qualitative results. In nichesourcing, we hand out assignments that precisely fill the gaps in linguistic research. A typical task would be editing and collecting words in fields of vocabulary where the researchers require more information. For instance, there is a lack of Hill Mari words and terminology in anatomy. We have digitized books in medicine, and we could try to track the words related to human organs by assigning citizen scientists to edit and collect words with the OCR editor. From the nichesourcing perspective, it is essential that altruism play a central role when the language communities are involved. In nichesourcing, our goal is to reach a certain level of interplay, where the language communities benefit from the results. For instance, the corrected words in Ingrian will be added to an online dictionary, which is made freely available to the public, so society can benefit, too. This objective of interplay can be understood as an aspiration to support the endangered languages and the maintenance of lingual diversity, but also as serving 'two masters': research and society.
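
For readers unfamiliar with ALTO, the following sketch shows roughly the kind of data the OCR editor described above manipulates: word tokens stored as String elements with CONTENT attributes, which a correction rewrites. The element names follow the ALTO schema, but the fragment, the example words and the omission of namespaces are assumptions, and the real editor's workflow is not reproduced here.

    # Reading and correcting word tokens in a small ALTO XML fragment.
    import xml.etree.ElementTree as ET

    alto_fragment = """
    <alto>
      <Layout><Page><PrintSpace><TextBlock><TextLine>
        <String CONTENT="kindred"/><SP/><String CONTENT="langauges"/>
      </TextLine></TextBlock></PrintSpace></Page></Layout>
    </alto>
    """

    root = ET.fromstring(alto_fragment)
    words = [s.attrib["CONTENT"] for s in root.iter("String")]
    print(words)                       # ['kindred', 'langauges'], OCR error left in

    # An editor correction rewrites the CONTENT attribute and serializes back.
    for s in root.iter("String"):
        if s.attrib["CONTENT"] == "langauges":
            s.set("CONTENT", "languages")
    print(ET.tostring(root, encoding="unicode").count('CONTENT="languages"'))  # 1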

Relevance:

30.00%

Publisher:

Abstract:

ABSTRACT Towards a contextual understanding of B2B salespeople's selling competencies − an exploratory study among purchasing decision-makers of internationally-oriented technology firms. The characteristics of modern selling can be classified as follows: customer retention and loyalty targets, database and knowledge management, customer relationship management, marketing activities, problem solving and system selling, and satisfying needs and creating value. For salespeople to be successful in this environment, they need a wide range of competencies. Salespeople's selling skills are well documented in the seller-side literature through quantitative methods, but the knowledge, skills and competencies from the buyer's perspective are under-researched. The existing research on selling competencies should be broadened and updated through a qualitative research perspective due to the dynamic nature and contextual dependence of selling competencies. The purpose of the study is to increase understanding of the professional salesperson's selling competencies from the industrial purchasing decision-makers' viewpoint within the relationship selling context. In this study, competencies are defined as sales-related knowledge and skills. The scope of the study includes goods, materials and services managed by a company's purchasing function and used by an organization on a daily basis. The abductive approach and 'systematic combining' have been applied as a research strategy. In this research, data were generated through semi-structured, person-to-person interviews and open-ended questions. The study was conducted among purchasing decision-makers in the technology industry in Finland. The branches consisted of the electronics and electro-technical industries and the mechanical engineering and metals industries. A total of 30 companies and one purchasing decision-maker from each company were purposively chosen for the sample. The sample covers different company sizes based on revenue and differing structures, varying from public to family companies, representing both domestic and international ownership. Before analysis, the data were organized by the purchasing orientations of the buyers: the buying, procurement or supply management orientation. Thematic analysis was chosen as the analysis method. After the analysis, the results were contrasted with the theory; there was continuous interaction between the empirical data and the theory. Based on the findings, a total of 19 major areas of knowledge and skill were identified from the buyers' perspective. The specific knowledge and skills, viewed through the customers' prevalent purchasing orientations, were divided into two categories, generic and contextual. The generic knowledge and skills apply to all purchasing orientations, while the contextual knowledge and skills depend on the customer's prevalent purchasing orientation. Generic knowledge and skills relate to price setting, negotiation, communication and interaction skills, while contextual ones relate to knowledge brokering, the ability to present solutions and relationship skills. Buying-oriented buyers value salespeople who are 'action oriented experts, however at a bit of an arm's length', procurement buyers value salespeople who are 'experts deeply dedicated to the customer and fostering the relationship', and supply management buyers value salespeople who are 'corporate-oriented experts'.
In addition, the buyer's perceptions of knowledge and selling skills differ from the seller's. The buyer side emphasizes command of the subject matter, consisting of expertise, understanding the customer's business and needs, creating a customized solution and creating value, reliability, and an ability to build long-term relationships, while the seller side emphasizes communication, interaction and salesmanship skills. The study integrates the selling skills of the current three-component model (technical knowledge, salesmanship skills and interpersonal skills) with relationship skills and purchasing orientations into a selling competency model. The findings deepen and update the content of this knowledge and these skills in the B2B setting and create new insights into them from the buyer's perspective, and thus the study increases contextual understanding of selling competencies. It generates new knowledge of the salesperson's competencies for the relationship selling, personal selling and sales management literature. It also adds knowledge of buying orientations to the buying behavior literature. The findings challenge sales management to perceive salespeople's selling skills from both a contingency and a competence perspective. The study has several managerial implications: it increases understanding of which selling knowledge and skills are critical from the buyer's point of view, of how salespeople effectively implement the relationship marketing concept, of how sales management can manage the sales process more effectively and efficiently, and of how sales management should develop a salesperson's selling competencies when managing and developing the sales force. Keywords: selling competencies, knowledge, selling skills, relationship skills, purchasing orientations, B2B selling, abductive approach, technology firms

Relevance:

30.00%

Publisher:

Abstract:

The goal of this master's thesis was to develop a model for building a service quality centric customer reference portfolio for a software-as-a-service company. The case company is Meltwater Finland Oy, which leverages customer references externally but has no systematic model for producing good-quality customer references that are in line with the company strategy. The project was carried out as a case study in which the primary source of information was seventeen internal interviews with employees of the case company. The theory part focuses on customer references as assets and on service quality in the software-as-a-service industry. In the empirical part the research problem is solved. As a result of the case study, a model for building a service quality centric customer reference portfolio was created, and further research areas were suggested.