10 results for Black box approach
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
It has been shown in organizational settings that trust is a crucial factor in different kinds of outcomes, and consequently, building employee trust in the employer is a goal for all kinds of organizations. Although it is recognized that trust in organizations operates on multiple levels, at present there is no clear consensus on the concept of trust within the organization. One can have trust in particular people (i.e. interpersonal trust) or in organized systems (i.e. impersonal trust). Until recently, organizational trust has been treated mainly as an interpersonal phenomenon. However, the interpersonal approach is limited. Scholars studying organizational trust have thus far focused only on specific dimensions of impersonal trust, and none have taken a comprehensive approach. The first objective of this study was to develop a construct and a scale encompassing the impersonal element of organizational trust. The second objective was to examine the effects of various HRM practices on the impersonal dimensions of organizational trust. Moreover, although the “black box” model of HRM is widely studied, there have been only a few attempts to unlock the box. Previous studies on the HRM-performance link refer to trust, and this work contributes to the literature by treating trust as an impersonal issue in the relationship between HRM, trust, and performance. The third objective was thus to clarify the role of impersonal trust in the relationship between HRM and performance. The study is divided into two parts comprising the Introduction and four separate publications. Each publication addresses a distinct sub-question, whereas the Introduction discusses the overall results in the light of the individual sub-questions. The study makes two major contributions to the research on trust. Firstly, it offers a framework describing the construct of impersonal trust, which to date has not been clearly articulated in the research on organizational trust. Secondly, a comprehensive, psychometrically sound, operationally valid scale for measuring impersonal trust was developed. In addition, the study makes an empirical contribution to the research on strategic HRM. First, it shows that HRM practices affect impersonal trust; the contribution here is to consider the HRM-trust link in terms of impersonal organizational trust. It is shown that each of the six HRM practices in focus is connected to impersonal trust. A further contribution lies in unlocking the black box. The study explores the impersonal element of organizational trust and its mediating role between HRM practices and performance. The result is the identification of the path by which HRM contributes to performance through the mediator of impersonal trust. It is shown that HRM designed specifically to enhance employees’ impersonal trust in the organization has a positive effect on performance.
Abstract:
A web service is a software system that provides a machine-processable interface to other machines over the network using different Internet protocols. Web services are increasingly used in industry to automate different tasks and offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke a CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource is independent of the previous ones, which facilitates scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios with REST constraints requires rigorous approaches that are capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable and stateful REST web services. The main contribution of this thesis is a novel model-driven methodology to design behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on what methods can be invoked on a service and the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature tools that are continuously evolving. We have used UML class diagrams and UML state machine diagrams with additional design constraints to provide resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which helps in requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and other elements of the software development environment by tracing the unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; it is required for authenticating service requests by authorized actors, since not all types of users have access to all resources. In addition, following our design approach, the service developer can ensure that the designed web service interfaces will be REST compliant. The second contribution of this thesis is consistency analysis of the behavioral REST interfaces. To overcome the inconsistency problem and design errors in our service models, we have used semantic technologies. The REST interfaces are represented in the Web Ontology Language, OWL 2, and can thus be part of the semantic web. These interfaces are checked with OWL 2 reasoners for unsatisfiable concepts, which result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners.
The third contribution of this thesis is the verification and validation of REST web services. We have used model checking techniques with the UPPAAL model checker for this purpose. Timed automata are generated from the UML-based service design models with our transformation tool and are verified for basic characteristics such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach. Test cases are generated from the UPPAAL timed automata and, using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace unfulfilled service goals back to faults in the design models. A final contribution of the thesis is an implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The preconditions of the methods constrain the user to invoke the stateful REST service under the right conditions, and the postconditions constrain the service developer to implement the right functionality. The details of the methods can be manually inserted by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former example presents a simple explanation of the approach, and the latter shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
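The abstract above mentions that the partial code generation tool emits REST service skeletons guarded by method pre- and post-conditions. As a purely illustrative sketch of that idea (not the thesis's actual generated code), the hypothetical Python class below models a stateful hotel-booking resource whose methods assert their pre- and post-conditions; the resource name, states, and methods are invented for this example.

```python
# Hypothetical sketch of a generated code skeleton for a stateful REST resource
# (hotel room booking), with pre- and post-conditions guarding each method.
# States, method names, and conditions are illustrative assumptions only.

class BookingResource:
    """Behavioral interface: a booking moves through created -> confirmed -> closed."""

    def __init__(self):
        self.state = "created"

    def put_confirmation(self, payment_ok: bool) -> None:
        # Precondition: a booking can only be confirmed from the 'created' state.
        assert self.state == "created", "PUT /booking/confirmation not allowed in this state"
        if payment_ok:
            self.state = "confirmed"
        # Postcondition: the developer must leave the resource in a legal state.
        assert self.state in ("created", "confirmed")

    def delete(self) -> None:
        # Precondition: only confirmed bookings may be cancelled here.
        assert self.state == "confirmed", "DELETE /booking not allowed in this state"
        self.state = "closed"
        # Postcondition: the resource is closed and no further transitions exist.
        assert self.state == "closed"

# Usage: a legal sequence of requests passes; an illegal one trips a precondition.
booking = BookingResource()
booking.put_confirmation(payment_ok=True)
booking.delete()
```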
Abstract:
This work is devoted to the problem of reconstructing the basis weight structure of a paper web with black-box techniques. The data that is analyzed comes from a real paper machine and is collected by an off-line scanner. The principal mathematical tool used in this work is Autoregressive Moving Average (ARMA) modelling. When coupled with the Discrete Fourier Transform (DFT), it gives a very flexible and interesting tool for analyzing properties of the paper web. Both ARMA and DFT are used independently to represent the given signal in a simplified version of our algorithm, but the final goal is to combine the two. The Ljung-Box Q-statistic lack-of-fit test combined with the Root Mean Squared Error coefficient gives a tool to separate significant signals from noise.
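As a rough illustration of the toolchain this abstract describes (ARMA modelling, the DFT, and the Ljung-Box lack-of-fit test with an RMSE coefficient), the sketch below fits an ARMA model to a synthetic stand-in for a scanner signal and checks whether the residuals are distinguishable from noise. The signal, model order, and lag are assumptions for demonstration, not the thesis's actual data or settings.

```python
# Minimal sketch: ARMA fit + DFT + Ljung-Box residual check on a scanner-like signal.
# The signal, model order, and lag are illustrative assumptions only.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

# Synthetic stand-in for a basis-weight profile measured by an off-line scanner.
rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
signal = 0.8 * np.sin(2 * np.pi * t / 64) + rng.normal(scale=0.3, size=n)

# Fit a low-order ARMA model (ARMA(2, 1) expressed as ARIMA with d = 0).
model = ARIMA(signal, order=(2, 0, 1)).fit()
residuals = model.resid

# DFT of the mean-removed signal to inspect periodic basis-weight variation.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
dominant_freq = np.fft.rfftfreq(n)[np.argmax(spectrum)]

# Ljung-Box lack-of-fit test on the residuals: a large p-value suggests the
# remaining variation is indistinguishable from white noise.
lb = acorr_ljungbox(residuals, lags=[20])
rmse = np.sqrt(np.mean(residuals ** 2))

print(f"dominant frequency: {dominant_freq:.4f} cycles/sample")
print(f"Ljung-Box p-value at lag 20: {lb['lb_pvalue'].iloc[0]:.3f}, RMSE: {rmse:.3f}")
```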
Abstract:
Over 70% of the total costs of an end product are consequences of decisions that are made during the design process. A search for optimal cross-sections will often have only a marginal effect on the amount of material used if the geometry of a structure is fixed and if the cross-sectional characteristics of its elements are properly designed by conventional methods. In recent years, optimal geometry has become a central area of research in the automated design of structures. It is generally accepted that no single optimisation algorithm is suitable for all engineering design problems. An appropriate algorithm, therefore, must be selected individually for each optimisation situation. Modelling is the most time-consuming phase in the optimisation of steel and metal structures. In this research, the goal was to develop a method and a computer program which reduce the modelling and optimisation time for structural design. The program needed an optimisation algorithm that is suitable for various engineering design problems. Because Finite Element modelling is commonly used in the design of steel and metal structures, the interaction between a finite element tool and an optimisation tool needed a practical solution. The developed method and computer programs were tested with standard optimisation tests and practical design optimisation cases. Three generations of computer programs were developed. The programs combine an optimisation problem modelling tool and an FE-modelling program using three alternative methods. The modelling and optimisation were demonstrated in the design of a new boom construction and the steel structures of flat and ridge roofs. This thesis demonstrates that the time spent on modelling, the most time-consuming phase, is significantly reduced. Modelling errors are reduced and the results are more reliable. A new selection rule for the evolution algorithm, which eliminates the need for constraint weight factors, is tested with optimisation cases of steel structures that include hundreds of constraints. It is seen that the tested algorithm can be used nearly as a black box, without parameter settings and penalty factors for the constraints.
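The abstract does not spell out the new selection rule itself, so the sketch below illustrates the general idea of penalty-free constraint handling with a well-known stand-in (Deb's feasibility rules) rather than the rule developed in the thesis: candidate designs are compared without any constraint weight or penalty factors.

```python
# Illustrative only: one well-known parameter-free selection rule (Deb's
# feasibility rules) that compares designs without constraint weights or
# penalty factors. It is a stand-in, not the thesis's actual selection rule.

def total_violation(constraints):
    """Sum of constraint violations; g <= 0 means the constraint is satisfied."""
    return sum(max(0.0, g) for g in constraints)

def select(a, b):
    """Binary tournament between two designs: (objective_value, constraint_list)."""
    fa, ga = a[0], total_violation(a[1])
    fb, gb = b[0], total_violation(b[1])
    if ga == 0.0 and gb == 0.0:          # both feasible: lower mass/cost wins
        return a if fa <= fb else b
    if ga == 0.0 or gb == 0.0:           # exactly one feasible: feasibility wins
        return a if ga == 0.0 else b
    return a if ga <= gb else b          # both infeasible: smaller violation wins

# Example: a lighter but infeasible design loses to a heavier feasible one.
design_a = (1250.0, [0.02, -0.10])   # mass 1250 kg, one stress constraint violated
design_b = (1310.0, [-0.05, -0.30])  # mass 1310 kg, all constraints satisfied
assert select(design_a, design_b) is design_b
```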
Abstract:
The goal of this bachelor's thesis is to describe the theories of consumer learning and, in addition, to describe practical applications related to consumer behaviour and advertising. There are two central schools of thought concerning learning theories. Proponents of the first see learning as purely behaviouristic, i.e. as the result of repetition, and thus they view the individual as a "black box" whose input is a stimulus and whose output is a particular behaviour. Proponents of the second see learning as a cognitive process: starting from the very simplest cases, the individual processes information in order to solve their own problems. In practice, both theories are needed to explain learning as a phenomenon, because learning is a combination of repetition and cognitive processes. Our work shows how marketers exploit these two theories in practice in their advertising, with the aim of positioning their brand and products in the market relative to their competitors.
Abstract:
This master's thesis discusses service-oriented architecture and the extension of a service interface built on top of a legacy system with the help of assistive technology. With assistive technology, the functions of the graphical user interface of the legacy system's program are automated into a web service. The thesis first presents the definition of service-oriented architecture and the design principles that follow from it. It then reviews theory, implementations, and approaches for integrating legacy systems into a service-oriented architecture. The support that the Microsoft Windows environment offers for assistive technology is also reviewed. A black-box method was used to extend the service interface, in which the legacy system's graphical program is automated by means of assistive technology. The method proved workable, and it can be used to integrate legacy systems into a service-oriented architecture.
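As a hypothetical illustration of the black-box wrapping method summarized above, the sketch below drives a legacy GUI through Windows assistive/UI Automation support (here via the pywinauto library's "uia" backend) and exposes the result as a small web-service endpoint using Flask. The window title, control identifiers, and endpoint are invented, and the thesis implementation may use entirely different tooling.

```python
# Hypothetical sketch: automate a legacy GUI via UI Automation (pywinauto "uia"
# backend) and expose the result as a web-service endpoint. Window titles,
# control identifiers, and the route are invented for illustration only.
from flask import Flask, jsonify
from pywinauto.application import Application

app = Flask(__name__)

@app.route("/legacy/customer/<customer_id>")
def lookup_customer(customer_id):
    # Attach to the running legacy program through the UI Automation backend.
    legacy = Application(backend="uia").connect(title="Legacy Customer Register")
    window = legacy.window(title="Legacy Customer Register")

    # Drive the GUI the way an assistive tool would: fill a field, press a button.
    window.child_window(auto_id="SearchBox", control_type="Edit").type_keys(
        customer_id, with_spaces=True
    )
    window.child_window(title="Search", control_type="Button").click_input()

    # Read the answer back from the GUI and return it as a machine-readable response.
    name = window.child_window(auto_id="NameField", control_type="Text").window_text()
    return jsonify({"id": customer_id, "name": name})

if __name__ == "__main__":
    app.run(port=8080)
```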
Abstract:
Forest inventories are used to estimate forest characteristics and the condition of forests for many different applications: operational tree logging for the forest industry, forest health state estimation, carbon balance estimation, land-cover and land-use analysis in order to avoid forest degradation, etc. Recent inventory methods are strongly based on remote sensing data combined with field sample measurements, which are used to define estimates covering the whole area of interest. Remote sensing data from satellites, aerial photographs or aerial laser scanning are used, depending on the scale of the inventory. To be applicable in operational use, forest inventory methods need to be easily adjusted to the local conditions of the study area at hand. All the data handling and parameter tuning should be objective and automated as much as possible. Since there generally are no extensive direct physical models connecting the remote sensing data from different sources to the forest parameters that are estimated, the mathematical estimation models are of "black-box" type, connecting the independent auxiliary data to the dependent response data with arbitrary linear or nonlinear models. To avoid redundant complexity and over-fitting of the model, which is based on up to hundreds of possibly collinear variables extracted from the auxiliary data, variable selection is needed. To connect the auxiliary data to the inventory parameters that are estimated, field work must be performed. In larger study areas with dense forests, field work is expensive and should therefore be minimized. To get cost-efficient inventories, field work could partly be replaced with information from previously measured sites stored in databases. The work in this thesis is devoted to the development of automated, adaptive computation methods for aerial forest inventory. The mathematical model parameter definition steps are automated, and cost-efficiency is improved by setting up a procedure that utilizes databases in the estimation of the characteristics of new areas.
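To make the variable selection step concrete, the sketch below shows one common automated approach, a cross-validated LASSO that shrinks the coefficients of redundant, collinear features to zero. The data are synthetic and the method is a generic stand-in, not necessarily the selection procedure developed in the thesis.

```python
# Illustrative sketch of automated variable selection for a "black-box"
# estimation model: a cross-validated LASSO drops redundant, collinear
# remote sensing features. The data and response are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n_plots, n_features = 200, 50          # field plots x candidate image/laser features
X = rng.normal(size=(n_plots, n_features))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n_plots)   # deliberately collinear pair
stem_volume = 3.0 * X[:, 0] - 2.0 * X[:, 5] + rng.normal(scale=0.5, size=n_plots)

# Cross-validation picks the regularization strength; coefficients shrunk to
# exactly zero mark features that can be dropped without hurting accuracy.
model = LassoCV(cv=5).fit(X, stem_volume)
selected = np.flatnonzero(model.coef_ != 0.0)
print(f"selected {selected.size} of {n_features} features: {selected.tolist()}")
```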
Abstract:
Earnings management (EM) literature examines managers’ use of judgment in financial reporting and in structuring transactions to alter financial reports for a specific reason. Mainstream EM literature concentrates strongly on statistical research methodologies and is driven by positive accounting theory. Although EM occurs in the process of preparing corporate financial reports, that process has so far largely remained a “black box” in prior literature. The purpose of this study is to analyze what EM is, how and why it unfolds, and how it is intertwined in the process of preparing corporate financial reports. In order to meet the needs of the study, a qualitative case study method is used. The contribution of this study is threefold. First, it indicates that the concept of EM is not as unambiguous as the prior literature has assumed. I find that EM is socially constructed and more open to interpretation than the absolutely dichotomous conception given by previous studies. Second, this study contributes to our knowledge of the role and the importance of the actors involved in conducting EM, indicating that EM is much more actor-dependent than the prior literature has assumed. Third, this study broadens our knowledge base with regard to the processes and potential for EM in academic research.
Abstract:
The growing population on earth, along with diminishing fossil deposits and the climate change debate, calls for better utilization of renewable, bio-based materials. In a biorefinery perspective, the renewable biomass is converted into many different products such as fuels, chemicals, and materials, quite similar to the petroleum refinery industry. Since forests cover about one third of the land surface on earth, ligno-cellulosic biomass is the most abundant renewable resource available. The natural first step in a biorefinery is the separation and isolation of the different compounds the biomass is comprised of. The major components in wood are cellulose, hemicellulose, and lignin, all of which can be made into various end-products. Today, focus normally lies on utilizing only one component, e.g., the cellulose in the Kraft pulping process. It would be highly desirable to utilize all the different compounds, from both an economic and an environmental point of view. The separation process should therefore be optimized. Hemicelluloses can partly be extracted with hot water prior to pulping. Depending on the severity of the extraction, the hemicelluloses are degraded to various degrees. In order to be able to choose from a variety of different end-products, the hemicelluloses should be as intact as possible after the extraction. The main focus of this work has been on preserving the hemicellulose molar mass throughout the extraction at a high yield by actively controlling the extraction pH at the high temperatures used. Since it has not been possible to measure pH during an extraction due to the high temperatures, the extraction pH has remained a “black box”. Therefore, a high-temperature in-line pH measuring system was developed, validated, and tested for hot-water wood extractions. One crucial step in the measurements is calibration; therefore, extensive effort was put into developing a reliable calibration procedure. Initial extractions with wood showed that the actual extraction pH was ~0.35 pH units higher than previously believed. The measuring system was also equipped with a controller connected to a pump. With this addition it was possible to control the extraction to any desired pH set point. When the pH dropped below the set point, the controller started pumping in alkali, and thereby the desired set point was maintained very accurately. Analyses of the extracted hemicelluloses showed that less hemicellulose was extracted at higher pH, but with a higher molar mass. Monomer formation could, at a certain pH level, be completely inhibited. Increasing the temperature while maintaining a specific pH set point speeds up the extraction without degrading the molar mass of the hemicelluloses, thereby intensifying the extraction. The diffusion of the dissolved hemicelluloses out of the wood particle is a major part of the extraction process. Therefore, a particle size study ranging from 0.5 mm wood particles to industrial-size wood chips was conducted to investigate the internal mass transfer of the hemicelluloses. Unsurprisingly, it showed that hemicelluloses were extracted faster from smaller wood particles than from larger ones, although particle size did not seem to have a substantial effect on the average molar mass of the extracted hemicelluloses. However, smaller particle sizes require more energy to manufacture and thus increase the economic cost. Since bark comprises 10 – 15 % of a tree, it is important to also consider it in a biorefinery concept.
Spruce inner and outer bark were hot-water extracted separately to investigate the possibility of isolating the bark hemicelluloses. It was shown that the bark hemicelluloses consisted mostly of pectic material and differed considerably from the wood hemicelluloses. The bark hemicelluloses, or pectins, could be extracted at lower temperatures than the wood hemicelluloses. A chemical characterization, done separately on inner and outer bark, showed that the inner bark contained over 10 % stilbene glucosides that could be extracted already at 100 °C with aqueous acetone.
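To make the set-point control idea from this abstract concrete, the sketch below simulates a simple on/off controller that doses alkali whenever the measured extraction pH falls below the set point. The process response, dosing step, and set point are invented for illustration and do not represent the measured system described in the thesis.

```python
# Schematic sketch of pH set-point control during hot-water extraction: when the
# measured pH drops below the set point, the controller starts the alkali pump.
# The process model, drift rate, and dose response are illustrative assumptions.

def simulate_extraction(ph_setpoint=4.2, ph_start=5.0, steps=120):
    ph = ph_start
    alkali_added = 0.0
    for minute in range(steps):
        ph -= 0.02                      # acids released by the wood lower the pH
        if ph < ph_setpoint:            # on/off controller: dose alkali below set point
            dose = 0.5                  # arbitrary dosing step (e.g. mmol NaOH)
            alkali_added += dose
            ph += 0.04 * dose           # assumed response of the extract to the dose
    return ph, alkali_added

final_ph, total_dose = simulate_extraction()
print(f"final pH: {final_ph:.2f}, total alkali dosed: {total_dose:.1f} units")
```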