27 results for Logic and Foundations
in CentAUR: Central Archive University of Reading - UK
Abstract:
This study explores the implications for the sales function of an organization moving toward service-dominant logic (S-D logic). Driven by its customers’ needs, a service orientation by its nature requires personal interaction, and sales personnel are in an ideal position to develop offerings with the customer. However, the development of S-D logic may require sales staff to develop additional skills. Employing a single case study, the study found that sales personnel are quick to appreciate the advantages of S-D logic for customer satisfaction, and six specific skills were highlighted and explored. Further, three propositions were identified, including that, in an organization adopting S-D logic, the sales process needs to elicit needs at both embedded-value and value-in-use levels, and that it needs to coproduce not just goods and service attributes but also attributes of the customer’s usage processes.
Abstract:
Theorem-proving is a one-player game. The history of computer programs as the players goes back to 1956 and the ‘LT’ Logic Theory Machine of Newell, Shaw and Simon. In game-playing terms, the ‘initial position’ is the core set of axioms chosen for the particular logic, and the ‘moves’ are the rules of inference. Now, the Univalent Foundations Program at IAS Princeton and the resulting ‘HoTT’ book on Homotopy Type Theory have demonstrated the success of a new kind of experimental mathematics using computer theorem proving.
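To make the game analogy concrete, here is a two-move toy proof in Lean 4 (our illustration, not from the reviewed work): the hypotheses play the role of the ‘initial position’, and each application of an inference rule is a ‘move’.

```lean
-- Illustration only (not from the reviewed work). The hypotheses are the
-- "initial position"; each application of an inference rule is a "move".
theorem two_moves (p q r : Prop) (hpq : p → q) (hqr : q → r) (hp : p) : r :=
  hqr (hpq hp)   -- move 1: hpq hp : q; move 2: apply hqr to reach r
```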
Abstract:
This is the first of two articles presenting a detailed review of the historical evolution of mathematical models applied in the development of building technology, covering both conventional and intelligent buildings. After presenting the technical differences between conventional and intelligent buildings, this article reviews the existing mathematical models, the abstraction levels of these models, and their links to the literature on intelligent buildings. The advantages and limitations of the applied mathematical models are identified, and the models are classified in terms of their application range and goal. We then describe how the early mathematical models, mainly physical models applied to conventional buildings, have faced new challenges in the design and management of intelligent buildings, leading to the use of models that offer more flexibility to cope with various uncertainties. In contrast with the early modelling techniques, approaches based on neural networks, expert systems, fuzzy logic and genetic models offer a promising means of accommodating these complications, as intelligent buildings now need integrated technologies that involve solving complex, multi-objective and integrated decision problems.
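As a hedged illustration of one of the techniques the review groups together, here is a minimal Mamdani-style fuzzy-logic sketch in Python for a building heating output; the membership functions, temperature bands and rule set are hypothetical, not drawn from the reviewed models.

```python
# Illustrative sketch only: a tiny fuzzy-logic rule base of the kind the
# review refers to. All bands and rules are hypothetical, not from the paper.

def mu_cold(t):      # membership of temperature t in "cold" (ramps 21 -> 15 C)
    return max(0.0, min(1.0, (21.0 - t) / 6.0))

def mu_hot(t):       # membership of t in "hot" (ramps 23 -> 29 C)
    return max(0.0, min(1.0, (t - 23.0) / 6.0))

def heater_output(t):
    """Defuzzify two rules: IF cold THEN heat=1; IF hot THEN heat=0."""
    w_cold, w_hot = mu_cold(t), mu_hot(t)
    if w_cold + w_hot == 0.0:        # comfortable band: no rule fires strongly
        return 0.5
    return (w_cold * 1.0 + w_hot * 0.0) / (w_cold + w_hot)  # weighted average

print(heater_output(17.0))  # firmly cold -> 1.0 (heater fully on)
print(heater_output(26.0))  # firmly hot  -> 0.0 (heater off)
```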
Abstract:
This article is the second part of a review of the historical evolution of mathematical models applied in the development of building technology. The first part described the current state of the art and contrasted various models with regard to their application to conventional and intelligent buildings. It concluded that mathematical techniques adopted in neural networks, expert systems, fuzzy logic and genetic models, which can be used to address model uncertainty, are well suited to modelling intelligent buildings. Despite this progress, the likely future development of intelligent buildings along current trends exposes potential limitations of these models. This paper attempts to uncover the fundamental limitations inherent in these models and provides some insights into future modelling directions, with special focus on the techniques of semiotics and chaos. Finally, by demonstrating an example of an intelligent building system together with the mathematical models developed for it, this review addresses the influence of mathematical models as a potential aid in developing intelligent buildings, and perhaps even more advanced buildings, for the future.
Abstract:
This paper formally derives, from a new path-based neural branch prediction algorithm (FPP), a blocked version with blocks of size two that admits a cheaper hardware implementation while maintaining input-output behaviour similar to the original algorithm. The blocked solution, referred to here as the B2P algorithm, is obtained using graph theory and retiming methods. Verification showed that the prediction performance of the FPP and B2P algorithms differs by less than one misprediction per thousand instructions under a known framework for branch prediction evaluation. For a chosen FPGA device, circuits generated from the B2P algorithm showed average area savings of over 25% against circuits for the FPP algorithm, with similar timing performance, making the proposed blocked predictor superior from a practical viewpoint.
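The abstract does not define FPP itself; as background, here is a minimal Python sketch of a perceptron-style branch predictor with outcome history, the general family to which path-based neural predictors such as FPP belong. Table size, history length and training threshold are illustrative choices, not values from the paper.

```python
# Sketch of a perceptron-style branch predictor with global outcome history,
# a simplification of path-based neural designs. Sizes are illustrative.

HIST_LEN = 8          # outcome history length
TABLE_SIZE = 256      # weight rows, indexed by branch address
weights = [[0] * (HIST_LEN + 1) for _ in range(TABLE_SIZE)]
history = [0] * HIST_LEN        # recent outcomes as +1 / -1 (0 until warm)

def predict(pc):
    """Dot product of weights and history; sign gives the prediction."""
    row = weights[pc % TABLE_SIZE]
    y = row[0] + sum(w * h for w, h in zip(row[1:], history))
    return y >= 0, y            # (predicted taken?, confidence)

def train(pc, taken, y, threshold=12):
    """Perceptron update on a mispredict or low-confidence prediction."""
    t = 1 if taken else -1
    row = weights[pc % TABLE_SIZE]
    if (y >= 0) != taken or abs(y) <= threshold:
        row[0] += t
        for i, h in enumerate(history):
            row[i + 1] += t * h
    history.pop()               # shift the newest outcome into the history
    history.insert(0, t)

# Usage: feed a toy trace of (pc, outcome) pairs.
for pc, taken in [(0x40, True)] * 20:
    guess, y = predict(pc)
    train(pc, taken, y)
print(predict(0x40)[0])         # after training, predicts True
```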
Abstract:
Housing in the UK accounts for 30.5% of all energy consumed and is responsible for 25% of all carbon emissions. The UK Government’s Code for Sustainable Homes requires all new homes to be zero carbon by 2016. The development and widespread diffusion of low and zero carbon (LZC) technologies is recognised as a key means for housing developers to deliver against this zero-carbon agenda. The innovation challenge of designing and incorporating these technologies into housing developers’ standard design and production templates introduces significant technical and commercial risks. In this paper we report early results from an ongoing Engineering and Physical Sciences Research Council project looking at the innovation logic and trajectory of LZC technologies in new housing. The principal theoretical lens for the research is the socio-technical network approach, which considers actors’ interests and interpretative flexibilities of technologies, and how they negotiate and reproduce ‘acting spaces’ to shape, in this case, the selection and adoption of LZC technologies. Initial findings reveal that the technology networks around new housing developments are very complex, involving a range of actors and viewpoints that vary for each housing development.
Abstract:
Modal filtering is based on the capability of single-mode waveguides to transmit only one complex amplitude function, eliminating virtually any perturbation of the interfering wavefronts and thus making very high rejection ratios possible in a nulling interferometer. In the present paper we focus on the progress of Integrated Optics in the thermal infrared (6-20 μm) range, one of the two candidate technologies for the fabrication of modal filters, together with fiber optics. At the conclusion of the European Space Agency's (ESA) "Integrated Optics for Darwin" activity, etched layers of chalcogenide material deposited on chalcogenide glass substrates were selected among four candidates as the technology with the best potential to simultaneously meet the filtering efficiency, absolute and spectral transmission, and beam coupling requirements. ESA's new "Integrated Optics" activity started in mid-2007 with the purpose of improving the technology until compliant prototypes can be manufactured and validated, expected by the end of 2009. The present paper aims to introduce the project and the components' requirements and functions. The selected materials and preliminary designs, as well as the experimental validation logic and test benches, are presented. More details are provided on the progress of the main technology: vacuum deposition in the co-evaporation mode and subsequent etching of chalcogenide layers. In addition, preliminary investigations of an alternative technology based on burying a chalcogenide optical fiber core into a chalcogenide substrate are presented. Specific developments of anti-reflective solutions designed to mitigate Fresnel losses at the input and output surfaces of the components are also introduced.
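For context, these are the standard textbook figures of merit for a nulling interferometer; they are background definitions, not results from the paper.

```latex
% Null depth N and rejection ratio R of a nulling interferometer:
\[
  N \;=\; \frac{I_{\min}}{I_{\max}}, \qquad R \;=\; \frac{1}{N} .
\]
% To leading order, a residual phase mismatch \Delta\phi (radians) and a
% relative amplitude mismatch \delta a between the combined beams limit
% the achievable null to
\[
  N \;\approx\; \frac{(\Delta\phi)^{2} + (\delta a)^{2}}{4},
\]
% which is why a single-mode filter, by forcing both wavefronts onto a
% single spatial mode, is so effective at driving N down and R up.
```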
Abstract:
We describe a compositional framework, together with its supporting toolset, for hardware/software co-design. Our framework integrates a formal approach within a traditional design flow. The formal approach is based on Interval Temporal Logic (ITL) and its executable subset, Tempura. Refinement is the key element in our framework because it derives the software and hardware parts of the implementation from a single formal specification of the system, while preserving all properties of that specification. During refinement, simulation is used to choose the appropriate refinement rules, which are applied automatically in the HOL system. The framework is illustrated with two case studies. The work presented is part of a UK collaborative research project between the Software Technology Research Laboratory at De Montfort University and the Oxford University Computing Laboratory.
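For readers unfamiliar with Interval Temporal Logic, here is the standard semantics of its characteristic "chop" operator, given as background rather than taken from the paper.

```latex
% ITL formulas are interpreted over finite intervals (sequences of states).
% The chop operator ";" splits an interval into two consecutive phases: an
% interval \sigma = s_0 s_1 \dots s_n satisfies f_1 ; f_2 iff it can be cut
% at some state s_k so that the prefix satisfies f_1 and the suffix f_2:
\[
  \sigma \models f_1 \,;\, f_2
  \;\iff\;
  \exists k \le n .\;
  (\langle s_0,\dots,s_k\rangle \models f_1)
  \;\wedge\;
  (\langle s_k,\dots,s_n\rangle \models f_2).
\]
% In this style of framework, refinement amounts to showing that every
% behaviour of the implementation is a behaviour of the specification,
% i.e. the implication Impl => Spec holds over all intervals.
```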
Abstract:
The aim of this book is to provide an introduction to microprocessor systems, their operation and design. It covers those topics needed by engineers and computer scientists who are interested in applying microprocessors in practical situations, namely computer hardware, including logic and interfacing; software, in particular high-level and assembly language programming; and the design and testing of such systems. The fundamental principles of microprocessor systems are described and illustrated with reference to two microprocessors: the 32-bit MC68020 from Motorola and a single-chip microcomputer, the 8051 from Intel; in addition, interfacing to the general-purpose STE bus is described. The details of the processors and the bus are concentrated in three chapters, allowing the presentation of the material to be independent of the microprocessors if desired, and permitting the specific details to be found easily.
Abstract:
This paper presents a power-consumption evaluation of a clocking technique for pipelined designs. The technique yields a dynamic power saving of around 30% over a conventional global clocking mechanism. The results were obtained from a series of experiments on a systolic circuit implemented in Virtex-II devices. The conversion from a globally clocked pipelined design to the proposed technique is straightforward and preserves the original datapath design. The savings can be taken immediately either as a power reduction or as an increase in the operating frequency of a design for the same power consumption.
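As background for the power/frequency trade-off described above, the standard CMOS dynamic-power relation (not taken from the paper) shows why a 30% dynamic power saving can instead be spent on a higher clock rate.

```latex
% Dynamic power of a clocked CMOS node, with switching activity \alpha,
% switched capacitance C, supply voltage V_{dd} and clock frequency f:
\[
  P_{\mathrm{dyn}} \;=\; \alpha\, C\, V_{dd}^{2}\, f .
\]
% A clocking scheme that removes redundant clock transitions lowers \alpha
% on the clock network. Since P_dyn scales linearly in f, a 30% saving at
% fixed V_{dd} and C can instead buy a higher frequency at equal power:
\[
  f' \;=\; \frac{f}{1 - 0.30} \;\approx\; 1.43\, f .
\]
```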
Abstract:
Anchored in the service-dominant logic and service innovation literature, this study investigates the drivers of employee generation of ideas for service improvement (GISI). Employee GISI focuses on customer needs and on providing the exact service wanted by customers; it should therefore enhance competitive advantage and organizational success (cf. Berry et al. 2006; Wang and Netemeyer 2004). Despite its importance, there is little research on the idea generation stage of the service development process (Chai, Zhang, and Tan 2005). This study contributes to the service field by providing the first empirical evaluation of the drivers of GISI. It also investigates perceived organizational support (POS) as a new explanatory determinant of the reading of customer needs, and emotional exhaustion as an outcome of POS. Results show that the major driver of GISI is employees' reading of customer needs, followed by affective organizational commitment and job satisfaction. This research offers several new and important insights for service management practice, suggesting that special care should be taken in selecting and recruiting employees who have the ability to read customer needs. Additionally, organizations should invest in creating work environments that encourage and reward the flow of ideas for service improvement.