810 results for Technology Acceptance Model
Abstract:
Recent investigations into cross-country convergence follow Mankiw, Romer, and Weil (1992) in using a log-linear approximation to the Swan-Solow growth model to specify regressions. These studies tend to assume a common and exogenous technology. In contrast, the technology catch-up literature endogenises the growth of technology. The use of capital stock data renders the approximations and over-identification of the Mankiw model unnecessary and enables us, using dynamic panel estimation, to estimate the separate contributions of diminishing returns and technology transfer to the rate of conditional convergence. We find that both effects are important.
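The log-linear specification can be made concrete with a small, purely illustrative sketch: regress (synthetic) growth on initial log income and back out the implied conditional convergence rate λ from the coefficient β via β = −(1 − e^{−λT}). All numbers below are invented, and the simple cross-sectional OLS stands in for the paper's dynamic panel estimator:

```python
import math
import random

def ols_slope(x, y):
    """Closed-form simple OLS: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

def implied_convergence_rate(beta, T):
    # Under the log-linear approximation the coefficient on initial log
    # income is beta = -(1 - exp(-lambda*T)), so lambda = -ln(1 + beta)/T.
    return -math.log(1.0 + beta) / T

# Synthetic cross-section with a true conditional convergence speed of 2%/year.
random.seed(0)
T, lam_true = 30.0, 0.02
beta_true = -(1.0 - math.exp(-lam_true * T))
log_y0 = [random.uniform(6.0, 10.0) for _ in range(200)]    # initial log income
growth = [0.5 + beta_true * y + random.gauss(0.0, 0.01) for y in log_y0]

beta_hat, _ = ols_slope(log_y0, growth)
lam_hat = implied_convergence_rate(beta_hat, T)
```

With low noise the recovered rate sits close to the 2% per year built into the synthetic data; the point is only the β-to-λ mapping, not the estimator.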
Abstract:
Crowdsourcing platforms that attract a large pool of potential workers allow organizations to reduce permanent staff levels. However, managing this "human cloud" requires new management models and skills. Therefore, Information Technology (IT) service providers engaging in crowdsourcing need to develop new capabilities to use crowdsourcing successfully in delivering services to their clients. To explore these capabilities, we collected qualitative data from focus groups with crowdsourcing leaders at a large multinational technology organization. The new capabilities we identified stem from the traditional service provider's need to assume a "client" role in the crowdsourcing context while still acting as a "vendor" in providing services to the end client. This paper expands the research on vendor capabilities and IT outsourcing, and offers important insights to organizations that are experimenting with, or considering, crowdsourcing. © 2014 Elsevier B.V. All rights reserved.
Abstract:
An original method and technology of systemological «Unit-Function-Object» (UFO) analysis for solving complex ill-structured problems is proposed. This visual grapho-analytical UFO technology combines, for the first time, the capabilities and advantages of the systems and object approaches, and can be used for business reengineering and for information systems design. UFO-technology procedures are formalized by pattern-theory methods and developed by embedding systemological conceptual classification models into system-object analysis and software tools. The technology is based on natural classification; it helps to investigate deep semantic regularities of a subject domain and to account objectively for the essential properties of system classes. The systemological knowledge models rest on a method that, for the first time, synthesizes systems analysis and classification analysis, allowing the creation of a new generation of CASE toolkits for organizational modelling in support of companies' sustainable development and competitive advantage.
Abstract:
Purpose - The purpose of this paper is to construct a new e-commerce innovation and adoption model that takes into account the various stages of e-commerce adoption (interactive, non-interactive and stabilised) and covers technological, organisational and environmental factors. The model was tested using data collected from manufacturing and service companies in Saudi Arabia (SA) to reveal inhibitors and catalysts for e-commerce adoption. Design/methodology/approach - This study uses new survey data from 202 companies and applies exploratory factor analysis and structural equation modelling for the analyses. Findings - This study shows that the new stage-oriented model (SOM) is valid and can reveal specific, detailed nuances of e-commerce adoption within a particular setting. Surprisingly, the results show that SA is not so different from developed western countries in respect of e-commerce adoption. However, there are some important differences, which are discussed in detail. Research limitations/implications - A new SOM for e-commerce adoption is provided which may be used by other IS adoption researchers. Practical implications - Managers responsible for the adoption of e-commerce in SA, the Middle East and beyond can learn from these findings to speed up adoption rates and make e-commerce more effective. Social implications - This work may help spread e-commerce use throughout SA, the Middle East and other developing nations. Originality/value - The results add to the extremely limited number of empirical studies that have investigated e-commerce adoption in the context of Arab countries.
Abstract:
The ability to construct graphical user interfaces automatically is described. It is based on building the user interface as a reflection of the logical definition of the data domain. The proposed approach to developing information system user interfaces enables dynamic adaptation of a system during its operation. The approach is used to create information systems based on the CASE-system METAS.
Abstract:
The purpose of this paper is to present a methodology of viable-model-based enterprise management, which modern enterprises need in order to survive and grow in the information age. The approach is based on Beer's viable system model and uses it as the basis for information technology implementation and development. The enterprise is viewed as a cybernetic system whose functioning is governed by the same rules as any living system.
Abstract:
We propose an adaptive algorithm for solving a set of similar scheduling problems using learning technology. It is devised to combine the merits of an exact algorithm based on a mixed graph model with heuristics oriented toward real-world scheduling problems. The former can ensure high solution quality by means of an implicit exhaustive enumeration of the feasible schedules. The latter can be developed for particular types of problems by exploiting their peculiarities. The main idea of the learning technology is to produce effective (in performance measure) and efficient (in computational time) heuristics by adapting local decisions to the scheduling problems under consideration. Adaptation is realized at the learning stage, while solving a set of sample scheduling problems with a branch-and-bound algorithm and structuring the acquired knowledge with pattern recognition techniques.
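As a rough, hypothetical sketch of how an exact enumeration and a dispatch-rule heuristic can interact (the paper's mixed-graph algorithm and learned rules are not reproduced here), the following branch-and-bound for minimising total weighted completion time on one machine uses a WSPT dispatch rule as its heuristic incumbent:

```python
def wct(order, p, w):
    """Total weighted completion time of a job sequence."""
    t = total = 0
    for j in order:
        t += p[j]
        total += w[j] * t
    return total

def heuristic_order(p, w):
    # WSPT dispatch rule (weighted shortest processing time first), playing
    # the role of a rule distilled from sample problems at the learning stage.
    return sorted(range(len(p)), key=lambda j: p[j] / w[j])

def branch_and_bound(p, w):
    """Exact search seeded with the heuristic incumbent; prunes partial
    schedules whose optimistic bound cannot beat the best schedule found."""
    best = wct(heuristic_order(p, w), p, w)
    def search(t, cost, remaining):
        nonlocal best
        if not remaining:
            best = min(best, cost)
            return
        # optimistic bound: every remaining job finishes right after its own run
        if cost + sum(w[j] * (t + p[j]) for j in remaining) >= best:
            return
        for j in list(remaining):
            remaining.remove(j)
            search(t + p[j], cost + w[j] * (t + p[j]), remaining)
            remaining.add(j)
    search(0, 0, set(range(len(p))))
    return best
```

A good incumbent tightens pruning from the start, which is exactly the benefit the learning stage is meant to deliver; for this particular objective WSPT happens to be optimal, so the exact search merely confirms it.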
Abstract:
In this letter, a nonlinear semi-analytical model (NSAM) for simulating few-mode fiber transmission is proposed. The NSAM accounts for the mode mixing arising from the Kerr effect and waveguide imperfections. An analytical explanation of the model is presented, together with simulation results for transmission over a two-mode fiber (TMF) at 112 Gb/s using coherently detected polarization-multiplexed quadrature phase-shift keying modulation. The simulations show that, by transmitting over only one of the two modes of a TMF, long-haul transmission can be realized without an increase in receiver complexity. For a 6000-km transmission link, a small modal dispersion penalty is observed in the linear domain, while a significant increase in the nonlinear threshold is observed due to the large core of the TMF. © 2006 IEEE.
Abstract:
There have been multifarious approaches to building expert knowledge in the medical and engineering fields: expert systems, case-based reasoning, model-based reasoning, and large-scale knowledge-based systems. The intriguing factors in these approaches are mainly the choices of reasoning mechanism, ontology, knowledge representation, elicitation and modeling. In our study, we argue that knowledge construction through a hypermedia-based community channel is an effective approach to constructing expert knowledge. This knowledge can be represented in forms ranging from the simplest, such as stories, to the most complex, such as on-the-job experiences. Current approaches to encoding experiences require expert knowledge to be acquired and represented as rules, cases or causal models. We differentiate two types of knowledge: content knowledge and socially derivable knowledge, the latter being knowledge earned through social interaction. The Intelligent Conversational Channel is a system that supports the building and sharing of this type of knowledge.
Abstract:
This work addresses the problem of a generalized natural-environment model for emergency monitoring. An approach based on CASE technologies is proposed for developing a methodology to solve this problem. The use of CASE technology and knowledge databases allows quick, interactive monitoring of the current state of the natural environment and the development of adequate models for just-in-time modeling of possible emergencies.
Abstract:
This work presents a model for the development of project proposals by students as an approach to teaching information technology while promoting entrepreneurship and reflection. In teams of three to five participants, students elaborate a project proposal on a topic they have negotiated with each other and with the teacher. The project domain concerns the practical application of state-of-the-art information technology in areas of substantial public interest or of immediate interest to the participants. This gives them ample opportunity for reflection not only on the technical but also on the social, economic, environmental and other dimensions of information technology. The approach has long been used with students of different years and programmes of study at the Faculty of Mathematics and Informatics, Plovdiv University “Paisiy Hilendarski”. It has been found to develop all eight key competences for lifelong learning set forth in the Reference Framework, as well as procedural skills required in real life.
Abstract:
A new mesoscale simulation model for solids dissolution, based on a computationally efficient and versatile digital modelling approach (DigiDiss), is considered and validated against analytical solutions and published experimental data for simple geometries. As the digital model is specifically designed to handle irregular shapes and complex multi-component structures, its use is explored for single crystals (sugars) and clusters. The single crystals and the cluster were first scanned using X-ray microtomography to obtain digital versions of their structures, which were then used as structural input to the simulation. The same particles were then dissolved in water, and the dissolution process was recorded by a video camera and analysed, yielding overall dissolution times and images of particle size and shape during dissolution. The results demonstrate the ability of the simulation method to reproduce experimental behaviour from the known chemical and diffusion properties of the constituent phases. The paper discusses how further refinements of the modelling approach will need to include other important effects, such as complex disintegration (particle ejection) and uncertainties in chemical properties. The digital modelling approach is well suited to future implementation on high-speed hybrid systems combining conventional (CPU) and graphics (GPU) processors.
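The digital idea can be caricatured in a few lines: treat the scanned particle as a grid of voxel masses and erode each solid voxel in proportion to its liquid-facing surface. This is a toy stand-in with an invented rate constant (a crude surface-proportional rule), not the DigiDiss model itself:

```python
def dissolve(grid, rate=0.25, dt=1.0):
    """Erode a digitised 2D solid. grid holds voxel masses (0.0 = liquid);
    each step a solid voxel loses rate*dt of mass per liquid-facing side.
    Returns the number of steps until everything has dissolved."""
    ny, nx = len(grid), len(grid[0])
    steps = 0
    while any(m > 0 for row in grid for m in row):
        nxt = [row[:] for row in grid]
        for y in range(ny):
            for x in range(nx):
                if grid[y][x] <= 0:
                    continue
                faces = 0
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    yy, xx = y + dy, x + dx
                    # out-of-grid neighbours count as surrounding liquid
                    if not (0 <= yy < ny and 0 <= xx < nx) or grid[yy][xx] <= 0:
                        faces += 1
                nxt[y][x] = max(0.0, grid[y][x] - rate * dt * faces)
        grid = nxt
        steps += 1
    return steps
```

Even this caricature reproduces the qualitative behaviour the paper measures: larger or more compact digitised shapes, having less exposed surface per unit mass, take longer to dissolve.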
Abstract:
Video streaming via Transmission Control Protocol (TCP) networks has become a popular and highly demanded service, but its quality assessment in both objective and subjective terms has not been properly addressed. In this paper, a full analytic model of a no-reference objective metric, namely pause intensity (PI), for video quality assessment is presented on the basis of statistical analysis. The model characterizes the video playout buffer behavior in connection with the network performance (throughput) and the video playout rate, allowing instant quality measurement and control without requiring a reference video. PI specifically addresses the need to assess quality in terms of the continuity of playout of TCP streaming videos, which cannot be properly measured by other objective metrics such as peak signal-to-noise ratio, structural similarity, and buffer underrun or pause frequency. The performance of the analytical model is rigorously verified by simulation results and subjective tests using a range of video clips. It is demonstrated that PI is closely correlated with viewers' opinion scores regardless of the vastly different composition of the individual elements, such as pause duration and pause frequency, which jointly constitute this new quality metric. It is also shown that the correlation performance of PI is consistent and content independent. © 2013 IEEE.
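A hedged toy reading of the metric: treat pause intensity as the fraction of the session that playout spends stalled, driven by a per-tick balance between network throughput and the playout rate. The buffer model, tick size and `resume_level` threshold below are invented for illustration; the paper derives PI analytically rather than by simulation:

```python
def pause_intensity(throughput, playout_rate=1.0, resume_level=2.0):
    """Toy playout-buffer model: per-tick media arrivals vs. a constant
    playout rate. Returns the fraction of ticks spent paused -- one simple
    reading of a pause-based continuity metric."""
    buf, paused_ticks, playing = 0.0, 0, False
    for r in throughput:
        buf += r                     # media seconds received this tick
        if not playing and buf >= resume_level:
            playing = True           # (re)start once enough is buffered
        if playing and buf >= playout_rate:
            buf -= playout_rate      # consume one tick's worth of media
        elif playing:
            playing = False          # buffer underrun: playout pauses
            paused_ticks += 1
        else:
            paused_ticks += 1        # still stalled or initially buffering
    return paused_ticks / len(throughput)
```

When throughput matches the playout rate only the startup delay registers, while persistently insufficient throughput drives the stalled fraction up, which is the throughput-to-continuity link the analytical model formalizes.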
Abstract:
M-Government services are now at the forefront of both user expectations and technology capabilities. In the current setting, there is growing evidence that interoperability is becoming a key issue for service sustainability. The objective of this chapter is therefore to highlight the case of "Beyas Masa", a Turkish application for infrastructure repair services, which requires stakeholders from different cultural backgrounds and geographically dispersed regions to work together. The major aim of this chapter is to showcase experiences with the implementation and adoption of m-Government in Turkey. The study draws on the co-creation literature to investigate the factors influencing the successful implementation of Beyas Masa. It reveals that initiatives are fragmented due to differences in the characteristics of the targeted audience, the marketing strategy, technology supply, distribution, and the media used to promote awareness. The chapter posits that successful m-Government implementation in Turkey requires many of the standalone applications to be integrated to encourage interoperability, and socio-cultural behaviours to be re-shaped to encourage active engagement and interactive government service provision that unlocks the power of ICT.
Abstract:
Diabetes patients may suffer from an unhealthy life, long-term treatment and chronic complications. Decreasing the hospitalization rate is a crucial problem for health care centers. This study combines the bagging method, with decision trees as base classifiers, and cost-sensitive analysis to classify diabetes patients. Real patient data collected from a regional hospital in Thailand were analyzed. Relevant factors were selected and used to construct base-classifier decision tree models to distinguish diabetes from non-diabetes patients. The bagging method was then applied to improve accuracy. Finally, asymmetric classification cost matrices were used to provide alternative models for diabetes data analysis.
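As an illustrative sketch only (invented one-dimensional data, with a threshold "stump" standing in for the study's decision trees and real clinical features), bagging plus a cost-sensitive vote can be put together like this:

```python
import random

def fit_stump(data):
    """Best single-threshold rule (predict 1 when x >= t) for (x, y) pairs."""
    xs = sorted({x for x, _ in data})
    cands = [xs[0] - 1] + [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [xs[-1] + 1]
    return min(cands, key=lambda t: sum((x >= t) != y for x, y in data))

def bagged_stumps(data, n_models=25, seed=1):
    """Bagging: fit one stump per bootstrap resample of the training data."""
    rng = random.Random(seed)
    return [fit_stump([rng.choice(data) for _ in data]) for _ in range(n_models)]

def predict(models, x, fn_cost=1.0, fp_cost=1.0):
    """Cost-sensitive vote: call the case positive (e.g. 'diabetes') whenever
    the expected cost of a miss outweighs that of a false alarm."""
    p_pos = sum(x >= t for t in models) / len(models)
    return int(p_pos * fn_cost >= (1 - p_pos) * fp_cost)
```

Raising `fn_cost` relative to `fp_cost` lowers the vote share needed to flag a patient as positive, which is the effect an asymmetric cost matrix has on the ensemble's decision rule.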