38 results for technology acceptance model
at Cochin University of Science
Abstract:
The Indian economy has witnessed stellar growth over the last few years, with rapid developments on the infrastructural and business fronts during this period. Internet adoption among Indians has been increasing over the last decade, and Indian banks have risen to the occasion by offering new delivery channels to their customers. Internet banking is one such new channel now available to Indian customers, and customer acceptance of it has been good so far. In this study the researcher conducted a qualitative and quantitative investigation of internet banking acceptance among Indian customers, identified important factors that affect customers' behavioural intention towards internet banking, and proposes a research model, extended from the Technology Acceptance Model, for predicting internet banking acceptance. The findings of the study would be useful for Indian banks in planning and upgrading their internet banking services. Banks could increase internet banking adoption by making their customers aware of the usefulness of the service. The study shows that the variable perceived usefulness has a positive influence on internet banking use; acceptance would therefore increase when customers find the service more useful. Banks should plan their marketing campaigns with this factor in mind, since marketing communications that raise consumer awareness would result in better acceptance of internet banking. The variable perceived ease of use also had a positive influence on internet banking use: customers would increase their usage when they find the service easier to use. Banks should therefore make their internet banking sites and interfaces easier to use, and could consider providing practical training sessions at their branches on using the internet banking interface.
Abstract:
The assessment of maturity of software is an important area in the general software sector, and the field of OSS also applies various models to measure software maturity. However, measuring the maturity of OSS used for applications in libraries is an area with no research so far, and this study has attempted to fill that gap. Measuring the maturity of software contributes knowledge on its sustainability over the long term, and maturity is one of the factors that positively influence adoption. The investigator measured the maturity of DSpace software using Woods and Guliani's Open Source Maturity Model (2005). The present study is significant as it addresses the maturity of OSS for libraries and fills the research gap in the area. In this sense the study opens new avenues in the field of library and information science by providing an additional tool for librarians in the selection and adoption of OSS. Measuring maturity brings in-depth knowledge of an OSS, which contributes towards the perceived usefulness and perceived ease of use explained in the Technology Acceptance Model.
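Open source maturity models of this kind typically score a set of assessment elements and combine them into a weighted sum. The sketch below shows that general pattern; the category names, weights and scores are illustrative assumptions, not the study's actual DSpace assessment.

```python
# Illustrative weighted-sum maturity scoring in the general style of
# open source maturity models. Categories, weights and scores are
# assumptions for the sketch, not the study's DSpace results.
scores  = {"software": 8, "support": 7, "documentation": 9,
           "training": 6, "integration": 7, "professional services": 5}
weights = {"software": 4, "support": 2, "documentation": 1,
           "training": 1, "integration": 1, "professional services": 1}

total_weight = sum(weights.values())  # 10 here
maturity = sum(scores[k] * weights[k] for k in scores) / total_weight
print(f"Overall maturity score: {maturity:.1f} / 10")
```

Weighting lets an evaluating library emphasise the categories that matter most for its own adoption decision.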
Abstract:
In this Letter a new physical model for metal-insulator-metal (MIM) CMOS capacitors is presented. In the model, the parameters of the circuit are derived from the physical structural details; physical behaviour due to the metal skin effect and inductance has been considered. The model has been confirmed with a 3D EM simulator, and design rules are proposed. The model is scalable with capacitor geometry, allowing designers to predict and optimize the quality factor. The approach has been verified for MIM CMOS capacitors.
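As a rough illustration of the quality factor such a model lets designers predict, the sketch below applies the textbook first-order expression Q = 1/(2πf·Rs·C) for a capacitor with series parasitic resistance; the frequency and component values are assumed for the example, not taken from the Letter.

```python
import math

# First-order Q of a capacitor with series parasitic resistance R_s:
# Q = 1 / (2*pi*f * R_s * C).  All values below are illustrative.
f   = 5e9      # operating frequency, Hz
C   = 1e-12    # capacitance, F (1 pF)
R_s = 0.5      # series resistance from electrodes/vias, ohm

Q = 1.0 / (2 * math.pi * f * R_s * C)
print(f"Q ~ {Q:.0f}")
```

A physical, geometry-scalable model refines R_s (and adds skin-effect and inductive terms) from the layout, which is what allows Q to be optimized before fabrication.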
Abstract:
Multivariate lifetime data arise in various forms, including recurrent event data, when individuals are followed to observe the sequence of occurrences of a certain type of event; correlated lifetimes arise when an individual is followed for the occurrence of two or more types of events, or when distinct individuals have dependent event times. In most studies there are covariates, such as treatments, group indicators, individual characteristics, or environmental conditions, whose relationship to lifetime is of interest. This leads to a consideration of regression models. The well-known Cox proportional hazards model and its variations, using the marginal hazard functions employed for the analysis of multivariate survival data in the literature, are not sufficient to explain the complete dependence structure of a pair of lifetimes on the covariate vector. Motivated by this, in Chapter 2 we introduced a bivariate proportional hazards model using the vector hazard function of Johnson and Kotz (1975), in which the covariates under study have different effects on the two components of the vector hazard function. The proposed model is useful in real-life situations for studying the dependence structure of a pair of lifetimes on the covariate vector. The well-known partial likelihood approach is used for the estimation of the parameter vectors. We then introduced a bivariate proportional hazards model for gap times of recurrent events in Chapter 3; the model incorporates both marginal and joint dependence of the distribution of gap times on the covariate vector. In many fields of application, the mean residual life function is considered a concept superior to the hazard function. Motivated by this, in Chapter 4 we considered a new semi-parametric model, the bivariate proportional mean residual life time model, to assess the relationship between mean residual life and covariates for gap times of recurrent events.
The counting process approach is used for the inference procedures for the gap times of recurrent events. In many survival studies, the distribution of lifetime may depend on the distribution of censoring time. In Chapter 5, we introduced a proportional hazards model for duration times and developed inference procedures under dependent (informative) censoring. In Chapter 6, we introduced a bivariate proportional hazards model for competing risks data under right censoring. The asymptotic properties of the estimators of the parameters of the different models developed in the previous chapters were studied, and the proposed models were applied to various real-life situations.
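The partial likelihood approach mentioned above can be sketched for the ordinary univariate Cox model, on which the thesis builds. The toy data and the crude grid search below are illustrative only; they are not the thesis's bivariate method.

```python
import math

# Minimal Cox partial likelihood for one covariate (no tied event
# times), maximised by grid search; data are made-up for illustration.
times = [2, 3, 5, 7, 8, 11, 13, 17]
event = [1, 1, 0, 1, 1, 0, 1, 1]    # 1 = event observed, 0 = censored
x     = [1, 1, 0, 1, 0, 0, 0, 0]    # e.g. a treatment indicator

def neg_log_partial_lik(beta):
    nll = 0.0
    for i in range(len(times)):
        if event[i] == 1:
            # risk set: everyone still under observation at times[i]
            risk = sum(math.exp(beta * x[j])
                       for j in range(len(times)) if times[j] >= times[i])
            nll -= beta * x[i] - math.log(risk)
    return nll

# crude 1-D grid search for the maximum partial likelihood estimate
beta_hat = min((b / 100 for b in range(-300, 301)),
               key=neg_log_partial_lik)
print(f"beta_hat ~ {beta_hat:.2f}")
```

In practice the score equation is solved by Newton-Raphson, and the bivariate models of the thesis extend the likelihood to vector hazards, but the risk-set structure above is the common core.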
Abstract:
This study is concerned with Autoregressive Moving Average (ARMA) models of time series. ARMA models form a subclass of the class of general linear models which represent stationary time series, a phenomenon encountered most often in practice by engineers, scientists and economists. It is always desirable to employ models which use parameters parsimoniously; parsimony is achieved by ARMA models because they have only a finite number of parameters. Even though the discussion is primarily concerned with stationary time series, we later take up the case of homogeneous non-stationary time series which can be transformed to stationary time series. Time series models, obtained with the help of present and past data, are used for forecasting future values. The physical sciences as well as the social sciences benefit from forecasting models. The role of forecasting cuts across all fields of management: finance, marketing, production and business economics, as well as signal processing, communication engineering, chemical processes, electronics, etc. This high applicability of time series is the motivation for this study.
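A minimal ARMA(1,1) sketch makes the forecasting idea concrete: simulate X_t = phi*X_{t-1} + e_t + theta*e_{t-1} and produce a one-step-ahead forecast from the last observation and innovation. The parameter values are chosen for illustration only.

```python
import random

random.seed(42)

# Simulate an ARMA(1,1) series: X_t = phi*X_{t-1} + e_t + theta*e_{t-1}
# A parsimonious model: just two parameters describe the dynamics.
phi, theta, n = 0.6, 0.3, 500
e_prev, x_prev = 0.0, 0.0
series = []
for _ in range(n):
    e = random.gauss(0, 1)
    x = phi * x_prev + e + theta * e_prev
    series.append(x)
    x_prev, e_prev = x, e

# One-step-ahead forecast given the last observation and innovation:
forecast = phi * x_prev + theta * e_prev
print(f"last observation: {x_prev:.3f}, one-step forecast: {forecast:.3f}")
```

With |phi| < 1 the series is stationary, so its sample mean hovers near zero; in applied work the parameters would be estimated from data (e.g. by maximum likelihood) rather than assumed.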
Abstract:
The mechanism of devulcanization of sulfur-vulcanized natural rubber with aromatic disulfides and aliphatic amines has been studied using 2,3-dimethyl-2-butene (C6H12) as a low-molecular-weight model compound. First, C6H12 was vulcanized with a mixture of sulfur, zinc stearate and N-cyclohexyl-2-benzothiazylsulfenamide (CBS) as accelerator at 140 °C, resulting in a mixture of addition products (C6H11-Sx-C6H11). The compounds were isolated and identified by High Performance Liquid Chromatography (HPLC) with respect to their various sulfur ranks. In the second stage, the vulcanized products were devulcanized using the agents mentioned above at 200 °C. The kinetics and chemistry of the breakdown of the sulfur bridges were monitored. Both devulcanization agents decompose sulfidic vulcanization products with sulfur ranks of 3 or higher quite effectively and with comparable speed. Diphenyl disulfide as devulcanization agent gives rise to a high amount of mono- and disulfidic compounds formed during the devulcanization; hexadecylamine, as devulcanization agent, prevents these lower sulfur ranks from being formed.
Science and technology of rubber reclamation with special attention to NR-based waste latex products
Abstract:
A comprehensive overview of the reclamation of cured rubber, with special emphasis on latex reclamation, is presented in this paper. The latex industry has expanded over the years to meet world demand for gloves, condoms, latex thread, etc. Due to the strict specifications for the products and the unstable nature of the latex, as much as 15% of the final latex products are rejected. As waste latex rubber (WLR) represents a source of high-quality rubber hydrocarbon, it is a potential candidate for generating reclaimed rubber of superior quality. The role of the different components in the reclamation recipe is explained, and the reaction mechanism and chemistry during reclamation are discussed in detail. Different types of reclaiming processes are described, with special reference to processes which selectively cleave the crosslinks in the vulcanized rubber. The state-of-the-art techniques of reclamation, with special attention to latex treatment, are reviewed. An overview of the latest developments concerning fundamental studies in the field of rubber recycling by means of low-molecular-weight compounds is given. A mathematical model description of main-chain and crosslink scission during devulcanization of a rubber vulcanizate is also given.
Studies on Pseudoscalar Meson Bound States and Semileptonic Decays in a Relativistic Potential Model
Abstract:
In this thesis quark-antiquark bound states are considered using a relativistic two-body equation for Dirac particles. The mass spectrum of mesons includes bound states involving two heavy quarks or one heavy and one light quark. In order to analyse these states within a unified formalism, it is desirable to have a two-fermion equation that reduces to a one-body Dirac equation with a static interaction for the light quark when the other particle's mass tends to infinity. A suitable two-body equation has been developed by Mandelzweig and Wallace. This equation is solved in momentum space and is used to describe the complete spectrum of mesons. The potential used in this work contains a short-range one-gluon exchange interaction and long-range linear confining and constant potential terms. The model is used to investigate the decay processes of heavy mesons. Semileptonic decays are more tractable since there are no final-state interactions between the leptons and hadrons that would otherwise complicate the situation. Studies of B and D meson decays help in understanding the nonperturbative strong interactions of heavy mesons, which in turn is useful for extracting the details of the weak interaction process. Calculations of the form factors for these semileptonic decays of pseudoscalar mesons are also presented.
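The potential described above (short-range one-gluon exchange plus linear confinement plus a constant) has the familiar Cornell form V(r) = -(4/3)*alpha_s/r + b*r + c. The sketch below evaluates it at a few separations; the parameter values are typical illustrative choices, not the fitted values of this work.

```python
# Cornell-type quark-antiquark potential: a Coulomb-like one-gluon
# exchange term plus linear confinement and a constant shift.
# V(r) = -(4/3)*alpha_s/r + b*r + c   (natural units)
alpha_s = 0.3   # strong coupling (assumed)
b = 0.18        # string tension, GeV^2 (a typical fitted value)
c = -0.3        # constant shift, GeV (assumed)

def V(r):
    """Potential in GeV; r in GeV^-1 (natural units)."""
    return -(4.0 / 3.0) * alpha_s / r + b * r + c

for r in (1.0, 2.5, 5.0):
    print(f"V({r}) = {V(r):+.3f} GeV")
```

At short range the Coulomb-like term dominates (attractive), while at large separation the linear term grows without bound, which is what confines the quarks.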
Abstract:
The electron-phonon interaction is considered within the framework of the fluctuating valence of Cu atoms. Anderson's lattice Hamiltonian is suitably modified to take this into account. Using the Green's function technique, the possible quasiparticle excitations are determined. The quantity 2Δk(0)/kBTc is calculated for Tc = 40 K. The calculated values are in good agreement with the experimental results.
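For scale, weak-coupling BCS theory predicts 2Δ(0)/kBTc ≈ 3.52, so for Tc = 40 K the corresponding gap can be estimated as below. This is a textbook reference point for comparison, not the value calculated in the abstract.

```python
# Weak-coupling BCS reference: 2*Delta(0)/(kB*Tc) ~ 3.52.
# For Tc = 40 K this sets the energy scale of the gap.
kB = 8.617e-5          # Boltzmann constant, eV/K
Tc = 40.0              # critical temperature, K
bcs_ratio = 3.52       # weak-coupling BCS value of 2*Delta(0)/(kB*Tc)

delta0_eV = bcs_ratio * kB * Tc / 2.0
print(f"Delta(0) ~ {delta0_eV * 1000:.2f} meV")
```

Deviations of the calculated ratio from 3.52 indicate how far the coupling in the model departs from the weak-coupling BCS limit.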
Abstract:
In the present scenario of energy demand overtaking energy supply, top priority is given to energy conservation programs and policies. Most process plants are operated on a continuous basis and consume large quantities of energy. Efficient management of a process system can lead to energy savings, improved process efficiency, lower operating and maintenance costs, and greater environmental safety. Reliability and maintainability of the system are usually considered at the design stage and depend on the system configuration. However, with the growing need for energy conservation, most existing process systems are either modified or in a state of modification with a view to improving energy efficiency. Often these modifications result in a change in system configuration, thereby affecting the system reliability. It is important that system modifications for improving energy efficiency should not come at the cost of reliability. Any new proposal for improving the energy efficiency of a process or equipment must prove economically feasible to gain acceptance for implementation. To arrive at the economic feasibility of a new proposal, the general trend is to compare the benefits that can be derived over the lifetime, as well as the operating and maintenance costs, with the investment to be made. Quite often the reliability aspects (or the loss due to unavailability) are not taken into consideration, yet plant availability is a critical factor in the economic performance evaluation of any process plant. The focus of the present work is to study the effect of system modification for improving energy efficiency on system reliability. A generalized model for the valuation of a process system incorporating reliability is developed, which is used as a tool for the analysis.
It can provide an awareness of the potential performance improvements of the process system and can be used to arrive at the change in process system value resulting from system modification. The model also arrives at the payback of the modified system by taking reliability aspects into consideration, and it is used to study the effect of various operating parameters on system value. The concept of breakeven availability is introduced, and an algorithm for allocating component reliabilities of the modified process system based on the breakeven system availability is also developed. The model was applied to various industrial situations.
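The breakeven-availability idea can be sketched as follows: an energy-efficiency modification pays off only when the plant runs often enough for the hourly savings to cover the annualized cost of the investment. All figures below are illustrative assumptions, not taken from the study.

```python
# Breakeven availability sketch (all figures are illustrative).
hours_per_year   = 8760.0
savings_per_hour = 50.0        # energy cost saved while running (assumed)
annualized_cost  = 300_000.0   # investment + extra maintenance (assumed)

# At breakeven:  A * hours_per_year * savings_per_hour = annualized_cost
breakeven_A = annualized_cost / (hours_per_year * savings_per_hour)
print(f"breakeven availability: {breakeven_A:.3f}")

# Compare with the modified system's predicted availability:
mtbf, mttr = 1000.0, 40.0      # hours (assumed)
A = mtbf / (mtbf + mttr)
print(f"predicted availability: {A:.3f} ->",
      "modification pays off" if A > breakeven_A else "does not pay off")
```

This is why a modification that changes the system configuration must be evaluated with its new reliability, not only its energy savings: a drop in MTBF can push the achievable availability below the breakeven point.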
Abstract:
This thesis presents a methodology for linking Total Productive Maintenance (TPM) and Quality Function Deployment (QFD). The synergic power of TPM and QFD led to the formation of a new maintenance model named Maintenance Quality Function Deployment (MQFD). This model was found to be so powerful that it could overcome the drawbacks of TPM by taking care of customer voices. Those voices of customers are used to develop the house of quality. The outputs of the house of quality, which are in the form of technical languages, are submitted to the top management for making strategic decisions. The technical languages, which are concerned with enhancing maintenance quality, are strategically directed by the top management towards the adoption of the eight TPM pillars. The TPM characteristics developed through the eight pillars are fed into the production system, where their implementation is focused on increasing the values of the maintenance quality parameters, namely overall equipment effectiveness (OEE), mean time between failures (MTBF), mean time to repair (MTTR), performance quality, availability and mean down time (MDT). The outputs from the production system are required to be reflected in the form of business values, namely improved maintenance quality, increased profit, upgraded core competence, and enhanced goodwill. A unique feature of the MQFD model is that it is not necessary to change or dismantle the existing processes of developing the house of quality and TPM projects which may already be in practice in the company concerned. Thus, the MQFD model enables the tactical marriage between QFD and TPM. First, the literature was reviewed; the results of this review indicated that no activities had so far been reported on integrating QFD in TPM and vice versa. During the second phase, a survey was conducted in six companies in which TPM had been implemented.
The objective of this survey was to locate any traces of QFD implementation in the TPM programmes being implemented in these companies. The survey results indicated that no effort to integrate QFD in TPM had been made in these companies. After completing these two phases, the MQFD model was designed; the details of this work are presented in this thesis. Following this, explorative studies on implementing the MQFD model in real-time environments were conducted. In addition, an empirical study was carried out to examine the receptivity of the MQFD model among practitioners and across multifarious organizational cultures. Finally, a sensitivity analysis was conducted to find the hierarchy of the various factors influencing MQFD in a company. Throughout the research, the theory and practice of MQFD were juxtaposed by presenting and publishing papers among scholarly communities and conducting case studies in real-time scenarios.
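The maintenance quality parameters named above are related by standard formulas: availability = MTBF/(MTBF + MTTR), and OEE is the product of availability, performance rate and quality rate. The figures in the sketch are illustrative, not from the study.

```python
# Standard relations among the maintenance quality parameters
# (figures are illustrative, not from the study).
mtbf = 200.0    # mean time between failures, hours
mttr = 8.0      # mean time to repair, hours

availability = mtbf / (mtbf + mttr)

performance_rate = 0.90   # actual vs. ideal production speed (assumed)
quality_rate     = 0.98   # good units / total units (assumed)

# Overall equipment effectiveness combines all three factors:
oee = availability * performance_rate * quality_rate
print(f"availability = {availability:.3f}, OEE = {oee:.3f}")
```

Tracking these parameters before and after deploying the eight TPM pillars is how the MQFD outputs can be tied back to measurable business value.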