954 results for Model information


Relevance: 30.00%

Publisher:

Abstract:

While many studies have explored conditions and consequences of information systems adoption and use, few have focused on the final stages of the information system lifecycle. In this paper, I develop a theoretical and an initial empirical contribution to understanding individuals’ intentions to discontinue the use of an information system. This understanding is important because it yields implications about maintenance, retirement, and users’ switching decisions, which ultimately can affect work performance, system effectiveness, and return on technology investments. In this paper, I offer a new conceptualization of factors determining users’ intentions to discontinue the use of information systems. I then report on a preliminary empirical test of the model using data from a field study of information system users in a promotional planning routine in a large retail organization. Results from the empirical analysis provide initial support for the theoretical model. I discuss the work’s implications for theory on information systems continuance and dual-factor logic in information system use. I also provide suggestions for managers dealing with cessation of information systems and broader work routine change in organizations due to information system end-of-life decisions.


Data mining is the nontrivial process of extracting knowledge or patterns from large databases. Genetic Algorithms are efficient and robust search and optimization methods used in data mining. In this paper we propose a Self-Adaptive Migration Model GA (SAMGA), in which the population size, the number of crossover points, and the mutation rate for each population are set adaptively. Further, the migration of individuals between populations is decided dynamically. The paper gives a mathematical schema analysis of the method, showing that the algorithm exploits previously discovered knowledge for a more focused and concentrated search of heuristically high-yielding regions while simultaneously performing a highly explorative search over the other regions of the search space. The effective performance of the algorithm is then shown using standard testbed functions and a set of real classification data mining problems. A Michigan-style classifier was used to build the classifier, and the system was tested with machine learning databases including the Pima Indian Diabetes and Wisconsin Breast Cancer databases, among others. On these problems, our algorithm outperforms the compared methods.
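The multi-population search that SAMGA adapts can be sketched as a plain island-model GA. This is a minimal sketch with fixed rather than self-adaptive parameters and a toy OneMax objective; both are simplifying assumptions, not the paper's setup:

```python
import random

def evolve_islands(fitness, n_islands=4, pop_size=20, genome_len=16,
                   generations=50, migrate_every=10, seed=0):
    """Minimal island-model GA: several populations evolve independently
    and periodically exchange their best individuals (ring migration)."""
    rng = random.Random(seed)
    islands = [[[rng.randint(0, 1) for _ in range(genome_len)]
                for _ in range(pop_size)]
               for _ in range(n_islands)]

    def step(pop):
        new = []
        for _ in range(pop_size):
            # tournament selection of two parents, then one-point crossover
            a = max(rng.sample(pop, 3), key=fitness)
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.05:  # occasional single bit-flip mutation
                i = rng.randrange(genome_len)
                child[i] ^= 1
            new.append(child)
        return new

    for gen in range(generations):
        islands = [step(pop) for pop in islands]
        if (gen + 1) % migrate_every == 0:
            # each island's best replaces a random member of the next island
            bests = [max(pop, key=fitness) for pop in islands]
            for i in range(n_islands):
                islands[i][rng.randrange(pop_size)] = bests[i - 1][:]
    return max((max(pop, key=fitness) for pop in islands), key=fitness)

# OneMax toy objective: maximize the number of 1-bits
best = evolve_islands(fitness=sum)
print(sum(best))
```

SAMGA's contribution is precisely what this sketch hard-codes: it lets each population tune its own size, crossover points, and mutation rate, and decides migration dynamically rather than on a fixed schedule.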


We address the issue of rate-distortion (R/D) performance optimality of the recently proposed switched split vector quantization (SSVQ) method. The distribution of the source is modeled using a Gaussian mixture density, and thus the non-parametric SSVQ is analyzed in a parametric, model-based framework for achieving optimum R/D performance. Using high-rate quantization theory, we derive the optimum bit allocation formulae for the intra-cluster split vector quantizer (SVQ) and the inter-cluster switching. For wide-band speech line spectrum frequency (LSF) parameter quantization, it is shown that the Gaussian mixture model (GMM) based parametric SSVQ method provides a 1 bit/vector advantage over the non-parametric SSVQ method.
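The paper's optimal allocation builds on high-rate quantization theory. As a generic illustration of the idea, here is the textbook log-variance allocation rule for scalar components, not the paper's SVQ- and switching-specific formulae:

```python
import math

def allocate_bits(variances, total_bits):
    """Textbook high-rate bit allocation across scalar components:
    b_i = b_avg + 0.5 * log2(var_i / geometric_mean(variances)).
    Higher-variance components receive more bits; the total is preserved."""
    n = len(variances)
    b_avg = total_bits / n
    mean_log = sum(math.log2(v) for v in variances) / n  # log2 of the geometric mean
    return [b_avg + 0.5 * (math.log2(v) - mean_log) for v in variances]

bits = allocate_bits([4.0, 1.0, 0.25], total_bits=9)
print([round(b, 2) for b in bits])  # [4.0, 3.0, 2.0]
```

In the paper this style of derivation is carried out per GMM cluster, with additional bits allocated to selecting (switching between) clusters.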


Feature-track matrix factorization based methods have been attractive solutions to the Structure-from-motion (SfM) problem. Group motion of the feature points is analyzed to obtain the 3D information. It is well known that the factorization formulations give rise to a rank-deficient system of equations. Even when enough constraints exist, the extracted models are sparse due to the unavailability of pixel-level tracks. Pixel-level tracking of 3D surfaces is a difficult problem, particularly when the surface has very little texture, as in a human face. Only sparsely located feature points can be tracked, and tracking errors are inevitable along rotating low-texture surfaces. However, the 3D models of an object class lie in a subspace of the set of all possible 3D models. We propose a novel solution to the Structure-from-motion problem which utilizes high-resolution 3D data obtained from a range scanner to compute a basis for this desired subspace. Adding subspace constraints during factorization also facilitates removal of tracking noise, which causes distortions outside the subspace. We demonstrate the effectiveness of our formulation by extracting dense 3D structure of a human face and comparing it with a well-known Structure-from-motion algorithm due to Brand.
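The subspace-constraint idea can be sketched as a rank truncation (the classical factorization step) followed by projection onto a known basis. This is a schematic toy in which the basis stands in for the range-scanner models; it is not Brand's algorithm or the paper's exact method:

```python
import numpy as np

def denoise_tracks(W, basis, rank=4):
    """Denoise a feature-track matrix W: (1) keep its best rank-r
    approximation, as in Tomasi-Kanade style factorization, then
    (2) project the result onto the column space of `basis`, discarding
    noise components that lie outside the object-class subspace."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r truncation
    Q, _ = np.linalg.qr(basis)                   # orthonormalise the basis
    return Q @ (Q.T @ W_r)                       # orthogonal projection

# toy demo: a rank-2 "shape" corrupted by noise, with the true basis known
rng = np.random.default_rng(0)
basis = rng.standard_normal((20, 2))
clean = basis @ rng.standard_normal((2, 30))
noisy = clean + 0.1 * rng.standard_normal((20, 30))
denoised = denoise_tracks(noisy, basis, rank=2)
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))  # True
```

The projection step is why tracking noise that pushes the solution outside the subspace is removed: any component orthogonal to the basis is zeroed out.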


The introduction of casemix funding for Australian acute health care services has challenged Social Work to establish clear reporting mechanisms, demonstrate effective practice, and justify the interventions provided. The term 'casemix' describes the mix and type of patients treated by a hospital or other health care service. There is wide acknowledgement that the procedure-based system of Diagnosis Related Groupings (DRGs) is grounded in a medical/illness perspective and is unsatisfactory for describing and predicting the activity of Social Work and other allied health professions in health care service delivery. The National Allied Health Casemix Committee was established in 1991 as the peak body representing allied health professions in matters related to casemix classification. This Committee has pioneered a nationally consistent, patient-centred information system for allied health. This paper describes the classification systems and codes developed for Social Work, which include a minimum data set, a classification hierarchy, a set of activity (input) codes, and 'indicator for intervention' codes. The advantages and limitations of the system are also discussed.


The NK model, proposed by Kauffman (1993), is a powerful simulation framework for studying competitive dynamics. It has been applied in several social science fields, for instance organization science. However, like many other simulation methods, the NK model has received little attention in the Management Information Systems (MIS) discipline. This tutorial therefore introduces the NK model in an accessible way and aims to encourage related studies. To demonstrate how the NK model works, the tutorial reproduces several of Levinthal’s (1997) experiments. It also clarifies the relationship between the NK model and agent-based modeling (ABM); this relationship can serve as a theoretical basis for extending the NK model framework to other research scenarios. For example, the tutorial provides an NK model solution for studying the IT value co-creation process by extending the network structure and agent interactions.
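A minimal NK landscape with one locally searching agent, in the spirit of Levinthal's (1997) experiments, can be sketched as follows. The circular neighbourhood and all function names are illustrative choices, not the tutorial's code:

```python
import itertools
import random

def nk_landscape(N, K, seed=0):
    """Random NK fitness landscape: N loci, each epistatically coupled to
    its next K (circular) neighbours. Returns a fitness function over
    0/1 genomes; higher K means a more rugged landscape."""
    rng = random.Random(seed)
    # one contribution table per locus, indexed by the configuration of
    # that locus and its K neighbours
    tables = [{bits: rng.random()
               for bits in itertools.product((0, 1), repeat=K + 1)}
              for _ in range(N)]

    def fitness(genome):
        total = 0.0
        for i in range(N):
            bits = tuple(genome[(i + j) % N] for j in range(K + 1))
            total += tables[i][bits]
        return total / N  # average contribution, lies in [0, 1]
    return fitness

def hill_climb(fitness, N, steps=1000, seed=1):
    """One adaptive agent: flip a random bit, keep the flip only if
    fitness does not decrease (local search toward a local peak)."""
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(N)]
    best = fitness(genome)
    for _ in range(steps):
        i = rng.randrange(N)
        genome[i] ^= 1
        f = fitness(genome)
        if f >= best:
            best = f
        else:
            genome[i] ^= 1  # revert the flip
    return best

f = nk_landscape(N=10, K=2)
print(round(hill_climb(f, N=10), 3))  # fitness of a local optimum, in [0, 1]
```

Replacing the single hill-climber with a population of interacting agents on a shared landscape is what connects this construction to ABM.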


We present a search for standard model Higgs boson production in association with a W boson in proton-antiproton collisions at a center of mass energy of 1.96 TeV. The search employs data collected with the CDF II detector that correspond to an integrated luminosity of approximately 1.9 inverse fb. We select events consistent with a signature of a single charged lepton, missing transverse energy, and two jets. Jets corresponding to bottom quarks are identified with a secondary vertex tagging method, a jet probability tagging method, and a neural network filter. We use kinematic information in an artificial neural network to improve discrimination between signal and background compared to previous analyses. The observed number of events and the neural network output distributions are consistent with the standard model background expectations, and we set 95% confidence level upper limits on the production cross section times branching fraction ranging from 1.2 to 1.1 pb or 7.5 to 102 times the standard model expectation for Higgs boson masses from 110 to 150 GeV/c^2, respectively.


The ProFacil model is a generic process model defined as a framework model showing the links between the facilities management process and the building end user’s business process. The purpose of using the model is to support more detailed process modelling. The model has been developed using the IDEF0 modelling method. The ProFacil model describes business activities from a generalized point of view as management, support, and core processes and their relations. The model defines basic activities in the provision of a facility. Examples of these activities are “operate facilities”, “provide new facilities”, “provide re-build facilities”, “provide maintained facilities” and “perform dispose of facilities”. These are all generic activities providing a basis for further specialisation of company-specific FM activities and their tasks. A facilitator can establish a specialized process model using the ProFacil model and interacting with company experts to describe their company’s specific processes. These modelling seminars or interviews are conducted informally, supported by the high-level process model as a common reference.


The goal of the single building information model has existed for at least thirty years, and various standards have been published leading up to the ten-year development of the Industry Foundation Classes (IFCs). These have been initiatives from researchers, software developers and standards committees. Now large property owners are becoming aware of the benefits of moving IT tools from specific applications towards more comprehensive solutions. This study addresses the state of Building Information Models (BIMs) and the conditions necessary for them to become more widely used. It is a qualitative study based on information from a number of international experts, who were asked a series of questions about the feasibility of BIMs, the conditions necessary for their success, and the role of standards with particular reference to the IFCs. Some key statements were distilled from the diverse answers received. They indicate that BIM solutions appear too complex for many users and may need to be applied in limited areas initially. Standards are generally supported but not applied rigorously, and a range of them are relevant to BIM. Benefits will depend upon the building procurement methods used, and there should be special roles within the project team to manage information. Case studies are starting to appear, and these could be used for publicity. The IFCs are rather oversold, and their complexities should be hidden within simple-to-use software. Inevitably, major questions remain, and property owners may be the key to answering some of them. A framework for presenting standards, backed up by case studies of successful projects, is the solution proposed to provide better information on where particular BIM standards and solutions should be applied in building projects.


The Internet has made possible the cost-effective dissemination of scientific journals in the form of electronic versions, usually in parallel with the printed versions. At the same time the electronic medium also makes possible totally new open access (OA) distribution models, funded by author charges, sponsorship, advertising, voluntary work, etc., where the end product is free in full text to the readers. Although more than 2,000 new OA journals have been founded in the last 15 years, the uptake of open access has been rather slow, with currently around 5% of all peer-reviewed articles published in OA journals. The slow growth can to a large extent be explained by the fact that open access has predominantly emerged via newly founded journals and startup publishers. Established journals and publishers have not had strong enough incentives to change their business models, and the commercial risks in doing so have been high. In this paper we outline and discuss two different scenarios for how scholarly publishers could change their operating model to open access. The first is based on an instantaneous change and the second on a gradual change. We propose a way to manage the gradual change by bundling traditional “big deal” licenses and author charges for opening access to individual articles.


This article discusses the scope of research on the application of information technology in construction (ITC). A model of the information and material activities which together constitute the construction process is presented, using the IDEF0 activity modelling methodology. Information technology is defined to include all kinds of technology used for the storage, transfer and manipulation of information, thus also including devices such as copying machines, faxes and mobile phones. Using the model, the domain of ITC research is defined as the use of information technology to facilitate and re-engineer the information process component of construction. Developments in IT use in construction during the last decades are discussed against the background of a simplified model of generic information processing tasks. The scope of ITC is compared with the scopes of research in related areas such as design methodology, construction management and facilities management. Health care is proposed as an interesting alternative to the often-used car manufacturing industry as an IT application domain for comparison. Some key areas of ITC research in recent years, namely expert systems, company IT strategies, and product modelling, are briefly discussed. The article finishes with a short discussion of the problems of applying standard scientific methodology in ITC research, in particular in product model research.


A functioning stock market is an essential component of a competitive economy, since it provides a mechanism for allocating the economy’s capital stock. In an ideal situation, the stock market will steer capital in a manner that maximizes the total utility of the economy. As prices of traded stocks depend on and vary with information available to investors, it is apparent that information plays a crucial role in a functioning stock market. However, even though information indisputably matters, several questions about how stock markets process and react to new information remain unanswered. The purpose of this thesis is to explore the link between new information and stock market reactions. The first essay utilizes new methodological tools to investigate the average reaction of investors to new financial statement information. The second essay explores the behavior of different types of investors when new financial statement information is disclosed to the market. The third essay looks into the interrelation between investor size, behavior and overconfidence. The fourth essay approaches the puzzle of negative skewness in stock returns from an altogether different angle than previous studies. The first essay presents evidence that the second derivatives of some financial statement signals contain more information than the first derivatives. Further, empirical evidence indicates that some of the investigated signals proxy for risk while others contain information priced with a delay. The second essay documents that different categories of investors demonstrate systematic differences in their behavior when new financial statement information arrives to the market. In addition, a theoretical model building on differences in investor overconfidence is put forward to explain the observed behavior. The third essay shows that investor size describes investor behavior very well. This finding is predicted by the model proposed in the second essay, and hence strengthens the model. The behavioral differences between investors of different sizes furthermore have significant economic implications. Finally, the fourth essay finds strong evidence that management news disclosure practices cause negative skewness in stock returns.
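The "first and second derivatives" of a financial statement signal in the first essay are discrete differences over reporting periods; a toy illustration with a hypothetical earnings series, not data from the thesis:

```python
def derivatives(series):
    """First and second discrete differences of a periodically reported
    signal: the first derivative is the change between periods, the
    second derivative the change in that change (acceleration)."""
    first = [b - a for a, b in zip(series, series[1:])]
    second = [b - a for a, b in zip(first, first[1:])]
    return first, second

earnings = [100, 110, 125, 130]  # hypothetical annual figures
d1, d2 = derivatives(earnings)
print(d1, d2)  # [10, 15, 5] [5, -10]
```

Under this reading, the essay's claim is that the acceleration series (d2) carries pricing-relevant information beyond the plain change series (d1).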