12 results for information grounds theory
in Digital Commons at Florida International University
Abstract:
Geographic Information Systems (GIS) are an emerging information technology (IT) that promises to exert large-scale influence on how spatially distributed resources are managed. The technology has been applied to issues as diverse as recovery from Hurricane Andrew and support of military operations in Desert Storm. Implementation of GIS is an important issue because setting such systems up demands substantial cost and time. An important component of the implementation problem is the "meaning" that the different groups of people influencing the implementation give to the technology. The theoretical stance to the problem was based on the "Social Construction of Knowledge," which assumes that knowledge systems are subject to sociological analysis both in usage and in content. An interpretive research approach was adopted to inductively derive a model that explains how the "meanings" of a GIS are socially constructed. The research design entailed a comparative case analysis across two county sites that were using the same GIS for a variety of purposes. A total of 75 in-depth interviews were conducted to elicit interpretations of the GIS. Results indicate that differences in how geographers and data processors view the technology led to different implementation patterns at the two sites.
Abstract:
This dissertation examines the consequences of Electronic Data Interchange (EDI) use on interorganizational relations (IR) in the retail industry. EDI is a type of interorganizational information system that facilitates the exchange of business documents in structured, machine-processable form. The research model links EDI use to three IR dimensions: structural, behavioral, and outcome. Based on relevant literature from organizational theory and marketing channels, fourteen hypotheses were proposed for the relationships among EDI use and the three IR dimensions. Data were collected through self-administered questionnaires from key informants in 97 retail companies (19% response rate). The hypotheses were tested using multiple regression analysis. The analysis supports the following hypotheses: (a) EDI use is positively related to information intensity and formalization, (b) formalization is positively related to cooperation, (c) information intensity is positively related to cooperation, (d) conflict is negatively related to performance and satisfaction, (e) cooperation is positively related to performance, and (f) performance is positively related to satisfaction. The results support the general premise of the model that the relationship between EDI use and satisfaction among channel members has to be viewed within an interorganizational context. Research on EDI is still at a nascent stage. By identifying and testing relevant interorganizational variables, this study offers insights for practitioners managing boundary-spanning activities in organizations using or planning to use EDI. Further, the thesis provides avenues for future research aimed at understanding the consequences of this interorganizational information technology.
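Each hypothesis above is a directional regression claim. A minimal sketch of how two of them might be tested in Python follows (the file name and column names are hypothetical placeholders, not the dissertation's survey items):

```python
# Hedged sketch: testing hypotheses (a) and (b) with ordinary least
# squares. The CSV file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("edi_survey.csv")  # hypothetical key-informant data

# H(a), in part: EDI use is positively related to formalization.
fit_a = sm.OLS(df["formalization"], sm.add_constant(df["edi_use"])).fit()

# H(b): formalization is positively related to cooperation.
fit_b = sm.OLS(df["cooperation"], sm.add_constant(df["formalization"])).fit()

# A positive, significant slope supports the hypothesized direction.
print(fit_a.params["edi_use"], fit_a.pvalues["edi_use"])
print(fit_b.params["formalization"], fit_b.pvalues["formalization"])
```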
Abstract:
The ultimate intent of this dissertation was to broaden and strengthen our understanding of IT implementation by focusing research efforts on the dynamic nature of the implementation process. More specifically, efforts were directed toward opening the "black box" and providing the story that explains how and why contextual conditions and implementation tactics interact to produce project outcomes. In pursuit of this objective, the dissertation was aimed at theory building and adopted a case study methodology combining qualitative and quantitative evidence. Specifically, it examined the implementation process, use, and consequences of three clinical information systems at Jackson Memorial Hospital, a large tertiary care teaching hospital. As a preliminary step toward the development of a more realistic model of system implementation, the study proposes a new set of research propositions reflecting the dynamic nature of the implementation process. Findings clearly reveal that successful implementation projects are likely to be those where key actors envision end goals, anticipate challenges ahead, and recognize and seize opportunities. It was also found that IT implementation is characterized by the systems-theory notion of equifinality; that is, there are likely several equally effective ways to achieve a given end goal. The selection of a particular implementation strategy appears to be a rational process in which actions and decisions are largely influenced by the degree to which key actors recognize the mediating role of each tactic and are motivated to action. The nature of the implementation process is also characterized by the concept of "duality of structure"; that is, context and actions mutually influence each other. Another key finding suggests that there is no underlying program that regulates the process of change and moves it from one given point toward a subsequent and already prefigured end. For this reason, the implementation process cannot be thought of as a series of activities performed in a sequential manner, as conceived in stage models. Finally, it was found that IT implementation is punctuated by a certain indeterminacy. Results suggest that unfavorable and undesirable consequences become less likely only when substantial effort is focused on what to look for and think about.
Abstract:
This dissertation examines one category of international capital flows, private portfolio investments ("private" refers to the source of the capital). There is an overall lack of a coherent and consistent definition of foreign portfolio investment; we clarify these definitional issues. Two main questions that pertain to private foreign portfolio investment (FPI) are explored. The first is the phenomenon of home preference, often referred to as home bias. Related to this are the observed cross-investment flows between countries that seem to contradict the textbook rendition of private FPI. The theories purporting to resolve the home preference puzzle (and the cross-investment puzzle) are summarized and evaluated. Most of this literature considers investors from major developed countries; I consider as well whether investors in less developed countries exhibit home preference. The dissertation shows that home preference is indeed pervasive and profound across countries, in both developed and emerging markets. For the U.S., I examine home bias in combined equity and bond holdings as well, and find that home bias is greater for combined equity and bond holdings than for equity holdings alone. A model is then developed to explain home bias. The model is original and fills a gap in the literature, as there have been no satisfactory models that handle both home preference and cross-border holdings at the same time in the context of information asymmetries. The model reflects what we see in the data and permits us to derive results using comparative statics. It suggests, counterintuitively, that as the rate of return in a country relative to the world rate of return increases, home preference decreases. In the context of our relatively simple model, we ascribe this result to the higher variance that accompanies the now-higher return on home assets. We also find, this time as intended, that as risk aversion increases, investors diversify further, so that home preference decreases. The second question the dissertation addresses is the volatility of private foreign portfolio investment. Recipient countries have been wary of such flows because of their perceived volatility, often contrasted with the perceived stability of foreign direct investment flows. I analyze the validity of these concerns using first net flow data and then gross flow data. The results show that FPI is not, in relative terms, more volatile than other flows in our sample of eight countries (half developed, half emerging markets). The implication is that restricting FPI flows may be harmful in the sense that private capital may not be allocated efficiently worldwide, to the detriment of capital-poor economies; any such restrictions would in fact be misguided.
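The counterintuitive comparative static admits a deliberately simplified mean-variance illustration (a sketch consistent with the reasoning above, not the dissertation's actual model; the functional form for the variance is an assumption made here for illustration):

```latex
% Simplified illustration, not the dissertation's model: a mean-variance
% investor with risk aversion \lambda allocates the home-asset weight
\[
  w_h^{*} \;=\; \frac{\mu_h}{\lambda\,\sigma_h^{2}} .
\]
% If a higher relative home return brings a more-than-proportional rise
% in variance, say \sigma_h^{2} = c\,\mu_h^{2} (an assumed form), then
\[
  w_h^{*} \;=\; \frac{1}{\lambda\,c\,\mu_h},
\]
% which falls as the relative home return \mu_h rises (less home
% preference) and as risk aversion \lambda rises (more diversification),
% matching the direction of both comparative statics reported above.
```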
Abstract:
Extreme stock price movements are of great concern to both investors and the entire economy. For investors, a single negative return, or a combination of several smaller returns, can possibly wipe out so much capital that the firm or portfolio becomes illiquid or insolvent. If enough investors experience such losses, the shock can propagate to the entire economy, as in the stock market crash of 1987. Furthermore, there has been considerable recent interest in the increasing volatility of stock prices. This study presents an analysis of extreme stock price movements. The data utilized were the daily returns of the Standard and Poor's 500 index from January 3, 1978 to May 31, 2001. Research questions were analyzed using the statistical models provided by extreme value theory. One of the difficulties in examining stock price data is that there is no consensus regarding the correct shape of the distribution function generating the data. An advantage of extreme value theory is that no detailed knowledge of this distribution function is required to apply the asymptotic theory; we focus on the tail of the distribution. Extreme value theory allows us to estimate a tail index, which we use to derive bounds on returns corresponding to very low exceedance probabilities. Such information is useful in evaluating the volatility of stock prices. There are three possible limit laws for the maximum: Gumbel (thin-tailed), Fréchet (thick-tailed), or Weibull (no tail). Results indicated that extreme returns during the time period studied follow a Fréchet distribution. Thus, this study finds that extreme value analysis is a valuable tool for examining stock price movements and can be more efficient than the usual variance in measuring risk.
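For reference, the three limit laws are the one-parameter generalized extreme value (GEV) family (standard extreme value theory, not reproduced from the dissertation):

```latex
% Fisher–Tippett–Gnedenko: suitably normalized maxima converge to
\[
  G_{\gamma}(x) \;=\; \exp\!\bigl\{-(1+\gamma x)^{-1/\gamma}\bigr\},
  \qquad 1+\gamma x > 0,
\]
% where \gamma > 0 is the Frechet (thick-tailed) case, \gamma \to 0 the
% Gumbel (thin-tailed) case, and \gamma < 0 the Weibull (no-tail) case.
% A positive tail-index estimate therefore places the S&P 500 extremes
% in the Frechet domain of attraction, as reported above.
```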
Abstract:
Nursing shortages persist in the U.S., so it is important to determine the factors that influence decisions to pursue nursing as a career. This comparative, correlational research study revealed factors that may attract students to, or deter them from, choosing nursing as a career. The purpose of this study was to determine factors that contribute to a career choice for nursing, based on the social cognitive career theory (SCCT) concepts of self-efficacy, outcome expectations, and personal goals, among senior high school students, final-year nursing students, and first-year nursing students. Based on the results, strategies may be developed to recruit a younger pool of students to the nursing profession and to boost retention efforts among those who have already made a career choice in nursing. Data were collected using a three-part questionnaire developed by the researcher to obtain demographic information and data about the respondents' self-efficacy, outcome expectations, and personal goals with regard to nursing as a career. Point-biserial correlations were used to determine relationships between the variables, and ANOVAs and ANCOVAs were computed to determine differences in self-efficacy and outcome expectations among the three groups. Additional descriptive data identified reasons for and against a choice of nursing as a career. Self-efficacy and outcome expectations were significantly correlated with career choice in all three groups. The nursing students had higher self-efficacy perceptions than the high school students, and there were no significant differences in outcome expectations among the three groups. The main categories identified as reasons for choosing nursing as a career were: (a) caring, (b) career and educational advancement, (c) personal accomplishment, and (d) proficiency in and love of the medical field. Common categories identified for not choosing nursing as a career were: (a) responsibility, (b) liability, (c) lack of respect, and (d) low salary. Other categories regarding not choosing nursing as a career included: (a) the nursing program, (b) professional concerns, (c) alternate career choice options, and (d) fear of sickness and death. Findings from this study support the tenets of SCCT and may be used to recruit and retain nurses and to develop curricula.
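For readers unfamiliar with the statistic, a point-biserial correlation relates a dichotomous variable (e.g., chose nursing or not) to a continuous one (e.g., a self-efficacy score). A minimal sketch with fabricated toy data, not the study's dataset:

```python
# Hedged sketch with made-up toy values, not the study's data:
# correlate a binary career-choice indicator with self-efficacy scores.
from scipy.stats import pointbiserialr

chose_nursing = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]  # 1 = chose nursing
self_efficacy = [4.2, 3.9, 2.8, 4.5, 3.0, 2.5, 4.0, 3.7, 3.1, 4.4]

r, p = pointbiserialr(chose_nursing, self_efficacy)
print(f"r_pb = {r:.3f}, p = {p:.3f}")
```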
Abstract:
Type systems for secure information flow aim to prevent a program from leaking information from H (high) to L (low) variables. Traditionally, bisimulation has been the prevalent technique for proving the soundness of such systems. This work introduces a new proof technique based on stripping and fast simulation, and shows that it can be applied in a number of cases where bisimulation fails. We present a progressive development of this technique over a representative sample of languages, including a simple imperative language (core theory), a multiprocessing nondeterministic language, a probabilistic language, and a language with cryptographic primitives. In the core theory we illustrate the key concepts of the technique in a basic setting. A fast low simulation in the context of transition systems is a binary relation in which simulating states can match the moves of simulated states while maintaining the equivalence of low variables; stripping is a function that removes high commands from programs. We show that we can prove secure information flow by arguing that the stripping relation is a fast low simulation. We then extend the core theory to an abstract distributed language under a nondeterministic scheduler. Next, we extend to a probabilistic language with a random assignment command; we generalize fast simulation to the setting of discrete-time Markov chains and prove approximate probabilistic noninterference. Finally, we introduce cryptographic primitives into the probabilistic language and prove computational noninterference, provided that the underlying encryption scheme is secure.
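To make the H/L setting concrete, here is a minimal hypothetical sketch of the standard no-leak typing discipline such systems enforce (an illustration only, not the dissertation's stripping/fast-simulation technique):

```python
# Toy illustration of a secure-information-flow type check (hypothetical,
# not the dissertation's system). Security levels: L (low) below H (high).
LEVELS = {"L": 0, "H": 1}

def expr_level(expr_vars, env):
    """Level of an expression: the join (max) of its variables' levels."""
    return max((LEVELS[env[v]] for v in expr_vars), default=LEVELS["L"])

def check_assign(target, expr_vars, env, pc="L"):
    """Permit `target := e` only if level(e) joined with the program
    counter is no higher than level(target). The pc argument models
    implicit flows from enclosing H-guarded conditionals."""
    rhs_level = max(expr_level(expr_vars, env), LEVELS[pc])
    if rhs_level > LEVELS[env[target]]:
        raise TypeError(f"illegal flow into {target}")

env = {"secret": "H", "public": "L"}
check_assign("secret", ["public"], env)      # ok: L may flow up to H
# check_assign("public", ["secret"], env)    # rejected: explicit H -> L
# check_assign("public", [], env, pc="H")    # rejected: implicit H -> L
```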
Abstract:
The theoretical foundation of this study comes from the significant recurrence throughout the leadership literature of two distinct behaviors: task orientation and relationship orientation. Task orientation and relationship orientation are assumed to be generic behaviors, universally observed and applied in organizations, even though they may be uniquely enacted in organizations across cultures. The lack of empirical evidence supporting these assumptions provided the impetus to theoretically develop and empirically confirm the universal application of task orientation and relationship orientation and the generalizability of their measurement in a cross-cultural setting. Task orientation and relationship orientation are operationalized through consideration and initiation of structure, two well-established theoretical leadership constructs. Multiple-group mean and covariance structures (MACS) analyses are used to simultaneously validate the generalizability of the two hypothesized constructs across the 12 cultural groups and to assess whether the similarities and differences discovered are measurement and scaling artifacts or reflect true cross-cultural differences. The data were collected by the author and others as part of a larger international research project and comprise 2341 managers from 12 countries/regions. The results provide compelling evidence that task orientation and relationship orientation, reliably and validly operationalized through consideration and initiation of structure, are generalizable across the countries/regions sampled. But the results also reveal significant differences in the perception of these behaviors, suggesting that some aspects of task orientation and relationship orientation are strongly affected by cultural influences. These similarities and differences reflect directly interpretable, error-free effects among the constructs at the behavioral level. Thus, task orientation and relationship orientation can demonstrate different relations among cultures, yet still be defined equivalently across the cultures studied. The differences found in this study are true differences and may contain information about the cultural influences characterizing each cultural context (i.e., group). The nature of such influences should be examined before the results can be meaningfully interpreted. To examine the effects of cultural characteristics on the constructs, additional hypotheses on the constructs' latent parameters can be tested across groups. Construct-level tests are illustrated in hypothetical examples in light of the study's results. The study contributes significantly to the theoretical understanding of the nature and generalizability of psychological constructs. The theoretical and practical implications of embedding context into a unified theory of task-oriented and relationship-oriented leader behavior are proposed. Limitations and contributions are also discussed.
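The MACS logic rests on the standard multi-group measurement model (generic notation from the measurement-invariance literature, not reproduced from the dissertation):

```latex
% Multi-group measurement model for respondent i in cultural group g:
\[
  \mathbf{y}_{ig} \;=\; \boldsymbol{\nu}_g
    + \boldsymbol{\Lambda}_g\,\boldsymbol{\eta}_{ig}
    + \boldsymbol{\varepsilon}_{ig}.
\]
% Generalizability is tested through increasingly strict cross-group
% constraints: configural invariance (same loading pattern in each
% \Lambda_g), metric invariance (\Lambda_1 = \dots = \Lambda_G), and
% scalar invariance (additionally \nu_1 = \dots = \nu_G). Only when
% (at least partial) scalar invariance holds can group differences in
% latent means be read as true cross-cultural differences rather than
% measurement and scaling artifacts.
```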
Abstract:
Secrecy is fundamental to computer security, but real systems often cannot avoid leaking some secret information. For this reason, the past decade has seen growing interest in quantitative theories of information flow that allow us to quantify the information being leaked. Within these theories, the system is modeled as an information-theoretic channel that specifies the probability of each output given each input. Given a prior distribution on those inputs, entropy-like measures quantify the amount of information leakage caused by the channel. This thesis presents new results in the theory of min-entropy leakage. First, we study the perspective of secrecy as a resource that is gradually consumed by a system, and explore this intuition through various models of min-entropy consumption. Next, we consider several composition operators that allow smaller systems to be combined into larger systems, and explore the extent to which the leakage of a combined system is constrained by the leakage of its constituents. Most significantly, we prove upper bounds on the leakage of a cascade of two channels, where the output of the first channel is used as input to the second. In addition, we show how to decompose a channel into a cascade of channels. We also establish fundamental new results about the recently proposed g-leakage family of measures. These results further highlight the significance of channel cascading. We prove that whenever channel A is composition refined by channel B, that is, whenever A is the cascade of B and R for some channel R, the leakage of A never exceeds that of B, regardless of the prior distribution or leakage measure (Shannon leakage, guessing entropy leakage, min-entropy leakage, or g-leakage). Moreover, we show that composition refinement is a partial order if we quotient away channel structure that is redundant with respect to leakage alone. These results are strengthened by the proof that composition refinement is the only way for one channel to never leak more than another with respect to g-leakage. Therefore, composition refinement robustly answers the question of when a channel is always at least as secure as another from a leakage point of view.
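The core quantities have a compact computational form. Below is a sketch using the standard definitions of min-entropy leakage and channel cascading (the channel matrices and prior are made-up toy values, not taken from the thesis):

```python
# Hedged sketch of min-entropy leakage and cascading; the channel
# matrices and prior below are made-up toy values.
import numpy as np

def min_entropy_leakage(prior, C):
    """Leakage = log2(posterior vulnerability / prior vulnerability),
    where C[x][y] = P(output y | input x) and rows of C sum to 1."""
    prior_v = prior.max()                          # V(pi)
    post_v = (prior[:, None] * C).max(axis=0).sum()  # V(pi, C)
    return np.log2(post_v / prior_v)

pi = np.array([0.5, 0.5])               # uniform prior on two secrets
B = np.array([[0.9, 0.1], [0.2, 0.8]])  # first channel
R = np.array([[0.7, 0.3], [0.3, 0.7]])  # second channel
A = B @ R                               # cascade: B's output feeds R

# Cascading cannot increase leakage: L(A) <= L(B) for any prior.
print(min_entropy_leakage(pi, A), "<=", min_entropy_leakage(pi, B))
```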
Abstract:
Protecting confidential information from improper disclosure is a fundamental security goal. While encryption and access control are important tools for ensuring confidentiality, they cannot prevent an authorized system from leaking confidential information to its publicly observable outputs, whether inadvertently or maliciously. Hence, secure information flow aims to provide end-to-end control of information flow. Unfortunately, the traditionally adopted policy of noninterference, which forbids all improper leakage, is often too restrictive. Theories of quantitative information flow address this issue by quantifying the amount of confidential information leaked by a system, with the goal of showing that it is intuitively "small" enough to be tolerated. Given such a theory, it is crucial to develop automated techniques for calculating the leakage in a system. This dissertation is concerned with program analysis for calculating the maximum leakage, or capacity, of confidential information in the context of deterministic systems, under three proposed entropy measures of information leakage: Shannon entropy leakage, min-entropy leakage, and g-leakage. In this context, it turns out that calculating the maximum leakage of a program reduces to counting the number of possible outputs that it can produce. The new approach introduced in this dissertation is to determine two-bit patterns, the relationships among pairs of bits in the output; for instance, we might determine that two bits must be unequal. By counting the number of solutions to the two-bit patterns, we obtain an upper bound on the number of possible outputs, and hence a bound on the maximum leakage. We first describe a straightforward computation of the two-bit patterns using an automated prover. We then show a more efficient implementation that uses an implication graph to represent the two-bit patterns; it efficiently constructs the graph through the use of an automated prover, random executions, STP counterexamples, and deductive closure. The effectiveness of our techniques, both in terms of efficiency and accuracy, is shown through a number of case studies found in recent literature.
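For deterministic programs, the reduction from capacity to output counting is direct. A brute-force toy illustration of the counting idea follows (the program is hypothetical; this is not the dissertation's two-bit-pattern or implication-graph technique, which avoids exhaustive enumeration):

```python
# Toy illustration (not the dissertation's technique): for a
# deterministic program, max leakage = log2(#distinct outputs).
from math import log2

def program(secret):
    """A hypothetical 8-bit program under analysis."""
    return secret & 0b00000111   # reveals only the low three bits

outputs = {program(s) for s in range(256)}        # enumerate 8-bit secrets
print(f"feasible outputs: {len(outputs)}")        # 8
print(f"max leakage: {log2(len(outputs))} bits")  # 3.0 bits
```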
Abstract:
Bullying is a growing problem in organizations. This paper examines how transformational theory can be used to understand victims of workplace bullying, and provides information on bullying and how it relates to this theory.
Abstract:
Group dynamics exist in any environment; how we deal with them in a competitive work environment, through transformative learning, helps define who we are. This paper draws on a number of theorists who offer perspectives on the complex nature of groups.