13 results for profitability analyzing
in Digital Commons at Florida International University
Abstract:
A methodology for formally modeling and analyzing the software architecture of mobile agent systems provides a solid basis for developing high-quality mobile agent systems, and it is also helpful for studying other distributed and concurrent systems. However, agent mobility makes such a methodology challenging to provide. The methodology was defined from two essential parts of software architecture: a formalism for defining architectural models and an analysis method for formally verifying system properties. The formalism is two-layer Predicate/Transition (PrT) nets extended with dynamic channels, and the analysis method is a hierarchical approach that verifies models at different levels. The two-layer modeling formalism smoothly transforms physical models of mobile agent systems into their architectural models. Dynamic channels facilitate synchronous communication between nets, and they naturally capture the dynamic architectural configuration and agent mobility of mobile agent systems. Component properties are verified on the transformed individual components, system properties are checked in a simplified system model, and interaction properties are analyzed on models composed from the involved nets. Based on the formalism and the analysis method, this researcher formally modeled and analyzed a software architecture for mobile agent systems and designed an architectural model of a medical information processing system based on mobile agents. The model checking tool SPIN was used to verify system properties of the medical information processing system, such as reachability, concurrency, and safety. From the successful modeling and analysis of the software architecture of mobile agent systems, the conclusion is that PrT nets extended with channels are a powerful tool for modeling mobile agent systems, and the hierarchical analysis method provides a rigorous foundation for the modeling tool. The hierarchical analysis method not only reduces the complexity of the analysis but also expands the application scope of model checking techniques. The results of formally modeling and analyzing the software architecture of the medical information processing system show that model checking is an effective and efficient way to verify software architecture. Moreover, the system demonstrates the high flexibility, efficiency, and low cost of mobile agent technologies.
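The abstract summarizes the PrT net formalism without reproducing it, so the following is only a rough, minimal sketch of the core idea behind the reachability checks mentioned above: explicit-state search over the markings of a plain place/transition net. The net, the place and transition names, and the agent-migration scenario are all hypothetical, and real PrT nets with dynamic channels are far richer.

```python
from collections import deque

# Minimal place/transition net: each transition consumes tokens from its
# "pre" places and produces tokens in its "post" places. All names below
# are hypothetical illustrations of agent migration between hosts.
TRANSITIONS = {
    "dispatch_agent": {"pre": {"host_A": 1}, "post": {"in_transit": 1}},
    "arrive":         {"pre": {"in_transit": 1}, "post": {"host_B": 1}},
    "return_home":    {"pre": {"host_B": 1}, "post": {"host_A": 1}},
}

def enabled(marking, t):
    return all(marking.get(p, 0) >= n for p, n in TRANSITIONS[t]["pre"].items())

def fire(marking, t):
    m = dict(marking)
    for p, n in TRANSITIONS[t]["pre"].items():
        m[p] -= n
    for p, n in TRANSITIONS[t]["post"].items():
        m[p] = m.get(p, 0) + n
    return m

def reachable(initial, goal):
    """Breadth-first search over markings: can a marking covering `goal` be reached?"""
    key = lambda m: tuple(sorted((p, n) for p, n in m.items() if n))
    seen, queue = {key(initial)}, deque([initial])
    while queue:
        m = queue.popleft()
        if all(m.get(p, 0) >= n for p, n in goal.items()):
            return True
        for t in TRANSITIONS:
            if enabled(m, t):
                nxt = fire(m, t)
                if key(nxt) not in seen:
                    seen.add(key(nxt))
                    queue.append(nxt)
    return False

print(reachable({"host_A": 1}, {"host_B": 1}))  # True: the agent can reach host B
```

A model checker such as SPIN explores a state space of this kind with far more sophisticated reductions; the breadth-first search here is just the conceptual core.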
Abstract:
This dissertation is a study of customer relationship management theory and practice. Customer Relationship Management (CRM) is a business strategy whereby companies build strong relationships with existing and prospective customers with the goal of increasing organizational profitability. It is also a learning process involving managing change in processes, people, and technology. CRM implementation and its ramifications are not yet completely understood, as evidenced by the high number of failed CRM implementations in organizations and the resulting disappointments. The goal of this dissertation is to study emerging issues and trends in CRM, including the effect of computer software and the accompanying new management processes on organizations, and the dynamics of the alignment of marketing, sales and services, and all other functions responsible for delivering customers a satisfying experience. To understand CRM better, a content analysis of more than a hundred articles and documents from academic and industry sources was undertaken, using a new methodological twist on the traditional method. An Internet domain (http://crm.fiu.edu) was created for this research, and an initial one hundred plus abstracts of articles and documents were uploaded to it to form a knowledge database. Once the database was formed, a search engine was developed to enable searching the abstracts with relevant CRM keywords and revealing emergent dominant CRM topics. The ultimate aim of this website is to serve as an information hub for CRM research, as well as a search engine where interested parties can enter CRM-relevant keywords or phrases to access abstracts, and submit abstracts to enrich the knowledge hub. Research questions were investigated and answered by content-analyzing the interpretation and discussion of dominant CRM topics and then amalgamating the findings. This was supported by comparisons within and across individual, paired, and sets-of-three occurrences of CRM keywords in the article abstracts. Results show a lack of the holistic thinking and discussion of CRM, in both academia and industry, that is required to understand how the people, process, and technology dimensions of CRM affect one another and, in turn, successful implementation. Industry has to come to grips with CRM and holistically understand how these important dimensions affect each other. Only then will organizational learning occur and, over time, result in superior processes leading to strong, profitable customer relationships and a hard-to-imitate competitive advantage.
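The abstract describes comparing individual, paired, and sets-of-three keyword occurrences across article abstracts. As an illustration of that counting step only, here is a minimal Python sketch; the keyword list and sample texts are invented, and the study's actual search engine and keyword set are not reproduced in the abstract.

```python
from itertools import combinations
from collections import Counter

# Hypothetical stand-ins for the CRM keywords and abstracts used in the study.
KEYWORDS = ["people", "process", "technology", "strategy", "implementation"]
abstracts = [
    "CRM implementation requires aligning people and process with technology.",
    "A CRM strategy fails without organizational learning across people and process.",
]

def keywords_in(text):
    t = text.lower()
    return frozenset(k for k in KEYWORDS if k in t)

singles, pairs, triples = Counter(), Counter(), Counter()
for text in abstracts:
    found = keywords_in(text)
    singles.update(found)                          # individual occurrences
    pairs.update(combinations(sorted(found), 2))   # paired co-occurrences
    triples.update(combinations(sorted(found), 3)) # sets-of-three co-occurrences

print(pairs.most_common(3))  # most frequently co-occurring keyword pairs
```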
A framework for transforming, analyzing, and realizing software designs in Unified Modeling Language
Abstract:
Unified Modeling Language (UML) is the most comprehensive and widely accepted object-oriented modeling language, thanks to its multi-paradigm modeling capabilities, easy-to-use graphical notations, strong international organizational support, and production-quality industrial tool support. However, UML lacks a precise definition of the semantics of individual notations and of the relationships among multiple models, which often introduces incompleteness and inconsistency problems into UML software designs, especially for complex systems. Furthermore, there is a lack of methodologies to ensure a correct implementation from a given UML design. The purpose of this investigation is to verify and validate software designs in UML and to provide dependability assurance for the realization of a UML design. In my research, an approach is proposed to transform UML diagrams into a semantic domain, which is a formal component-based framework. The framework consists of components and of interactions through message passing, modeled by two-layer algebraic high-level nets and by transformation rules, respectively. In the transformation approach, class diagrams, state machine diagrams, and activity diagrams are transformed into component models, and transformation rules are extracted from interaction diagrams. By applying transformation rules to component models, a (sub)system model of one or more scenarios can be constructed. Various techniques, such as model checking and Petri net analysis, can then be adopted to check whether UML designs are complete and consistent. A new component called the property parser was developed and merged into the tool SAM Parser, which realizes (sub)system models automatically. The property parser automatically generates and weaves runtime monitoring code into system implementations for dependability assurance. The framework is flexible: it can be used not only to verify and validate UML designs but also to build models for various scenarios. As a result of my research, several kinds of previously ignored behavioral inconsistencies can be detected.
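The property parser described above generates and weaves runtime monitoring code into implementations. The mechanics of SAM Parser are not given in the abstract, so the sketch below shows only the general idea of weaving a runtime property check around an operation, using a Python decorator; the `Account` class and its invariant are hypothetical.

```python
import functools

def monitor(invariant, message):
    """Weave a runtime check around a method: a rough analogue of generated
    monitoring code. The invariant is re-checked after every call."""
    def decorate(method):
        @functools.wraps(method)
        def wrapped(self, *args, **kwargs):
            result = method(self, *args, **kwargs)
            if not invariant(self):
                raise AssertionError(f"monitor violation in {method.__name__}: {message}")
            return result
        return wrapped
    return decorate

class Account:
    def __init__(self):
        self.balance = 0

    @monitor(lambda self: self.balance >= 0, "balance must stay non-negative")
    def withdraw(self, amount):
        self.balance -= amount

acct = Account()
try:
    acct.withdraw(5)           # drives the balance negative
except AssertionError as err:
    print(err)                 # the woven monitor catches the violation
```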
Abstract:
This research examined the factors contributing to the performance of online grocers prior to, and following, the 2000 dot-com collapse. The primary goals were to assess the relationship between a company’s business model(s) and its performance in the online grocery channel and to determine whether there were other company- and/or market-related factors that could account for company performance. To assess the primary goals, a case-based theory-building process was utilized. A three-way cross-case analysis comprising Peapod, GroceryWorks, and Tesco examined the common profit components, the structural-category (e.g., pure-play, partnership, and hybrid) profit components, and the idiosyncratic profit components related to each specific company. Based on the analysis, it was determined that online grocery store business models could be represented at three distinct but hierarchically related levels. The first level was termed the core model and represented the basic profit structure that all online grocers needed in order to conduct operations. The next level was termed the structural model and represented the profit structure associated with the specific business model configuration (i.e., pure-play, partnership, hybrid). The last level was termed the augmented model and represented the company’s business model when idiosyncratic profit components were included. Of the five company-related factors, scalability, rate of expansion, and automation level were potential candidates for helping to explain online grocer performance. In addition, all the market structure factors were deemed possible candidates for helping to explain online grocer performance. The study concluded by positing an alternative hypothesis concerning the performance of online grocers. Prior to this study, the prevailing wisdom was that the business models were the primary cause of online grocer performance. However, based on the core model analysis, it was hypothesized that customer relationship activities (i.e., advertising, promotions, and loyalty program tie-ins) were the real drivers of online grocer performance.
Abstract:
In the past two decades, multi-agent systems (MAS) have emerged as a new paradigm for conceptualizing large and complex distributed software systems. A multi-agent system view provides a natural abstraction for both the structure and the behavior of modern-day software systems. Although there were many conceptual frameworks for using multi-agent systems, there was no well-established and widely accepted method for modeling them. This dissertation research addressed the representation and analysis of multi-agent systems based on model-oriented formal methods. The objective was to provide a systematic approach for studying MAS at an early stage of system development to ensure the quality of design. Given that there was no well-defined formal model directly supporting agent-oriented modeling, this study was centered on three main topics: (1) adapting a well-known formal model, predicate/transition nets (PrT nets), to support MAS modeling; (2) formulating a modeling methodology to ease the construction of formal MAS models; and (3) developing a technique to support machine analysis of formal MAS models using model checking technology. PrT nets were extended with the notions of dynamic structure, agent communication, and coordination to support agent-oriented modeling. An aspect-oriented technique was developed to address the modularity of agent models and the compositionality of incremental analysis. A set of translation rules was defined to systematically translate formal MAS models into concrete models that can be verified by the model checker SPIN (Simple Promela Interpreter). This dissertation presents the framework developed for modeling and analyzing MAS, including a well-defined process model based on nested PrT nets and a comprehensive methodology to guide the construction and analysis of formal MAS models.
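The abstract mentions translation rules from formal MAS models to input for the model checker SPIN. The actual rules are not reproduced here, so the following toy Python generator merely illustrates the general idea of emitting SPIN's Promela input from a higher-level description; the two-agent model and its description format are invented and far simpler than nested PrT nets.

```python
# Toy generator: turn a tiny agent/channel description into Promela text
# that SPIN could check. Both the description format and the model below
# are invented for illustration.
MODEL = {
    "channels": {"req": "int"},
    "agents": {
        "client": "req!42",
        "server": "int x; req?x; assert(x == 42)",
    },
}

def to_promela(model):
    lines = [f"chan {name} = [1] of {{ {typ} }};"
             for name, typ in model["channels"].items()]
    for agent, body in model["agents"].items():
        lines.append(f"active proctype {agent}() {{ {body} }}")
    return "\n".join(lines)

print(to_promela(MODEL))  # feed the output to `spin -a` for verification
```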
Abstract:
This study examines the effect of Microteaching Lesson Study participation on the mathematics content and pedagogical content knowledge of 52 elementary preservice teachers. Preliminary findings, which are positive, are discussed.
Abstract:
This article explores the advantages and disadvantages of various financing alternatives for the hospitality industry and discusses their potential effect on the profitability of the firm. The author provides background and perspective for developing the appropriate financing arrangement for a specific hospitality enterprise.
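As a small numeric illustration of how a financing arrangement affects cash flow, the sketch below compares the annual debt service of two hypothetical loans using the standard level-payment amortization formula; the principal, rates, and terms are invented and are not drawn from the article.

```python
def annual_debt_service(principal, annual_rate, years):
    """Standard amortization formula: level annual payment on a loan."""
    r = annual_rate
    return principal * r / (1 - (1 + r) ** -years)

# Two hypothetical financing alternatives for a hospitality project;
# all figures are invented purely for illustration.
bank_loan = annual_debt_service(1_000_000, 0.08, 15)
longer_term = annual_debt_service(1_000_000, 0.065, 20)
print(f"15-year loan at 8.0%:  ${bank_loan:,.0f}/yr")
print(f"20-year loan at 6.5%:  ${longer_term:,.0f}/yr")
```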
Abstract:
Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency results in a huge number of different system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at the design and code levels. This dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built from user requirements using Petri nets and is formally verified using model checking. Second, Petri net models are automatically mined from the event traces generated by scientific workflows. Third, partial order models are automatically extracted from instrumented concurrent program executions, and potential atomicity violation bugs are automatically verified on the partial order models using model checking. Our formal specification and verification of Mondex have contributed to the worldwide effort to develop a verified software repository. Our method to mine Petri net models automatically from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable, as it takes advantage of the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at a time. However, predictive tools need to consider the trade-off between precision and coverage. Based on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity violation predictions: 1) a post-prediction analysis method to increase coverage while ensuring precision; and 2) a follow-up replaying method to further increase coverage. Both methods are implemented in a completely automatic tool.
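McPatom's partial-order approach is not detailed in the abstract, so the sketch below illustrates only the underlying single-variable, two-thread idea: scanning an access trace for one classic unserializable interleaving (a remote write between two local reads). The trace format and events are invented for illustration; real predictive tools work on partial-order models, not raw traces.

```python
# Toy detector for the pattern R_A W_B R_A on a single shared variable:
# thread A reads, thread B writes in between, thread A reads again, so the
# two reads by A may observe inconsistent values.
TRACE = [  # (thread, operation) events on one shared variable
    ("A", "read"), ("B", "write"), ("A", "read"),
]

def find_rwr_violations(trace):
    violations = []
    for i, (t1, op1) in enumerate(trace):
        if op1 != "read":
            continue
        for j in range(i + 1, len(trace)):
            t2, op2 = trace[j]
            if t2 != t1 and op2 == "write":
                # look for a second read by t1 after the remote write
                for k in range(j + 1, len(trace)):
                    t3, op3 = trace[k]
                    if t3 == t1 and op3 == "read":
                        violations.append((i, j, k))
                        break
                break  # only report the nearest interleaved write
    return violations

print(find_rwr_violations(TRACE))  # [(0, 1, 2)]: a potential atomicity violation
```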
Abstract:
There is a national need to increase the STEM-related workforce. Factors leading toward STEM careers include the number of advanced high school mathematics and science courses students complete. Florida's enrollment patterns in STEM-related Advanced Placement (AP) courses, however, reveal that only a small percentage of students enroll in these classes. Therefore, screening tools are needed to find more students who are academically ready for these courses but have not yet been identified. The purpose of this study was to investigate the extent to which scores from a national standardized test, the Preliminary SAT/National Merit Scholarship Qualifying Test (PSAT/NMSQT), in conjunction with and compared to a state-mandated standardized test, the Florida Comprehensive Assessment Test (FCAT), are related to selected AP exam performance in Seminole County Public Schools. An ex post facto correlational study was conducted using 6,189 student records from the 2010-2012 academic years. Multiple regression analyses using simultaneous full-model testing showed differential, moderate to strong relationships for eight of the nine AP courses examined (i.e., Biology, Environmental Science, Chemistry, Physics B, Physics C Electrical, Physics C Mechanical, Statistics, and Calculus AB and BC). For example, for Biology and Environmental Science, the significant unique contribution to overall variance in AP scores came from a linear combination of PSAT Math (M), PSAT Critical Reading (CR), and FCAT Reading (R). Moderate relationships for Chemistry included a linear combination of PSAT M, PSAT Writing (W), and FCAT M; a combination of FCAT M and PSAT M was most significantly associated with Calculus AB performance. These findings have implications for both research and practice. FCAT scores, in conjunction with PSAT scores, can potentially be used for specific STEM-related AP courses as part of a systematic approach to AP course identification and placement. For courses with moderate to strong relationships, validation studies and the development of expectancy tables, which estimate the probability of successful performance on these AP exams, are recommended. The findings also established a need to examine other related research issues, including, but not limited to, extensive longitudinal studies and analyses of other available or prospective standardized test scores.
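As an illustration of the kind of full-model multiple regression reported above, here is a minimal ordinary-least-squares sketch in Python; the six student records are fabricated stand-ins (the study used 6,189 real records), and the predictor set simply mirrors the PSAT M, PSAT CR, and FCAT R combination named for Biology.

```python
import numpy as np

# Fabricated miniature of a full-model regression: predict an AP exam score
# from PSAT Math, PSAT Critical Reading, and FCAT Reading.
X = np.array([
    [55, 52, 310], [62, 58, 330], [48, 50, 295],
    [70, 65, 350], [58, 54, 320], [65, 60, 340],
], dtype=float)
y = np.array([3, 4, 2, 5, 3, 4], dtype=float)   # AP exam scores (1-5 scale)

X1 = np.column_stack([np.ones(len(X)), X])      # add intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)   # ordinary least squares fit
pred = X1 @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients:", coef.round(3), "R^2:", round(r2, 3))
```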
Abstract:
In this paper, I analyze the processes that transformed aquaculture into a major export industry in India, in order to understand aquaculture's role as a social and economic development strategy of the Indian state. The discussion employs a political ecology approach as I explore the complex relations between access to and control of India's coastal resources, society, and economy. I argue that many of the forces that initiated the development of aquaculture in India, namely the involvement of the Indian state, continue to shape the industry today. I also discuss how, despite widespread social conflict, the shrimp farming industry in particular continues to thrive and grow in rural India. My analysis utilizes ethnographic and archival data collected over the course of nine months of fieldwork in Tamil Nadu, India.
Keywords: rural development, aquaculture, India, State, ethnography
Abstract:
The tidal influence on the Big Pine Key saltwater/freshwater interface was analyzed using time-lapse electrical resistivity imaging and shallow well measurements. The transition zone at the saltwater/freshwater interface was measured over part of a tidal cycle along three profiles. Resistivity was converted to salinity by deriving a formation factor for the Miami Oolite. A SEAWAT model was created in an attempt to reproduce the field measurements and to test previously established hydrogeologic parameters. The results imply that the tide affects only the groundwater within 20 to 30 m of the coast. The effect is small and is caused by flooding at high tide. Because of the island's low relief, this effect is very sensitive to small changes in tidal magnitude. The SEAWAT model proved insufficient to model this effect. The study suggests that the extent of flooding is the largest influence on the salinity of the groundwater.
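The study's derived formation factor and salinity calibration are not given in the abstract, so the sketch below only illustrates the conversion chain described: bulk resistivity to pore-fluid resistivity via a formation factor, then fluid conductivity to approximate salinity via a common linear approximation. The formation factor and conversion coefficient here are assumed values, not the ones derived for the Miami Oolite.

```python
# Resistivity-to-salinity conversion using the formation-factor relation
# F = rho_bulk / rho_fluid (fully saturated rock) and a linear
# conductivity-to-TDS approximation. Both constants below are assumptions.
F = 4.0            # assumed formation factor (dimensionless)
TDS_PER_EC = 0.64  # g/L per mS/cm; typical empirical range is about 0.55-0.75

def salinity_from_resistivity(rho_bulk_ohm_m):
    rho_fluid = rho_bulk_ohm_m / F        # pore-fluid resistivity (ohm*m)
    ec_ms_cm = (1.0 / rho_fluid) * 10.0   # fluid conductivity, S/m -> mS/cm
    return TDS_PER_EC * ec_ms_cm          # approximate salinity (g/L)

for rho in (40.0, 10.0, 2.0):             # fresh -> brackish -> saline examples
    print(f"bulk resistivity {rho:5.1f} ohm*m -> ~{salinity_from_resistivity(rho):.1f} g/L")
```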