21 results for Source code (Computer science)
in Digital Commons at Florida International University
Abstract:
This research examines evolving issues in applied computer science, applying economic and business analyses as well. There are two main areas. The first is internetwork communications as embodied by the Internet. The goal of the research is to devise an efficient pricing, prioritization, and incentivization plan that could realistically be implemented on the existing infrastructure. Criteria include practical and economic efficiency, and proper incentives for both users and providers. Background information on the evolution and functional operation of the Internet is given, and relevant literature is surveyed and analyzed. Economic analysis is performed on the incentive implications of the current pricing structure and organization. The problems are identified, and minimally disruptive solutions are proposed at all levels of implementation, down to the lowest-level protocol. Practical issues are considered and performance analyses are performed. The second area of research is mass market software engineering and how it differs from classical software engineering. Software life-cycle revenues are analyzed, and software pricing and timing implications are derived. A profit-maximizing methodology is developed to select or defer the development of software features for inclusion in a given release. An iterative model of the stages of the software development process is developed, taking into account new communications capabilities as well as profitability.
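As a rough illustration of the feature-selection step, the sketch below frames the choice of which features to include in a release as a 0/1 knapsack over development effort and expected profit. This is only a hedged sketch: the feature names and figures are invented, and the knapsack formulation is a textbook stand-in, not the dissertation's actual methodology.

```python
# Hypothetical sketch: choose which features to include in a release so that
# expected profit is maximized under a fixed development-capacity budget.
# Plain 0/1 knapsack; all feature names and numbers are made up for illustration.

def select_features(features, capacity):
    """features: list of (name, dev_weeks, expected_profit); capacity: weeks available."""
    dp = [(0.0, frozenset())] * (capacity + 1)   # dp[w] = (best profit, chosen set) within w weeks
    for name, weeks, profit in features:
        for w in range(capacity, weeks - 1, -1):
            candidate = dp[w - weeks][0] + profit
            if candidate > dp[w][0]:
                dp[w] = (candidate, dp[w - weeks][1] | {name})
    return dp[capacity]

if __name__ == "__main__":
    catalog = [("offline mode", 6, 120.0), ("plugin API", 4, 90.0),
               ("dark theme", 2, 25.0), ("sync service", 8, 160.0)]
    profit, chosen = select_features(catalog, capacity=12)
    print(sorted(chosen), profit)   # features not chosen are simply deferred to a later release
```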
Abstract:
Proofs by induction are central to many computer science areas such as data structures, theory of computation, programming languages, program efficiency-time complexity, and program correctness. Proofs by induction can also improve students’ understanding of and performance with computer science concepts such as programming languages, algorithm design, and recursion, as well as serve as a medium for teaching them. Even though students are exposed to proofs by induction in many courses of their curricula, they still have difficulties understanding and performing them. This impacts the whole course of their studies, since proofs by induction are omnipresent in computer science. Specifically, students do not gain conceptual understanding of induction early in the curriculum and, as a result, they have difficulties applying it to more advanced areas later on in their studies. The goal of my dissertation is twofold: (1) identifying sources of computer science students’ difficulties with proofs by induction, and (2) developing a new approach to teaching proofs by induction by way of an interactive and multimodal electronic book (e-book). For the first goal, I undertook a study to identify possible sources of computer science students’ difficulties with proofs by induction. Its results suggest that there is a close correlation between students’ understanding of inductive definitions and their understanding and performance of proofs by induction. For designing and developing my e-book, I took into consideration the results of my study, as well as the drawbacks of the current methodologies of teaching proofs by induction for computer science. I designed my e-book to be used as a standalone and complete educational environment. I also conducted a study on the effectiveness of my e-book in the classroom. The results of my study suggest that, unlike the current methodologies of teaching proofs by induction for computer science, my e-book helped students overcome many of their difficulties and gain conceptual understanding of proofs by induction.
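To make the connection between inductive definitions and proofs by induction concrete, a small standard worked example follows. The statement and its proof are the usual textbook material, not content drawn from the e-book described above.

```latex
% A standard worked example, not material from the e-book described above.
% Inductive definition of the running sum:
%   S(0) = 0,  S(n+1) = S(n) + (n+1).
\textbf{Claim.} For every $n \ge 0$, $S(n) = \frac{n(n+1)}{2}$.

\textbf{Base case.} $S(0) = 0 = \frac{0 \cdot 1}{2}$.

\textbf{Inductive step.} Assume $S(k) = \frac{k(k+1)}{2}$ for some $k \ge 0$. Then
\[
  S(k+1) = S(k) + (k+1) = \frac{k(k+1)}{2} + (k+1) = \frac{(k+1)(k+2)}{2},
\]
so the claim holds for $k+1$. By induction, it holds for all $n \ge 0$.
```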
Abstract:
In recent years, a surprising new phenomenon has emerged in which globally distributed online communities collaborate to create useful and sophisticated computer software. These open source software groups consist of generally unaffiliated individuals and organizations who work in a seemingly chaotic fashion and who participate on a voluntary basis without direct financial incentive. The purpose of this research is to investigate the relationship between the social network structure of these intriguing groups and their level of output and activity, where social network structure is defined as (1) closure or connectedness within the group, (2) bridging ties which extend outside of the group, and (3) leader centrality within the group. Based on well-tested theories of social capital and centrality in teams, propositions were formulated which suggest that social network structures associated with successful open source software project communities will exhibit high levels of bridging and moderate levels of closure and leader centrality. The research setting was the SourceForge hosting organization, and a study population of 143 project communities was identified. Independent variables included measures of closure and leader centrality defined over conversational ties, along with measures of bridging defined over membership ties. Dependent variables included source code commits and software releases for community output, and software downloads and project site page views for community activity. A cross-sectional study design was used, and archival data were extracted and aggregated for the two-year period following the first release of project software. The resulting compiled variables were analyzed using multiple linear and quadratic regressions, controlling for group size and conversational volume. Contrary to theory-based expectations, the surprising results showed that successful project groups exhibited low levels of closure and that the levels of bridging and leader centrality were not important factors of success. These findings suggest that the creation and use of open source software may represent a fundamentally new socio-technical development process which disrupts the team paradigm and which triggers the need for building new theories of collaborative development. These new theories could point towards the broader application of open source methods for the creation of knowledge-based products other than software.
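For readers unfamiliar with this style of analysis, the sketch below shows the general shape of a measures-and-regression pipeline: compute simple structural measures for a project's conversation graph and regress an output measure (such as code commits) on them. The proxy measures (graph density for closure, maximum degree centrality for leader centrality) and the toy data are illustrative assumptions, not the study's actual operationalizations.

```python
# Illustrative sketch only; proxies and data are assumptions, not the study's variables.
import networkx as nx
import numpy as np

def structural_measures(edges):
    g = nx.Graph(edges)
    closure = nx.density(g)                                       # connectedness within the group
    leader_centrality = max(nx.degree_centrality(g).values())     # most central member
    return closure, leader_centrality

# toy data: (conversation edges, observed code commits) for a handful of projects
projects = [
    ([(1, 2), (2, 3), (3, 1), (3, 4)], 120),
    ([(1, 2), (2, 3), (3, 4), (4, 5)], 40),
    ([(1, 2), (1, 3), (1, 4), (1, 5), (2, 3)], 75),
]
X = np.array([[1.0, *structural_measures(e)] for e, _ in projects])   # intercept + 2 predictors
y = np.array([c for _, c in projects], dtype=float)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)                          # ordinary least squares
print(dict(zip(["intercept", "closure", "leader_centrality"], beta.round(2))))
```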
Abstract:
The development of 3G (third-generation telecommunication) value-added services brings higher requirements for Quality of Service (QoS). Wideband Code Division Multiple Access (WCDMA) is one of the three 3G standards, and enhancing QoS for the WCDMA Core Network (CN) becomes more and more important for users and carriers. This dissertation focuses on enhancement of QoS for the WCDMA CN. The purpose is to realize the DiffServ (Differentiated Services) model of QoS for the WCDMA CN. Based on the parallelism characteristic of Network Processors (NPs), NP programming models are classified as Pool of Threads (POTs) and Hyper Task Chaining (HTC). In this study, an integrated programming model that combines both of these models was designed. This model is highly efficient and flexible, and also solves the problems of sharing conflicts and packet ordering. We used this model as the programming model to realize DiffServ QoS for the WCDMA CN. The realization mechanism of the DiffServ model mainly consists of buffer management, packet scheduling, and packet classification algorithms based on NPs. First, we proposed an adaptive buffer management algorithm called Packet Adaptive Fair Dropping (PAFD), which takes into consideration both fairness and throughput, and has smooth service curves. Then, an improved packet scheduling algorithm called Priority-based Weighted Fair Queuing (PWFQ) was introduced to ensure the fairness of packet scheduling and reduce the queuing time of data packets, while keeping delay and jitter within a small range. Thirdly, a multi-dimensional packet classification algorithm called Classification Based on Network Processors (CBNPs) was designed; it effectively reduces memory accesses and storage space, and has lower time and space complexity. Lastly, an integrated hardware and software system of the DiffServ model of QoS for the WCDMA CN was proposed and implemented on the IXP2400 NP. According to the corresponding experimental results, the proposed system significantly enhanced QoS for the WCDMA CN: it markedly improves response-time consistency, reduces display distortion, improves sound-image synchronization, and thus increases network efficiency and saves network resources.
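The abstract does not give the exact rules of PWFQ, so the sketch below shows only the generic idea of a priority-weighted fair queuing discipline: strict priority between classes and weighted fair queuing (virtual finish times) within a class. It is a textbook construction offered for orientation, not the dissertation's algorithm.

```python
# Generic priority + weighted-fair-queuing sketch; not the PWFQ algorithm itself.
import heapq
import itertools

class PriorityWFQ:
    def __init__(self):
        self.queues = {}          # priority -> heap of (finish_time, seq, packet)
        self.last_finish = {}     # (priority, flow) -> last assigned virtual finish time
        self.virtual_time = 0.0
        self.seq = itertools.count()

    def enqueue(self, packet, flow, weight, priority):
        start = max(self.virtual_time, self.last_finish.get((priority, flow), 0.0))
        finish = start + len(packet) / weight                  # larger weight -> earlier finish
        self.last_finish[(priority, flow)] = finish
        heapq.heappush(self.queues.setdefault(priority, []), (finish, next(self.seq), packet))

    def dequeue(self):
        for priority in sorted(self.queues):                   # lower number = higher priority
            if self.queues[priority]:
                finish, _, packet = heapq.heappop(self.queues[priority])
                self.virtual_time = finish                     # crude virtual-clock update
                return packet
        return None

q = PriorityWFQ()
q.enqueue(b"voice----", flow="A", weight=4.0, priority=0)
q.enqueue(b"bulk-data-transfer", flow="B", weight=1.0, priority=1)
print(q.dequeue(), q.dequeue())
```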
Abstract:
The increasing use of model-driven software development has renewed emphasis on using domain-specific models during application development. More specifically, there has been emphasis on using domain-specific modeling languages (DSMLs) to capture user-specified requirements when creating applications. The current approach to realizing these applications is to translate DSML models into source code using several model-to-model and model-to-code transformations. This approach is still dependent on the underlying source code representation and only raises the level of abstraction during development. Experience has shown that developers will often be required to manually modify the generated source code, which can be error-prone and time consuming. An alternative to the aforementioned approach involves using an interpreted domain-specific modeling language (i-DSML) whose models can be directly executed using a Domain-Specific Virtual Machine (DSVM). Direct execution of i-DSML models requires a semantically rich platform that reduces the gap between the application models and the underlying services required to realize the application. One layer in this platform is the domain-specific middleware that is responsible for the management and delivery of services in the specific domain. In this dissertation, we investigated the problem of designing the domain-specific middleware of the DSVM to facilitate the bifurcation of the semantics of the domain and the model of execution (MoE) while supporting runtime adaptation and validation. We approached our investigation by seeking solutions to the following sub-problems: (1) How can the domain-specific knowledge (DSK) semantics be separated from the MoE for a given domain? (2) How do we define a generic model of execution (GMoE) of the middleware so that it is adaptable and realizes DSK operations to support delivery of services? (3) How do we validate the realization of DSK operations at runtime? Our research into the domain-specific middleware was done using an i-DSML for the user-centric communication domain, the Communication Modeling Language (CML), and one for the microgrid energy management domain, the Microgrid Modeling Language (MGridML). We have successfully developed a methodology to separate the DSK and GMoE of the middleware of a DSVM that supports specialization for a given domain and is able to perform adaptation and validation at runtime.
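The following sketch is only meant to convey the separation described above: a generic execution engine that merely sequences and validates operations, with the domain-specific knowledge supplied as a pluggable table. The operation names and the validation rule are hypothetical and are not taken from CML, MGridML, or the actual DSVM middleware.

```python
# Minimal DSK/GMoE separation sketch; operation names are hypothetical.

class GenericExecutionEngine:
    """Generic model of execution: fetch, execute, then validate each operation."""
    def __init__(self, dsk_operations, validators=None):
        self.ops = dsk_operations            # name -> callable(state, **params)
        self.validators = validators or {}   # name -> callable(state) -> bool

    def run(self, script, state):
        for name, params in script:
            state = self.ops[name](state, **params)
            check = self.validators.get(name)
            if check and not check(state):
                raise RuntimeError(f"runtime validation failed after {name}")
        return state

# A toy "communication" domain plugged into the same generic engine.
comm_dsk = {
    "open_session": lambda s, user: {**s, "sessions": s["sessions"] + [user]},
    "send_message": lambda s, text: {**s, "sent": s["sent"] + 1},
}
engine = GenericExecutionEngine(comm_dsk, {"send_message": lambda s: bool(s["sessions"])})
print(engine.run([("open_session", {"user": "alice"}), ("send_message", {"text": "hi"})],
                 {"sessions": [], "sent": 0}))
```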
Abstract:
Because some Web users are able to design a template to visualize information from scratch, while other users need information visualized automatically by changing some parameters, providing different levels of customization of the information is a desirable goal. Our system allows the automatic generation of visualizations given the semantics of the data, as well as static or pre-specified visualization through an interface language we created. We address information visualization with the Web in mind, where the presentation of the retrieved information is a challenge. We provide a model to narrow the gap between the user's way of expressing queries and database manipulation languages (SQL) without changing the system itself, thus improving the query specification process. We develop a Web interface model that is integrated with HTML to create a powerful language that facilitates the construction of Web-based database reports. In contrast to other work, this model offers a new way of exploring databases, focusing on providing Web connectivity to databases with minimal or no result buffering, formatting, or extra programming. We describe how to easily connect the database to the Web. In addition, we offer an enhanced way of viewing and exploring the contents of a database, allowing users to customize their views depending on the contents and the structure of the data. Current database front-ends typically attempt to display the database objects in a flat view, making it difficult for users to grasp the contents and the structure of their result. Our model narrows the gap between databases and the Web. The overall objective of this research is to construct a model that accesses different databases easily across the net and generates SQL, forms, and reports across all platforms without requiring the developer to code a complex application. This increases the speed of development. In addition, using only a Web browser, the end user can retrieve data from databases remotely and make the necessary modifications and manipulations of data using Web-formatted forms and reports, independent of the platform, without having to open different applications or learn to use anything but their Web browser. We introduce a strategic method to generate and construct SQL queries, enabling inexperienced users who are not well versed in SQL to build a syntactically and semantically valid SQL query and to understand the retrieved data. The generated SQL query is validated against the database schema to ensure harmless and efficient SQL execution. (Abstract shortened by UMI.)
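To illustrate the general idea of generating a valid SQL query from user selections and validating it against the schema, here is a minimal sketch. The schema, table, and column names are invented for illustration, and the builder shown is a generic construction rather than the system described above.

```python
# Minimal sketch: assemble a SELECT from form choices and validate every identifier
# against the schema before execution. Schema and names are invented.
schema = {"orders": {"id", "customer", "total", "placed_on"},
          "customers": {"id", "name", "city"}}

def build_select(table, columns, where=None):
    if table not in schema:
        raise ValueError(f"unknown table: {table}")
    unknown = [c for c in columns if c not in schema[table]]
    if unknown:
        raise ValueError(f"unknown columns: {unknown}")
    sql = f'SELECT {", ".join(columns)} FROM {table}'
    params = []
    if where:                                   # where: list of (column, operator, value)
        clauses = []
        for col, op, value in where:
            if col not in schema[table] or op not in {"=", "<", ">", "<=", ">=", "<>"}:
                raise ValueError(f"invalid condition: {col} {op}")
            clauses.append(f"{col} {op} ?")     # placeholders keep execution safe
            params.append(value)
        sql += " WHERE " + " AND ".join(clauses)
    return sql, params

print(build_select("orders", ["id", "total"], [("total", ">", 100)]))
# -> ('SELECT id, total FROM orders WHERE total > ?', [100])
```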
Abstract:
An implementation of Sem-ODB, a database management system based on the Semantic Binary Model, is presented. A metaschema of the Sem-ODB database as well as the top-level architecture of the database engine is defined. A new benchmarking technique is proposed which allows databases built on different database models to compete fairly. This technique is applied to show that Sem-ODB has excellent efficiency compared to a relational database on a certain class of database applications. A new semantic benchmark is designed which allows evaluation of the performance of the features characteristic of semantic database applications. An application used in the benchmark represents a class of problems requiring databases with sparse data, complex inheritance, and many-to-many relations. Such databases can be naturally accommodated by the semantic model. A fixed predefined implementation is not enforced, allowing the database designer to choose the most efficient structures available in the DBMS tested. The results of the benchmark are analyzed. A new high-level querying model for semantic databases is defined. It is proven adequate to serve as an efficient native semantic database interface, and has several advantages over existing interfaces. It is optimizable and parallelizable, and supports the definition of semantic user views and the interoperability of semantic databases with other data sources such as the World Wide Web, relational databases, and object-oriented databases. The query is structured as a semantic database schema graph with interlinking conditionals. The query result is a mini-database, accessible in the same way as the original database. The paradigm supports and utilizes the rich semantics and inherent ergonomics of semantic databases. The analysis and high-level design of a system that exploits the superiority of the Semantic Database Model over other data models in expressive power and ease of use to allow uniform access to heterogeneous data sources such as semantic databases, relational databases, web sites, ASCII files, and others via a common query interface is presented. The Sem-ODB engine is used to control all the data sources combined under a unified semantic schema. A particular application of the system, providing an ODBC interface to the WWW as a data source, is discussed.
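Loosely in the spirit of a query expressed as a schema graph with interlinking conditionals, the sketch below matches a small triple pattern with variables and a condition against an in-memory store of binary relations. It is an illustration of the general style only, not Sem-ODB's actual query interface, and all facts and names are invented.

```python
# Illustrative graph-pattern query over binary relations; not Sem-ODB's interface.
facts = [("s1", "enrolled_in", "c1"), ("s2", "enrolled_in", "c2"),
         ("c1", "credits", 4), ("c2", "credits", 2)]

def _bind(binding, term, value):
    """'?x' terms are variables; anything else must match the fact exactly."""
    if isinstance(term, str) and term.startswith("?"):
        return binding.setdefault(term, value) == value
    return term == value

def match(pattern, condition=lambda b: True):
    """pattern: list of (subject, relation, object) triples containing '?variables'."""
    bindings = [{}]
    for subj, rel, obj in pattern:
        extended = []
        for binding in bindings:
            for s, r, o in facts:
                if r != rel:
                    continue
                b = dict(binding)
                if _bind(b, subj, s) and _bind(b, obj, o):
                    extended.append(b)
        bindings = extended
    return [b for b in bindings if condition(b)]

# "students enrolled in a course worth more than 3 credits"
print(match([("?student", "enrolled_in", "?course"), ("?course", "credits", "?n")],
            lambda b: b["?n"] > 3))
```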
Abstract:
If we classify the variables in a program into various security levels, then a secure information flow analysis aims to verify statically that information in the program can flow only in ways consistent with the specified security levels. One well-studied approach is to formulate the rules of the secure information flow analysis as a type system. A major trend of recent research focuses on how to accommodate various sophisticated modern language features. However, this approach often leads to overly complicated and restrictive type systems, making them unfit for practical use. Also, problems essential to practical use, such as type inference and error reporting, have received little attention. This dissertation identified and solved major theoretical and practical hurdles to the application of secure information flow. We adopted a minimalist approach to designing our language to ensure a simple, lenient type system. We started out with a small, simple imperative language and only added features that we deemed most important for practical use. One language feature we addressed is arrays. Due to the various leaking channels associated with array operations, arrays have received complicated and restrictive typing rules in other secure languages. We presented a novel approach to lenient array operations, which leads to simple and lenient typing of arrays. Type inference is necessary because usually a user is only concerned with the security types of the input/output variables of a program and would like to have all types for auxiliary variables inferred automatically. We presented a type inference algorithm B and proved its soundness and completeness. Moreover, algorithm B stays close to the program and the type system and therefore facilitates informative error reporting that is generated in a cascading fashion. Algorithm B and error reporting have been implemented and tested. Lastly, we presented a novel framework for developing applications that ensure user information privacy. In this framework, core computations are defined as code modules that involve input/output data from multiple parties. Incrementally, secure flow policies are refined based on feedback from type checking and inference. Core computations only interact with code modules from the involved parties through well-defined interfaces. All code modules are digitally signed to ensure their authenticity and integrity.
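For readers new to the area, here is a minimal sketch of the core secure-information-flow rule: each variable carries a security level, and an assignment is allowed only when the join of the levels of the information flowing in (including the program-counter label inside a branch) is no higher than the level of the target. This is the standard textbook check, not the dissertation's type system or its inference algorithm B.

```python
# Textbook flow check over a two-point lattice; not the dissertation's type system.
LEVELS = {"low": 0, "high": 1}

def join(*labels):
    """Least upper bound of a set of security labels."""
    return max(labels, key=LEVELS.get)

def check_assign(target, expr_vars, env, pc="low"):
    """Allow target := e only if level(e) joined with the pc label flows to level(target)."""
    source = join(pc, *(env[v] for v in expr_vars))
    if LEVELS[source] > LEVELS[env[target]]:
        raise TypeError(f"illegal flow of {source} information into {target} ({env[target]})")

env = {"secret": "high", "result": "high", "display": "low"}
check_assign("result", ["secret"], env)              # high -> high: accepted
try:
    check_assign("display", [], env, pc="high")      # implicit flow inside a high branch
except TypeError as err:
    print("rejected:", err)
```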
A framework for transforming, analyzing, and realizing software designs in the Unified Modeling Language
Abstract:
The Unified Modeling Language (UML) is the most comprehensive and widely accepted object-oriented modeling language due to its multi-paradigm modeling capabilities and easy-to-use graphical notations, with strong international organizational support and industrial production-quality tool support. However, there is a lack of precise definition of the semantics of individual UML notations as well as of the relationships among multiple UML models, which often introduces incompleteness and inconsistency problems into software designs in UML, especially for complex systems. Furthermore, there is a lack of methodologies to ensure a correct implementation from a given UML design. The purpose of this investigation is to verify and validate software designs in UML, and to provide dependability assurance for the realization of a UML design. In my research, an approach is proposed to transform UML diagrams into a semantic domain, which is a formal component-based framework. The framework I proposed consists of components and interactions through message passing, which are modeled by two-layer algebraic high-level nets and transformation rules, respectively. In the transformation approach, class diagrams, state machine diagrams, and activity diagrams are transformed into component models, and transformation rules are extracted from interaction diagrams. By applying transformation rules to component models, a (sub)system model of one or more scenarios can be constructed. Various techniques such as model checking and Petri net analysis can be adopted to check whether UML designs are complete or consistent. A new component called the property parser was developed and merged into the tool SAM Parser, which realizes (sub)system models automatically. The property parser generates and weaves runtime monitoring code into system implementations automatically for dependability assurance. The framework in this investigation is creative and flexible since it can not only be used to verify and validate UML designs, but also provides an approach to build models for various scenarios. As a result of my research, several kinds of previously ignored behavioral inconsistencies can be detected.
Abstract:
The need to provide computers with the ability to distinguish the affective state of their users is a major requirement for the practical implementation of affective computing concepts. This dissertation proposes the application of signal processing methods on physiological signals to extract from them features that can be processed by learning pattern recognition systems to provide cues about a person's affective state. In particular, combining physiological information sensed from a user's left hand in a non-invasive way with the pupil diameter information from an eye-tracking system may provide a computer with an awareness of its user's affective responses in the course of human-computer interactions. In this study an integrated hardware-software setup was developed to achieve automatic assessment of the affective status of a computer user. A computer-based "Paced Stroop Test" was designed as a stimulus to elicit emotional stress in the subject during the experiment. Four signals: the Galvanic Skin Response (GSR), the Blood Volume Pulse (BVP), the Skin Temperature (ST) and the Pupil Diameter (PD), were monitored and analyzed to differentiate affective states in the user. Several signal processing techniques were applied on the collected signals to extract their most relevant features. These features were analyzed with learning classification systems, to accomplish the affective state identification. Three learning algorithms: Naïve Bayes, Decision Tree and Support Vector Machine were applied to this identification process and their levels of classification accuracy were compared. The results achieved indicate that the physiological signals monitored do, in fact, have a strong correlation with the changes in the emotional states of the experimental subjects. These results also revealed that the inclusion of pupil diameter information significantly improved the performance of the emotion recognition system. ^
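A minimal sketch of the final classification-and-comparison stage follows, using scikit-learn. The feature vectors are synthetic stand-ins for features extracted from GSR, BVP, skin temperature, and pupil diameter; they are not data from the study, and the three classifiers are simply the algorithm families named above in their default configurations.

```python
# Synthetic illustration of comparing Naive Bayes, Decision Tree, and SVM classifiers.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
relaxed = rng.normal([0.2, 70, 33.5, 3.2], 0.3, size=(40, 4))    # stand-in [GSR, BVP, ST, PD] features
stressed = rng.normal([0.6, 85, 32.8, 4.1], 0.3, size=(40, 4))
X = np.vstack([relaxed, stressed])
y = np.array([0] * 40 + [1] * 40)                                # 0 = relaxed, 1 = stressed

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Decision Tree", DecisionTreeClassifier(random_state=0)),
                  ("SVM", SVC(kernel="rbf", gamma="scale"))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f} cross-validated accuracy")
```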
Abstract:
The purpose of this study was to design a preventive scheme using directional antennas to improve the performance of mobile ad hoc networks. In this dissertation, a novel Directionality based Preventive Link Maintenance (DPLM) Scheme is proposed to characterize the performance gain [JaY06a, JaY06b, JCY06] obtained by extending the life of a link. In order to maintain the link and take preventive action, the signal strength of data packets is measured. Moreover, location information or angle-of-arrival information is collected during communication and saved in a table. When the measured signal strength falls below an orientation threshold, an orientation warning is generated towards the previous-hop node. Once the orientation warning is received by the previous-hop (adjacent) node, it verifies the correctness of the warning with a few hello pings, initiates a high-quality directional link (a link above the threshold), and immediately switches to it, avoiding a link break altogether. The location information is utilized to create a directional link by orienting the neighboring nodes' antennas towards each other. We call this operation an orientation handoff, which is similar to soft handoff in cellular networks. Signal strength is the indicating factor that represents the health of the link and helps to predict link failure. In other words, link breakage happens due to node movement, which progressively reduces the signal strength of received packets. The DPLM scheme helps ad hoc networks avoid or postpone the costly operation of route rediscovery in on-demand routing protocols by taking the above-mentioned preventive action. This dissertation advocates close but simple collaboration between the routing, medium access control, and physical layers. In order to extend the link, the Dynamic Source Routing (DSR) and IEEE 802.11 MAC protocols were modified to use the ability of directional antennas to transmit over longer distances. A directional antenna module was implemented in the OPNET simulator with two separate modes of operation: omnidirectional and directional. The antenna module was incorporated in the wireless node model, and simulations were performed to characterize the performance improvement of mobile ad hoc networks. Extensive simulations have shown that, without noticeably affecting the behavior of the routing protocol, aggregate throughput, packet delivery ratio, end-to-end delay (latency), routing overhead, number of data packets dropped, and number of path breaks are improved considerably. Analysis of the results in different scenarios shows that the use of directional antennas with the proposed DPLM scheme is promising for improving the performance of mobile ad hoc networks.
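To illustrate the preventive decision step described above, the sketch below compares the measured signal strength of incoming data packets with an orientation threshold and, when it drops below, emits an orientation warning toward the previous-hop node so a directional link can be set up before the link breaks. The threshold value, message format, and angle handling are assumptions, not details taken from DPLM.

```python
# Hedged sketch of the DPLM-style trigger; constants and message fields are invented.
ORIENTATION_THRESHOLD_DBM = -78.0   # hypothetical threshold

neighbor_table = {}   # previous-hop id -> last known angle of arrival (degrees)

def on_data_packet(prev_hop, rssi_dbm, angle_of_arrival):
    neighbor_table[prev_hop] = angle_of_arrival            # keep location info up to date
    if rssi_dbm < ORIENTATION_THRESHOLD_DBM:               # link health degrading
        return {"type": "ORIENTATION_WARNING",
                "to": prev_hop,
                "suggested_angle": (angle_of_arrival + 180) % 360}  # point back toward us
    return None                                            # link healthy, nothing to do

print(on_data_packet("node-7", rssi_dbm=-74.0, angle_of_arrival=40))   # healthy link -> None
print(on_data_packet("node-7", rssi_dbm=-81.5, angle_of_arrival=42))   # warning generated
```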
Abstract:
Software development is an extremely complex process, during which human errors are introduced and result in faulty software systems. It is highly desirable and important that these errors be prevented and detected as early as possible. Software architecture design is a high-level system description, which embodies many system features and properties that are eventually implemented in the final operational system. Therefore, methods for modeling and analyzing software architecture descriptions can help prevent and reveal human errors and thus improve software quality. Furthermore, if an analyzed software architecture description can be used to derive a partial software implementation, especially when the derivation can be automated, significant benefits can be gained with regard to both system quality and productivity. This dissertation proposes a framework for an integrated analysis of both the design and the implementation. To ensure the desirable properties of the architecture model, we apply formal verification using the model checking technique. To ensure the desirable properties of the implementation, we develop a methodology and the associated tool to translate an architecture specification into an implementation written in a combination of the ArchJava, Java, and AspectJ programming languages. The translation is semi-automatic, so that many manual programming errors can be prevented. Furthermore, the translation inserts monitoring code into the implementation so that runtime verification can be performed, which provides additional assurance of the quality of the implementation. Moreover, validations of the translations from the architecture model to the program are provided. Finally, several case studies were conducted and are presented.
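The framework above inserts monitoring code into an ArchJava/Java/AspectJ implementation. Purely as a language-neutral illustration of that idea (not the actual translation tool or its generated code), the sketch below wraps a component operation with a runtime check derived from a simple architectural property: a consumer may take an item only after the producer has put one.

```python
# Illustration only: wrapping an operation with a runtime-verification check.
import functools

def monitor(property_check, description):
    """Weave a runtime check around a component operation."""
    def wrap(fn):
        @functools.wraps(fn)
        def guarded(self, *args, **kwargs):
            if not property_check(self):
                raise AssertionError(f"runtime verification failed: {description}")
            return fn(self, *args, **kwargs)
        return guarded
    return wrap

class Buffer:
    def __init__(self):
        self.items = []

    def put(self, item):
        self.items.append(item)

    @monitor(lambda self: len(self.items) > 0, "take requires a prior put")
    def take(self):
        return self.items.pop(0)

b = Buffer()
b.put("msg")
print(b.take())   # passes the woven check
# b.take()        # would raise: runtime verification failed
```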
Abstract:
Distributed applications are exposed as reusable components that are dynamically discovered and integrated to create new applications. These new applications, in the form of aggregate services, are vulnerable to failure due to the autonomous and distributed nature of their integrated components. This vulnerability creates the need for adaptability in aggregate services. The need for adaptation is accentuated for complex long-running applications such as those found in scientific Grid computing, where distributed computing nodes may participate to solve computation- and data-intensive problems. Such applications integrate services for coordinated problem solving in areas such as bioinformatics. For such applications, when a constituent service fails, the application fails, even though there are other nodes that can substitute for the failed service. This concern is not addressed in the specification of high-level composition languages such as that of the Business Process Execution Language (BPEL). We propose an approach to transparently autonomizing existing BPEL processes in order to make them modifiable at runtime and more resilient to failures in their execution environment. By introducing adaptive behavior transparently, the adaptation preserves the original business logic of the aggregate service and does not tangle the code for adaptive behavior with that of the aggregate service. The major contributions of this dissertation are as follows. First, we assessed the effectiveness of BPEL language support in developing adaptive mechanisms; as a result, we identified the strengths and limitations of BPEL and devised strategies to address those limitations. Second, we developed a technique to enhance existing BPEL processes transparently in order to support dynamic adaptation, proposing a framework which uses transparent shaping and generative programming to make BPEL processes adaptive. Third, we developed a technique to dynamically discover and bind to substitute services; our technique was evaluated, and the results showed that dynamic utilization of components improves the flexibility of adaptive BPEL processes. Fourth, we developed an extensible policy-based technique to specify how to handle exceptional behavior, along with a generic component that introduces adaptive behavior for multiple BPEL processes. Fifth, we identified ways to apply our work to facilitate adaptability in composite Grid services.
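As a bare illustration of the adaptation mechanism described above, the sketch below routes an invocation through a wrapper that, when the primary partner service fails, looks up a substitute and replays the call. The service names, the registry, and the failure model are invented; this is not the BPEL transparent-shaping implementation itself.

```python
# Invented stand-ins for partner services; illustrates rebinding on failure only.
class ServiceUnavailable(Exception):
    pass

def primary_blast(sequence):                  # stand-in for a failing partner service
    raise ServiceUnavailable("primary BLAST node is down")

def substitute_blast(sequence):               # stand-in for a discovered substitute
    return f"alignment result for {sequence!r} (from substitute)"

def discover_substitutes(operation):
    registry = {"blast": [substitute_blast]}  # hypothetical registry lookup
    return registry.get(operation, [])

def adaptive_invoke(operation, primary, *args):
    try:
        return primary(*args)
    except ServiceUnavailable:
        for candidate in discover_substitutes(operation):
            try:
                return candidate(*args)       # rebind and replay the invocation
            except ServiceUnavailable:
                continue
        raise                                 # no substitute worked; surface the failure

print(adaptive_invoke("blast", primary_blast, "ACGTACGT"))
```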