938 results for Concurrent programming
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2014
Abstract:
Concurrent coding is an encoding scheme with 'holographic'-type properties that are shown here to be robust against a significant amount of noise and signal loss. This single encoding scheme can correct random errors and burst errors simultaneously, yet it does not rely on cyclic codes. A simple and practical scheme has been tested that displays perfect decoding when the signal-to-noise ratio is of order -18 dB. The same scheme also displays perfect reconstruction when a contiguous block of 40% of the transmission is missing. In addition, this scheme is 50% more efficient in terms of transmitted power requirements than equivalent cyclic codes. A simple model is presented that describes the decoding process, determines the expected computational load, and identifies the critical levels of noise and missing data at which false messages begin to be generated.
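The abstract does not describe the encoding itself, but the channel conditions it reports (a signal-to-noise ratio of roughly -18 dB and a contiguous loss of 40% of the transmission) are easy to reproduce for experimentation. The following is a minimal Python sketch of that noise-and-erasure model only, not of concurrent coding; the BPSK-style symbols and all names are assumptions made for illustration.

    import numpy as np

    def add_awgn(signal, snr_db):
        """Add white Gaussian noise so the signal-to-noise ratio is snr_db (in dB)."""
        signal_power = np.mean(signal ** 2)
        noise_power = signal_power / (10 ** (snr_db / 10.0))
        noise = np.random.normal(0.0, np.sqrt(noise_power), signal.shape)
        return signal + noise

    def erase_contiguous_block(signal, fraction=0.4):
        """Zero out a contiguous block covering `fraction` of the transmission."""
        n = len(signal)
        block = int(fraction * n)
        start = np.random.randint(0, n - block + 1)
        corrupted = signal.copy()
        corrupted[start:start + block] = 0.0
        return corrupted

    # Example: a BPSK-like transmission of 10,000 symbols under the reported conditions.
    tx = np.random.choice([-1.0, 1.0], size=10_000)
    rx = erase_contiguous_block(add_awgn(tx, snr_db=-18.0), fraction=0.4)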
Abstract:
In the computer science community, there is considerable debate about the appropriate sequence for introducing object-oriented concepts to novice programmers. Research into novice programming has struggled to identify the critical aspects that would provide a consistently successful approach to teaching introductory object-oriented programming. Starting from the premise that a person's conception of a task determines the type of output produced, helping novice programmers become aware of what the required output should be may lay a foundation for improved learning. This study adopted a phenomenographic approach. Thirty-one practitioners were interviewed about the ways in which they experience object-oriented programming, and categories of description and critical aspects were identified. These critical aspects were then used to examine the spaces of learning provided in twenty introductory textbooks. The study uncovered critical aspects related to the way practitioners expressed their understanding of an object-oriented program and to the influences on their approach to designing programs. The study of the textbooks revealed large variability in their coverage of these critical aspects.
Abstract:
This paper introduces the theory of algorithm visualization and the education-related results obtained with it so far; an algorithm visualization tool is then presented as an example and evaluated. The article further illustrates how algorithm visualization tools can be used by teachers and students during the teaching and learning of programming, and it likewise evaluates the associated teaching and learning methods. Two tools are introduced: Jeliot and TRAKLA2.
Abstract:
The major barrier to practical optimization of pavement preservation programming has always been that, for formulations in which the identity of individual projects is preserved, the solution space grows exponentially with problem size, to the point where it becomes unmanageable by traditional analytical optimization techniques within reasonable time. This is attributed to combinatorial explosion, that is, exponential growth in the number of combinations. The relatively large number of constraints often present in real-life pavement preservation programming problems, and the trade-offs required between preventive maintenance, rehabilitation, and reconstruction, add further to the solution complexity. In this research, a new integrated multi-year optimization procedure was developed to solve network-level pavement preservation programming problems through cost-effectiveness-based evolutionary programming analysis, using the Shuffled Complex Evolution (SCE) algorithm.

A case study problem was analyzed to illustrate the robustness and consistency of the SCE technique in solving network-level pavement preservation problems. The output of the program is a list of maintenance and rehabilitation (M&R) treatment strategies for each identified segment of the network in each programming year, together with the impact on overall network performance, expressed as the performance levels of the recommended optimal M&R strategy.

The results show that SCE is efficient and consistent in simultaneously considering the trade-offs between various pavement preservation strategies while preserving the identity of individual network segments. The flexibility of the technique is also demonstrated: by suitably coding the problem parameters, it can be used to solve several forms of pavement management programming problems. For large networks, it is recommended that a decomposition technique be applied to aggregate sections with similar performance characteristics into links, such that whatever M&R alternative is recommended for a link can be applied to all the sections assigned to it; in this way the problem size, and hence the solution time, can be greatly reduced.

The study concludes that the robust search characteristics of SCE are well suited to the combinatorial problems arising in long-term network-level pavement M&R programming and that they provide a rich area for future research.
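The abstract names the Shuffled Complex Evolution (SCE) algorithm but does not detail it. As a loose illustration of evolutionary search over per-segment treatment choices under a budget, here is a hedged Python sketch; it is a generic evolutionary loop, not SCE-UA, and the treatments, costs, and effectiveness values are invented for illustration.

    import random

    TREATMENTS = ["do_nothing", "preventive", "rehabilitation", "reconstruction"]
    COST = {"do_nothing": 0, "preventive": 10, "rehabilitation": 40, "reconstruction": 100}
    EFFECT = {"do_nothing": 0, "preventive": 5, "rehabilitation": 25, "reconstruction": 60}

    N_SEGMENTS, BUDGET = 50, 800

    def fitness(plan):
        """Total effectiveness, heavily penalized if the budget is exceeded."""
        cost = sum(COST[t] for t in plan)
        effect = sum(EFFECT[t] for t in plan)
        return effect - 1000 * max(0, cost - BUDGET)

    def mutate(plan, rate=0.1):
        return [random.choice(TREATMENTS) if random.random() < rate else t for t in plan]

    # One plan assigns a treatment to each network segment in the programming year.
    population = [[random.choice(TREATMENTS) for _ in range(N_SEGMENTS)] for _ in range(60)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        parents = population[:20]                       # keep the fittest plans
        population = parents + [mutate(random.choice(parents)) for _ in range(40)]

    best_plan = max(population, key=fitness)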
Abstract:
If we classify the variables in a program into various security levels, then a secure information flow analysis aims to verify statically that information in the program can flow only in ways consistent with the specified security levels. One well-studied approach is to formulate the rules of the secure information flow analysis as a type system. A major trend in recent research focuses on accommodating various sophisticated modern language features. However, this approach often leads to overly complicated and restrictive type systems, making them unfit for practical use. Problems essential to practical use, such as type inference and error reporting, have also received little attention. This dissertation identified and solved major theoretical and practical hurdles to the application of secure information flow.

We adopted a minimalist approach to designing our language in order to ensure a simple, lenient type system. We started with a small, simple imperative language and added only the features we deemed most important for practical use. One language feature we addressed is arrays. Because of the various leaking channels associated with array operations, arrays have received complicated and restrictive typing rules in other secure languages. We presented a novel approach to lenient array operations, which leads to simple and lenient typing of arrays.

Type inference is necessary because a user is usually concerned only with the security types of a program's input/output variables and would like the types of all auxiliary variables to be inferred automatically. We presented a type inference algorithm B and proved its soundness and completeness. Moreover, algorithm B stays close to the program and the type system, and therefore facilitates informative error reporting generated in a cascading fashion. Algorithm B and the error reporting have been implemented and tested.

Lastly, we presented a novel framework for developing applications that ensure user information privacy. In this framework, core computations are defined as code modules that involve input/output data from multiple parties. Secure flow policies are refined incrementally, based on feedback from type checking and inference. Core computations interact with code modules from the involved parties only through well-defined interfaces, and all code modules are digitally signed to ensure their authenticity and integrity.
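The dissertation's type system and algorithm B are not reproduced here; the sketch below only illustrates the general idea of two-level secure information flow checking for assignments (an expression may not flow into a variable of lower security level). All names are hypothetical.

    LOW, HIGH = 0, 1   # two-point security lattice: LOW <= HIGH

    def expr_level(expr_vars, env):
        """An expression is as secret as the most secret variable it reads."""
        return max((env[v] for v in expr_vars), default=LOW)

    def check_assign(target, expr_vars, env):
        """Allow x := e only if level(e) <= level(x); otherwise flag a leak."""
        if expr_level(expr_vars, env) > env[target]:
            raise TypeError(f"illegal flow into {target!r}")

    env = {"secret": HIGH, "public": LOW, "tmp": HIGH}
    check_assign("tmp", ["secret"], env)            # fine: HIGH into HIGH
    try:
        check_assign("public", ["secret"], env)     # rejected: HIGH into LOW
    except TypeError as leak:
        print(leak)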
Abstract:
In recent years, the Internet has grown exponentially and become more complex. This increased complexity potentially introduces more network-level instability, yet for any end-to-end Internet connection it is important to maintain throughput and reliability at a certain level, because they directly affect the connection's normal operation. A challenging research task is therefore to improve a connection's performance by optimizing its throughput and reliability. This dissertation proposed an efficient and reliable transport-layer protocol, called concurrent TCP (cTCP), an extension of the current TCP protocol, to optimize end-to-end connection throughput and enhance end-to-end fault tolerance. The proposed cTCP protocol can aggregate the bandwidth of multiple paths by supporting concurrent data transfer (CDT) on a single connection, where concurrent data transfer is defined as the concurrent transfer of data from local hosts to foreign hosts via two or more end-to-end paths. An RTT-based CDT mechanism, which uses a path's RTT (round-trip time) to optimize CDT performance, was developed for the proposed cTCP protocol. This mechanism primarily comprises an RTT-based load distribution and path management scheme, used to optimize a connection's throughput and reliability, together with an RTT-based congestion control and retransmission policy. Experimental results show that, under different network conditions, the RTT-based CDT mechanism achieves good CDT performance. Finally, a CWND-based CDT mechanism, which uses a path's CWND (congestion window) to optimize CDT performance, was introduced. This mechanism primarily comprises a CWND-based load allocation scheme, which assigns data to paths according to their CWND in order to aggregate bandwidth; a CWND-based path management scheme, used to optimize a connection's fault tolerance; and a congestion control and retransmission management policy that handles each path separately, similar to regular TCP. The corresponding experimental results show that this mechanism achieves near-optimal CDT performance under different network conditions.
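The cTCP load-distribution schemes are only summarized above. As a rough illustration of RTT- and CWND-based load allocation, the following hedged Python sketch splits a burst of segments across paths in proportion to cwnd/rtt, a crude proxy for each path's available bandwidth; names and numbers are hypothetical, not the dissertation's mechanism.

    from dataclasses import dataclass

    @dataclass
    class Path:
        name: str
        rtt_ms: float   # smoothed round-trip time
        cwnd: int       # congestion window, in segments

    def distribute(total_segments, paths):
        """Split a burst of segments across paths in proportion to cwnd/rtt,
        a rough proxy for each path's available bandwidth."""
        weights = [p.cwnd / p.rtt_ms for p in paths]
        total_weight = sum(weights)
        return {p.name: round(total_segments * w / total_weight)
                for p, w in zip(paths, weights)}

    paths = [Path("wifi", rtt_ms=20.0, cwnd=40), Path("lte", rtt_ms=60.0, cwnd=30)]
    print(distribute(100, paths))   # {'wifi': 80, 'lte': 20}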
Abstract:
This research is motivated by the need to consider lot sizing while accepting customer orders in a make-to-order (MTO) environment, in which each customer order must be delivered by its due date. The job shop is the typical operation model used in an MTO operation, and its production planner must make three concurrent decisions: order selection, lot sizing, and job scheduling. These decisions are usually treated separately in the literature and are mostly addressed with heuristic solutions. The first phase of the study focuses on a formal definition of the problem. Mathematical programming techniques are applied to model the problem in terms of its objective, decision variables, and constraints. A commercial solver, CPLEX, is applied to the resulting mixed-integer linear programming model on small instances to validate the mathematical formulation. The computational results show that a commercial solver is not practical for problems of industrial size. The second phase of the study focuses on developing an effective solution approach to the large-scale problem. The proposed approach is an iterative process involving three sequential decision steps: order selection, lot sizing, and lot scheduling. A range of simple sequencing rules is identified for each of the three subproblems, and an experiment using computer simulation is designed to evaluate their performance against a set of system parameters. For order selection, the proposed weighted most-profit rule performs best. The shifting bottleneck and earliest operation finish time rules are the best scheduling rules. For lot sizing, the proposed minimum cost increase heuristic, based on the Dixon-Silver method, performs best when the demand-to-capacity ratio at the bottleneck machine is high, while the proposed minimum cost heuristic, based on the Wagner-Whitin algorithm, is the best lot-sizing heuristic for shops with a low demand-to-capacity ratio. The proposed heuristic is applied to an industrial case to further evaluate its performance; the results show it can improve total profit by an average of 16.62%. This research contributes to the production planning research community a complete mathematical definition of the problem and an effective solution approach for problems of industrial scale.
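For context on the Wagner-Whitin-based lot-sizing rule mentioned above, here is a minimal Python sketch of the classic Wagner-Whitin dynamic program for uncapacitated single-item lot sizing (a fixed setup cost per production run plus a linear holding cost); it is not the dissertation's heuristic, and the demand figures are illustrative.

    def wagner_whitin(demand, setup_cost, holding_cost):
        """Classic Wagner-Whitin DP: choose in which periods to produce so that
        total setup + holding cost is minimized (no capacity limits)."""
        T = len(demand)
        best = [0.0] * (T + 1)      # best[t] = min cost to cover periods 1..t
        choice = [0] * (T + 1)      # choice[t] = period of the last production run
        for t in range(1, T + 1):
            candidates = []
            for j in range(1, t + 1):   # last run in period j covers demand of j..t
                hold = sum(holding_cost * (i - j) * demand[i - 1] for i in range(j, t + 1))
                candidates.append((best[j - 1] + setup_cost + hold, j))
            best[t], choice[t] = min(candidates)
        return best[T], choice

    cost, choice = wagner_whitin(demand=[20, 50, 10, 40], setup_cost=100, holding_cost=1.0)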
Abstract:
In the 1980s, government agencies sought to utilize research on drug use prevention to design media campaigns. Enlisting the assistance of the national media, several campaigns were designed and initiated to bring anti-drug-use messages to adolescents in the form of public service advertising. This research explores the sources of information selected by adolescents in grades 7 through 12 and how the selection of media and other sources of information relates to drug use behavior, and to attitudes and perceptions related to risk/harm and disapproval of friends' drug-using activities.

Data collected from 1989 to 1992 in the Miami Coalition School Survey provided a random selection of secondary school students. The responses of these students were analyzed using multivariate statistical techniques.

Although many of the students selected media as the source of most of their information on the effects of drugs on the people who use them, the selection of media was found to be positively related to alcohol use and negatively related to marijuana use. The selection of friends, brothers, or sisters was a statistically significant source for adolescents who smoke cigarettes or use alcohol or marijuana.

The results indicate that the anti-drug-use messages received by students may be canceled out by media messages perceived to advocate substance use, and that a more persuasive source of information for adolescents may be friends and siblings. As federal reports suggest that the economic costs of drug abuse will reach an estimated $150 billion by 1997 if current trends continue, prevention policy that addresses the glamorization of substance use remains a national priority. Additionally, programs that advocate prevention within the peer cluster must be supported, as peers are an influential source for both inspiring and possibly preventing drug use behavior.
Abstract:
This series of 5 single-subject studies used the operant conditioning paradigm to investigate, within the two-way influence process, (a) how contingent infant attention can reinforce maternal verbal behaviors during a period of mother-infant interaction and under subsequent experimental manipulation. Differential reinforcement was used to determine whether an infant attending to the mother (denoted by head-turns toward the image of the mother plus eye contact) increases (reinforces) the maternal verbal response (to a cue from the infant) upon which the infant behavior is contingent. The studies also (b) evaluated, during the contrived parent-infant interaction, concurrent operant learning of infant vocal behavior via contingent verbal responding (reinforcement) implemented by the mother, and (c) noted whether or not the mother reported being aware that her responses were influenced by the infant's behavior. Findings showed that maternal verbal behaviors were reinforced by contingent infant attention and that infant vocalizations were reinforced by contingent maternal verbal behaviors. No parent reported (1) being aware of the increase in her verbal responses reinforced during operant conditioning of parental behavior, or of the decrease in those responses during the DRA reversal phase, or (2) noticing a contingency between the infant's and the mother's responses. By binomial one-tailed tests, the verbal-behavior patterns of the 5 mothers were conditioned by infant reinforcement (p < 0.02) and, concurrently, the vocal-response patterns of the 5 infants were conditioned by maternal reinforcement (p < 0.02). A program of systematic empirical research on the determinants of concurrent conditioning within mother-child interaction may provide a way to evaluate the differential effectiveness of interventions aimed at improving parent-child interactions; the work conducted in the present study is one step in this direction.
Abstract:
This research focuses on developing a capacity planning methodology for the emerging concurrent engineer-to-order (ETO) mode of operations, with the primary focus placed on capacity planning at the sales stage. The study examines the characteristics of capacity planning in a concurrent ETO operation environment, models the problem analytically, and proposes a practical capacity planning methodology for concurrent ETO operations in industry. A computer program that mimics a concurrent ETO operation environment was written to validate the proposed methodology and to test a set of rules that affect the performance of a concurrent ETO operation.

The study takes a systems engineering approach to the problem and employs systems engineering concepts and tools for modeling and analyzing it, as well as for developing a practical solution. It depicts a concurrent ETO environment in which capacity is planned. The capacity planning problem is modeled as a mixed-integer program and then solved for smaller-sized applications to evaluate its validity and solution complexity. The objective is to select the best set of available jobs to maximize profit while having sufficient capacity to meet each due date expectation.

The nature of capacity planning for concurrent ETO operations differs from that of other operation modes, and the search for an effective solution to this problem is an emerging research field. This study characterizes the capacity planning problem and proposes a solution approach: a mathematical model that relates work requirements to capacity over the planning horizon, with a methodology proposed for solving industry-scale problems. Along with the capacity planning methodology, a set of heuristic rules was evaluated for improving concurrent ETO planning.
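The abstract does not give the mixed-integer program itself. The following hedged Python sketch, using the open-source PuLP library, shows a toy version of the selection problem it describes: pick jobs to maximize profit while fitting each accepted job's workload into a period no later than its due date. All data and identifiers are hypothetical.

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

    # Hypothetical data: (profit, workload in hours, due period) for each candidate job.
    jobs = {"A": (900, 30, 1), "B": (1200, 50, 2), "C": (700, 40, 2), "D": (1500, 60, 3)}
    periods = [1, 2, 3]
    capacity = {1: 60, 2: 60, 3: 60}   # available hours per period

    prob = LpProblem("concurrent_eto_capacity_planning", LpMaximize)

    # y[j][t] = 1 if job j's workload is placed in period t (only periods up to its due date).
    y = {j: {t: LpVariable(f"y_{j}_{t}", cat=LpBinary)
             for t in periods if t <= jobs[j][2]} for j in jobs}

    # Objective: total profit of accepted jobs (a job is accepted if placed in some period).
    prob += lpSum(jobs[j][0] * y[j][t] for j in jobs for t in y[j])

    for j in jobs:                      # each job is placed at most once
        prob += lpSum(y[j].values()) <= 1
    for t in periods:                   # placed workloads must fit the period's capacity
        prob += lpSum(jobs[j][1] * y[j][t] for j in jobs if t in y[j]) <= capacity[t]

    prob.solve()
    accepted = [j for j in jobs if any(v.value() > 0.5 for v in y[j].values())]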
Abstract:
Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency results in a huge number of different system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at the design and code levels. The dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built using Petri nets from user requirements and is formally verified using model checking. Second, Petri net models are automatically mined from the event traces generated by scientific workflows. Third, partial order models are automatically extracted from instrumented concurrent program executions, and potential atomicity violation bugs are automatically verified against the partial order models using model checking. Our formal specification and verification of Mondex have contributed to the worldwide effort to develop a verified software repository. Our method for mining Petri net models automatically from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable because it exploits the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at a time. However, predictive tools must consider the trade-off between precision and coverage. Building on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity violation predictions: 1) a post-prediction analysis method that increases coverage while ensuring precision, and 2) a follow-up replaying method that further increases coverage. Both methods are implemented in a completely automatic tool.
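McPatom itself is not shown here; the small Python sketch below merely illustrates the kind of atomicity violation such tools target: two threads performing a non-atomic read-modify-write on one shared variable, together with a lock-protected version that restores atomicity.

    import threading

    counter = 0
    lock = threading.Lock()

    def unsafe_increment(n):
        global counter
        for _ in range(n):
            tmp = counter        # read
            counter = tmp + 1    # write: another thread may have updated counter in between

    def safe_increment(n):
        global counter
        for _ in range(n):
            with lock:           # the read-modify-write is now a single atomic block
                counter += 1

    threads = [threading.Thread(target=unsafe_increment, args=(100_000,)) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)   # may print less than 200000: updates were lost to the race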
Abstract:
Fisheries-independent data on relatively unstudied nekton communities were used to explore the efficacy of new tools for investigating shallow coastal coral reef habitats. These data, obtained through concurrent diver visual and acoustic surveys, provided descriptions of spatial community distribution patterns across seasonal temporal scales in a previously undocumented region. Fish density estimates from the diver and acoustic methodologies showed general agreement in their ability to detect distributional patterns across reef tracts, though the magnitudes of the density estimates differed. Fish communities in southeastern Florida showed significant trends in spatial distribution and seasonal abundance, with higher biomass estimates obtained in the dry season. Further, community composition shifted across reef tracts and seasons as a function of the movements of several key reef species.