916 results for Compilers (Computer programs) -- Design
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel: it describes an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
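To make the dataflow model of computation concrete, the following minimal Python sketch (illustrative only, far simpler than RVC-CAL, and not code from the thesis; all actor and queue names are hypothetical) shows nodes that communicate solely through FIFO queues and fire only when their firing rule, sufficient tokens on every input, is satisfied, under a naive fully dynamic scheduler that re-evaluates every firing rule before each firing:

```python
from collections import deque

class Actor:
    """A dataflow node: it may fire only when its firing rule holds,
    consuming tokens from input FIFOs and producing tokens to output FIFOs."""

    def __init__(self, name, inputs, outputs, rates, action):
        self.name = name
        self.inputs = inputs    # input FIFOs (deques)
        self.outputs = outputs  # output FIFOs (deques)
        self.rates = rates      # tokens consumed per input FIFO per firing
        self.action = action    # maps consumed tokens to one token per output

    def can_fire(self):
        # Firing rule: every input queue holds enough tokens.
        return all(len(q) >= r for q, r in zip(self.inputs, self.rates))

    def fire(self):
        consumed = [[q.popleft() for _ in range(r)]
                    for q, r in zip(self.inputs, self.rates)]
        for out_queue, token in zip(self.outputs, self.action(consumed)):
            out_queue.append(token)

# A two-node graph: scale -> offset; the queues are the only communication.
q_in, q_mid, q_out = deque([1, 2, 3]), deque(), deque()
scale = Actor("scale", [q_in], [q_mid], [1], lambda c: [2 * c[0][0]])
offset = Actor("offset", [q_mid], [q_out], [1], lambda c: [c[0][0] + 1])

# Naive dynamic scheduler: check each actor's firing rule in every round.
actors = [scale, offset]
while any(a.can_fire() for a in actors):
    for a in actors:
        if a.can_fire():
            a.fire()
print(list(q_out))  # [3, 5, 7]
```

A quasi-static scheduler in the spirit of the thesis would replace most of the `can_fire` checks inside the loop with pre-computed static firing sequences, keeping only the few genuinely data-dependent decisions at run time.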
Abstract:
Technological innovations, the development of the internet, and globalization have increased the number and complexity of web applications. As a result, keeping web user interfaces understandable and usable (in terms of ease-of-use, effectiveness, and satisfaction) is a challenge. As part of this, designing user-intuitive interface signs (i.e., the small elements of a web user interface, e.g., navigational links, command buttons, icons, small images, thumbnails, etc.) is an issue for designers. Interface signs are key elements of web user interfaces because they act as communication artefacts that convey web content and system functionality, and because users interact with systems by means of interface signs. In light of the above, applying semiotic (i.e., the study of signs) concepts to web interface signs will contribute to discovering new and important perspectives on web user interface design and evaluation. The thesis mainly focuses on web interface signs and uses the theory of semiotics as a background theory. The underlying aim of this thesis is to provide valuable insights for designing and evaluating web user interfaces from a semiotic perspective in order to improve overall web usability. The fundamental research question is formulated as: What do practitioners and researchers need to be aware of from a semiotic perspective when designing or evaluating web user interfaces to improve web usability? From a methodological perspective, the thesis follows a design science research (DSR) approach. A systematic literature review and six empirical studies are carried out in this thesis. The empirical studies are carried out with a total of 74 participants in Finland. The steps of a design science research process are followed while the studies were designed and conducted; these include (a) problem identification and motivation, (b) definition of the objectives of a solution, (c) design and development, (d) demonstration, (e) evaluation, and (f) communication. The data is collected using observations in a usability testing lab, by analytical (expert) inspection, with questionnaires, and in structured and semi-structured interviews. User behaviour analysis, qualitative analysis, and statistics are used to analyze the study data. The results are summarized as follows and have led to the following contributions. Firstly, the results present the current status of semiotic research in UI design and evaluation and highlight the importance of considering semiotic concepts in UI design and evaluation. Secondly, the thesis explores interface sign ontologies (i.e., the sets of concepts and skills that a user should know to interpret the meaning of interface signs) by providing a set of ontologies used to interpret the meaning of interface signs, and by providing a set of features related to ontology mapping in interpreting the meaning of interface signs. Thirdly, the thesis explores the value of integrating semiotic concepts in usability testing. Fourthly, the thesis proposes a semiotic framework (Semiotic Interface sign Design and Evaluation – SIDE) for interface sign design and evaluation in order to make signs intuitive for end users and to improve web usability. The SIDE framework includes a set of determinants and attributes of user-intuitive interface signs, and a set of semiotic heuristics to design and evaluate interface signs. Finally, the thesis assesses (a) the quality of the SIDE framework in terms of performance metrics (e.g., thoroughness, validity, effectiveness, reliability, etc.)
and (b) the contributions of the SIDE framework from the evaluators’ perspective.
Abstract:
This thesis concentrates on the validation of the generic thermal hydraulic computer code TRACE against the challenges of the VVER-440 reactor type. The code's capability to model the VVER-440 geometry and the thermal hydraulic phenomena specific to this reactor design has been examined and demonstrated to be acceptable. The main challenge in VVER-440 thermal hydraulics appeared in the modelling of the horizontal steam generator; the major challenge here lies not in the code physics or numerics but in the formulation of a representative nodalization structure. Another VVER-440 specialty, the hot leg loop seals, challenges system codes functionally in general, but proved readily representable. Computer code models have to be validated against experiments to achieve confidence in them. When a new computer code is to be used for nuclear power plant safety analysis, it must first be validated against a large variety of different experiments. The validation process has to cover both the code itself and the code input. Uncertainties of different natures are identified in the different phases of the validation procedure and can even be quantified. This thesis presents a novel approach to input model validation and uncertainty evaluation in the different stages of the computer code validation procedure. It also demonstrates that in safety analysis there are inevitably significant uncertainties that are not statistically quantifiable; these need to be, and can be, addressed by other, less simplistic means, ultimately relying on the competence of the analysts and the capability of the community to support the experimental verification of analytical assumptions. This method essentially complements the commonly used uncertainty assessment methods, which usually rely on statistical methods alone.
Abstract:
Due to various advantages such as flexibility, scalability, and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been to raise the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications; with their computational power, these platforms are likely to be used in various application domains, from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach, which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.
Abstract:
Advances in technology have provided new ways of using entertainment and game technology to foster human interaction. Games and playing with games have always been an important part of people's everyday lives. Traditionally, human-computer interaction (HCI) research was seen as a psychological cognitive science focused on human factors, with engineering sciences as the computer science part of it. Although cognitive science has made significant progress over the past decade, the influence of people's emotions on design is increasingly important, especially when the primary goal is to challenge and entertain users (Norman 2002). Game developers have explored the key issues in game design and identified that the driving force in the success of games is user experience. User-centered design integrates knowledge of users' activity practices, needs, and preferences into the design process. Geocaching is a location-based treasure hunt game created by a community of players. Players use GPS (Global Positioning System) technology to find "treasures" and create their own geocaches; the game develops as players invent new caches and use their imagination in creating them. This doctoral dissertation explores the user experience of geocaching and its applications in tourism and education. Globally, according to the Geocaching.com webpage, geocaching has been played in about 180 countries and there are more than 10 million registered geocachers worldwide (Geocaching.com, 25.11.2014). This dissertation develops and presents an interaction model called the GameFlow Experience model that can be used to support the design of treasure hunt applications in tourism and education contexts. The GameFlow model presents and clarifies various experiences; it provides such experiences in a real-life context, offers desirable design targets to be utilized in service design, and offers a perspective to consider when evaluating the success of adventure game concepts. User-centered game design has adapted human factors research from mainstream computing science. For many years, the user-centered design approach has been the most important research field in software development. Research has focused on user-centered design in software development such as office programs, but the same ideas and theories that reflect the needs of user-centered research are now also being applied to game design (Charles et al. 2005). For several years, we have seen a growing interest in user experience design. Digital games are experience providers, and game developers need tools to better understand the user experience related to the products and services they have created. This thesis aims to present what the user experience is in geocaching and treasure hunt games and how it can be used to develop new concepts for treasure hunts. Engineers, designers, and researchers should have a clear understanding of what user experience is, what its parts are, and, most importantly, how we can influence user satisfaction. In addition, we need to understand how users interact with electronic products and people, and how different elements combine in their experiences. This doctoral dissertation represents pioneering work on the user experience of geocaching and treasure hunt games in the context of tourism and education. The research also provides a model for game developers who are planning treasure hunt concepts.
Abstract:
Aim and design: To evaluate family-based health counseling for young children, and to study the significance of adding parental self-care or the training of professionals to the programs. The effectiveness and acceptability of the programs were evaluated by comparing two new programs with an earlier one. Subjects and methods: The study was carried out in Vantaa, which was divided into three study areas. The subjects consisted of children born in 2008, particularly firstborn children, while children born in 2006 formed the historical control. The first of the new programs emphasized oral hygiene and use of fluoride, and the second program focused on proper diet and use of xylitol. The main outcome measure was mutans streptococci (MS) in the dental biofilm of two-year-olds, and the opinions of parents and dental professionals were evaluated using questionnaires. Results: The programs found wide acceptance among dental professionals. There were no group-related differences found in the MS scores of the two-year-olds. However, with all groups combined, the father's advanced level of education and the child's proper use of xylitol were associated with negative MS scores. In the opinion of parents, the oral healthcare guidance at least somewhat met their expectations. Conclusions: The present findings suggest that providing training and support for professionals in health education is important. The addition of parental self-care to supplement programs aimed at young children does not improve the program, although it may improve parental readiness to change their own health habits. Counseling for families might be best carried out through a routine patient-centered program.
Abstract:
This project aims to design and manufacture a mobile robot carrying two Universal Robots UR10 manipulators, intended mainly for indoor use. In order to obtain omni-directional maneuverability, the mobile robot is built on Mecanum wheels. A Mecanum wheel can move in any direction thanks to a series of rollers attached around its circumference and angled at 45º about the hub. Because of this any-direction property, the wheels serve for both driving and steering. This paper focuses on the design of the traction and suspension systems and on the velocity control of the Mecanum wheels in a closed-loop control system. The mechanical design includes the selection of bearing housings, the couplers that act as connections between shafts, motor parts, and other needed components. The 3D design software SolidWorks is utilized to assemble all the components in order to obtain correct tolerances, and the driving shaft is designed based on the assembled structure in the software as well. The suspension system is designed to compensate for the assembly error of the Mecanum wheels and thereby guarantee the stability of the robot. The control of the motor drivers is realized through the Robot Operating System (ROS) on Ubuntu Linux. The purpose of the inverse kinematics is to obtain the relationship among the movements of all the Mecanum wheels. Through programming and interaction with the computer, the robot can move at the required speed and in the required direction.
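As a sketch of the inverse kinematics mentioned above, the snippet below implements the standard formulation for a four-wheeled Mecanum platform with 45° rollers; it is not code from this project, and the wheel radius and chassis dimensions are placeholder assumptions, not the thesis values:

```python
def mecanum_inverse_kinematics(vx, vy, wz, r=0.076, half_length=0.25, half_width=0.20):
    """Map a desired body velocity (vx forward, vy left, in m/s; wz yaw rate
    in rad/s) to the four wheel angular velocities (rad/s) of a Mecanum
    platform with 45-degree rollers. r is the wheel radius; half_length and
    half_width are half the wheelbase and half the track (placeholder values)."""
    k = half_length + half_width
    w_front_left  = (vx - vy - k * wz) / r
    w_front_right = (vx + vy + k * wz) / r
    w_rear_left   = (vx + vy - k * wz) / r
    w_rear_right  = (vx - vy + k * wz) / r
    return w_front_left, w_front_right, w_rear_left, w_rear_right

# Pure sideways motion: the wheels spin in the -, +, +, - pattern
# characteristic of Mecanum drives.
print(mecanum_inverse_kinematics(0.0, 0.3, 0.0))
```

In a ROS-based closed-loop controller such as the one described, these wheel velocity targets would be sent to the motor drivers and corrected against encoder feedback.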
Abstract:
Global energy consumption has been increasing yearly, and a large portion of it is used by rotating electrical machines. It is clear that these machines should use energy efficiently. The aim of this dissertation is to improve the design process of high-speed electrical machines, especially from the mechanical engineering perspective, in order to achieve more reliable and efficient machines. The design process of high-speed machines is challenging due to high demands and the many interactions between different engineering disciplines such as mechanical, electrical, and energy engineering. A multidisciplinary design flow chart, in which computer simulation is utilized, is proposed for a specific type of high-speed machine. In addition to utilizing simulation in parallel with the design process, two simulation studies are presented. The first is used to find the limits of two ball bearing models. The second is used to study how the machine load capacity in a compressor application can be improved to exceed the limits of current machinery. The proposed flow chart and simulation studies show clearly that improvements in the high-speed machinery design process can be achieved. Engineers designing high-speed machines can utilize the flow chart and simulation results as a guideline during the design phase to achieve more reliable and efficient machines that use energy efficiently in the different required operating conditions.
Abstract:
The vast majority of our contemporary society owns a mobile phone, which has resulted in a dramatic rise in the number of networked computers in recent years. Security issues in these computers have followed the same trend, and nearly everyone is now affected by them. How could the situation be improved? For software engineers, an obvious answer is to build computer software with security in mind. A problem with building secure software is how to define secure software and how to measure security. This thesis divides the problem into three research questions. First, how can we measure the security of software? Second, what types of tools are available for measuring security? And finally, what do these tools reveal about the security of software? Measuring tools of this kind are commonly called metrics. This thesis is focused on the perspective of software engineers in the software design phase. The focus on the design phase means that code-level semantics and programming language specifics are not discussed in this work; organizational policy, management issues, and the software development process are also out of scope. The first two research problems were studied using a literature review, while the third was studied using case study research. The target of the case study was a Java-based email server called Apache James, for which details from its changelog and security issues were available and the source code was accessible. The research revealed that there is a consensus in the terminology on software security. Security verification activities are commonly divided into evaluation and assurance. The focus of this work was on assurance, which means verifying one's own work. There are 34 metrics available for security measurements, of which five are evaluation metrics and 29 are assurance metrics. We found, however, that the general quality of these metrics was not good. Only three metrics in the design category passed the inspection criteria and could be used in the case study. The metrics claim to give quantitative information on the security of the software, but in practice they were limited to evaluating different versions of the same software. Apart from being relative, the metrics were unable to detect security issues or point out problems in the design. Furthermore, interpreting the metrics' results was difficult. In conclusion, the general state of software security metrics leaves a lot to be desired. The metrics studied had both theoretical and practical issues and are not suitable for daily engineering workflows. They nevertheless provide a basis for further research, since they point out areas where security metrics need to improve if verification of security from the design phase is desired.
Abstract:
Human-Centered Design (HCD) is a well-recognized approach to the design of interactive computing systems that support the everyday and professional lives of people. To that end, the HCD approach puts central emphasis on the explicit understanding of users and the context of use by involving users throughout the entire design and development process. With mobile computing, the diversity of users as well as the variety in the spatial, temporal, and social settings of the context of use has notably expanded, which affects the effort of interaction designers to understand users and the context of use. The emergence of the mobile apps era in 2008, as a result of structural changes in the mobile industry and the profoundly enhanced capabilities of mobile devices, further intensifies the embeddedness of technology in the daily life of people and the challenges that interaction designers face in cost-efficiently understanding users and the context of use. Supporting interaction designers in this challenge requires understanding of their existing practice, rationality, and work environment. The main objective of this dissertation is to contribute to interaction design theories by generating understanding of the HCD practice of mobile systems in the mobile apps era, as well as to explain the rationality of interaction designers in attending to users and the context of use. To achieve that, a literature study is carried out, followed by mixed-methods research that combines multiple qualitative interview studies and a quantitative questionnaire study. The dissertation contributes new insights regarding the evolving HCD practice at an important time of transition from stationary computing to mobile computing. Firstly, a gap is identified between interaction design as practiced in research and in the industry regarding the involvement of users in context: whereas the utilization of field evaluations, i.e. in real-life environments, has become more common in academic projects, interaction designers in the industry still rely, by and large, on lab evaluations. Secondly, the findings indicate new aspects that can explain this gap and the rationality of interaction designers in the industry in attending to users and context; essentially, the professional-client relationship was found to inhibit the involvement of users, while the mental distance between practitioners and users as well as the perceived innovativeness of the designed system are suggested as explanations for the inclination to study users in situ. Thirdly, the research contributes the first explanatory model of the relation between the organizational context and HCD; essentially, innovation-focused organizational strategies greatly affect the cost-effective usage of data on users and the context of use. Last, the findings suggest a change in the nature of HCD in the mobile apps era, at least with universal consumer systems; evidently, the central attention on the explicit understanding of users and the context of use shifts from an early requirements phase and continual activities during design and development to follow-up activities. That is, the main effort to understand users is made by collecting data on their actual usage of the system, either before or after the system is deployed. The findings inform both researchers and practitioners in interaction design. In particular, the dissertation suggests action research as a useful approach to support interaction designers and to further inform theories on interaction design.
With regard to interaction design practice, the dissertation highlights strategies that encourage a more cost-effective user- and context-informed interaction design process. With the continual embeddedness of computing into people's lives, e.g. with wearable devices and connected car systems, the dissertation provides a timely and valuable view on the evolving human-centered design.
Abstract:
Today, user experience and usability in software applications are becoming a major design issue due to the adaptation of many processes to new technologies. Therefore, the study of user experience and usability should be part of every software development project, and both should be tested to obtain traceable results. Faced with the different testing methods available to evaluate these concepts, a non-expert on the topic might have doubts about which option to choose and how to interpret the outcomes of the process. This work aims to create a process that eases the whole testing methodology, based on the process created by Seffah et al., together with a supporting software tool to follow the procedure of these testing methods for user experience and usability.
Abstract:
In 2002, the Ontario Federation of School Athletic Associations (OFSAA) identified that, in providing extracurricular sport programs, schools are faced with the 'new realities' of the education system. Although research has been conducted exploring the pressures impacting the provision of extracurricular school sport (Donnelly, Mcloy, Petherick, & Safai, 2000), few studies within the field have focused on understanding extracurricular school sport at an organizational level. The focus of this study was to examine the organizational design (structure, systems, and values) of the extracurricular sport department within three Ontario high schools, as well as to understand the context within which the departments exist. A qualitative multiple case study design was adopted, and three public high schools were selected from one district school board in Ontario to represent the cases under investigation. Interviews, observations, and documents were used to analyze the extracurricular sport department design of each case and to better understand the context within which the departments exist. As a result of the analysis of the structure, systems, and values of each case, two designs emerged: Design KT1 and Design KT2. Differences in the characteristics of design archetypes KT1 and KT2 centered on the design dimension of values, and therefore this study identified that contrasting organizational values reflect differences in design types. The characteristics of the Kitchen Table archetype were found to be transferable to the sub-sector of extracurricular school sport, and therefore this research provides a springboard for further research on organizational design within the education sector of extracurricular high school sport. Interconnections were found between the data associated with the external and internal contexts within which the extracurricular sport departments exist. The analysis of the internal context indicated the important role played by organizational members in shaping the context within which the departments exist. The analysis of the external context highlighted the institutional pressures present within the education environment. Both political and cultural expectations related to the role of extracurricular sport within schools were visible and were subsequently used by the high schools to create legitimacy and prestige, and to access resources.
Abstract:
The quantitative component of this study examined the effect of computer-assisted instruction (CAI) on science problem-solving performance, as well as the significance of logical reasoning ability to this relationship. I had the dual role of researcher and teacher, as I conducted the study with 84 grade seven students to whom I simultaneously taught science on a rotary basis. A two-treatment research design using this sample of convenience allowed for a comparison between the problem-solving performance of a CAI treatment group (n = 46) and a laboratory-based control group (n = 38). Science problem-solving performance was measured by a pretest and posttest that I developed for this study. The validity of these tests was addressed through critical discussions with faculty members and colleagues, as well as through feedback gained in a pilot study. High reliability was revealed between the pretest and the posttest; that is, students who tended to score high on the pretest also tended to score high on the posttest. Interrater reliability was found to be high for 30 randomly selected test responses which were scored independently by two raters (i.e., myself and my faculty advisor). Results indicated that the form of computer-assisted instruction (CAI) used in this study did not significantly improve students' problem-solving performance. Logical reasoning ability was measured by an abbreviated version of the Group Assessment of Logical Thinking (GALT). Logical reasoning ability was found to be correlated with problem-solving performance in that students with high logical reasoning ability tended to do better on the problem-solving tests and vice versa. However, no significant difference was observed in problem-solving improvement between the laboratory-based instruction group and the CAI group for students varying in level of logical reasoning ability. Non-significant trends were noted in results obtained from students of high logical reasoning ability, but these require further study. It was acknowledged that conclusions drawn from the quantitative component of this study were limited, as further modifications of the tests were recommended, as well as the use of a larger sample size. The purpose of the qualitative component of the study was to provide a detailed description of my thesis research process as a Brock University Master of Education student. My research journal notes served as the database for open coding analysis. This analysis revealed six main themes which best described my research experience: research interests, practical considerations, research design, research analysis, development of the problem-solving tests, and scoring scheme development. These important areas of my thesis research experience were recounted in the form of a personal narrative. It was noted that the research process was a form of problem solving in itself, as I made use of several problem-solving strategies to achieve the desired thesis outcomes.
Abstract:
The "Java Intelligent Tutoring System" (JITS) research project focused on designing, constructing, and determining the effectiveness of an Intelligent Tutoring System for beginner Java programming students at the postsecondary level. The participants in this research were students in the School of Applied Computing and Engineering Sciences at Sheridan College. This research involved consistently gathering input from students and instructors using JITS as it developed. The cyclic process involving designing, developing, testing, and refinement was used for the construction of JITS to ensure that it adequately meets the needs of students and instructors. The second objective in this dissertation determined the effectiveness of learning within this environment. The main findings indicate that JITS is a richly interactive ITS that engages students on Java programming problems. JITS is equipped with a sophisticated personalized feedback mechanism that models and supports each student in his/her learning style. The assessment component involved 2 main quantitative experiments to determine the effectiveness of JITS in terms of student performance. In both experiments it was determined that a statistically significant difference was achieved between the control group and the experimental group (i.e., JITS group). The main effect for Test (i.e., pre- and postiest), F( l , 35) == 119.43,p < .001, was qualified by a Test by Group interaction, F( l , 35) == 4.98,p < .05, and a Test by Time interaction, F( l , 35) == 43.82, p < .001. Similar findings were found for the second experiment; Test by Group interaction revealed F( 1 , 92) == 5.36, p < .025. In both experiments the JITS groups outperformed the corresponding control groups at posttest.
Abstract:
This paper presents two studies, both examining the efficacy of a computer programme (Captain's Log) in training attentional skills. The population of interest is the traumatically brain injured. Study #1 is a single-case design that offers recommendations for the second, larger (N = 5) inquiry. Study #2 is an eight-week hierarchical treatment programme with a multi-based testing component. Attention, memory, listening comprehension, locus-of-control, self-esteem, visuo-spatial, and general outcome measures are employed within the testing schedule. Results suggest that any improvement was a result of practice effects. With a few single-case exceptions, the participants showed little improvement on the dependent measures.