876 results for Computer software -- Development
Abstract:
Previously, expected satiety (ES) has been measured using software and two-dimensional pictures presented on a computer screen. In this context, ES is an excellent predictor of self-selected portions when quantified using similar images and similar software. In the present study we sought to establish the validity of ES as a predictor of behaviours associated with real foods. Participants (N = 30) used computer software to assess their ES and ideal portion size of three familiar foods. A real bowl of one food (pasta and sauce) was then presented and participants self-selected an ideal portion size. They then consumed the portion ad libitum. Additional measures of appetite, expected and actual liking, novelty, and reward were also taken. Importantly, our screen-based measures of expected satiety and ideal portion size were both significantly related to intake (p < .05). By contrast, measures of liking were relatively poor predictors (p > .05). In addition, consistent with previous studies, the majority (90%) of participants engaged in plate cleaning. Of these, 29.6% consumed more when prompted by the experimenter. Together, these findings further validate the use of screen-based measures to explore determinants of portion-size selection and energy intake in humans.
Abstract:
Stigmergy is a biological term originally used when discussing insect or swarm behaviour; it describes a model of environment-based communication that separates artefacts from agents. The phenomenon is demonstrated by ants, whose food foraging is supported by pheromone trails, and similarly by termites in their nest-building process. What is interesting about this mechanism is that highly organized societies form without an apparent central management function. We see design features in Web sites that mimic stigmergic mechanisms as part of the user interface, and we have created generalizations of these patterns. Software development and Web site development techniques have evolved significantly over the past 20 years. Recent progress in this area has produced languages for modeling web applications that accommodate the nuances specific to these developments. These modeling languages provide a suitable framework for building reusable components that encapsulate our design patterns of stigmergy. We hypothesize that incorporating stigmergy as a separate feature of a site’s primary function will ultimately lead to enhanced user coordination.
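As a rough illustration of the mechanism the abstract describes (an assumption for this listing, not the paper's actual design patterns), the sketch below simulates environment-mediated coordination: agents choose among artefacts in proportion to a "pheromone" trace held in the environment, each visit reinforces the trace, and the trace slowly evaporates, so a preferred artefact emerges without central control.

```python
import random

ARTIFACTS = ["page_a", "page_b", "page_c"]
pheromone = {a: 1.0 for a in ARTIFACTS}  # state lives in the environment, not the agents

def choose(trails):
    """Pick an artefact, biased toward stronger pheromone trails."""
    total = sum(trails.values())
    r = random.uniform(0, total)
    for artefact, level in trails.items():
        r -= level
        if r <= 0:
            return artefact
    return artefact  # floating-point fallback

for _ in range(1000):
    visited = choose(pheromone)
    pheromone[visited] += 1.0  # deposit: each visit reinforces the trail
    pheromone = {a: lvl * 0.99 for a, lvl in pheromone.items()}  # evaporation

print(pheromone)  # one artefact typically dominates: emergent coordination
```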
Abstract:
This paper explores the renewed interest in the creative economy as a possible development pathway for developing nations. Noting the extent to which discussions of creative industries frequently merge into the concept of a creative economy, the paper considers the institutional and public policy settings required to capture economic value associated with creative practice. It is also argued that knowledge economy and creative economy discourses are increasingly merging, particularly in their focus upon design, innovation, software development and convergent media. The paper draws attention to ambiguities in policy discourse, particularly in relation to copyright and intellectual property.
Abstract:
Deterministic computer simulations of physical experiments are now common techniques in science and engineering. Often, physical experiments are too time-consuming, expensive, or impossible to conduct. In such cases complex computer models, or codes, are run in place of physical experiments, and the study of these runs, known as computer experiments, is used to investigate many scientific phenomena of this nature. A computer experiment consists of a number of runs of the computer code with different input choices. The Design and Analysis of Computer Experiments is a rapidly growing area of statistical experimental design. This thesis investigates some practical issues in the design and analysis of computer experiments and attempts to answer some of the questions faced by experimenters using computer experiments. In particular, the questions of how many runs a computer experiment requires and how an existing design should be augmented are studied, and attention is given to the case where the response is a function over time.
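As context for readers unfamiliar with space-filling designs, the sketch below builds a small Latin hypercube design with SciPy's qmc module and naively augments it with further runs; the thesis's own designs and augmentation criteria are not reproduced here, and the input bounds are invented.

```python
from scipy.stats import qmc  # requires scipy >= 1.7

# Ten runs of a three-input computer code, space-filling in [0, 1)^3.
sampler = qmc.LatinHypercube(d=3, seed=0)
design = sampler.random(n=10)

# Scale the unit-cube design to the simulator's (assumed) input ranges.
lower, upper = [0.0, 10.0, -1.0], [1.0, 50.0, 1.0]
runs = qmc.scale(design, lower, upper)

# "Augmenting" here just means proposing extra runs; a real study would
# use a sequential criterion rather than a fresh hypercube.
more_runs = qmc.scale(sampler.random(n=5), lower, upper)
print(runs.shape, more_runs.shape)  # (10, 3) (5, 3)
```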
Abstract:
User interfaces for source code editing are a crucial component of any software development environment, and in many editors visual annotations (overlaid on the textual source code) are used to provide important contextual information to the programmer. This paper focuses on the real-time programming activity of ‘cyberphysical’ programming, and considers the types of visual annotation that may be helpful in this programming context.
Abstract:
Many games now on the market come with a Software Development Kit, or SDK, which allows players to construct their own worlds and mod(ify) the original. One or two of these mods have achieved notoriety in the press, cited as evidence of malicious intent on the part of the modders, who often draw on their own lived experience as a basis for new virtual playgrounds. But most player-constructed games are a source of delight and pleasure for the builder and for the community of players. Creating a game is the act of creating a world, of making a place.
Abstract:
Reconfigurable computing devices can increase the performance of compute-intensive algorithms by implementing application-specific co-processor architectures. The power cost for this performance gain is often an order of magnitude less than that of modern CPUs and GPUs. Exploiting the potential of reconfigurable devices such as Field-Programmable Gate Arrays (FPGAs) is typically a complex and tedious hardware engineering task. Recently the major FPGA vendors (Altera and Xilinx) have released their own high-level design tools, which have great potential for rapid development of FPGA-based custom accelerators. In this paper, we evaluate Altera’s OpenCL Software Development Kit and Xilinx’s Vivado High Level Synthesis tool. These tools are compared for their performance, logic utilisation, and ease of development for the test case of a tri-diagonal linear system solver.
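For reference, the tri-diagonal test case can be solved with the Thomas algorithm; the sketch below is a plain-Python illustration of that algorithm only, not the OpenCL or Vivado HLS kernels evaluated in the paper.

```python
def thomas(a, b, c, d):
    """Solve a tri-diagonal system Ax = d.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):  # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8]  ->  x = [1.0, 2.0, 3.0]
print(thomas([0, 1, 1], [2, 2, 2], [1, 1, 0], [4, 8, 8]))
```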
Abstract:
In the current market, extensive software development is taking place and the software industry is thriving. Major software giants have cited source code theft as a major threat to revenues. By inserting an identity-establishing watermark in the source code, a company can prove its ownership of the source code. In this paper, we propose a watermarking scheme for C/C++ source code that exploits the language's restrictions. If a function calls another function, the latter needs to be defined in the code before the former, unless one uses function pre-declarations. We embed the watermark in the code by imposing an ordering on the mutually independent functions, introducing bogus dependencies to enforce it. Removing a dependency to erase the watermark requires extensive manual intervention by the attacker, making the attack infeasible. The scheme is also secure against subtractive and additive attacks. Using our watermarking scheme, an n-bit watermark can be embedded in a program having n independent functions. The scheme was implemented on several sample programs and the resulting performance changes were analyzed.
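The sketch below is a simplified, hypothetical rendering of the idea (the paper's actual encoding is not reproduced here): each watermark bit decides whether a disjoint pair of mutually independent functions keeps or swaps its canonical order, and a bogus call from the later function to the earlier one would then pin that order in the C/C++ source.

```python
def embed(canonical, bits):
    """canonical: mutually independent function names in canonical
    (e.g. alphabetical) order; bits: one watermark bit per disjoint
    adjacent pair. Returns the order to emit the definitions in."""
    order = list(canonical)
    for i, bit in enumerate(bits):
        j = 2 * i
        if bit and j + 1 < len(order):
            order[j], order[j + 1] = order[j + 1], order[j]  # pinned via a bogus call
    return order

def extract(order, canonical):
    """Recover the bits by comparing each pair against canonical order."""
    return [int(canonical.index(order[j]) > canonical.index(order[j + 1]))
            for j in range(0, len(order) - 1, 2)]

canonical = ["f_auth", "f_init", "f_log", "f_net"]  # hypothetical names
emitted = embed(canonical, [1, 0])
print(emitted)                      # ['f_init', 'f_auth', 'f_log', 'f_net']
print(extract(emitted, canonical))  # [1, 0]
```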
Abstract:
Social media tools are starting to become mainstream, and those working in the software development industry are often ahead of the game in using current technological innovations to improve their work. With the advent of outsourcing and distributed teams, the software industry is ideally placed to take advantage of social media technologies, tools and environments. This paper looks at how social media is being used by early adopters within the software development industry. Current tools and trends in social media tool use are described and critiqued: what works and what doesn't. We use industrial case studies from platform development, commercial application development and government contexts, which provide a clear picture of the emergent state of the art. These real-world experiences are then used to show how working collaboratively in geographically dispersed teams, enabled by social media, can enhance and improve the development experience.
Abstract:
Customer Relationship Management (CRM) packaged software has become a key contributor to attempts at aligning business and IT strategies in recent years. Throughout the 1990s there was, in many organisations' strategies, a shift from the need to manage transactions toward relationship management. Where Enterprise Resource Planning packages dominated the transaction-management era, CRM packages lead with regard to relationships. At present, balanced views of CRM packages are scantly presented; most accounts rely instead on vendor rhetoric. This paper uses case study research to analyse some of the issues associated with CRM packages. These issues include the limitations of CRM packages, the need for a relationship orientation and the problems of a dominant management perspective of CRM. It is suggested that these issues could be more readily accommodated by organisational detachment from beliefs in IT as utopia, consideration of prior IS theory and practice, and a more informed approach to CRM package selection.
Abstract:
The majority of sugar mill locomotives are equipped with GPS devices, from which locomotive position data are stored. Locomotive run information (e.g. start times, run destinations and activities) is electronically stored in software called TOTools. The latest software development allows TOTools to interpret historical GPS information by combining this data with run information recorded in TOTools and geographic information from a GIS application called MapInfo. As a result, TOTools is capable of summarising run activity details, such as run start and finish times and shunt activities, with great accuracy. This paper presents 15 reports developed to summarise run activities and speed information. The reports will be of use pre-season to assist in developing the next year's schedule and in determining priorities for investment in the track infrastructure. They will also be of benefit during the season for closely monitoring locomotive run performance against the existing schedule.
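A minimal sketch of the kind of summarisation described (TOTools' actual implementation is not published here, and the fixes are invented): GPS fixes tagged with a run identifier are grouped, and each run's start and finish times fall out directly.

```python
from datetime import datetime

# Hypothetical GPS fixes already matched to runs.
fixes = [
    {"run": "R1", "time": datetime(2024, 7, 1, 5, 2)},
    {"run": "R1", "time": datetime(2024, 7, 1, 7, 48)},
    {"run": "R2", "time": datetime(2024, 7, 1, 8, 15)},
    {"run": "R2", "time": datetime(2024, 7, 1, 11, 3)},
]

runs = {}
for fix in fixes:
    runs.setdefault(fix["run"], []).append(fix["time"])

for run, times in sorted(runs.items()):
    start, finish = min(times), max(times)
    print(f"{run}: start {start}, finish {finish}, duration {finish - start}")
```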
Abstract:
The control of environmental factors in open-office environments, such as lighting and temperature, is becoming increasingly automated. This development means that office inhabitants are losing the ability to manually adjust environmental conditions according to their needs. In this paper we describe the design, use and evaluation of MiniOrb, a system that employs ambient and tangible interaction mechanisms to allow inhabitants of office environments to maintain awareness of environmental factors, report on their own subjectively perceived office comfort levels, and see how these compare to group average preferences. The system is complemented by a mobile application, which enables users to see and set the same sensor values and preferences using a screen-based interface. We give an account of the system’s design and outline the results of an in-situ trial and user study. Our results show that devices combining ambient and tangible interaction approaches are well suited to the task of recording indoor climate preferences and afford a rich set of possible interactions that can complement those enabled by more conventional screen-based interfaces.
Abstract:
The objective of this study was to find factors that could predict educational dropout. Dropout risk was assessed against pupils’ cognitive competence, success in school, and personal beliefs regarding self and parents, while taking into account the pupil’s background and gender. Based on earlier research, an assumption was made that a pupil’s gender, success in school, and parents’ education would be related to dropping out. This study is part of a project funded by the Academy of Finland and led by Professor Jarkko Hautamäki. The project aims to use a longitudinal study to assess the development of pupils’ learning-to-learn skills. The target group of this study consisted of all Finnish-speaking ninth graders of a municipality in Southern Finland: 1534 pupils in total, of whom 809 were girls and 725 were boys. The assessment of learning-to-learn skills was performed on the ninth graders in spring 2004. The “Opiopi” test material, consisting of cognitive tests and questions measuring beliefs, was used in the assessment. At the same time, pupils’ background information was collected together with their self-reported average grade across all school subjects. During spring 2009, the pupils’ joint application data from 2004 and 2005 were collected from the Finnish joint application registers. The data were analyzed using quantitative methods with SPSS for Windows. Analysis was conducted through statistical indices, differences in grade averages, a multilevel model, multivariate analysis of variance, and logistic regression analysis. Based on earlier research, dropouts were defined as pupils who had not applied to, or had not been admitted to, upper secondary education under the joint application system. Using this definition, 157 pupils in the target group were classified as dropouts (10% of the target group): 88 girls and 69 boys. The study showed that the school itself does not affect dropout risk, but the school class explains 7.5% of the variation in dropout risk. Among girls, dropping out was predicted by a poor average grade, a lack of beliefs supporting learning, and a primary choice in the joint application system that was unrealistic compared to one’s success in school. Among boys, a poor average grade, unrealistic choices in the joint application system, and the belief that one’s parents place little value on education were related to dropout risk. Keywords: educational exclusion, school dropout, success in school, comprehensive school, learning to learn
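To make the final analysis step concrete, the sketch below fits a logistic regression of the kind the study reports, but on clearly synthetic data with illustrative variable names; the study itself used SPSS, and none of these numbers come from it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
grade_avg = rng.normal(7.5, 1.0, n)   # synthetic school grade average
beliefs = rng.normal(0.0, 1.0, n)     # synthetic learning-supporting beliefs

# Synthetic outcome: lower grades and weaker beliefs raise dropout odds.
logit = -1.0 - 1.2 * (grade_avg - 7.5) - 0.6 * beliefs
dropout = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([grade_avg, beliefs])
model = LogisticRegression().fit(X, dropout)
print(model.coef_, model.intercept_)  # recovered (synthetic) effect estimates
```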
Abstract:
Models are abstractions of reality with predetermined limits (often not consciously thought through) on which problem domains they can be used to explore. These limits are determined by the range of observed data used to construct and validate the model. However, it is important to remember that operating the model beyond these limits, which is one of the reasons for building the model in the first place, potentially brings unwanted behaviour and thus reduces the usefulness of the model. Our experience with the Agricultural Production Systems Simulator (APSIM), a farming systems model, has led us to adapt techniques from the disciplines of modelling and software development to create a model development process. This process is simple, easy to follow, and brings a much higher level of stability to the development effort, which in turn delivers a much more useful model. A major part of the process relies on having a range of detailed model tests (unit, simulation, sensibility, validation) that exercise a model at various levels (sub-model, model and simulation). To underline the usefulness of testing, we examine several case studies where simulated output can be compared with simple relationships. For example, output is compared with crop water-use-efficiency relationships gleaned from the literature to check that the model reproduces the expected function; another case study attempts to reproduce generalised hydrological relationships found in the literature. The paper then describes a simple model development process (using version control, automated testing and differencing tools) that will enhance the reliability and usefulness of a model.
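As an illustration of the sensibility tests described, the sketch below checks a simulated wheat yield against the classic French and Schultz water-use-efficiency ceiling (about 20 kg grain/ha per mm of water use beyond roughly 110 mm of soil evaporation); the harness and numbers are assumptions, not APSIM's actual test suite.

```python
def within_wue_envelope(yield_kg_ha, water_use_mm,
                        max_wue=20.0, soil_evap_mm=110.0):
    """True if simulated yield respects the literature WUE ceiling."""
    usable_water = max(water_use_mm - soil_evap_mm, 1e-9)
    return yield_kg_ha / usable_water <= max_wue

def test_simulated_yield_is_sensible():
    # One hypothetical simulation output: 3.2 t/ha from 300 mm water use.
    assert within_wue_envelope(yield_kg_ha=3200.0, water_use_mm=300.0)

test_simulated_yield_is_sensible()
print("sensibility test passed")
```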
Abstract:
The project renewed the Breedcow and Dynama software, making it compatible with modern computer operating systems and platforms. Enhancements were also made to the linkages between the individual programs and to their operation. The suite of programs is a critical component of the skill set required to make soundly based plans and production choices in the north Australian beef industry.