960 results for Computer software -- Development


Relevance:

90.00%

Publisher:

Abstract:

Advances in technology have provided new ways of using entertainment and game technology to foster human interaction. Games and play have always been an important part of people's everyday lives. Traditionally, human-computer interaction (HCI) research was seen as a psychological cognitive science focused on human factors, with the engineering sciences supplying the computer science side. Although cognitive science has made significant progress over the past decade, the influence of people's emotions on design is increasingly important, especially when the primary goal is to challenge and entertain users (Norman 2002). Game developers have explored the key issues in game design and identified user experience as the driving force behind the success of games. User-centered design integrates knowledge of users' activity practices, needs, and preferences into the design process. Geocaching is a location-based treasure hunt game created by a community of players. Players use GPS (Global Positioning System) technology to find treasures and create their own geocaches; the game evolves as players invent new caches and apply more imagination to creating them. This doctoral dissertation explores the user experience of geocaching and its applications in tourism and education. According to the Geocaching.com webpage, geocaching is played in about 180 countries and there are more than 10 million registered geocachers worldwide (Geocaching.com, 25.11.2014). This dissertation develops and presents an interaction model called the GameFlow Experience model that can be used to support the design of treasure hunt applications in tourism and education contexts. The GameFlow model presents and clarifies various experiences; it situates these experiences in a real-life context, offers desirable design targets to be utilized in service design, and offers a perspective to consider when evaluating the success of adventure game concepts. User-centered game design has adapted human factors research from mainstream computing science. For many years, the user-centered design approach has been an important research field in software development. Research has focused on user-centered design in software development such as office programs, but the same ideas and theories that reflect the needs of user-centered research are now also being applied to game design (Charles et al. 2005). For several years, we have seen a growing interest in user experience design. Digital games are experience providers, and game developers need tools to better understand the user experience related to the products and services they have created. This thesis presents what user experience is in geocaching and treasure hunt games and how it can be used to develop new treasure hunt concepts. Engineers, designers, and researchers should have a clear understanding of what user experience is, what its parts are, and, most importantly, how we can influence user satisfaction. In addition, we need to understand how users interact with electronic products and people, and how different elements combine to shape their experiences. This doctoral dissertation represents pioneering work on the user experience of geocaching and treasure hunt games in the context of tourism and education. The research also provides a model for game developers who are planning treasure hunt concepts.

Relevance:

90.00%

Publisher:

Abstract:

Software quality has become an important research subject, not only in the Information and Communication Technology spheres, but also in other industries at large where software is applied. Software quality is not a happenstance; it is defined, planned, and built into the software product throughout the Software Development Life Cycle. The research objective of this study is to investigate the roles of human and organizational factors that influence software quality construction. The study employs Straussian grounded theory. The empirical data have been collected from 13 software companies and include 40 interviews. The results of the study suggest that tools, infrastructure and other resources have a positive impact on software quality, but the human factors involved in the software development processes determine the quality of the products developed. Development methods, on the other hand, were found to have little effect on software quality. The research suggests that software quality is an information-intensive process whereby organizational structures, mode of operation, and information flow within the company variably affect software quality. The results also suggest that software development managers influence the productivity of developers and the quality of the software products. Several challenges of software testing that affect software quality are also brought to light. The findings of this research are expected to benefit the academic community and software practitioners by providing insight into the issues pertaining to software quality construction.

Relevance:

90.00%

Publisher:

Abstract:

The goal of this thesis is to define and validate a software engineering approach for the development of a distributed system for the modeling of composite materials, based on an analysis of various existing software development methods. We reviewed the main features of: (1) software engineering methodologies; (2) distributed system characteristics and their effect on software development; and (3) composite materials modeling activities and the requirements for the software development. Using design science as the research methodology, the distributed system for creating models of composite materials was created and evaluated. The empirical experiments we conducted showed good convergence of modeled and real processes. During the study, we paid particular attention to the complexity and importance of the distributed system and to a deep understanding of modern software engineering methods and tools.

Relevance:

90.00%

Publisher:

Abstract:

The traditional business models and the traditionally successful development methods that have been distinctive of the industrial era do not satisfy the needs of modern IT companies. Due to the rapid nature of IT markets, the uncertainty of a new innovation's success and the overwhelming competition with established companies, startups need to make quick decisions and eliminate wasted resources more effectively than ever before. There is a need for an empirical basis on which to build business models, as well as to evaluate presumptions regarding value and profit. Less than ten years ago, the Lean software development principles and practices became widely known in academic circles. Those practices help startup entrepreneurs to validate their learning, test their assumptions and become ever more dynamic and flexible. What is special about today's software startups is that they are increasingly individual. Quantitative research studies regarding the details of Lean startups are available. Broad research with hundreds of companies presented in a few charts is informative, but a detailed study of fewer examples gives insight into the way software entrepreneurs see the Lean startup philosophy and how they describe it in their own words. This thesis focuses on the early phases of Lean software startups, namely Customer Discovery (discovering a valuable solution to a real problem) and Customer Validation (being in a good market with a product which satisfies that market). The thesis first offers a sufficiently compact insight into the Lean software startup concept for a reader who is not previously familiar with the term. The Lean startup philosophy is then put to a real-life test, based on interviews with four Finnish Lean software startup entrepreneurs. The interviews reveal 1) whether the Lean startup philosophy is actually valuable for them, 2) how the theory can be practically implemented in real life, and 3) whether theoretical Lean startup knowledge compensates for a lack of entrepreneurship experience. The reader becomes familiar with the key elements and tools of Lean startups, as well as their mutual connections. The thesis explains why Lean startups waste less time and money than many other startups. The thesis, especially its research sections, aims at providing data and analysis simultaneously.


Relevance:

90.00%

Publisher:

Abstract:

This study investigated the effectiveness of a computer program, PERSONAL CAREER DIRECTIONS (PC DIRECTIONS) (Anderson, Welborn, & Wright, 1983), on career planning and exploration for twenty-four Brock University students (18 women and 6 men) who requested career planning assistance at the Career/Placement Services of the Counselling Centre. A one-group pretest/posttest design was used in the study. Progress in career planning and exploration was measured by the Career Planning (CP) and Career Exploration (CE) scales of the Career Development Inventory (College and University Form) (Super, Thompson, Lindeman, Jordaan, & Myers, 1981). A paired-samples two-tailed t test for Career Development Attitudes (CDA), the combined CP and CE scales, revealed that the posttest scores were significantly higher than the pretest scores, t(23) = 3.74, p < .001. Student progress was also assessed by self-report lists of job titles, which reflected positive changes after students used PC DIRECTIONS. In response to several questions, students' attitudes were more positive than negative toward the program. The implication is that PC DIRECTIONS is an effective component in promoting career planning for university students. Further studies may reveal that different types of students may benefit from different interventions in the career planning process.
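
The abstract reports a paired-samples t test on the combined Career Development Attitudes scores. As a hedged illustration of how such a pretest/posttest comparison is typically computed, the following sketch uses SciPy on made-up scores; the numbers and sample size are invented for illustration and are not the study's data.

```python
# Minimal sketch of a paired-samples (dependent) two-tailed t test on
# pretest/posttest scores, analogous in form to the CDA comparison above.
# The data below are fabricated for illustration only.
import numpy as np
from scipy import stats

pretest = np.array([42, 55, 38, 61, 47, 50, 44, 58, 39, 52, 46, 60])
posttest = np.array([48, 57, 45, 66, 49, 58, 47, 63, 44, 55, 51, 64])

t_stat, p_value = stats.ttest_rel(posttest, pretest)  # two-tailed by default
print(f"t({len(pretest) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```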

Relevance:

90.00%

Publisher:

Abstract:

Modern software development increasingly has to cope with the complexity of huge programs, built and maintained by large teams distributed across multiple sites. In their daily tasks, contributors may have to answer varied questions by drawing information from diverse sources. To improve the overall productivity of development, we propose to integrate into a popular IDE (Eclipse) our new visualization tool (VERSO), which computes, organizes, displays, and lets the user navigate information in a coherent, efficient, and intuitive way, so as to leverage the human visual system when exploring varied data. We propose structuring the information along three axes: (1) the context (quality, version control, bugs, etc.) determines the type of information; (2) the level of granularity (line of code, method, class, package) derives the information at the appropriate level of detail; and (3) the evolution axis extracts the information from the desired version of the software. Each view of the software corresponds to a discrete coordinate along these three axes, and we pay particular attention to coherence by navigating only between adjacent views, in order to reduce the cognitive load of searching when answering users' questions. Two experiments validate the value of our approach, integrated into representative tasks. They support the belief that access to diverse information, presented graphically and coherently, should greatly help contemporary software development.
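
The three-axis navigation scheme (context, granularity, evolution) can be pictured as discrete coordinates with moves allowed only between adjacent views. The sketch below is a hypothetical illustration of that idea; the axis values and the adjacent_views helper are assumptions made for this example, not part of VERSO's actual API.

```python
# Hypothetical sketch of VERSO-style view coordinates: each view of the
# software is a point on three discrete axes, and navigation is restricted
# to views that differ by one step on a single axis, keeping exploration
# coherent.  Axis values and helper names are illustrative only.
from dataclasses import dataclass, replace
from typing import List

CONTEXTS = ["quality", "version-control", "bugs"]
GRANULARITIES = ["line", "method", "class", "package"]
N_VERSIONS = 10  # e.g. ten historical releases under study

@dataclass(frozen=True)
class ViewCoordinate:
    context: int       # index into CONTEXTS
    granularity: int   # index into GRANULARITIES
    version: int       # 0 .. N_VERSIONS - 1

    def adjacent_views(self) -> List["ViewCoordinate"]:
        """Views reachable in one navigation step (one axis, +/- 1)."""
        limits = {"context": len(CONTEXTS),
                  "granularity": len(GRANULARITIES),
                  "version": N_VERSIONS}
        neighbours = []
        for axis, limit in limits.items():
            value = getattr(self, axis)
            for delta in (-1, 1):
                if 0 <= value + delta < limit:
                    neighbours.append(replace(self, **{axis: value + delta}))
        return neighbours

for view in ViewCoordinate(context=0, granularity=2, version=5).adjacent_views():
    print(view)
```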

Relevance:

90.00%

Publisher:

Abstract:

A principal objective of software engineering is to be able to produce complex, large, and reliable software within a reasonable time. Object-oriented (OO) technology has provided good concepts and modeling and programming techniques that have made it possible to develop complex applications both in academia and in industry. This experience has, however, also revealed weaknesses of the object paradigm (for example, code scattering and the traceability problem). Aspect-oriented (AO) programming offers a simple solution to the limitations of OO programming, such as the problem of crosscutting concerns. These crosscutting concerns manifest themselves as the scattering of the same code across several modules of the system, or the tangling of several pieces of code within a single module. This new way of programming makes it possible to implement each concern independently of the others, and then to assemble them according to well-defined rules. AO programming thus promises better productivity, better code reuse, and better adaptability of code to change. Very quickly, this new approach spread over the whole software development process, with the aim of preserving modularity and traceability, two important properties of high-quality software. However, AO technology presents many challenges. Reasoning about, specifying, and verifying AO programs is difficult, all the more so as these programs evolve over time. Consequently, modular reasoning about these programs is required; otherwise they would need to be re-examined in full every time a component is changed or added. It is, however, well known in the literature that modular reasoning about AO programs is difficult, since the applied aspects often change the behavior of their base components [47]. The same difficulties arise in the specification and verification phases of the software development process. To the best of our knowledge, modular specification and modular verification are only weakly covered and constitute a very interesting field of research. Likewise, interactions between aspects are a serious problem in the aspects community. To address these problems, we chose to use category theory and algebraic specification techniques. To provide a solution to the problems cited above, we used the work of Wiels [110] and other contributions such as those described in the book [25]. We assume that the system under development is already decomposed into aspects and classes. The first contribution of our thesis is the extension of algebraic specification techniques to the notion of aspect. Second, we defined a logic, LA, which is used in the body of specifications to describe the behavior of these components. The third contribution consists in defining the weaving operator, which corresponds to the interconnection relation between aspect modules and class modules. The fourth contribution concerns the development of a prevention mechanism that makes it possible to prevent undesirable interactions in aspect-oriented systems.
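
To make the notion of a crosscutting concern concrete, though with a different mechanism than the algebraic and categorical weaving developed in the thesis, the sketch below shows a logging concern that would otherwise be scattered through every business method, factored out and applied uniformly at well-defined join points. Python decorators stand in for a weaving operator here purely as an analogy; this is not the thesis's formalism.

```python
# Illustrative sketch: a crosscutting logging concern separated from the
# base (business) code and applied uniformly, in the spirit of aspect
# weaving.  Decorators play the role of the weaving step in this analogy.
import functools
import logging

logging.basicConfig(level=logging.INFO)

def traced(func):
    """The 'aspect': advice executed around every decorated join point."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("entering %s", func.__name__)
        result = func(*args, **kwargs)
        logging.info("leaving %s", func.__name__)
        return result
    return wrapper

class Account:
    """Base module: contains only the business logic, no logging code."""
    def __init__(self, balance: float) -> None:
        self.balance = balance

    @traced
    def deposit(self, amount: float) -> None:
        self.balance += amount

    @traced
    def withdraw(self, amount: float) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

acct = Account(100.0)
acct.deposit(25.0)
acct.withdraw(40.0)
print(acct.balance)  # 85.0
```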

Relevance:

90.00%

Publisher:

Abstract:

Changes are continuously made to the source code of software systems to take customer needs into account and to correct faults. Continuous change can lead to code and design defects. Design defects are poor solutions to recurring design or implementation problems, generally in object-oriented development. During comprehension and change activities, and because of time-to-market pressure, lack of understanding, and their experience, developers cannot always follow design standards and coding techniques such as design patterns. Consequently, they introduce design defects into their systems. In the literature, several authors have argued that design defects make object-oriented systems harder to understand, more fault-prone, and harder to change than systems without design defects. Yet only a few of these authors have carried out an empirical study of the impact of design defects on comprehension, and none of them has studied the impact of design defects on the effort developers spend correcting faults. In this thesis, we propose three main contributions. The first contribution is an empirical study that provides evidence of the impact of design defects on comprehension and change. We design and conduct two experiments with 59 subjects to evaluate the impact of the composition of two occurrences of Blob or two occurrences of spaghetti code on the performance of developers carrying out comprehension and change tasks. We measure developers' performance using: (1) the NASA workload index for their effort, (2) the time they spent completing their tasks, and (3) their percentages of correct answers. The results of the two experiments showed that two occurrences of Blob or of spaghetti code are a significant obstacle to developers' performance during comprehension and change tasks. The results obtained justify previous research on the specification and detection of design defects. Software development teams should warn developers against high numbers of design defect occurrences and should recommend refactorings at each step of the development process to remove these design defects when possible. In the second contribution, we study the relationship between design defects and faults, investigating the impact of the presence of design defects on the effort required to correct faults. We measure the effort to correct faults using three indicators: (1) the duration of the correction period, (2) the number of fields and methods touched by the fault correction, and (3) the entropy of the fault corrections in the source code. We conduct an empirical study with 12 design defects detected in 54 versions of four systems: ArgoUML, Eclipse, Mylyn, and Rhino. Our results showed that the duration of the correction period is longer for faults involving classes with design defects. Moreover, correcting faults in classes with design defects changes more files, more fields, and more methods. We also observed that, after a fault is corrected, the number of design defect occurrences in the classes involved in the correction decreases. Understanding the impact of design defects on the effort developers spend correcting faults is important in order to help development teams better evaluate and predict the impact of their design decisions, and thus channel their efforts toward improving the quality of their systems. Development teams should monitor and remove design defects from their systems, because such defects are likely to increase the effort required for changes. The third contribution concerns the detection of design defects. During maintenance activities, it is important to have a tool capable of detecting design defects incrementally and iteratively. Such an incremental and iterative detection process could reduce costs, effort, and resources by allowing practitioners to identify and take into account design defect occurrences as they find them during comprehension and change. Researchers have proposed approaches to detect design defect occurrences, but these approaches currently have four limitations: (1) they require in-depth knowledge of the design defects, (2) they have limited precision and recall, (3) they are not iterative and incremental, and (4) they cannot be applied to subsets of systems. To overcome these limitations, we introduce SMURF, a new approach to detect design defects, based on a support vector machine learning technique and taking practitioners' feedback into account. Through an empirical study involving three systems and four design defects, we showed that the precision and recall of SMURF are higher than those of DETEX and BDTEX when detecting design defect occurrences. We also showed that SMURF can be applied in both intra-system and inter-system configurations. Finally, we showed that the precision and recall of SMURF improve when practitioners' feedback is taken into account.
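
SMURF detects design defect (anti-pattern) occurrences with a support vector machine trained on examples and refined with practitioner feedback. The sketch below is only a generic illustration of that kind of metric-based SVM classification using scikit-learn; the feature set, the data, and the feedback note are assumptions for this example, not SMURF's actual implementation.

```python
# Generic sketch of SVM-based code smell detection on per-class metrics
# (fabricated data), loosely analogous in spirit to the SMURF approach.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical features per class: [LOC, number of methods, number of
# attributes, coupling].  Labels: 1 = Blob occurrence, 0 = not a Blob.
X_train = np.array([
    [2500, 80, 40, 30],
    [1800, 60, 35, 25],
    [120, 8, 3, 4],
    [300, 12, 6, 5],
    [90, 5, 2, 2],
    [2100, 70, 30, 28],
])
y_train = np.array([1, 1, 0, 0, 0, 1])

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)

candidate_classes = np.array([[1900, 65, 33, 27], [150, 9, 4, 3]])
print(model.predict(candidate_classes))  # e.g. [1 0]

# Practitioner feedback could be folded in by appending the corrected label
# for a misclassified candidate to the training set and refitting the model.
```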

Relevance:

90.00%

Publisher:

Abstract:

Model-driven engineering (MDE) is a well-established software engineering paradigm that advocates the use of models as first-class artifacts in software development and maintenance activities. Manipulating several models during the software life cycle motivates the use of model transformations (MT) to automate model generation and update operations whenever possible. Writing model transformations nevertheless remains an arduous task that requires both considerable knowledge and effort, which calls into question the benefits brought by MDE. To address this problem, many research efforts have focused on automating MT. Model transformation by example (MTBE) is, in this respect, a promising approach. MTBE aims to learn model transformation programs from a set of pairs of source and target models provided as examples. In this work, we propose a process for learning model transformations by example. It aims to learn complex model transformations by tackling three observed requirements, namely, exploring the context in the source model, checking source attribute values, and deriving complex target attributes. We validate our approach experimentally on seven model transformation cases. Three of the seven learned transformations produce perfect target models. Moreover, precision and recall above 90% are recorded for the target models obtained by the four remaining transformations.
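
As a toy illustration of the transformation-by-example idea, the sketch below infers, from example source/target pairs, which source attribute each target attribute is copied from, and then applies the learned mapping to a new source model. Real MTBE approaches, including the one in this thesis, handle far richer structure (source context exploration, attribute value checks, derived attributes); everything here is fabricated for illustration.

```python
# Toy sketch of "transformation by example": given example (source, target)
# model pairs represented as attribute dictionaries, infer which source
# attribute each target attribute is copied from, then apply the learned
# mapping to a new source model.  Fabricated data; illustrative only.
from typing import Dict, List, Tuple

Example = Tuple[Dict[str, str], Dict[str, str]]

examples: List[Example] = [
    ({"name": "Order", "key": "id"}, {"table": "Order", "pk": "id"}),
    ({"name": "Customer", "key": "cid"}, {"table": "Customer", "pk": "cid"}),
]

def learn_mapping(examples: List[Example]) -> Dict[str, str]:
    """Map each target attribute to a source attribute whose value matches
    it in every example pair."""
    mapping: Dict[str, str] = {}
    src0, tgt0 = examples[0]
    for t_attr in tgt0:
        for s_attr in src0:
            if all(tgt[t_attr] == src[s_attr] for src, tgt in examples):
                mapping[t_attr] = s_attr
                break
    return mapping

def transform(source: Dict[str, str], mapping: Dict[str, str]) -> Dict[str, str]:
    return {t_attr: source[s_attr] for t_attr, s_attr in mapping.items()}

mapping = learn_mapping(examples)
print(mapping)                                    # {'table': 'name', 'pk': 'key'}
print(transform({"name": "Invoice", "key": "no"}, mapping))
```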

Relevance:

90.00%

Publisher:

Abstract:

Code review is an essential process regardless of a project's maturity; it seeks to evaluate the contribution made by the code submitted by developers. In principle, code review improves the quality of code changes (patches) before they are committed to the project's master repository. In practice, carrying out this process does not rule out the possibility that some bugs go unnoticed. In this document, we present an empirical study investigating the code review of a large open-source project. We investigate the relationships between reviewers' inspections and the personal and temporal factors that could affect the quality of such inspections. First, we report a quantitative study in which we use the SZZ algorithm to detect bug-inducing code changes, which we linked with the code review information extracted from the issue tracking system. We found that the reasons why reviewers miss certain bugs were correlated both with their personal characteristics and with the technical properties of the patches under review. Next, we report a qualitative study inviting Mozilla developers to give us their opinion on the attributes of a well-conducted code review. The results of our survey suggest that developers consider technical aspects (patch size, number of chunks and modules) as well as personal characteristics (experience and review queue) to be factors strongly influencing the quality of code reviews.
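
The SZZ algorithm mentioned above identifies candidate bug-inducing changes by taking the lines a bug-fixing commit removes or modifies and tracing them back, via blame, to the commits that last touched them. The sketch below is a simplified, hedged rendition of that idea using plain git commands from Python; real SZZ implementations add filtering (whitespace, comments, refactorings) that is omitted here, and the repository path and commit hash are placeholders.

```python
# Simplified SZZ-style sketch: for a given bug-fixing commit, find the lines
# it deleted/modified and blame them in the parent commit to collect the
# candidate bug-inducing commits.  Placeholder repo path and commit hash;
# real SZZ variants apply additional filtering not shown here.
import re
import subprocess
from collections import Counter

def git(repo: str, *args: str) -> str:
    return subprocess.run(["git", "-C", repo, *args],
                          capture_output=True, text=True, check=True).stdout

def bug_inducing_candidates(repo: str, fix_commit: str) -> Counter:
    candidates: Counter = Counter()
    diff = git(repo, "diff", "-U0", f"{fix_commit}~1", fix_commit)
    current_file = None
    for line in diff.splitlines():
        if line.startswith("--- a/"):
            current_file = line[len("--- a/"):]
        elif line.startswith("@@") and current_file:
            # Hunk header: @@ -old_start,old_count +new_start,new_count @@
            m = re.match(r"@@ -(\d+)(?:,(\d+))? \+", line)
            start = int(m.group(1))
            count = int(m.group(2) or "1")
            if count == 0:
                continue  # pure addition: no old lines to blame
            blame = git(repo, "blame", "-l",
                        "-L", f"{start},{start + count - 1}",
                        f"{fix_commit}~1", "--", current_file)
            for blame_line in blame.splitlines():
                candidates[blame_line.split()[0]] += 1
    return candidates

# Example usage (placeholders):
# print(bug_inducing_candidates("/path/to/repo", "abc1234").most_common(5))
```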

Relevance:

90.00%

Publisher:

Abstract:

The study was motivated by the need to understand the factors that guide software exports and competitiveness, both positively and negatively. The influence of each factor upon export competitiveness needs to be understood in great depth, which is necessary to assess the industry's sustainability. India is emulated as an example of a successful strategy in software development and exports. India's software industry is hailed as one of the most globally competitive software industries in the world. The major objectives are to model the growth pattern of exports and domestic sales of software and services of India, and to find out the factors influencing the growth pattern of the software industry in India. The thesis compares the growth pattern of the software industry of India with that of Ireland and Israel, critically examines the various problems faced by the software industry and exports in India, and models the variables of competitiveness of emerging software-producing nations.

Relevance:

90.00%

Publisher:

Abstract:

Embedded systems are usually designed for a single or a specified set of tasks. This specificity means the system design as well as its hardware/software development can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. This could significantly augment software quality and is still a challenging field. This dissertation contributes an architecture-oriented code validation, error localization and optimization technique assisting the embedded system designer in software debugging, making it more effective at early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae and their compliance is tested individually in all possible execution paths of the application programs.
An incorrect sequence of machine code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and in deciding on an optimum allocation of data to banked memory, resulting in the minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler or assembler, applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly from machine code patterns, which drastically reduces the state space created, contributing to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features in developing embedded systems.
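
To illustrate, in a much simplified form, the kind of redundancy the bank-switching optimization targets, the sketch below scans a straight-line sequence of made-up instructions and flags bank-select instructions that re-select the bank that is already active. The mnemonics and encoding are hypothetical placeholders rather than PIC16F87X machine code, and the dissertation's actual analysis works on the control flow graph with a relation matrix and state transition diagram, not a single linear pass.

```python
# Simplified sketch: flag redundant bank-select instructions in a linear
# instruction sequence by tracking the active memory bank.  Hypothetical
# mnemonics; illustrative of the idea only.
from typing import List, Optional, Tuple

def redundant_bank_switches(program: List[Tuple[str, int]]) -> List[int]:
    """Return indices of bank-select instructions that select the bank
    that is already active."""
    active_bank: Optional[int] = None
    redundant = []
    for idx, (mnemonic, operand) in enumerate(program):
        if mnemonic == "SELECT_BANK":
            if operand == active_bank:
                redundant.append(idx)
            active_bank = operand
    return redundant

program = [
    ("SELECT_BANK", 0),
    ("MOVWF", 0x20),
    ("SELECT_BANK", 0),   # redundant: bank 0 is already active
    ("MOVWF", 0x21),
    ("SELECT_BANK", 1),
    ("MOVWF", 0xA0),
]
print(redundant_bank_switches(program))  # [2]
```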

Relevance:

90.00%

Publisher:

Abstract:

Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and the resulting land use patterns. An essential methodology to study and quantify such interactions is provided by the adoption of land-use models. By the application of land-use models, it is possible to analyze the complex structure of linkages and feedbacks and to determine the relevance of driving forces. Modeling land use and land use changes has a long tradition. In particular on the regional scale, a variety of models for different regions and research questions has been created. Modeling capabilities grow with steady advances in computer technology, which are driven on the one hand by increasing computing power and on the other hand by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are the notably extended capability to integrate models and the strict separation of application and implementation. These features enable efficient development, testing and usage of integrated land-use models. On its system side, SITE provides generic data structures (grid, grid cells, attributes, etc.) and takes over the responsibility for their administration. By means of a scripting language (Python) that has been extended with language features specific to land-use modeling, these data structures can be utilized and manipulated by modeling applications. The scripting language interpreter is embedded in SITE. The integration of sub-models can be achieved via the scripting language or by usage of a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model tests and analysis support of simulation results, have been integrated into the generic framework. During the implementation of SITE, specific emphasis was laid on expandability, maintainability and usability. Along with the modeling framework, a land-use model for the analysis of the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics in the historical period of 1981 to 2002. Analogous to that, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace gas emissions, the DAYCENT agro-ecosystem model was integrated. In this case study, it could be shown that land-use changes in the Indonesian research area could mainly be characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable even though increased agricultural use implied economic improvements and higher farmers' incomes. Due to the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component.
The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal, or at least adequate, solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function typically is a map comparison algorithm capable of comparing a simulation result to a reference map. Several map optimization and map comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure of merit map comparison measure as the objective function. The time period for the calibration ranged from 1981 to 2002. For this period, respective reference land-use maps were compiled. It could be shown that an efficient automated model calibration with SITE is possible. Nevertheless, the selection of the calibration parameters required detailed knowledge about the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and the resulting losses in income from coffee cultivation were analyzed and quantified under the assumption of four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination and the resulting coffee fruit set on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenues per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
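
The figure of merit used as the calibration objective is a standard land-change map comparison measure: the agreement on change divided by the union of observed and simulated change. The sketch below computes it with NumPy on tiny fabricated grids; the grids and category codes are made up, and SITE's own map comparison component may differ in detail.

```python
# Minimal sketch of the "figure of merit" map comparison measure on land-use
# grids: agreement on change divided by the union of observed and simulated
# change.  Grids and category codes are fabricated for illustration.
import numpy as np

def figure_of_merit(initial: np.ndarray,
                    reference: np.ndarray,
                    simulated: np.ndarray) -> float:
    obs_change = reference != initial
    sim_change = simulated != initial
    hits = np.sum(obs_change & sim_change & (reference == simulated))
    misses = np.sum(obs_change & ~sim_change)
    wrong_change = np.sum(obs_change & sim_change & (reference != simulated))
    false_alarms = np.sum(~obs_change & sim_change)
    denom = hits + misses + wrong_change + false_alarms
    return float(hits / denom) if denom else 1.0

initial = np.array([[1, 1, 2], [2, 2, 3], [3, 3, 3]])     # e.g. 1981 map
reference = np.array([[1, 2, 2], [2, 1, 3], [3, 3, 1]])   # observed 2002 map
simulated = np.array([[1, 2, 2], [2, 2, 3], [3, 1, 1]])   # model output
print(f"figure of merit = {figure_of_merit(initial, reference, simulated):.2f}")
```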

Relevance:

90.00%

Publisher:

Abstract:

Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. During recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process where the solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process selects, step by step, the most promising solution candidates and modifies and combines them with mutation and crossover operators. This way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches have been developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and, in most cases, was superior to the other representations.
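
As a hedged, highly simplified illustration of the evolutionary loop described above (a population of candidate programs, fitness derived from simulation, selection, crossover, mutation), the sketch below evolves token sequences against a stub fitness function. It stands in for, and is far simpler than, the RBGP/LGP representations and the randomized network simulations used in the thesis; the tokens, target, and operators are invented for this example.

```python
# Highly simplified sketch of the evolutionary synthesis loop: candidate
# "programs" are token sequences, fitness would normally come from repeated
# network simulations (stubbed out here), and the population is refined with
# tournament selection, one-point crossover, and point mutation.
import random

TOKENS = ["send", "recv", "cmp", "store", "fwd", "noop"]
TARGET = ["recv", "cmp", "store", "fwd"]     # stand-in for desired behaviour

def fitness(program):
    # Stub objective: in the thesis this score would come from rating the
    # global behaviour of the distributed system in randomized simulations.
    return sum(a == b for a, b in zip(program, TARGET))

def tournament(population, k=3):
    return max(random.sample(population, k), key=fitness)

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(program, rate=0.2):
    return [random.choice(TOKENS) if random.random() < rate else t
            for t in program]

random.seed(0)
population = [[random.choice(TOKENS) for _ in TARGET] for _ in range(30)]
for generation in range(50):
    if max(map(fitness, population)) == len(TARGET):
        break
    population = [mutate(crossover(tournament(population),
                                   tournament(population)))
                  for _ in range(len(population))]

best = max(population, key=fitness)
print(f"generation {generation}: {best} (fitness {fitness(best)})")
```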