875 results for Web-Based Application


Relevance:

90.00%

Publisher:

Abstract:

Agile methods have become increasingly popular in the field of software engineering. While agile methods are now generally considered applicable to software projects of many different kinds, they have not been widely adopted in embedded systems development. This is partly due to the natural constraints present in embedded systems development (e.g. hardware-software interdependencies) that challenge the utilization of agile values, principles and practices. Research on agile embedded systems development has been very limited, and this thesis tackles an even less researched theme related to it: the suitability of different project management tools in agile embedded systems development. The thesis covers the basic aspects of many different agile tool types, from physical tools, such as task boards and cards, to web-based agile tools that offer all-round solutions for application lifecycle management. Between these two extremes there is also a wide range of lighter agile tools that focus on the core agile practices, such as backlog management. Other, non-agile tools, such as bug trackers, can also be used to support agile development, for instance with plug-ins. To investigate the special tool requirements in agile embedded development, the author observed tool-related issues and solutions in a case study involving three different companies operating in the field of embedded systems development. All three companies were in a distinct situation at the beginning of the case, and thus the tool solutions varied from a backlog spreadsheet built from scratch to plug-in development for an existing agile software tool. Detailed reports are presented of all three tool cases. Based on the knowledge gathered on agile tools and the case study experiences, it is concluded that there are tool-related issues in the pilot phase, such as backlog management and user motivation. These can be overcome in various ways depending on the type of team in question.
Finally, five principles are formulated to give guidelines for tool selection and usage in agile embedded systems development.
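The backlog spreadsheet mentioned above can be illustrated with a minimal sketch. The item fields and the priority ordering below are hypothetical, not taken from the thesis; the sketch only shows the core backlog-management practice that the tools support:

```python
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    title: str
    priority: int          # lower number = higher priority
    estimate_hours: float
    status: str = "todo"   # "todo", "in progress", "done"

@dataclass
class Backlog:
    items: list = field(default_factory=list)

    def add(self, item):
        self.items.append(item)
        # keep the backlog ordered by priority, as on a physical task board
        self.items.sort(key=lambda i: i.priority)

    def next_item(self):
        # the next task to pull is the highest-priority item not started yet
        for item in self.items:
            if item.status == "todo":
                return item
        return None

backlog = Backlog()
backlog.add(BacklogItem("write user guide", priority=2, estimate_hours=3.0))
backlog.add(BacklogItem("fix build script", priority=1, estimate_hours=1.5))
print(backlog.next_item().title)  # → fix build script
```

Even such a minimal structure captures the two operations the case teams struggled with in the pilot phase: keeping the backlog ordered and knowing what to pull next.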

Relevance:

90.00%

Publisher:

Abstract:

The main objective of this work was to study the possibilities of implementing laser cutting in a paper making machine. Laser cutting technology was considered as a replacement for the conventional methods used in paper making machines for longitudinal cutting, such as edge trimming at different stages of the paper making process and tambour roll slitting. Laser cutting of paper was first tested in the 1970s. Since then, laser cutting and processing have been applied to paper materials in industry with varying levels of success. Laser cutting can be employed for longitudinal cutting of the paper web in the machine direction. The most common conventional cutting methods in paper making machines are water jet cutting and rotating slitting blades. Cutting with a CO2 laser fulfils the basic requirements for cutting quality, applicability to the material and cutting speed in all locations where longitudinal cutting is needed. The literature review describes the advantages, disadvantages and challenges of laser technology applied to cutting paper material, with particular attention to the cutting of a moving paper web. Based on the studied laser cutting capabilities and the problems of the conventional cutting technologies, a preliminary selection of the most promising application area was carried out. Laser cutting (trimming) of the paper web edges in the wet end was estimated to be the most promising area for implementation. This assumption was based on the rate of occurrence of web breaks. It was found that up to 64% of all web breaks occur in the wet end, particularly at the so-called open draws, where the paper web is transferred unsupported by wire or felt. The distribution of web breaks in the machine cross direction revealed that defects of the paper web edge were the main cause of tearing initiation and the consequent web break.
It was assumed that laser cutting could improve the tensile strength of the cut edge owing to its high cutting quality and the sealing effect on the edge after cutting. Studies of laser ablation of cellulose support this claim. The linear energy needed for cutting was calculated with regard to the paper web properties at the intended laser cutting location. The calculated linear cutting energy was verified with a series of laser cutting experiments. The laser energy needed for cutting in practice deviated from the calculated values. This can be explained by differences in heat transfer via radiation during laser cutting and by the different absorption characteristics of dry and moist paper material. Laser cut samples (both dry and moist, with a dry matter content of about 25-40%) were tested for strength properties. It was shown that the tensile strength and strain at break of laser cut samples are similar to the corresponding values of samples cut by other means. The chosen method, however, did not address the tensile strength of the laser cut edge in particular; thus, the assumption of improved strength properties with laser cutting was not fully proved. The effect of laser cutting on possible pollution of mill broke (recycling of the trimmed edge) was also examined. Laser cut samples (both dry and moist) were tested for the content of dirt particles. The tests revealed that accumulation of dust particles on the surface of moist samples can take place. This has to be taken into account to prevent contamination of the pulp suspension when trim waste is recycled. The material loss due to evaporation during laser cutting and the amount of solid residues after cutting were evaluated. Edge trimming with a laser would result in 0.25 kg/h of solid residues and 2.5 kg/h of material lost due to evaporation. Schemes for laser cutting implementation and the required laser equipment are discussed. In general, a laser cutting system would require two laser sources (one for each cutting zone), a set of beam transfer and focusing optics, and cutting heads.
To increase the reliability of the system, it was suggested that each laser source have double capacity; this would allow cutting to continue with one laser source working at full capacity for both cutting zones. Laser technology is currently at the required level and does not need additional development. Moreover, the potential for speed increases is high owing to the availability of high power laser sources, which can support the trend of increasing paper making machine speeds. The laser cutting system would require a special roll to support cutting. A scheme for such a roll is proposed, as well as its integration into the paper making machine. Laser cutting can be done at the location of the central roll in the press section, before the so-called open draw where many web breaks occur, where it has the potential to improve the runnability of the paper making machine. The economic performance of laser cutting was evaluated by comparing a laser cutting system with water jet cutting operating under the same conditions. It was found that laser cutting would still be about twice as expensive as water jet cutting, mainly owing to the high investment cost of laser equipment and the poor energy efficiency of CO2 lasers. Another factor is that laser cutting causes material loss due to evaporation, whereas water jet cutting causes almost no material loss. Despite the difficulties of implementing laser cutting in a paper making machine, its implementation can be beneficial. Crucial to this is the possibility of improving the strength properties of the cut edge and consequently reducing the number of web breaks. The capacity of laser cutting to maintain cutting speeds exceeding the current speeds of paper making machines is another argument for considering laser cutting technology in the design of new high speed paper making machines.
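The linear cutting energy mentioned above is, in its simplest form, the laser power delivered per metre of cut, i.e. power divided by cutting speed. The sketch below uses this textbook relation with purely illustrative numbers; neither the power nor the speed value is taken from the thesis:

```python
def linear_cutting_energy(laser_power_w: float, cutting_speed_m_s: float) -> float:
    """Linear energy input to the cut in J/m: laser power divided by
    the speed at which the web moves past the beam."""
    return laser_power_w / cutting_speed_m_s

# Illustrative values only (not the thesis's figures):
# a 400 W CO2 laser trimming a web moving at 20 m/s
energy = linear_cutting_energy(400.0, 20.0)
print(energy)  # → 20.0 (J/m)
```

In practice, as the abstract notes, the energy actually required deviates from such calculated values because of radiative heat transfer and the differing absorption of dry and moist paper.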

Relevance:

90.00%

Publisher:

Abstract:

Consumers constantly use content-based digital services to obtain more information about their health, and at the same time they evaluate the quality of the services they use. In order to design and offer the best possible digital services to consumers, companies should identify and analyse consumers' experiences with, and purposes for using, their services. The purpose of this study is to describe consumers' perceptions of Masennusinfo.fi, a content-based digital service that provides its users with information about depression. The aim is to find out how consumers perceive the quality of this service, offered by a pharmaceutical company, and for what purposes it is used. The purpose of the study can be divided into three sub-problems: For what purposes do consumers use content-based digital services? How do consumers perceive the quality of these services? How do the purpose of use and the perceived quality differ across user groups? The study is carried out as a web-based survey. The measures are constructed on the basis of a theoretical framework grounded in previous research. The empirical part of the study is conducted with a pop-up survey placed on the site under study, giving all users of the service the opportunity to respond. The results show that the service is used mostly by women, either relatively young (16-29 years) or past middle age (50-65 years), who are employed or students and highly educated. Masennusinfo.fi is seen as a high-quality service in all user groups, in terms of both its usability and its content. The purposes of use are also fairly similar across user groups: the service is typically used to search for information in the early stages of the illness. Based on the findings, it is proposed that the service be modified to correspond even better to its purposes of use and its typical user profile.
Since a few small differences between the user groups' perceptions were observed, the service provider must decide which group's preferences to follow.

Relevance:

90.00%

Publisher:

Abstract:

The production of new systems of all kinds is expanding dramatically and new ideas are continually being introduced; the logic behind this is the growth of the internet and of web-based systems. Before a system is produced and distributed to customers, various aspects should be studied that multiply the profit of the system. The process of productizing a new system, from raw idea to delivery to the final user, has remained ambiguous. This thesis presents in detail how to systematize a service in a way that benefits both the customer and the provider, together with efforts to establish trust, diminish the customer's risk and increase service productivity. The characteristics of servitization and productization, as two sides of the same coin, are interpreted. In addition to the issues above, the state of the art, service-oriented architecture (SOA) and New Service Development (NSD) are covered in this report, addressing the problem of the gradual decline in the value of companies.

Relevance:

90.00%

Publisher:

Abstract:

In this work, we explore the feasibility of endowing machines with the capacity to predict, in a human-computer interaction (HCI) context, a user's emotion and its intensity, instantaneously and for a wide variety of situations. More specifically, an application called the emotional machine was developed, capable of "understanding" the meaning of a situation based on the Ortony, Clore and Collins (OCC) theoretical model of emotion appraisal. This machine can also predict users' emotional reactions by combining improved versions of k-nearest neighbours and neural networks. An empirical procedure was carried out for data acquisition. These data provided consistent knowledge for the chosen learning algorithms and made it possible to test the machine's performance. The results obtained show that the proposed emotional machine is capable of producing good predictions. Such an achievement could encourage its future use in domains exploiting automatic emotion recognition.
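The emotional machine combines improved versions of k-nearest neighbours and neural networks; the improvements themselves are not described in the abstract. A plain k-nearest-neighbours classifier over hypothetical OCC-style appraisal features can nevertheless sketch the basic prediction step:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Plain k-nearest neighbours: label the query situation with the
    majority emotion among the k closest training situations."""
    nearest = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical appraisal vectors (e.g. OCC-style desirability and
# praiseworthiness scores in [0, 1]); not real experimental data.
train = [((0.9, 0.8), "joy"), ((0.8, 0.7), "joy"),
         ((0.1, 0.2), "distress"), ((0.2, 0.1), "distress")]
print(knn_predict(train, (0.85, 0.75)))  # → joy
```

The thesis's actual system uses improved variants of this algorithm together with neural networks; the sketch only shows the underlying nearest-neighbour vote.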

Relevance:

90.00%

Publisher:

Abstract:

A system described by a large number of strongly interdependent elements is complex and difficult to understand and maintain. Thus, an object-oriented application is often complex, since it contains hundreds of classes with many more or less explicit dependencies. The same application, built with the component paradigm, would contain a smaller number of elements, loosely coupled and with clearly defined interdependencies. This is because the component paradigm provides a good high-level representation of complex systems, and it can therefore be used as a "projection space" for object-oriented systems. Such a projection can facilitate the comprehension of a system, a necessary prerequisite for any maintenance and/or evolution activity. Moreover, this representation can be used as a model for a complete restructuring of an operational object-oriented application into an equivalent, equally operational, component-based application; the new application then benefits from all the good properties associated with the component paradigm. The objective of my thesis is to propose a semi-automatic method for identifying a component-based architecture within an object-oriented application. This architecture must not only help in understanding the original application but also simplify its projection onto a concrete component model. The identification of a component-based architecture is carried out in three main steps: i) obtaining the data needed for the identification process, namely the dependencies between classes, which are obtained through a dynamic analysis of the target application; ii) identifying the components, for which three methods were explored:
the first uses a Galois lattice, the second two metaheuristics, and the third a multi-objective metaheuristic; iii) identifying the component-based architecture of the target application, by identifying the required and provided interfaces of each component. To validate this identification process, as well as the various choices made during its development, I carried out several case studies. Finally, I show the feasibility of projecting the identified component-based architecture onto a concrete component model.
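As a rough illustration of step ii), the toy sketch below clusters classes into candidate components from weighted class dependencies. It uses a simple union-find over "strong" dependencies, a deliberate simplification of the Galois-lattice and metaheuristic methods the thesis actually explores; the class names and the threshold are hypothetical:

```python
from collections import defaultdict

def identify_components(dependencies, threshold=2):
    """Toy stand-in for component identification: classes linked by at
    least `threshold` observed interactions are merged into one candidate
    component (union-find over the strong-dependency graph)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for (a, b), weight in dependencies.items():
        find(a); find(b)               # register both classes
        if weight >= threshold:        # only strong dependencies merge
            union(a, b)

    groups = defaultdict(set)
    for cls in parent:
        groups[find(cls)].add(cls)
    return sorted(groups.values(), key=lambda g: sorted(g))

# Hypothetical dynamic-analysis output: (caller, callee) -> call count
deps = {("Order", "Invoice"): 5, ("Invoice", "Tax"): 3, ("Order", "Logger"): 1}
components = identify_components(deps)
```

Here `Order`, `Invoice` and `Tax` end up in one candidate component while the weakly coupled `Logger` stays separate; the thesis's methods additionally optimize interface quality and other objectives.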

Relevance:

90.00%

Publisher:

Abstract:

With the rise in popularity of the Internet and social media, more and more organizations, notably social and public ones, are integrating web platforms into their traditional activities. The question of the Internet nevertheless remains little studied with regard to social advertising. This thesis therefore addresses the question of the Web in relation to social campaigns aimed at young Quebecers aged 18 to 25, a population particularly receptive to new technologies. More precisely, in this study we analysed three websites attached to social campaigns (La vitesse, ça coûte cher by the SAAQ, Les ITSS se propagent by the MSSS, and 50 000 adeptes, 5 000 toutous by the Fondation CHU Sainte-Justine) with the objective of determining their strengths and weaknesses and then proposing avenues for their optimization. Through a critical content analysis followed by individual interviews and observations with 19 participants, we arrived at suggestions for optimizing the websites of social campaigns aimed at young Quebec adults. One of the greatest difficulties in designing such sites is choosing the most appropriate strategies to bring about a change of attitude or behaviour, especially among those who adopt risky behaviours (smoking, drunk driving, unprotected sex); strategies which, to be more effective, should be adapted to the characteristics of the target audiences and of the media of diffusion. In order to analyse the social campaigns adequately, we drew on theories of persuasion and theories of media influence judged relevant in our context, since they are specific to this type of study. These combined approaches allowed us to integrate into the analysis of a given campaign the contexts that surround it and the practices in which it is embedded.
Among other things, this study allowed us to demonstrate that significant gaps exist between the expectations and needs of Internet users and what the websites studied offer.

Relevance:

90.00%

Publisher:

Abstract:

Information and communication technologies (ICTs) are the tools that underpin the emerging "Knowledge Society". Exchange of information or knowledge between people and through networks of people has always taken place, but ICT has radically changed the magnitude of this exchange, and thus factors such as timeliness of information and information dissemination patterns have become more important than ever. Since information and knowledge are so vital for all-round human development, the libraries and institutions that manage these resources are invaluable, and Library and Information Centres have a key role in the acquisition, processing, preservation and dissemination of information and knowledge. In the modern context, libraries provide services based on different types of documents: manuscripts, printed, digital, etc. At the same time, the acquisition, access, processing and servicing of these resources have become more complicated than ever before. ICT has been instrumental in extending libraries beyond the physical walls of a building and in providing assistance in navigating and analysing tremendous amounts of knowledge with a variety of digital tools. Thus, modern libraries are increasingly being redefined as places providing unrestricted access to information in many formats and from many sources. The research was conducted in the university libraries of Kerala State, India. It was identified that, even though information resources are flooding the world over and several technologies have emerged to manage the situation and provide effective services, most of the university libraries in Kerala were unable to exploit these technologies fully. Though the libraries have automated many of their functions, a wide gap prevails between the possible services and the services actually provided.
There are many good examples worldwide of the application of ICTs in libraries for the maximization of services, and many such libraries have adopted the principles of re-engineering and redefining as a management strategy. Hence this study examined how effectively modern ICTs have been adopted in these libraries for maximizing the efficiency of operations and services, and whether the principles of re-engineering and redefining can be applied towards this. Data were collected from library users (students as well as faculty), library professionals and university librarians, using structured questionnaires. This was supplemented by observation of the working of the libraries, discussions and interviews with the different types of users and staff, a review of the literature, etc. Personal observations were made of the organizational set-up, management practices, functions, facilities, resources, and the utilization of information resources and facilities by the users of the university libraries in Kerala. Statistical techniques such as percentage, mean, weighted mean, standard deviation, correlation and trend analysis were used to analyse the data. All the libraries could exploit only very few of the possibilities of modern ICTs, and hence they could not achieve effective Universal Bibliographic Control or the desired efficiency and effectiveness in services. Because of this, both users and professionals are dissatisfied.
Functional effectiveness in the acquisition, access and processing of information resources in various formats; development and maintenance of OPACs and WebOPACs; digital document delivery to remote users; web-based clearing of library counter services and resources; development of full-text databases, digital libraries and institutional repositories; consortium-based operations for e-journals and databases; user education and information literacy; professional development with stress on ICTs; network administration and website maintenance; and marketing of information are the major areas needing special attention to improve the situation. Finance, the level of ICT knowledge among library staff, professional dynamism and leadership, the vision and support of administrators and policy makers, and the prevailing educational set-up and social environment in the state are some of the major hurdles to reaping the maximum possibilities of ICTs in the university libraries in Kerala. The principles of Business Process Re-engineering were found suitable for restructuring and redefining the operations and service systems of the libraries. Most of the conventional departments or divisions in the university libraries were functioning as watertight compartments, and their existing management systems were too rigid to adopt the principles of change management; hence, a thorough restructuring of the divisions is indicated. Consortium-based activities and the pooling and sharing of information resources are advocated to meet the varied needs of users on the main and off campuses of the universities, in affiliated colleges and at remote stations. A uniform staff policy, similar to that prevailing in CSIR, DRDO, ISRO, etc., has been proposed by the study, not only for the university libraries in Kerala but for the entire country. Restructuring of LIS education and the integrated, planned development of school, college, research and public library systems are also justified for reaping the maximum benefits of modern ICTs.

Relevance:

90.00%

Publisher:

Abstract:

We design and implement a system that recommends musicians to listeners. The basic idea is to keep track of what artists a user listens to, to find other users with similar tastes, and to recommend other artists that these similar listeners enjoy. The system utilizes a client-server architecture, a web-based interface, and an SQL database to store and process information. We describe Audiomomma-0.3, a proof-of-concept implementation of the above ideas.
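The basic idea described above can be sketched in a few lines. The scoring scheme (taste overlap as user similarity) and the listener data are illustrative assumptions, not the actual Audiomomma-0.3 implementation, which uses a client-server architecture and an SQL database:

```python
def recommend(listens, target, top_n=3):
    """Sketch of the core idea: score other users by taste overlap with
    the target, then suggest the artists those similar users listen to."""
    target_artists = listens[target]
    scores = {}
    for user, artists in listens.items():
        if user == target:
            continue
        overlap = len(target_artists & artists)  # shared artists = similarity
        if overlap == 0:
            continue                             # no shared taste, skip
        for artist in artists - target_artists:  # only artists target hasn't heard
            scores[artist] = scores.get(artist, 0) + overlap
    ranked = sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
    return [artist for artist, _ in ranked[:top_n]]

# Hypothetical listening histories
listens = {
    "ann": {"Radiohead", "Portishead"},
    "bob": {"Radiohead", "Portishead", "Massive Attack"},
    "eve": {"Metallica", "Slayer"},
}
print(recommend(listens, "ann"))  # → ['Massive Attack']
```

A production system would replace the in-memory dictionary with SQL queries over the stored listening histories, but the ranking logic is the same.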

Relevance:

90.00%

Publisher:

Abstract:

The core course "Psychological Assessment" in the Psychology programme and in the degree programme "Human Development in the Information Society" at the University of Girona comprises 12 credits under the Spanish Organic Law on Universities. Until the 2004-05 academic year, the student's non-contact work consisted of carrying out a psychological assessment, submitted in writing at the end of the course, for which the student received a grade and, on request, a review. On the way towards the European Higher Education Area, the course now comprises 9 credits, equivalent to a total of 255 hours of contact and non-contact student work. In the 2005-06 and 2006-07 academic years, a work guide was created for managing the non-contact activity, with the objective of achieving learning at the level of application and problem solving/critical thinking (Bloom, 1975), following the recommendations of the Agency for the Quality of the University System of Catalonia (2005). The guide includes: the learning objectives, the assessment criteria, the description of the activities, the weekly schedule of work for the whole course, the specification of the scheduled tutorials for reviewing the various steps of the psychological assessment process, and the use of a forum for learning about, analysing and constructively criticizing the assessments carried out by fellow students.

Relevance:

90.00%

Publisher:

Abstract:

MapFish is an open-source development framework for building web-mapping applications. MapFish is based on the OpenLayers API and the Geo extension of the Ext library, and extends the Pylons general-purpose web development framework with geo-specific functionality. This presentation first describes what the MapFish development framework provides and how it can help developers implement rich web-mapping applications. It then demonstrates, through real web-mapping applications, what can be achieved using MapFish: Geo Business Intelligence applications, 2D/3D data visualization, on/offline data editing, advanced vector printing functionality, an advanced administration suite for building WebGIS applications from scratch, etc. In particular, the web-mapping application for the UN Refugee Agency (UNHCR) and a Regional Spatial Data Infrastructure will be demonstrated.

Relevance:

90.00%

Publisher:

Abstract:

Our work is focused on alleviating the workload of designers of adaptive courses in the complex task of authoring adaptive learning designs adjusted to specific user characteristics and the user context. We propose an adaptation platform that consists of a set of intelligent agents, where each agent carries out an independent adaptation task. The agents apply machine learning techniques to support the user modelling for the adaptation process.

Relevance:

90.00%

Publisher:

Abstract:

This paper presents a first approach to an Evaluation Engine Architecture (EEA), proposed to support adaptive integral assessment in the context of a virtual learning environment. The goal of our research is to design an evaluation engine tool to assist in the whole assessment process within the A2UN@ project, linking that tool with the other key elements of a learning design (learning tasks, learning resources and learning support). The teachers define the relations between knowledge, competencies, activities, resources and types of assessment. Given these relations, it is possible to obtain more accurate estimations of a student's knowledge for adaptive evaluations and future recommendations. The process is supported by the use of educational standards and specifications and by integral user modelling.
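As a toy illustration of how a teacher-defined relation between competencies and activities could yield knowledge estimates, the sketch below computes a weighted average of assessment scores per competency. All names and weights are hypothetical, not taken from the A2UN@ project:

```python
def estimate_knowledge(scores, competency_weights):
    """Toy sketch: estimate a student's mastery of each competency as a
    weighted average of the scores of the activities linked to it.
    `competency_weights` is the teacher-defined relation:
    competency -> {activity: weight}."""
    estimates = {}
    for competency, activity_weights in competency_weights.items():
        total = sum(activity_weights.values())
        weighted = sum(scores[act] * w for act, w in activity_weights.items())
        estimates[competency] = weighted / total
    return estimates

# Hypothetical relation and scores (all in [0, 1])
weights = {"problem solving": {"quiz1": 1.0, "project": 2.0}}
scores = {"quiz1": 0.6, "project": 0.9}
print(estimate_knowledge(scores, weights))
```

A real evaluation engine would draw the relation and scores from standards-based descriptions of the learning design rather than from inline dictionaries.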

Relevance:

90.00%

Publisher:

Abstract:

This research work deals with the problem of modeling and designing a low-level speed controller for the mobile robot PRIM. The main objective is to develop an effective educational tool. On the one hand, the interest in using the open mobile platform PRIM lies in integrating several subjects highly related to automatic control theory in an educational context, embracing communications, signal processing, sensor fusion and hardware design, amongst others. On the other hand, the idea is to implement useful navigation strategies such that the robot can serve as a mobile multimedia information point. It is in this context, with navigation strategies oriented towards goal achievement, that a local model predictive control strategy is attained. Hence, such studies are presented as a very interesting control strategy for developing the future capabilities of the system.
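A local model predictive speed controller of the kind mentioned above can be sketched very simply: predict the speed over a short horizon for each candidate command and pick the command whose prediction best matches the reference. The first-order model and its coefficients below are hypothetical, not the PRIM robot's identified dynamics:

```python
def predict_speed(v, u, steps, a=0.8, b=0.2):
    """Hypothetical first-order speed model: v[k+1] = a*v[k] + b*u."""
    for _ in range(steps):
        v = a * v + b * u
    return v

def mpc_speed_control(v, v_ref, horizon=5, candidates=None):
    """One-step local MPC: evaluate a grid of constant commands and pick
    the one whose predicted end-of-horizon speed is closest to v_ref."""
    if candidates is None:
        candidates = [i / 10 for i in range(0, 51)]  # commands 0.0 .. 5.0
    return min(candidates,
               key=lambda u: abs(predict_speed(v, u, horizon) - v_ref))

# From standstill, track a reference speed of 1.0 (units arbitrary)
u = mpc_speed_control(v=0.0, v_ref=1.0)
```

A real implementation would re-solve this optimization at every sampling instant with the measured speed, apply only the first command, and use the robot's identified model with actuator constraints.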

Relevance:

90.00%

Publisher:

Abstract:

This paper is focused on the mobile robot platform PRIM (Platform Robot Information Multimedia). This robot has been built to cover two main needs of our group: on the one hand, the need for a fully open mobile robotic platform that is very useful for the teaching and research activities of our school community; and on the other, the idea of introducing an ethical product that would be useful as a mobile multimedia information point, serving as a service tool. This paper describes exactly how the system is made up and explains the philosophy behind this work. The navigation strategies and sensor fusion, in which the machine vision system is the most important component, are oriented towards goal achievement and are the key to the behaviour of the robot.