791 results for Aspect-Oriented Software Development
Abstract:
With the availability of lower cost but highly skilled software development labor from offshore regions, entrepreneurs from developed countries who do not have software development experience can utilize this workforce to develop innovative software products. In order to succeed in offshored innovation projects, the often extreme knowledge boundaries between the onsite entrepreneur and the offshore software development team have to be overcome. Prior research has proposed that boundary objects are critical for bridging such boundaries – if they are appropriately used. Our longitudinal, revelatory case study of a software innovation project is one of the first to explore the role of the software prototype as a digital boundary object. Our study empirically unpacks five use practices that transform the software prototype into a boundary object such that knowledge boundaries are bridged. Our findings provide new theoretical insights for literature on software innovation and boundary objects, and have implications for practice.
Abstract:
If we postulate a need for the transformation of society towards sustainable development, we also need to transform science and overcome the fact/value split that makes it impossible for science to be accountable to society. The orientation of this paradigm transformation in science has been under debate for four decades, generating important theoretical concepts, but they have had limited impact until now. This is due to a contradictory normative science policy framing that science has difficulties dealing with, not least of all because the dominant framing creates a lock-in. We postulate that in addition to introducing transdisciplinarity, science needs to strive for integration of the normative aspect of sustainable development at the meta-level. This requires a strategically managed niche within which scholars and practitioners from many different disciplines can engage in a long-term common learning process, in order to become a “thought collective” (Fleck) capable of initiating the paradigm transformation. Arguing with Piaget that “decentration” is essential to achieve normative orientation and coherence in a learning collective, we introduce a learning approach—Cohn's “Theme-Centred Interaction”—which provides a methodology for explicitly working with the objectivity and subjectivity of statements and positions in a “real-world” context, and for consciously integrating concerns of individuals in their interdependence with the world. This should enable a thought collective to address the epistemological and ethical barriers to science for sustainable development.
Abstract:
In this study, the work and life of Indian IT engineers engaged in software development in Japan were examined through a questionnaire survey. The findings were further supported by comparative analyses with Chinese and Korean software engineers. While the Indian IT software engineers appeared rather satisfied with their life in Japan overall, they seemed rather dissatisfied with their working conditions, including fringe benefits, the company's working-time management, salary and bonus levels, and promotion opportunities. The profiles and perceptions of the Indian engineers were shown to differ from those of the Chinese and Korean engineers in Japan.
Abstract:
Software Product Line Engineering (SPLE) is becoming widely used due to the improvements it brings when developing software products of the same family. However, SPLE demands a long-term investment in a product-line platform that might not be profitable in rapidly changing business settings. Since Agile Software Development (ASD) approaches are being successfully applied in volatile markets, several companies have suggested the idea of integrating SPLE and ASD when a product family has to be developed. Agile Product Line Engineering (APLE) advocates the integration of SPLE and ASD to address the shortcomings each exhibits when applied to software development on its own. A previous literature review of experiences and practices in APLE revealed important challenges in fully putting APLE into practice. Our contribution addresses several of these challenges by tailoring the agile method Scrum by means of three concepts that we have defined: plastic partial components, working PL-architectures, and reactive reuse.
Abstract:
Resource Analysis (a.k.a. Cost Analysis) tries to approximate the cost of executing a program as a function of its input data sizes, without actually having to execute the program. While a powerful resource analysis framework for object-oriented programs existed before this thesis, advanced aspects that improve the efficiency, the accuracy and the reliability of the analysis results still need to be investigated further. This thesis tackles this need from four different perspectives. (1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses that keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost non-heap-allocated variables. In this thesis we present two extensions to this approach: the first is to consider array accesses in addition to object fields, while the second focuses on handling cases for which the locality conditions cannot be proven unconditionally, by finding aliasing preconditions under which tracking such heap locations is feasible. (2) The aim of incremental analysis is, given a program, its analysis results and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without having to re-analyze fragments of code that are not affected by the changes. During software development, programs are constantly modified, yet most analyzers still read and analyze the entire program at once in a non-incremental way. This thesis presents an incremental resource usage analysis which, after a change in the program is made, is able to reconstruct the upper bounds of all affected methods incrementally. To this purpose, we propose (i) a multi-domain incremental fixed-point algorithm that can be used by all global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of the cost functions affected by the change. (3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy unless the tool implementation or the results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. In this thesis we focus on the development of a formal framework for verifying the resource guarantees obtained by the analyzers, instead of verifying the tools themselves. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code. COSTA derives upper bounds for Java programs, while KeY proves the validity of these bounds and provides a certificate. The main contribution of our work is to show that this tool cooperation can be used to automatically produce verified resource guarantees. (4) Distribution and concurrency are now mainstream. Concurrent objects form a well-established model for distributed concurrent systems. In this model, objects are the concurrency units and communicate via asynchronous method calls. Distribution suggests that the analysis must infer the cost of the diverse distributed components separately. In this thesis we propose a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, can keep the cost of the diverse distributed components separate.
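To make the notion of a resource guarantee concrete, the small Java method below illustrates the kind of program a cost analyzer such as COSTA reasons about. The bound mentioned in the comments is an illustrative sketch of the expected result, not actual tool output, and the class and method names are made up for this example.

    public class CostExample {
        // For an array of length n = a.length, the loop body executes n times,
        // so a cost analyzer would typically infer an upper bound of the form
        // c0 + c1 * a.length on the number of executed instructions.
        public static int sum(int[] a) {
            int total = 0;
            for (int i = 0; i < a.length; i++) {
                // Heap access: a locality condition allows the analysis to track
                // a.length soundly through a ghost, non-heap-allocated variable.
                total += a[i];
            }
            return total;
        }
    }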
Abstract:
Autonomous mobile robots and remotely operated mobile robots are used successfully today in a wide range of scenarios, some as diverse as home cleaning, the movement of goods in warehouses, and space exploration. However, as in most areas of computing, it is difficult to guarantee the absence of defects in the programs that control these devices. There are different ways to measure the quality with which a system performs the functions for which it was designed; reliability is one of them. Most physical systems show a degradation in reliability as they age, generally due to wear. Software systems usually do not, since their defects are generally not acquired over time but introduced during development. If, within the software development process, we focus on the coding stage, we can set up a study to determine the reliability of different algorithms, all valid for the same task, according to the defects programmers may introduce into them. Such a basic study could have several applications, such as choosing the algorithm least sensitive to programming defects for the development of a critical system, or establishing more demanding verification and validation procedures when an algorithm with high sensitivity to defects has to be used. In this thesis we studied the influence of certain types of software defects on the reliability of three multivariable speed controllers (PID, Fuzzy and LQR) acting on a specific mobile robot. The hypothesis is that the controllers under study offer different reliability when affected by similar defect patterns, and this has been confirmed by the results. From the viewpoint of experimental planning, we first conducted the tests needed to determine whether controllers of the same family (PID, Fuzzy or LQR) offered similar reliability under the same experimental conditions. Once this was confirmed, a class representative was chosen at random from each controller family to undergo a more comprehensive test battery, in order to obtain data for a more complete comparison of the reliability of the controllers under study. The impossibility of performing a large number of tests with a real robot, together with the need to avoid damaging a device that usually has a significant cost, led us to build a multicomputer simulator of the robot. This simulator was used both to obtain well-adjusted controllers and to carry out the tests required by the reliability experiment.
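As a rough illustration of the experimental procedure, the sketch below injects a simple defect pattern into a stand-in controller, runs it against a trivial simulated plant, and estimates reliability as the fraction of successful runs. It only conveys the general idea of the study, assuming made-up names (Controller, mutate, simulateRun); it is not the thesis' controllers or its multicomputer simulator.

    import java.util.Random;

    public class ReliabilityExperiment {
        interface Controller { double control(double error); }

        // Inject a simple defect pattern: perturb the nominal gain (0.8) of a
        // proportional controller standing in for PID, Fuzzy or LQR.
        static Controller mutate(Random rnd) {
            double faultyGain = 0.8 + (rnd.nextDouble() - 0.5) * 2.0;  // wrong constant
            return error -> faultyGain * error;
        }

        // Stand-in for the robot simulator: success means the closed loop drives
        // the error below a threshold within the allotted number of steps.
        static boolean simulateRun(Controller c) {
            double error = 1.0;
            for (int step = 0; step < 500; step++) {
                error -= 0.01 * c.control(error);
                if (Math.abs(error) < 0.05) return true;
            }
            return false;
        }

        public static void main(String[] args) {
            Random rnd = new Random(42);
            int trials = 1000, successes = 0;
            for (int i = 0; i < trials; i++) {
                if (simulateRun(mutate(rnd))) successes++;
            }
            System.out.printf("Estimated reliability: %.3f%n", (double) successes / trials);
        }
    }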
Abstract:
This research is concerned with experimental software engineering, specifically experiment replication. Replication has traditionally been viewed as a complex task in software engineering, possibly due to the present immaturity of the experimental paradigm applied to software development. Researchers usually use replication packages to replicate an experiment. However, replication packages are not the solution to all the information management problems that crop up as successive replications of an experiment accumulate. This research borrows ideas from the software configuration management and software product line paradigms to support the replication process. We believe that configuration management can help to manage and administer information from one replication to another: hypotheses, designs, data analysis, etc. The software product line paradigm can help to organize and manage any changes introduced into the experiment by each replication. We expect the union of the two paradigms in replication to improve the planning, design and execution of further replications and their alignment with existing ones. Additionally, this research will contribute a web support environment for archiving information related to different experiment replications, provide information management support flexible enough to run replications with different numbers and types of changes, and afford massive storage of data from different replications. All experimenters collaborating on the same experiment must have access to its different replications.
Abstract:
There is no empirical evidence whatsoever to support most of the beliefs on which software construction is based. We do not yet know the adequacy, limits, qualities, costs and risks of the technologies used to develop software. Experimentation helps to check and convert beliefs and opinions into facts. This research is concerned with the replication area. Replication is a key component for gathering empirical evidence on software development that can be used in industry to build better software more efficiently. Replication has not been an easy thing to do in software engineering (SE) because the experimental paradigm applied to software development is still immature. Nowadays, a replication is executed mostly using a traditional replication package. But traditional replication packages do not appear, for some reason, to have been as effective as expected for transferring information among researchers in SE experimentation. The trouble spot appears to be the replication setup, caused by version management problems with materials, instruments, documents, etc. This has proved to be an obstacle to obtaining enough details about the experiment to be able to reproduce it as exactly as possible. We address the problem of information exchange among experimenters by developing a schema to characterize replications. We will adapt configuration management and product line ideas to support the experimentation process. This will enable researchers to make systematic decisions based on explicit knowledge rather than assumptions about replications. This research will output a replication support web environment. This environment will not only archive but also manage experimental materials flexibly enough to allow both similar and differentiated replications with massive experimental data storage. The platform should be accessible to several research groups working together on the same families of experiments.
Abstract:
A set of software development tools for building real-time control systems on a simple robotics platform is described in the paper. The tools are being used in a real-time systems course as a basis for student projects. The development platform is a low-cost PC running GNU/Linux, and the target system is LEGO MINDSTORMS NXT, thus keeping the cost of the laboratory low. Real-time control software is developed using a mixed paradigm. Functional code for control algorithms is automatically generated in C from Simulink models. This code is then integrated into a concurrent, real-time software architecture based on a set of components written in Ada. This approach enables the students to take advantage of the high-level, model-oriented features that Simulink offers for designing control algorithms, and the comprehensive support for concurrency and real-time constructs provided by Ada.
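As a hedged illustration of the integration pattern described above (a periodic task that repeatedly invokes an automatically generated control step), the following Java sketch mimics the structure. In the actual toolchain the step function is C code generated from Simulink and the periodic task is an Ada component; all names here (controllerStep, PERIOD_MS, readSensor, writeActuator) are placeholders.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class PeriodicControlTask {
        static final long PERIOD_MS = 10;   // 100 Hz control loop

        // Stand-in for the generated step function: read sensors, compute the
        // control output, write it to the actuators.
        static void controllerStep() {
            double measurement = readSensor();            // e.g. wheel encoder
            double command = 0.5 * (1.0 - measurement);   // placeholder control law
            writeActuator(command);                       // e.g. motor power
        }

        static double readSensor()              { return Math.random(); }
        static void   writeActuator(double cmd) { /* send command to the robot */ }

        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            // Fixed-rate scheduling approximates the periodic activation that a
            // real-time runtime (such as Ada tasking) would provide.
            scheduler.scheduleAtFixedRate(PeriodicControlTask::controllerStep,
                                          0, PERIOD_MS, TimeUnit.MILLISECONDS);
        }
    }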
Abstract:
The evolution of communications networks towards Next Generation Networks (NGN) has encouraged the development of new services. Nowadays, several technologies are being integrated into telecommunications services in order to provide new functionalities, resulting in what are known as converged services. The objective is to adapt the behavior of the services to the needs of different users, generating customized services. Some of the main technologies involved in their development are those related to the Web. But because this type of service involves combining different technologies, its development is a very complex process that has to be improved in order to reduce the time and cost required, with the aim of promoting the success of such services. This paper proposes applying software reuse through the use of a component library and presents one focused on ECharts for SIP Servlets (E4SS), a framework based on the SIP Servlet specification that uses finite state machines for the definition of converged communications services. Also, to promote the use of the library, a methodology is proposed to facilitate the integration of the library operations into the software development cycle.
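To give a flavor of the finite-state-machine style mentioned above, the following minimal Java sketch models a toy call service as a state machine. It is not the E4SS or SIP Servlet API; the states, events and transitions are invented for illustration.

    import java.util.HashMap;
    import java.util.Map;

    public class CallStateMachine {
        enum State { IDLE, RINGING, IN_CALL, ENDED }
        enum Event { INVITE, ANSWER, HANGUP }

        private State current = State.IDLE;
        private final Map<State, Map<Event, State>> transitions = new HashMap<>();

        CallStateMachine() {
            transitions.put(State.IDLE,    Map.of(Event.INVITE, State.RINGING));
            transitions.put(State.RINGING, Map.of(Event.ANSWER, State.IN_CALL,
                                                  Event.HANGUP, State.ENDED));
            transitions.put(State.IN_CALL, Map.of(Event.HANGUP, State.ENDED));
        }

        // Apply an incoming signalling event; events with no transition from the
        // current state are ignored.
        void fire(Event event) {
            State next = transitions.getOrDefault(current, Map.of()).get(event);
            if (next != null) current = next;
        }

        public static void main(String[] args) {
            CallStateMachine fsm = new CallStateMachine();
            fsm.fire(Event.INVITE);
            fsm.fire(Event.ANSWER);
            fsm.fire(Event.HANGUP);
            System.out.println("Final state: " + fsm.current);  // ENDED
        }
    }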
Abstract:
This dissertation discusses how different practitioners define project success and success factors for software projects and products. The motivation for this work is to identify the way software practitioners value and define project success, which can have implications for both practitioner motivation and software development productivity. Accordingly, in this work we are interested in the various perceptions of the term "success" held by different software practitioners and researchers. To obtain this information we performed a systematic mapping of the software development literature of recent years, trying to identify stakeholders' perceptions about the success of a project, as well as possible differences among the views of the various stakeholders of a project. Some common terms related to project success (success project; software project success factors) were considered in formulating the search strings. The results were limited to twenty-two selected peer-reviewed conference papers and journal articles published between 2003 and 2012.
Abstract:
At present, there are many applications for managing software projects; each of them aims to support and simplify the tasks of the managers and developers who belong to the development teams. Software development teams are usually made up of a wide variety of resources, both human and material. Each one performs a specific role in the project and may not be dedicated to it full time, so these resources have to be shared across the existing project portfolio. To meet this need, a fundamental requirement for project management applications is the ability to manage several projects simultaneously (multi-project management), distributing the dedication of resources among the projects in the portfolio. There is currently a large number of project management methodologies, so the success of a project partly lies in choosing the most appropriate one. Among all the existing methodologies, this study focuses on the increasingly used agile project management methodologies: it describes what they consist of, what benefits they provide compared with traditional methodologies, and which are the most widely used thanks to their proven benefits and the value they add to project management. Accordingly, another fundamental requirement when assessing project management applications has been their capacity to support and apply agile project management methodologies. This study has also taken into account the type of application according to its installation and access, distinguishing between web applications, which need to be installed on a web server and are accessible from any device with a browser, and desktop applications, which must be installed locally on a machine and can only be accessed from that machine. Several applications have been evaluated, analyzing their compliance with the characteristics described above, resulting in three selected applications that can add the most value when managing a project portfolio.
Abstract:
The worldwide "hyper-connection" of the objects around us is the challenge that the Internet of Things paradigm promises to address. If the Internet has colonized the daily life of more than 2,000 million people around the globe, the Internet of Things faces the challenge of connecting more than 100,000 million "things" by 2020. The underlying Internet of Things technologies are the cornerstone that promises to solve interrelated global problems such as exponential population growth, energy management in cities, and environmental sustainability in the medium and long term. On the one hand, this project has the goal of acquiring knowledge about the prototyping technologies available on the market for the Internet of Things. On the other hand, the project focuses on the development of a system for device management within a Wireless Sensor and Actuator Network that offers services accessible from the Internet. To accomplish these objectives, the project will begin with a detailed analysis of several open-source hardware platforms that encourage the creative development of applications and can automatically extract information from the surrounding environment for transmission to external systems. In addition, web platforms aligned with the Internet of Things philosophy that enable the mass storage of data transferred by different hardware platforms over the Internet will be studied. The project will culminate in the proposal and specification of a service-oriented software architecture for embedded systems that enables communication between the devices on the network and the transmission of data to external systems, while abstracting the complexity of the hardware away from application developers.
Abstract:
Nowadays, Software Product Line (SPL) engineering [1] has been widely adopted in software development due to the significant improvements it provides, such as reducing cost and time-to-market and providing flexibility to respond to planned changes [2]. SPL takes advantage of the common features among the products of a family through the systematic reuse of core assets and the effective management of variability across the products. SPL features are realized at the architectural level in product-line architecture (PLA) models. Therefore, suitable modeling and specification techniques are required to model variability. In fact, architectural variability modeling has become a challenge for SPLE because PLA modeling requires modeling variability not only at the level of the external architecture configuration (see the literature reviews [3,4]), but also at the level of the internal specification of components [5]. In addition, PLA modeling requires preserving the traceability between features and PLAs. Finally, it is important to take into account that PLA modeling should guide architects in modeling the PLA core assets and variability, and in deriving the customized products. To address these needs, we present in this demonstration the FPLA Modeling Framework.
Abstract:
Software needs to be accessible to persons with disabilities, and there are several guidelines to assist developers in building more accessible software. Regulation activities are beginning to make the accessibility of software a mandatory requirement in some countries. One such activity is the European Mandate M 376, which will result in a European standard (EN 301 549) defining functional accessibility requirements for information and communication technology products and services. This paper provides an overview of Mandate M 376 and EN 301 549, and describes the requirements for software accessibility defined in EN 301 549, according to a feature-based approach.