21 results for Requirements elicitation techniques
at Universidad Politécnica de Madrid
Abstract:
Context: This research deals with the selection of requirements elicitation techniques for software product requirements and the over-selection of open interviews. Objectives: This paper proposes and validates a framework to help requirements engineers select the most suitable elicitation techniques at any time. Method: We have explored both the existing underlying theory and the results of empirical research to build the framework. Based on this, we have deduced and put together justified proposals about the framework components. We have also had to add information not found in theoretical or empirical sources; in these cases, we drew on our own experience and expertise. Results: A new validated approach for requirements technique selection. This new approach selects techniques other than the open interview, offers a wider range of possible techniques and captures more requirements information. Conclusions: The framework is easily extensible and changeable. Whenever any theoretical or empirical evidence for an attribute, technique or adequacy value is unearthed, the information can easily be added to the framework.
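A minimal sketch of the kind of adequacy-based selection such a framework supports; the attribute names, techniques and adequacy scores below are invented for illustration and are not taken from the paper:

```python
# Hypothetical sketch: rank elicitation techniques by adequacy for a project profile.
# Attribute names, techniques and adequacy scores are illustrative, not the paper's.
ADEQUACY = {
    "open interview":       {"analyst_experience_low": 1, "domain_unknown": 2, "tacit_knowledge": 2},
    "structured interview": {"analyst_experience_low": 3, "domain_unknown": 2, "tacit_knowledge": 1},
    "protocol analysis":    {"analyst_experience_low": 2, "domain_unknown": 1, "tacit_knowledge": 3},
    "card sorting":         {"analyst_experience_low": 3, "domain_unknown": 3, "tacit_knowledge": 2},
}

def rank_techniques(project_attributes):
    """Return techniques ordered by total adequacy for the attributes that hold."""
    scores = {
        technique: sum(adequacy.get(attr, 0) for attr in project_attributes)
        for technique, adequacy in ADEQUACY.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(rank_techniques({"analyst_experience_low", "domain_unknown"}))
```

Because the adequacy values live in a plain table, adding a new attribute, technique or piece of evidence only means editing the table, which is the extensibility property the conclusions emphasise.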
Abstract:
Interviews are the most widely used elicitation technique in Requirements Engineering (RE). Despite their importance, research on interviews is quite limited, particularly from an experimental perspective. We have performed a series of experiments exploring the relative effectiveness of structured and unstructured interviews. This line of research has been active in Information Systems in recent years, so our experiments can be aggregated with existing ones to obtain guidelines for practice. Experimental aggregation is a demanding task: it requires not only a large number of experiments but also consideration of the influence of existing moderators. However, in the current state of the practice in RE, those moderators are unknown. We believe that analyzing the threats to validity in interviewing experiments may give insight into how to improve further replications and the corresponding aggregations. This strategy could likely be applied in other Software Engineering areas as well.
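As a rough illustration of what aggregating such experiments involves, the fixed-effect inverse-variance mean below combines per-experiment effect sizes; the numbers are invented, and a real aggregation would also need to model the unknown moderators the abstract mentions:

```python
# Illustrative fixed-effect aggregation of effect sizes from replicated experiments.
# Effect sizes and variances are invented for the example.
effects   = [0.42, 0.15, 0.33]   # e.g. standardized mean differences per experiment
variances = [0.04, 0.09, 0.05]

weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_var = 1.0 / sum(weights)

print(f"pooled effect = {pooled:.3f}, SE = {pooled_var ** 0.5:.3f}")
```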
Abstract:
Usability is a critical quality attribute of highly interactive software systems. The human-computer interaction field proposes recommendations for achieving an acceptable system usability level. The discipline of software engineering has established that some of these recommendations affect not only the user interface but also the core system functionality. This type of usability recommendation must be taken into account from the early activities and throughout the software development process, as is done for attributes like security, ease of maintenance or performance. Software engineering has conducted studies and put forward proposals for tackling usability in the early development activities, particularly requirements elicitation and architecture design. These proposals have a high level of abstraction. This research addresses usability in later activities of the development process: detailed design and programming. The goal of this research is to discover, specify and validate reusable usability solutions for detailed design and programming. Abort Operation, Progress Feedback and Preferences, three usability functionalities identified as having a high impact on design, are selected for the study. An inductive method, whereby a general solution is induced from particular web applications built for the purpose, is used to discover reusable elements. During the construction of the applications, the traceability of the elements related to each usability functionality is maintained. At the end of the process, the common and potentially reusable elements are analysed. The findings are specified as implementation-oriented design patterns and as programming patterns for each of the languages used: PHP, VB .NET and Java. The solutions specified as patterns are validated using the case study methodology. Independent developers use the patterns to build the three usability functionalities into two new web applications. As a result, the developers successfully use the proposed solutions for two of the functionalities: Abort Operation and Preferences. The Progress Feedback functionality cannot be fully implemented. We conclude that it is possible to discover reusable elements for implementing each usability functionality. These elements include: application scenarios, which are the combinations of cases that generate the usability functionalities; common responsibilities needed to cover the scenarios; common components to fulfil the responsibilities; design elements associated with the components; and the code implementing the design. Specifying solutions as patterns is useful for communicating findings to other developers, and the patterns improve through further use in other development projects. Reusability depends on the characteristics of the usability functionality implementation, particularly the level of coupling between the usability functionality and the application functionalities, and the internal complexity of the solution.
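A generic sketch, not the thesis' actual PHP/VB .NET/Java patterns, of the kind of responsibility an "Abort Operation" functionality imposes on core application code: a long-running task must check a cancellation flag and leave the application in a consistent state when aborted.

```python
# Generic illustration (not the thesis' pattern) of the Abort Operation responsibility:
# a long-running task periodically checks a cancellation flag and stops cleanly,
# discarding or reporting only the work completed so far.
import threading

class AbortableOperation:
    def __init__(self):
        self._abort = threading.Event()

    def abort(self):            # typically wired to a "Cancel" control in the UI
        self._abort.set()

    def run(self, work_items):
        done = []
        for item in work_items:
            if self._abort.is_set():
                return {"status": "aborted", "completed": done}  # consistent partial state
            done.append(self._process(item))
        return {"status": "finished", "completed": done}

    def _process(self, item):
        return item * 2          # placeholder for real application work

op = AbortableOperation()
print(op.run([1, 2, 3]))
```

The coupling the abstract mentions is visible even in this toy: the cancellation check has to be woven into the application's own processing loop, it cannot live only in the presentation layer.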
Abstract:
Context: Measurement is crucial to empirical software engineering. Although reliability and validity are two important properties warranting consideration in measurement processes, they may be influenced by random or systematic error (bias) depending on which metric is used. Aim: Check whether the simple subjective metrics used in empirical software engineering studies are prone to bias. Method: Comparison of the reliability of a family of empirical studies on requirements elicitation that explore the same phenomenon using different design types and objective and subjective metrics. Results: The objectively measured variables (experience and knowledge) tend to achieve more reliable results, whereas subjective metrics using Likert scales (expertise and familiarity) tend to be influenced by systematic error or bias. Conclusions: Studies that predominantly use subjectively measured variables, like opinion polls or expert opinion acquisition, are therefore the most exposed to this kind of bias.
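A minimal sketch, on invented data, of one way such a comparison can be probed: checking how well a subjective Likert rating tracks the objective quantity it is meant to reflect, where a weak correlation would hint at systematic error in the subjective scale.

```python
# Illustrative check (invented data): does a subjective Likert rating of expertise
# track the objectively measured years of experience it is meant to reflect?
from scipy.stats import pearsonr

years_experience = [0, 1, 2, 3, 5, 8, 10, 12]   # objective metric
likert_expertise = [2, 4, 2, 3, 3, 4, 3, 5]     # 1-5 self-rating, subjective metric

r, p = pearsonr(years_experience, likert_expertise)
print(f"r = {r:.2f}, p = {p:.3f}")
```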
Abstract:
This article shows how a very small company has tailored Scrum to its own needs. The main additions made were the “sprint design” phase and the “sprint test” phase. Before sprint 0, requirements elicitation and the functional specification were carried out in order to meet the deadlines and costs agreed with clients. In addition, the introduction of an agile project management tool has supported the whole process and is considered the main success factor for the institutionalization of the Scrum process.
Abstract:
Quality is a major challenge in software construction. Software engineers consider usability to be a quality attribute. Originally, usability was viewed as a non-functional requirement. Usability was assumed to be simply an information presentation property, and there was a belief that a usable software product could be developed by separating the presentation layer from the rest of the system. Depending on the system type and user needs, however, usability often runs deeper, and it is not enough to consider just presentation to build usable software. The human-computer interaction (HCI) community has put forward a list of recommendations to improve usability. Some such recommendations have a direct impact on software product functionality. Recent studies have also evaluated the relationship between usability and functional requirements. This research suggests that usability should be taken into account from the early stages of software construction to prevent costly rework later on. The inclusion of usability features adds complexity to the development process. The research reported here evaluates the possibility of using patterns to incorporate usability into a software product. Specifically, it evaluates the following usability programming patterns (UPPs): Abort Operation, Progress Feedback and Preferences. Usability Mechanism Development Guides (USDGs) are applied to these three usability mechanisms. These guides propose patterns for eliciting and later incorporating usability into the different software development phases, including programming. The reported research addresses the development of a software product from requirements elicitation through to implementation. Usability mechanisms are incorporated into each development phase in accordance with USDG recommendations. A real piece of software was developed to test the feasibility of using USDGs, yielding proposals for improving the guides. The effort of incorporating the usability mechanisms is also evaluated: each evaluation yields data providing an estimate of the additional workload required to incorporate each usability mechanism into the software development process.
Abstract:
Context: This PhD dissertation addresses the requirements elicitation activity. Requirements elicitation is generally acknowledged as one of the most important activities of the requirements process, and it has a direct impact on software quality. It is an activity where communication among stakeholders (analysts, customers, users) is paramount. The analyst’s ability to effectively understand customers’ and users’ needs is a critical factor for the success of software development. The literature has focused on studying and describing a specific set of personal skills that the analyst must have to perform requirements elicitation effectively. However, few studies have explored those skills from an empirical viewpoint. Goal: This research aims to study the effects of experience, domain knowledge and academic qualifications on analysts’ effectiveness when performing requirements elicitation, during the first contacts between analyst and customer. Research method: We have conducted eight empirical studies: quasi-experiments (four) and controlled experiments (four). A total of 110 experimental subjects participated, including graduate students from the Escuela Técnica Superior de Ingenieros Informáticos of the Universidad Politécnica de Madrid as well as professionals. The experimental task consisted of requirements elicitation sessions about one or more problem domains (known and unknown to the subjects). The elicitation sessions were conducted using open (unstructured) interviews. After each interview, the subjects reported in writing all the information acquired. Results: In unknown domains, the analyst’s experience (interviewing, requirements, development and professional experience) does not influence effectiveness. In known domains, interviewing experience (r = 0.34, p-value = 0.080) and requirements experience (r = 0.22, p-value = 0.279) have a positive effect; that is, analysts with more years of interviewing and/or requirements experience tend to achieve higher effectiveness. On the other hand, development experience (r = -0.06, p-value = 0.765) and professional experience (r = -0.35, p-value = 0.077) tend to have a null and a negative effect, respectively. The analyst’s problem domain knowledge has a moderate positive effect (r = 0.31), statistically significant (p-value = 0.029), on the effectiveness of the elicitation activity; that is, analysts with domain knowledge tend to be more effective in the problem domains they know. Regarding academic qualifications, due to the lack of diversity in the subjects’ academic degrees, we cannot reach a conclusion. We were able to explore the effect of academic qualifications in only two quasi-experiments, and our results show opposing effects (r = 0.694, p-value = 0.51 and r = -0.266, p-value = 0.383). Besides the variables mentioned above, we have confirmed the existence of moderator variables influencing the elicitation activity, such as the interviewee and training. Our data confirm that the interviewee is a key factor in the elicitation activity, with a statistically significant effect on analysts’ effectiveness (p-value = 0.000). Interviewing one or another interviewee represents a difference in effectiveness of 18% - 23%, in natural units. On the other hand, requirements training increases analysts’ effectiveness considerably: subjects who performed requirements elicitation after specific requirements training tend to be 12% - 20% more effective than those who did not receive it. The effect is statistically significant (p-value = 0.000). Finally, we have observed three phenomena that could influence the results of this research. First, analysts’ effectiveness differs depending on the type of domain element. In known domains, experienced analysts tend to acquire more concepts than novices; in unknown domains, processes are the elements acquired most prominently. Second, analysts reach a kind of “glass ceiling” that prevents them from acquiring more information; that is, the analyst only recognises (part of) the elements of the problem domain mentioned. This is observed in both the known and the unknown domain, and it seems to be related to the way analysts explore the problem domain. Third, although years of experience do not appear to predict how effective an analyst will be, they do seem to guarantee that an analyst with some experience will, in general, have a minimum effectiveness higher than the minimum effectiveness of less experienced analysts. Conclusions: Our results show that, in unknown domains, experience alone does not determine requirements analysts’ effectiveness. In known domains, analysts’ effectiveness is influenced by their interviewing and requirements experience, although only partially. Other variables also influence analysts’ effectiveness, such as soft skills. The analyst’s problem domain knowledge has a positive effect on effectiveness and interacts positively with experience, increasing effectiveness even further. Although it was not possible to reach solid conclusions about the effect of academic qualifications, it seems clear that specific requirements training has an important positive influence on analysts’ effectiveness. Finally, the analyst is not the only relevant factor in the elicitation activity: customers and users (interviewees) also play an important role in the information generation process.
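A minimal sketch, on invented data, of how an effect such as "requirements training raises effectiveness" is typically tested; the dissertation's actual data and analyses are not reproduced here.

```python
# Invented data: comparing elicitation effectiveness (fraction of domain elements
# captured) between trained and untrained subjects with an independent t-test.
from scipy.stats import ttest_ind

effectiveness_trained   = [0.52, 0.61, 0.58, 0.66, 0.55, 0.63]
effectiveness_untrained = [0.40, 0.47, 0.44, 0.43, 0.51, 0.46]

t, p = ttest_ind(effectiveness_trained, effectiveness_untrained)
diff = sum(effectiveness_trained) / 6 - sum(effectiveness_untrained) / 6
print(f"mean difference = {diff:.2f}, p = {p:.4f}")
```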
Abstract:
In this paper we want to point out, by means of a case study, the importance of incorporating some knowledge engineering techniques into software engineering processes. Specifically, we are referring to knowledge eduction techniques. We know the difficulty of requirements acquisition and its importance for minimising the risks of a software project, both in the development phase and in the maintenance phase. Use cases are generally used to capture functional requirements. However, as we will show in this paper, this technique is insufficient when the problem domain knowledge exists only in the experts’ minds. In this situation, combining use cases with eduction techniques, in every development phase, will let us discover the correct requirements.
Abstract:
Silicon wafers comprise approximately 40% of crystalline silicon module cost, and represent an area of great technological innovation potential. Paradoxically, unconventional wafer-growth techniques have thus far failed to displace multicrystalline and Czochralski silicon, despite four decades of innovation. One of the shortcomings of most unconventional materials has been a persistent carrier lifetime deficit in comparison to established wafer technologies, which limits the device efficiency potential. In this perspective article, we review a defect-management framework that has proven successful in enabling millisecond lifetimes in kerfless and cast materials. Control of dislocations and slowly diffusing metal point defects during growth, coupled to effective control of fast-diffusing species during cell processing, is critical to enable high cell efficiencies. To accelerate the pace of novel wafer development, we discuss approaches to rapidly evaluate the device efficiency potential of unconventional wafers from injection-dependent lifetime measurements.
Abstract:
This thesis contributes to the analysis and design of printed reflectarray antennas. The main part of the work is focused on the analysis of dual offset antennas comprising two reflectarray surfaces, one acting as sub-reflector and the other as main reflector. These configurations introduce additional complexity in several respects compared with conventional dual offset reflectors; however, they offer many degrees of freedom that can be used to improve the electrical performance of the antenna. The thesis is organized in four parts: the development of an analysis technique for dual-reflectarray antennas; a preliminary validation of the methodology using equivalent reflector systems as reference antennas; a more rigorous validation of the software tool by manufacturing and testing a dual-reflectarray antenna demonstrator; and the practical design of dual-reflectarray systems for applications that show the potential of this kind of configuration to scan the beam and to generate contoured beams. In the first part, a general tool has been implemented to analyze high-gain antennas constructed of two flat reflectarray structures. The classic reflectarray analysis, based on the Method of Moments (MoM) under the local periodicity assumption, is used for both the sub- and main reflectarrays, taking into account the angle of incidence on each reflectarray element. The incident field on the main reflectarray is computed taking into account the field radiated by all the elements of the sub-reflectarray. Two approaches have been developed: one employs a simple approximation to reduce the computer run time, and the other does not, but offers improved accuracy in many cases. The approximation consists of computing the reflected field on each main-reflectarray element only once for all the fields radiated by the sub-reflectarray elements, assuming that the response will be the same because the only difference is a small variation in the angle of incidence. This approximation is very accurate when the main-reflectarray elements show a relatively small sensitivity to the angle of incidence. An extension of the analysis technique has been implemented to study dual-reflectarray antennas comprising a main reflectarray printed on a parabolic, or in general curved, surface. In many dual-reflectarray configurations, the reflectarray elements are in the near field of the feed horn. To consider the near field radiated by the horn, the incident field on each reflectarray element is computed using a spherical mode expansion. In this region the angles of incidence are moderately wide, and they are considered in the analysis of the reflectarray to better calculate the actual incident field on the sub-reflectarray elements. This technique increases the accuracy of the prediction of co- and cross-polar patterns and antenna gain with respect to the case of ideal feed models. In the second part, as a preliminary validation, the proposed analysis method has been used to design a dual-reflectarray antenna that emulates previous dual-reflector antennas in Ku- and W-bands, including a reflectarray as sub-reflector. The results for the dual-reflectarray antenna compare very well with those of the parabolic reflector and reflectarray sub-reflector: radiation patterns, antenna gain and efficiency are practically the same when the main parabolic reflector is substituted by a flat reflectarray. The results show that the gain is only reduced by a few tenths of a dB as a result of the ohmic losses in the reflectarray. The phase adjustment on two surfaces provided by the dual-reflectarray configuration can be used to improve antenna performance in applications requiring multiple beams, beam scanning or shaped beams. Third, a very challenging dual-reflectarray antenna demonstrator has been designed, manufactured and tested for a more rigorous validation of the analysis technique presented. In the proposed antenna configuration the feed, the sub-reflectarray and the main reflectarray are in the near field of one another, so the conventional far-field approximations are not suitable for the analysis of such an antenna. This geometry is used as a benchmark for the proposed analysis tool under very stringent conditions. Some aspects of the proposed analysis technique that improve the accuracy of the analysis are also discussed. These improvements include a novel method to reduce the inherent cross-polarization introduced mainly by grounded patch arrays. It has been checked that cross-polarization in offset reflectarrays can be significantly reduced by properly adjusting the patch dimensions in the reflectarray so as to produce an overall cancellation of the cross-polarization. The dimensions of the patches are adjusted not only to provide the required phase distribution to shape the beam, but also to exploit the zero crossings of the cross-polarization components. The last part of the thesis deals with direct applications of the technique described. The technique presented is directly applicable to the design of contoured-beam antennas for DBS applications, where the cross-polarisation requirements are very stringent. The beam shaping is achieved by synthesizing the phase distribution on the main reflectarray while the sub-reflectarray emulates an equivalent hyperbolic sub-reflector. Dual-reflectarray antennas also have the ability to scan the beam over small angles about boresight. Two possible architectures for a Ku-band antenna are described, based on a dual planar reflectarray configuration that provides electronic beam scanning in a limited angular range. In the first architecture, the beam scanning is achieved by introducing phase control in the elements of the sub-reflectarray while the main reflectarray is passive. A second alternative is also studied, in which the beam scanning is produced using 1-bit control on the main reflectarray, while a passive sub-reflectarray is designed to provide a large focal distance within a compact configuration. The system aims to provide a solution for bi-directional satellite links for emergency communications. In both proposed architectures, the objective is to provide compact optics that are simple to fold and deploy.
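For orientation, the sketch below uses the standard single-reflectarray phase relation found in reflectarray textbooks, not the dual-reflectarray analysis developed in the thesis: each element must add a phase that compensates the feed path delay and imposes a progressive phase for a pencil beam in direction (theta0, phi0). Geometry and frequency values are illustrative.

```python
# Standard single-reflectarray phase relation (textbook result, not the thesis' method):
# phase each element must add so the reflected field forms a pencil beam in (theta0, phi0).
import numpy as np

def xy_to_3d(xy):
    return np.column_stack([xy, np.zeros(len(xy))])    # elements lie on the z = 0 plane

def element_phases(xy, feed, theta0, phi0, freq_hz):
    k0 = 2 * np.pi * freq_hz / 3e8                      # free-space wavenumber
    d = np.linalg.norm(xy_to_3d(xy) - feed, axis=1)     # feed-to-element path lengths
    u = np.sin(theta0) * (xy[:, 0] * np.cos(phi0) + xy[:, 1] * np.sin(phi0))
    return np.mod(k0 * (d - u), 2 * np.pi)              # required phase, wrapped to [0, 2*pi)

xy = np.array([[0.0, 0.0], [0.01, 0.0], [0.02, 0.0]])   # element positions (m)
feed = np.array([0.0, 0.0, 0.15])                       # feed phase centre (m)
print(element_phases(xy, feed, theta0=np.radians(20), phi0=0.0, freq_hz=12e9))
```

In the dual-reflectarray case the same idea applies on two surfaces, which is what gives the extra degrees of freedom for beam scanning and shaping described above.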
Abstract:
Nowadays computing platforms consist of a very large number of components that must be supplied with different voltage levels and have different power requirements. Even a very small platform, like a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture that optimizes performance and meets electrical specifications as well as cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency and cost targets. The difficulty of selecting the right solution arises from the wide range of power conversion products provided by different manufacturers. These products range from discrete components (to build converters) to complete power conversion modules that employ different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built, and the designer has to select a limited number of converters in order to simplify the analysis. In this thesis, in order to overcome these difficulties, a new design methodology for power supply systems is proposed. This methodology integrates evolutionary computation techniques in order to make it possible to analyze a large number of possibilities. This exhaustive analysis helps the designer to quickly define a set of feasible solutions and select the best performance trade-off for each application. The proposed approach consists of two key steps: one for the automatic generation of architectures and the other for the optimized selection of components. This thesis details the implementation of these two steps. The usefulness of the methodology is corroborated by contrasting the results obtained on real problems and on experiments designed to test the limits of the algorithms.
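A toy sketch of the evolutionary-selection idea, not the thesis' actual algorithm: a genome assigns one converter to each load, and selection plus mutation searches for the assignment with the best trade-off between conversion loss and cost. Loads, converters and all figures are invented.

```python
# Toy evolutionary selection of one converter per load (illustrative, invented data).
import random

LOADS = [("cpu", 5.0), ("radio", 2.0), ("memory", 1.5)]            # (name, power in W)
CONVERTERS = [("buck_A", 0.90, 1.2), ("buck_B", 0.95, 2.5), ("ldo_C", 0.80, 0.4)]  # (name, eff, cost)

def fitness(genome):
    loss = sum(p / CONVERTERS[g][1] - p for (_, p), g in zip(LOADS, genome))  # conversion losses (W)
    cost = sum(CONVERTERS[g][2] for g in genome)
    return loss + 0.5 * cost                                        # weighted trade-off, lower is better

def evolve(generations=200, pop_size=20):
    pop = [[random.randrange(len(CONVERTERS)) for _ in LOADS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = [[g if random.random() > 0.2 else random.randrange(len(CONVERTERS))
                     for g in random.choice(survivors)] for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print([(load, CONVERTERS[g][0]) for (load, _), g in zip(LOADS, best)])
```

The thesis' methodology additionally generates candidate architectures automatically; the sketch only shows the component-selection half of the idea.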
Abstract:
Dislocation mobility —the relation between applied stress and dislocation velocity—is an important property to model the mechanical behavior of structural materials. These mobilities reflect the interaction between the dislocation core and the host lattice and, thus, atomistic resolution is required to capture its details. Because the mobility function is multiparametric, its computation is often highly demanding in terms of computational requirements. Optimizing how tractions are applied can be greatly advantageous in accelerating convergence and reducing the overall computational cost of the simulations. In this paper we perform molecular dynamics simulations of ½ 〈1 1 1〉 screw dislocation motion in tungsten using step and linear time functions for applying external stress. We find that linear functions over time scales of the order of 10–20 ps reduce fluctuations and speed up convergence to the steady-state velocity value by up to a factor of two.
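A minimal sketch of the two loading schedules compared in the paper: a step function that applies the full stress at t = 0 and a linear ramp over roughly 10-20 ps reaching the same target; the numerical values are illustrative only.

```python
# Step vs. linear ramp for the externally applied stress (illustrative values).
def stress_step(t_ps, target_mpa):
    return target_mpa if t_ps >= 0.0 else 0.0

def stress_linear(t_ps, target_mpa, ramp_ps=15.0):
    return target_mpa * min(max(t_ps, 0.0) / ramp_ps, 1.0)

for t in (0.0, 5.0, 15.0, 30.0):
    print(t, stress_step(t, 800.0), stress_linear(t, 800.0))
```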
Abstract:
The geometrical factors defining an adhesive joint are of great importance, as the joint design greatly conditions the performance of the bond. One of the most relevant geometrical factors is the thickness of the adhesive, as it decisively influences the mechanical properties of the bond and has a clear economic impact on manufacturing processes for long runs. Traditional mechanical joints (riveting, welding, etc.) are characterised by a predictable performance and are very reliable in service conditions. Thus, structural adhesive joints will only be selected in industrial applications demanding mechanical requirements and adverse environmental conditions if suitable reliability (the same as or higher than that of mechanical joints) is guaranteed. For this purpose, the objective of this paper is to analyse the influence of the adhesive thickness on the mechanical behaviour of the joint and, by means of a statistical analysis based on the Weibull distribution, to propose the optimum adhesive thickness combining the best mechanical performance with high reliability. This procedure, which is applicable without great difficulty to other joints and adhesives, provides a general approach to the more reliable use of adhesive bonds and, therefore, to their better and wider use in industrial manufacturing processes.
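A minimal sketch of the statistical step, on invented strength data for a single adhesive thickness: fitting a Weibull distribution, whose shape parameter reflects the scatter (and hence reliability) and whose scale parameter gives the characteristic strength. The paper's analysis repeats this per thickness before choosing the optimum.

```python
# Illustrative two-parameter Weibull fit of joint strengths (invented data, MPa).
from scipy.stats import weibull_min

strengths_mpa = [18.2, 20.1, 19.4, 21.3, 17.8, 20.7, 19.9, 18.9, 21.0, 19.1]

shape, loc, scale = weibull_min.fit(strengths_mpa, floc=0)   # location fixed at zero
print(f"Weibull modulus (shape) = {shape:.1f}, characteristic strength = {scale:.1f} MPa")
```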
Abstract:
The area of human-machine interfaces is growing fast due to its high importance in all technological systems. The basic idea behind designing human-machine interfaces is to enrich the communication with the technology in a natural and easy way. Gesture interfaces are a good example of transparent interfaces. Such interfaces must properly identify the action the user wants to perform, so accurate gesture recognition is of the highest importance. However, most systems based on gesture recognition use complex methods requiring high-resource devices. In this work, we propose to model gestures by capturing their temporal properties, which significantly reduces storage requirements, and to use clustering techniques, namely self-organizing maps and an unsupervised genetic algorithm, for their classification. We further propose to train a certain number of algorithms with different parameters and combine their decisions using majority voting in order to decrease the false positive rate. The main advantage of the approach is its simplicity, which enables implementation on devices with limited resources and therefore at low cost. The testing results demonstrate its high potential.
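A minimal sketch of the combination step described above: several classifiers trained with different parameters vote on each gesture and the majority label wins, with rejection when no majority exists. The classifiers themselves (the self-organizing maps and the genetic clustering) are stubbed out, and the gesture labels are invented.

```python
# Majority voting over several gesture classifiers; rejecting ties helps cut false positives.
from collections import Counter

def majority_vote(predictions):
    """predictions: one predicted label per classifier for the same gesture."""
    label, votes = Counter(predictions).most_common(1)[0]
    return label if votes > len(predictions) / 2 else "reject"

print(majority_vote(["swipe_left", "swipe_left", "circle"]))   # -> swipe_left
print(majority_vote(["swipe_left", "circle", "tap"]))          # -> reject
```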
Abstract:
In this work, the power management techniques implemented in a high-performance node for Wireless Sensor Networks (WSN), based on a RAM-based FPGA, are presented. This new custom node architecture is intended for high-end WSN applications that include complex sensor management (like video cameras), computationally demanding tasks (such as image encoding or robust encryption), and/or higher data bandwidth needs. For these complex processing tasks, while still meeting low-power design requirements, it can be shown that the combination of different techniques, such as extensive HW algorithm mapping, smart management of power islands to selectively switch components on and off, smart and low-energy partial reconfiguration, and an adequate set of power-saving modes and wake-up options, may yield energy results that compete with and improve on the energy usage of the typical low-power microcontrollers used in many WSN node architectures. In fact, the results show that higher-complexity tasks favor HW-based platforms, while the flexibility achieved by dynamic and partial reconfiguration techniques can be comparable to that of SW-based solutions.
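A back-of-the-envelope sketch of why duty-cycling power islands matters for an FPGA-based node: most of the cycle is spent in a low-power state, so the average energy drops sharply. All power and timing figures below are invented, not measurements from the paper.

```python
# Illustrative duty-cycle energy budget (invented figures).
def energy_per_cycle_mj(p_active_mw, t_active_ms, p_sleep_mw, t_sleep_ms):
    return (p_active_mw * t_active_ms + p_sleep_mw * t_sleep_ms) / 1e3

always_on   = energy_per_cycle_mj(p_active_mw=350.0, t_active_ms=1000.0, p_sleep_mw=0.0, t_sleep_ms=0.0)
duty_cycled = energy_per_cycle_mj(p_active_mw=350.0, t_active_ms=40.0,  p_sleep_mw=2.0, t_sleep_ms=960.0)
print(f"always on: {always_on:.1f} mJ/s, duty-cycled: {duty_cycled:.1f} mJ/s")
```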