998 results for software evolution
Abstract:
To what extent is “software engineering” really “engineering” as this term is commonly understood? A hallmark of the products of the traditional engineering disciplines is trustworthiness based on dependability. But in his keynote presentation at ICSE 2006, Barry Boehm pointed out that individuals’, systems’, and peoples’ dependency on software is becoming increasingly critical, yet dependability is generally not the top priority for software-intensive system producers. Continuing in an uncharacteristically pessimistic vein, Professor Boehm said that this situation will likely continue until a major software-induced system catastrophe, similar in impact to the 9/11 World Trade Center catastrophe, stimulates action toward establishing accountability for software dependability. He predicts that it is highly likely that such a software-induced catastrophe will occur between now and 2025. It is widely understood that software, i.e., computer programs, is intrinsically different from traditionally engineered products, but in one aspect they are identical: the extent to which the well-being of individuals, organizations, and society in general increasingly depends on software. As wardens of the future through our mentoring of the next generation of software developers, we believe that it is our responsibility to at least address Professor Boehm’s predicted catastrophe. Traditional engineering has addressed, and continually addresses, its social responsibility through the evolution of the education, practice, and professional certification/licensing of professional engineers. To be included in the fraternity of professional engineers, software engineering must do the same. To get a rough idea of where software engineering currently stands on some of these issues, we conducted two surveys. Our main survey was sent to software engineering academics in the U.S., Canada, and Australia. Among other items, it sought detailed information on their software engineering programs. Our auxiliary survey was sent to U.S. engineering institutions to get some idea of how software engineering programs compare with those in the established engineering disciplines of Civil, Electrical, and Mechanical Engineering. Summaries of our findings can be found in the last two sections of our paper.
Abstract:
Few real software systems are built completely from scratch nowadays. Instead, systems are built iteratively and incrementally, while integrating and interacting with components from many other systems. Adaptation, reconfiguration and evolution are normal, ongoing processes throughout the lifecycle of a software system. Nevertheless, the platforms, tools and environments we use to develop software are still largely based on an outmoded model that presupposes that software systems are closed and will not significantly evolve after deployment. We claim that, in order to enable effective and graceful evolution of modern software systems, we must make these systems more amenable to change by (i) providing explicit, first-class models of software artifacts, change, and history at the level of the platform, (ii) continuously analysing static and dynamic evolution to track emergent properties, and (iii) closing the gap between the domain model and the developers' view of the evolving system. We outline our vision of dynamic, evolving software systems and identify the research challenges to realizing this vision.
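As a rough sketch of what such first-class models of artifacts, change, and history could look like (an illustration under assumed abstractions, not the authors' platform), consider:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Hypothetical first-class representations of artifacts, changes, and history.
@dataclass(frozen=True)
class Artifact:
    name: str          # e.g. a class or method identifier
    version: int
    source: str

@dataclass(frozen=True)
class Change:
    artifact: str      # name of the artifact being changed
    kind: str          # "add" | "modify" | "remove"
    timestamp: datetime
    payload: str       # new source text (empty for "remove")

@dataclass
class History:
    changes: List[Change] = field(default_factory=list)

    def record(self, change: Change) -> None:
        self.changes.append(change)

    def replay(self) -> dict:
        """Rebuild the latest state of every artifact from the change log,
        so the evolution itself is queryable at the platform level."""
        state: dict = {}
        for c in sorted(self.changes, key=lambda c: c.timestamp):
            if c.kind == "remove":
                state.pop(c.artifact, None)
            else:
                prev = state.get(c.artifact)
                version = (prev.version + 1) if prev else 1
                state[c.artifact] = Artifact(c.artifact, version, c.payload)
        return state
```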
Abstract:
Software must be constantly adapted to changing requirements. The time scale, abstraction level and granularity of adaptations may vary from short-term, fine-grained adaptation to long-term, coarse-grained evolution. Fine-grained, dynamic and context-dependent adaptations can be particularly difficult to realize in long-lived, large-scale software systems. We argue that, in order to effectively and efficiently deploy such changes, adaptive applications must be built on an infrastructure that is not just model-driven, but is both model-centric and context-aware. Specifically, this means that high-level, causally-connected models of the application and the software infrastructure itself should be available at run-time, and that changes may need to be scoped to the run-time execution context. We first review the dimensions of software adaptation and evolution, and then we show how model-centric design can address the adaptation needs of a variety of applications that span these dimensions. We demonstrate through concrete examples how model-centric and context-aware designs work at the level of application interface, programming language and runtime. We then propose a research agenda for a model-centric development environment that supports dynamic software adaptation and evolution.
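A minimal sketch of context-scoped behavioral adaptation in the spirit described above (the `Greeter` example, the `context()` helper and the adaptation registry are illustrative assumptions, not the authors' infrastructure):

```python
import threading
from contextlib import contextmanager

# A thread-local stack of active contexts; adaptations are scoped to it.
_active = threading.local()

def _contexts():
    if not hasattr(_active, "stack"):
        _active.stack = []
    return _active.stack

@contextmanager
def context(name):
    """Activate an execution context for the dynamic extent of a block."""
    _contexts().append(name)
    try:
        yield
    finally:
        _contexts().pop()

class Greeter:
    # Base behavior plus context-specific variations, looked up at run time.
    _variations = {}

    def greet(self, user):
        for ctx in reversed(_contexts()):          # innermost context wins
            if ctx in self._variations:
                return self._variations[ctx](self, user)
        return f"Hello, {user}"

    @classmethod
    def adapt(cls, ctx_name):
        """Register a behavioral variation scoped to a context."""
        def register(fn):
            cls._variations[ctx_name] = fn
            return fn
        return register

@Greeter.adapt("mobile")
def _greet_mobile(self, user):
    return f"Hi {user}"               # terser behavior when "mobile" is active

g = Greeter()
print(g.greet("Ada"))                  # -> "Hello, Ada"
with context("mobile"):
    print(g.greet("Ada"))              # -> "Hi Ada"  (adaptation scoped to context)
```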
Abstract:
Enterprise Applications are complex software systems that manipulate large amounts of persistent data and interact with the user through a vast and complex user interface. In particular, applications written for the Java 2 Platform, Enterprise Edition (J2EE) are composed using various technologies such as Enterprise Java Beans (EJB) or Java Server Pages (JSP), which in turn rely on languages other than Java, such as XML or SQL. In this heterogeneous context, applying existing reverse engineering and quality assurance techniques developed for object-oriented systems is not enough. Because those techniques were created to measure quality or provide information about a single aspect of J2EE applications, they cannot properly measure the quality of the entire system. We intend to devise techniques and metrics to measure quality in J2EE applications considering all of their aspects, and to aid their evolution. Using software visualization, we also intend to inspect the structure of J2EE applications and all other aspects that can be investigated through this technique. To do so, we also need to create a unified meta-model including all the elements composing a J2EE application.
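A minimal sketch of what a unified meta-model spanning J2EE technologies might look like (the `Element` class and the `cross_language_fanout` metric are illustrative assumptions, not the authors' meta-model):

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical unified meta-model: every element of a J2EE application,
# whatever its source language, is represented as a node with typed links.
@dataclass
class Element:
    name: str
    kind: str                      # "EJB", "JSP", "JavaClass", "SQLTable", ...
    uses: List["Element"] = field(default_factory=list)

    def link(self, other: "Element") -> None:
        self.uses.append(other)

def cross_language_fanout(elements: List[Element]) -> dict:
    """A toy whole-system metric: for each element, how many elements of a
    *different* technology it depends on (invisible to single-language tools)."""
    return {
        e.name: sum(1 for dep in e.uses if dep.kind != e.kind)
        for e in elements
    }

# Usage: a JSP page calling an EJB that reads an SQL table.
orders_table = Element("ORDERS", "SQLTable")
order_bean = Element("OrderBean", "EJB")
order_page = Element("order.jsp", "JSP")
order_bean.link(orders_table)
order_page.link(order_bean)
print(cross_language_fanout([order_page, order_bean, orders_table]))
# {'order.jsp': 1, 'OrderBean': 1, 'ORDERS': 0}
```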
Abstract:
We present the results of an investigation into the nature of the information needs of software developers who work on projects that are part of larger ecosystems. In an open-question survey, we asked framework and library developers about their information needs with respect to both their upstream and downstream projects. We investigated what kind of information is required, why it is necessary, and how the developers obtain this information. The results show that the downstream needs fall into three categories roughly corresponding to the different stages of their relation with an upstream: selection, adoption, and co-evolution. The less numerous upstream needs fall into two categories: project statistics and code usage. The current-practices part of the study shows that, to satisfy many of these needs, developers use non-specific tools and ad hoc methods. We believe that this is a largely unexplored area of research.
Abstract:
BACKGROUND Quantitative light intensity analysis of the strut core by optical coherence tomography (OCT) may enable assessment of changes in the light reflectivity of a bioresorbable polymeric scaffold as it evolves from polymer to provisional matrix and connective tissue, with full disappearance and integration of the scaffold into the vessel wall. The aim of this report was to describe the methodology and to apply it to serial human OCT images post procedure and at 6, 12, 24 and 36 months in the ABSORB cohort B trial. METHODS AND RESULTS In serial frequency-domain OCT pullbacks, corresponding struts at different time points were identified by 3-dimensional foldout view. The peak and median values of light intensity were measured in the strut core by dedicated software. A total of 303 corresponding struts were serially analyzed at 3 time points. In the sequential analysis, peak light intensity increased gradually in the first 24 months after implantation and reached a plateau (relative difference with respect to baseline [%Dif]: 61.4% at 12 months, 115.0% at 24 months, 110.7% at 36 months), while the median intensity kept increasing at 36 months (%Dif: 14.3% at 12 months, 75.0% at 24 months, 93.1% at 36 months). CONCLUSIONS Quantitative light intensity analysis by OCT was capable of detecting subtle changes in the appearance of bioresorbable struts over time, and could be used to monitor the bioresorption and integration process of polylactide struts.
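The relative difference with respect to baseline is presumably computed as follows (an assumed formulation; the abstract does not give the exact definition), where $I(t)$ is the peak or median light intensity at follow-up time $t$ and $I_{\mathrm{post}}$ its post-procedure (baseline) value:

\[
\%\mathrm{Dif}(t) \;=\; 100 \times \frac{I(t) - I_{\mathrm{post}}}{I_{\mathrm{post}}}
\]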
Abstract:
As a summary of past, current, and future trends in software maintenance and reengineering research, we take in this editorial a retrospective look at the past 14 years. We provide insight into how software maintenance has evolved and into the most important research topics presented in the series of the European Conference on Software Maintenance and Reengineering.
Abstract:
Software Product Line Engineering (SPLE) has proved to have significant advantages in family-based software development, but it also implies the upfront design of a product-line architecture (PLA) from which individual product applications can be engineered. The big upfront design associated with PLAs is in conflict with the current need of "being open to change". However, the turbulence of the current business climate makes change inevitable in order to stay competitive, and requires PLAs to be open to change even late in the development. The trend of "being open to change" is manifested in the Agile Software Development (ASD) paradigm, and it is spreading to the domain of SPLE. To reduce the big upfront design of PLAs as currently practiced in SPLE, new paradigms are being created, one being Agile Product Line Engineering (APLE). APLE aims to make the development of product lines more flexible and adaptable to change, as promoted in ASD. To put APLE into practice it is necessary to provide mechanisms that assist and guide the agile construction and evolution of PLAs while complying with the "be open to change" agile principle. This thesis defines a process for the agile construction and evolution of product-line architectures, which we refer to as Agile Product-Line Architecting (APLA). The APLA process provides agile architects with a set of models for describing, documenting and tracing PLAs, as well as an algorithm to analyze change impact. Both the models and the change impact analysis offer the following capabilities: flexibility and adaptability at the time of defining software architectures, enabling change during the incremental and iterative design of PLAs (anticipated or planned changes) and their evolution (unanticipated or unforeseen changes); assistance in checking architectural integrity through change impact analysis in terms of architectural concerns, such as dependencies on earlier design decisions, rationale, constraints, and risks; and guidance in the change decision-making process through change impact analysis in terms of architectural components and connections. APLA therefore provides the mechanisms required to construct and evolve PLAs that can easily be refined iteration after iteration during the APLE development process. These mechanisms are provided in a modeling framework called FPLA (Flexible Product-Line Architecture). The contributions of this thesis have been validated through a project on a metering management system in electrical power networks. This case study took place in an i-smart software factory, in collaboration with the Technical University of Madrid and Indra Software Labs.
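A minimal sketch of change impact analysis over architectural components and connections (an illustration of the general idea only, not the APLA algorithm; the component names are invented):

```python
from collections import defaultdict, deque

# connections: directed edges "component A depends on component B".
# Hypothetical PLA fragment; names are illustrative only.
connections = [
    ("Billing", "MeterReader"),
    ("Reporting", "Billing"),
    ("Dashboard", "Reporting"),
    ("MeterReader", "DeviceDriver"),
]

def impact_of(changed, connections):
    """Components that (transitively) depend on the changed component."""
    dependents = defaultdict(set)
    for src, dst in connections:
        dependents[dst].add(src)          # reverse edge: dst is used by src
    impacted, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for user in dependents[node]:
            if user not in impacted:
                impacted.add(user)
                queue.append(user)
    return impacted

print(impact_of("MeterReader", connections))
# {'Billing', 'Reporting', 'Dashboard'}
```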
Abstract:
The main objective of this article is to analyze the evolution of teaching techniques, ranging from the use of blackboard and chalk in old traditional classes, through slides and overhead projectors in the eighties and presentation software in the nineties, to video, electronic boards and network resources nowadays. Furthermore, all of the above is viewed in the light of the different mentalities through which the teacher conditions the student with each new teaching technique, improving soft skills but perhaps leading either to encouragement or to disinterest, and possibly to a lack of consolidation of educational knowledge at the scientific, technological and specific levels. In the same way, we study the process of adaptation required of teachers, the differences in the processes of information transfer and education towards the student, and even the existence of teachers who are no longer engaged by their work, which has become much simpler due to new technologies and to the greater ease in the development of classes under the criteria described in the new degree programs adopted by the European Higher Education Area. Moreover, it is also intended to understand the evolution of students’ profiles, from the eighties to the present time, in order to understand certain attitudes, behaviours, accomplishments and acknowledgements acquired over the semesters within the degree programs. As an Educational Innovation Group, another key question also arises: what will the learning techniques of the future be? How will these evolving matters affect, both positively and negatively, the mentality, attitude, behaviour, learning, achievement of goals and satisfaction levels of all the elements involved in university education? Clearly, this evolution from chalk to the electronic board, and the three-dimensional view of our works and their sequence, greatly facilitates understanding and later adaptation to the business world, but it does not answer the unknowns regarding knowledge and the full development of achievement indicators in the basic skills of a degree. This is the underlying question at the root of the research presented here.
Abstract:
A new version of the TomoRebuild data reduction software package is presented, for the reconstruction of scanning transmission ion microscopy tomography (STIMT) and particle induced X-ray emission tomography (PIXET) images. First, we present a state of the art of the reconstruction codes available for ion beam microtomography. The algorithm proposed here brings several advantages. It is a portable, multi-platform code, designed in C++ with well-separated classes for easier use and evolution. Data reduction is separated into different steps and the intermediate results may be checked if necessary. Although no additional graphic library or numerical tool is required to run the program from the command line, a user-friendly interface was designed in Java, as an ImageJ plugin. All experimental and reconstruction parameters may be entered either through this plugin or directly in text format files. A simple standard format is proposed for the input of experimental data. Optional graphic applications using the ROOT interface may be used separately to display and fit energy spectra. Regarding the reconstruction process, the filtered backprojection (FBP) algorithm, already present in the previous version of the code, was optimized so that it is now about 10 times faster. In addition, the Maximum Likelihood Expectation Maximization (MLEM) algorithm and its accelerated version, Ordered Subsets Expectation Maximization (OSEM), were implemented. A detailed user guide in English is available. A reconstruction example of experimental data from a biological sample is given. It shows the capability of the code to reduce noise in the sinograms and to deal with incomplete data, which opens a new perspective on tomography with a low number of projections or a limited angular range.
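For reference, the classic MLEM multiplicative update mentioned above can be written in a few lines; the sketch below is a generic textbook formulation, not the TomoRebuild C++ implementation:

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Maximum Likelihood Expectation Maximization.
    A : (n_projections, n_pixels) system matrix, y : measured projections."""
    x = np.ones(A.shape[1])                    # non-negative initial image
    sensitivity = A.sum(axis=0) + eps          # column sums, normalization term
    for _ in range(n_iter):
        forward = A @ x + eps                  # forward projection of estimate
        x *= (A.T @ (y / forward)) / sensitivity
    return x

# Tiny example: a 2-pixel "image" seen through 3 projection rays.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_x = np.array([2.0, 3.0])
y = A @ true_x
print(mlem(A, y, n_iter=200))                  # converges towards [2., 3.]
```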
Abstract:
New concepts in air navigation have been introduced recently. Among them are trajectory optimization, 4D trajectories, RBT (Reference Business Trajectory), TBO (Trajectory Based Operations), CDA (Continuous Descent Approach) and ACDA (Advanced CDA), conflict resolution, arrival management (AMAN), and the introduction of new aircraft (UAVs, UASs) into the airspace. Although some of these concepts are new, future Air Traffic Management will maintain the four ATM key performance areas: safety, capacity, efficiency, and environmental impact. Thus, the performance of the ATM system is directly related to the accuracy with which the future evolution of the traffic can be predicted. In this sense, future air traffic management will require a variety of support tools to provide suitable help to the users and engineers involved in airspace management. Most of these tools are based on an appropriate trajectory prediction module as their main component. The purpose of these tools is therefore related to testing and evaluating air navigation concepts before they become fully operational. The aim of this paper is to provide an overview of the design of a software tool for estimating aircraft trajectories adapted to these air navigation concepts. Other uses of the tool, such as controller design, vertical navigation assessment, procedure validation, and hardware- and software-in-the-loop simulation, are also available. The paper shows the process followed to design the tool, the software modules needed to perform accurately, and the process followed to validate the output data.
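As a toy illustration of the kind of kinematic step a trajectory prediction module performs (a point-mass model under assumed constant speed and path angle; it does not reflect the models used in the paper):

```python
from dataclasses import dataclass
import math

# Toy point-mass aircraft state and one explicit-Euler prediction step.
# All values and the constant-speed/constant-angle assumptions are illustrative.
@dataclass
class State:
    x: float      # along-track distance [m]
    h: float      # altitude [m]
    tas: float    # true airspeed [m/s]
    gamma: float  # flight path angle [rad]

def predict(state: State, dt: float) -> State:
    """Advance the state by dt seconds under constant speed and path angle."""
    return State(
        x=state.x + state.tas * math.cos(state.gamma) * dt,
        h=state.h + state.tas * math.sin(state.gamma) * dt,
        tas=state.tas,
        gamma=state.gamma,
    )

# Predict 60 s of a continuous descent at 3 degrees and about 220 kt.
s = State(x=0.0, h=3048.0, tas=113.0, gamma=math.radians(-3.0))  # ~10,000 ft
for _ in range(60):
    s = predict(s, 1.0)
print(round(s.x), round(s.h))   # ground distance flown and altitude after 60 s
```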
Abstract:
The aim of the paper is to discuss the use of knowledge models to formulate general applications. First, the paper presents the recent evolution of the software field, where increasing attention is paid to conceptual modeling. Then, the current state of knowledge modeling techniques is described, where increased reliability is achieved through modern knowledge acquisition techniques and supporting tools. The KSM (Knowledge Structure Manager) tool is described next. First, the concept of knowledge area is introduced as a building block where methods to perform a collection of tasks are included together with the bodies of knowledge providing the basic methods to perform the basic tasks. Then, the CONCEL language to define vocabularies of domains and the LINK language for method formulation are introduced. Finally, the object-oriented implementation of a knowledge area is described and a general methodology for application design and maintenance supported by KSM is proposed. To illustrate the concepts and methods, an example of a system for intelligent traffic management in a road network is described. This example is followed by a proposal to generalize the resulting architecture for reuse. Finally, some concluding comments are offered about the feasibility of using the knowledge modeling tools and methods for general application design.
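A minimal sketch of the knowledge-area building block as described above (a loose illustration; the class, its delegation scheme and the traffic example are assumptions, and the CONCEL/LINK languages are not modeled):

```python
# A unit bundling the tasks it can perform with the methods that solve them,
# possibly delegating to the bodies of knowledge it is composed of.
class KnowledgeArea:
    def __init__(self, name):
        self.name = name
        self.methods = {}            # task name -> callable implementing it
        self.subareas = []           # sub-areas (bodies of knowledge)

    def add_method(self, task, fn):
        self.methods[task] = fn

    def include(self, subarea):
        self.subareas.append(subarea)

    def perform(self, task, *args):
        """Perform a task with a local method, or delegate to sub-areas."""
        if task in self.methods:
            return self.methods[task](*args)
        for sub in self.subareas:
            try:
                return sub.perform(task, *args)
            except KeyError:
                continue
        raise KeyError(f"no method for task {task!r} in {self.name}")

# Usage: a traffic-management area delegating congestion diagnosis.
diagnosis = KnowledgeArea("congestion diagnosis")
diagnosis.add_method("diagnose", lambda flow, capacity: flow / capacity > 0.9)
traffic = KnowledgeArea("traffic management")
traffic.include(diagnosis)
print(traffic.perform("diagnose", 1800, 1900))   # True: link close to capacity
```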
Abstract:
In the last two decades, the relevance of knowledge acquisition and dissemination processes has been highlighted, and consequently the study of these processes and the implementation of the technologies that make them possible has generated growing interest in the scientific community. In order to ease and optimize knowledge acquisition and dissemination, hierarchical organizations have evolved towards a flatter configuration with more agile networked structures, decreasing dependence on a centralized authority and building team-work-oriented organizations. At the same time, Web 2.0 collaboration tools such as blogs and wikis have developed quickly. These collaboration tools are characterized by a strong social component and can reach their full potential when they are deployed in flat organizational structures. Web 2.0, based on user participation, arose as a concept challenging the website-based technologies of the late 1990s. Fortune 500 companies (HP, IBM, Xerox, Cisco) adopted the concept immediately, even though there was no unanimity about its real usefulness or how it could be measured. This is partly because the factors that drive employees to adopt these tools are not properly understood, which has led to implementation failures due to the existence of certain barriers. Given this situation, and faced with the theoretical advantages that these Web 2.0 collaboration tools seem to have for companies, managers and the scientific community are showing an increasing interest in answering the following question: which factors contribute to the decision of a company's employees to adopt Web 2.0 tools for collaborative purposes? The answer is complex, since these tools are relatively new in business environments and allow a move from an information-management approach to knowledge management. To answer this question, the chosen approach applies technology adoption models, all of them based on individuals' perceptions of different aspects related to technology usage. From this perspective, the main objective of this thesis is to study the factors influencing the adoption of blogs and wikis in companies, using a unified, theoretical, predictive model of technology adoption with a holistic approach built from the literature on technology adoption models and from the particularities of the tools under study and of the specific context. This theoretical model makes it possible to determine the factors that predict the intention to use these tools and their actual usage. 
The research is structured in five parts: introduction to the research subject, development of the theoretical framework, research design, empirical analysis, and conclusions. The thesis develops these five parts sequentially through seven chapters: part one (chapter 1), part two (chapters 2 and 3), part three (chapters 4 and 5), part four (chapter 6), and part five (chapter 7). The first chapter focuses on the research problem statement and the objectives of the thesis; it also discusses the concept of collaboration and its link with the Web 2.0 collaborative tools considered in the research, introduces technology adoption models, and presents the justification, objectives and work plan of the research. After introducing the research topic, the second chapter reviews the evolution of the main existing technology adoption models (IDT, TRA, SCT, TPB, DTPB, C-TAM-TPB, UTAUT, UTAUT2), highlighting their foundations and the factors they employ. Based on the technology adoption models set out in chapter 2, the third chapter studies those factors adapted to the context of the Web 2.0 collaborative tools under study, blogs and wikis. To ease understanding of the final model, the factors are grouped into four types: technological factors, control factors, socio-normative factors, and other factors specific to the collaborative tools. The first part of chapter 4 selects the factors most relevant for studying the adoption of collaborative tools, and the second part presents the theoretical model that specifies the relationships between the different factors; these relationships become the hypotheses to be tested by the empirical study. Chapter 5 describes the empirical study used to test the research hypotheses set out in chapter 4. The research is social and exploratory in nature, and is based on a quantitative empirical study analyzed with multivariate analysis techniques. This chapter describes the construction of the scales of the measurement instrument and the data-collection methodology, presents a detailed analysis of the sample, and checks for bias attributable to the measurement method (common method bias). The first part of chapter 6 presents the analysis of results; the statistical technique employed, PLS-SEM, is first explained as a multivariate analysis tool capable of predictive analysis, together with the methodology used to validate the measurement model and the structural model in a two-stage analysis, the requirements the sample must meet, and the thresholds of the parameters considered. In the second part of chapter 6, an empirical analysis of the data is performed for the two samples, one for blogs and one for wikis, in order to validate the research hypotheses proposed in chapter 4. Finally, chapter 7 reviews the degree of fulfillment of the objectives raised in chapter 1 and presents the theoretical, methodological and practical contributions derived from the work. General conclusions and detailed conclusions for each group of factors follow, together with the practical recommendations that can be drawn to guide the implementation of these tools in real company settings. The chapter closes with the limitations of the study and a number of suggested future research lines, along with the partial research results obtained during the course of the investigation.
Abstract:
Transport capacity is one of the critical measures for evaluating the progress that a social and economic area can reach, and it is a sector of high significance for today's society. Among the most common types of transport, the railway is one whose use is growing. Both for passenger transport and for freight, the train has been consolidated as a very useful means of transport: within cities, between cities within a small surrounding radius and, increasingly, thanks to high speed, between cities separated by long distances. This thesis aims to help in the design of one of the most essential stages of a railway installation project: the traction power supply system. The design stage of a railway power supply system confronts many doubts and uncertainties that must be resolved with high accuracy; the capacity to meet the energy demands of the railway operation depends on the success of this stage, and it is also very important to manage the direct and indirect costs derived from installation and operation. With the methodology presented in this thesis, the designer is offered an expert system that proposes, as solutions, a set of correct power supply scenarios, verified by solving equation models and correct both from the point of view of the validity of the different electrical parameters and in terms of budgeted costs and the impact of indirect costs. Using this methodology, the designer obtains, in a relatively short time, a set of feasible solutions from which to choose the one that best suits the final interests. This thesis was developed within a line of research at CITEF (Railway Research Centre of the Technical University of Madrid). Among other projects and research lines, CITEF has been working on validation and dimensioning studies of railway power supply systems for a large range of clients and railway systems. Throughout these projects, the interest has revolved mostly around the following parameters of the electrical system: calculating the number and location of traction substations and the power of each substation; the type of overhead contact line or catenary along the route, the conductors that compose it and their characteristics; calculating the number and position of autotransformers for systems operating in bi-voltage alternating current (2x25 kV); the location of neutral zones; and validating, against the standards, the voltage drops along the line, the maximum return voltages, the overheating/overcurrent of the catenary conductors, and the overheating of the transformers of the traction substations. The idea is that the solutions provided by the methodology suggest scenarios in which these parameters are within the limits set by the standards. Having a repository of possible scenarios in which the electrical parameters and elements have already been verified as correct saves time and testing, and would noticeably improve the usual design process for railway electrical systems.
Direct costs, referring to elements such as traction substations, autotransformers and neutral zones, take up a large share of the budget of a railway system. This thesis also examines the effect of the indirect costs incurred in the installation and operation of electrical systems: those derived from environmental impact, the costs of maintaining the electrical equipment and the catenary, the costs of connecting the traction substations to the general or distribution grid and, finally, the installation costs of each element. Based on experience and previous research work, these costs were considered relevant enough to be kept under control by the methodology, for the benefit of designers of this type of systems. The methodology covers the possibility that the proposed electrical designs involve unacceptable cost variations with respect to the initially expected budgets or, with equal electrical parameters, proposes the cheapest designs in terms of the costs discussed. In the analysis of direct and indirect costs, their impact has been divided into two main categories: those incurred during installation and those that occur later, during the operation of the railway line. These costs are normally conflicting, meaning that when one improves the other tends to worsen; for this reason a system is needed that treats both objectives separately in order to evaluate correctly the impact of each one on the final design.
To achieve these objectives, the methodology is built on three basic pillars: the railway simulator Hamlet, which integrates modules to build complete railway track layouts, a mechanical and traction module to simulate the movement of rolling stock, a railway signaling module and an electrical system module, implemented in C++ and Matlab; a study of how to focus the different possible electrical scenarios so that they can be examined quickly, based on the maximum power demand peak produced by the railway traffic; optimization algorithms, for which, after a study of the techniques adaptable to such a complex system as a railway infrastructure, three multi-objective genetic algorithms were chosen (NSGA-II, AMGA-II and ɛ-MOEA) for reasons of response time, multi-objective capability, ease of adaptation, and wide application in engineering projects; and the design of objective functions and an equation model prepared to work with the direct and indirect costs and with the basic constraints that the electrical scenarios must not violate, constraints that monitor electrical behavior and budgetary stability. The tests carried out with the system either reproduce situations that can occur in practice or use real systems and problems; three real railway lines were integrated and configured in order to evaluate the results produced by the methodology. Besides validating the methodology, this made it possible to compare the genetic algorithms, to compare the chosen electrical systems with real ones, and to reach very satisfactory conclusions. The methodology suggests a very interesting line of work, both for the results already obtained and for the opportunities its evolution may create. This thesis was developed with this idea in mind, and it is hoped that it can serve as another factor in the difficult task of validating and designing railway power supply systems.
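Because installation and operation costs are conflicting objectives, the non-dominated (Pareto-optimal) scenarios are the natural output of such a multi-objective search. Below is a minimal sketch of Pareto filtering over invented candidate scenarios (illustrative only, not the NSGA-II/AMGA-II/ɛ-MOEA implementations used in the thesis):

```python
# Candidate electrical scenarios with (installation_cost, operation_cost);
# the numbers are made up for illustration.
candidates = {
    "A": (10.0, 8.0),
    "B": (12.0, 5.0),
    "C": (9.0, 9.5),
    "D": (13.0, 6.0),   # dominated by B (worse on both objectives)
}

def dominates(p, q):
    """p dominates q if it is no worse on every objective and better on one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(points):
    return {
        name: cost
        for name, cost in points.items()
        if not any(dominates(other, cost)
                   for o, other in points.items() if o != name)
    }

print(sorted(pareto_front(candidates)))   # ['A', 'B', 'C'] -- D is dominated
```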
Abstract:
An important aspect of Process Simulators for photovoltaics is prediction of defect evolution during device fabrication. Over the last twenty years, these tools have accelerated process optimization, and several Process Simulators for iron, a ubiquitous and deleterious impurity in silicon, have been developed. The diversity of these tools can make it difficult to build intuition about the physics governing iron behavior during processing. Thus, in one unified software environment and using self-consistent terminology, we combine and describe three of these Simulators. We vary structural defect distribution and iron precipitation equations to create eight distinct Models, which we then use to simulate different stages of processing. We find that the structural defect distribution influences the final interstitial iron concentration ([Fe-i]) more strongly than the iron precipitation equations. We identify two regimes of iron behavior: (1) diffusivity-limited, in which iron evolution is kinetically limited and bulk [Fe-i] predictions can vary by an order of magnitude or more, and (2) solubility-limited, in which iron evolution is near thermodynamic equilibrium and the Models yield similar results. This rigorous analysis provides new intuition that can inform Process Simulation, material, and process development, and it enables scientists and engineers to choose an appropriate level of Model complexity based on wafer type and quality, processing conditions, and available computation time.
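As a rough illustration of the diffusivity-limited regime, the diffusion length of interstitial iron during an anneal can be estimated from an Arrhenius diffusivity; the prefactor and activation energy below are commonly cited literature-style values assumed for illustration, not parameters from this paper:

```python
import math

K_B = 8.617e-5            # Boltzmann constant [eV/K]

def fe_diffusivity(T_kelvin, D0=1.0e-3, Ea=0.67):
    """Arrhenius diffusivity of interstitial Fe in Si [cm^2/s].
    D0 [cm^2/s] and Ea [eV] are assumed literature-style values."""
    return D0 * math.exp(-Ea / (K_B * T_kelvin))

def diffusion_length_um(T_celsius, t_seconds):
    D = fe_diffusivity(T_celsius + 273.15)
    return math.sqrt(D * t_seconds) * 1.0e4   # cm -> micrometres

# e.g. how far Fe_i can move during a 30 min anneal at two temperatures
for T in (700.0, 850.0):
    print(T, round(diffusion_length_um(T, 30 * 60), 1))
```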