12 results for Cyberspace Situational Knowledge, Capability, Cybersecurity, Cyberdefence, Organization

at Universidad Politécnica de Madrid


Relevance: 100.00%

Abstract:

The general objective of this document is to develop a business plan to analyse the viability of creating a new company, MyTested S.L. It aims to offer a tool with which people can pass on access to their digital accounts to their relatives after death. The idea arose from a news item about the right to be forgotten on social networks and on the internet. After some research, and paying attention to the moves being made by the large internet companies, my colleague/partner and I realised that offering this service could work as a business, since there is little competition in the market. As a result, we considered on more than one occasion the possibility of setting up our own business, using the knowledge acquired at university as the basis for building the web tool. We chose to start the project as the subject of our final degree project because this brings two important benefits: the support of the UPM teaching community, with our tutor Oscar Corcho being a particularly valuable contributor, and, since we are devoting all our time to this project, a fixed deadline for presenting both the business model and the development, which helps us plan, keeps constant pressure on the project, and prevents us from abandoning or prolonging it. In doing so we face two groups of difficulties: our limited training in business and business-model design, and, on the development side, our unfamiliarity with the technologies and APIs of the social networks. As a web tool, the project starts from very low costs, such as server hosting and the temporary hiring of sales staff to publicise the tool among funeral homes and hospitals. These positive factors benefit both the execution of the project and its progress.
As can be inferred from the previous paragraph, the service offered by the MyTested S.L. tool relates to the testamentary affairs of a deceased person; we could call it a digital will. We live in a world that is increasingly centred on digital life, and in the near future all the accounts we create on the internet will have to be closed or blocked when they fall into disuse upon the owner's death. That is the gap we can occupy, offering a tool to pass the necessary information to the people chosen by the client so that they can close or block the client's digital accounts. We believe there is an interesting opportunity here, given the scarcity of this type of service in Spain and worldwide. As of April 2015, 185,665 people were registered in the Spanish National Registry of Last Wills, meaning that 0.397% of people in Spain have registered a will. Making a will covering one's assets before a notary costs on average 40 to 80 euros, which is the main reason most Spaniards do not make a will before they die. From this we draw two lessons. First, what the tool offers is not the notarial document covering the client's assets, but rather a system through which the people designated by the client can block or close the client's accounts. Second, the price must be very low in order to reach a large number of people, with the added criterion that clients must be able to update their information, since digital information changes easily and frequently.
As will be seen in the section devoted to our vision, mission and values (although we are convinced it can be drawn from reading any part of this document), we want to achieve all our objectives not only by taking a business-minded approach to our day-to-day work, but also by turning our sincere social responsibility into one of the challenges that most excites us, fostering along the way aspects such as web development, market research, knowledge of the population's needs, new technologies and business. In general, the objectives of this study are:
- Learn the steps needed to create a company
- Develop a business-plan document containing: market analysis; definition of products and/or services; advertising and expansion (marketing) plan; financial plan
- Be able to define the requirements of a web application
- Be able to choose a suitable, up-to-date technology for a web system
- Understand how a company operates and how to interact with government services
- Check whether the possibilities offered by the environment are suitable for our activities
- Study and analyse the competition
- Define the different customer profiles for our business
- Analyse the viability of our business model
To that end, we begin by defining the general characteristics of the project, detailing the motivations that led the entrepreneurs to embark on it, the services we will offer our clients, the reasons for choosing this sector, and our mission, vision and values.---ABSTRACT---The overall goal of this document is to develop a business plan to analyse the viability of setting up a new company, MyTested S.L., which aims to offer a tool that lets people pass on access to their digital accounts to their relatives after death.
The idea came from a news item about the right to be forgotten on social networks and on the internet. After some research, and paying attention to the moves of the biggest internet companies, my peer (and partner) and I realised that offering this service could be a good business, because there are few competitors in this kind of market. Thanks to that, we considered on several occasions the possibility of setting up our own company, using the skills acquired at university as the basis for developing a web tool. We chose to start this project as the subject of our final degree project after weighing the pros and cons. On the positive side, it brings two main benefits: the support of the UPM teaching community, especially our tutor Oscar Corcho, and the fact that, since we are devoting all our time to this project, having a deadline for presenting both the business model and the development helps us plan, keeps constant pressure on the project, and prevents us from dropping it or extending it longer than necessary. Also, as a web tool it starts from low costs, such as server hosting and temporarily hiring sales staff to visit hospitals, funeral homes and insurers. On the negative side, we found two main difficulties: our limited training as entrepreneurs and in building business models, and our lack of familiarity with the technologies and APIs of the social networks. In summary, the positive factors support both carrying out the project and developing it further. As the previous paragraph suggests, the service provided by the MyTested tool relates to the testamentary affairs of a deceased person; we might call it a digital will.
We live in a world that is increasingly focused on digital life, so in the near future every internet account will have to be closed or locked once it falls into disuse upon the owner's death. This is a market niche (never better said) where we can take the lead, offering a tool that transfers the necessary information to the people chosen by the client so that they can close or lock the client's digital accounts. We believe there are interesting opportunities here, given the scarcity of this kind of service in Spain and worldwide. In April 2015 there were 185,665 people registered in the Spanish National Registry of Last Wills, meaning that 0.397% of Spanish citizens have registered a will. The average cost of making a will covering one's assets before a notary is 40 to 80 euros, which is one of the main reasons most Spaniards do not make one before they die. From these data we draw two lessons. First, we are not talking about an official notarial will at all: the tool only lets the people chosen by the deceased client close or lock the client's digital accounts. Second, last but not least, the price of our service must be very low in order to reach a large number of people, bearing in mind that clients must be able to update the information they provide, since this kind of information changes easily and frequently. As described in the section devoted to our vision, mission and values (although we believe it can be read throughout this document), our aim is not only a business focus on the day-to-day, but also to turn our sincere social responsibility into this challenge, which is what most motivates us, fostering aspects such as web development, market research, knowledge of the population's needs, new technologies and new market opportunities.
In general, the goals we want to achieve with this project are:
- Acquire the knowledge needed to be an entrepreneur, and learn the steps for starting a business
- Acquire market-research skills
- Define products and services
- Produce an advertising and growth plan (marketing plan)
- Acquire financial-planning knowledge
- Be able to define the requirements of a web application
- Be able to choose the best, up-to-date technology for web development
- Understand how a company works internally and how to communicate with official bodies
- Check whether the possibilities of the environment are adequate for our activities
- Research and analyse the competition
- Define the different target-customer profiles for our business
- Analyse the viability of the business model
To that end, we begin with a definition of the general features of the project, detailing the motivations that led the entrepreneurs to get on board, the services we will offer our clients, why we chose this market sector, and our mission, vision and values.

Relevance: 30.00%

Abstract:

This paper shows the development of a science-technology knowledge transfer model in Mexico, as a means to boost the limited relations between the scientific and industrial environments. The proposal is based on the analysis, carried out with a case study approach, of eight organizations (research centers and firms) with varying degrees of skill in the practice of science-technology knowledge transfer. The analysis highlights the synergistic use of the organizational and technological capabilities of each organization as a means to identify the knowledge transfer mechanisms best suited to establishing cooperative processes and achieving results in R&D and innovation activities.

Relevance: 30.00%

Abstract:

From its creation, the Spanish Young Generation in Nuclear (Jóvenes Nucleares, JJNN), a non-profit organization that depends on the Spanish Nuclear Society (SNE), has had as an important goal helping to transfer knowledge between generations in whatever ways are possible.

Relevance: 30.00%

Abstract:

There is growing concern over the challenges for innovation in the Freight Pipeline industry. Since the early works of Chesbrough a decade ago, we have learned a lot about the content, context and process of open innovation; however, much more research is needed in the Freight Pipeline industry. The reality is that few corporations have institutionalized open innovation practices in ways that have enabled substantial growth or industry leadership. Based on this, we pursue the following question: how does a firm's integration into knowledge networks depend on its ability to manage knowledge? A competence-based model for freight pipeline organizations is analysed; this model should be understood by any organization that wants to succeed in motivating the professionals who carry out innovations and play a main role in collaborative knowledge-creation processes. This paper aims to explain how open innovation can achieve its potential in most Freight Pipeline industries.

Relevance: 30.00%

Abstract:

During the last century, much research in the business, marketing and technology fields has developed the innovation research line, and a large amount of knowledge can be found in the literature. Currently, the importance of systematic and open approaches to managing the available innovation sources is well established in many knowledge fields, including the software engineering sector, where organizations need to absorb and exploit as many innovative ideas as possible to succeed in the current competitive environment. This Master Thesis presents a study of the innovation sources in the software engineering field. The main research goals of this work are the identification and relevance assessment of the available innovation sources and the understanding of trends in their usage. Firstly, a general review of the literature was conducted in order to define the research area and to identify research gaps. Secondly, a Systematic Literature Review (SLR) was adopted as the research method, in order to report reliable conclusions by systematically collecting quality evidence about innovation sources in the software engineering field. This contribution provides resources, built on the empirical studies included in the SLR, to support the systematic identification and adequate exploitation of the innovation sources most suitable for software engineering. Several artefacts, such as lists, taxonomies and relevance assessments of those innovation sources, have been built, and their usage trends over the last decades and their particularities in some countries and knowledge fields, especially software engineering, have been researched. This work can help researchers, managers and practitioners of innovative software organizations to systematize critical activities in innovation processes, such as the identification and exploitation of the most suitable opportunities.
Innovation researchers can use the results of this work to conduct studies in the innovation sources research area, whereas organization managers and software practitioners can use the outcomes in a systematic way to improve their innovation capability, consequently increasing the value created by the processes they run to provide products and services to their environment. In summary, this Master Thesis researches the innovation sources in the software engineering field, providing useful resources to support effective innovation sources management. Several aspects should be studied in more depth to increase the accuracy of the presented results and to obtain more resources built on empirical knowledge. This can be supported by the INnovation SOurces MAnagement (InSoMa) framework, which is introduced in this work in order to encourage open and systematic approaches to identifying and exploiting innovation sources in the software engineering field.

Relevance: 30.00%

Abstract:

In this paper, the fusion of probabilistic knowledge-based classification rules and learning automata theory is proposed, and as a result we present a set of probabilistic classification rules with self-learning capability. The probabilities of the classification rules change dynamically, guided by a supervised reinforcement process aimed at obtaining optimum classification accuracy. This novel classifier is applied to the automatic recognition of digital images corresponding to visual landmarks for the autonomous navigation of an unmanned aerial vehicle (UAV) developed by the authors. The classification accuracy of the proposed classifier and its comparison with well-established pattern recognition methods are finally reported.
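A minimal sketch of how such a self-learning rule base might work, assuming a standard linear reward/penalty learning-automata update; the rule count, learning rates and function names are illustrative assumptions, not the authors' actual scheme:

```python
# Hypothetical sketch: rule-selection probabilities updated by a linear
# reward/penalty learning-automata rule under a supervised signal.

def update_rule_probs(probs, chosen, correct, a=0.1, b=0.05):
    """Reinforce rule `chosen` if it classified correctly, penalise otherwise.

    probs  : list of rule-selection probabilities (sums to 1)
    chosen : index of the rule that fired
    correct: whether the supervised signal says the classification was right
    a, b   : reward and penalty learning rates (assumed values)
    """
    n = len(probs)
    new = probs[:]
    if correct:  # reward: move probability mass toward the chosen rule
        for i in range(n):
            if i == chosen:
                new[i] = probs[i] + a * (1.0 - probs[i])
            else:
                new[i] = (1.0 - a) * probs[i]
    else:        # penalty: redistribute mass away from the chosen rule
        for i in range(n):
            if i == chosen:
                new[i] = (1.0 - b) * probs[i]
            else:
                new[i] = b / (n - 1) + (1.0 - b) * probs[i]
    return new

p = [0.25, 0.25, 0.25, 0.25]
p = update_rule_probs(p, chosen=0, correct=True)  # rule 0 reinforced
```

Repeated over a labelled training stream, updates of this form drive the probability of the best-performing rules towards 1 while keeping the total probability mass normalised.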

Relevance: 30.00%

Abstract:

When a collection of dynamical systems coupled through an irregular interaction structure evolves, highly complex dynamics and emergent phenomena appear that cannot be predicted from the properties of the individual systems. The main objective of this thesis is precisely to advance our understanding of the relationship between the interaction topology and the collective dynamics that a complex network can sustain. Since this is a broad subject that can be approached from different points of view, this thesis studies three important, interrelated problems within it. First, in many natural and artificial systems that can be described as a complex network, the topology is not static but depends on the dynamics unfolding on the network; the neuronal networks of the brain are one example. In these adaptive networks, the topology itself emerges as a consequence of the system's self-organisation. To better understand how the properties commonly observed in real networks can emerge spontaneously, we have studied the behaviour of systems that evolve according to empirically grounded local adaptive rules. Our numerical and analytical results show that the system's self-organisation gives rise to two of the most universal properties of complex networks: at the mesoscopic scale, the appearance of a community structure, and, at the macroscopic scale, a power law in the distribution of interactions in the network. The fact that these properties appear in two models with quantitatively different evolution laws that follow the same adaptive principles suggests that this may be a very general phenomenon, and that it may lie at the origin of these properties in real systems.
Second, we propose a measure that classifies the elements of a complex network according to their relevance for sustaining collective dynamics. Specifically, we study the vulnerability of the different elements of a network to perturbations or large fluctuations, understood as a measure of the impact these external events have in disrupting a collective dynamic. The results indicate that dynamical vulnerability depends mostly on local properties, so our conclusions hold across different topologies, and they show a non-trivial dependence between the vulnerability and the connectivity of the network elements. Finally, we propose a strategy for imposing a generic target dynamic on a given network and investigate its validity on networks with diverse topologies that sustain turbulent dynamical regimes. We find that heterogeneous networks (and the vast majority of real networks studied are heterogeneous) are the most suitable for our strategy of targeting desired dynamics, the strategy being very effective even with very imperfect knowledge of the network topology. Beyond their theoretical relevance for understanding collective phenomena in complex systems, the proposed methods and results could lead to applications in experimental and technological systems, such as in vitro neuronal systems, the central nervous system (in the study of pathological synchronous activity), power grids or communication systems.
ABSTRACT
The time evolution of an ensemble of dynamical systems coupled through an irregular interaction scheme gives rise to dynamics of great complexity and emergent phenomena that cannot be predicted from the properties of the individual systems.
The main objective of this thesis is precisely to increase our understanding of the interplay between the interaction topology and the collective dynamics that a complex network can support. This is a very broad subject, so in this thesis we will limit ourselves to the study of three relevant problems that have strong connections among them. First, it is a well-known fact that in many natural and manmade systems that can be represented as complex networks the topology is not static; rather, it depends on the dynamics taking place on the network (as it happens, for instance, in the neuronal networks in the brain). In these adaptive networks the topology itself emerges from the self-organization in the system. To better understand how the properties that are commonly observed in real networks spontaneously emerge, we have studied the behavior of systems that evolve according to local adaptive rules that are empirically motivated. Our numerical and analytical results show that self-organization brings about two of the most universally found properties in complex networks: at the mesoscopic scale, the appearance of a community structure, and, at the macroscopic scale, the existence of a power law in the weight distribution of the network interactions. The fact that these properties show up in two models with quantitatively different mechanisms that follow the same general adaptive principles suggests that our results may be generalized to other systems as well, and they may be behind the origin of these properties in some real systems. We also propose a new measure that provides a ranking of the elements in a network in terms of their relevance for the maintenance of collective dynamics. Specifically, we study the vulnerability of the elements under perturbations or large fluctuations, interpreted as a measure of the impact these external events have on the disruption of collective motion. 
Our results suggest that the dynamic vulnerability measure depends largely on local properties (our conclusions are thus valid for different topologies) and show a non-trivial dependence of the vulnerability on the connectivity of the network elements. Finally, we propose a strategy for the imposition of generic goal dynamics on a given network, and we explore its performance in networks with different topologies that support turbulent dynamical regimes. It turns out that heterogeneous networks (and most real networks that have been studied belong in this category) are the most suitable for our strategy for the targeting of desired dynamics, the strategy being very effective even when knowledge of the network topology is far from accurate. Aside from their theoretical relevance for the understanding of collective phenomena in complex systems, the methods and results discussed here might lead to applications in experimental and technological systems, such as in vitro neuronal systems, the central nervous system (where pathological synchronous activity sometimes occurs), communication systems or power grids.
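As a toy illustration of the kind of vulnerability ranking described above (not the thesis's actual measure or models), the following sketch perturbs each node of a small network of coupled Kuramoto phase oscillators and records how much the synchronisation order parameter drops; the network, coupling constant and kick size are arbitrary assumptions:

```python
# Toy "dynamic vulnerability" ranking: perturb each node of a Kuramoto
# oscillator network and measure the drop in the order parameter r = |<e^{i theta}>|.

import cmath, math, random

def order_parameter(theta):
    return abs(sum(cmath.exp(1j * t) for t in theta) / len(theta))

def simulate(adj, omega, theta0, K=1.5, dt=0.05, steps=600, kick=None):
    theta = list(theta0)
    n = len(theta)
    for step in range(steps):
        if kick is not None and step == steps // 2:
            theta[kick] += math.pi  # large perturbation on one node
        dtheta = [
            omega[i] + K * sum(math.sin(theta[j] - theta[i]) for j in adj[i])
            for i in range(n)
        ]
        theta = [theta[i] + dt * dtheta[i] for i in range(n)]
    return order_parameter(theta)

random.seed(1)
n = 12
adj = {i: [j for j in range(n) if j != i and random.random() < 0.4] for i in range(n)}
for i in range(n):          # symmetrise the adjacency lists
    for j in adj[i]:
        if i not in adj[j]:
            adj[j].append(i)
omega = [random.gauss(0.0, 0.05) for _ in range(n)]
theta0 = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]

baseline = simulate(adj, omega, theta0)
# vulnerability of node i = drop in synchrony after perturbing node i
vulnerability = {i: baseline - simulate(adj, omega, theta0, kick=i) for i in range(n)}
ranking = sorted(vulnerability, key=vulnerability.get, reverse=True)
```

Nodes at the top of `ranking` are those whose perturbation disrupts the collective motion most, mirroring the idea of classifying elements by their relevance for sustaining collective dynamics.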

Relevance: 30.00%

Abstract:

In the last decades, neuropsychological theories have tended to consider cognitive functions as the result of the work of the whole brain rather than of individual local areas of its cortex. Studies based on neuroimaging techniques have increased in recent years, promoting exponential growth of the body of knowledge about the relations between cognitive functions and brain structures [1]. However, such fast evolution makes it complicated to integrate them into verifiable theories and, even more, to translate them into cognitive rehabilitation. The aim of this research work is to develop a cognitive process-modeling tool. The purpose of this system is, in the first place, to represent multidimensional data from structural and functional connectivity, neuroimaging, lesion studies and clinical interventions [2][3]. This will allow the identification of consolidated knowledge, hypotheses, experimental designs, new data from ongoing studies and emerging results from clinical interventions. In the second place, we aim to use Artificial Intelligence to assist decision making, allowing progress towards evidence-based and personalized treatments in cognitive rehabilitation. This work presents the design of the knowledge base of the knowledge-representation tool. It is composed of two different taxonomies (structure and function) and a set of tags linking both taxonomies at different levels of structural and functional organization. The remainder of the abstract is organized as follows: Section 2 presents the web application used for gathering the information needed to generate the knowledge base, Section 3 describes the knowledge base structure, and Section 4 presents the conclusions reached.
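The two linked taxonomies can be pictured with a small sketch; all node names, tags and the `functions_of` helper below are hypothetical illustrations, not the authors' schema:

```python
# Sketch of a knowledge base with two taxonomies (brain structure and
# cognitive function) linked by tags at arbitrary levels of each hierarchy.

from dataclasses import dataclass, field

@dataclass
class TaxNode:
    name: str
    children: list = field(default_factory=list)

    def find(self, name):
        """Depth-first lookup of a node by name within this subtree."""
        if self.name == name:
            return self
        for c in self.children:
            hit = c.find(name)
            if hit:
                return hit
        return None

structure = TaxNode("brain", [
    TaxNode("frontal_lobe", [TaxNode("dorsolateral_prefrontal_cortex")]),
    TaxNode("temporal_lobe", [TaxNode("hippocampus")]),
])
function = TaxNode("cognition", [
    TaxNode("executive_function", [TaxNode("working_memory")]),
    TaxNode("memory", [TaxNode("episodic_memory")]),
])

# Tags link a structure node to a function node, with evidence provenance.
tags = [
    ("dorsolateral_prefrontal_cortex", "working_memory", "lesion study"),
    ("hippocampus", "episodic_memory", "fMRI"),
]

def functions_of(structure_name):
    """Functions linked to a structure, with the source of the evidence."""
    return [(f, src) for s, f, src in tags if s == structure_name]
```

Because tags point at nodes rather than leaves, links can be asserted at whatever level of structural or functional organization the evidence supports.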

Relevance: 30.00%

Abstract:

According to the PMBOK (Project Management Body of Knowledge), project management is the application of knowledge, skills, tools, and techniques to project activities to meet the project requirements [1]. Project management has proven to be one of the most important disciplines in determining the success of any project [2][3][4]. Given that many of the activities covered by this discipline are horizontal to any kind of domain, the importance of knowing its concepts and practices becomes even more obvious. Projects in the domain of software engineering are no exception to the great influence of project management on their success. The critical role that this discipline plays in industry can be quantified. A report by McKinsey & Co [4] shows that establishing programs for teaching critical project-management skills can improve project performance in time and cost. As an example, the report states: "One defense organization used these programs to train several waves of project managers and leaders who together administered a portfolio of more than 1,000 capital projects ranging in size from $100,000 to $500 million. Managers who successfully completed the training were able to cut costs on most projects by between 20 and 35 percent. Over time, the organization expects savings of about 15 percent of its entire baseline spending." In a white paper by the PMI (Project Management Institute) about the value of project management [5], it is stated that "leading organizations across sectors and geographic borders have been steadily embracing project management as a way to control spending and improve project results."
According to the research the PMI carried out for the paper, after the economic crisis "executives discovered that adhering to project management methods and strategies reduced risks, cut costs and improved success rates, all vital to surviving the economic crisis." In every elite company, proper execution of the project-management discipline has become a must. Several members of the software industry have put effort into finding ways of assuring high-quality project results; many standards, best practices, methodologies and other resources have been produced by experts from different fields of expertise. In industry and the academic community, there is continuous research on how to better teach software engineering together with project management [4][6]. For the general practice of project management, the PMI produced a guide to the knowledge that any project manager should have in their toolbox to lead any kind of project; this guide is called the PMBOK. On the side of best practices and required knowledge for the software engineering discipline, the IEEE (Institute of Electrical and Electronics Engineers) developed the SWEBOK (Software Engineering Body of Knowledge) in collaboration with software industry experts and academic researchers, covering in the guide much of the knowledge expected of a software engineer with five years of experience [7]. The SWEBOK also covers management from the perspective of a software project. This thesis provides guidance to practitioners and members of the academic community on project management applied to software engineering. The approach used in this thesis to obtain useful information for practitioners is to take an industry-approved guide for software engineering professionals, the SWEBOK, and compare its content to what is found in the PMBOK.
After comparing the contents of the SWEBOK and the PMBOK, what is found missing in the SWEBOK is used to give recommendations on how to enrich the project-management skills of a software engineering professional. Recommendations for members of the academic community, on the other hand, are given taking into account the GSwE2009 (Graduate Software Engineering 2009) standard [8]. GSwE2009 is often used as a main reference for software engineering master's programs [9]. The standard is mostly based on the content of the SWEBOK, plus some contents considered to reinforce software engineering education. Given the similarities between the SWEBOK and the GSwE2009, the results of comparing the SWEBOK and the PMBOK are also considered valid for enriching what the GSwE2009 proposes. In the end, the recommendations for practitioners turn out to be useful also for the academic community and its strategies for teaching project management in the context of software engineering.

Relevance: 30.00%

Abstract:

This thesis addresses methodologies for computing satellite collision risk. Minimising collision risk must be approached from two different points of view. From the operational point of view, it is necessary to filter, among all the objects sharing space with an operational satellite, those that may give rise to an encounter. Since the orbits of the operational object and of the object involved in the collision are not perfectly known, the encounter geometry and the collision risk must be evaluated. Depending on that geometry or risk, an avoidance manoeuvre may be needed to prevent the collision. Such manoeuvres consume fuel, which affects the orbit-maintenance capability and therefore the useful life of the satellite. The fuel needed over a satellite's lifetime must therefore be estimated during mission design for a correct definition of its useful life, especially for satellites orbiting in heavily populated orbital regimes. Both aspects, mission design and operational aspects related to collision risk, are addressed in this thesis and are summarised in Figure 3. Regarding mission-design aspects (lower part of the figure), it is necessary to evaluate statistically the characteristics of the space population and the theories that allow computing the mean number of events encountered by a mission and its capability to reduce collision risk. These two aspects define the most appropriate procedures for reducing collision risk in the operational phase. This aspect is addressed starting from the theory described in [Sánchez-Ortiz, 2006]T.14 and implemented by the author of this thesis in the ARES tool [Sánchez-Ortiz, 2004b]T.15 provided by ESA for the evaluation of collision avoidance strategies.
This theory is extended in this thesis to account for the characteristics of the orbital data available during the operational phases of a satellite (Section 4.3.3). In addition, the theory has been extended to consider maximum collision risk when the uncertainty of the orbits of catalogued objects is unknown (as is the case for TLEs), and to consider only catastrophic collision risk (Section 4.3.2.3). These improvements have been included in the new version of ARES [Domínguez-González and Sánchez-Ortiz, 2012b]T.12 made available through [SDUP, 2014]R.60. In the operational phase, the catalogues providing orbital data of space objects are routinely processed to identify possible encounters, which are analysed with collision-risk computation algorithms in order to propose avoidance manoeuvres. Currently there is a single source of public data, the TLE (Two Line Elements) catalogue. In addition, the American Joint Space Operations Center (JSpOC) provides conjunction summary messages (CSM) when the American surveillance system identifies a possible encounter. Depending on the data used in the operational phase (TLE or CSM), the avoidance strategy may differ owing to the characteristics of that information. It is necessary to know the main characteristics of the available data (regarding the accuracy of the orbital data) in order to estimate the possible collision events encountered by a satellite throughout its useful life. In the case of TLEs, whose orbital accuracy is not provided, orbital-accuracy information derived from a statistical analysis can also be used in the operational process as well as in mission design. When CSMs are used as the basis of collision-avoidance operations, the orbital accuracy of the two objects involved is known.
These characteristics have been analysed in detail, statistically evaluating both types of data. Once that analysis was concluded, the impact of using TLE or CSM data on satellite operations was analysed (section 5.1). This analysis has been published in a peer-reviewed journal [Sánchez-Ortiz, 2015b]T.3. It provides recommendations for different missions (satellite size and orbital regime) regarding the collision avoidance strategies needed to reduce collision risk significantly. For example, for a Sun-synchronous satellite in the LEO orbital regime, the typical and widely used ACPL (Accepted Collision Probability Level) value is 10^-4. This value is not adequate when the collision avoidance scheme is based on TLE data: in that case the risk reduction capability is practically null (due to the large uncertainties of TLE data), even for short prediction times. To achieve a significant risk reduction, an ACPL of about 10^-6 or lower would be needed, producing about 10 alarms per year and satellite (considering one-day predictions) or 100 alarms per year (with three-day predictions). The main conclusion is therefore the unsuitability of TLE data for the computation of collision events. On the contrary, using CSM data, thanks to their better orbital accuracy, a significant risk reduction can be obtained with an ACPL of about 10^-4 (considering 3-day predictions). Even 5-day predictions can be considered with an ACPL of about 10^-5. Longer prediction times can also be used (7 days) with a 90% risk reduction and about 5 alarms per year (for 5-day predictions, the number of manoeuvres remains at about 2 per year). The dynamics in GEO differ from the LEO case and make the growth of orbital uncertainties with propagation time smaller.
On the other hand, the uncertainties derived from orbit determination are worse than in LEO due to the differences in observation capabilities between the two orbital regimes. Moreover, the prediction times considered for LEO may not be appropriate for a GEO satellite (since it has a longer orbital period). In this case, using TLE data, a significant risk reduction is only achieved with small ACPL values, producing one alarm per year when collision events are predicted one day in advance (too short a time to implement avoidance manoeuvres). More suitable ACPL values lie between 5×10^-8 and 10^-7, well below the values used in the current operations of most GEO missions (again, basing collision avoidance strategies on TLE data is not recommended in this orbital regime). CSM data allow an appropriate risk reduction with ACPL values between 10^-5 and 10^-4 for short and medium prediction times (10^-5 is recommended for 5- or 7-day predictions). The number of manoeuvres performed would be one in 10 years of mission. Note that these computations are performed for a satellite of about 2 metres in radius. In the future, other space surveillance systems (such as ESA's SSA programme) will provide additional catalogues of space objects with the aim of reducing satellite collision risk. To define such surveillance systems, it is necessary to identify the required catalogue performances as a function of the intended risk reduction.
The catalogue characteristics that mainly affect that capability are coverage (the number of objects included in the catalogue, limited mainly by the minimum object size derived from the limitations of the sensors used) and the accuracy of the orbital data (derived from sensor performances regarding measurement accuracy and object re-observation capability). The result of this analysis (section 5.2) has been published in a peer-reviewed journal [Sánchez-Ortiz, 2015a]T.2. This analysis was not initially foreseen in the thesis, and it shows how the theory described here, initially defined to support mission design (upper part of Figure 1), has been extended and can be applied to other purposes, such as the dimensioning of a space surveillance system (lower part of Figure 1). The main difference between the two analyses lies in considering the cataloguing capabilities (accuracy and size of observed objects) as a variable to be modified in the case of a surveillance system design, whereas they are fixed in the case of a mission design. Regarding the outputs of the analysis, all the quantities computed in a statistical collision risk analysis are important for mission design (with the objective of defining the avoidance strategy and the fuel budget), whereas for the design of a surveillance system the most important quantities are the number of manoeuvres and false alarms (system reliability) and the risk reduction capability (system effectiveness).
Additionally, a space surveillance system must be characterised by its capability to avoid catastrophic collisions (thus preventing a dramatic increase of the space debris population), whereas mission design must consider all types of encounters, since an operator is interested in avoiding both catastrophic and lethal collisions. From the analysis of the performances (size of objects to catalogue and orbital accuracy) required of a space surveillance system, it is concluded that both aspects must be set differently for the different orbital regimes. In LEO it is necessary to observe objects down to 5 cm in radius, whereas in GEO this requirement is relaxed to 100 cm in order to cover catastrophic collisions. The main reason for this difference comes from the different relative velocities between objects in the two orbital regimes. Regarding orbital accuracy, it must be very good in LEO to reduce the number of false alarms, whereas medium accuracies can be considered in higher orbital regimes. Regarding the operational aspects of collision risk determination, there are several algorithms for computing the risk between two space objects. Figure 2 summarises the collision risk computation cases and how they are addressed in this thesis. Normally, spherical objects are considered in order to simplify the risk computation (case A). This case is widely covered in the literature and is not analysed in detail in this thesis; an example case is provided in section 4.2. Considering the real shape of the objects (case B) allows computing the risk more accurately. A new algorithm is defined in this thesis to compute the collision risk when at least one of the objects is considered complex (section 4.4.2).
This algorithm allows computing the collision risk for objects formed by a set of boxes, and it has been presented at several international conferences. To evaluate its performance, its results have been compared with a Monte Carlo analysis defined to handle collisions between boxes adequately (section 4.1.2.3), since the simple collision search applicable to spherical objects is not applicable to this case. This Monte Carlo analysis is considered the truth when assessing the algorithm results; the comparison is presented in section 4.4.4. For satellites that cannot be considered spherical, the use of a model of the satellite geometry allows discarding events that are not real collisions, or estimating more accurately the risk associated with an event. The use of these algorithms with complex geometries is more relevant for large objects, given current orbital accuracy performances. In the future, if surveillance systems improve and orbits are known with greater accuracy, considering the real geometry of satellites will become increasingly relevant. Section 5.4 presents an example for a large system (a satellite with a tether). Additionally, if the two objects involved in the collision have low relative velocity (and simple geometry, case C in Figure 2), most algorithms are not applicable, requiring dedicated implementations for this particular case. In this thesis, one such algorithm from the literature [Patera, 2001]R.26 has been analysed to determine its suitability for different types of events (section 4.5). The evaluation against a Monte Carlo analysis is provided in section 4.5.2. After this analysis, it has been considered adequate to address low-velocity collisions.
In particular, it has been concluded that the need for dedicated low-velocity algorithms depends on the size of the collision volume projected on the encounter plane (B-plane) and on the size of the uncertainty associated with the relative position vector between the two objects. For large uncertainties these algorithms become more necessary, since the interval during which the error ellipsoids of the two objects may intersect is longer. The algorithm has been further tested by integrating it with the collision algorithm for objects with complex geometries. The result of this analysis shows that the algorithm can easily be extended to accommodate different types of collision risk computation algorithms (section 4.5.3). Both algorithms, together with the Monte Carlo method for complex geometries, have been implemented in the ESA operational tool CORAM, which is used to evaluate collision risk in the routine activities of ESA-operated satellites [Sánchez-Ortiz, 2013a]T.11. This fact shows the interest and relevance of the developed algorithms for the improvement of satellite operations. These algorithms have been presented at several international conferences [Sánchez-Ortiz, 2013b]T.9, [Pulido, 2014]T.7, [Grande-Olalla, 2013]T.10, [Pulido, 2014]T.5, [Sánchez-Ortiz, 2015c]T.1.

ABSTRACT

This document addresses methodologies for the computation of the collision risk of a satellite. Two different approaches need to be considered for collision risk minimisation. On an operational basis, it is necessary to sieve possible approaching objects from among all objects sharing space with an operational satellite. As the orbits of both the satellite and the eventual collider are not perfectly known but only estimated, the encounter geometry and the actual risk of collision shall be evaluated. On the basis of the encounter geometry or the risk, a manoeuvre may be required to avoid the conjunction.
Those manoeuvres consume fuel allocated for mission orbit maintenance and thus may reduce the satellite's operational lifetime. Hence, the avoidance manoeuvre fuel budget shall be estimated at mission design phase for a better estimation of mission lifetime, especially for satellites orbiting in very populated orbital regimes. These two aspects, mission design and operational collision risk aspects, are summarised in Figure 3 and covered along this thesis. The bottom part of the figure identifies the aspects to be considered for the mission design phase (statistical characterisation of the space object population and the theory computing the mean number of events and the risk reduction capability), which define the most appropriate collision avoidance approach at mission operational phase. This part is covered in this work starting from the theory described in [Sánchez-Ortiz, 2006]T.14 and implemented by this author in the ARES tool [Sánchez-Ortiz, 2004b]T.15, provided by ESA for evaluation of collision avoidance approaches. This methodology has now been extended to account for the particular features of the data sets available in the operational environment (section 4.3.3). Additionally, the formulation has been extended to allow evaluating maximum-risk computation approaches when orbital uncertainty is not available (as in the TLE case) and when only catastrophic collisions are subject to study (section 4.3.2.3). These improvements to the theory have been included in the new version of the ESA ARES tool [Domínguez-González and Sánchez-Ortiz, 2012b]T.12, available through [SDUP, 2014]R.60. At the operational phase, the catalogue data are processed on a routine basis with adequate collision risk computation algorithms to propose a conjunction avoidance manoeuvre optimised for every event. The optimisation of manoeuvres on an operational basis is not addressed in this document.
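The maximum-risk extension for the case of unavailable orbital uncertainty (the TLE case) can be illustrated with a standard small-object result. The following is only a sketch, assuming an isotropic B-plane uncertainty σ, combined hard-body radius R and miss distance d; it is not necessarily the exact formulation implemented in ARES:

```latex
P_c(\sigma) \approx \frac{R^2}{2\sigma^2}\,\exp\!\left(-\frac{d^2}{2\sigma^2}\right),
\qquad
\frac{\partial P_c}{\partial \sigma} = 0
\;\Longrightarrow\;
\sigma^2_{\max} = \frac{d^2}{2},
\qquad
P_{c,\max} = \frac{R^2}{e\,d^2}.
```

Scaling the unknown covariance to its worst case thus bounds the collision probability using only the miss distance and the object sizes, which is what makes a maximum-risk criterion applicable to data sets with no accuracy information.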
Currently, the American Two Line Elements (TLE) catalogue is the only public source of data providing orbits of objects in space with which to identify eventual conjunction events. Additionally, a Conjunction Summary Message (CSM) is provided by the Joint Space Operations Center (JSpOC) when the American surveillance system identifies a possible collision among satellites and debris. Depending on the data used for the collision avoidance evaluation, the conjunction avoidance approach may differ. The main features of the currently available data need to be analysed (in regard to accuracy) in order to estimate the eventual encounters to be found along the mission lifetime. In the case of TLE data, which are not provided with accuracy information, operational collision avoidance may also be based on statistical accuracy information such as that used in the mission design approach. This is not the case for CSM data, which include the state vectors and orbital accuracy of the two involved objects. This aspect has been analysed in detail, evaluating statistically the characteristics of both data sets in regard to the main aspects related to collision avoidance. Once the analysis of the data sets was completed, the impact of those features on the most convenient avoidance approaches was investigated (section 5.1). This analysis is published in a peer-reviewed journal [Sánchez-Ortiz, 2015b]T.3. The analysis provides recommendations for different mission types (satellite size and orbital regime) in regard to the most appropriate collision avoidance approach for relevant risk reduction. The risk reduction capability is very much dependent on the accuracy of the catalogue used to identify eventual collisions; approaches based on CSM data are recommended over the TLE-based approach.
Some approaches based on the maximum risk associated with envisaged encounters are shown to report a very large number of events, which makes them unsuitable for operational activities. Accepted Collision Probability Levels (ACPL) are recommended for the definition of avoidance strategies for different mission types. For example, for a LEO satellite in the Sun-synchronous regime, the typically used ACPL value of 10^-4 is not suitable for collision avoidance schemes based on TLE data. In this case the risk reduction capacity is almost null (due to the large uncertainties associated with TLE data sets), even for short time-to-event values. For a significant reduction of risk when using TLE data, an ACPL on the order of 10^-6 (or lower) seems to be required, producing about 10 warnings per year and mission (if one-day-ahead events are considered) or 100 warnings per year (for three-day-ahead estimations). Thus, the main conclusion from these results is the lack of feasibility of TLE data for a proper collision avoidance approach. On the contrary, for CSM data, and due to the better accuracy of the orbital information compared with TLE, an ACPL on the order of 10^-4 allows a significant reduction of the risk. This is true for events estimated up to 3 days ahead. Even 5-day-ahead events can be considered, but ACPL values down to 10^-5 should then be used. Even larger prediction times can be considered (7 days) for a risk reduction of about 90%, at the cost of a larger number of warnings, up to 5 events per year, whereas a 5-day prediction allows keeping the manoeuvre rate at 2 manoeuvres per year. The dynamics of GEO orbits are different from those in LEO, resulting in a lower growth of orbit uncertainty over time. On the contrary, uncertainties at short prediction times in this orbital regime are larger than in LEO due to the differences in observation capabilities.
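The ACPL screening logic described above can be sketched numerically. The following is a minimal illustration, not the thesis' actual implementation: the relative position at closest approach is projected onto the B-plane, the combined covariance is assumed diagonal in those axes, and the probability mass inside the combined hard-body disc is integrated and compared against the ACPL. All numeric values are hypothetical.

```python
import math

def collision_probability(miss_x, miss_y, sigma_x, sigma_y, radius, n=400):
    """Numerically integrate the 2-D Gaussian relative-position PDF over
    the combined hard-body disc in the encounter (B-) plane.
    Assumes the covariance is diagonal in the chosen B-plane axes."""
    h = 2.0 * radius / n
    total = 0.0
    for i in range(n):
        x = -radius + (i + 0.5) * h
        half = math.sqrt(max(radius * radius - x * x, 0.0))
        m = max(int(2.0 * half / h), 1)
        hy = 2.0 * half / m
        for j in range(m):
            y = -half + (j + 0.5) * hy
            px = math.exp(-0.5 * ((x - miss_x) / sigma_x) ** 2)
            py = math.exp(-0.5 * ((y - miss_y) / sigma_y) ** 2)
            total += px * py * h * hy
    return total / (2.0 * math.pi * sigma_x * sigma_y)

# Screening against an Accepted Collision Probability Level (ACPL).
# Distances in metres; values are illustrative only.
ACPL = 1e-4
pc = collision_probability(miss_x=200.0, miss_y=100.0,
                           sigma_x=300.0, sigma_y=150.0, radius=10.0)
alarm = pc > ACPL
```

Tightening the ACPL (e.g. from 10^-4 towards 10^-6) turns more of these screenings into alarms: lower-probability events are captured, at the cost of more warnings and manoeuvres, which is precisely the trade-off quantified above for TLE versus CSM data.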
Additionally, it has to be accounted for that the short prediction times feasible in LEO may not be appropriate for a GEO mission, since the orbital period is much larger in this regime. In the case of TLE data sets, a significant reduction of risk is only achieved for small ACPL values, producing about one warning event per year if warnings are raised one day in advance of the event (too short for any reaction to be considered). Suitable ACPL values would lie between 5×10^-8 and 10^-7, well below the values used in current operations for most GEO missions (TLE-based strategies for collision avoidance in this regime are not recommended). On the contrary, CSM data allow a good reduction of risk with ACPL between 10^-5 and 10^-4 for short and medium prediction times; 10^-5 is recommended for prediction times of five or seven days. The number of events raised for a suitable warning time of seven days would be about one in a 10-year mission. It must be noted that these results correspond to a spacecraft of 2 m radius; the impact of satellite size is also analysed within the thesis. In the future, other Space Situational Awareness systems (such as the ESA SSA programme) may provide additional catalogues of objects in space with the aim of reducing the risk. It is necessary to investigate the required performances of those catalogues for allowing such risk reduction. The main performance aspects are coverage (the objects included in the catalogue, mainly limited by a minimum object size derived from sensor performances) and the accuracy of the orbital data needed to evaluate the conjunctions (derived from sensor performance in regard to object observation frequency and measurement accuracy). The results of these investigations (section 5.2) are published in a peer-reviewed journal [Sánchez-Ortiz, 2015a]T.2.
This aspect was not initially foreseen as an objective of the thesis, but it shows how the theory described in the thesis, initially defined for mission design in regard to avoidance manoeuvre fuel allocation (upper part of Figure 1), has been extended and serves additional purposes, such as dimensioning a Space Surveillance and Tracking (SST) system (bottom part of the figure). The main difference between the two approaches is that the catalogue features are fixed parameters of the theory in the satellite mission design case, whereas they are inputs to be varied in the SST design case. In regard to the outputs, all the quantities computed by the statistical conjunction analysis are of importance for mission design (with the objective of a proper global avoidance strategy definition and fuel allocation), whereas for the SST design case the most relevant aspects are the manoeuvre and false alarm rates (defining a reliable system) and the risk reduction capability (driving the effectiveness of the system). In regard to the methodology for computing the risk, the SST system shall be driven by its capacity to provide the means to avoid catastrophic conjunction events (avoiding a dramatic increase of the debris population), whereas satellite mission design should consider all types of encounters, as the operator is interested in avoiding both lethal and catastrophic collisions. From the analysis of the SST features (object coverage and orbital uncertainty) required for a reliable system, it is concluded that those two characteristics are to be imposed differently for the different orbital regimes, as the population level depends on the orbit type. Coverage requirements range from 5 cm objects for the very populated LEO regime up to 100 cm in the case of the GEO region. The difference in this requirement derives mainly from the relative velocity of the encounters in those regimes.
Regarding the orbital knowledge of the catalogues, very accurate information is required for objects in the LEO region in order to limit the number of false alarms, whereas intermediate orbital accuracy can be considered for higher orbital regimes. In regard to the operational collision avoidance approaches, several collision risk algorithms are used for the evaluation of the collision risk of a pair of objects. Figure 2 provides a summary of the different collision risk algorithm cases and indicates how they are covered along this document. The typical case with high relative velocity is well covered in the literature for spherical objects (case A), with a large number of available algorithms, which are not analysed in detail in this work; only a sample case is provided in section 4.2. If complex geometries are considered (case B), a more realistic risk evaluation can be computed. A new approach for the evaluation of risk in the case of complex geometries is presented in this thesis (section 4.4.2), and it has been presented at several international conferences. The developed algorithm allows evaluating the risk for complex objects formed by a set of boxes. A dedicated Monte Carlo method has also been described (section 4.1.2.3) and implemented to allow the evaluation of the actual collisions over a large number of simulation shots. These Monte Carlo runs are considered the truth for the comparison with the algorithm results (section 4.4.4). For spacecraft that cannot be considered spheres, accounting for the real geometry of the objects may allow discarding events which are not real conjunctions, or estimating with larger reliability the risk associated with an event. This is of particular importance for large spacecraft, as the position uncertainty of current catalogues is not small enough to make a difference for objects below metre size.
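The Monte Carlo idea for box geometries can be sketched as follows. This is a hedged illustration, not the CORAM implementation: the relative position at closest approach is sampled from its assumed Gaussian distribution (diagonal covariance), and hits are counted inside a single axis-aligned box representing one element of a complex spacecraft model. All names, geometry and covariance values are illustrative assumptions.

```python
import random

def mc_box_collision_probability(mean, sigma, half_sizes,
                                 shots=200_000, seed=1):
    """Monte Carlo estimate of the probability that the sampled relative
    position falls inside an axis-aligned box centred on the target.
    mean/sigma: per-axis mean and standard deviation of the relative
    position (diagonal covariance assumed); half_sizes: box half-widths."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(shots):
        sample = [m + s * rng.gauss(0.0, 1.0) for m, s in zip(mean, sigma)]
        if all(abs(c) <= h for c, h in zip(sample, half_sizes)):
            hits += 1
    return hits / shots

# Illustrative event: 5 m mean miss along x, 20 m isotropic uncertainty,
# a 20 m x 10 m x 10 m box (all values hypothetical).
p_hit = mc_box_collision_probability(mean=(5.0, 0.0, 0.0),
                                     sigma=(20.0, 20.0, 20.0),
                                     half_sizes=(10.0, 5.0, 5.0))
```

A full multi-box model would take the union of such boxes per shot; the sampling loop is unchanged, which is what makes the Monte Carlo run a convenient "truth" against which the analytical box algorithm can be compared.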
As the tracking systems improve and the orbits of catalogued objects become known more precisely, the importance of considering the actual shapes of the objects will become more relevant. The particular case of a very large system (a tethered satellite) is analysed in section 5.4. Additionally, if the two colliding objects have low relative velocity (and simple geometries, case C in Figure 2), the most common collision risk algorithms fail and adequate theories need to be applied. In this document, a low-relative-velocity algorithm presented in the literature [Patera, 2001]R.26 is described and evaluated (section 4.5); an evaluation through comparison with a Monte Carlo approach is provided in section 4.5.2. The main conclusion of this analysis is the suitability of this algorithm for the most common encounter characteristics, and it is thus selected as adequate for collision risk estimation. Its performance is evaluated in order to characterise when it can be safely used for a large variety of encounter characteristics. In particular, it is found that the need for dedicated algorithms depends on both the size of the collision volume in the B-plane and the miss-distance uncertainty. For large uncertainties the need for such algorithms is more relevant, since for small uncertainties the encounter duration during which the covariance ellipsoids intersect is shorter. Additionally, its application to the case of complex satellite geometries is assessed (case D in Figure 2) by integrating the algorithm developed in this thesis with Patera's formulation for low-relative-velocity encounters. The results of this analysis show that the algorithm can easily be extended into a collision risk estimation process suitable for complex geometry objects (section 4.5.3). The two algorithms, together with the Monte Carlo method, have been implemented in the operational tool CORAM for ESA, which is used for the evaluation of the collision risk of ESA-operated missions [Sánchez-Ortiz, 2013a]T.11.
This fact shows the interest and relevance of the developed algorithms for the improvement of satellite operations. The algorithms have been presented at several international conferences [Sánchez-Ortiz, 2013b]T.9, [Pulido, 2014]T.7, [Grande-Olalla, 2013]T.10, [Pulido, 2014]T.5, [Sánchez-Ortiz, 2015c]T.1.

Resumo:

This article tests a multidimensional model of the marketing and sales organizational interface, based on one previously tested for European companies (Homburg et al., 2008), in a specific taxonomical configuration: a brand-focused professional multinational, across three successful Latin American branches. Factor reliability and the hypotheses were studied through confirmatory factor analysis. Results show a positive relationship between formalization, joint planning, teamwork, information sharing, trust and interface quality. Interface quality and business performance also show a positive relationship. This empirical study contributes to knowledge on the organizational enhancement of interactions in emerging markets.

Resumo:

Emotion is generally argued to influence the behavior of life systems, largely concerning flexibility and adaptivity. The way in which life systems act in response to particular situations in the environment has revealed the decisive, crucial importance of this feature in the success of behaviors, and this source of inspiration has influenced the way artificial systems are conceived. During the last decades, artificial systems have undergone such an evolution that every day more of them are integrated into our daily life. They have grown in complexity, and the subsequent effect is an increased demand for systems that ensure resilience, robustness, availability, security or safety, among others. All of these are questions that raise quite fundamental challenges in control design. This thesis has been developed within the framework of the Autonomous Systems project, a.k.a. the ASys-Project. Short-term objectives of immediate application focus on designing improved systems and approaching intelligence in control strategies. Besides this, the long-term objectives underlying the ASys-Project concentrate on higher-order capabilities such as cognition, awareness and autonomy. This thesis is placed within the general fields of engineering and emotion science, and provides a theoretical foundation for engineering and designing computational emotion for artificial systems. The starting question that has grounded this thesis addresses the problem of emotion-based autonomy, and how to feed systems back with valuable meaning has conformed the general objective. Both the starting question and the general objective have underlain the study of emotion: its influence on system behavior, the key foundations that justify this feature in life systems, how emotion is integrated within normal operation, and how this entire problem of emotion can be explained in artificial systems.
By assuming essential differences in structure, purpose and operation between life systems and artificial systems, the essential motivation has been to explore what emotion solves in nature, in order to then analyze analogies for man-made systems. This work provides a reference model in which a collection of entities, relationships, models, functions and informational artifacts all interact to provide the system with non-explicit knowledge in the form of emotion-like relevances. This solution aims to provide a reference model under which to design solutions for emotional operation, related to the real needs of artificial systems. The proposal consists of a multi-purpose architecture that implements two broad modules in order to attend to: (a) the range of processes related to environment affectation, and (b) the range of processes related to emotion-like perception and the higher levels of reasoning. This has required an intense and critical analysis, beyond the state of the art, of the most relevant theories of emotion and of technical systems, in order to obtain the required support for the foundations that sustain each model. The problem has been interpreted and is described on the basis of AGSys, an agent assumed to have the minimum rationality needed to perform emotional assessment. AGSys is a conceptualization of a model-based cognitive agent that embodies an inner agent, ESys, responsible for performing the emotional operation inside AGSys. The solution consists of multiple computational modules working in federation, aimed at forming a mutual feedback loop between AGSys and ESys. Throughout this solution, the environment and the effects that might influence the system are described as different problems. While AGSys operates as a common system within the external environment, ESys is designed to operate within a conceptualized inner environment.
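The AGSys/ESys mutual feedback loop described above can be caricatured in a few lines. This is purely an illustrative sketch: the names AGSys and ESys come from the abstract, but every attribute, method and the toy appraisal rule below are assumptions of this example, not the thesis' actual design.

```python
class ESys:
    """Inner agent: appraises relevances arising in AGSys' inner environment."""
    def __init__(self):
        # One emotional goal with a situational stability value (assumption).
        self.emotional_goals = {"integrity": 1.0}

    def appraise(self, relevance):
        # Toy appraisal: a negative relevance degrades goal stability.
        for goal in self.emotional_goals:
            self.emotional_goals[goal] = max(
                0.0, self.emotional_goals[goal] + relevance)
        # Emotion-like feedback: the worst goal stability.
        return min(self.emotional_goals.values())


class AGSys:
    """Outer agent: interacts with the external environment and feeds
    relevances to its embodied ESys, closing the mutual feedback loop."""
    def __init__(self):
        self.esys = ESys()
        self.stability = 1.0

    def step(self, observation):
        # Map an external observation to an inner relevance (assumption).
        relevance = -0.2 if observation == "threat" else 0.05
        self.stability = self.esys.appraise(relevance)
        # Mission-goal reasoning modulated by emotional-goal stability.
        return "avoid" if self.stability < 0.7 else "pursue"


agent = AGSys()
actions = [agent.step(obs) for obs in ["ok", "threat", "threat", "ok"]]
```

The point of the separation is visible even in this caricature: mission-goal reasoning in AGSys never inspects raw observations directly, only the stability signal produced by ESys' appraisal of the inner environment.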
This inner environment is built on the basis of those relevances that might occur inside AGSys in its interaction with the external environment. This allows for high-quality separate reasoning concerning mission goals, defined in AGSys, and emotional goals, defined in ESys. In this way, a possible path is provided for high-level reasoning under the influence of goal congruence. The high-level reasoning model uses knowledge about the stability of emotional goals, opening new directions in which mission goals might be assessed under the situational state of this stability. This high-level reasoning is grounded in the work of MEP, a model of emotion perception conceived as an analogy of a well-known theory in emotion science. The operation of this model is described as a recursive process labeled the R-Loop, together with a system of emotional goals that are treated as individual agents. In this way, AGSys integrates knowledge concerning the relation between a perceived object and the effect this perception induces on the situational state of the emotional goals. This knowledge enables a higher-order system of information that sustains high-level reasoning. The extent to which this reasoning might be taken is only delineated here and left as future work. This thesis has drawn on a wide range of fields of knowledge, which can be structured around two main objectives: (a) psychology, cognitive science, neurology and the biological sciences, to obtain an understanding of the problem of emotional phenomena, and (b) a large number of computer science branches, such as Autonomic Computing (AC), self-adaptive software, self-X systems, Model Integrated Computing (MIC) or the models@runtime paradigm, among others, to obtain knowledge about tools for designing each part of the solution.
The final approach has been performed mainly on the basis of the entire acquired knowledge, and is described in terms of Artificial Intelligence and Model-Based Systems (MBS), with additional mathematical formalizations providing precise understanding where required. This approach describes a reference model to feed systems back with valuable meaning, allowing reasoning with regard to (a) the relationship between the environment and the relevance of its effects on the system, and (b) dynamic evaluations of the inner situational state of the system as a result of those effects. This reasoning provides a framework of distinguishable states of AGSys, derived from its own circumstances, that can be taken to constitute artificial emotion.