946 results for System model


Relevance:

60.00%

Publisher:

Abstract:

The main feature of Natural Computing is the use of concepts, principles and mechanisms inspired by Nature. Natural Computing, and within it Membrane Computing, has emerged as a potential alternative to conventional computing, the result of the search for new models of computation that may overcome the limitations of the conventional ones. Specifically, Membrane Computing originated as an attempt to formulate a new computational model inspired by the structure and functioning of biological cells: systems based on this model consist of a membrane structure, whose membranes act both as separators and as communication channels, and within this structure multisets of objects evolve according to given evolution rules. The computing devices addressed by Membrane Computing are generically known as P systems. Up to now, P systems have only been studied theoretically; they have not been fully implemented in either electronic or biochemical media, only simulated or partially implemented. The implementation of these systems is therefore an open research challenge. This thesis addresses one of the problems that must be solved to deploy P systems on hardware platforms. The specific problem concerns the Transition P System model and arises from the need for rule application algorithms that, regardless of the hardware platform on which they are implemented, meet the requirements of being nondeterministic, massively parallel and statically bounded in execution time. As a result, a set of algorithms has been obtained, for both sequential and parallel platforms, suited to the different configurations of P systems.
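As an illustration of the rule-application requirement, the following minimal Python sketch performs one nondeterministic, maximally parallel computation step in a single membrane. It is not one of the thesis's algorithms (in particular, its execution time is not statically bounded), and the multiset and rule encodings are assumptions made for this example.

```python
# Minimal sketch: one nondeterministic, maximally parallel step in a
# single membrane of a Transition P system. Rules and objects are toys.
import random
from collections import Counter

def applicable(rule, pool):
    """A rule (consume, produce) is applicable if the pool covers 'consume'."""
    consume, _ = rule
    return all(pool[obj] >= n for obj, n in consume.items())

def max_parallel_step(multiset, rules):
    """Nondeterministically assign objects to rules until none applies,
    then add all produced objects at once (maximal parallelism)."""
    pool = Counter(multiset)          # objects still unassigned
    produced = Counter()
    candidates = [r for r in rules if applicable(r, pool)]
    while candidates:
        consume, produce = random.choice(candidates)   # nondeterminism
        pool.subtract(consume)
        produced.update(produce)
        candidates = [r for r in rules if applicable(r, pool)]
    pool.update(produced)             # apply all rule instances together
    return pool

# Example: rules a -> bb and ab -> c over the multiset {a:3, b:1}
rules = [({'a': 1}, {'b': 2}), ({'a': 1, 'b': 1}, {'c': 1})]
print(max_parallel_step({'a': 3, 'b': 1}, rules))
```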

Relevance:

60.00%

Publisher:

Abstract:

The web we know today is based on documents and hypertext links that relate these documents to one another without providing true information about the contents they represent. It could be said that it is a network designed by people to be interpreted by people. The main goal of recent years has been to steer this network towards a web of knowledge, in which information can be automatically processed by machines. This transformation requires new technologies specially designed for content description, such as ontologies. Conventional networks, however, are not the only ones evolving: the rapid growth of sensor networks and the marked increase in the number of devices connected to the Internet make the integration of semantic web technologies into this kind of network necessary. This Final Degree Dissertation uses the SSN ontology, designed for the semantic description of sensors and the networks they are part of, in order to allow better interaction between the devices and the systems that use them. The work developed throughout the Dissertation revolves around this ontology, its main objective being the semiautomatic generation of code from a system model described in terms of the classes and properties provided by SSN. To reach this goal, the Dissertation is divided into several parts. First, an analysis of the aforementioned ontology is carried out. Next, a simulated sensor system is described. Finally, the applications are implemented: one automatically generates the interfaces and the other graphically represents the devices in the sensor system, both starting from the representation of the system in an OWL file.
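By way of illustration, the sketch below uses the rdflib library to load an OWL system model and enumerate the sensors it declares, the kind of traversal on which the code generation described above could rest. The file name and the SSNX namespace URI are assumptions; the Dissertation's actual generators are not reproduced here.

```python
# Minimal sketch: read an OWL/RDF system model and enumerate SSN sensors.
# "model.owl" and the SSNX namespace URI are assumptions for illustration.
from rdflib import Graph, Namespace, RDF

SSN = Namespace("http://purl.oclc.org/NET/ssnx/ssn#")

g = Graph()
g.parse("model.owl")                      # rdflib guesses the RDF format

for sensor in g.subjects(RDF.type, SSN.Sensor):
    print("sensor:", sensor)
    # each ssn:observes property would drive interface generation
    for prop in g.objects(sensor, SSN.observes):
        print("  observes:", prop)
```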

Relevance:

60.00%

Publisher:

Abstract:

Current solutions to the interoperability problem in Home Automation systems are based on a priori agreements whereby protocols are standardized and later integrated through specific gateways. In this regard, spontaneous interoperability, or the ability to integrate new devices into the system with minimal advance planning, is still considered a major challenge that requires new models of connectivity. In this paper we present an ontology-driven communication architecture whose main contribution is that it facilitates spontaneous interoperability at the system model level by means of semantic integration. The architecture has been validated through a prototype, and the main challenges on the way to complete spontaneous interoperability are also evaluated.
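A hypothetical sketch of the idea, not the paper's architecture: a newly arrived device contributes its own RDF self-description to the system model, and a SPARQL query then discovers its capabilities without any pre-agreed gateway. All names and files below are invented for illustration.

```python
# Hypothetical "spontaneous" integration: merge a device's shipped RDF
# self-description into the home's system model, then query it.
from rdflib import Graph

home = Graph()
home.parse("home_model.ttl")        # existing system model (assumed file)
home.parse("new_device.ttl")        # description shipped by the device

# find any device that claims a dimming capability, however it arrived
q = """
PREFIX ha: <http://example.org/home#>
SELECT ?dev WHERE { ?dev a ha:Device ; ha:hasCapability ha:Dimming . }
"""
for row in home.query(q):
    print("integrated device:", row.dev)
```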

Relevance:

60.00%

Publisher:

Abstract:

Distributed computing models typically assume that every process in the system has a distinct identifier (ID) or that each process is programmed differently; such a system is called eponymous. In this kind of distributed system, unique IDs help to solve problems: they can be incorporated into messages to make them traceable (i.e., to identify which process a message is sent to or from), which facilitates message transmission; several problems (leader election, consensus, etc.) can be solved without a priori knowledge of network properties if processes have unique IDs; the register of one process will not be overwritten by another process; and IDs are useful for breaking symmetry. Hence, eponymous systems have significantly influenced the distributed computing community, both in theory and in practice. However, unique IDs also have disadvantages: they can leak information about the network (e.g., its size); processes in the system have no privacy; and assigning unique IDs is costly in bulk production (e.g., of sensors). Hence homonymous systems appeared: a system in which some processes may share the same ID and be programmed identically is called homonymous. Furthermore, a system in which all processes share the same ID, or have no ID at all, is called anonymous. In homonymous or anonymous distributed systems, the symmetry problem (i.e., how to determine which process a message was sent from) is the main obstacle in the design of algorithms. This thesis proposes different symmetry-breaking methods (e.g., random functions, counting techniques, etc.) to solve agreement problems. Agreement is a fundamental problem in distributed computing comprising a family of abstractions; this thesis focuses on the design of consensus, set agreement and broadcast algorithms in anonymous and homonymous distributed systems. First, the fault-tolerant broadcast abstraction is studied in anonymous systems with reliable and with fair lossy communication channels. Two classes of anonymous failure detectors, AΘ and AP∗, are proposed; both of them, together with the previously proposed failure detector ψ, are implemented and used to enrich the system model in order to implement the broadcast abstraction. Then, in the study of the consensus abstraction, it is proved that the failure detector class AΩ′ is strictly weaker than AΩ and that AΩ′ is implementable; the first implementation of consensus in anonymous asynchronous distributed systems augmented with AΩ′, and in which a majority of processes does not crash, is given. Finally, a generalization of consensus, k-set agreement, is studied, together with the weakest failure detector L that solves it in asynchronous message-passing systems where processes may crash and recover, with homonyms (i.e., processes may have equal identities) and without complete initial knowledge of the membership.
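To illustrate symmetry breaking by a random function, one of the methods mentioned above, the toy simulation below has anonymous processes repeatedly draw random values until exactly one of them holds the maximum. It illustrates only the principle; the thesis's failure-detector-based algorithms are not reproduced.

```python
# Toy simulation of randomized symmetry breaking among anonymous
# processes: all run the same code, and a round ends the symmetry when
# exactly one process draws the maximum value. Indices exist only in
# the simulation; the processes themselves have no identifiers.
import random

def break_symmetry(n, rng=random.Random(42)):
    rounds = 0
    while True:
        rounds += 1
        draws = [rng.randrange(n * n) for _ in range(n)]  # one per process
        best = max(draws)
        if draws.count(best) == 1:        # unique maximum: symmetry broken
            return rounds, draws.index(best)

rounds, winner = break_symmetry(8)
print(f"symmetry broken after {rounds} round(s); winner is process {winner}")
```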

Relevance:

60.00%

Publisher:

Abstract:

Solar variability represents a source of uncertainty in the future forcings used in climate model simulations. Current knowledge indicates that a descent of solar activity into an extended minimum state is a possible scenario. With the aid of experiments from a state-of-the-art Earth system model, we investigate the impact of a future solar minimum on Northern Hemisphere climate change projections. This scenario is constructed from recent 11-year solar-cycle minima of the solar spectral irradiance, and is therefore more conservative than the 'grand' minima employed in some previous modeling studies. Despite the small reduction in total solar irradiance (0.36 W m^-2), relatively large responses emerge in the winter Northern Hemisphere, with a reduction in regional-scale projected warming by up to 40%. To identify the origin of the enhanced regional signals, we assess the role of the different mechanisms by performing additional experiments forced only by irradiance changes at different wavelengths of the solar spectrum. We find that a reduction in visible irradiance drives changes in the stationary wave pattern of the North Pacific and in sea-ice cover. A decrease in UV irradiance leads to smaller surface signals, although its regional effects are not negligible. These results point to a distinct but additive role of UV and visible irradiance in the Earth's climate, and stress the need to account for solar forcing as a source of uncertainty in regional-scale projections.

Relevance:

60.00%

Publisher:

Abstract:

This work proposes two methods for testing software systems: the first extracts test ideas from a model developed as a hierarchical Petri net, and the second validates the results after the tests have been run, using an OWL-S model. These processes increase the quality of the developed system by reducing the risk of insufficient coverage or incomplete testing of a feature. The first technique consists of five steps: i) evaluating the system and identifying its modules and separable entities; ii) surveying the states and transitions; iii) modelling the system (bottom-up); iv) validating the created model by evaluating the flow of each feature; and v) extracting the test cases using one of the three test coverages presented. The second method is applied after the tests have been run and has five steps: i) first, an OWL (Web Ontology Language) model of the system is built, containing all significant information about the application's business rules and identifying the classes, properties and axioms that govern it; ii) next, the initial status before execution is represented in the model by inserting the instances (individuals) present; iii) after the test cases have been executed, the state of the model is updated by inserting (without deleting the existing instances) the instances that represent the new state of the application; iv) the next step uses a reasoner to draw inferences over the OWL model, checking whether the model remains consistent, i.e., whether there are no errors in the application; v) finally, the instances of the initial status are compared with those of the final status, verifying whether the elements were changed, created or deleted correctly. The proposed process is indicated mainly for black-box functional testing, but it can easily be adapted to white-box testing. The test cases obtained were similar to those that would be produced by a manual analysis, maintaining the same coverage of the system. The validation proved consistent with the expected results, and the ontological model proved easy and intuitive to maintain.
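Step iv) of the second method can be pictured with the owlready2 library, which runs the HermiT reasoner over an OWL model and signals inconsistency. This is a minimal sketch under assumed names ("app_model.owl"); the work does not prescribe this particular toolkit.

```python
# Minimal sketch of step iv): run a reasoner over the OWL model and
# check consistency. Uses owlready2 (which drives HermiT, requires Java);
# "app_model.owl" is an assumed file holding the application's model.
import os
from owlready2 import get_ontology, sync_reasoner, default_world
from owlready2 import OwlReadyInconsistentOntologyError

onto = get_ontology("file://" + os.path.abspath("app_model.owl")).load()

try:
    with onto:
        sync_reasoner()        # classify the model and check consistency
    print("model consistent: no business-rule violations detected")
except OwlReadyInconsistentOntologyError:
    print("inconsistent model: the new application state breaks a rule")
    print(list(default_world.inconsistent_classes()))
```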

Relevance:

60.00%

Publisher:

Abstract:

The General Health Law 14/1986 of 25 April enabled the transition from the old Social Security model to the current National Health System (SNS) model, financed through taxes and with practically universal coverage. Since then, profound changes have taken place in the system, culminating in 2002 with the full devolution of health competences to the Autonomous Communities. National regulation of health competences is carried out by the Interterritorial Council of the National Health System, a body that brings together the most senior regional health officials of each Autonomous Community and whose responsibilities include preventing inequalities in health services within the national territory. The creation and competences of the Interterritorial Council are set out in Law 16/2003 of 28 May on the Cohesion and Quality of the National Health System. The portfolio of common SNS services is established in Royal Decree 1030/2006 of 15 September, updating Royal Decree 63/1995 of 20 January on the organization of health services. The current legislative framework, with devolved competences and budget management, opens a horizon of possible variability in the management models of each Autonomous Community which, while required to guarantee universal provision, also allows a diversity of ways of managing health resources. Regarding the health status of Spaniards, life expectancy at birth stands at 79.9 years, above the European average of 78.3 years, and disability-adjusted life expectancy in 2002 was 72.6 years in Spain compared with 70.8 in the EU. According to the Ministry of Health's own figures, perceived health was positive for 73% of men and 63.2% of women. Around 60% of the population has a normal weight, and the leading causes of morbidity are diseases of the circulatory system, cancer and diseases of the respiratory system (ICD-9). Health expenditure in Spain is a significant budget item, at around 7.5% of GDP, and resources and investment show apparent regional inequalities. The management models and ownership of resources, which vary between Autonomous Communities, raise the need for monitoring that makes it possible to assess, over the next ten years, the impact of the devolution of competences on the System. The System has two main levels of care, primary and specialized, with specialized care absorbing most of the budget. The growth of health expenditure and the universality of provision have largely driven the introduction of management models different from the traditional ones. This situation is not exclusive to Spain: at its session of 1 and 2 June 2006, the EU Council of Health Ministers concluded a document setting out the common values and principles of the health systems of the European Union countries, highlighting these principles and values as the structural underpinning of those states. In conclusion, at this time (2007) the Spanish National Health System is immersed in a transformation process aimed at guaranteeing the efficiency of its services in a responsible way, that is, offering citizens the best quality of service at the minimum cost.

Relevance:

60.00%

Publisher:

Abstract:

To address the connection between tropical African vegetation development and high-latitude climate change we present a high-resolution pollen record from ODP Site 1078 (off Angola) covering the period 50-10 ka BP. Although several tropical African vegetation and climate reconstructions indicate an impact of Heinrich Stadials (HSs) in Southern Hemisphere Africa, our vegetation record shows no response. Model simulations conducted with an Earth System Model of Intermediate Complexity including a dynamical vegetation component provide one possible explanation. Because both precipitation and evaporation increased during HSs and their effects nearly cancelled each other, there was a negligible change in moisture supply. Consequently, the resulting climatic response to HSs might have been too weak to noticeably affect the vegetation composition in the study area. Our results also show that the response to HSs in southern tropical Africa neither equals nor mirrors the response to abrupt climate change in northern Africa.

Relevance:

60.00%

Publisher:

Abstract:

Kalman inverse filtering is used to develop a methodology for real-time estimation of the forces acting at the interface between tyre and road on large off-highway mining trucks. The system model formulated is capable of estimating the three components of tyre-force at each wheel of the truck using a practical set of measurements and inputs. The estimated tyre-forces track well those simulated by an ADAMS virtual-truck model. A sensitivity analysis determines the susceptibility of the tyre-force estimates to uncertainties in the truck's parameters.
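As a generic reference for the filtering machinery involved, the sketch below implements one predict/update cycle of a linear Kalman filter. The matrices are placeholders: the paper's actual system model, which estimates three tyre-force components per wheel from practical truck measurements, is not reproduced here.

```python
# Generic linear Kalman filter step, as a stand-in for the tyre-force
# inverse filter described above. A, H, Q, R are placeholder matrices.
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # predict the state (here it would be the tyre forces) forward
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # correct with the measurement z (e.g., pressures, accelerations)
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy 2-state, 1-measurement example
x, P = np.zeros(2), np.eye(2)
A, H = np.eye(2), np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
x, P = kalman_step(x, P, np.array([0.5]), A, H, Q, R)
print(x)
```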

Relevance:

60.00%

Publisher:

Abstract:

Load-induced extravascular fluid flow has been postulated to play a role in the mechanotransduction of physiological loads at the cellular level. Furthermore, the displaced fluid serves as a carrier for metabolites, nutrients, mineral precursors and osteotropic agents important for cellular activity. We hypothesise that load-induced fluid flow enhances the transport of these key substances, thus helping to regulate the cellular activity associated with processes of functional adaptation and remodelling. To test this hypothesis, molecular tracer methods developed previously by our group were applied in vivo to observe and quantify the effects of load-induced fluid flow under four-point-bending loads. Preterminal tracer transport studies were carried out on 24 skeletally mature Sprague Dawley rats. Mechanical loading enhanced the transport of both small- and larger-molecular-mass tracers within the bony tissue of the tibial mid-diaphysis, and showed a highly significant effect on the number of periosteocytic spaces exhibiting tracer within the cross section of each bone. For all loading rates studied, the concentration of Procion Red tracer was consistently higher in the tibia subjected to pure bending loads than in the unloaded, contralateral tibia. Furthermore, the enhancement of transport was highly site-specific. In bones subjected to pure bending loads, a greater number of periosteocytic spaces exhibited the presence of tracer in the tension band of the cross section than in the compression band; this may reflect the higher strains induced in the tension band compared with the compression band within the mid-diaphysis of the rat tibia. Regardless of loading mode, the mean difference between the loaded side and the unloaded contralateral control side decreased with increasing loading frequency. Whether this reflects the length of exposure to the tracer or specific frequency effects cannot be determined by this set of experiments. These in vivo experimental results corroborate those of previous ex vivo and in vitro studies. Strain-related differences in tracer distribution provide support for the hypothesis that load-induced fluid flow plays a regulatory role in processes associated with functional adaptation.

Relevance:

60.00%

Publisher:

Abstract:

This paper presents a method to analyze the first-order eigenvalue sensitivity with respect to the operating parameters of a power system. The method is based on explicitly expressing the system state matrix in terms of sub-matrices; the eigenvalue sensitivity is then calculated from the explicitly formed state matrix. A 4th-order generator model and a 4th-order exciter system model are used to form the system state matrix. A case study using the New England 10-machine 39-bus system demonstrates the effectiveness of the proposed method, which can be applied to the eigenvalue sensitivity of large-scale power systems with respect to operating parameters.
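For reference, the standard first-order sensitivity of a simple eigenvalue λi of a state matrix A(p), with right eigenvector φi and left eigenvector ψi, is dλi/dp = ψiᴴ (dA/dp) φi / (ψiᴴ φi). The sketch below evaluates this on a toy matrix; the paper's explicit construction of A from the generator and exciter sub-matrices is not reproduced here.

```python
# Numerical evaluation of first-order eigenvalue sensitivity
#   dλ_i/dp = ψ_iᴴ (dA/dp) φ_i / (ψ_iᴴ φ_i)
# on a toy parameter-dependent state matrix A(p) = A0 + p * A1.
import numpy as np
from scipy.linalg import eig

def eigen_sensitivities(A, dA_dp):
    """First-order sensitivities dλ_i/dp for all simple eigenvalues of A."""
    w, vl, vr = eig(A, left=True, right=True)
    sens = []
    for i in range(len(w)):
        psi, phi = vl[:, i], vr[:, i]        # left / right eigenvectors
        sens.append(psi.conj() @ dA_dp @ phi / (psi.conj() @ phi))
    return w, np.array(sens)

A0 = np.array([[0.0, 1.0], [-2.0, -0.5]])    # toy state matrix at p = 0
A1 = np.array([[0.0, 0.0], [-1.0, 0.0]])     # dA/dp
w, s = eigen_sensitivities(A0, A1)
for lam, ds in zip(w, s):
    print(f"λ = {lam:.3f}, dλ/dp ≈ {ds:.3f}")
```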

Relevance:

60.00%

Publisher:

Abstract:

Reducing fossil fuel consumption and developing energy-saving technologies are issues of central importance for both industry and research, owing to the drastic effects that anthropogenic pollutant emissions are having on the environment. While a growing number of standards and regulations are issued to tackle these problems, the need to develop low-emission technologies is driving research in many industrial sectors. Although the deployment of renewable energy sources is seen as the most promising long-term solution, an effective and complete integration of such technologies is currently impracticable, both because of technical constraints and because of the sheer share of the energy demand, currently met by fossil sources, that the alternative technologies would have to cover. Optimizing energy production and management, on the other hand, combined with the development of technologies for reducing energy consumption, represents an adequate solution to the problem, and one that can be deployed over shorter time horizons. The objective of this thesis is to investigate, develop and apply a set of numerical tools for optimizing the design and management of energy processes, to be used to reduce fuel consumption and optimize energy efficiency. The methodology developed rests on a model-based approach that exploits the predictive capabilities derived from a mathematical representation of the processes to develop optimization strategies for them under realistic operating conditions. In developing these procedures, particular emphasis is placed on the need to derive correct management strategies that account for the dynamics of the plants analysed, in order to obtain the best performance during actual operation. In this thesis the energy optimization problem is addressed with reference to three different technological applications. In the first, a multi-source plant for meeting the energy demand of a commercial building is considered. Since this system uses several technologies to produce the thermal and electrical energy required by the users, the correct load-allocation strategy, capable of guaranteeing the plant's maximum energy efficiency, must be identified. Based on a simplified model of the plant, the problem was solved by applying a deterministic Dynamic Programming algorithm, and the results were compared with those obtained with a simpler rule-based strategy, thereby demonstrating the advantages of adopting an optimal control strategy. The second application investigates the design of a hybrid solution for energy recovery from a hydraulic excavator. Since several technological layouts can be conceived to implement this solution, and the introduction of additional components requires correct sizing, a methodology is needed to evaluate the maximum performance obtainable from each of the alternatives. The comparison between the different layouts was therefore carried out on the basis of the machine's energy performance during a standardized digging cycle, estimated with the aid of a detailed model of the plant. Since the addition of energy-recovery devices introduces additional degrees of freedom into the system, their optimal control strategy also had to be determined in order to evaluate the maximum performance obtainable from each layout. This problem was again solved with a Dynamic Programming algorithm exploiting a simplified model of the system devised for the purpose. Once the optimal performance of each design solution had been determined, a fair comparison between the alternatives could be made. The third and final application analyses an organic Rankine cycle (ORC) plant for recovering waste heat from the exhaust gases of passenger cars. Although ORC plants can potentially produce significant increases in a vehicle's fuel savings, their correct operation requires complex control strategies able to cope with the variability of the heat source; moreover, while fuel savings are maximized, the system must be kept in safe operating conditions. To address the problem, a robust and effective model of the plant was built, based on the Moving Boundary Methodology, to simulate the phase-change dynamics of the organic fluid and estimate the plant's performance. This model was then used to design a model predictive controller (MPC) able to estimate the optimal control parameters for managing the system during transient operation. To solve the corresponding nonlinear dynamic optimization problem, an algorithm based on Particle Swarm Optimization was developed. The results obtained with this controller were compared with those obtainable with a classical proportional-integral (PI) controller, again showing the advantages, from an energy standpoint, of adopting an optimal control strategy.
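The flavour of the deterministic Dynamic Programming used in the first application can be conveyed with a toy backward recursion over a storage state, an hourly demand profile and a quadratic fuel curve, all invented for illustration; the thesis's plant models are far richer.

```python
# Toy deterministic Dynamic Programming for load management: choose,
# hour by hour, the next storage level so that generator fuel use is
# minimized over the horizon. Profile, fuel curve and sizes are invented.
import numpy as np

demand = [40.0, 90.0, 60.0]            # kW demand per hour (toy profile)
levels = np.arange(0.0, 51.0, 10.0)    # feasible storage states [kWh]

def fuel(p):                           # fuel cost of generator output p
    return 0.01 * p**2 + 2.0 * p

V = {s: 0.0 for s in levels}           # terminal cost-to-go
for d in reversed(demand):             # backward DP over the stages
    V_new = {}
    for s in levels:
        best = float("inf")
        for s_next in levels:          # decision: next storage level
            gen = d + (s_next - s)     # generator covers demand + charging
            if gen >= 0.0:
                best = min(best, fuel(gen) + V[s_next])
        V_new[s] = best
    V = V_new

print(f"minimum fuel cost starting from 20 kWh stored: {V[20.0]:.1f}")
```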

Relevance:

60.00%

Publisher:

Abstract:

Despite extensive progress on the theoretical aspects of spectrally efficient communication systems, hardware impairments, such as phase noise, are the key bottlenecks in next generation wireless communication systems. The presence of non-ideal oscillators at the transceiver introduces time-varying phase noise and degrades the performance of the communication system. A significant body of research focuses on joint synchronization and decoding based on the joint posterior distribution, which incorporates both the channel and the code graph. These joint synchronization and decoding approaches rely on well-designed sum-product algorithms, in which probabilistic messages are calculated and iteratively passed between the channel statistical information and the decoding information. Channel statistical information generally entails a high computational complexity because its probabilistic model may involve continuous random variables, and the detailed knowledge of the channel statistics that these algorithms require makes them an inadequate choice for real-world applications with power and computational limitations. In this thesis, novel phase estimation strategies are proposed: soft decision-directed iterative receivers that perform A Posteriori Probability (APP)-based synchronization and decoding separately. These algorithms do not require any a priori statistical characterization of the phase noise process. The proposed approach relies on a Maximum A Posteriori (MAP)-based algorithm to perform phase noise estimation and does not depend on the modulation/coding scheme considered, as it only exploits the APPs of the transmitted symbols. Different variants of APP-based phase estimation are considered. The proposed algorithm has significantly lower computational complexity than joint synchronization/decoding approaches, at the cost of a slight performance degradation. To improve the robustness of the iterative receiver, we derive a new system model for an oversampled (more than one sample per symbol interval) phase noise channel, and extend the separate APP-based synchronization and decoding algorithm to a multi-sample receiver, which exploits the information received from the channel in an iterative fashion to achieve robust convergence. Two algorithms based on sliding block-wise processing with soft ISI cancellation and detection are proposed, which use reliable information from the channel decoder. Dually polarized systems provide a cost- and space-effective solution to increase spectral efficiency and are competitive candidates for next generation wireless communication systems. For these systems, a novel soft decision-directed iterative receiver for separate APP-based synchronization and decoding is proposed. This algorithm relies on a Minimum Mean Square Error (MMSE)-based cancellation of the cross-polarization interference (XPI) followed by phase estimation on the polarization of interest. The receiver structure is motivated by Master/Slave Phase Estimation (M/S-PE), where M-PE corresponds to the polarization of interest. The operating principle of an M/S-PE block is to improve the phase tracking performance of both polarization branches: more precisely, the M-PE block tracks the co-polar phase and the S-PE block reduces the residual phase error on the cross-polar branch. Two variants of MMSE-based phase estimation are considered: BW and PLP.
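The decision-directed principle underlying the proposed estimators can be illustrated with a toy QPSK example: the angle of the sum over k of r_k d_k* estimates a common phase offset from received samples r_k and symbol decisions d_k. The sketch below uses hard decisions and a static offset, and omits the APP/MAP machinery and the receiver iterations described above.

```python
# Toy decision-directed phase estimation for QPSK: estimate a common
# phase offset as the angle of sum(r * conj(decisions)). Hard decisions
# stand in for the soft APPs used by the thesis's receivers.
import numpy as np

rng = np.random.default_rng(0)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))  # constellation

tx = rng.choice(qpsk, size=256)                 # transmitted symbols
phase = 0.3                                     # unknown offset (< pi/4)
noise = 0.05 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
r = tx * np.exp(1j * phase) + noise             # received samples

# hard decisions (offset small enough to keep symbols in their quadrant)
d = qpsk[np.argmin(np.abs(r[:, None] - qpsk[None, :]), axis=1)]
phase_hat = np.angle(np.sum(r * d.conj()))      # decision-directed estimate
print(f"true {phase:.3f} rad, estimated {phase_hat:.3f} rad")
```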

Relevance:

60.00%

Publisher:

Abstract:

Technological innovation has been widely studied; however, surprisingly little is known about the experience of managing the process. Most reports tend to be generalistic and/or prescriptive, whereas it is argued that multiple sources of variation in the process limit the value of such accounts. A description of the innovation process is given, together with a presentation of what is known from existing studies. Gaps identified in this area suggest that a variety of organisational influences are important, and an attempt is made to identify some of these at the individual, group and organisational levels. A simple system model of the innovation management process is developed. Further investigation of the influence of these factors was made possible through an extended on-site case study. The methodology for this, based upon participant observation coupled with a wide and flexible range of techniques, is described. Evidence is presented about many aspects of the innovation process from a number of different levels and perspectives; the aim is to demonstrate the extent to which variation due to contingent influences takes place. It is argued that the problems identified all relate to the issue of integration. This theme is also developed from an analytical viewpoint, and it is suggested that the organisational response to increases in complexity in the external environment will be to match them with internal complexity. Differentiation of this kind will require extensive and flexible integration, especially in those inherently uncertain areas associated with innovation. Whilst integration is traditionally a function of management, it is argued that integration needs have increased to the point where a new specialism is required. The concept of an integration specialist is developed from this analysis, and attempts at simple integrative change during the research are described. Finally, a strategy for integration, or rather for building integrative capability into the organisation studied, is described.