989 results for Solution-processing
Abstract:
Biodegradable nanoparticles are at the forefront of drug delivery research, as they provide numerous advantages over traditional drug delivery methods. An important factor affecting the ability of nanoparticles to circulate within the bloodstream and interact with cells is their morphology. In this study, a novel processing method, confined impinging jet mixing, was used to form poly(lactic acid) nanoparticles through a solvent-diffusion process, with Pluronic F-127 used as a stabilizing agent. This study focused on the effects of Reynolds number (flow rate), surfactant presence during mixing, and polymer concentration on the morphology of poly(lactic acid) nanoparticles. In addition to examining the parameters affecting poly(lactic acid) morphology, this study attempted to improve nanoparticle isolation and purification methods to increase nanoparticle yield and to ensure that specific morphologies were not being excluded during isolation and purification. The isolation and purification methods used in this study were centrifugation and a stir cell. This study successfully produced particles with pyramidal and cubic morphologies. Despite the successful production of these morphologies, the yield of non-spherical particles was very low, and great variability existed between replicate trials. Surfactant was determined to be very important for the stabilization of nanoparticles in solution but appears to be unnecessary for the formation of nanoparticles. Isolation and purification methods that produce a high yield of surfactant-free particles have still not been perfected, and additional testing will be necessary for improvement.
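Note (illustrative sketch only): the abstract treats Reynolds number as a proxy for flow rate in the confined impinging jet mixer but gives no geometry or fluid values. The short Python sketch below shows the standard jet Reynolds number relation Re = rho*v*d/mu for one inlet stream; the flow rate, jet diameter, and acetone-like fluid properties used in the example are placeholder assumptions, not values from the study.

import math

def jet_reynolds_number(flow_rate_m3_s, jet_diameter_m, density_kg_m3, viscosity_pa_s):
    """Re = rho * v * d / mu, with v the mean velocity in the inlet jet."""
    area = math.pi * (jet_diameter_m / 2.0) ** 2
    velocity = flow_rate_m3_s / area
    return density_kg_m3 * velocity * jet_diameter_m / viscosity_pa_s

# Example (placeholder values): 10 mL/min of a dilute polymer/acetone stream
# through a 0.5 mm jet; acetone-like density and viscosity
print(jet_reynolds_number(10e-6 / 60.0, 0.5e-3, 790.0, 3.2e-4))   # ~1000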
Abstract:
Triggered event-related functional magnetic resonance imaging requires sparse intervals of temporally resolved functional data acquisitions whose initiation corresponds to the occurrence of an event, typically an epileptic spike in the electroencephalographic trace. However, conventional fMRI time series are greatly affected by non-steady-state magnetization effects, which obscure the initial blood oxygen level-dependent (BOLD) signals. Here, conventional echo-planar imaging and a post-processing solution based on principal component analysis were employed to remove the dominant eigenimages of the time series, to filter out the global signal changes induced by magnetization decay, and to recover BOLD signals starting with the first functional volume. This approach was compared with a physical solution using radiofrequency preparation, which nullifies magnetization effects. As an application of the method, the detectability of the initial transient BOLD response in the auditory cortex, elicited by the onset of acoustic scanner noise, was used to demonstrate that post-processing-based removal of magnetization effects allows detection of brain activity patterns identical to those obtained using the radiofrequency preparation. Using the auditory responses as an ideal experimental model of triggered brain activity, our results suggest that reducing the initial magnetization effects by removing a few principal components from fMRI data may be useful in the analysis of triggered event-related echo-planar time series. The implications of this study are discussed with particular attention to the remaining technical limitations and the additional neurophysiological issues of triggered acquisition.
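Note (illustrative sketch only): the paper's exact pipeline is not given in the abstract. A minimal numpy sketch of the underlying idea, projecting out the leading principal components of a volumes-by-voxels matrix to suppress the global magnetization-decay signal, might look as follows; the array shapes and the choice of two removed components are assumptions.

import numpy as np

def remove_dominant_components(ts, n_remove=2):
    """Filter an fMRI time series by removing its dominant principal components.

    ts: 2-D array of shape (n_volumes, n_voxels), one row per functional volume.
    n_remove: number of leading components (dominated by the global
              magnetization-decay signal) to project out.
    """
    mean = ts.mean(axis=0, keepdims=True)
    centered = ts - mean
    # SVD of the centered time series; the rows of vt are the eigenimages
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    # Reconstruct while zeroing the n_remove largest singular values
    s_filtered = s.copy()
    s_filtered[:n_remove] = 0.0
    return (u * s_filtered) @ vt + mean

# Example: 40 volumes of a 64x64x20 acquisition, flattened to voxels
rng = np.random.default_rng(0)
data = rng.standard_normal((40, 64 * 64 * 20))
cleaned = remove_dominant_components(data, n_remove=2)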
Processing and characterization of PbSnTe-based thermoelectric materials made by mechanical alloying
Abstract:
The research reported in this dissertation investigates the processes required to mechanically alloy Pb1-xSnxTe and AgSbTe2 and a method of combining these two end compounds to produce (y)(AgSbTe2)–(1-y)(Pb1-xSnxTe) thermoelectric materials for power generation applications. In general, traditional melt processing of these alloys has employed high-purity materials subjected to time- and energy-intensive processes that result in highly functional material that is not easily reproducible. This research reports the development of mechanical alloying processes using commercially available 99.9% pure elemental powders in order to provide a basis for the economical production of highly functional thermoelectric materials. Though there have been reports of both high- and low-ZT materials fabricated by melt alloying and by mechanical alloying, the processing-structure-properties-performance relationship connecting how the material is made to its resulting functionality is poorly understood. This is particularly true for mechanically alloyed material, motivating an effort to investigate bulk material within the (y)(AgSbTe2)–(1-y)(Pb1-xSnxTe) system using the mechanical alloying method. This research adds to the body of knowledge concerning the way in which mechanical alloying can be used to efficiently produce high-ZT thermoelectric materials. The processes required to mechanically alloy elemental powders to form Pb1-xSnxTe and AgSbTe2, and to subsequently consolidate the alloyed powder, are described. The composition, the phases present in the alloy, and the volume percent, size, and spacing of those phases are reported. The room-temperature electronic transport properties of electrical conductivity, carrier concentration, and carrier mobility are reported for each alloy, and the effect of any secondary phase on the electronic transport properties is described. A mechanical mixing approach for incorporating the end compounds to produce (y)(AgSbTe2)–(1-y)(Pb1-xSnxTe) is described; when 5 vol.% AgSbTe2 was incorporated, it was found to form a solid solution with the Pb1-xSnxTe phase. An initial attempt to change the carrier concentration of the Pb1-xSnxTe phase was made by adding excess Te, and it was found that the carrier density of the alloys in this work is not sensitive to excess Te. It has been demonstrated, using the processing techniques reported in this research, that this material system, when appropriately doped, has the potential to perform as a highly functional thermoelectric material.
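Note (illustrative sketch only): the room-temperature transport quantities named in the abstract (electrical conductivity, carrier concentration, carrier mobility) are related, in the usual single-band picture, by sigma = n*e*mu and n = 1/(e*|R_H|). The short Python sketch below only encodes these textbook relations; the conductivity and Hall coefficient values are placeholders, not data from the dissertation.

E_CHARGE = 1.602176634e-19  # elementary charge, C

def carrier_concentration(hall_coefficient_m3_per_C):
    """Single-band estimate n = 1 / (e * |R_H|), in m^-3."""
    return 1.0 / (E_CHARGE * abs(hall_coefficient_m3_per_C))

def mobility(conductivity_S_per_m, hall_coefficient_m3_per_C):
    """Single-band estimate mu = sigma * |R_H|, in m^2/(V*s)."""
    return conductivity_S_per_m * abs(hall_coefficient_m3_per_C)

sigma = 5.0e4   # S/m, placeholder
R_H = 1.0e-6    # m^3/C, placeholder
n = carrier_concentration(R_H)   # ~6.2e24 m^-3
mu = mobility(sigma, R_H)        # ~0.05 m^2/(V*s), i.e. ~500 cm^2/(V*s)
print(n, mu)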
Abstract:
Thermal stability of nanograined metals can be difficult to attain due to the large driving force for grain growth that arises from the significant boundary area constituted by the nanostructure. Kinetic approaches for stabilization of the nanostructure that are effective at low homologous temperatures often fail at higher homologous temperatures. Thermodynamic approaches for thermal stabilization may offer higher-temperature stability. In this research, modest alloying of aluminum with solute (1 at.% Sc, Yb, or Sr) was examined as a means to thermodynamically stabilize a bulk nanostructure at elevated temperatures. After using melt-spinning and ball-milling to create an extended solid solution and a nanostructure with an average grain size on the order of 30-45 nm, 1 h annealing treatments at 673 K (0.72 Tm), 773 K (0.83 Tm), and 873 K (0.94 Tm) were applied. The alloys remain nanocrystalline (<100 nm), as measured by Warren-Averbach Fourier analysis of x-ray diffraction peaks and direct observation of TEM dark-field micrographs, with the efficacy of stabilization ranked Sr > Yb > Sc. Disappearance of intermetallic phases in the Sr and Yb alloys in the x-ray diffraction spectra is observed to occur coincident with the stabilization after annealing, suggesting that precipitates dissolve and the boundaries are enriched with solute. Melt-spinning has also been shown to be an effective process for producing a class of ordered but non-periodic crystals called quasicrystals. However, many of the factors related to the creation of quasicrystals through melt-spinning are not optimized for specific chemistries and alloy systems. In a related but separate aspect of this research, melt-spinning was utilized to create metastable quasicrystalline Al6Mn in an α-Al matrix through rapid solidification of Al-8Mn (by mol) and Al-10Mn (by mol) alloys. The wheel speed of the melt-spinning wheel and the orifice diameter of the tube reservoir were varied to determine their effect on the volume proportions of the resultant phases, using integrated areas of the collected x-ray diffraction spectra. The data were then used to extrapolate parameters for the Al-10Mn alloy, which consistently produced Al6Mn quasicrystal with almost complete suppression of the equilibrium Al6Mn orthorhombic phase.
Abstract:
In recent years, advanced metering infrastructure (AMI) has become a main research focus because the traditional power grid has been unable to meet development requirements. There has been an ongoing effort to increase the number of AMI devices that provide real-time data readings to improve system observability. AMI deployed across distribution secondary networks provides load and consumption information for individual households, which can improve grid management. The significant upgrade costs associated with retrofitting existing meters with network-capable sensing can be made more economical by using image processing methods to extract usage information from images of the existing meters. This thesis presents a new solution that uses online exchange of power consumption information with a cloud server without modifying the existing electromechanical analog meters. In this framework, a systematic approach to extracting energy data from images replaces the manual reading process. A case study compares the digital imaging approach to the averages determined by visual readings over a one-month period.
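Note (illustrative sketch only): the thesis's actual image-processing method is not described in the abstract. As one hedged example of the kind of step such a pipeline could use, the OpenCV sketch below estimates the pointer angle of a single analog dial from an image; the preprocessing, parameters, and the angle-to-digit mapping are assumptions for illustration.

import cv2
import numpy as np

def dial_angle_degrees(dial_image_path):
    """Estimate the angle of the longest straight segment (assumed to be the dial pointer)."""
    gray = cv2.cvtColor(cv2.imread(dial_image_path), cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=gray.shape[0] // 4, maxLineGap=5)
    if lines is None:
        return None
    # Take the longest detected segment as the dial pointer
    x1, y1, x2, y2 = max(lines[:, 0, :],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return float(np.degrees(np.arctan2(y2 - y1, x2 - x1))) % 360.0

# For a clockwise dial with "0" at the top, one possible mapping would be:
# digit = int(((angle - 90.0) % 360.0) / 36.0)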
Abstract:
The Center for Orbit Determination in Europe (CODE) has been contributing as a global analysis center to the International GNSS Service (IGS) for many years. The processing of GPS and GLONASS data is well established in CODE's ultra-rapid, rapid, and final product lines. With the introduction of new signals for the established and new GNSS, new challenges and opportunities are arising for GNSS data management and processing. The IGS started the Multi-GNSS-EXperiment (MGEX) in 2012 in order to gain initial experience with the new data formats and to develop new strategies for making optimal use of these additional measurements. CODE has started to contribute to IGS MGEX with a consistent, rigorously combined triple-system orbit solution (GPS, GLONASS, and Galileo). SLR residuals for the computed Galileo satellite orbits are of the order of 10 cm. Furthermore, CODE has established a GPS and Galileo clock solution. A quality assessment shows that these experimental orbit and clock products allow even Galileo-only precise point positioning (PPP), with accuracies at the decimeter level (static PPP) to meter level (kinematic PPP) for selected stations.
Abstract:
In recent years, applications in domains such as telecommunications, network security, or large-scale sensor networks have shown the limits of the traditional store-then-process paradigm. In this context, Stream Processing Engines emerged as a candidate solution for all these applications demanding high processing capacity with low processing-latency guarantees. With Stream Processing Engines, data streams are not persisted but rather processed on the fly, producing results continuously. Current Stream Processing Engines, either centralized or distributed, do not scale with the input load due to single-node bottlenecks. Moreover, they are based on static configurations that lead to either under- or over-provisioning. This Ph.D. thesis discusses StreamCloud, an elastic parallel-distributed stream processing engine that enables the processing of large data stream volumes. StreamCloud minimizes the distribution and parallelization overhead by introducing novel techniques that split queries into parallel subqueries and allocate them to independent sets of nodes. Moreover, StreamCloud's elastic and dynamic load-balancing protocols enable effective adjustment of resources depending on the incoming load. Together with the parallelization and elasticity techniques, StreamCloud defines a novel fault-tolerance protocol that introduces minimal overhead while providing fast recovery. StreamCloud has been fully implemented and evaluated using several real-world applications, such as fraud detection and network analysis. The evaluation, conducted on a cluster with more than 300 cores, demonstrates the scalability of StreamCloud and the effectiveness of its elasticity and fault tolerance.
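Note (illustrative sketch only, not StreamCloud's implementation): the abstract describes splitting queries into parallel subqueries allocated to independent sets of nodes. The small Python sketch below shows the general idea of content-aware partitioning of a keyed tuple stream across the node set assigned to one parallel subquery; the node names, partitioning key, and aggregation are made-up assumptions.

from collections import defaultdict

NODES = ["node-0", "node-1", "node-2", "node-3"]   # nodes assigned to one subquery

def route(tuple_):
    """Route each tuple to a node by hashing its partitioning key (here, the caller id)."""
    return NODES[hash(tuple_["caller"]) % len(NODES)]

# Each node keeps only its own partial state, e.g. number of calls per caller
partial_counts = {node: defaultdict(int) for node in NODES}

stream = [{"caller": "A", "duration": 32},
          {"caller": "B", "duration": 5},
          {"caller": "A", "duration": 11}]
for t in stream:
    partial_counts[route(t)][t["caller"]] += 1

# Elasticity would re-balance keys when nodes are added or removed;
# a fault-tolerance protocol would checkpoint partial_counts for fast recovery.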
Abstract:
The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within the network's coverage area. Particularly challenging is the problem of estimating the target's position from received signal strength indicator (RSSI) measurements, due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from a lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very well suited to implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the solution found. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting, one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate in order to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their transmissions add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only the battery depletion due to beamforming communications, we then extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from an energy-efficiency perspective, the network's lifetime is significantly improved.
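Note (illustrative sketch only): the thesis solves the RSSI localization problem in a distributed, consensus-based fashion, which is not reproduced here. The sketch below shows only the centralized building block it refines: Gauss-Newton iterations for a target position under the log-distance path-loss model P_i = P0 - 10*eta*log10(||x - a_i||). The values of P0, eta, and the anchor layout are illustrative assumptions.

import numpy as np

def gauss_newton_rssi(anchors, rssi, p0=-40.0, eta=3.0, x0=None, iters=10):
    """Least-squares position estimate from RSSI via Gauss-Newton iterations."""
    x = np.mean(anchors, axis=0) if x0 is None else np.asarray(x0, float)
    for _ in range(iters):
        diff = x - anchors                      # (N, 2)
        dist = np.linalg.norm(diff, axis=1)     # (N,)
        residual = rssi - (p0 - 10.0 * eta * np.log10(dist))
        # Jacobian of the residual with respect to x
        jacobian = (10.0 * eta / np.log(10.0)) * diff / dist[:, None] ** 2
        step, *_ = np.linalg.lstsq(jacobian, -residual, rcond=None)
        x = x + step
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_x = np.array([3.0, 7.0])
rssi = -40.0 - 30.0 * np.log10(np.linalg.norm(true_x - anchors, axis=1))
print(gauss_newton_rssi(anchors, rssi))   # converges near (3, 7) in the noiseless case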
Abstract:
Reverberation chambers are well known for providing a random-like electric field distribution. Detecting the directivity or gain of a radiating device in such an environment therefore requires an adequate procedure and smart post-processing. In this paper, a new method is proposed for estimating the directivity of radiating devices in a reverberation chamber (RC). The method is based on the Rician K-factor, whose estimation in an RC has benefited from recent improvements. Directivity estimation relies on the accurate determination of the K-factor with respect to a reference antenna. Good agreement is reported with measurements carried out in a near-field anechoic chamber (AC) using a near-field to far-field transformation.
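Note (illustrative sketch only): the paper's improved K-factor estimator and its mapping to directivity are not reproduced here. The sketch below shows only the classical moment-based definition such methods build on: for complex stirred channel samples h (e.g. S21 over stirrer positions), K is the power ratio of the unstirred to the stirred part. The synthetic example values are assumptions.

import numpy as np

def k_factor(h):
    """Moment-based Rician K-factor from a 1-D complex array of channel samples."""
    unstirred = np.abs(np.mean(h)) ** 2
    stirred = np.mean(np.abs(h - np.mean(h)) ** 2)
    return unstirred / stirred

# Synthetic example: a fixed direct component plus Rayleigh-like stirred scattering
rng = np.random.default_rng(1)
direct = 0.05 * np.exp(1j * 0.7)
scatter = 0.02 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
print(k_factor(direct + scatter))   # close to |direct|^2 / (2 * 0.02^2) ~ 3.1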
Abstract:
Based on laser beam intensities above 10^9 W/cm^2, with pulse energies of several joules and durations in the nanosecond range, Laser Shock Processing (LSP) is capable of inducing a compressive surface residual stress field. The paper presents experimental results showing the ability of LSP to improve the mechanical strength and cracking resistance of AA2024-T351 friction stir welded (FSW) joints. After introducing the FSW and LSP procedures, the results of the microstructural and micro-hardness analyses are discussed. Video Image Correlation was used to measure the displacement and strain fields produced during tensile testing of flat specimens; the local and overall tensile behavior of native vs. LSP-treated FSW joints was analyzed. Further, results of slow-strain-rate tensile testing of the FSW joints, native and LSP-treated, performed in 3.5% NaCl solution are presented. The ability of LSP to improve the structural behavior of the FSW joints is underscored.
Abstract:
Evolvable hardware (EH) is an interesting alternative to conventional digital circuit design, since the autonomous generation of solutions for a given task permits self-adaptation of the system to changing environments, and such systems present inherent fault tolerance when evolution is performed intrinsically. Systems based on FPGAs that use Dynamic and Partial Reconfiguration (DPR) for evolving the circuit are an example. Also, thanks to DPR, these systems can be provided with scalability, a feature that allows a system to change the number of allocated resources at run time in order to vary some characteristic, such as performance. The combination of both aspects leads to scalable evolvable hardware (SEH), which changes in size as an extra degree of freedom when trying to reach the optimal solution by means of evolution. The main contributions of this paper are an architecture for a scalable and evolvable hardware processing array system, some preliminary evolution strategies that take scalability into consideration, and experimental results showing the benefits of combining evolution and scalability. A digital image filtering application is used as the use case.
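Note (illustrative software analogue only, not the paper's FPGA/DPR system): to show the flavor of evolving an image filter, the sketch below runs a minimal evolutionary loop searching for a 3x3 convolution kernel that reduces noise against a reference image. The population size, mutation rate, and fitness definition are assumptions.

import numpy as np

rng = np.random.default_rng(2)

def apply_kernel(img, kernel):
    """3x3 convolution with edge padding."""
    out = np.zeros_like(img)
    padded = np.pad(img, 1, mode="edge")
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def fitness(kernel, noisy, clean):
    """Higher is better: negative mean squared error against the clean reference."""
    return -np.mean((apply_kernel(noisy, kernel) - clean) ** 2)

clean = rng.random((32, 32))
noisy = clean + 0.1 * rng.standard_normal((32, 32))
population = [rng.random((3, 3)) / 9.0 for _ in range(16)]

for generation in range(50):
    ranked = sorted(population, key=lambda k: fitness(k, noisy, clean), reverse=True)
    parents = ranked[:4]
    # Elitism plus mutated offspring (4 parents + 12 mutants = 16 individuals)
    population = parents + [p + 0.02 * rng.standard_normal((3, 3))
                            for p in parents for _ in range(3)]

best = max(population, key=lambda k: fitness(k, noisy, clean))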
Abstract:
A new technology is proposed as a solution to the problem of unintentional face detection and recognition in pictures, in which the individuals appearing can express their privacy preferences through the use of different tags. Existing methods for face de-identification have mostly been ad hoc solutions that provide only an absolute, binary option in a privacy context, such as pixelation or a bar mask. As the number of social networks and their users increases, our privacy preferences may become more complex, rendering these absolute binary solutions obsolete. The proposed technology overcomes this problem by embedding information in a tag placed close to the face without being disruptive. Through a decoding method, the tag provides the preferences that will be applied to the images in later stages.
Abstract:
The structural connectivity of the brain is considered to encode species-wise and subject-wise patterns that will unlock large areas of understanding of the human brain. Currently, diffusion MRI of the living brain makes it possible to map the microstructure of tissue and to track the pathways of the fiber bundles connecting the cortical regions across the brain. These bundles are summarized in a network representation called the connectome, which is analyzed using graph theory. The extraction of the connectome from diffusion MRI requires a large processing flow including image enhancement, reconstruction, segmentation, registration, diffusion tracking, etc. Although a concerted effort has been devoted to the definition of standard pipelines for connectome extraction, it is still crucial to define quality assessment protocols for these workflows. The definition of quality control protocols is hindered by the complexity of the pipelines under test and the absolute lack of gold standards for diffusion MRI data. Here we characterize the impact on structural connectivity workflows of the geometrical deformation typically shown by diffusion MRI data due to the inhomogeneity of magnetic susceptibility across the imaged object. We propose an evaluation framework to compare the existing methodologies for correcting these artifacts, including whole-brain realistic phantoms. Additionally, we design and implement an image segmentation and registration method to avoid performing the correction task and to enable processing in the native space of the diffusion data. We release PySDCev, an evaluation framework for the quality control of connectivity pipelines, specialized in the study of susceptibility-derived distortions. In this context, we propose Diffantom, a whole-brain phantom that provides a solution to the lack of gold-standard data. The three correction methodologies under comparison performed reasonably well, and it is difficult to determine which method is most advisable. We demonstrate that susceptibility-derived correction is necessary to increase the sensitivity of connectivity pipelines, at the cost of specificity. Finally, with the registration and segmentation tool regseg, we demonstrate how the problem of susceptibility-derived distortion can be overcome, allowing data to be used in their original coordinates. This is crucial to increase the sensitivity of the whole pipeline without any loss in specificity.
Abstract:
Over recent years, more and more companies have been demanding technology for monitoring and analyzing vast volumes of data in real time. In this regard, Complex Event Processing (CEP) technology has emerged as a novel approach to that end, and its use has increased dramatically in certain domains, such as the management and automation of business processes, finance, monitoring of networks and applications, and smart sensor networks, the latter being the case study on which we focus. CEP is based on the Event Processing Language (EPL), which can be rather difficult for inexperienced users. This complexity is a handicap and, therefore, a problem when it comes to extending its use. This Final Degree Project (PFG) aims to provide a solution to this problem by bringing CEP technology closer to users through abstraction and modelling techniques. To that end, this project has defined an intuitive and simple domain-specific modelling language for inexperienced users, supported by a graphical modelling web tool (CEP Modeler) with which CEP queries can be modeled graphically, simply, and in a way that is more accessible to the user.
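Note (illustrative sketch only; the project's CEP Modeler is a graphical web tool and its internal representation is not described in the abstract): one way to hide EPL behind a small declarative description is to generate an Esper-style EPL query string from a simple model, as in the Python sketch below. The event type, field names, and generated syntax shown are assumptions for illustration.

def to_epl(model):
    """Build an Esper-style EPL query string from a simple declarative model."""
    select = ", ".join(f"{fn}({field})" for field, fn in model["aggregate"].items())
    epl = f"select {select} from {model['event']}.win:time({model['window_sec']} sec)"
    if model.get("where"):
        epl += f" where {model['where']}"
    return epl

query_model = {
    "event": "TemperatureEvent",
    "aggregate": {"temperature": "avg"},
    "window_sec": 60,
    "where": "sensorId = 'lab-3'",
}
print(to_epl(query_model))
# select avg(temperature) from TemperatureEvent.win:time(60 sec) where sensorId = 'lab-3'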
Abstract:
Paper presented at the V Jornadas de Computación Empotrada, Valladolid, 17-19 September 2014