604 results for semantic workflows


Relevance: 20.00%

Abstract:

Building on studies of the semantics-syntax interface by Ken Hale and Jay Keyser (1993, 1998) and Jaume Mateu i Fontanals (2000, 2002), we assume that the argument structure that determines the organization of sentential syntax originates in syntactically structured semantic constructs. These constructs define configurational relations between primitive predicates and arguments. There is, therefore, a relation of transparency between semantics and syntax, which allows sentential meaning to be defined in terms of both conceptual-intentional content, opaque to syntax, and the semantic construct, transparent to syntax. Unlike other proposals, which posit a maximum of four basic semantic constructs, as in Hale and Keyser, or of three, as in Mateu i Fontanals, we propose that Universal Grammar defines a maximum of only two basic semantic constructs, one spatial and one causative, which, through recursion, would give rise to all the syntactic configurations of any natural language. In a linguistic system such as Distributed Morphology, the semantic constructs so defined would form part of a presyntactic list "A" made up of abstract morphemes, i.e. morphemes without an associated phonological matrix, which encode both intentional-functional features of a procedural kind, i.e. instructions on reference assignment, and semantic-conceptual roots of a nominal kind, i.e. generic conceptual entities. These constructs, selectable by the computational system, define basic configurational skeletons that organize the assembly of the abstract morphemes relevant to each derivation. This model therefore defines the compositional make-up of meaning-form pairings through computational processes prior to the insertion of vocabulary items. We offer morphosyntactic evidence supporting the pertinence and productivity of a binary system of semantic constructs.
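A recursive data type makes the two-construct proposal concrete: a causative construct can embed a spatial one, and so on. The following is a rough sketch under our own assumptions; the type names and fields are invented for illustration and are not the abstract's notation.

```python
from dataclasses import dataclass
from typing import Union

# An argument slot holds either a conceptual root (a generic nominal
# entity) or another construct, which is what makes the binary system
# recursive.
Argument = Union["Root", "Construct"]

@dataclass
class Root:
    """A semantic-conceptual root with no phonological matrix."""
    concept: str

@dataclass
class Spatial:
    """Spatial construct: relates a figure to a ground."""
    figure: "Argument"
    ground: "Argument"

@dataclass
class Causative:
    """Causative construct: an initiator causes an eventuality."""
    initiator: "Argument"
    event: "Argument"

Construct = Union[Spatial, Causative]

# "The child put the book on the table": a causative construct
# embedding a spatial one, by recursion.
example = Causative(
    initiator=Root("child"),
    event=Spatial(figure=Root("book"), ground=Root("table")),
)
print(example)
```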

Relevance: 20.00%

Abstract:

The notion of compensation is widely used in advanced transaction models as a means of recovery from failure. Similar concepts are adopted to provide transaction-like behaviour for long business processes supported by workflow technology. In general, it is not trivial to design compensating tasks for tasks in the context of a workflow. In fact, a task in a workflow process is not necessarily compensatable, in the sense that the feasibility of reverse operations for the task is not always guaranteed by the application semantics. In addition, isolation requirements on data resources may make a task difficult to compensate. In this paper, we first look into the requirements that a compensating task has to satisfy. Then we introduce a new concept called confirmation. With the help of confirmation, we are able to modify most non-compensatable tasks so that they become compensatable. This can substantially increase the availability of shared resources and greatly improve backward recovery for workflow applications in case of failure. To effectively incorporate confirmation and compensation into a workflow management environment, a three-level bottom-up workflow design method is introduced. The implementation issues of this design are also discussed. (C) 2003 Elsevier Science Inc. All rights reserved.
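One plausible reading of the confirmation idea, sketched below under our own assumptions (the class and method names are invented, not the paper's design): a task keeps its effect provisional, so a guaranteed reverse operation exists until an explicit confirm step makes the effect permanent.

```python
# Hypothetical sketch: a task provisionally reserves a resource
# (compensatable), and only an explicit confirm step makes its effect
# permanent (no longer compensatable).
class BookingTask:
    def __init__(self, inventory):
        self.inventory = inventory
        self.reserved = 0

    def execute(self, qty):
        # Provisional effect: hold stock without releasing it, so a
        # reverse operation is always possible.
        if self.inventory["free"] < qty:
            raise RuntimeError("insufficient stock")
        self.inventory["free"] -= qty
        self.reserved = qty

    def compensate(self):
        # Guaranteed reverse operation while the task is unconfirmed.
        self.inventory["free"] += self.reserved
        self.reserved = 0

    def confirm(self):
        # Point of no return: the provisional hold becomes a sale.
        self.inventory["sold"] = self.inventory.get("sold", 0) + self.reserved
        self.reserved = 0

inventory = {"free": 10}
task = BookingTask(inventory)
task.execute(3)      # the workflow proceeds with the provisional hold
task.compensate()    # on failure: backward recovery restores the stock
print(inventory)     # {'free': 10}
```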

Relevance: 20.00%

Abstract:

Interconnecting business processes across systems and organisations is considered to provide significant benefits, such as greater process transparency, higher degrees of integration, facilitation of communication, and consequently higher throughput in a given time interval. Achieving these benefits, however, requires tackling constraints, in the context of this paper the privacy requirements of the involved workflows and their mutual dependencies. Workflow views are a promising conceptual approach to the issue of privacy; however, this approach requires addressing the interdependencies between a workflow view and the adjacent private workflow. In this paper we focus on three aspects of supporting the execution of cross-organisational workflows that have been modelled with a workflow view approach: (i) communication between the entities of a view-based workflow model, (ii) the impact on an extended workflow engine, and (iii) the design of a cross-organisational workflow architecture (CWA). We consider communication aspects in terms of state dependencies and control flow dependencies. We propose to tightly couple a private workflow and its workflow view with state dependencies, while loosely coupling workflow views with control flow dependencies. We introduce a Petri-net-based state transition approach that binds the states of private workflow tasks to their adjacent workflow view tasks. On the basis of these communication aspects we develop a CWA for view-based cross-organisational workflow execution. Its concepts are valid for mediated and unmediated interactions and are independent of any particular technology. The concepts are demonstrated by a scenario run by two extended workflow management systems. (C) 2004 Elsevier B.V. All rights reserved.
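The state-dependency coupling can be illustrated with a small sketch. Everything below is our own construction (state names, class names, and the mapping table are invented): a view task mirrors the state of its adjacent private task, exposing only an abstracted state to the partner organisation.

```python
# Mapping from internal (private) task states to the coarser states
# visible in the workflow view.
PRIVATE_TO_VIEW = {
    "running": "in_progress",       # internal detail hidden behind a coarser state
    "compensating": "in_progress",
    "completed": "done",
    "failed": "aborted",
}

class ViewTask:
    def __init__(self, name):
        self.name = name
        self.state = "idle"

class PrivateTask:
    def __init__(self, name, view_task):
        self.name = name
        self.state = "idle"
        self.view_task = view_task  # tight coupling via a state dependency

    def set_state(self, new_state):
        self.state = new_state
        # Propagate the transition to the adjacent view task, exposing
        # only the abstracted state to the partner organisation.
        self.view_task.state = PRIVATE_TO_VIEW.get(new_state, self.view_task.state)

view = ViewTask("check_order")
private = PrivateTask("check_order_internal", view)
private.set_state("running")
assert view.state == "in_progress"
```

Control flow dependencies between organisations would, by contrast, be routed only between the views, keeping the coupling loose.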

Relevance: 20.00%

Abstract:

Workflow systems have traditionally focused on so-called production processes, which are characterized by predefinition, high volume, and repetitiveness. Recently, the deployment of workflow systems in non-traditional domains such as collaborative applications, e-learning, and cross-organizational process integration has put forth new requirements for flexible and dynamic specification. However, this flexibility cannot be offered at the expense of control, a critical requirement of business processes. In this paper, we present a foundation set of constraints for flexible workflow specification. These constraints are intended to provide an appropriate balance between flexibility and control. The constraint specification framework is based on the concept of pockets of flexibility, which allows ad hoc changes and/or building of workflows for highly flexible processes. Basically, our approach is to provide the ability to execute on the basis of a partially specified model, where the full specification of the model is made at runtime and may be unique to each instance. The verification of dynamically built models is essential. Whereas ensuring that the model conforms to specified constraints does not pose great difficulty, ensuring that the constraint set itself does not contain conflicts and redundancy is an interesting and challenging problem. In this paper, we discuss both the static and dynamic verification aspects. We also briefly present Chameleon, a prototype workflow engine that implements these concepts. (c) 2004 Elsevier Ltd. All rights reserved.
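The two verification problems can be separated in a toy sketch. The constraint vocabulary below ("requires" and "excludes" relations between activities) and the function names are our own simplification, not the paper's formalism.

```python
constraints = {
    "requires": {("ship", "pay")},       # ship may appear only if pay does
    "excludes": {("refund", "ship")},    # refund and ship never co-occur
}

def conforms(instance_activities, constraints):
    """Dynamic check: does a runtime-built instance satisfy the set?"""
    acts = set(instance_activities)
    for a, b in constraints["requires"]:
        if a in acts and b not in acts:
            return False
    for a, b in constraints["excludes"]:
        if a in acts and b in acts:
            return False
    return True

def conflicting(constraints):
    """Static check (simplified): a pair that is both required together
    and mutually exclusive can never be satisfied by any instance."""
    required = {frozenset(p) for p in constraints["requires"]}
    excluded = {frozenset(p) for p in constraints["excludes"]}
    return required & excluded

print(conforms(["pay", "ship"], constraints))   # True
print(conforms(["ship"], constraints))          # False: pay is missing
print(conflicting(constraints))                 # empty set: no conflicts
```

Checking an instance against the set is the easy direction; the interesting problem the abstract points to is the static one, deciding whether the constraint set itself is consistent and free of redundancy.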

Relevance: 20.00%

Abstract:

Workflow technology has delivered effectively for a large class of business processes, providing the requisite control and monitoring functions. At the same time, this technology has been the target of much criticism due to its limited ability to cope with dynamically changing business conditions, which require business processes to be adapted frequently, and/or its limited ability to model business processes that cannot be entirely predefined. Requirements indicate the need for generic solutions in which a balance between process control and flexibility may be achieved. In this paper we present a framework that allows a workflow to execute on the basis of a partially specified model, where the full specification of the model is made at runtime and may be unique to each instance. This framework is based on the notion of process constraints. While process constraints may be specified for any aspect of the workflow, such as structural or temporal aspects, our focus in this paper is on a constraint that allows dynamic selection of activities for inclusion in a given instance. We call these cardinality constraints, and this paper discusses their specification and validation requirements.
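One plausible reading of a cardinality constraint is "pick between a minimum and a maximum number of activities from a pool". The sketch below uses that reading; the representation and names are ours, not the paper's.

```python
from dataclasses import dataclass

@dataclass
class CardinalityConstraint:
    pool: frozenset      # activities available for dynamic selection
    min_k: int           # minimum number that must be included
    max_k: int           # maximum number that may be included

    def validate(self, selected):
        if set(selected) - self.pool:
            return False  # an activity outside the pool was selected
        chosen = set(selected) & self.pool
        return self.min_k <= len(chosen) <= self.max_k

# "Run at least one and at most two of the three review activities."
c = CardinalityConstraint(frozenset({"peer_review", "legal_review",
                                     "finance_review"}), 1, 2)
print(c.validate(["peer_review"]))                    # True
print(c.validate(["peer_review", "legal_review",
                  "finance_review"]))                 # False: too many
```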

Relevance: 20.00%

Abstract:

The scientific researcher needs accurate information, delivered in time to complete his or her work. With the advent of the Internet, the online human-machine communication process, mediated by search engines, has become both an aid and an obstacle to information retrieval. Researchers have had to adapt to the way the Internet operates, acquiring knowledge of idiomatic and terminological differences and making use of instruments that provide parameters for obtaining more pertinent and relevant data. The use of intelligent agents to improve results and reduce semantic noise has been pointed to as a solution for increasing the precision of search results. The exploratory case study conducted here analyses online searching from the standpoint of information theory and proposes two ways of optimizing the communication process with a view to the pertinence and relevance of the data obtained: the first suggests the application of algorithms that use a controlled vocabulary as mediator of the communication process, employing descriptors for online retrieval; the second underlines the importance of intelligent agents in the human-machine communication process.
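As a rough illustration of the first proposal (our own reading, not the study's algorithm), free-text query terms can be mapped onto controlled-vocabulary descriptors before retrieval. The tiny thesaurus below is invented for the example.

```python
thesaurus = {
    "heart attack": "Myocardial Infarction",
    "infarto": "Myocardial Infarction",     # cross-language entry term
    "high blood pressure": "Hypertension",
}

def mediate_query(free_text_terms, thesaurus):
    """Replace entry terms with their preferred descriptors, keeping
    unmatched terms as-is so recall is not silently reduced."""
    descriptors = []
    for term in free_text_terms:
        descriptors.append(thesaurus.get(term.lower(), term))
    return descriptors

print(mediate_query(["Heart attack", "treatment"], thesaurus))
# ['Myocardial Infarction', 'treatment']
```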

Relevance: 20.00%

Abstract:

Intelligent environments aim at supporting users in executing their everyday tasks, e.g. by guiding them through a maintenance or cooking procedure. This requires a machine-processable representation of the tasks, for which workflows have proven an efficient means. The increasing number of sensors available in intelligent environments can facilitate the execution of workflows: the sensors can help recognize when a user has finished a step in the workflow and thus automatically proceed to the next step, which can greatly reduce the amount of required user interaction. However, manually specifying the conditions for triggering the next step in a workflow is very cumbersome, and almost impossible for environments that are not known at design time. In this paper, we present a novel approach for learning and adapting these conditions from observation. We show that the learned conditions can even exceed the quality of conditions manually specified by workflow experts. The presented approach is thus well suited for automatically adapting workflows in intelligent environments and can thereby increase the efficiency of workflow execution. © 2011 IEEE.
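A minimal sketch of the learning idea, under our own assumptions (the learning rule, sensor names, and snapshots are invented, not the paper's method): collect sensor snapshots observed at the moments users completed a step, and keep as the trigger condition the readings that agree across all observations.

```python
def learn_condition(completion_snapshots):
    """Each snapshot is a dict of sensor readings taken when the step
    was observed to finish; the learned condition is their agreement."""
    condition = dict(completion_snapshots[0])
    for snap in completion_snapshots[1:]:
        for sensor in list(condition):
            if snap.get(sensor) != condition[sensor]:
                del condition[sensor]   # reading varies: not predictive
    return condition

def is_satisfied(condition, current_readings):
    return all(current_readings.get(s) == v for s, v in condition.items())

observations = [
    {"stove": "off", "pot_on_stove": False, "light": "on"},
    {"stove": "off", "pot_on_stove": False, "light": "off"},
]
cond = learn_condition(observations)
# {'stove': 'off', 'pot_on_stove': False}: 'light' varied, so it is dropped
print(is_satisfied(cond, {"stove": "off", "pot_on_stove": False, "light": "on"}))
```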

Relevance: 20.00%

Abstract:

ACM Computing Classification System (1998): D.2.11, D.1.3, D.3.1, J.3, C.2.4.

Relevance: 20.00%

Abstract:

This thesis presents a certification method for semantic web service compositions which aims to statically ensure their functional correctness. The certification method encompasses two dimensions of verification, termed the base and functional dimensions. The base dimension concerns the verification of the correct application of the semantic web services in the composition, i.e., it ensures that each service invocation in the composition complies with the respective service definition. Certification in this dimension exploits the semantic compatibility between the invocation arguments and the formal parameters of the semantic web service. The functional dimension aims to ensure that the composition satisfies a given specification expressed in the form of preconditions and postconditions. This dimension is formalized by a calculus based on Hoare logic. Partial correctness specifications involving compositions of semantic web services can be derived from the proposed deductive system. Our work is also characterized by the use of a fragment of description logic, namely ALC, to express the partial correctness specifications. In order to operationalize the proposed certification method, we developed a supporting environment for defining semantic web service compositions as well as for conducting the certification process. The certification method was experimentally evaluated by applying it in three different proofs of concept, which enabled a broad evaluation of the method.
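The base-dimension check can be sketched as a subsumption test: each invocation argument's concept must be subsumed by the concept of the corresponding formal parameter. The tiny concept hierarchy and service definition below are invented for the example; the thesis works with full ALC, not this simplification.

```python
# Named subconcept axioms, standing in for a description logic TBox.
subclass_of = {
    "CreditCard": "PaymentMethod",
    "Visa": "CreditCard",
    "PaymentMethod": "Thing",
}

def subsumed_by(concept, ancestor):
    """True if `concept` is (transitively) a subconcept of `ancestor`."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = subclass_of.get(concept)
    return False

service_params = ["PaymentMethod", "Thing"]   # formal parameter concepts
invocation_args = ["Visa", "CreditCard"]      # concepts of the actual arguments

ok = all(subsumed_by(a, p) for a, p in zip(invocation_args, service_params))
print(ok)   # True: the invocation complies with the service definition
```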

Relevance: 20.00%

Abstract:

At the Large Hadron Collider (LHC), more than 30 petabytes of collision data are collected in each year of data taking. Processing these data requires producing a large volume of simulated events through Monte Carlo techniques. In addition, physics analysis requires daily access to derived data formats for hundreds of users. The Worldwide LHC Computing Grid (WLCG) is an international collaboration of scientists and computing centres that has met the technological challenges of the LHC, making its scientific programme possible. As data taking continues, and with the recent approval of ambitious projects such as the High-Luminosity LHC, the limits of current computing capacity will soon be reached. One of the keys to overcoming these challenges in the coming decade, also in light of the budget constraints of the various national funding agencies, is the efficient optimization of the available computing resources. This work aims to develop and evaluate tools that improve the understanding of how both production and analysis data are monitored in CMS. It therefore comprises two parts. The first, concerning distributed analysis, consists of developing a tool that quickly analyses the log files of finished job submissions, allowing the user to make better use of the computing resources on the next submission. The second part, concerning the monitoring of both production and analysis jobs, exploits Big Data technologies to provide a more efficient and flexible monitoring service. A noteworthy aspect of these improvements is the possibility of avoiding a high level of data aggregation at an early stage, and of collecting monitoring data at a high granularity that nevertheless allows later reprocessing and on-demand aggregation.
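The on-demand aggregation idea can be illustrated with a small sketch (our own construction; the record fields and site names are invented): keep one raw record per job at high granularity, and aggregate only when a question is asked, so the same records can be re-aggregated along any axis.

```python
from collections import defaultdict

raw_records = [
    {"site": "T2_IT_Bologna", "status": "finished", "cpu_hours": 3.2},
    {"site": "T2_IT_Bologna", "status": "failed",   "cpu_hours": 0.4},
    {"site": "T1_US_FNAL",    "status": "finished", "cpu_hours": 5.1},
]

def aggregate(records, key, value):
    """On-demand aggregation: group by `key`, summing `value`."""
    totals = defaultdict(float)
    for r in records:
        totals[r[key]] += r[value]
    return dict(totals)

# Same raw data, two different aggregations, computed only when needed.
print(aggregate(raw_records, "site", "cpu_hours"))
print(aggregate(raw_records, "status", "cpu_hours"))
```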

Relevance: 20.00%

Abstract:

Scientific workflows orchestrate the execution of complex experiments, frequently using distributed computing platforms. Meta-workflows represent an emerging type of such workflows which aim to reuse existing workflows, potentially from different workflow systems, to achieve more complex experiments while minimizing workflow design and testing efforts. Workflow interoperability plays a profound role in achieving this objective. This paper focuses on fostering interoperability across meta-workflows that combine workflows of different workflow systems from diverse scientific domains. This is achieved by formalizing definitions of the meta-workflow and its different types, in order to standardize the data structures used to describe workflows to be published and shared via public repositories. The paper also includes a thorough formalization, based on this formal description, of two workflow interoperability approaches: coarse-grained and fine-grained workflow interoperability. The paper presents a case study from astrophysics which successfully demonstrates the use of the concepts of meta-workflows and workflow interoperability within a scientific simulation platform.
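A data-structure sketch helps contrast the two approaches. The field names below are our own, not the paper's formal description: a coarse-grained embedding treats a child workflow as a black box executed by its native system, while a fine-grained embedding rewrites its tasks into the host system's language.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EmbeddedWorkflow:
    name: str
    native_system: str            # e.g. "Taverna", "WS-PGRADE"
    granularity: str              # "coarse" (black box) or "fine" (rewritten)
    repository_url: str = ""      # where the shared description is published

@dataclass
class MetaWorkflow:
    name: str
    domain: str
    steps: List[EmbeddedWorkflow] = field(default_factory=list)

    def systems_required(self):
        # Coarse-grained steps need their native engine at runtime;
        # fine-grained steps have already been translated away.
        return {s.native_system for s in self.steps if s.granularity == "coarse"}

mw = MetaWorkflow("galaxy_simulation", "Astrophysics", [
    EmbeddedWorkflow("preprocess", "Taverna", "coarse"),
    EmbeddedWorkflow("simulate", "WS-PGRADE", "fine"),
])
print(mw.systems_required())   # {'Taverna'}
```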

Relevance: 20.00%

Abstract:

Opinion mining, or sentiment analysis, is a type of text analysis that aims to support decision making through the extraction and analysis of opinions, identifying positive, negative, and neutral opinions and measuring their impact on the perception of a topic. This work proposes a dictionary-based sentiment analysis model which, through the semantics and the semantic patterns that make up the text to be classified, obtains its polarity on the social network Twitter. As input to the system, public data were gathered from Twitter for companies in the telecommunications sector operating in the Spanish market.
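A minimal sketch of dictionary-based polarity with one simple semantic pattern (negation flipping the polarity of the next sentiment-bearing word). The lexicon and the pattern are invented for the example; a real system would use a full sentiment dictionary and richer patterns.

```python
lexicon = {"excelente": 1, "rápido": 1, "lento": -1, "terrible": -1}
negators = {"no", "nunca"}

def polarity(tweet):
    score, negate = 0, False
    for token in tweet.lower().split():
        if token in negators:
            negate = True           # negation scopes over the next sentiment word
            continue
        value = lexicon.get(token, 0)
        if value:
            score += -value if negate else value
            negate = False          # the negation is consumed
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("el servicio es excelente"))   # positive
print(polarity("no es rápido"))               # negative: negation flips it
```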