938 results for Domain Specific Architecture


Relevance: 80.00%

Abstract:

Secure Multi-party Computation (MPC) enables a set of parties to collaboratively compute, using cryptographic protocols, a function over their private data in such a way that the participants see only the final output, never each other's data. Typical MPC examples include statistical computations over joint private data, private set intersection, and auctions. While these applications are examples of monolithic MPC, richer MPC applications move repeatedly between "normal" (i.e., per-party local) and "secure" (i.e., joint, multi-party secure) modes, resulting in mixed-mode computations. For example, we might use MPC to implement the role of the dealer in a game of mental poker: the game is divided into rounds of local decision-making (e.g., bidding) and joint interaction (e.g., dealing). Mixed-mode computations are also used to improve performance over monolithic secure computations. Starting with the Fairplay project, several MPC frameworks have been proposed in the last decade to help programmers write MPC applications in a high-level language while the toolchain manages the low-level details. However, these frameworks are either not expressive enough to allow writing mixed-mode applications or lack formal specification and reasoning capabilities, diminishing the parties' trust in such tools and in the programs written using them. Furthermore, none of the frameworks provides a verified toolchain to run MPC programs, leaving open the potential for security holes that can compromise the privacy of the parties' data. This dissertation presents language-based techniques to make MPC more practical and trustworthy. First, it presents the design and implementation of a new MPC Domain Specific Language, called Wysteria, for writing rich mixed-mode MPC applications. Wysteria provides several benefits over previous languages, including a conceptual single thread of control, generic support for more than two parties, high-level abstractions for secret shares, and a fully formalized type system and operational semantics. Using Wysteria, we have implemented several MPC applications, including, for the first time, a card-dealing application. The dissertation next presents Wys*, an embedding of Wysteria in F*, a full-featured verification-oriented programming language. Wys* improves on Wysteria along three lines: (a) it enables programmers to formally verify the correctness and security properties of their programs; as far as we know, Wys* is the first language to provide verification capabilities for MPC programs; (b) it provides a partially verified toolchain to run MPC programs; and (c) it enables MPC programs to use, with no extra effort, standard language constructs from the host language F*, making it more usable and scalable. Finally, the dissertation develops static analyses that help optimize monolithic MPC programs into mixed-mode MPC programs while providing privacy guarantees similar to those of the monolithic versions.
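
The abstract mentions mixed-mode computation and secret shares without showing what such a program looks like. The sketch below is a toy, plain-Python illustration of the mixed-mode idea (local steps that prepare private inputs, a joint "secure" step over additive secret shares); it is not Wysteria or Wys* syntax, and the party names and values are invented.

```python
# Toy additive secret-sharing "secure sum" in plain Python; illustrative only,
# not Wysteria/Wys* syntax. Local mode prepares private inputs, the joint
# "secure" step combines shares so that only the aggregate is revealed.
import random

PRIME = 2**61 - 1  # all arithmetic is modulo a large prime

def share(secret, n_parties):
    """Split a secret into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def secure_sum(all_shares):
    """Joint step: each party sums the shares it holds (one column);
    adding the per-party partial sums reveals only the total."""
    partials = [sum(column) % PRIME for column in zip(*all_shares)]
    return sum(partials) % PRIME

# Local mode: each party picks a private bid (hypothetical values).
bids = {"alice": 42, "bob": 17, "carol": 8}

# Each party splits its bid into one share per participant.
distributed = [share(bid, len(bids)) for bid in bids.values()]

# Secure mode: the joint computation reveals only the sum of the bids.
print(secure_sum(distributed))  # 67
```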

Relevance: 80.00%

Abstract:

Sharing data confidentially is a concern for many actors, regardless of the domain. Research is evolving rapidly, but the lack of solutions adapted to the reality of a business holds back the adoption of good business practices for protecting sensitive information. In this thesis we propose a modular, scalable, and complete solution named PEPS, parameterized for use in the insurance domain. We evaluate the entire cycle of a confidential data release, from data management to disclosure, including the handling of external forces and anonymization. PEPS stands out in that it uses the contextualization of the problem at hand and domain-specific information to adjust itself and maximize the usefulness of the anonymized dataset. To this end, we present a strongly contextualized anonymization algorithm as well as performance measures tailored to experience analyses.
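
The abstract does not spell out the contextualized anonymization algorithm. As a generic illustration of the kind of anonymization step PEPS automates, the sketch below applies a simple k-anonymity-style generalization to a toy insurance table; it is not the PEPS algorithm, and the column names, age bands, and records are hypothetical.

```python
# Generic k-anonymity-style generalization sketch; NOT the PEPS algorithm.
# Quasi-identifiers are coarsened until every combination occurs >= k times.
from collections import Counter

def generalize_age(age, level):
    """Coarsen age into wider and wider bands as the level increases."""
    width = [1, 5, 10, 25][level]
    low = (age // width) * width
    return f"{low}-{low + width - 1}" if width > 1 else str(age)

def k_anonymize(rows, k=3, max_level=3):
    """Raise the generalization level until each (age band, region) group has >= k rows."""
    for level in range(max_level + 1):
        keyed = [(generalize_age(r["age"], level), r["region"]) for r in rows]
        if min(Counter(keyed).values()) >= k:
            return [{"age": a, "region": g, "claim": r["claim"]}
                    for (a, g), r in zip(keyed, rows)]
    raise ValueError("cannot reach k-anonymity within the allowed levels")

records = [{"age": 34, "region": "QC", "claim": 1200},
           {"age": 37, "region": "QC", "claim": 900},
           {"age": 36, "region": "QC", "claim": 400},
           {"age": 58, "region": "ON", "claim": 2500},
           {"age": 61, "region": "ON", "claim": 700},
           {"age": 63, "region": "ON", "claim": 1300}]
print(k_anonymize(records, k=3))
```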

Relevance: 80.00%

Abstract:

Traditional decision-making research has often focused on one's ability to choose from a set of prefixed options, ignoring the process by which decision makers generate courses of action (i.e., options) in situ (Klein, 1993). In complex and dynamic domains, this option-generation process is particularly critical to understanding how successful decisions are made (Zsambok & Klein, 1997). When generating response options for oneself to pursue (i.e., during the intervention phase of decision making), previous research has supported quick and intuitive heuristics, such as the Take-The-First heuristic (TTF; Johnson & Raab, 2003). When generating predictive options for others in the environment (i.e., during the assessment phase of decision making), previous research has supported the situational-model-building process described by Long Term Working Memory theory (LTWM; see Ward, Ericsson, & Williams, 2013). In the first three experiments, the claims of TTF and LTWM are tested during assessment- and intervention-phase tasks in soccer. To test what other environmental constraints may dictate the use of these cognitive mechanisms, the claims of these models are also tested in the presence and absence of time pressure. In addition to understanding the option-generation process, it is important that researchers in complex and dynamic domains also develop tools that can be used by real-world professionals. For this reason, three more experiments were conducted to evaluate the effectiveness of a new online assessment of perceptual-cognitive skill in soccer. This test differentiated between skill groups, predicted performance on a previously established test, and predicted option-generation behavior. The test also outperformed domain-general cognitive tests, but not a domain-specific knowledge test, when predicting skill-group membership. Implications for theory and training, and future directions for the development of applied tools, are discussed.

Relevance: 80.00%

Abstract:

This document presents an Enterprise Application Integration (EAI) based proposal for managing research outcomes and technological information. The proposal addresses the management of national and international science and research outcomes information, and the corresponding information systems. Information systems interoperability problems, approaches, technologies, and integration tools are presented and applied to the research outcomes information management case. A business and technological perspective is provided, including the conceptual analysis and modelling, an integration solution based on a Domain-Specific Language (DSL), and the integration platform to execute the proposed solution. For illustrative purposes, the role and information system needs of a research unit are assumed as the representative case.
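
The abstract stays at the architectural level. As a purely hypothetical illustration of what a small DSL-based integration mapping between research-information systems might look like (none of these field names or transforms come from the document), consider the following sketch:

```python
# Hypothetical sketch of a tiny declarative integration mapping; the actual DSL
# in the work is not specified in the abstract. A mapping declares how fields
# from a source research-information system translate to a target schema.
FIELD_MAP = {  # target field -> (source field, transform)
    "title":   ("pub_title", str.strip),
    "year":    ("pub_year", int),
    "authors": ("author_list", lambda s: [a.strip() for a in s.split(";")]),
}

def translate(record, field_map=FIELD_MAP):
    """Apply the declared mapping to one source record."""
    return {target: fn(record[source]) for target, (source, fn) in field_map.items()}

source_record = {"pub_title": "  A DSL for EAI ", "pub_year": "2014",
                 "author_list": "Silva, A.; Costa, B."}
print(translate(source_record))
# {'title': 'A DSL for EAI', 'year': 2014, 'authors': ['Silva, A.', 'Costa, B.']}
```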

Relevance: 80.00%

Abstract:

This study analyzes the impact of the Programa de Alimentación Escolar (school feeding program) on child labor in Colombia using several impact-evaluation techniques, including simple matching, genetic matching, and bias-reduced matching. In particular, the program is found to reduce the probability that schoolchildren work by about 4%. Furthermore, we find that child labor falls because the program increases food security, which in turn changes household decisions and removes the labor burden placed on children. The State has made many advances in early childhood policy; nevertheless, these results provide a basis for building a conceptual framework in which public food policies should be preserved and promoted throughout the school-age years.
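
As a rough, self-contained illustration of the matching-based impact evaluation described above (not the study's actual data, model, or code), the sketch below estimates a treatment effect with nearest-neighbor propensity-score matching on synthetic data:

```python
# Toy nearest-neighbor propensity-score matching on synthetic data;
# illustrative only, not the study's actual estimation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
income = rng.normal(0, 1, n)                             # household covariate
treated = rng.binomial(1, 1 / (1 + np.exp(-income)))     # program participation
child_labor = rng.binomial(1, 0.25 - 0.04 * treated + 0.05 * (income < 0))

# 1. Estimate propensity scores from the covariate.
X = income.reshape(-1, 1)
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each treated unit to the control unit with the closest propensity.
t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# 3. Average treatment effect on the treated: outcome difference after matching.
att = child_labor[t_idx].mean() - child_labor[matches].mean()
print(f"estimated ATT on child labor: {att:.3f}")
```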

Relevance: 80.00%

Abstract:

Given the drastic increase in the older adult population in Colombia and worldwide, the population pyramid has inverted: there are ever more older adults, and life expectancy keeps rising. Hence the importance of understanding various aspects of aging, among them stereotypes. In addition, there is very little research on stereotypes about aging as a function of gender and developmental period. Levy (2009) found that young people hold the most negative stereotypes about aging, since they feel old age is far removed from their current reality and poses no personal threat. Bodner, Bergman, and Cohen (2012), on the other hand, found that men hold more negative stereotypes about aging. The present study aimed to describe the effect of developmental period and gender on stereotypes about aging in 860 Colombian adults. Stereotypes about aging were measured with the questionnaire of Ramírez and Palacios (2015), and developmental period and gender with a sociodemographic questionnaire. Contrary to expectations, the results showed no relationship between negative stereotypes and gender, developmental period, or their interaction. Differences were, however, found in positive stereotypes by gender and developmental period. Continued research on this topic is considered important: older adults are growing in number, and the way we relate to them will shape a better aging process for them.

Relevance: 80.00%

Abstract:

This thesis investigates how individuals can develop, exercise, and maintain autonomy and freedom in the presence of information technology. It is particularly interested in how information technology can impose autonomy constraints. The first part identifies a problem with current autonomy discourse: there is no agreed-upon object of reference when bemoaning loss of, or risk to, an individual's autonomy. Here, the thesis introduces a pragmatic conceptual framework to classify autonomy constraints. In essence, the proposed framework divides autonomy into three categories: intrinsic autonomy, relational autonomy, and informational autonomy. The second part of the thesis investigates the role of information technology in enabling and facilitating autonomy constraints. The analysis identifies eleven characteristics of information technology, as it is embedded in society, so-called vectors of influence, that constitute a substantial risk to an individual's autonomy. These vectors are assigned to three sets corresponding to the general sphere of the information transfer process to which they can be attributed, namely domain-specific vectors, agent-specific vectors, and information-recipient-specific vectors. The third part of the thesis investigates selected ethical and legal implications of autonomy constraints imposed by information technology. It shows the utility of the theoretical frameworks introduced earlier in the thesis when conducting an ethical analysis of autonomy-constraining technology. It also traces the concept of autonomy in European data laws and investigates the impact of individuals' cultural embeddings on efforts to safeguard autonomy, showing intercultural flashpoints of autonomy differences. In view of this, the thesis approaches the exercise and constraint of autonomy in the presence of information technology systems holistically. It contributes to establishing a common understanding of (intuitive) terminology and concepts, connects this to current phenomena arising out of ever-increasing interconnectivity and computational power, and helps operationalize the protection of autonomy through application of the proposed frameworks.

Relevance: 80.00%

Abstract:

Blazor is an innovative Microsoft framework for developing web applications in C#, HTML, and CSS. The framework lacks a visual designer, i.e., graphical "drag-and-drop" support for building web applications. This thesis covers the design and prototyping of "Blazor Designer", a graphical DSL (Domain-Specific Language) supporting the development of single-page web applications (SPAs), developed in collaboration with IPREL Progetti srl, a company of the SACMI group. The thesis analyzes the technologies provided by Blazor, including WebAssembly, discusses the characteristics and advantages of DSLs, and describes the design and implementation of "Blazor Designer" as a Visual Studio extension. The conclusion summarizes the results achieved, the limitations, and future opportunities: a DSL can indeed make development simpler and more user-friendly, but the tool needs further integration to be exploited fully.

Relevance: 80.00%

Abstract:

Artificial Intelligence (AI) is gaining ever more ground in every sphere of human life, to the point that it is now even used to pass sentences in courts. The use of AI in the field of law, however, is deemed quite controversial: it could provide more objectivity, yet it could also entail an abuse of power, given that bias in the algorithms behind AI may cause a lack of accuracy. As a product of AI, machine translation is increasingly being used in the field of law as well, to translate laws, judgements, contracts, etc. between different languages and different legal systems. In the legal setting of company law, accuracy of content and suitability of terminology play a crucial role in a translation task, as any addition or omission of content, or mistranslation of terms, could entail legal consequences for companies. The purpose of the present study is first to assess which of two neural machine translation systems, DeepL and ModernMT, produces a more suitable translation from Italian into German of the atto costitutivo of an Italian s.r.l. in terms of accuracy of content and correctness of terminology, and then to assess which translation proves to be closer to a human reference translation. To achieve these aims, two evaluations, one human and one automatic, are carried out, based on the MQM taxonomy and the BLEU metric respectively. The results of both evaluations show an overall better performance by ModernMT in terms of content accuracy, suitability of terminology, and closeness to a human translation. As emerged from the MQM-based evaluation, its accuracy and terminology errors account for just 8.43% (as opposed to DeepL's 9.22%), while it obtains an overall BLEU score of 29.14 (against DeepL's 27.02). The overall performances, however, show that machines still face barriers in overcoming semantic complexity, tackling polysemy, and choosing domain-specific terminology, which suggests that the discrepancy with human translation may still be considerable.
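
For readers unfamiliar with the automatic metric mentioned above, the sketch below computes a corpus-level BLEU score with the sacrebleu library on two invented sentence pairs; it only illustrates the metric, not the thesis's evaluation pipeline or corpus.

```python
# Toy corpus-level BLEU computation with sacrebleu; illustrative only,
# with invented sentences (not the thesis's legal corpus).
import sacrebleu

machine_output = [
    "Die Gesellschaft hat ihren Sitz in Bologna.",
    "Das Stammkapital beträgt zehntausend Euro.",
]
human_reference = [
    "Die Gesellschaft hat ihren Sitz in Bologna.",
    "Das Stammkapital der Gesellschaft beträgt 10.000 Euro.",
]

# One list of hypotheses, one list of reference translations per reference set.
bleu = sacrebleu.corpus_bleu(machine_output, [human_reference])
print(f"BLEU = {bleu.score:.2f}")  # higher = closer to the human reference
```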

Relevance: 80.00%

Abstract:

Nowadays, the idea of injecting world or domain-specific structured knowledge into pre-trained language models (PLMs) is becoming an increasingly popular approach to problems such as bias, hallucinations, huge architectural sizes, and lack of explainability, all critical for real-world natural language processing applications in sensitive fields like bioinformatics. One recent work that has garnered much attention in neuro-symbolic AI is QA-GNN, an end-to-end model for multiple-choice open-domain question answering (MCOQA) tasks via interpretable text-graph reasoning. Unlike previous publications, QA-GNN mutually informs PLMs and graph neural networks (GNNs) on top of relevant facts retrieved from knowledge graphs (KGs). However, taking a more holistic view, existing PLM+KG contributions mainly consider commonsense benchmarks and ignore, or only shallowly analyze, performance on biomedical datasets. This thesis starts from a deep investigation of QA-GNN for biomedicine, comparing existing or brand-new PLMs, KGs, edge-aware GNNs, preprocessing techniques, and initialization strategies. By combining the insights that emerged from DISI's research, we introduce Bio-QA-GNN, which includes a KG. This work improves the state of the art of MCOQA models on biomedical/clinical text, largely outperforming the original model (+3.63% accuracy on MedQA). Our findings also contribute to a better understanding of the degree of explanation allowed by joint text-graph reasoning architectures and of their effectiveness on different medical subjects and reasoning types. Code, models, datasets, and demos to reproduce the results are freely available at: https://github.com/disi-unibo-nlp/bio-qagnn.
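
As a minimal sketch of the joint text-graph reasoning idea behind QA-GNN (not the actual architecture, hyperparameters, or data), the toy PyTorch module below fuses a pooled language-model vector with a one-round message-passing summary of a retrieved KG subgraph to score an answer choice; all dimensions and inputs are invented.

```python
# Minimal sketch of the joint text-graph idea, not the actual QA-GNN model:
# a language-model vector for the question context is fused with a
# message-passing summary of a retrieved KG subgraph to score an answer.
import torch
import torch.nn as nn

class TinyTextGraphScorer(nn.Module):
    def __init__(self, text_dim=768, node_dim=64):
        super().__init__()
        self.node_proj = nn.Linear(node_dim, node_dim)
        self.score = nn.Linear(text_dim + node_dim, 1)

    def forward(self, text_vec, node_feats, adj):
        # One round of mean-neighbor message passing over the KG subgraph.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        node_msgs = torch.relu(self.node_proj(adj @ node_feats / deg))
        graph_vec = node_msgs.mean(dim=0)                     # pool the subgraph
        return self.score(torch.cat([text_vec, graph_vec]))   # answer score

# Hypothetical inputs: a pooled PLM embedding and a 5-node retrieved subgraph.
text_vec = torch.randn(768)
node_feats = torch.randn(5, 64)
adj = (torch.rand(5, 5) > 0.5).float()
scorer = TinyTextGraphScorer()
print(scorer(text_vec, node_feats, adj))  # higher = more plausible answer
```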

Relevance: 80.00%

Abstract:

The ability to create hybrid systems that blend different paradigms has now become a requirement for complex AI systems, which are usually made up of more than one component. In this way, it is possible to exploit the advantages of each paradigm and the potential of different approaches, such as symbolic and non-symbolic ones. In particular, non-symbolic approaches are often exploited for their efficiency, effectiveness, and ability to manage large amounts of data, while symbolic approaches are exploited to ensure explainability, fairness, and trustworthiness in general. This thesis lies in this context, in particular in the design and development of symbolic technologies that can be easily integrated and made interoperable with other AI technologies. 2P-Kt is a symbolic ecosystem developed for this purpose; it provides a logic-programming (LP) engine which can be easily extended and customized to deal with specific needs. The aim of this thesis is to extend 2P-Kt to support constraint logic programming (CLP) as one of the main paradigms for solving highly combinatorial problems, given a declarative problem description and a general constraint-propagation engine. A real case study concerning school timetabling is described to show a practical usage of the implemented CLP(FD) library. Since CLP represents only one particular scenario for extending LP to domain-specific scenarios, this thesis also presents a more general framework: Labelled Prolog, which extends LP with labelled terms and, in particular, labelled variables. The designed framework shows how it is possible to frame all variations and extensions of LP under a single language, reducing the huge number of existing languages and libraries and focusing more on how to manage different domain needs using labels, which can be associated with every kind of term. The mapping of CLP into Labelled Prolog is also discussed, as well as the benefits of the provided approach.
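
As a rough illustration of the finite-domain constraint-solving style that CLP(FD) provides (written in plain Python rather than 2P-Kt, Prolog, or Labelled Prolog, and with an invented toy timetabling instance), consider the following sketch:

```python
# Plain-Python finite-domain constraint solver for a toy timetabling instance;
# a sketch of the CLP(FD) style only, not 2P-Kt or Labelled Prolog code.
def solve(variables, domains, constraints, assignment=None):
    """Depth-first search over finite domains, pruning with the constraints."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        candidate = {**assignment, var: value}
        if all(check(candidate) for check in constraints):
            result = solve(variables, domains, constraints, candidate)
            if result is not None:
                return result
    return None

def different(a, b):
    """Two lessons may not share a time slot (checked once both are assigned)."""
    return lambda asg: a not in asg or b not in asg or asg[a] != asg[b]

lessons = ["math", "physics", "chemistry"]          # variables
slots = {lesson: [1, 2, 3] for lesson in lessons}   # finite domains (3 slots)
rules = [different("math", "physics"),              # hypothetical shared teacher
         different("physics", "chemistry")]         # hypothetical shared room

print(solve(lessons, slots, rules))
# e.g. {'math': 1, 'physics': 2, 'chemistry': 1}
```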

Relevance: 40.00%

Abstract:

In this study, we have compared the effector functions and fate of a number of human CTL clones in vitro or ex vivo following contact with variant peptides presented either on the cell surface or in a soluble multimeric format. In the presence of CD8 coreceptor binding, there is a good correlation between TCR signaling, killing of the targets, and FasL-mediated CTL apoptosis. Blocking CD8 binding using alpha3-domain mutants of MHC class I results in much reduced signaling and reduced killing of the targets. Surprisingly, however, FasL expression is induced to a similar degree on these CTLs, and apoptosis of the CTLs is unaffected. The ability to divorce these events may allow the deletion of antigen-specific and pathological CTL populations without the deleterious effects induced by full CTL activation.

Relevance: 40.00%

Abstract:

Vesicular carriers for intracellular transport associate with unique sets of accessory molecules that dictate budding and docking on specific membrane domains. Although many of these accessory molecules are peripheral membrane proteins, in most cases the targeting sequences responsible for their membrane recruitment have yet to be identified. We have previously defined a novel Golgi targeting domain (GRIP) shared by a family of coiled-coil peripheral membrane Golgi proteins implicated in membrane trafficking. We show here that the docking site for the GRIP motif of p230 is a specific domain of Golgi membranes. By immunoelectron microscopy of HeLa cells stably expressing a green fluorescent protein (GFP)-p230(GRIP) fusion protein, we show binding specifically to a subset of membranes of the trans-Golgi network (TGN). Real-time imaging of live HeLa cells revealed that GFP-p230(GRIP) was associated with highly dynamic tubular extensions of the TGN, which have the appearance and behaviour of transport carriers. To further define the nature of the GRIP membrane binding site, in vitro budding assays were performed using purified rat liver Golgi membranes and cytosol from GFP-p230(GRIP)-transfected cells. Analysis of Golgi-derived vesicles by sucrose gradient fractionation demonstrated that GFP-p230(GRIP) binds to a specific population of vesicles distinct from those labelled for beta-COP or gamma-adaptin. The GFP-p230(GRIP) fusion protein is recruited to the same vesicle population as full-length p230, demonstrating that the GRIP domain alone is sufficient as a targeting signal for membrane binding of the native molecule. Therefore, p230 GRIP is a targeting signal for recruitment to a highly selective membrane attachment site on a specific population of trans-Golgi network tubulovesicular carriers.

Relevance: 40.00%

Abstract:

Enterprise Architecture promotes the establishment of a holistic view of an organization's structure and way of working. One of the aspects addressed in Enterprise Architecture is the organization's "active structure", which concerns "who" performs the organizational activities. Several approaches have been proposed to provide a means of representing Enterprise Architecture, among them ARIS, RM-ODP, UPDM, and ArchiMate. Despite their acceptance by the community, the existing approaches focus on different purposes, have limited scope, and some lack well-defined real-world semantics. Besides the modelling approaches, many ontology approaches have been proposed to describe the active-structure domain, including the SUPER Project ontologies, TOVE, the Enterprise Ontology, and the W3C Org Ontology. Although specified for semantic grounding and meaning negotiation, some of the proposed approaches have specific purposes and limited coverage. Moreover, some of the approaches are not defined using formal languages, and others are specified using languages without well-defined semantics. This work presents a well-founded reference ontology for the organizational domain. The presented reference organizational ontology covers the basic aspects discussed in the organizational literature, such as division of labour, social relations, and classification of structural units. It also covers the organizational aspects defined in existing approaches, taking into account both modelling approaches and ontological approaches. The resulting ontology is specified in OntoUML and extends the social concepts of UFO-C.