925 results for Domain-specific languages engineering
Abstract:
Given the drastic increase in the older adult population in Colombia and worldwide, the population pyramid has inverted: there are ever more older adults, and life expectancy keeps rising. Hence the importance of understanding various aspects of aging, among them stereotypes. In addition, there is very little research on aging stereotypes by gender and developmental period. Levy (2009) found that young people hold the most negative stereotypes about aging, since they feel that old age is far removed from their current reality and poses no personal threat. Bodner, Bergman and Cohen (2012), on the other hand, found that men hold more negative stereotypes about aging. The present study aimed to describe the effect of developmental period and gender on aging stereotypes in 860 Colombian adults. Aging stereotypes were measured with the questionnaire of Ramírez and Palacios (2015), and developmental period and gender with a sociodemographic questionnaire. Contrary to expectations, the results showed no relationship between negative stereotypes and gender, developmental period, or their interaction. Differences were found, however, in positive stereotypes by gender and developmental period. Continued research on this topic is considered important, since older adults are an ever-growing group, and the way we relate to them will shape a better aging process for them.
Abstract:
We present in this article a reflection on the production of meanings and representations of geographical space, starting from a discussion of school education within the curriculum component of Geography in Basic Education. We pose a question about its contribution to the construction of spatial perceptions, directing this inquiry especially at the use, limitations and possibilities revealed by maps. We thus invite scrutiny of the ways this resource is used, since it demands choices that are encoded, simplified and represented according to criteria of scale, symbols and cartographic projections. We discuss how school Geography uses maps and how it proposes the meanings and representations of this visual language, insofar as these are produced configurations whose contents are chosen. We argue that school education responds in large part to the representations constructed by people. This text is therefore a relevant exercise for educators in general: an invitation to analyze the use of the specific languages of each area of knowledge. In this way we strengthen the debate on the production of representations, essential to the everyday life of the people who pass through school, projecting a more open curriculum. Keywords: school learning, meanings, representations, school Geography, maps.
Abstract:
Embedding intelligence in extreme edge devices allows distilling raw data acquired from sensors into actionable information, directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits, driving a large research area (TinyML) to deploy leading Machine Learning (ML) algorithms on microcontroller-class devices. To fit the limited memory storage capability of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed into Quantized Neural Networks (QNNs) by representing their data down to byte and sub-byte formats, in the integer domain. However, the current generation of microcontroller systems can barely cope with the computing requirements of QNNs. This thesis tackles the challenge from many perspectives, presenting solutions both at the software and hardware levels, exploiting parallelism, heterogeneity and software programmability to guarantee high flexibility and high energy-performance proportionality. The first contribution, PULP-NN, is an optimized software computing library for QNN inference on parallel ultra-low-power (PULP) clusters of RISC-V processors, showing one order of magnitude improvements in performance and energy efficiency compared to current State-of-the-Art (SoA) STM32 microcontroller systems (MCUs) based on ARM Cortex-M cores. The second contribution is XpulpNN, a set of RISC-V domain-specific instruction set architecture (ISA) extensions for sub-byte integer arithmetic computation. The solution, comprising the ISA extensions and the micro-architecture to support them, achieves energy efficiency comparable with dedicated DNN accelerators and surpasses the efficiency of SoA ARM Cortex-M based MCUs, such as the low-end STM32M4 and the high-end STM32H7 devices, by up to three orders of magnitude.
To overcome the Von Neumann bottleneck while guaranteeing the highest flexibility, the final contribution integrates an Analog In-Memory Computing accelerator into the PULP cluster, creating a fully programmable heterogeneous fabric that demonstrates end-to-end inference capabilities of SoA MobileNetV2 models, showing two orders of magnitude performance improvements over current SoA analog/digital solutions.
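The sub-byte compression the abstract describes can be illustrated with a toy sketch (this is not PULP-NN itself, and the scale factor and uniform scheme are illustrative assumptions): weights are quantized to 4-bit integers, packed two per byte to halve storage, and the dot product is accumulated entirely in the integer domain.

```python
# Toy sketch of sub-byte (4-bit) quantization and integer-domain dot
# product, illustrating how QNNs shrink memory; not the PULP-NN library.

def quantize(values, scale=0.1):
    """Map floats to unsigned 4-bit integers (0..15); toy uniform scheme."""
    return [max(0, min(15, round(v / scale))) for v in values]

def pack_nibbles(q):
    """Pack pairs of 4-bit values into single bytes, halving storage."""
    out = bytearray()
    for i in range(0, len(q), 2):
        lo = q[i]
        hi = q[i + 1] if i + 1 < len(q) else 0
        out.append(lo | (hi << 4))
    return bytes(out)

def unpack_nibbles(packed, n):
    """Recover the first n 4-bit values from a packed byte string."""
    q = []
    for b in packed:
        q.append(b & 0xF)
        q.append(b >> 4)
    return q[:n]

def dot_q4(packed_a, packed_b, n):
    """Integer dot product over packed 4-bit operands."""
    a = unpack_nibbles(packed_a, n)
    b = unpack_nibbles(packed_b, n)
    return sum(x * y for x, y in zip(a, b))

w = quantize([0.2, 0.5, 1.0, 1.5])   # -> [2, 5, 10, 15]
x = quantize([0.1, 0.1, 0.3, 0.2])   # -> [1, 1, 3, 2]
pw, px = pack_nibbles(w), pack_nibbles(x)
assert len(pw) == 2                  # four 4-bit weights fit in 2 bytes
print(dot_q4(pw, px, 4))             # integer-domain accumulate: 67
```

On real hardware the unpack-and-multiply loop is the bottleneck, which is exactly what the XpulpNN ISA extensions address by operating on packed sub-byte operands directly.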
Abstract:
This thesis investigates how individuals can develop, exercise, and maintain autonomy and freedom in the presence of information technology. It is particularly interested in how information technology can impose autonomy constraints. The first part identifies a problem with current autonomy discourse: there is no agreed-upon object of reference when bemoaning loss of, or risk to, an individual’s autonomy. Here, the thesis introduces a pragmatic conceptual framework to classify autonomy constraints. In essence, the proposed framework divides autonomy into three categories: intrinsic autonomy, relational autonomy and informational autonomy. The second part of the thesis investigates the role of information technology in enabling and facilitating autonomy constraints. The analysis identifies eleven characteristics of information technology, as it is embedded in society, so-called vectors of influence, that constitute a substantial risk to an individual’s autonomy. These vectors are assigned to three sets corresponding to the general sphere of the information transfer process to which they can be attributed, namely domain-specific vectors, agent-specific vectors and information recipient-specific vectors. The third part of the thesis investigates selected ethical and legal implications of autonomy constraints imposed by information technology. It shows the utility of the theoretical frameworks introduced earlier in the thesis when conducting an ethical analysis of autonomy-constraining technology. It also traces the concept of autonomy in the European Data Laws and investigates the impact of individuals' cultural embeddings on efforts to safeguard autonomy, showing intercultural flashpoints of autonomy differences. In view of this, the thesis approaches the exercise and constraint of autonomy in the presence of information technology systems holistically.
It contributes to establishing a common understanding of (intuitive) terminology and concepts, connects this to current phenomena arising from ever-increasing interconnectivity and computational power, and helps operationalize the protection of autonomy through application of the proposed frameworks.
Abstract:
Blazor is an innovative Microsoft framework for developing web applications in C#, HTML and CSS. The framework lacks a visual designer, i.e. a graphical drag-and-drop aid for building web applications. This thesis covers the design and prototyping of "Blazor Designer", a graphical DSL (Domain-Specific Language) supporting the development of single-page web applications (SPAs), developed in collaboration with IPREL Progetti srl, a company of the SACMI group. The thesis analyzes the technologies provided by Blazor, including WebAssembly, discusses the characteristics and advantages of DSLs, and describes the design and implementation of "Blazor Designer" as a Visual Studio extension. The conclusion summarizes the results achieved, the limitations, and future opportunities: a DSL can indeed make development simpler and more user-friendly, but the tool must be further integrated to be exploited fully.
Abstract:
Nowadays, injecting world or domain-specific structured knowledge into pre-trained language models (PLMs) is an increasingly popular approach for tackling problems such as biases, hallucinations, huge architectural sizes, and lack of explainability, all critical for real-world natural language processing applications in sensitive fields like bioinformatics. One recent work that has garnered much attention in Neuro-symbolic AI is QA-GNN, an end-to-end model for multiple-choice open-domain question answering (MCOQA) tasks via interpretable text-graph reasoning. Unlike previous publications, QA-GNN mutually informs PLMs and graph neural networks (GNNs) on top of relevant facts retrieved from knowledge graphs (KGs). However, taking a more holistic view, existing PLM+KG contributions mainly consider commonsense benchmarks and ignore, or only shallowly analyze, performance on biomedical datasets. This thesis begins with a deep investigation of QA-GNN for biomedicine, comparing existing and brand-new PLMs, KGs, edge-aware GNNs, preprocessing techniques, and initialization strategies. Combining the insights that emerged from DISI's research, we introduce Bio-QA-GNN, which includes a KG. This work advances the state of the art for MCOQA models on biomedical/clinical text, largely outperforming the original model (+3.63% accuracy on MedQA). Our findings also contribute to a better understanding of the degree of explanation allowed by joint text-graph reasoning architectures and their effectiveness on different medical subjects and reasoning types. Code, models, datasets, and demos to reproduce the results are freely available at: https://github.com/disi-unibo-nlp/bio-qagnn.
Abstract:
To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, new forms of organisation are evidently being adopted, fostering more intensive collaboration processes and the sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will increasingly happen in multilingual settings, which implies overcoming the difficulties inherent in the presence of multiple languages through processes such as the localization of ontologies. Although localization, like other processes involving multilingualism, is a well-developed practice whose methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as a means of supporting the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.
This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to traverse Wikipedia optimally in search of domain-specific texts from which to extract terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested. In the second paper, Fumiko Kano compares four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. To that end, datasets based on standardized pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), were used for the comparative analysis of the similarity measures.
The comparison verifies the similarity measures against these objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization is the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community. In the third paper, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, providing candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques for the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. According to the authors, these questions become more complex when the conceptualization occurs in a multilingual setting.
To tackle these issues the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support multilingual ontology specification. In the fifth paper, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, specifically the translation office of the Ministry of Justice. The project aims to develop an advanced tool that embeds expert knowledge in the algorithms that extract specialized language from textual data (legal documents); its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt, for subject librarians employed in large multilingual academic institutions, the model used by translators working within European Union institutions. The authors use User Experience (UX) analysis to provide subject librarians with visual support by means of “ontology tables” depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will interest a broad audience and serve as a starting point for further discussion and cooperation.
Abstract:
Mode of access: Internet.
Abstract:
"Names of books referred to": p. [vii]-xiii.
Abstract:
In this study, we compared the effector functions and fate of a number of human CTL clones in vitro or ex vivo following contact with variant peptides presented either on the cell surface or in a soluble multimeric format. In the presence of CD8 coreceptor binding, there is a good correlation between TCR signaling, killing of the targets, and FasL-mediated CTL apoptosis. Blocking CD8 binding using α3-domain mutants of MHC class I results in much reduced signaling and reduced killing of the targets. Surprisingly, however, FasL expression is induced to a similar degree on these CTLs, and apoptosis of the CTLs is unaffected. The ability to divorce these events may allow the deletion of antigen-specific and pathological CTL populations without the deleterious effects induced by full CTL activation.
Abstract:
Vesicular carriers for intracellular transport associate with unique sets of accessory molecules that dictate budding and docking on specific membrane domains. Although many of these accessory molecules are peripheral membrane proteins, in most cases the targeting sequences responsible for their membrane recruitment have yet to be identified. We have previously defined a novel Golgi targeting domain (GRIP) shared by a family of coiled-coil peripheral membrane Golgi proteins implicated in membrane trafficking. We show here that the docking site for the GRIP motif of p230 is a specific domain of Golgi membranes. By immunoelectron microscopy of HeLa cells stably expressing a green fluorescent protein (GFP)-p230(GRIP) fusion protein, we show binding specifically to a subset of membranes of the trans-Golgi network (TGN). Real-time imaging of live HeLa cells revealed that GFP-p230(GRIP) was associated with highly dynamic tubular extensions of the TGN, which have the appearance and behaviour of transport carriers. To further define the nature of the GRIP membrane binding site, in vitro budding assays were performed using purified rat liver Golgi membranes and cytosol from GFP-p230(GRIP)-transfected cells. Analysis of Golgi-derived vesicles by sucrose gradient fractionation demonstrated that GFP-p230(GRIP) binds to a specific population of vesicles distinct from those labelled for β-COP or γ-adaptin. The GFP-p230(GRIP) fusion protein is recruited to the same vesicle population as full-length p230, demonstrating that the GRIP domain is sufficient on its own as a targeting signal for membrane binding of the native molecule. Therefore, p230 GRIP is a targeting signal for recruitment to a highly selective membrane attachment site on a specific population of trans-Golgi network tubulovesicular carriers.
Abstract:
CoDeSys ("Controller Development System") is a development environment for programming automation controllers. It is an open solution fully in line with the international industrial standard IEC 61131-3, and all five programming languages for application programming defined in IEC 61131-3 are available in the development environment. These features give professionals greater flexibility and allow control engineers to program many different applications in the languages in which they feel most comfortable. Over 200 manufacturers of devices from different industrial sectors offer intelligent automation devices with a CoDeSys programming interface. In 2006, version 3 was released with new updates and tools. One of the great innovations of the new version of CoDeSys is object-oriented programming. Object-oriented programming (OOP) offers great advantages to the user, for example when reusing existing parts of the application or when several developers work on one application. For such reuse, source code with well-known parts can be prepared and automatically generated where necessary in a project, improving time/cost/quality management. Until now, in version 2, an interface called “Eni-Server” was required to access the generated XML code. Another novelty of the new version is a tool called Export PLCopenXML, which makes it possible to export the open XML code without the need for specific hardware. This type of code has its own requirements in order to comply with the standard described above. With the XML code, and with knowledge of how it works, component-oriented development of machines with modular programming becomes straightforward. Eplan Engineering Center (EEC) is a software tool developed by Mind8 GmbH & Co. KG that allows configuring and generating automation projects.
To do so, it uses modules of PLC code. The EEC already has a library to generate code for CoDeSys version 2. For version 3, and given the constant innovation of drivers by manufacturers, a new library must be implemented in this software. It is therefore important to study the XML export in order to be able to design any type of machine. The purpose of this master thesis is to study the new version of the CoDeSys XML, taking into account all aspects and the impact on the existing CoDeSys V2 models and libraries at the company Harro Höfliger Verpackungsmaschinen GmbH. To achieve this goal, a small sample named “Traffic light” will first be implemented in CoDeSys version 2; the same project will then be built with version 3, together with the EEC implementation for the automatically generated code.
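To illustrate what working with such an export looks like, here is a minimal sketch that parses a PLCopenXML-style fragment with Python's standard library and lists the program organization units (POUs). The fragment is a simplified stand-in for illustration only; it does not reproduce the full PLCopen TC6 schema, and the "TrafficLight" POU is a hypothetical example named after the sample project mentioned above.

```python
# Minimal sketch: reading a PLCopenXML-style export with the standard
# library. The fragment is simplified and does NOT follow the full
# PLCopen TC6 schema; element names and the POU are illustrative.
import xml.etree.ElementTree as ET

SAMPLE = """<project>
  <types>
    <pous>
      <pou name="TrafficLight" pouType="program">
        <interface>
          <localVars>
            <variable name="phase"><type><INT/></type></variable>
          </localVars>
        </interface>
        <body><ST>phase := (phase + 1) MOD 3;</ST></body>
      </pou>
    </pous>
  </types>
</project>"""

def list_pous(xml_text):
    """Return (name, pouType) for every POU found in the export."""
    root = ET.fromstring(xml_text)
    return [(p.get("name"), p.get("pouType"))
            for p in root.iter("pou")]

print(list_pous(SAMPLE))  # [('TrafficLight', 'program')]
```

A code-generation library such as the one discussed in the thesis would walk this tree in the same way, extracting interfaces and bodies to assemble project modules.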
Abstract:
Thesis submitted in fulfilment of the requirements for the degree of Doctor in Languages, Literatures and Cultures
Abstract:
BACKGROUND: Sodium channel NaV1.5 underlies cardiac excitability and conduction. The last 3 residues of NaV1.5 (Ser-Ile-Val) constitute a PDZ domain-binding motif that interacts with PDZ proteins such as syntrophins and SAP97 at different locations within the cardiomyocyte, thus defining distinct pools of NaV1.5 multiprotein complexes. Here, we explored the in vivo and clinical impact of this motif through characterization of mutant mice and genetic screening of patients. METHODS AND RESULTS: To investigate in vivo the regulatory role of this motif, we generated knock-in mice lacking the SIV domain (ΔSIV). ΔSIV mice displayed reduced NaV1.5 expression and sodium current (INa), specifically at the lateral myocyte membrane, whereas NaV1.5 expression and INa at the intercalated disks were unaffected. Optical mapping of ΔSIV hearts revealed that ventricular conduction velocity was preferentially decreased in the transversal direction to myocardial fiber orientation, leading to increased anisotropy of ventricular conduction. Internalization of wild-type and ΔSIV channels was unchanged in HEK293 cells. However, the proteasome inhibitor MG132 rescued ΔSIV INa, suggesting that the SIV motif is important for regulation of NaV1.5 degradation. A missense mutation within the SIV motif (p.V2016M) was identified in a patient with Brugada syndrome. The mutation decreased NaV1.5 cell surface expression and INa when expressed in HEK293 cells. CONCLUSIONS: Our results demonstrate the in vivo significance of the PDZ domain-binding motif in the correct expression of NaV1.5 at the lateral cardiomyocyte membrane and underline the functional role of lateral NaV1.5 in ventricular conduction. Furthermore, we reveal a clinical relevance of the SIV motif in cardiac disease.
Abstract:
Cell-type-specific gene silencing is critical to understand cell functions in normal and pathological conditions, in particular in the brain where strong cellular heterogeneity exists. Molecular engineering of lentiviral vectors has been widely used to express genes of interest specifically in neurons or astrocytes. However, we show that these strategies are not suitable for astrocyte-specific gene silencing due to the processing of small hairpin RNA (shRNA) in a cell. Here we develop an indirect method based on a tetracycline-regulated system to fully restrict shRNA expression to astrocytes. The combination of Mokola-G envelope pseudotyping, glutamine synthetase promoter and two distinct microRNA target sequences provides a powerful tool for efficient and cell-type-specific gene silencing in the central nervous system. We anticipate our vector will be a potent and versatile system to improve the targeting of cell populations for fundamental as well as therapeutic applications.