925 results for Domain-specific languages engineering
Abstract:
Artificial Intelligence (AI) is gaining ever more ground in every sphere of human life, to the point that it is now even used to pass sentences in courts. The use of AI in the field of Law is, however, deemed quite controversial: it could provide more objectivity, yet it could also entail an abuse of power, given that bias in the algorithms behind AI may cause a lack of accuracy. As a product of AI, machine translation is increasingly used in the field of Law as well, to translate laws, judgements, contracts, etc. between different languages and different legal systems. In the legal setting of Company Law, accuracy of the content and suitability of terminology play a crucial role in a translation task, as any addition or omission of content, or any mistranslation of terms, could entail legal consequences for companies. The purpose of the present study is first to assess which of two neural machine translation systems, DeepL and ModernMT, produces a more suitable translation from Italian into German of the atto costitutivo of an Italian s.r.l. in terms of accuracy of the content and correctness of terminology, and then to assess which translation proves closer to a human reference translation. To achieve these aims, two evaluations, one human and one automatic, are carried out, based on the MQM taxonomy and the BLEU metric respectively. The results of both evaluations show an overall better performance by ModernMT in terms of content accuracy, suitability of terminology, and closeness to a human translation. As emerged from the MQM-based evaluation, its accuracy and terminology errors account for just 8.43% (as opposed to DeepL's 9.22%), while it obtains an overall BLEU score of 29.14 (against DeepL's 27.02). The overall performances, however, show that machines still face barriers in overcoming semantic complexity, tackling polysemy, and choosing domain-specific terminology, which suggests that the discrepancy with human translation may still be remarkable.
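As a rough illustration of the automatic evaluation described above, the following sketch computes corpus-level BLEU with the sacrebleu package; the German sentences are invented placeholders, not the study's actual atto costitutivo data.

```python
# Minimal BLEU comparison sketch using sacrebleu; all sentences are
# hypothetical stand-ins for the study's Italian-to-German translations.
import sacrebleu

reference = ["Die Gesellschaft hat ihren Sitz in Mailand."]   # human reference translation
outputs = {
    "DeepL": ["Der Sitz der Gesellschaft befindet sich in Mailand."],
    "ModernMT": ["Die Gesellschaft hat ihren Sitz in Mailand."],
}

for system, hypothesis in outputs.items():
    # corpus_bleu takes a list of hypotheses and a list of reference lists
    bleu = sacrebleu.corpus_bleu(hypothesis, [reference])
    print(f"{system}: BLEU = {bleu.score:.2f}")  # same 0-100 scale as the reported 29.14 vs 27.02
```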
Abstract:
The ability to create hybrid systems that blend different paradigms has become a requirement for complex AI systems, which are usually made of more than one component. In this way it is possible to exploit the advantages of each paradigm and the potential of different approaches, such as symbolic and non-symbolic ones. In particular, non-symbolic approaches are often exploited for their efficiency, effectiveness, and ability to manage large amounts of data, while symbolic approaches are exploited to ensure explainability, fairness, and trustworthiness in general. This thesis lies in this context, in particular in the design and development of symbolic technologies that can be easily integrated and made interoperable with other AI technologies. 2P-Kt is a symbolic ecosystem developed for this purpose: it provides a logic-programming (LP) engine which can be easily extended and customized to deal with specific needs. The aim of this thesis is to extend 2P-Kt to support constraint logic programming (CLP), one of the main paradigms for solving highly combinatorial problems given a declarative problem description and a general constraint-propagation engine. A real case study concerning school timetabling is described to show a practical usage of the CLP(FD) library implemented. Since CLP represents only one particular scenario for extending LP to domain-specific scenarios, this thesis also presents a more general framework, Labelled Prolog, which extends LP with labelled terms and, in particular, labelled variables. The designed framework shows how it is possible to frame all variations and extensions of LP under a single language, reducing the huge number of existing languages and libraries and focusing more on how to manage different domain needs using labels, which can be associated with every kind of term. The mapping of CLP into Labelled Prolog is also discussed, as well as the benefits of the approach.
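To make the CLP(FD) idea concrete, here is a toy, illustrative-only sketch of finite-domain propagation for an all-different constraint, the kind of pruning a constraint-propagation engine performs in timetabling; it does not use the 2P-Kt API (a Kotlin/JVM ecosystem), and the lesson/slot data is invented.

```python
# Naive finite-domain propagation for an all-different constraint.
def propagate_all_different(domains):
    """Repeatedly remove the values of singleton domains from the other
    variables' domains until a fixed point is reached (naive arc pruning)."""
    changed = True
    while changed:
        changed = False
        for var, dom in domains.items():
            if len(dom) == 1:
                value = next(iter(dom))
                for other, other_dom in domains.items():
                    if other != var and value in other_dom:
                        other_dom.discard(value)
                        changed = True
    return domains

# Toy timetabling: three lessons must get distinct time slots 1..3,
# and "math" is already fixed to slot 1.
slots = {"math": {1}, "physics": {1, 2}, "chemistry": {1, 2, 3}}
print(propagate_all_different(slots))
# -> math stays {1}, physics is pruned to {2}, chemistry to {3}
```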
Abstract:
OBJECTIVE: This study aimed to assess the survival and the evolution of quality of life of patients subjected to surgical excision of oral and oropharyngeal squamous cell carcinoma. MATERIAL AND METHODS: Forty-seven patients treated at a Brazilian healthcare unit specialized in head and neck surgery between 2006 and 2007 were enrolled in the study. Data gathering comprised reviewing hospital files and applying the University of Washington Quality of Life (UW-QOL) questionnaire before and 1 year after the surgery. The comparative analysis used Poisson regression to assess factors associated with survival and a paired t-test to compare preoperative and 1-year postoperative QOL ratings. RESULTS: 1 year after surgery, 7 patients were not found (dropout of the cohort), 15 had died, and 25 completed the UW-QOL again. The risk of death was associated with having regional metastasis prior to surgery (relative risk=2.18; 95% confidence interval=1.09-5.17) and tumor size T3 or T4 (RR=2.30; 95% CI=1.05-5.04). Survivors presented significantly (p<0.05) poorer overall and domain-specific ratings of quality of life. Chewing presented the largest reduction: from 74.0 before surgery to 34.0 one year later. Anxiety was the only domain whose average rating increased (from 36.0 to 70.7). CONCLUSIONS: The prospective assessment of survival and quality of life may help anticipate interventions aimed at reducing the incidence of functional limitations in patients with oral and oropharyngeal cancer.
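For readers unfamiliar with the two analyses named above, the following sketch shows how a relative risk can be estimated with Poisson regression (robust errors) and how pre/post ratings are compared with a paired t-test; all numbers below are fabricated placeholders, not the study's data.

```python
# Sketch of the abstract's two analyses on invented toy data.
import numpy as np
import statsmodels.api as sm
from scipy.stats import ttest_rel

# Relative risk of death for T3/T4 tumours (1 = T3/T4, 0 = T1/T2).
tumour_t3t4 = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])
died = np.array([1, 1, 0, 0, 0, 1, 1, 0, 0, 0])
X = sm.add_constant(tumour_t3t4)
fit = sm.GLM(died, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print("RR =", np.exp(fit.params[1]))  # exponentiated coefficient ~ relative risk

# Paired t-test on preoperative vs 1-year postoperative chewing ratings.
pre = np.array([74, 70, 80, 60, 75])
post = np.array([34, 40, 30, 25, 45])
print(ttest_rel(pre, post))
```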
Abstract:
In this study, the effectiveness of a group-based attention and problem solving (APS) treatment approach to executive impairments in patients with frontal lobe lesions was investigated. Thirty participants with lesions in the frontal lobes, 16 with left frontal (LF) and 14 with right frontal (RF) lesions, were allocated to three groups, each with 10 participants. The APS treatment was initially compared to two control conditions, an information/education (IE) approach and treatment-as-usual or traditional rehabilitation (TR), with each of the control groups subsequently receiving the APS intervention in a crossover design. This design allowed the treatment to be evaluated through assessment before and after treatment and at follow-up six months later. There was an improvement in some executive and functional measures after the implementation of the APS programme in all three groups. Size, and to a lesser extent laterality, of lesion affected baseline performance on measures of executive function, but there was no apparent relationship between size, laterality, or site of lesion and the level of benefit from the treatment intervention. The results are discussed in terms of models of executive functioning and the effectiveness of domain-specific interventions in the rehabilitation of executive dysfunction.
Abstract:
The plasma membrane of differentiated skeletal muscle fibers comprises the sarcolemma, the transverse (T) tubule network, and the neuromuscular and muscle-tendon junctions. We analyzed the organization of these domains in relation to defined surface markers: beta-dystroglycan, dystrophin, and caveolin-3. These markers were shown to exhibit highly organized arrays along the length of the fiber. Caveolin-3 and beta-dystroglycan/dystrophin showed distinct, but to some extent overlapping, labeling patterns, and both markers left transverse tubule openings clear. This labeling pattern revealed microdomains over the entire plasma membrane, with the exception of the neuromuscular and muscle-tendon junctions, which formed distinct demarcated macrodomains. Our results suggest that the entire plasma membrane of mature muscle comprises a mosaic of T tubule domains together with sarcolemmal caveolae and beta-dystroglycan domains. The domains identified with these markers were examined with respect to the targeting of viral proteins and other expressed domain-specific markers. We found that each marker protein was targeted to distinct microdomains. The macrodomains were intensely labeled with all our markers. Replacing the cytoplasmic tail of the vesicular stomatitis virus glycoprotein with that of CD4 resulted in retargeting from one domain to another. The domain-specific protein distribution at the muscle cell surface may be generated by targeting pathways requiring specific sorting information, but this trafficking is different from the conventional apical-basolateral division.
Abstract:
OBJECTIVE To review studies on the readability of package leaflets of medicinal products for human use. METHODS We conducted a systematic literature review for the period between 2008 and 2013, using the keywords "Readability and Package Leaflet" and "Readability and Package Insert" in the academic search engine Biblioteca do Conhecimento Online, which comprises different bibliographic resources/databases. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) criteria were applied to prepare the draft of the report. Quantitative and qualitative original studies were included. Opinion or review studies, and studies not written in English, Portuguese, Italian, French, or Spanish, were excluded. RESULTS We identified 202 studies, of which 180 were excluded and 22 were enrolled (two enrolling healthcare professionals, ten enrolling other types of participants including patients, three focused on adverse reactions, and seven descriptive studies). The package leaflets presented various readability problems, such as complex and difficult-to-understand texts, small font sizes, or few illustrations. The main methods to assess the readability of the package leaflets were usability tests or legibility formulae. Limitations of these methods included the reduced number of participants; the lack of readability formulae validated for specific languages (e.g., Portuguese); and the absence of an assessment of patients' literacy, health knowledge, cognitive skills, levels of satisfaction, and opinions. CONCLUSIONS Overall, the package leaflets presented various readability problems. In this review, some methodological limitations were identified, including the participation of a limited number of patients and healthcare professionals, the absence of prior assessments of participants' literacy, mood, or sense of satisfaction, and the predominance of studies not based on role-plays about the use of medicines. These limitations should be avoided in future studies and should be considered when interpreting the results.
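As an aside on legibility formulae: the sketch below implements the classic Flesch Reading Ease score with a crude vowel-run syllable counter; it is a toy English version, not one of the language-specific validated formulae whose absence the review points out.

```python
# Toy Flesch Reading Ease calculation for a short leaflet sentence.
import re

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    # crude syllable proxy: runs of vowels per word
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

leaflet = "Take one tablet twice daily. Do not exceed the stated dose."
print(round(flesch_reading_ease(leaflet), 1))  # higher score = easier to read
```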
Abstract:
The clinical content of administrative databases includes, among others, patient demographic characteristics and codes for diagnoses and procedures. The data in these databases is standardized, clearly defined, readily available, less expensive than data collected by other means, and normally covers hospitalizations in entire geographic areas. Although it has some limitations, this data is often used to evaluate the quality of healthcare. Under these circumstances, the quality of the data, for instance its errors or its completeness, is of central importance and should never be ignored. Both the minimization of data quality problems and a deep knowledge of this data (e.g., how to select a patient group) are important if users are to trust and correctly interpret results. In this paper we present and discuss some problems found in these administrative databases and give some recommendations about them. We also present a simple tool that can be used to screen the quality of data through the use of domain-specific data quality indicators. These indicators can significantly contribute to better data, to steps towards a continuous increase in data quality and, certainly, to better-informed decision-making.
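A minimal sketch of what such domain-specific data quality indicators might look like in practice, assuming a pandas table; the column names, codes, and checks are invented examples, not the screening tool presented in the paper.

```python
# Invented data quality indicators over a toy admissions table.
import pandas as pd

def quality_indicators(df):
    return {
        "missing_diagnosis_pct": df["diagnosis_code"].isna().mean() * 100,
        "invalid_sex_pct": (~df["sex"].isin(["M", "F"])).mean() * 100,
        "implausible_age_pct": (~df["age"].between(0, 120)).mean() * 100,
        "discharge_before_admission_pct":
            (df["discharge_date"] < df["admission_date"]).mean() * 100,
    }

df = pd.DataFrame({
    "diagnosis_code": ["I21.0", None, "J18.9"],
    "sex": ["M", "F", "X"],
    "age": [67, 45, 140],
    "admission_date": pd.to_datetime(["2024-01-02", "2024-01-05", "2024-01-07"]),
    "discharge_date": pd.to_datetime(["2024-01-05", "2024-01-04", "2024-01-09"]),
})
for name, pct in quality_indicators(df).items():
    print(f"{name}: {pct:.1f}%")
```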
Abstract:
Monitoring systems have traditionally been developed with rigid objectives and functionalities, and tied to specific languages, libraries, and run-time environments. There is a need for more flexible monitoring systems which can be easily adapted to distinct requirements. On-line monitoring has been considered increasingly important for the observation and control of a distributed application. In this paper we discuss monitoring interfaces and architectures which support more extensible monitoring and control services. We describe our work on the development of a distributed monitoring infrastructure and illustrate how it eases the implementation of a complex distributed debugging architecture. We also discuss several issues concerning support for tool interoperability and illustrate how the cooperation among multiple concurrent tools can ease the task of distributed debugging.
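The following is an illustrative sketch, not the paper's actual interface, of the publish/subscribe shape such an extensible monitoring service can take, with several concurrent tools observing the same events:

```python
# Toy extensible monitoring service: tools subscribe to event types.
from collections import defaultdict

class MonitoringService:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, tool):
        """Register a tool callback for a given event type."""
        self._subscribers[event_type].append(tool)

    def emit(self, event_type, **data):
        """Deliver an observed event to every interested tool."""
        for tool in self._subscribers[event_type]:
            tool(event_type, data)

monitor = MonitoringService()
monitor.subscribe("process_state", lambda ev, d: print("debugger:", d))
monitor.subscribe("process_state", lambda ev, d: print("visualizer:", d))
monitor.emit("process_state", pid=42, state="blocked")  # both tools are notified
```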
Abstract:
Currently, there is a growing need for software tailored to the client, able to adapt quickly to the constant changes of its business area. Each client has concrete problems to solve and often cannot spare a large amount of resources to achieve the intended ends. To address these problems, several software development architectures and methodologies have emerged which allow the agile development of highly configurable applications that can be customized by any of their users. This dynamism, brought to applications in the form of models that are customized by users and interpreted by a generic platform, creates greater challenges when it comes to testing, since there is a considerably larger number of variables than in an application with a traditional architecture. It is necessary, at all times, to guarantee the integrity of all models, as well as of the platform responsible for their interpretation, without constantly having to develop applications to support the tests over the different models. This thesis focuses on one application, the myMIS platform, which interprets management-oriented models written in a domain-specific language; its current state is assessed and a proposal of testing practices to apply in its development is defined. The proposal resulting from this thesis showed that, despite the difficulties inherent to the application's architecture, developing tests in a generic way is possible, and the same test logic can be used to test several distinct models.
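A hedged sketch of the generic-testing idea proposed here, using pytest parametrization so one test exercises several distinct models; the Interpreter class and model file names are hypothetical stand-ins, not the myMIS API.

```python
# One generic test reused across distinct interpreted models.
import pytest

MODELS = ["invoicing.model", "inventory.model", "hr.model"]  # invented names

class Interpreter:  # placeholder for the platform's model interpreter
    def load(self, model_path):
        self.model = model_path
        return self

    def validate(self):
        return True  # a real interpreter would check the model's integrity

@pytest.mark.parametrize("model_path", MODELS)
def test_model_integrity(model_path):
    # The same test logic exercises every model the platform can interpret.
    assert Interpreter().load(model_path).validate()
```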
Abstract:
Thesis submitted to the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia, for the degree of Doctor of Philosophy in Environmental Engineering.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Nowadays there is a need to produce new aircraft quickly, efficiently, and economically, with the aim of opening new flight routes, expanding existing ones, and replacing aircraft at the end of their service life. In this context, and while always complying with the strict requirements of the domain, which include the demand for high quality, the industry adopted the IMA architecture, which allows several avionics applications to run on a single shared computing system. With this architecture, the development of software for aircraft gained high relevance, making it necessary to generate code automatically, to reuse already-tested code from previously developed applications, and to guarantee as early as possible that the developed software complies with the standards. Despite the complexity of the domain, a tool was developed that produces the code structure of new avionics applications. The tool deals with the variability of the various product lines and reduces development time. A DSL could be an appropriate solution, since it covers several of the required needs; however, this solution is not feasible, because a language would have to be produced for each different avionics software configuration if we wanted to take advantage of that specificity. This dissertation solves this difficulty by resorting to the notion of a family of DSLs: a set of languages for a specific domain that share a common set of key concepts but adapt some of those concepts to meet the variability of the requirements. The MDD approach was used to develop an automatic DSL generator capable of producing the desired language according to the software configuration intended for the partition belonging to an avionics module. The generated languages present a level of usability appropriate for the domain, are able to validate the constructions made using the DSL, and produce the intended artifacts.
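An illustrative toy, not the dissertation's MDD generator, of the family-of-DSLs idea: a common core of key concepts specialized per avionics software configuration; the configuration names and keywords are invented.

```python
# Toy "family of DSLs": a shared core specialized by configuration.
CORE_KEYWORDS = {"partition", "process", "schedule"}

VARIANT_KEYWORDS = {            # invented example configurations
    "io_partition": {"sampling_port", "queuing_port"},
    "health_monitor": {"error_handler", "recovery_action"},
}

def generate_dsl(configuration):
    """Produce the keyword set (a stand-in for a full grammar) of the DSL
    variant matching one avionics software configuration."""
    return CORE_KEYWORDS | VARIANT_KEYWORDS[configuration]

print(sorted(generate_dsl("io_partition")))
# Every variant shares the core concepts but adapts them to its requirements.
```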
Abstract:
OutSystems Platform is used to develop, deploy, and maintain enterprise web and mobile web applications. Applications are developed through a visual domain-specific language in an integrated development environment and compiled to a standard stack of web technologies. At the platform's core there is a compiler and a deployment service that transform the visual model into a running web application. As applications grow, compilation and deployment times increase as well, impacting the developer's productivity. In the previous model, a full application was the only compilation and deployment unit: when the developer published an application, even if he had only changed a very small aspect of it, the application would be fully compiled and deployed. Our goal is to reduce compilation and deployment times for the most common use case, in which the developer performs small changes to an application before compiling and deploying it. We modified the OutSystems Platform to support a new incremental compilation and deployment model that reuses previous computations as much as possible in order to improve performance. In our approach, the full application is broken down into smaller compilation and deployment units, increasing what can be cached and reused. We also observed that this finer-grained model would benefit from a parallel execution model, so we created a task-driven scheduler that executes compilation and deployment tasks in parallel. Our benchmarks show a substantial improvement in compilation and deployment times for the aforementioned development scenario.
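A minimal sketch of the two mechanisms described, content-hash caching of per-unit compilation results and parallel execution of independent units; the unit names and the compile step are invented stand-ins for the OutSystems internals.

```python
# Toy incremental compilation: cache by content hash, compile units in parallel.
import hashlib
from concurrent.futures import ThreadPoolExecutor

cache = {}  # content hash of a unit -> previously compiled artifact

def compile_unit(name, source):
    key = hashlib.sha256(f"{name}:{source}".encode()).hexdigest()
    if key in cache:
        return f"{name}: reused cached artifact"   # unchanged since last publish
    cache[key] = f"artifact({name}, {source})"     # stand-in for real compilation
    return f"{name}: recompiled"

# Pretend ScreenB and LogicC were already compiled in a previous publish.
for seeded in ("ScreenB:v1", "LogicC:v1"):
    cache[hashlib.sha256(seeded.encode()).hexdigest()] = "artifact"

units = {"ScreenA": "v2", "ScreenB": "v1", "LogicC": "v1"}  # only ScreenA changed
with ThreadPoolExecutor() as pool:  # independent units are compiled in parallel
    for result in pool.map(lambda item: compile_unit(*item), units.items()):
        print(result)
```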
Abstract:
In this work, two of the most significant model-driven development methodologies are introduced: Model Driven Architecture (MDA) and Domain Specific Modeling (DSM). A comparative study is also presented of some of the tools currently available on the market that support them.
Abstract:
Background: To enhance our understanding of complex biological systems, such as diseases, we need to put all of the available data into context and use this to detect relations, patterns, and rules which allow predictive hypotheses to be defined. Life science has become a data-rich science, with information about the behaviour of millions of entities such as genes, chemical compounds, diseases, cell types, and organs, which is organised in many different databases and/or spread throughout the literature. Existing knowledge, such as genotype-phenotype relations or signal transduction pathways, must be semantically integrated and dynamically organised into structured networks that are connected with clinical and experimental data. Different approaches to this challenge exist, but so far none has proven entirely satisfactory. Results: To address this challenge we previously developed a generic knowledge management framework, BioXM™, which allows the dynamic, graphic generation of domain-specific knowledge representation models based on specific objects and their relations, supporting annotations and ontologies. Here we demonstrate the utility of BioXM for knowledge management in systems biology as part of the EU FP6 BioBridge project on translational approaches to chronic diseases. From clinical and experimental data, text-mining results, and public databases we generate a chronic obstructive pulmonary disease (COPD) knowledge base and demonstrate its use by mining specific molecular networks together with integrated clinical and experimental data. Conclusions: We generate the first semantically integrated, COPD-specific public knowledge base and find that, for the integration of clinical and experimental data with pre-existing knowledge, the configuration-based set-up enabled by BioXM reduced the implementation time and effort for the knowledge base compared to similar systems implemented as classical software development projects. The knowledge base enables the retrieval of sub-networks, including protein-protein interaction, pathway, gene-disease, and gene-compound data, which are used for subsequent data analysis, modelling, and simulation. Pre-structured queries and reports enhance usability; establishing their use in everyday clinical settings requires further simplification with a browser-based interface, which is currently under development.
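A hedged toy, not BioXM itself, of the kind of semantically typed network such a knowledge base holds and the sub-network retrieval it supports, using networkx; all entity and relation names are illustrative.

```python
# Toy semantic network with typed relations and COPD sub-network retrieval.
import networkx as nx

kb = nx.MultiDiGraph()
kb.add_edge("TNF", "COPD", relation="associated_with")    # gene-disease
kb.add_edge("TNF", "NFKB1", relation="interacts_with")    # protein-protein
kb.add_edge("theophylline", "COPD", relation="treats")    # compound-disease

# Retrieve the sub-network of everything directly linked to COPD.
neighbours = set(kb.predecessors("COPD")) | set(kb.successors("COPD")) | {"COPD"}
sub = kb.subgraph(neighbours)
for u, v, data in sub.edges(data=True):
    print(u, data["relation"], v)
```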