951 results for Domain-specific languages


Relevance:

90.00%

Publisher:

Abstract:

In this report we structurally and functionally define a binding domain that is involved in protein association and that we have designated EH (for Eps15 homology domain). This domain was identified in the tyrosine kinase substrate Eps15 on the basis of regional conservation with several heterogeneous proteins of yeast and nematode. The EH domain spans about 70 amino acids and shows approximately 60% overall amino acid conservation. We demonstrated the ability of the EH domain to specifically bind cytosolic proteins in normal and malignant cells of mesenchymal, epithelial, and hematopoietic origin. These observations prompted our search for additional EH-containing proteins in mammalian cells. Using an EH domain-specific probe derived from the eps15 cDNA, we cloned and characterized a cDNA encoding an EH-containing protein with overall similarity to Eps15; we designated this protein Eps15r (for Eps15-related). Structural comparison of Eps15 and Eps15r defines a family of signal transducers possessing extensive networking abilities including EH-mediated binding and association with Src homology 3-containing proteins.

Relevance:

90.00%

Publisher:

Abstract:

Developers commonly ask detailed and domain-specific questions about the software systems they are developing and maintaining. Integrated development environments (IDEs) form an essential category of tools for developing software that should support software engineering decision making. Unfortunately, rigid and generic IDEs that focus on low-level programming tasks, that promote code rather than data, and that suppress customization, offer limited support for informed decision making during software development. We propose to improve decision making within IDEs by moving from generic to context-aware IDEs through moldable tools. In this paper, we promote the idea of moldable tools, illustrate it with concrete examples, and discuss future research directions.

Relevance:

90.00%

Publisher:

Abstract:

A new principled, domain-independent watermarking framework is presented. The approach is based on embedding the message in statistically independent sources of the covertext in order to minimise covertext distortion, maximise the information embedding rate, and improve the method's robustness against various attacks. Experiments comparing the performance of the new approach on several standard attacks show it to be competitive with other state-of-the-art domain-specific methods.
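The embedding step of such a scheme can be illustrated with a generic quantization index modulation (QIM) sketch. The decomposition of the covertext into statistically independent coefficients (e.g., via an ICA-style transform) is assumed to have been done already and is outside this sketch; the scheme below is a standard textbook illustration, not the paper's exact method.

```python
# Hypothetical sketch: embed one message bit per coefficient by
# quantizing to even multiples of DELTA for bit 0, odd multiples
# for bit 1 (quantization index modulation). The coefficients are
# assumed to come from an independent-source decomposition.

DELTA = 1.0  # quantization step: larger = more robust, more distortion

def embed_bit(coeff, bit):
    """Snap coeff to an even (bit 0) or odd (bit 1) multiple of DELTA."""
    q = round(coeff / DELTA)
    if q % 2 != bit:
        q += 1
    return q * DELTA

def extract_bit(coeff):
    """Recover the bit from the parity of the quantization index."""
    return int(round(coeff / DELTA)) % 2

coeffs = [0.3, -1.7, 2.2, 5.1]   # illustrative independent coefficients
message = [1, 0, 1, 1]
marked = [embed_bit(c, b) for c, b in zip(coeffs, message)]
print([extract_bit(c) for c in marked])  # [1, 0, 1, 1]
```

Because extraction only needs the parity of the quantization index, the detector is blind: it requires neither the original covertext nor the original coefficients.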

Relevance:

90.00%

Publisher:

Abstract:

Existing semantic search tools have been primarily designed to enhance the performance of traditional search technologies, with little support for ordinary end users who are not necessarily familiar with domain-specific semantic data, ontologies, or SQL-like query languages. This paper presents SemSearch, a search engine that addresses this issue by providing several means to hide the complexity of semantic search from end users, thus making it easy to use and effective.
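The general idea of hiding query-language complexity can be sketched as follows: the user supplies plain keywords, and the system expands them into a formal query behind the scenes. The function and query template below are hypothetical illustrations of that pattern, not SemSearch's actual interface.

```python
# Hypothetical sketch: turn user keywords into a SPARQL query by
# matching them against entity labels, so the user never writes
# SPARQL. Template and function names are illustrative only.

def keywords_to_sparql(subject_keyword, property_keyword=None):
    """Build a label-matching SPARQL query from plain keywords."""
    filters = [f'FILTER(CONTAINS(LCASE(?slabel), "{subject_keyword.lower()}"))']
    lines = [
        "SELECT ?s ?slabel WHERE {",
        "  ?s rdfs:label ?slabel .",
    ]
    if property_keyword:
        lines += [
            "  ?s ?p ?o .",
            "  ?p rdfs:label ?plabel .",
        ]
        filters.append(
            f'FILTER(CONTAINS(LCASE(?plabel), "{property_keyword.lower()}"))'
        )
    return "\n".join(lines + ["  " + f for f in filters] + ["}"])

print(keywords_to_sparql("news", "phd students"))
```

A real system would additionally disambiguate keyword senses and rank the matched entities, but the division of labour is the same: keywords in, formal query out.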

Relevance:

90.00%

Publisher:

Abstract:

Translators wishing to work on specialised texts are traditionally advised to spend much time and effort acquiring specialist knowledge of the domain involved, and for some areas of specialised activity this is clearly essential. For other types of translation-based, domain-specific communication, however, it is possible to develop a systematic approach to the task which allows the production of target texts that are adequate for purpose, across a range of specialised domains, without formal qualifications in those areas. According to Esselink (2000), translation agencies and individual clients tend to prefer a subject expert who also happens to have competence in one or more languages over a trained translator with a high degree of translation competence, including the ability to deal with specialised translation tasks. The problem, for the would-be translator, is persuading prospective clients that he or she is capable of this. This paper offers an overview of the principles used to design training that teaches trainee translators a systematic approach to specialised translation, in order to extend the range of areas in which they can tackle translation without compromising quality or reliability. The approach is described within the context of the functionalist approach developed in particular by Reiss and Vermeer (1984) and Nord (1991, 1997), inter alia.

Relevance:

90.00%

Publisher:

Abstract:

Localisation is the process of taking a product and adapting it to fit the culture in question, which usually involves making it both linguistically and culturally appropriate for the target audience. While there are many areas in video game translation where localisation is a factor, this study focuses on localisation changes in the personalities of fictional characters between the original Japanese version and the English localised version of the video game Final Fantasy XIV: A Realm Reborn and its expansion Heavensward for PC, PS3, and PS4. Specific examples are examined using Satoshi Kinsui's work on yakuwarigo (role language) as the main framework for the study. Five non-playable characters were profiled and had their dialogue transcribed for a comparative analysis comprising the original Japanese text, the officially localised English text, and my own translation of the original Japanese text. Each character was also given a short summary and a reasoned speculation on why these localisation changes might have occurred. The results show instances where translations had been deliberately adjusted to ensure that the content did not cause problems for players overseas; a plausible explanation is that some of the Japanese role language displayed by characters in this game could cause disputes among Western audiences. In conclusion, the study shows that localisation can be a difficult process that requires not only a translator's knowledge of the source and target languages, but also some creativity in writing, to ensure that players have a comparable experience without causing a rift in the fanbase.

Relevance:

90.00%

Publisher:

Abstract:

Secure Multi-party Computation (MPC) enables a set of parties to collaboratively compute, using cryptographic protocols, a function over their private data in a way that the participants do not see each other's data; they see only the final output. Typical MPC examples include statistical computations over joint private data, private set intersection, and auctions. While these applications are examples of monolithic MPC, richer MPC applications move between "normal" (i.e., per-party local) and "secure" (i.e., joint, multi-party secure) modes repeatedly, resulting overall in mixed-mode computations. For example, we might use MPC to implement the role of the dealer in a game of mental poker -- the game will be divided into rounds of local decision-making (e.g., bidding) and joint interaction (e.g., dealing). Mixed-mode computations are also used to improve performance over monolithic secure computations. Starting with the Fairplay project, several MPC frameworks have been proposed in the last decade to help programmers write MPC applications in a high-level language while the toolchain manages the low-level details. However, these frameworks are either not expressive enough to allow writing mixed-mode applications or lack formal specification and reasoning capabilities, diminishing the parties' trust in such tools and in the programs written using them. Furthermore, none of the frameworks provides a verified toolchain to run MPC programs, leaving open the potential for security holes that can compromise the privacy of parties' data. This dissertation presents language-based techniques to make MPC more practical and trustworthy. First, it presents the design and implementation of a new MPC domain-specific language, called Wysteria, for writing rich mixed-mode MPC applications.
Wysteria provides several benefits over previous languages, including a conceptual single thread of control, generic support for more than two parties, high-level abstractions for secret shares, and a fully formalized type system and operational semantics. Using Wysteria, we have implemented several MPC applications, including, for the first time, a card-dealing application. The dissertation next presents Wys*, an embedding of Wysteria in F*, a full-featured verification-oriented programming language. Wys* improves on Wysteria along three lines: (a) it enables programmers to formally verify the correctness and security properties of their programs -- as far as we know, Wys* is the first language to provide verification capabilities for MPC programs; (b) it provides a partially verified toolchain to run MPC programs; and (c) it enables MPC programs to use, with no extra effort, standard language constructs from the host language F*, making it more usable and scalable. Finally, the dissertation develops static analyses that help optimize monolithic MPC programs into mixed-mode MPC programs while providing privacy guarantees similar to those of the monolithic versions.
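The kind of "secure" joint computation described above can be illustrated with a minimal additive secret-sharing sketch (not Wysteria or Wys* code, which looks quite different): each party splits its private input into random shares, parties combine shares locally, and only the joint total is ever reconstructed.

```python
import random

# Illustrative sketch of one MPC building block: additive secret
# sharing over a public prime modulus. No single share reveals
# anything about an input; only the recombined sum is public.

P = 2**31 - 1  # public prime modulus

def share(secret, n_parties):
    """Split `secret` into n_parties random additive shares mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def secure_sum(private_inputs):
    """Each party shares its input; party j locally adds the j-th
    share of every input; recombining the partial sums yields only
    the total, never an individual input."""
    n = len(private_inputs)
    all_shares = [share(x, n) for x in private_inputs]
    partial = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]
    return sum(partial) % P

print(secure_sum([12, 30, 7]))  # 49
```

A mixed-mode program interleaves local steps (choosing an input, acting on the output) with secure blocks like `secure_sum`; the frameworks discussed above exist precisely to manage that interleaving and its communication safely.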

Relevance:

90.00%

Publisher:

Abstract:

Artificial Intelligence (AI) is gaining ever more ground in every sphere of human life, to the point that it is now even used to pass sentences in courts. The use of AI in the field of law is, however, deemed quite controversial: it could provide more objectivity, yet it could also entail abuses of power, given that bias in the algorithms behind AI may reduce accuracy. As a product of AI, machine translation is being increasingly used in the field of law too, in order to translate laws, judgements, contracts, etc. between different languages and different legal systems. In the legal setting of company law, accuracy of content and suitability of terminology play a crucial role within a translation task, as any addition or omission of content, or any mistranslation of terms, could entail legal consequences for companies. The purpose of the present study is first to assess which neural machine translation system, DeepL or ModernMT, produces a more suitable translation from Italian into German of the atto costitutivo of an Italian s.r.l. in terms of accuracy of content and correctness of terminology, and then to assess which translation proves closer to a human reference translation. To achieve these aims, human and automatic evaluations were carried out based on the MQM taxonomy and the BLEU metric. Results of both evaluations show an overall better performance by ModernMT in terms of content accuracy, suitability of terminology, and closeness to a human translation. As emerged from the MQM-based evaluation, its accuracy and terminology errors account for just 8.43% (as opposed to DeepL's 9.22%), while it obtains an overall BLEU score of 29.14 (against DeepL's 27.02). The overall performances, however, show that machines still face barriers in overcoming semantic complexity, tackling polysemy, and choosing domain-specific terminology, which suggests that the discrepancy with human translation may still be remarkable.
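The BLEU metric used in the automatic evaluation can be sketched as modified n-gram precision against a reference, scaled by a brevity penalty. The simplified single-reference version below is only an illustration of the idea, not the implementation behind the reported scores (which use higher-order n-grams and corpus-level statistics).

```python
import math
from collections import Counter

# Minimal single-reference BLEU sketch: geometric mean of modified
# n-gram precisions (clipped by reference counts) times a brevity
# penalty that punishes candidates shorter than the reference.

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts, ref_counts = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # smoothed
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect match scores 1.0, and a candidate sharing no n-grams with the reference scores near 0; the corpus-level scores of 29.14 and 27.02 above are conventionally reported on a 0-100 scale.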

Relevance:

90.00%

Publisher:

Abstract:

The ability to create hybrid systems that blend different paradigms has become a requirement for complex AI systems, which are usually made of more than one component. In this way it is possible to exploit the advantages of each paradigm, combining, for example, symbolic and non-symbolic approaches. In particular, non-symbolic approaches are often exploited for their efficiency, effectiveness, and ability to manage large amounts of data, while symbolic approaches are exploited to ensure aspects related to explainability, fairness, and trustworthiness in general. This thesis lies in this context, in particular in the design and development of symbolic technologies that can be easily integrated with, and made interoperable with, other AI technologies. 2P-Kt is a symbolic ecosystem developed for this purpose: it provides a logic-programming (LP) engine which can be easily extended and customized to deal with specific needs. The aim of this thesis is to extend 2P-Kt to support constraint logic programming (CLP), one of the main paradigms for solving highly combinatorial problems given a declarative problem description and a general constraint-propagation engine. A real case study concerning school timetabling is described to show a practical usage of the implemented CLP(FD) library. Since CLP represents only one particular way of extending LP to domain-specific scenarios, this thesis also presents a more general framework, Labelled Prolog, which extends LP with labelled terms and, in particular, labelled variables. The framework shows how it is possible to unify the many variations and extensions of LP under a single language, reducing the huge number of existing languages and libraries and focusing instead on how to manage different domain needs using labels, which can be associated with every kind of term. A mapping of CLP into Labelled Prolog is also discussed, as well as the benefits of the proposed approach.
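The core CLP(FD) mechanism, finite integer domains narrowed by constraint propagation before any search takes place, can be sketched generically as follows (an illustration in Python, not 2P-Kt's actual API):

```python
# Illustrative sketch of finite-domain constraint propagation:
# variable domains are sets of integers, and propagating X < Y
# prunes every value that cannot appear in any solution.

def propagate_less_than(dom_x, dom_y):
    """Make X < Y arc-consistent: keep x only if some y > x exists
    in Y's domain, then keep y only if some x < y survives in X's."""
    new_x = {x for x in dom_x if any(y > x for y in dom_y)}
    new_y = {y for y in dom_y if any(x < y for x in new_x)}
    return new_x, new_y

# X in 1..5, Y already narrowed to {1, 2} by other constraints:
dx, dy = propagate_less_than({1, 2, 3, 4, 5}, {1, 2})
print(sorted(dx), sorted(dy))  # [1] [2]
```

In a full solver, propagators like this one run to a fixpoint over a network of constraints (all-different, arithmetic, etc.), and search only branches on values that survive pruning, which is what makes CLP(FD) effective on problems like school timetabling.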

Relevance:

80.00%

Publisher:

Abstract:

OBJECTIVE: This study aimed to assess the survival and quality-of-life evolution of patients subjected to surgical excision of oral and oropharyngeal squamous cell carcinoma. MATERIAL AND METHODS: Forty-seven patients treated at a Brazilian healthcare unit specialized in head and neck surgery between 2006 and 2007 were enrolled in the study. Data gathering comprised reviewing hospital files and applying the University of Washington Quality of Life (UW-QOL) questionnaire before and 1 year after surgery. The comparative analysis used Poisson regression to assess factors associated with survival and a paired t-test to compare preoperative and 1-year postoperative QOL ratings. RESULTS: One year after surgery, 7 patients could not be found (cohort dropout), 15 had died, and 25 completed the UW-QOL again. The risk of death was associated with regional metastasis prior to surgery (relative risk [RR]=2.18; 95% confidence interval [CI]=1.09-5.17) and tumor size T3 or T4 (RR=2.30; 95%CI=1.05-5.04). Survivors presented significantly (p<0.05) poorer overall and domain-specific ratings of quality of life. Chewing presented the largest reduction, from 74.0 before surgery to 34.0 one year later; anxiety was the only domain whose average rating increased (from 36.0 to 70.7). CONCLUSIONS: The prospective assessment of survival and quality of life may help anticipate interventions aimed at reducing the incidence of functional limitations in patients with oral and oropharyngeal cancer.

Relevance:

80.00%

Publisher:

Abstract:

In this study, the effectiveness of a group-based attention and problem-solving (APS) treatment approach to executive impairments in patients with frontal lobe lesions was investigated. Thirty participants with lesions in the frontal lobes, 16 with left frontal (LF) and 14 with right frontal (RF) lesions, were allocated to three groups of 10 participants each. The APS treatment was initially compared to two control conditions, an information/education (IE) approach and treatment-as-usual or traditional rehabilitation (TR), with each of the control groups subsequently receiving the APS intervention in a crossover design. This design allowed the treatment to be evaluated through assessment before and after treatment and at follow-up six months later. There was an improvement on some executive and functional measures after the implementation of the APS programme in all three groups. Size, and to a lesser extent laterality, of lesion affected baseline performance on measures of executive function, but there was no apparent relationship between size, laterality, or site of lesion and the level of benefit from the treatment intervention. The results are discussed in terms of models of executive functioning and the effectiveness of domain-specific interventions in the rehabilitation of executive dysfunction.

Relevance:

80.00%

Publisher:

Abstract:

This paper describes a practical application of MDA and reverse engineering based on a domain-specific modelling language. A well defined metamodel of a domain-specific language is useful for verification and validation of associated tools. We apply this approach to SIFA, a security analysis tool. SIFA has evolved as requirements have changed, and it has no metamodel. Hence, testing SIFA’s correctness is difficult. We introduce a formal metamodelling approach to develop a well-defined metamodel of the domain. Initially, we develop a domain model in EMF by reverse engineering the SIFA implementation. Then we transform EMF to Object-Z using model transformation. Finally, we complete the Object-Z model by specifying system behavior. The outcome is a well-defined metamodel that precisely describes the domain and the security properties that it analyses. It also provides a reliable basis for testing the current SIFA implementation and forward engineering its successor.

Relevance:

80.00%

Publisher:

Abstract:

The plasma membrane of differentiated skeletal muscle fibers comprises the sarcolemma, the transverse (T) tubule network, and the neuromuscular and muscle-tendon junctions. We analyzed the organization of these domains in relation to defined surface markers: beta-dystroglycan, dystrophin, and caveolin-3. These markers were shown to exhibit highly organized arrays along the length of the fiber. Caveolin-3 and beta-dystroglycan/dystrophin showed distinct, but to some extent overlapping, labeling patterns, and both markers left transverse tubule openings clear. This labeling pattern revealed microdomains over the entire plasma membrane, with the exception of the neuromuscular and muscle-tendon junctions, which formed distinct demarcated macrodomains. Our results suggest that the entire plasma membrane of mature muscle comprises a mosaic of T tubule domains together with sarcolemmal caveolae and beta-dystroglycan domains. The domains identified with these markers were examined with respect to the targeting of viral proteins and other expressed domain-specific markers. We found that each marker protein was targeted to distinct microdomains. The macrodomains were intensely labeled with all our markers. Replacing the cytoplasmic tail of the vesicular stomatitis virus glycoprotein with that of CD4 resulted in retargeting from one domain to another. The domain-specific protein distribution at the muscle cell surface may be generated by targeting pathways requiring specific sorting information, but this trafficking differs from the conventional apical-basolateral division. (C) 2001 Academic Press.

Relevance:

80.00%

Publisher:

Abstract:

OBJECTIVE: To review studies on the readability of package leaflets of medicinal products for human use. METHODS: We conducted a systematic literature review covering 2008 to 2013, using the keywords "Readability and Package Leaflet" and "Readability and Package Insert" in the academic search engine Biblioteca do Conhecimento Online, which comprises different bibliographic resources/databases. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) criteria were applied to prepare the draft of the report. Quantitative and qualitative original studies were included; opinion or review studies, and studies not written in English, Portuguese, Italian, French, or Spanish, were excluded. RESULTS: We identified 202 studies, of which 180 were excluded and 22 were included (two enrolling healthcare professionals, 10 enrolling other types of participants, including patients, three focused on adverse reactions, and seven descriptive studies). The package leaflets presented various readability problems, such as complex and difficult-to-understand texts, small font sizes, or few illustrations. The main methods used to assess the readability of package leaflets were usability tests or legibility formulae. Limitations of these methods included the reduced number of participants; the lack of readability formulas validated for specific languages (e.g., Portuguese); and the absence of assessments of patients' literacy, health knowledge, cognitive skills, levels of satisfaction, and opinions. CONCLUSIONS: Overall, the package leaflets presented various readability problems. This review also identified some methodological limitations, including the participation of a limited number of patients and healthcare professionals, the absence of prior assessment of participants' literacy, mood, or sense of satisfaction, and the predominance of studies not based on role-plays about the use of medicines. These limitations should be avoided in future studies and considered when interpreting the results.
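As an example of the legibility formulae such studies apply to leaflet texts, the Flesch Reading Ease score for English can be computed as follows. The syllable counter here is a naive vowel-group heuristic, so the scores it produces are rough estimates; validated implementations (and language-specific variants) use proper syllabification.

```python
import re

# Sketch of a legibility formula: Flesch Reading Ease for English.
# Higher scores mean easier text; the constants are specific to
# English and would need re-calibration for other languages.

def count_syllables(word):
    """Naive heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)
```

Short sentences of monosyllables score above 100, while dense clinical prose with long sentences and polysyllabic terms can score below 30, which is one quantitative way the reviewed studies flag hard-to-read leaflets.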

Relevance:

80.00%

Publisher:

Abstract:

The clinical content of administrative databases includes, among other elements, patient demographic characteristics and codes for diagnoses and procedures. The data in these databases is standardized, clearly defined, readily available, less expensive than data collected by other means, and normally covers hospitalizations in entire geographic areas. Although it has some limitations, this data is often used to evaluate the quality of healthcare. Under these circumstances, the quality of the data itself, for instance its errors or its completeness, is of central importance and should never be ignored. Both the minimization of data quality problems and a deep knowledge of this data (e.g., how to select a patient group) are important for users in order to trust and correctly interpret results. In this paper we present, discuss, and give recommendations for some problems found in these administrative databases. We also present a simple tool that can be used to screen the quality of data through the use of domain-specific data quality indicators. These indicators can contribute significantly to better data, help take steps towards a continuous increase in data quality and, ultimately, lead to better-informed decision-making.
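Domain-specific data quality indicators of the kind described can be sketched as simple per-field checks over the records. The field names and the code pattern below are hypothetical illustrations of the idea, not the paper's actual tool or a complete ICD validator.

```python
import re

# Hypothetical sketch: two data quality indicators for an
# administrative hospital database -- completeness of the diagnosis
# field and validity of its codes against an ICD-10-like shape
# (one letter, two digits, optional decimal subcode).

VALID_DX = re.compile(r"^[A-Z]\d{2}(\.\d{1,2})?$")

def quality_indicators(records):
    """Return the fraction of records with a diagnosis present,
    and the fraction without a malformed diagnosis code."""
    total = len(records)
    missing = sum(1 for r in records if not r.get("diagnosis"))
    invalid = sum(
        1 for r in records
        if r.get("diagnosis") and not VALID_DX.match(r["diagnosis"])
    )
    return {
        "completeness_diagnosis": 1 - missing / total,
        "validity_diagnosis": 1 - invalid / total,
    }

sample = [
    {"diagnosis": "I21.0"},  # valid code
    {"diagnosis": "bad"},    # malformed code
    {"diagnosis": None},     # missing value
    {"diagnosis": "J45"},    # valid code
]
print(quality_indicators(sample))
```

Tracking such indicators over time (per hospital, per year) is what turns one-off screening into the continuous increase in data quality the paper argues for.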