867 results for IDE, Domain specific languages, CodeMirror, Eclipse, Xtext


Relevance:

100.00%

Publisher:

Abstract:

The broader goal of the research described here is to automatically acquire diagnostic knowledge from documents in the domain of manual and mechanical assembly of aircraft structures. These documents are treated as a discourse used by experts to communicate with others; it therefore becomes possible to use discourse analysis to enable machine understanding of the text. The research challenge addressed in the paper is to identify documents, or sections of documents, that are potential sources of knowledge. In a subsequent step, domain knowledge will be extracted from these segments. The segmentation task requires partitioning the document into relevant segments and understanding the context of each segment. In discourse analysis, the division of a discourse into segments is achieved through certain indicative clauses, called cue phrases, that signal changes in the discourse context. In formal documents, however, such language may not be used. Hence the use of a domain-specific ontology and an assembly process model is proposed to segregate chunks of the text based on a local context. Elements of the ontology/model and their related terms serve as indicators of the current context of a segment and of changes in context between segments. Local contexts are aggregated over increasingly larger segments to identify whether the document (or portions of it) pertains to the topic of interest, namely assembly. Knowledge acquired through such processes can then be reused during any part of the product lifecycle.
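
To make the segmentation idea concrete, the following is a minimal sketch (not the authors' implementation) that tags sentences with a local context by matching terms from a small, hypothetical assembly ontology and groups adjacent sentences sharing the same dominant concept; the ontology terms and helper names are illustrative assumptions.

import re
from collections import Counter

# Hypothetical, illustrative ontology: concept -> indicator terms.
ONTOLOGY = {
    "fastening": {"rivet", "bolt", "torque", "fastener"},
    "drilling":  {"drill", "bore", "countersink", "burr"},
    "sealing":   {"sealant", "cure", "fillet", "bead"},
}

def local_context(sentence):
    """Return the ontology concept whose terms dominate this sentence, or None."""
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    hits = Counter({c: len(words & terms) for c, terms in ONTOLOGY.items()})
    concept, count = hits.most_common(1)[0]
    return concept if count > 0 else None

def segment(sentences):
    """Group adjacent sentences that share the same local context."""
    segments = []
    for s in sentences:
        ctx = local_context(s)
        if segments and segments[-1][0] == ctx:
            segments[-1][1].append(s)
        else:
            segments.append((ctx, [s]))
    return segments

doc = [
    "Install the rivet and check fastener torque.",
    "Verify bolt seating before the next rivet.",
    "Drill the pilot hole and remove any burr.",
]
for ctx, sents in segment(doc):
    print(ctx, "->", len(sents), "sentence(s)")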

Relevance:

100.00%

Publisher:

Abstract:

Graph algorithms have been shown to possess enough parallelism to keep several computing resources busy, even hundreds of cores on a GPU. Unfortunately, tuning their implementation for efficient execution on a particular hardware configuration of heterogeneous systems consisting of multicore CPUs and GPUs is challenging, time-consuming, and error-prone. To address these issues, we propose a domain-specific language (DSL), Falcon, for implementing graph algorithms that (i) abstracts the hardware, (ii) provides constructs to write explicitly parallel programs at a higher level, and (iii) can work with general algorithms that may change the graph structure (morph algorithms). We illustrate the usage of our DSL to implement local computation algorithms (which do not change the graph structure) and morph algorithms such as Delaunay mesh refinement, survey propagation, and dynamic SSSP on GPUs and multicore CPUs. Using a set of benchmark graphs, we show that the generated code performs close to state-of-the-art hand-tuned implementations.
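
To illustrate the kind of local computation algorithm such DSLs target (a plain Python sketch, not Falcon syntax), here is Bellman-Ford-style SSSP written as a bulk "relax all edges" operator iterated to a fixed point; the edge loop is exactly the part a graph DSL would map onto parallel hardware.

INF = float("inf")

def sssp(num_nodes, edges, source):
    """edges: list of (u, v, w). Returns the distance list from source."""
    dist = [INF] * num_nodes
    dist[source] = 0
    changed = True
    while changed:                      # fixed-point iteration
        changed = False
        for u, v, w in edges:           # in a DSL this edge loop runs in parallel
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 5)]
print(sssp(4, edges, 0))  # [0, 3, 1, 8]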

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the design and implementation of PolyMage, a domain-specific language and compiler for image processing pipelines. An image processing pipeline can be viewed as a graph of interconnected stages which process images successively. Each stage typically performs one of point-wise, stencil, reduction, or data-dependent operations on image pixels. Individual stages in a pipeline typically exhibit abundant data parallelism that can be exploited with relative ease. However, the stages also require high memory bandwidth, preventing effective utilization of the parallelism available on modern architectures. For applications that demand high performance, the traditional options are to use optimized libraries like OpenCV or to optimize manually. While using libraries precludes optimization across library routines, manual optimization accounting for both parallelism and locality is very tedious. The focus of our system, PolyMage, is on automatically generating high-performance implementations of image processing pipelines expressed in a high-level declarative language. Our optimization approach primarily relies on the transformation and code generation capabilities of the polyhedral compiler framework. To the best of our knowledge, this is the first model-driven compiler for image processing pipelines that performs complex fusion, tiling, and storage optimization automatically. Experimental results on a modern multicore system show that the performance achieved by our automatic approach is up to 1.81x better than that achieved through manual tuning in Halide, a state-of-the-art language and compiler for image processing pipelines. For a camera raw image processing pipeline, our performance is comparable to that of a hand-tuned implementation.
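
As a point of reference, here is a deliberately naive two-stage pipeline in NumPy (a point-wise stage feeding a 3x3 stencil stage); the function names and parameters are illustrative only. In this form the intermediate image is fully materialised between stages, which is exactly the memory-bandwidth cost that fusion and tiling in systems such as PolyMage or Halide aim to remove.

import numpy as np

def brighten(img, gain=1.2):
    """Point-wise stage: scale every pixel independently."""
    return np.clip(img * gain, 0.0, 1.0)

def blur3x3(img):
    """Stencil stage: each output pixel reads a 3x3 input neighbourhood."""
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out[1:h-1, 1:w-1] += img[1+dy:h-1+dy, 1+dx:w-1+dx]
    return out / 9.0

img = np.random.rand(256, 256).astype(np.float32)
result = blur3x3(brighten(img))   # naive version materialises the full intermediate image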

Relevance:

100.00%

Publisher:

Abstract:

Traffic classification using machine learning continues to be an active research area. The majority of work in this area uses off-the-shelf machine learning tools and treats them as black-box classifiers. This approach turns all the modelling complexity into a feature selection problem. In this paper, we build a problem-specific solution to the traffic classification problem by designing a custom probabilistic graphical model. Graphical models are a modular framework for designing classifiers that incorporate domain-specific knowledge. More specifically, our solution introduces semi-supervised learning, which means we learn from both labelled and unlabelled traffic flows. We show that our solution performs competitively compared to previous approaches while using less data and simpler features.
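
The graphical model itself is specific to the paper, but the semi-supervised idea can be illustrated with a generic self-training baseline: fit a classifier on the labelled flows, adopt its most confident predictions on unlabelled flows as pseudo-labels, and refit. The flow features, threshold, and class names below are illustrative assumptions, not the paper's model.

import numpy as np
from sklearn.naive_bayes import GaussianNB

def self_training(X_lab, y_lab, X_unlab, threshold=0.95, max_rounds=10):
    """Iteratively adopt confident predictions on unlabelled flows as labels."""
    X_lab, y_lab, X_unlab = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    clf = GaussianNB()
    for _ in range(max_rounds):
        clf.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        pseudo = clf.classes_[proba[confident].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~confident]
    return clf

# Toy flow features: [mean packet size, flow duration]
rng = np.random.default_rng(0)
X_lab = np.array([[1500.0, 0.2], [80.0, 5.0]])
y_lab = np.array([0, 1])                      # 0 = bulk transfer, 1 = interactive
X_unlab = rng.normal([[1400, 0.3], [90, 4.0]], 50, size=(2, 2))
model = self_training(X_lab, y_lab, X_unlab)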

Relevance:

100.00%

Publisher:

Abstract:

Traditional software development captures user needs during requirements analysis. The Web makes this endeavour even harder because of the difficulty of determining who these users are. In an attempt to tackle the heterogeneity of the user base, Web Personalization techniques have been proposed to guide the users’ experience. In addition, Open Innovation allows organisations to look beyond their internal resources to develop new products or improve existing processes. This thesis sits in between, introducing Open Personalization as a means to incorporate actors other than webmasters in the personalization of web applications. The aim is to provide the technological basis for a trustworthy environment in which webmasters and companion actors can collaborate, i.e. "an architecture of participation". Such an architecture depends very much on these actors’ profiles. This work tackles three profiles (software partners, hobby programmers and end users) and proposes three "architectures of participation", one tuned for each profile. Each architecture rests on a different technology: a .NET annotation library based on Inversion of Control for software partners, a Modding Interface in JavaScript for hobby programmers, and finally a domain-specific language for end users. Proof-of-concept implementations are available for all three cases, while a quantitative evaluation is conducted for the domain-specific language.

Relevance:

100.00%

Publisher:

Abstract:

Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis will focus on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area.

This thesis consists of three complementary parts: synthesis, specification, and design. The first section focuses on the synthesis of central and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second section addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller.
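
As an illustration of the kind of requirement such a tool would turn into LTL (an illustrative example, not quoted from the thesis), a typical pair of electric power system properties could be written as

\square\,\neg(c_{\mathrm{gen}} \wedge c_{\mathrm{apu}}) \qquad \square\,\big(\mathit{fail}_{\mathrm{gen}} \rightarrow \lozenge\,\mathit{powered}_{\mathrm{ess}}\big)

read as: the generator and APU contactors are never closed at the same time (no paralleling of AC sources), and every generator failure is eventually followed by the essential bus being re-powered.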

The final section focuses on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real time by two methods: hardware and simulation. Finally, the problem of partial observability and dynamic state estimation is explored. Given a fixed placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with the control logic to infer the state of the system.

Relevance:

100.00%

Publisher:

Abstract:

Background: Little is known about how sitting time, alone or in combination with markers of physical activity (PA), influences mental well-being and work productivity. Given the need to develop workplace PA interventions that target employees' health-related efficiency outcomes, this study examined the associations between self-reported sitting time, PA, mental well-being and work productivity in office employees. Methods: Descriptive cross-sectional study. Spanish university office employees (n = 557) completed a survey measuring socio-demographics, total and domain-specific (work and travel) self-reported sitting time, PA (International Physical Activity Questionnaire, short version), mental well-being (Warwick-Edinburgh Mental Well-Being Scale) and work productivity (Work Limitations Questionnaire). Multivariate linear regression analyses determined associations between the main variables adjusted for gender, age, body mass index and occupation. PA levels (low, moderate and high) were introduced into the model to examine interactive associations. Results: Higher volumes of PA were related to higher mental well-being, higher work productivity and less time spent sitting at work, throughout the working day and while travelling during the week, including at weekends (p < 0.05). Greater levels of sitting during weekends were associated with lower mental well-being (p < 0.05). Similarly, more sitting while travelling at weekends was linked to lower work productivity (p < 0.05). In highly active employees, higher sitting times on work days and occupational sitting were associated with decreased mental well-being (p < 0.05). Higher sitting time while travelling on weekend days was also linked to lower work productivity in the highly active group (p < 0.05). No significant associations were observed in low-active employees. Conclusions: Employees' PA levels exert different influences on the associations between sitting time, mental well-being and work productivity. The specific associations and the broad sweep of evidence in the current study suggest that workplace PA strategies to improve the mental well-being and productivity of all employees should focus on reducing sitting time alongside efforts to increase PA.
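
A minimal sketch of the kind of adjusted model described above (the variable names are illustrative, not the paper's notation):

\text{WellBeing}_i = \beta_0 + \beta_1\,\text{Sitting}_i + \beta_2\,\text{PAlevel}_i + \beta_3\,(\text{Sitting}_i \times \text{PAlevel}_i) + \boldsymbol{\gamma}^{\top}\mathbf{z}_i + \varepsilon_i

where z_i collects the covariates (gender, age, body mass index and occupation) and the interaction term captures how the association between sitting time and well-being varies across the low, moderate and high PA groups.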

Relevance:

100.00%

Publisher:

Abstract:

This doctoral thesis starts from the initial perspective that grammaticalization is restricted to an account of lexical or discursive items that become grammatical items (which would place it within Variation Theory, itself embedded in Sociolinguistic Research), but moves toward an epistemological leap that reshapes that perspective, broadening it to a level from which it can be observed as an autonomous theory, one that investigates borderline and not always discrete phenomena between language and language system, discourse and text, description and prescription, orality and writing, lexicon and grammar. The thesis therefore argues for an epistemological view of the subject, which has so far been treated in a purely ontological way, confined to one (and only one) of the many spectra that the aforementioned broadening can reach: what has been investigated as a mere account can and should, as the thesis intends to show, be expanded into the fabric of a general theory, namely a General Theory of Grammaticalization; this is its general objective. To this end, the thesis draws on philosophers of language who dealt with this human faculty or capacity, directly or indirectly, from its Western beginnings (such as Socrates, Plato and Aristotle), through thinkers more incisively concerned with the cognitive and interactive aspects of language (such as Hegel, Husserl, Saussure, Sapir, Bloomfield, Wittgenstein, Derrida, Chomsky, Labov, Charaudeau, Maingueneau, Ducrot and Coseriu), together with an incursion into grammaticography in the strict sense (such as that undertaken by Dionysius Thrax, Varro, Arnault and Lancelot, Nebrija, Jerônimo Soares Barbosa, Eduardo Carlos Pereira, Said Ali and Bechara) and, naturally, the philosophical contribution of researchers on grammaticalization (such as Meillet, Vendryès, Bréal, Kurilowicz, Traugott, Heine, Hopper and Lehmann). Once it has been shown to be plausible to accept grammaticalization as an autonomous theory, the thesis intends to assign it the instrumental role of an auxiliary methodology for many of the approaches currently employed in research on language; this is its specific objective. For this twofold goal, it will be necessary to understand concepts, categories and prototypes drawn from the Philosophy of Science (Epistemology), from the contrast between the language sciences and other branches of knowledge, and from an immersion in Grammaticology and Grammaticography (and, in some respects, Grammatization and Grammatology) concerning the Portuguese language, and, finally, to defend the view that teaching the Formal (or Normative) Grammar of the language privileges the reflexive and active (full) sense of the uses or acts that language can only reach through mastery of the language system in all its epistemological texture, which generates communication and expressiveness, reasoning and emotion, moving from the concreteness of discourse or orality to the abstraction of an entity with little or no material substance, most clearly represented by writing, its purest stage so to speak, though not excluding the substantiality with which it dialogues incessantly in its constant and dialectical past-future or diversity-homogeneity (thesis and antithesis), from which its present, or its unity (synthesis), emerges.

Relevance:

100.00%

Publisher:

Abstract:

This work applies a variety of multilinear function factorisation techniques to extract appropriate features or attributes from high-dimensional multivariate time series for classification. Recently, a great deal of work has centred on designing time series classifiers using ever more complex feature extraction and machine learning schemes. This paper argues that complex learners and domain-specific feature extraction schemes of this type are not necessarily needed for time series classification, as excellent classification results can be obtained by simply applying a number of existing matrix factorisation or linear projection techniques, which are simple and computationally inexpensive. We highlight this using a geometric separability measure and classification accuracies obtained through experiments on four different high-dimensional multivariate time series datasets.
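
In the spirit of the paper's argument (though not its exact factorisation methods), a simple linear projection followed by a nearest-neighbour classifier can serve as a cheap baseline for multivariate time series; the data below is random and purely illustrative.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# X: (n_samples, n_timesteps, n_channels) multivariate time series,
# flattened to one row per series before the linear projection.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 100, 3))
y = rng.integers(0, 2, size=60)
X_flat = X.reshape(len(X), -1)

clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=1))
clf.fit(X_flat[:40], y[:40])
print("accuracy:", clf.score(X_flat[40:], y[40:]))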

Relevance:

100.00%

Publisher:

Abstract:

Motivated by the design and development challenges of the BART case study, an approach for developing and analyzing a formal model for reactive systems is presented. The approach makes use of a domain-specific language for specifying control algorithms able to satisfy competing properties such as safety and optimality. The domain language, called SPC, offers several key abstractions, such as the state, the profile, and the constraint, to facilitate problem specification. Using a high-level program transformation system such as HATS, being developed at the University of Nebraska at Omaha, specifications in this modelling language can be transformed into ML code. The resulting executable specification can be further refined by applying generic transformations to the abstractions provided by the domain language. Problem-dependent transformations utilizing domain-specific knowledge and properties may also be applied. The result is a significantly more efficient implementation which can be used for simulation and for gaining deeper insight into design decisions and various control policies. The correctness of the transformations can be established using the rewrite-rule based induction theorem prover Rewrite Rule Laboratory, developed at the University of New Mexico.
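
The SPC and HATS details are specific to the paper, but the underlying idea of refinement by rewrite rules can be sketched generically; the rules and expression encoding below are illustrative assumptions, not the paper's transformations.

# A toy illustration of rewrite-based refinement (not SPC or HATS syntax):
# expressions are nested tuples, and a rewrite pass repeatedly applies simple
# algebraic rules until a fixed point is reached.
RULES = [
    (("add", "?x", 0), "?x"),          # x + 0  ->  x
    (("mul", "?x", 1), "?x"),          # x * 1  ->  x
    (("mul", "?x", 0), 0),             # x * 0  ->  0
]

def match(pattern, expr, env):
    if isinstance(pattern, str) and pattern.startswith("?"):
        env[pattern] = expr
        return True
    if isinstance(pattern, tuple) and isinstance(expr, tuple) and len(pattern) == len(expr):
        return all(match(p, e, env) for p, e in zip(pattern, expr))
    return pattern == expr

def substitute(template, env):
    if isinstance(template, str) and template.startswith("?"):
        return env[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, env) for t in template)
    return template

def rewrite(expr):
    if isinstance(expr, tuple):
        expr = tuple(rewrite(e) for e in expr)   # rewrite subterms first
    for pattern, template in RULES:
        env = {}
        if match(pattern, expr, env):
            return rewrite(substitute(template, env))
    return expr

print(rewrite(("add", ("mul", "v", 1), 0)))      # -> 'v'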

Relevance:

100.00%

Publisher:

Abstract:

A new approach to application software development, called "user engineering", is proposed. It is a user-led, domain-oriented method for developing application software based on a component-based software architecture. It emphasizes the leading role of users in application software development and attempts to turn the development process into one of detailed definition by users rather than merely a traditional programming process. It offers a potentially effective route for meeting the ever-growing demand for application software development.

Relevance:

100.00%

Publisher:

Abstract:

One of the most important kinds of queries in Spatial Network Databases (SNDB) for supporting location-based services (LBS) is the shortest path query. Given a query object in a network, e.g. the location of a car on a road network, and a set of objects of interest, e.g. hotels, gas stations, and cars, the shortest path query returns the shortest path from the query object to the objects of interest. Studies of the shortest path query follow two approaches: online processing and preprocessing. Preprocessing approaches assume that the objects of interest are static. This paper proposes a shortest path algorithm with a set of index structures to support moving objects, transforming a dynamic problem into a static one. We focus on road networks; however, our algorithms do not use any domain-specific information and can therefore be applied to any network. The algorithm's complexity is O(k log² i), whereas traditional Dijkstra's complexity is O((i + k)²).
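
The paper's index structures are not reproduced here, but the online-processing baseline it is compared against can be sketched directly; the node names and weights below are arbitrary.

import heapq

def dijkstra(graph, source):
    """Classic online shortest-path baseline on an adjacency-list network.
    graph: {node: [(neighbour, weight), ...]}. Returns distances from source."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

road = {"a": [("b", 2.0), ("c", 5.0)], "b": [("c", 1.0)], "c": []}
print(dijkstra(road, "a"))   # {'a': 0.0, 'b': 2.0, 'c': 3.0}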

Relevance:

100.00%

Publisher:

Abstract:

How to understand users, and ultimately enable them, guided by appropriate tools and environments, to participate directly in requirements analysis, may be the breakthrough for solving a series of problems in software production. Users can be regarded as objects, roles, or agents; giving full play to their initiative greatly benefits the correctness, consistency, and completeness of software requirements. At the same time, freeing software professionals from tedious requirements analysis activities can greatly shorten the software development cycle.

Relevance:

100.00%

Publisher:

Abstract:

To address the problems of frequently changing data entities and low reusability within a domain, the concept of a domain-specific data reference model is proposed. Taking the form of a conceptual model, it specifies and describes the common data models of the domain and serves as the basis for data modelling in the domain's application systems. An architecture for the domain data reference model is given, which partitions the whole model vertically and horizontally so that it can serve as a basis for reuse at different levels. For building the conceptual model, a set of data-model construction steps is proposed, and three data warehouse concepts, "dimension", "dimension hierarchy" and "fact", are introduced to extend the attribute definitions of ER diagrams, providing an effective way to build stable and reusable domain entities.
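
A minimal sketch of how the three borrowed data-warehouse concepts could be represented in a conceptual model; the class and field names are illustrative assumptions, not the reference model itself.

from dataclasses import dataclass, field

@dataclass
class Dimension:
    """A descriptive axis of the domain, e.g. product or region."""
    name: str
    attributes: list = field(default_factory=list)

@dataclass
class DimensionHierarchy:
    """An ordered roll-up path over one dimension, e.g. city -> province -> country."""
    dimension: Dimension
    levels: list = field(default_factory=list)

@dataclass
class Fact:
    """A measurable event described by measures and several dimensions."""
    name: str
    measures: list = field(default_factory=list)
    dimensions: list = field(default_factory=list)

region = Dimension("region", ["city", "province", "country"])
hierarchy = DimensionHierarchy(region, ["city", "province", "country"])
sales = Fact("sales", measures=["amount", "quantity"], dimensions=[region])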

Relevance:

100.00%

Publisher:

Abstract:

There is a debate in cognitive development theory about whether cognitive development is general or domain specific. More and more researchers think that cognitive development is domain specific and have begun to investigate preschoolers' naive theories of basic knowledge systems. Naive biology is one of the core domains, but it is disputed whether preschoolers possess separate naive biological concepts. This research examined preschoolers' development of a naive biological theory at two levels, "growth" and "aliveness", and also examined individual differences and the factors that produce them. Three studies were designed. Study 1 examined preschoolers' cognition of growth, a basic trait of living things, and whether children can use this trait to distinguish living from non-living things and understand its causality. Study 2 investigated preschoolers' distinction between living and non-living things at an integrated level. Study 3 investigated how children make inferences about unfamiliar things using their domain-specific knowledge. The results showed the following: 1. Preschoolers gradually develop a naive theory of biology at the level of growth, but their naive theory at the integrated level has not yet developed. 2. Preschoolers' naive theory of biology is not "all or none": 4- and 5-year-old children distinguish between living and non-living things to some extent, use non-intentional reasons to explain the cause of growth, and give coherent explanations. However, growth has not yet become a criterion for the ontological distinction between living and non-living things for 4- and 5-year-olds, whereas most 6-year-old children can make this distinction; this reflects the developing process of biological cognition. 3. Preschoolers' biological inference is influenced by their domain-specific knowledge: whether they can make inferences about a new trait of living things depends on whether they have the relevant specific knowledge. In the deductive task, children use their knowledge to make inferences about unfamiliar things; 4-year-olds use concrete knowledge more often, while 6-year-olds use generalized knowledge more frequently. 4. Preschoolers' knowledge grows with age, but individuals develop at different speeds in different periods. Urban versus rural educational background affects cognitive performance, although the urban-rural difference in distinguishing living from non-living things diminishes over time. The three age groups gave similar causal explanations in both quantity and quality, suggesting that they are at the same developmental stage. 5. There are intra-individual differences in preschoolers' naive biological cognition. Children perform differently on different tasks and domains, and their cognitive development is sequential: they understand growth earlier than they understand "alive", which is an integrated concept. These intra-individual differences decrease with age.