831 results for Domain-specific analysis
Abstract:
Most stencil computations allow tile-wise concurrent start, i.e., there always exists a face of the iteration space and a set of tiling hyperplanes such that all tiles along that face can be started concurrently. This provides load balance and maximizes parallelism. However, existing automatic tiling frameworks often choose hyperplanes that lead to pipelined start-up and load imbalance. We address this issue with a new tiling technique that ensures concurrent start-up as well as perfect load balance whenever possible. We first provide necessary and sufficient conditions on tiling hyperplanes to enable concurrent start for programs with affine data accesses. We then provide an approach to find such hyperplanes. Experimental evaluation on a 12-core Intel Westmere shows that our code is able to outperform a tuned domain-specific stencil code generator by 4% to 27%, and previous compiler techniques by a factor of 2x to 10.14x.
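The paper's exact conditions are not reproduced here, but a common way to state this kind of criterion in the polyhedral literature is that concurrent start along a face with normal f is possible when f is a strictly positive combination of the tiling hyperplane normals; with only a non-negative combination, start-up is pipelined. A minimal sketch, assuming as many hyperplanes as dimensions so a direct solve suffices:

```python
import numpy as np

def allows_concurrent_start(hyperplanes, face_normal):
    """Illustrative check: is face_normal a strictly positive combination
    of the tiling hyperplane normals? (Square systems only.)"""
    H = np.array(hyperplanes, dtype=float)  # rows are hyperplane normals
    f = np.array(face_normal, dtype=float)
    lam = np.linalg.solve(H.T, f)           # solve sum_i lam_i * h_i = f
    return bool(np.all(lam > 1e-9)), lam

# 2-D iteration space (t, i) of a 1-D stencil; the face t = 0 has normal (1, 0).
face = (1, 0)

# Classic time-skewed hyperplanes (1,0), (1,1): lambda = (1, 0), so start-up
# along t = 0 is pipelined.
print(allows_concurrent_start([(1, 0), (1, 1)], face))

# Diamond-style hyperplanes (1,1), (1,-1): lambda = (0.5, 0.5), so every tile
# along t = 0 can start concurrently.
print(allows_concurrent_start([(1, 1), (1, -1)], face))
```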
Abstract:
Identification of virus-encoded proteins that interact with RNA-dependent RNA polymerase (RdRp) is an important step towards unraveling the mechanism of replication. Sesbania mosaic virus (SeMV) RdRp was shown to interact strongly with the p10 domain of polyprotein 2a and moderately with the protease domain. Mutational analysis suggested that the C-terminal disordered domain of RdRp is involved in the interaction with p10. Coexpression of full-length RdRp and p10 resulted in the formation of an RdRp-p10 complex that showed significantly higher polymerase activity than RdRp alone. Interestingly, CΔ43 RdRp (lacking the C-terminal 43 residues) also showed a similar increase in activity. Thus, p10 acts as a positive regulator of RdRp by interacting with the C-terminal disordered domain of RdRp.
Abstract:
The broader goal of the research described here is to automatically acquire diagnostic knowledge from documents in the domain of manual and mechanical assembly of aircraft structures. These documents are treated as a discourse used by experts to communicate with others, which makes it possible to use discourse analysis to enable machine understanding of the text. The research challenge addressed in this paper is to identify documents, or sections of documents, that are potential sources of knowledge; in a subsequent step, domain knowledge will be extracted from these segments. The segmentation task requires partitioning the document into relevant segments and understanding the context of each segment. In discourse analysis, the division of a discourse into segments is achieved through indicative clauses called cue phrases, which signal changes in the discourse context. In formal documents, however, such language may not be used. Hence the use of a domain-specific ontology and an assembly process model is proposed to segregate chunks of the text based on a local context. Elements of the ontology/model and their related terms serve as indicators of the current context of a segment and of changes in context between segments. Local contexts are aggregated over increasingly larger segments to identify whether the document (or a portion of it) pertains to the topic of interest, namely assembly. Knowledge acquired through such processes enables the acquisition and reuse of knowledge during any part of a product's lifecycle.
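A minimal sketch of this kind of ontology-driven segmentation, with an invented three-concept term list standing in for the assembly ontology and process model: each sentence is tagged with the concept whose terms it mentions most, and contiguous sentences sharing a local context form one segment.

```python
import re

# Hypothetical mini-ontology: concept -> indicator terms (stand-ins for the
# domain-specific ontology / assembly process model described above).
ONTOLOGY = {
    "fastening":  {"rivet", "bolt", "torque", "fastener"},
    "alignment":  {"jig", "shim", "datum", "alignment"},
    "inspection": {"gauge", "defect", "tolerance", "inspect"},
}

def local_context(sentence):
    """Return the concept whose terms best match the sentence, or None."""
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    scores = {c: len(words & terms) for c, terms in ONTOLOGY.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def segment(sentences):
    """Group contiguous sentences that share the same local context."""
    segments = []
    for s in sentences:
        ctx = local_context(s)
        if segments and segments[-1][0] == ctx:
            segments[-1][1].append(s)      # same context: extend segment
        else:
            segments.append((ctx, [s]))    # context change: new segment
    return segments

doc = [
    "Install the rivet and check the torque on each fastener.",
    "A loose bolt indicates insufficient torque.",
    "Place the shim against the datum before alignment.",
]
for ctx, sents in segment(doc):
    print(ctx, "->", sents)
```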
Abstract:
Traditional software development captures user needs during requirements analysis. The Web makes this endeavour even harder due to the difficulty of determining who these users are. In an attempt to tackle the heterogeneity of the user base, Web Personalization techniques have been proposed to guide the users' experience. In addition, Open Innovation allows organisations to look beyond their internal resources to develop new products or improve existing processes. This thesis sits in between, introducing Open Personalization as a means to incorporate actors other than webmasters in the personalization of web applications. The aim is to provide the technological basis for a trusted environment in which webmasters and companion actors can collaborate, i.e. "an architecture of participation". Such an architecture depends heavily on these actors' profiles. This work tackles three profiles (software partners, hobby programmers, and end users) and proposes an "architecture of participation" tuned for each. Each architecture rests on different technologies: a .NET annotation library based on Inversion of Control for software partners, a Modding Interface in JavaScript for hobby programmers, and a domain-specific language for end users. Proof-of-concept implementations are available for all three cases, and a quantitative evaluation is conducted for the domain-specific language.
Abstract:
Dynamin-Related Protein 1 (Drp1), a large GTPase of the dynamin superfamily, is required for mitochondrial fission in healthy and apoptotic cells. Drp1 activation is a complex process that involves translocation from the cytosol to the mitochondrial outer membrane (MOM) and assembly into rings/spirals at the MOM, leading to membrane constriction/division. Like the dynamins, Drp1 contains GTPase (G), bundle signaling element (BSE) and stalk domains. However, instead of the lipid-interacting Pleckstrin Homology (PH) domain present in the dynamins, Drp1 contains the so-called B insert or variable domain, which has been suggested to play an important role in Drp1 regulation. Different proteins have been implicated in Drp1 recruitment to the MOM, although how MOM-localized Drp1 acquires its fully functional status remains poorly understood. We previously found that Drp1 can interact with pure lipid bilayers enriched in the mitochondrion-specific phospholipid cardiolipin (CL). Building on that study, we now explore the specificity and functional consequences of this interaction. We show that a four-lysine module located within the B insert of Drp1 interacts preferentially with CL over other anionic lipids. This interaction dramatically enhances Drp1 oligomerization and assembly-stimulated GTP hydrolysis. Our results add significantly to a growing body of evidence indicating that CL is an important regulator of many essential mitochondrial functions.
Abstract:
Understanding users, and ultimately enabling them, guided by appropriate tools and environments, to participate directly in requirements analysis activities, may be the breakthrough for solving a series of problems in the software production process. Users can be regarded as objects, roles, or agents; giving full play to their initiative greatly benefits the correctness, consistency, and completeness of software requirements. At the same time, freeing software professionals from tedious requirements analysis activities can greatly shorten the software development cycle.
Abstract:
Toll-like receptors (TLRs) are an ancient family of pattern recognition receptors that show homology with the Drosophila Toll protein and play key roles in detecting various non-self substances and then initiating and activating the immune system. In this report, the full-length sequence of the first bivalve TLR (named CfToll-1) is presented. CfToll-1 was originally identified as an EST (expressed sequence tag) fragment from a cDNA library of the Zhikong scallop (Chlamys farreri). Its complete sequence was obtained through construction of a Genome Walker library and 5' RACE (rapid amplification of cDNA ends). The full-length cDNA of CfToll-1 consisted of 4308 nucleotides with a poly(A) tail, encoding a putative protein of 1198 amino acids, with a 5' UTR (untranslated region) of 211 bp and a 3' UTR of 500 bp. The predicted amino acid sequence comprised an extracellular domain with a potential signal peptide, nineteen leucine-rich repeats (LRR), two LRR C-terminal (LRRCT) motifs and an LRR N-terminal (LRRNT) motif, followed by a transmembrane segment of 20 amino acids and a cytoplasmic region of 138 amino acids containing the Toll/IL-1R (TIR) domain. The deduced amino acid sequence of CfToll-1 was homologous to Drosophila melanogaster Tolls (DmTolls), with 23-35% similarity over the full-length amino acid sequence and 30-54% in the TIR domain. Phylogenetic analysis of CfToll-1 with other known TLRs revealed that CfToll-1 is closely related to DmTolls. Analysis of the tissue-specific expression of the CfToll-1 gene by real-time PCR showed that the transcripts were constitutively expressed in haemocytes, muscle, mantle, heart, gonad and gill. The temporal expression of CfToll-1 in mixed primary cultured haemocytes was observed after the haemocytes were treated with 1 μg ml⁻¹ or 100 ng ml⁻¹ lipopolysaccharide (LPS). The expression of CfToll-1 was up-regulated, increasing about 2-fold at 6 h, with the 1 μg ml⁻¹ LPS treatment, and was down-regulated with the 100 ng ml⁻¹ treatment. These results indicate that the expression of CfToll-1 can be regulated by LPS in a dose-dependent manner.
Abstract:
Srinivasan, A., King, R. D., and Bain, M. E. (2003). An Empirical Study of the Use of Relevance Information in Inductive Logic Programming. Journal of Machine Learning Research, 4(Jul):369–383.
Abstract:
snBench is a platform on which novice users compose and deploy distributed Sense and Respond programs for simultaneous execution on a shared, distributed infrastructure. It is therefore imperative that we be able to (1) verify the safety/correctness of newly submitted tasks and (2) derive the resource requirements of these tasks so that correct allocation may occur. To achieve these goals we have established a multi-dimensional sized type system for our functional-style Domain Specific Language (DSL) called Sensor Task Execution Plan (STEP). In such a type system, data types are annotated with a vector of size attributes (e.g., upper and lower size bounds). Tracking multiple size aspects proves essential in a system in which Images are manipulated as a first-class data type, since image manipulation functions may have specific minimum and/or maximum resolution restrictions on the input they can correctly process. Through static analysis of STEP instances we not only verify basic type safety and establish upper computational resource bounds (i.e., time and space), but also derive and solve data and resource sizing constraints (e.g., Image resolution, camera capabilities) from the implicit constraints embedded in program instances. In fact, the static methods presented here have benefits beyond their application to Image data and may be extended to other data types that require tracking multiple dimensions (e.g., image "quality", video frame rate or aspect ratio, audio sampling rate). In this paper we present the syntax and semantics of our functional language, our type system that derives costs and resource/data constraints, and, through both formalism and specific details of our implementation, concrete examples of how the constraints and sizing information are used in practice.
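STEP's actual type system is defined in the paper; the toy sketch below only conveys the core idea of size-annotated types. An Image type carries resolution bounds, and checking a pipeline intersects the source's bounds with each function's minimum/maximum resolution restrictions, rejecting a composition whose constraint set becomes empty. All names and bounds are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ImageType:
    """An Image type annotated with a vector of size attributes:
    inclusive lower/upper bounds on width and height."""
    min_w: int
    max_w: int
    min_h: int
    max_h: int

    def meet(self, other):
        """Intersect two sized types; None means no size satisfies both."""
        t = ImageType(max(self.min_w, other.min_w), min(self.max_w, other.max_w),
                      max(self.min_h, other.min_h), min(self.max_h, other.max_h))
        return t if t.min_w <= t.max_w and t.min_h <= t.max_h else None

# Hypothetical operator input constraints.
FACE_DETECT = ImageType(320, 4096, 240, 4096)   # needs at least 320x240
THUMBNAIL   = ImageType(1, 320, 1, 240)         # accepts at most 320x240

def check_pipeline(source, stages):
    """Statically check a pipeline by folding `meet` over stage constraints."""
    t = source
    for name, constraint in stages:
        t = t.meet(constraint)
        if t is None:
            return f"type error at {name}: no resolution satisfies all bounds"
    return f"ok: images constrained to {t}"

camera = ImageType(640, 1920, 480, 1080)        # camera capability bounds
print(check_pipeline(camera, [("face_detect", FACE_DETECT)]))
print(check_pipeline(camera, [("face_detect", FACE_DETECT),
                              ("thumbnail", THUMBNAIL)]))
```

The second pipeline is rejected because the camera's minimum output (640x480) can never satisfy the thumbnail stage's 320x240 ceiling; the surviving constraint set also tells a scheduler which camera capabilities a task actually requires.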
Abstract:
As the commoditization of sensing, actuation and communication hardware increases, so does the potential for dynamically tasked sense-and-respond networked systems (i.e., Sensor Networks or SNs) to replace existing disjoint and inflexible special-purpose deployments (closed-circuit security video, anti-theft sensors, etc.). While various solutions have emerged for many individual SN-centric challenges (e.g., power management, communication protocols, role assignment), perhaps the largest remaining obstacle to widespread SN deployment is that those who wish to deploy, utilize, and maintain a programmable Sensor Network lack the programming and systems expertise to do so. The contributions of this thesis center on the design, development and deployment of the SN Workbench (snBench). snBench embodies an accessible, modular programming platform coupled with a flexible and extensible run-time system that, together, support the entire life-cycle of distributed sensory services. As it is impossible to find a one-size-fits-all programming interface, this work advocates the use of tiered layers of abstraction that enable a variety of high-level, domain-specific languages to be compiled to a common (thin-waist) tasking language; this common tasking language is statically verified and can subsequently be re-translated, if needed, for execution on a wide variety of hardware platforms. snBench provides: (1) a common sensory tasking language (Instruction Set Architecture) powerful enough to express complex SN services, yet simple enough to be executed by highly constrained resources with soft, real-time constraints; (2) a prototype high-level language (and corresponding compiler) that illustrates the utility of the common tasking language and the tiered programming approach in this domain; (3) an execution environment and run-time support infrastructure that abstract a collection of heterogeneous resources into a single virtual Sensor Network, tasked via the common tasking language; and (4) novel formal methods (i.e., static analysis techniques) that verify safety properties and infer implicit resource constraints to facilitate resource allocation for new services. This thesis presents these components in detail, along with two case studies: the use of snBench to integrate physical and wireless network security, and its use as the foundation for semester-long student projects in a graduate-level Software Engineering course.
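As a toy illustration of the tiered approach (the opcodes below are invented, not snBench's actual Instruction Set Architecture): a one-line high-level rule is lowered to a flat, thin-waist tasking program simple enough for a constrained node to interpret sequentially.

```python
def compile_rule(sensor, predicate, threshold, action, target):
    """Lower a high-level 'when <sensor> <predicate> <threshold> do <action>'
    rule into a flat list of (opcode, *args) tasking instructions."""
    return [
        ("SENSE", sensor),              # read the named sensor
        ("CMP", predicate, threshold),  # compare the reading to the threshold
        ("JMP_FALSE", 2),               # on false, advance pc by 2 (to YIELD)
        ("ACT", action, target),        # perform the response action
        ("YIELD",),                     # return control to the node scheduler
    ]

program = compile_rule("motion", ">", 0.8, "notify", "security@example.org")
for instr in program:
    print(instr)
```

The same flat program could be statically checked once and then re-translated for different hardware back-ends, which is the role the common tasking language plays in snBench.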
Abstract:
NetSketch is a tool that enables the specification of network-flow applications and the certification of desirable safety properties imposed thereon. NetSketch is conceived to assist system integrators in two types of activities: modeling and design. As a modeling tool, it enables the abstraction of an existing system so as to retain sufficient detail for future analysis of safety properties. As a design tool, NetSketch enables the exploration of alternative safe designs as well as the identification of minimal requirements for outsourced subsystems. NetSketch embodies a lightweight formal verification philosophy, whereby the power (but not the heavy machinery) of a rigorous formalism is made accessible to users via a friendly interface. NetSketch does so by exposing tradeoffs between exactness of analysis and scalability, and by combining traditional whole-system analysis with a more flexible compositional analysis approach based on a strongly-typed Domain-Specific Language (DSL) for specifying network configurations at various levels of sketchiness, along with invariants that need to be enforced thereupon. In this paper, we overview NetSketch, highlight its salient features, and illustrate how it could be used in applications, including the management/shaping of traffic flows in a vehicular network (as a proxy for CPS applications) and in a streaming media network (as a proxy for Internet applications). In a companion paper, we define the formal system underlying the operation of NetSketch, in particular the DSL behind NetSketch's user interface when used in "sketch mode", and prove its soundness relative to appropriately defined notions of validity.
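One way to picture the compositional analysis (a sketch under invented names, not NetSketch's DSL): each component is typed by interval bounds on the flows it accepts and emits, and composition type-checks only if the producer's output interval fits within the consumer's input interval.

```python
from dataclasses import dataclass

@dataclass
class FlowType:
    """Interval bound (e.g., packets per second) on a component's flow."""
    lo: float
    hi: float

    def fits_within(self, other):
        return other.lo <= self.lo and self.hi <= other.hi

@dataclass
class Component:
    name: str
    accepts: FlowType   # input flows the component is safe for
    emits: FlowType     # output flows it may produce

def compose(a, b):
    """Type-check a >> b: a's output interval must fit b's input interval."""
    if not a.emits.fits_within(b.accepts):
        raise TypeError(f"{a.name} -> {b.name}: output {a.emits} "
                        f"exceeds input bound {b.accepts}")
    return Component(f"{a.name}>>{b.name}", a.accepts, b.emits)

shaper = Component("shaper", FlowType(0, 1000), FlowType(0, 100))
meter  = Component("meter",  FlowType(0, 120),  FlowType(0, 120))
burst  = Component("burst_src", FlowType(0, 0), FlowType(0, 500))

print(compose(shaper, meter).name)   # ok: emits 0..100 fits accepts 0..120
try:
    compose(burst, meter)
except TypeError as err:
    print(err)                       # rejected: 0..500 exceeds 0..120
```

Checking only interval endpoints is conservative but composes cheaply, which is one face of the exactness-versus-scalability tradeoff mentioned above.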
Abstract:
NetSketch is a tool for the specification of constrained-flow applications and the certification of desirable safety properties imposed thereon. NetSketch is conceived to assist system integrators in two types of activities: modeling and design. As a modeling tool, it enables the abstraction of an existing system while retaining sufficient information about it to carry out future analysis of safety properties. As a design tool, NetSketch enables the exploration of alternative safe designs as well as the identification of minimal requirements for outsourced subsystems. NetSketch embodies a lightweight formal verification philosophy, whereby the power (but not the heavy machinery) of a rigorous formalism is made accessible to users via a friendly interface. NetSketch does so by exposing tradeoffs between exactness of analysis and scalability, and by combining traditional whole-system analysis with a more flexible compositional analysis. The compositional analysis is based on a strongly-typed Domain-Specific Language (DSL) for describing and reasoning about constrained-flow networks at various levels of sketchiness along with invariants that need to be enforced thereupon. In this paper, we define the formal system underlying the operation of NetSketch, in particular the DSL behind NetSketch's user-interface when used in "sketch mode", and prove its soundness relative to appropriately-defined notions of validity. In a companion paper [6], we overview NetSketch, highlight its salient features, and illustrate how it could be used in two applications: the management/shaping of traffic flows in a vehicular network (as a proxy for CPS applications) and in a streaming media network (as a proxy for Internet applications).
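Schematically (the paper's actual judgments and validity notions are richer), the soundness result has the usual shape: if the type system accepts a network sketch, the accepted typing is semantically valid,

```latex
\Gamma \vdash N : \tau \;\Longrightarrow\; \mathrm{valid}(N, \tau)
```

so whole-system analysis can be traded for the cheaper compositional typing without admitting unsafe configurations.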
Abstract:
In research areas involving mathematical rigor, there are numerous benefits to adopting a formal representation of models and arguments: reusability, automatic evaluation of examples, and verification of consistency and correctness. However, accessibility has not been a priority in the design of the formal verification tools that can provide these benefits. In earlier work [30] we attempted to address this broad problem by proposing several specific design criteria organized around the notion of a natural context: the sphere of awareness a working human user maintains of the relevant constructs, arguments, experiences, and background materials necessary to accomplish the task at hand. In this report we evaluate our proposed design criteria by using, in the context of novel research, a formal reasoning system designed according to these criteria. In particular, we consider how the design and capabilities of the formal reasoning system we employ influence, aid, or hinder our ability to accomplish a formal reasoning task: the assembly of a machine-verifiable proof pertaining to the NetSketch formalism. NetSketch is a tool for the specification of constrained-flow applications and the certification of desirable safety properties imposed thereon. It is conceived to assist system integrators in two types of activities, modeling and design, and provides capabilities for compositional analysis based on a strongly-typed domain-specific language (DSL) for describing and reasoning about constrained-flow networks and the invariants that need to be enforced thereupon. In a companion paper [13] we overview NetSketch, highlight its salient features, and illustrate how it could be used in actual applications. In this paper, we define, using a machine-readable syntax, major parts of the formal system underlying the operation of NetSketch, along with its semantics and a corresponding notion of validity. We then provide a proof of soundness for the formalism that can be partially verified using a lightweight formal reasoning system that simulates natural contexts. A traditional presentation of these definitions and arguments can be found in the full report on the NetSketch formalism [12].
Abstract:
Although it is known that brain regions in one hemisphere may interact very closely with their corresponding contralateral regions (collaboration) or operate relatively independently of them (segregation), the specific brain regions (where) and conditions (how) associated with collaboration or segregation are largely unknown. We investigated these issues using a split-field matching task in which participants matched the meaning of words or the visual features of faces presented to the same (unilateral) or to different (bilateral) visual fields. Matching difficulty was manipulated by varying the semantic similarity of the words or the visual similarity of the faces. We assessed white matter using the fractional anisotropy (FA) measure provided by diffusion tensor imaging (DTI), and cross-hemispheric communication in terms of fMRI-based connectivity between homotopic pairs of cortical regions. For both perceptual and semantic matching, bilateral trials became faster than unilateral trials as difficulty increased (bilateral processing advantage, BPA). The study yielded three novel findings. First, whereas FA in the anterior corpus callosum (genu) correlated with word-matching BPA, FA in the posterior corpus callosum (splenium-occipital) correlated with face-matching BPA. Second, as matching difficulty intensified, cross-hemispheric functional connectivity (CFC) increased in domain-general frontopolar cortex (for both word and face matching) but decreased in domain-specific ventral temporal lobe regions (temporal pole for word matching and fusiform gyrus for face matching). Finally, a mediation analysis linking the DTI and fMRI data showed that CFC mediated the effect of callosal FA on BPA. These findings clarify the mechanisms by which the hemispheres interact to perform complex cognitive tasks.
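As a schematic of such a mediation analysis (illustrative code on simulated data, not the study's pipeline): the indirect effect of callosal FA on BPA through CFC is the product of the FA-to-CFC path (a) and the CFC-to-BPA path controlling for FA (b), compared against the total effect (c).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
fa  = rng.normal(size=n)                          # callosal FA (predictor)
cfc = 0.6 * fa + rng.normal(size=n)               # connectivity (mediator)
bpa = 0.5 * cfc + 0.1 * fa + rng.normal(size=n)   # BPA (outcome)

def ols(y, cols):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

c  = ols(bpa, [fa])[1]         # total effect of FA on BPA
a  = ols(cfc, [fa])[1]         # path a: FA -> CFC
cp = ols(bpa, [fa, cfc])[1]    # direct effect c' (FA -> BPA given CFC)
b  = ols(bpa, [fa, cfc])[2]    # path b: CFC -> BPA given FA

print(f"total c = {c:.3f}, indirect a*b = {a*b:.3f}, direct c' = {cp:.3f}")
# Mediation is supported when a*b is reliably nonzero and c' shrinks
# relative to c; significance would come from a Sobel test or bootstrap.
```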
Abstract:
A periodic finite-difference time-domain (FDTD) analysis is presented and applied for the first time to the study of a two-dimensional (2-D) leaky-wave planar antenna based on dipole frequency selective surfaces (FSSs). First, the effect of certain aspects of the FDTD modeling on the modal analysis of complex waves is studied in detail. Then, the FDTD model is used for the dispersion analysis of the antenna of interest. The calculated values of the leaky-wave attenuation constants suggest that, for an antenna of this type and of moderate length, a significant amount of power reaches the edges of the antenna, and thus diffraction can play an important role. To test the validity of our dispersion analysis, measured radiation patterns of a fabricated prototype are presented and compared with those predicted by a leaky-wave approach based on the periodic FDTD results.
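The role of the attenuation constant can be made concrete with the standard leaky-wave relations (textbook formulas, not results from the paper): for a leaky mode with complex wavenumber k_z, the main beam points near sin(theta_m) = beta/k0, and the fraction of power remaining at the end of an antenna of length L falls off exponentially with the normalized attenuation:

```latex
k_z = \beta - j\alpha, \qquad
\sin\theta_m \approx \frac{\beta}{k_0}, \qquad
\frac{P(L)}{P(0)} = e^{-2\alpha L}
```

For example, a normalized attenuation of alpha*L of about 0.69 still delivers roughly a quarter of the input power to the antenna edge, where it can diffract and perturb the measured patterns.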