831 results for Domain-specific analysis
Abstract:
To address the frequent changes and low reusability of data entities within a domain, the concept of a domain-specific data reference model is proposed. Taking the form of a conceptual model, it specifies and describes the common data model of the domain and serves as the basis for data modeling in the domain's application systems. An architecture for the domain data reference model is given; it partitions the whole model both vertically and horizontally so that it can serve as a basis for reuse at different levels. For building the conceptual model, a set of data-model construction steps is proposed, and three concepts from data warehousing, "dimension", "dimension hierarchy" and "fact", are introduced; these extend the attribute definitions of ER diagrams and provide an effective way to construct stable, reusable domain entities.
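The abstract names "dimension", "dimension hierarchy" and "fact" without giving a notation; as a rough, hypothetical illustration only, the Python sketch below shows one way such reference-model concepts might be represented alongside ordinary entity attributes (all class names, fields and the retail example are invented, not taken from the paper):

from dataclasses import dataclass, field
from typing import List, Dict

@dataclass
class Dimension:
    # A dimension describes one reusable axis of analysis, e.g. "Time" or "Region".
    name: str
    attributes: List[str] = field(default_factory=list)

@dataclass
class DimensionHierarchy:
    # A dimension hierarchy orders the levels of a dimension from fine to coarse,
    # e.g. day -> month -> year for a "Time" dimension.
    dimension: Dimension
    levels: List[str] = field(default_factory=list)

@dataclass
class Fact:
    # A fact ties numeric measures to the dimensions they are analysed by.
    name: str
    measures: Dict[str, float] = field(default_factory=dict)
    dimensions: List[Dimension] = field(default_factory=list)

# Example: a hypothetical "Sales" fact in a retail domain reference model.
time = Dimension("Time", ["day", "month", "year"])
time_hierarchy = DimensionHierarchy(time, ["day", "month", "year"])
sales = Fact("Sales", {"amount": 0.0, "quantity": 0.0}, [time])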
Abstract:
A method for the specific determination of cobalt based on reversed-phase liquid chromatography with amperometric detection via on-column complex formation has been developed. A water-soluble chelating agent, 1-(2-pyridylazo)-2-naphthol-6-sulphonic acid (PAN-6S), is added to the mobile phase and aqueous cobalt solutions are injected directly into the column to form the cobalt-PAN-6S chelate in situ, which is then separated from other metal PAN-6S chelates and subjected to reductive amperometric detection at a moderate potential of -0.3 V. Because the procedure eliminates the interference of oxygen and, by virtue of the quasi-reversible electrode process of the cobalt-PAN-6S complex, depresses the electrochemical reduction of the ligand PAN-6S contained in the mobile phase, a low detection limit of 0.06 ng can be readily obtained. Interference effects were examined for sixteen common metal species, and at a 5- to 8000-fold excess by mass no obvious interference was observed. The feasibility of the method as an approach to the specific analysis of cobalt in a hair sample has been demonstrated.
Abstract:
We have cloned and characterized a cDNA encoding a putative ETS transcription factor, designated Cf-ets. Cf-ets encodes a 406 amino acid protein containing a conserved ETS domain and a Pointed domain. Phylogenetic analysis revealed that Cf-ets belongs to the ESE group of the ETS transcription factor family. Real-time PCR analysis of Cf-ets expression in adult sea scallop tissues revealed that Cf-ets was expressed mainly in gill and hemocytes, in a constitutive manner. The Cf-ets mRNA level in hemocytes increased drastically after microbial challenge, indicating its indispensable role in the anti-infection process; simultaneously, the circulating hemocyte number decreased. In mammals, most ETS transcription factors play indispensable roles in blood cell differentiation and lineage commitment during hematopoiesis. Cf-ets is therefore likely to be a potential biomarker for hematopoiesis studies in scallops. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
There is a debate in cognitive development theory on whether cognitive development is general or domain specific. More and more researchers hold that cognitive development is domain specific, and have begun to investigate preschoolers' naive theories of humans' basic knowledge systems. Naive biology is one of the core domains, but it is disputed whether preschoolers possess separate naive biological concepts. This research examined preschoolers' development of naive biological theory on two levels, "growth" and "aliveness", and also examined individual differences and the factors that lead to them. Three studies were designed. Study 1 examined preschoolers' cognition of growth, a basic trait of living things, and whether children can use this trait to distinguish living from non-living things and understand its causality. Study 2 investigated preschoolers' distinction between living and non-living things at an integrated level. Study 3 investigated how children use their domain-specific knowledge to make inferences about unfamiliar things. The results showed the following. 1. Preschoolers gradually developed a naive theory of biology at the growth level, but their naive theory at the integrated level had not yet developed. 2. Preschoolers' naive theory of biology is not "all or none": 4- and 5-year-old children distinguished living from non-living things to some extent, used non-intentional reasons to explain the cause of growth, and gave coherent explanations. However, growth was not yet a criterion for the ontological distinction between living and non-living things for 4- and 5-year-olds, whereas most 6-year-olds could make this distinction, which reflects the developing process of biological cognition. 3. Preschoolers' biological inference is influenced by their domain-specific knowledge; whether they can make inferences about a new trait of living things depends on whether they have the relevant specific knowledge. In the deductive task, children used their knowledge to make inferences about unfamiliar things: 4-year-olds used concrete knowledge more often, while 6-year-olds used generalized knowledge more frequently. 4. Preschoolers' knowledge grows with age, but individuals develop at different speeds at different periods. Urban versus rural educational background affects cognitive performance, although the urban-rural difference in the knowledge used to distinguish living from non-living things decreases over time. Preschoolers of the three age groups are at the same developmental stage, since they gave similar causal explanations in both quantity and quality. 5. There are intra-individual differences in preschoolers' naive biological cognition. Children perform differently across tasks and domains, and their development is sequential: they understand growth earlier than they understand "alive", which is an integrated concept. These intra-individual differences decrease with age.
Abstract:
Monograph presented to Universidade Fernando Pessoa to obtain the Licentiate degree in Speech Therapy
Abstract:
Dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the Master's degree in Humanitarian Action, Cooperation and Development
Abstract:
Nearest neighbor retrieval is the task of identifying, given a database of objects and a query object, the objects in the database that are the most similar to the query. Retrieving nearest neighbors is a necessary component of many practical applications, in fields as diverse as computer vision, pattern recognition, multimedia databases, bioinformatics, and computer networks. At the same time, finding nearest neighbors accurately and efficiently can be challenging, especially when the database contains a large number of objects, and when the underlying distance measure is computationally expensive. This thesis proposes new methods for improving the efficiency and accuracy of nearest neighbor retrieval and classification in spaces with computationally expensive distance measures. The proposed methods are domain-independent, and can be applied in arbitrary spaces, including non-Euclidean and non-metric spaces. In this thesis particular emphasis is given to computer vision applications related to object and shape recognition, where expensive non-Euclidean distance measures are often needed to achieve high accuracy. The first contribution of this thesis is the BoostMap algorithm for embedding arbitrary spaces into a vector space with a computationally efficient distance measure. Using this approach, an approximate set of nearest neighbors can be retrieved efficiently - often orders of magnitude faster than retrieval using the exact distance measure in the original space. The BoostMap algorithm has two key distinguishing features with respect to existing embedding methods. First, embedding construction explicitly maximizes the amount of nearest neighbor information preserved by the embedding. Second, embedding construction is treated as a machine learning problem, in contrast to existing methods that are based on geometric considerations. The second contribution is a method for constructing query-sensitive distance measures for the purposes of nearest neighbor retrieval and classification. In high-dimensional spaces, query-sensitive distance measures allow for automatic selection of the dimensions that are the most informative for each specific query object. It is shown theoretically and experimentally that query-sensitivity increases the modeling power of embeddings, allowing embeddings to capture a larger amount of the nearest neighbor structure of the original space. The third contribution is a method for speeding up nearest neighbor classification by combining multiple embedding-based nearest neighbor classifiers in a cascade. In a cascade, computationally efficient classifiers are used to quickly classify easy cases, and classifiers that are more computationally expensive and also more accurate are only applied to objects that are harder to classify. An interesting property of the proposed cascade method is that, under certain conditions, classification time actually decreases as the size of the database increases, a behavior that is in stark contrast to the behavior of typical nearest neighbor classification systems. The proposed methods are evaluated experimentally in several different applications: hand shape recognition, off-line character recognition, online character recognition, and efficient retrieval of time series. In all datasets, the proposed methods lead to significant improvements in accuracy and efficiency compared to existing state-of-the-art methods. 
In some datasets, the general-purpose methods introduced in this thesis even outperform domain-specific methods that have been custom-designed for such datasets.
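BoostMap itself learns its embedding, which is beyond the scope of a short example; as a hedged sketch of the general filter-and-refine idea that embedding-based retrieval relies on, the Python code below embeds each object by its distances to a few reference objects and uses the cheap embedded distance to shortlist candidates before applying the expensive exact distance (the distance function, data and parameters are placeholders, not the thesis's actual method):

import numpy as np

def expensive_distance(x, y):
    # Placeholder for a computationally expensive, possibly non-metric distance.
    return np.abs(x - y).sum()

def embed(obj, references):
    # Simple reference-object embedding: one coordinate per reference object.
    return np.array([expensive_distance(obj, r) for r in references])

def filter_and_refine(query, database, references, k=1, shortlist=10):
    db_emb = np.array([embed(o, references) for o in database])
    q_emb = embed(query, references)
    # Filter step: rank by the cheap Euclidean distance in the embedded space.
    cheap = np.linalg.norm(db_emb - q_emb, axis=1)
    candidates = np.argsort(cheap)[:shortlist]
    # Refine step: apply the expensive distance only to the shortlisted candidates.
    exact = [(expensive_distance(query, database[i]), i) for i in candidates]
    return [i for _, i in sorted(exact)[:k]]

# Toy usage with scalar "objects"; real applications would use shapes, time series, etc.
rng = np.random.default_rng(0)
database = list(rng.normal(size=200))
refs = database[:5]
print(filter_and_refine(0.3, database, refs, k=3))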
Abstract:
In the framework of the iBench research project, our previous work created a domain-specific language, TRAFFIC [6], that facilitates specification, programming, and maintenance of distributed applications over a network. It allows safety properties to be formalized in terms of types and subtyping relations. Extending our previous work, we add Hindley-Milner style polymorphism [8] with constraints [9] to the type system of TRAFFIC. This allows a programmer to use the for-all quantifier to describe the types of network components, raising the power and expressiveness of types to a level that was not possible before with propositional subtyping relations. Furthermore, we design our type system with a pluggable constraint system, so it can adapt to different application needs while maintaining soundness. In this paper, we show the soundness of the type system, which is not syntax-directed but makes typing derivations easier. We show that there is an equivalent syntax-directed type system, which is what a type checker program would implement to verify the safety of a network flow. This is followed by a discussion of several constraint systems: polymorphism with subtyping constraints, Linear Programming, and Constraint Handling Rules (CHR) [3]. Finally, we provide some examples to illustrate the workings of these constraint systems.
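TRAFFIC's concrete syntax and constraint language are not reproduced in the abstract, so the following Python sketch is only a generic illustration of checking subtyping constraints against a declared subtype relation via its transitive closure; the type names and constraint format are hypothetical and do not come from the paper:

# Hypothetical declared subtyping facts among network component types.
declared = {("TcpFlow", "Flow"), ("UdpFlow", "Flow"), ("Flow", "Traffic")}

def transitive_closure(pairs):
    # Repeatedly add implied pairs until nothing new is derived.
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        snapshot = list(closure)
        for a, b in snapshot:
            for c, d in snapshot:
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def is_subtype(s, t, closure):
    # Reflexivity plus the closed declared relation.
    return s == t or (s, t) in closure

def check_constraints(constraints, closure):
    # Each constraint is a pair (s, t) meaning "s must be a subtype of t".
    return all(is_subtype(s, t, closure) for s, t in constraints)

closure = transitive_closure(declared)
print(check_constraints([("TcpFlow", "Traffic"), ("UdpFlow", "Flow")], closure))  # True
print(check_constraints([("Traffic", "TcpFlow")], closure))                       # False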
Abstract:
Structural Health Monitoring (SHM) is an integral part of infrastructure maintenance and management systems for socio-economic, safety and security reasons. The behaviour of a structure under vibration depends on the structure's characteristics, and a change in those characteristics may indicate a change in system behaviour due to the presence of damage within it. Consistent, output-signal-guided, system-dependent markers would therefore be a convenient tool for online monitoring, maintenance and rehabilitation strategies, and optimized decision-making policies, as required by engineers, owners, managers, and users from both safety and serviceability perspectives. SHM has a very significant advantage over traditional investigations, where high tangible and intangible costs are often incurred through disruption of service. Additionally, SHM through bridge-vehicle interaction opens up opportunities for continuous tracking of the condition of the structure. Research in this area is still at an early stage and is extremely promising. This PhD focuses on using the bridge-vehicle interaction response for SHM of damaged or deteriorating bridges, in order to monitor or assess them under operating conditions. In the present study, a number of damage detection markers have been investigated and proposed in order to identify the existence, location, and extent of an open crack in a structure. Theoretical and experimental investigations have been conducted on single-degree-of-freedom linear systems and simply supported beams. The novel Delay Vector Variance (DVV) methodology has been employed for characterization of structural behaviour through time-domain response analysis. In addition, the analysis of responses of actual bridges using the DVV method has been employed for the first time for this kind of investigation.
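The thesis's exact DVV formulation is not given in the abstract; as a hedged, simplified sketch of the underlying idea, the Python code below builds delay vectors from a response signal and computes a normalised target variance over each delay vector's neighbourhood, the kind of quantity DVV-style analyses compare against surrogate data (the embedding dimension, radius and toy signal are illustrative assumptions):

import numpy as np

def delay_vectors(signal, m=3):
    # Build delay vectors x_k = [s(k), ..., s(k+m-1)] with target s(k+m).
    X = np.array([signal[k:k + m] for k in range(len(signal) - m)])
    targets = np.array(signal[m:])
    return X, targets

def target_variance(signal, m=3, radius=0.5):
    # For each delay vector, collect the targets of all delay vectors within
    # `radius`, average their variance, and normalise by the overall variance.
    X, targets = delay_vectors(signal, m)
    variances = []
    for i in range(len(X)):
        dist = np.linalg.norm(X - X[i], axis=1)
        neighbours = targets[dist <= radius]
        if len(neighbours) > 1:
            variances.append(neighbours.var())
    return np.mean(variances) / targets.var()

# Toy usage on a noisy sinusoid standing in for a measured bridge response.
t = np.linspace(0, 20, 500)
response = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
print(target_variance(response, m=3, radius=0.4))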
Abstract:
This thesis is concerned with inductive charging of electric vehicle batteries. Rectified power from the 50/60 Hz utility feeds a dc-ac converter which delivers high-frequency ac power to the electric vehicle inductive coupling inlet. The inlet configuration has been defined by the Society of Automotive Engineers in Recommended Practice J-1773. This thesis studies converter topologies related to the series resonant converter. When coupled to the vehicle inlet, the frequency-controlled series resonant converter results in a capacitively-filtered series-parallel LCLC (SP-LCLC) resonant converter topology with zero-voltage switching and many other desirable features. A novel time-domain transformation analysis, termed Modal Analysis, is developed, using a state-variable transformation, to analyze and characterize this multi-resonant fourth-order converter. Next, Fundamental Mode Approximation (FMA) Analysis, based on a voltage-source model of the load, and its novel extension, Rectifier-Compensated FMA (RCFMA) Analysis, are developed and applied to the SP-LCLC converter. RCFMA Analysis is simpler and more intuitive than Modal Analysis and provides a relatively accurate closed-form solution for the converter behavior. Phase control of the SP-LCLC converter is investigated as a control option. FMA and RCFMA Analyses are used for detailed characterization. The analyses identify areas of operation, also validated experimentally, where phase control of the converter is advantageous. A novel hybrid control scheme is proposed which integrates frequency and phase control and achieves a reduced operating frequency range and improved partial-load efficiency. The phase-controlled SP-LCLC converter can also be configured with a parallel load and is an excellent option for the application. The resulting topology implements soft switching over the entire load range and has high full-load and partial-load efficiencies. RCFMA Analysis is used to analyze and characterize the new converter topology, and good correlation with experimental results is shown. Finally, a novel single-stage power-factor-corrected ac-dc converter is introduced, which uses the current-source characteristic of the SP-LCLC topology to provide power factor correction over a wide output power range from zero to full load. This converter exhibits all the advantageous characteristics of its dc-dc counterpart, with a reduced parts count and cost. Simulation and experimental results verify the operation of the new converter.
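The full Modal and RCFMA analyses of the SP-LCLC converter are well beyond a short example; purely as a minimal sketch of the fundamental mode approximation idea, the Python code below replaces the square-wave bridge voltage by its fundamental component and evaluates a plain series resonant tank (not the SP-LCLC topology) at the switching frequency, using purely illustrative component values:

import numpy as np

def fma_series_resonant(Vdc, f_sw, L, C, R_load):
    # Fundamental component (peak) of a +/-Vdc square wave: (4/pi) * Vdc.
    v1_peak = 4.0 * Vdc / np.pi
    w = 2.0 * np.pi * f_sw
    # Series L-C impedance in series with the (reflected) load resistance.
    z_tank = 1j * w * L + 1.0 / (1j * w * C) + R_load
    i1_peak = v1_peak / abs(z_tank)
    v_out_peak = i1_peak * R_load
    return i1_peak, v_out_peak

# Illustrative values only: 300 V bus, 100 kHz switching, L = 50 uH, C = 100 nF, 10 ohm load.
i1, vo = fma_series_resonant(Vdc=300.0, f_sw=100e3, L=50e-6, C=100e-9, R_load=10.0)
print(f"fundamental tank current ~ {i1:.1f} A peak, output ~ {vo:.1f} V peak")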
Abstract:
The aim of this work is to improve retrieval and navigation services over bibliographic data held in digital libraries. This paper presents the design and implementation of OntoBib, an ontology-based bibliographic database system that adopts ontology-driven search in its retrieval. The presented work exemplifies how a digital library of bibliographic data can be managed using Semantic Web technologies and how utilizing domain-specific knowledge improves both search efficiency and the navigation of web information and document retrieval.
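OntoBib's actual ontology and query engine are not detailed in the abstract; the Python sketch below only illustrates the general idea of ontology-driven retrieval by expanding a query term with its subclasses from a tiny, invented bibliographic ontology before matching records:

# Hypothetical fragment of a bibliographic ontology: term -> more specific terms.
SUBCLASSES = {
    "publication": ["article", "thesis", "conference paper"],
    "article": ["journal article"],
}

def expand(term, ontology):
    # Collect the term and all of its (transitive) subclasses.
    terms, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in terms:
            terms.add(t)
            stack.extend(ontology.get(t, []))
    return terms

def search(query_type, records, ontology):
    # Return every record whose type falls under the expanded query term.
    wanted = expand(query_type, ontology)
    return [r for r in records if r["type"] in wanted]

records = [
    {"title": "Ontology-driven search", "type": "journal article"},
    {"title": "OntoBib design notes", "type": "thesis"},
]
print(search("publication", records, SUBCLASSES))  # matches both records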
Abstract:
In previous papers, we have presented a logic-based framework based on fusion rules for merging structured news reports. Structured news reports are XML documents in which the text entries are restricted to individual words or simple phrases, such as names and domain-specific terminology, and to numbers and units. We assume structured news reports do not require natural language processing. Fusion rules are a form of scripting language that defines how structured news reports should be merged. The antecedent of a fusion rule is a call to investigate the information in the structured news reports and the background knowledge, and the consequent of a fusion rule is a formula specifying an action to be undertaken to form a merged report. It is expected that a set of fusion rules is defined for any given application. In this paper we extend the approach to handle probability values, degrees of belief, or necessity measures associated with text entries in the news reports. We present a formal definition for each of these types of uncertainty and explain how they can be handled using fusion rules. We also discuss methods for detecting inconsistencies among sources.
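The concrete fusion-rule syntax is defined in the authors' earlier papers rather than here; purely as an illustration of the antecedent/consequent pattern described above, the Python sketch below merges two tiny XML "news reports" when an antecedent check (matching dates) succeeds and attaches an invented degree of belief to one text entry:

import xml.etree.ElementTree as ET

report1 = ET.fromstring("<report><date>2024-05-01</date><city>Paris</city></report>")
report2 = ET.fromstring("<report><date>2024-05-01</date><casualties>3</casualties></report>")

def antecedent(a, b):
    # Antecedent: investigate the two reports; here, require matching dates.
    return a.findtext("date") == b.findtext("date")

def consequent(a, b):
    # Consequent: build a merged report from entries of both sources,
    # attaching an (invented) degree of belief to the numeric entry.
    merged = ET.Element("merged_report")
    ET.SubElement(merged, "date").text = a.findtext("date")
    ET.SubElement(merged, "city").text = a.findtext("city")
    casualties = ET.SubElement(merged, "casualties", degree_of_belief="0.8")
    casualties.text = b.findtext("casualties")
    return merged

if antecedent(report1, report2):
    print(ET.tostring(consequent(report1, report2), encoding="unicode"))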
Abstract:
Annotation of programs using embedded Domain-Specific Languages (embedded DSLs), such as the program annotation facility for the Java programming language, is a well-known practice in computer science. In this paper we argue for and propose a specialized approach to using embedded Domain-Specific Modelling Languages (embedded DSMLs) in Model-Driven Engineering (MDE) processes that, in particular, supports automated many-step model transformation chains. It can happen that information defined at some point, using an embedded DSML, is not required in the next immediate transformation step but in a later one. We propose a new approach to model annotation enabling flexible many-step transformation chains. The approach utilizes a combination of embedded DSMLs, trace models and a megamodel. We demonstrate our approach based on an example MDE process and an industrial case study.
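The paper's megamodel and trace-model machinery are only named in the abstract; the Python sketch below is a deliberately simplified, hypothetical illustration of the core idea, namely that an annotation attached early in the chain is not consumed by the next transformation but remains retrievable later through trace models mapping source elements to their targets:

# Hypothetical model elements and annotations keyed by element id.
model_a = {"e1": "Order", "e2": "Customer"}
annotations = {"e1": {"persistence": "cached"}}  # defined early, needed later

def transform(model, trace_prefix):
    # A toy transformation: renames elements and records a trace model
    # mapping each source id to the id of the element it produced.
    new_model, trace = {}, {}
    for i, (src_id, name) in enumerate(model.items()):
        tgt_id = f"{trace_prefix}{i}"
        new_model[tgt_id] = name + "Table"
        trace[src_id] = tgt_id
    return new_model, trace

model_b, trace_ab = transform(model_a, "b")
model_c, trace_bc = transform(model_b, "c")

def lookup_annotation(src_id, traces, annotations):
    # Follow the chained trace models to find where an annotated element ended up.
    tgt = src_id
    for trace in traces:
        tgt = trace[tgt]
    return tgt, annotations.get(src_id, {})

print(lookup_annotation("e1", [trace_ab, trace_bc], annotations))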
Abstract:
A new domain-specific, reconfigurable system-on-a-chip (SoC) architecture is proposed for video motion estimation. This has been designed to cover most of the common block-based video coding standards, including MPEG-2, MPEG-4, H.264, WMV-9 and AVS. The architecture exhibits simple control, high throughput and relatively low hardware cost when compared with existing circuits. It can also easily handle flexible search ranges without any increase in silicon area and can be configured prior to the start of the motion estimation process for a specific standard. The computational rates achieved make the circuit suitable for high-end video processing applications, such as HDTV. Silicon design studies indicate that circuits based on this approach incur only a relatively small penalty in terms of power dissipation and silicon area when compared with implementations for specific standards. Indeed, the cost/performance achieved exceeds that of existing but specific solutions and greatly exceeds that of general purpose field programmable gate array (FPGA) designs.
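The architecture itself is hardware, but the computation it accelerates is standard block matching; for reference, the Python sketch below shows a plain full-search sum-of-absolute-differences (SAD) motion estimation of the kind such a reconfigurable SoC would implement (block size, search range and the toy frames are arbitrary choices):

import numpy as np

def full_search_sad(ref, cur, bx, by, block=8, search=4):
    # Find the motion vector minimising the SAD between the current block
    # at (by, bx) and candidate blocks in the reference frame.
    cur_block = cur[by:by + block, bx:bx + block].astype(np.int32)
    best = (0, 0)
    best_sad = np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(cur_block - cand).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

# Toy frames: the current frame is the reference shifted by (1, 2) pixels.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(1, 2), axis=(0, 1))
print(full_search_sad(ref, cur, bx=16, by=16, block=8, search=4))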
Abstract:
Nearly all psychological research on basic cognitive processes of category formation and reasoning uses sample populations associated with large research institutions in technologically advanced societies. Lopsided attention to a select participant pool risks biasing interpretation, no matter how large the sample or how statistically reliable the results. The experiments in this article address this limitation. Earlier research with urban-USA children suggests that biological concepts are (1) thoroughly enmeshed with their notions of naive psychology, and (2) strikingly human-centered. Thus, if children are to develop a causally appropriate model of biology, in which humans are seen as simply one animal among many, they must undergo fundamental conceptual change. Such change supposedly occurs between 7 and 10 years of age, when the human-centered view is discarded. The experiments reported here with Yukatek Maya speakers challenge the empirical generality and theoretical importance of these claims. Part 1 shows that young Maya children do not interpret the biological world anthropocentrically. The anthropocentric bias of American children appears to be due to a lack of cultural familiarity with non-human biological kinds, not to an initial causal understanding of folkbiology as such. Part 2 shows that by the age of 4-5 (the earliest age tested in this regard) Yukatek Maya children employ a concept of innate species potential, or underlying essence, much as urban American children seem to, namely, as an inferential framework for understanding the affiliation of an organism to a biological species, and for projecting known and unknown biological properties to organisms in the face of uncertainty. Together, these experiments indicate that folkpsychology cannot be the initial source of folkbiology. They also underscore the possibility of a species-wide and domain-specific basis for acquiring knowledge about the living world that is constrained and modified, but not caused or created, by prior nonbiological thinking and subsequent cultural experience.