930 results for "Domain-specific analysis"


Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Publisher:

Abstract:

This work proposes methodologies for detecting the presence of and locating an intruder in indoor 2-D and 3-D environments; in the latter case a cooperative antenna system is used, and in both cases the system is based on multistatic radar. To achieve high resolution, the radar operates with UWB pulses whose spectral amplitude peaks at 1 GHz for 2-D environments, and with wideband pulses with frequencies between 200 MHz and 500 MHz for 3-D environments. For the two-dimensional environments, the location estimate is obtained with the Particle Swarm Optimization (PSO) technique, with Newton's method with Gaussian elimination, and with the least-squares method with Gaussian elimination. For the three-dimensional environment, a vector-based methodology was developed that estimates a likely region where the intruder is located. The electromagnetic waves are simulated with the FDTD (Finite-Difference Time-Domain) numerical method combined with the UPML (Uniaxial Perfectly Matched Layer) absorbing technique, which truncates the analysis domain while emulating propagation to infinity. The ACOR-UWB-2-D tool was developed for the 2-D analysis, and the LANE SAGS software was used for the 3-D environment.
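
As an illustration only (not the ACOR-UWB-2-D or LANE SAGS tools mentioned above), the following minimal sketch shows how PSO can estimate a 2-D position from bistatic range measurements; the transmitter and receiver coordinates, the target position and the PSO parameters are all hypothetical.

```python
# Minimal PSO sketch for 2-D target localization from bistatic ranges.
# Illustrative only: sensor positions, ranges and PSO parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

tx = np.array([0.0, 0.0])                             # transmitter position (m)
rx = np.array([[5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])   # receiver positions (m)
target = np.array([3.2, 1.7])                         # "true" intruder position (unknown in practice)

# Bistatic range = |tx - target| + |target - rx_i|, simulated here without noise.
meas = np.linalg.norm(tx - target) + np.linalg.norm(rx - target, axis=1)

def cost(p):
    """Sum of squared residuals between candidate and measured bistatic ranges."""
    pred = np.linalg.norm(tx - p) + np.linalg.norm(rx - p, axis=1)
    return np.sum((pred - meas) ** 2)

# Standard PSO update with inertia, cognitive and social terms.
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.uniform(0.0, 5.0, (n, 2))
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_val = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(100):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("estimated position:", gbest)   # should approach [3.2, 1.7]
```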

Relevance:

80.00%

Publisher:

Abstract:

Graduate Program in Geography - FCT

Relevance:

80.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

80.00%

Publisher:

Abstract:

Life in cities deprives people of contact with nature. As a way to escape the turbulent reality of the urban system, urban planning provides areas that allow contact between people and nature, such as botanical gardens, parks and leafy squares. Through specific analysis, these places are also called green areas. Thus, the main purpose of this research is to outline and analyze, through geoprocessing techniques, the green areas of Americana, São Paulo, considering them as indicators of urban environmental quality. The evaluation is based on the methodology adapted by Bargos (2010), in which remote sensing products (aerial photographs and satellite images) are used, and also on field work and the calculation of the amount of green areas. The ESRI ArcGIS® software is used to create thematic maps of the city's green areas. Based on the results achieved in this study, the work is expected to support the government of the city of Americana by providing analyses for decision makers in the context of urban planning, aiming at an improvement in urban environmental quality and thus benefiting its entire population.
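
As a minimal illustration of the "calculation of the amount of green areas" mentioned above, the sketch below computes a per-capita green area index; all figures are hypothetical, and the actual study derives the areas from remote sensing products in ArcGIS.

```python
# Minimal sketch of a per-capita green area index (m2 of green area per inhabitant).
# All figures are hypothetical; the actual study maps the areas from remote sensing data.
green_areas_m2 = {            # mapped green areas by type (hypothetical values)
    "parks": 850_000.0,
    "squares": 120_000.0,
    "botanical_garden": 60_000.0,
}
population = 230_000          # hypothetical population of the municipality

total_green_m2 = sum(green_areas_m2.values())
index_m2_per_inhabitant = total_green_m2 / population

print(f"total green area: {total_green_m2:.0f} m2")
print(f"green area index: {index_m2_per_inhabitant:.1f} m2/inhabitant")
```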

Relevance:

80.00%

Publisher:

Abstract:

The increasing degradation of the environment and the depletion of natural resources, caused by indiscriminate production practices, by ever-growing and unlimited human needs, and by the misconception that natural resources are inexhaustible, make the current and future environmental situation a constant concern of national and world leaders. In order to confront these issues and to assess the environmental situation of the municipalities of the state of São Paulo, the State Department of the Environment created, in 2007, the "Município VerdeAzul" Program. This paper studies the program within the Pontal do Paranapanema Water Resources Management Unit through a specific analysis of data representing the effectiveness of the program's directives, a comparative study between two cities of the unit with widely differing scores, and a survey of the possible causes of the low scores obtained by some of the participating municipalities. The method consisted of a bibliographical survey related to the main Brazilian environmental liabilities... (complete abstract available via electronic access below)

Relevance:

80.00%

Publisher:

Abstract:

Software Engineering originated with the motivation to mass-produce components and thereby increase productivity in the construction of systems. Since then, numerous studies have been carried out on the subject as new techniques for building systems, such as Object-Oriented Programming and Aspect-Oriented Programming, were established, along with methodologies to apply them effectively. However, years of research in the area have not been sufficient to produce a methodology for reusing software artifacts that is truly efficient and easy enough to become widespread. Given this, Model-Driven Development (MDD) attempts to promote reuse by taking system models as the central reference, making them part of the development process and enabling significant productivity gains. One of its approaches is Model-Driven Software Development (MDSD), which focuses on improving development practices by using Domain-Specific Languages (DSLs) for this purpose. In this final paper, Xtext is used as a tool to demonstrate the productivity and efficiency of this approach; to that end, bibliographic studies of the approach and of the tool were carried out, and the methodology and a case study are presented to demonstrate the results and conclusions of this work.
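
For readers unfamiliar with the DSL idea, the following conceptual sketch parses a tiny entity-definition language and turns it into generated code. This is not Xtext (a Java/Eclipse-based framework); the mini-language and the generator are hypothetical and only illustrate the mapping from domain-specific statements to artifacts.

```python
# Conceptual sketch of a tiny entity-definition DSL, parsed and turned into a model.
# This is NOT Xtext; it only illustrates the idea of mapping domain-specific
# statements to generated code. The grammar and names are hypothetical.
import re
from dataclasses import dataclass, field

SOURCE = """
entity Customer { name: str  age: int }
entity Order { total: float }
"""

@dataclass
class Entity:
    name: str
    attributes: dict = field(default_factory=dict)

def parse(src):
    """Parse 'entity Name { attr: type ... }' declarations into Entity objects."""
    entities = []
    for m in re.finditer(r"entity\s+(\w+)\s*\{([^}]*)\}", src):
        attrs = dict(re.findall(r"(\w+)\s*:\s*(\w+)", m.group(2)))
        entities.append(Entity(m.group(1), attrs))
    return entities

def generate(entity):
    """Emit a simple Python class for an entity (a stand-in for code generation)."""
    lines = [f"class {entity.name}:", "    def __init__(self):"]
    lines += [f"        self.{a} = None  # {t}" for a, t in entity.attributes.items()]
    return "\n".join(lines)

for e in parse(SOURCE):
    print(generate(e), end="\n\n")
```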

Relevance:

80.00%

Publisher:

Abstract:

This work presents a way to increase the accuracy of productivity indicators, aiming to improve the information accuracy of performance indicators and to propose improvements to the process in question, more specifically to improve the visualization of the information from these indicators at all hierarchical levels of the company, so that they can then be used to assist in decision making and in the planning of the production process. We start with an analysis of the current process, looking for sources of information loss during production. Afterwards, a specific analysis of the points considered critical is carried out, and alternatives for improving these points are proposed. The project also relies on specific tools and methodologies, required of any project carried out in the company, which guide the development of the work.

Relevance:

80.00%

Publisher:

Abstract:

A cDNA coding for a digestive cathepsin L, denominated Sl-CathL, was isolated from a cDNA library of Sphenophorus levis larvae, representing the most abundant EST (10.49%) responsible for proteolysis in the midgut. The open reading frame of 972 bp encodes a preproenzyme similar to midgut cathepsin L-like enzymes of other coleopterans. Recombinant Sl-CathL was expressed in Pichia pastoris, with a molecular mass of about 42 kDa. The recombinant protein was catalytically activated at low pH, and the mature enzyme of 39 kDa displayed thermal instability and maximal activity at 37 °C and pH 6.0. Immunocytochemical analysis revealed Sl-CathL production in the midgut epithelium and secretion from vesicles containing the enzyme into the gut lumen, confirming an important role for this enzyme in the digestion of the insect larvae. The expression profile identified by RT-PCR throughout the biological cycle indicates that Sl-CathL is mainly produced in the larval stages, with peak expression in 30-day-old larvae. At this stage, the enzyme is expressed 1250-fold more than in the pupal phase, in which the lowest expression level is detected. The enzyme is also produced in the adult stage, albeit in lesser abundance, suggesting the presence of a different array of enzymes in the digestive system of adults. Tissue-specific analysis revealed that Sl-CathL mRNA synthesis occurs fundamentally in the larval midgut, thereby confirming its function as a digestive enzyme, as detected in the immunolocalization assays. The catalytic efficiency of the purified recombinant enzyme was calculated using different substrates (Z-Leu-Arg-AMC, Z-Arg-Arg-AMC and Z-Phe-Arg-AMC), and rSl-CathL exhibited a hydrolysis preference for Z-Leu-Arg-AMC (kcat/Km = 37.53 mM⁻¹ s⁻¹), which is similar to other insect cathepsin L-like enzymes. rSl-CathL inhibition assays were performed using four recombinant sugarcane cystatins; rSl-CathL was strongly inhibited by the recombinant cystatin CaneCPI-4 (Ki = 0.196 nM), indicating that this protease is a potential target for pest control. (C) 2011 Elsevier Ltd. All rights reserved.
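
For reference, the kinetic parameters reported above follow the standard Michaelis-Menten formulation; the relations below are textbook definitions, not something specific to this paper's methods.

```latex
% Standard Michaelis-Menten relations behind the reported parameters (textbook definitions).
\[
  v = \frac{k_{\mathrm{cat}}\,[E]_0\,[S]}{K_m + [S]},
  \qquad
  \text{catalytic efficiency} = \frac{k_{\mathrm{cat}}}{K_m}
  \quad (\text{here } 37.53\ \mathrm{mM^{-1}\,s^{-1}} \text{ for Z-Leu-Arg-AMC}),
\]
\[
  K_i = \frac{[E][I]}{[EI]}
  \quad \text{(dissociation constant of the enzyme--inhibitor complex; } 0.196\ \mathrm{nM} \text{ for CaneCPI-4)}.
\]
```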

Relevance:

80.00%

Publisher:

Abstract:

Background: Ontologies have increasingly been used in the biomedical domain, which has prompted the emergence of different initiatives to facilitate their development and integration. The Open Biological and Biomedical Ontologies (OBO) Foundry consortium provides a repository of life-science ontologies, which are developed according to a set of shared principles. This consortium has developed an ontology called the OBO Relation Ontology, which aims at standardizing the different types of biological entity classes and associated relationships. Since ontologies are primarily intended to be used by humans, the use of graphical notations for ontology development facilitates the capture, comprehension and communication of knowledge among its users. However, OBO Foundry ontologies are basically captured and represented using text-based notations. The Unified Modeling Language (UML) provides a standard and widely used graphical notation for modeling computer systems. UML provides a well-defined set of modeling elements, which can be extended using a built-in extension mechanism named Profile. Thus, this work aims at developing a UML profile for the OBO Relation Ontology to provide a domain-specific set of modeling elements that can be used to create standard UML-based ontologies in the biomedical domain. Results: We have studied the OBO Relation Ontology, the UML metamodel and the UML profiling mechanism. Based on these studies, we have proposed an extension to the UML metamodel in conformance with the OBO Relation Ontology and we have defined a profile that implements the extended metamodel. Finally, we have applied the proposed UML profile in the development of a number of fragments from different ontologies. In particular, we have considered the Gene Ontology (GO), the PRotein Ontology (PRO) and the Xenopus Anatomy and Development Ontology (XAO). Conclusions: The use of an established and well-known graphical language in the development of biomedical ontologies provides a more intuitive form of capturing and representing knowledge than using only text-based notations. The use of the profile requires the domain expert to reason about the underlying semantics of the concepts and relationships being modeled, which helps prevent the introduction of inconsistencies in an ontology under development and facilitates the identification and correction of errors in an already defined ontology.
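
To make the kind of entities and relationships concrete, the sketch below represents ontology terms linked by OBO-style relations such as is_a and part_of. It is only an illustration, not the proposed UML profile, and the term identifiers are placeholders.

```python
# Minimal illustrative data structure for ontology classes linked by OBO-style relations.
# This is not the proposed UML profile, only a plain sketch of the kind of knowledge it
# models; term identifiers below are placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    term_id: str      # placeholder identifier
    label: str

@dataclass(frozen=True)
class Relation:
    subject: Term
    relation: str     # e.g. "is_a", "part_of" from the OBO Relation Ontology
    obj: Term

mitochondrion = Term("T:0001", "mitochondrion")
organelle     = Term("T:0002", "organelle")
cytoplasm     = Term("T:0003", "cytoplasm")

axioms = [
    Relation(mitochondrion, "is_a", organelle),
    Relation(mitochondrion, "part_of", cytoplasm),
]

for a in axioms:
    print(f"{a.subject.label} {a.relation} {a.obj.label}")
```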

Relevance:

80.00%

Publisher:

Abstract:

Objective: to adapt and validate the Patient Expectations and Satisfaction with Prenatal Care instrument for use in Brazil. It contains 41 items divided into two dimensions: expectations and satisfaction. The adapted version was submitted to analyses of stability, convergent construct validity, and internal consistency (Cronbach's alpha) for distinct groups and dimensions. Method: 119 pregnant women receiving prenatal care were interviewed, and 26 of these women answered the instrument twice (retest). Internal consistency was appropriate (Cronbach's alpha ≥ 0.70); the test-retest presented strong correlation (r=0.82; p<0.001) for the expectations domain and moderate correlation (r=0.66; p<0.001) for the satisfaction domain. The analysis confirmed that the instrument's adapted version is valid for the studied group. Results: there is strong evidence for the validity and reliability of the adapted instrument. Conclusion: the instrument still needs to be tested in groups of pregnant women with different social characteristics.
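
For reference, the internal-consistency criterion cited above (Cronbach's alpha ≥ 0.70) uses the standard formula; the definition below is the textbook relation, not something specific to this study.

```latex
% Standard definition of Cronbach's alpha for k items (textbook formula).
\[
  \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_{i}^{2}}{\sigma_{t}^{2}}\right)
\]
% where \sigma_i^2 is the variance of item i and \sigma_t^2 the variance of the total score;
% values of at least 0.70 are conventionally taken as acceptable internal consistency.
```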

Relevance:

80.00%

Publisher:

Abstract:

Máster Universitario en Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)

Relevance:

80.00%

Publisher:

Abstract:

Máster Universitario en Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)

Relevance:

80.00%

Publisher:

Abstract:

This thesis deals with context-aware services, smart environments, context management and solutions for device and service interoperability. Multi-vendor devices offer an increasing number of services and end-user applications whose value rests on the ability to exploit information originating from the surrounding environment by means of a growing number of embedded sensors, e.g. GPS, compass, RFID readers, cameras and so on. However, such devices are usually not able to exchange information because of the lack of a shared data storage and of common information exchange methods. A large number of standards and domain-specific building blocks are available and heavily used in today's products. However, the use of these solutions based on ready-to-use modules is not without problems: the integration and cooperation of different kinds of modules can be daunting because of growing complexity and dependency. In this scenario it is interesting to have an infrastructure that makes the coexistence of multi-vendor devices easy, while enabling low-cost development and smooth access to services. This sort of technology glue should reduce both software and hardware integration costs by removing the trouble of interoperability. The result should also lead to faster and simpler design, development and deployment of cross-domain applications.

This thesis is mainly focused on software architectures supporting context-aware service providers, especially on the following subjects:

- user preferences and service adaptation
- context management
- content management
- information interoperability
- multi-vendor device interoperability
- communication and connectivity interoperability

Experimental activities were carried out in several domains, including cultural heritage and indoor and personal smart spaces, all of which are considered significant test-beds in context-aware computing. The work evolved within European and national projects: on the European side, I carried out my research activity within EPOCH, the FP6 Network of Excellence on "Processing Open Cultural Heritage", and within SOFIA, a project of the ARTEMIS JU on embedded systems. I worked in cooperation with several international establishments, including the University of Kent, VTT (the Technical Research Center of Finland) and Eurotech. On the national side I contributed to a one-to-one research contract between ARCES and Telecom Italia.

The first part of the thesis is focused on the problem statement and related work, addressing interoperability issues and the related architecture components. The second part is focused on specific architectures and frameworks:

- MobiComp: a context management framework that I used in cultural heritage applications
- CAB: a context, preference and profile based application broker which I designed within the EPOCH Network of Excellence
- M3: a "Semantic Web based" information sharing infrastructure for smart spaces designed by Nokia within the European project SOFIA
- NoTa: a service and transport independent connectivity framework
- OSGi: the well known Java based service support framework

The final section is dedicated to the middleware, the tools and the SW agents developed during my doctorate to support context-aware services in smart environments.
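
As a conceptual illustration of the kind of glue such middleware provides, the sketch below shows a shared context store with publish/subscribe access. It is not MobiComp, CAB, M3, NoTa or OSGi; the keys and values are hypothetical.

```python
# Minimal sketch of a shared context store with publish/subscribe access,
# illustrating the kind of glue such middleware provides. Conceptual only:
# it is not MobiComp, CAB, M3, NoTa or OSGi, and all names are hypothetical.
from collections import defaultdict
from typing import Any, Callable

class ContextStore:
    """Keeps the latest value per context key and notifies subscribers on updates."""

    def __init__(self) -> None:
        self._values: dict[str, Any] = {}
        self._subscribers: dict[str, list[Callable[[str, Any], None]]] = defaultdict(list)

    def publish(self, key: str, value: Any) -> None:
        self._values[key] = value
        for callback in self._subscribers[key]:
            callback(key, value)

    def subscribe(self, key: str, callback: Callable[[str, Any], None]) -> None:
        self._subscribers[key].append(callback)

    def get(self, key: str, default: Any = None) -> Any:
        return self._values.get(key, default)

# A location sensor publishes context; an adaptation component reacts to it.
store = ContextStore()
store.subscribe("user/location", lambda k, v: print(f"adapting service: {k} -> {v}"))
store.publish("user/location", {"room": "exhibit-3", "source": "RFID"})
print(store.get("user/location"))
```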