993 results for reference models
Evaluation of a technique for generating digital surface models using multiple images
Abstract:
The efficient generation of digital surface models (DSMs) from optical images has been explored for many years, and the results depend on the project characteristics (image resolution, overlap between images, among others), on the image matching techniques, and on the computing capabilities available for image processing. The points generated by image matching have a direct impact on the quality of the DSM and, consequently, on the need for the costly editing step. This work experimentally assesses a technique for DSM generation that matches multiple images (two or more) simultaneously using the vertical line locus (VLL) method. The experiments were performed with six images of the urban area of Presidente Prudente/SP, with a ground sample distance (GSD) of approximately 7 cm. DSMs of a small area containing homogeneous texture, repetitive patterns, moving objects, shadows, and trees were generated to assess the quality of the developed procedure. The resulting DSM was compared with a point cloud acquired by LASER (Light Amplification by Stimulated Emission of Radiation) scanning as well as with a DSM generated by the Leica Photogrammetry Suite (LPS) software. The results showed that the DSM generated by the implemented technique has a geometric quality compatible with the reference models.
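As an aside for readers unfamiliar with the method, the following minimal Python sketch illustrates the vertical line locus idea: candidate heights along a vertical line are projected into every image, and the height whose image windows agree best is kept. The project() function, window size and Z search step are hypothetical placeholders, not the authors' implementation.

import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized grey-level windows."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def window(image, row, col, half=3):
    """Extract a (2*half+1)x(2*half+1) window, or None near the border."""
    r, c = int(round(row)), int(round(col))
    if r < half or c < half or r + half >= image.shape[0] or c + half >= image.shape[1]:
        return None
    return image[r - half:r + half + 1, c - half:c + half + 1]

def vll_height(X, Y, images, cameras, project, z_min, z_max, dz):
    """Scan candidate heights Z along the vertical line at ground position
    (X, Y); for each Z, project the point into every image and keep the Z
    whose windows agree best over all image pairs."""
    best_z, best_score = None, -np.inf
    for Z in np.arange(z_min, z_max, dz):
        wins = []
        for img, cam in zip(images, cameras):
            row, col = project(cam, X, Y, Z)  # hypothetical collinearity projection
            w = window(img, row, col)
            if w is not None:
                wins.append(w)
        if len(wins) < 2:
            continue  # the point must be visible in at least two images
        pairs = [ncc(wins[i], wins[j])
                 for i in range(len(wins)) for j in range(i + 1, len(wins))]
        score = float(np.mean(pairs))
        if score > best_score:
            best_z, best_score = Z, score
    return best_z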
Abstract:
Graduate Program in Information Science - FFC
Abstract:
Effects of roads on wildlife and its habitat have been measured using metrics such as nearest road distance, road density, and effective mesh size. In this work we introduce two new indices: (1) the Integral Road Effect (IRE), which measures the summed effect of the points of a road at a fixed point in the forest; and (2) the Average Value of the Infinitesimal Road Effect (AVIRE), which measures the average effect of the roads at that point. IRE is formally defined as the line integral of a special function (the infinitesimal road effect) along the curves that model the roads, whereas AVIRE is the quotient of IRE by the length of the roads. Combining tools of the ArcGIS software with a numerical algorithm, we calculated these and other road and habitat cover indices at a sample of points in a human-modified landscape in the Brazilian Atlantic Forest, where data on the abundance of two groups of small mammals (forest specialists and habitat generalists) were collected in the field. We then compared, through the Akaike Information Criterion (AIC), a set of candidate regression models to explain the variation in small mammal abundance, including models with our two new road indices (AVIRE and IRE), models with other road effect indices (nearest road distance, mesh size, and road density), and reference models (containing only habitat indices, or only the intercept without the effect of any variable). Compared to the other road effect indices, AVIRE showed the best performance in explaining the abundance of forest specialist species, whereas nearest road distance performed best for generalist species. AVIRE and habitat together were included in the best model for both small mammal groups; that is, higher abundance of specialist and generalist small mammals occurred where the average road effect was lower (smaller AVIRE) and habitat cover was higher. Moreover, unlike the other road effect indices (except mesh size), AVIRE was not significantly correlated with the habitat cover of specialists and generalists, which allows the effect of roads to be separated from the effect of habitat on small mammal communities. We suggest that the proposed indices and GIS procedures could also be useful for describing other spatial ecological phenomena, such as edge effects in habitat fragments.
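In LaTeX notation, the two indices can be written as follows (a minimal formalisation consistent with the definitions above; the symbol f for the infinitesimal road effect and \Gamma for the union of curves modelling the roads are notational assumptions):

\mathrm{IRE}(P) = \int_{\Gamma} f(P, s)\,\mathrm{d}s, \qquad \mathrm{AVIRE}(P) = \frac{\mathrm{IRE}(P)}{L(\Gamma)}, \quad L(\Gamma) = \int_{\Gamma} \mathrm{d}s

where P is the fixed point in the forest and L(\Gamma) is the total length of the roads.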
Abstract:
We investigate the strong magnetic and gravity anomalies of the Goias Alkaline Province (GAP), a region of Late Cretaceous alkaline magmatism along the northern border of the Parana Basin, Brazil. The alkaline complexes (eight of which are present in outcrops, two others inferred from magnetic signals) are characterized by a series of small intrusions forming almost circular magnetic and gravimetric anomalies varying from -4000 to +6000 nT and from -10 to +40 mGal, respectively. We used the Aneuler method and the Analytical Signal Amplitude to obtain depth and geometry for the mapped sources from the magnetic anomaly data. These results were used as the reference models in the 3D gravity inversion. The 3D inversion results show that the alkaline intrusions have depths of 10-12 km. The intrusions in the northern GAP follow two alignments and have different sizes. In the magnetic anomaly map, the dominant lineaments correlate strongly with the extensional regimes associated with the rise of alkaline magmatism. The emplacement of these intrusions marks mechanical discontinuities and zones of weakness in the upper crust. According to the 3D inversion results, the intrusions are located within the upper crust (from the surface to 18 km depth) and have spheres as the preferred geometry. Such spherical shapes are more consistent with magma chambers than with plug intrusions. The Registro do Araguaia anomaly (approximately 15 by 25 km) has a particular magnetic signature indicating that its top is deeper than 1500 m. North of this circular anomaly are lineaments with structural indices indicating contacts at their edges and dikes/sills in their interiors. The results of the 3D inversion of magnetic and gravity data suggest that the Registro do Araguaia is the largest body in the area, reaching 18 km depth and indicating a circular layered structure.
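For reference, the analytic signal amplitude mentioned above is conventionally computed from the gradients of the total-field anomaly T (this is the standard textbook definition, not a detail taken from this study):

|A(x, y)| = \sqrt{\left(\frac{\partial T}{\partial x}\right)^{2} + \left(\frac{\partial T}{\partial y}\right)^{2} + \left(\frac{\partial T}{\partial z}\right)^{2}}

Its maxima lie over the edges of magnetic sources regardless of magnetization direction, which is what makes it useful for estimating source position and depth.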
Abstract:
Several practical obstacles in data handling and evaluation complicate the use of quantitative localized magnetic resonance spectroscopy (qMRS) in routine clinical MR examinations. To overcome these obstacles, a clinically feasible MR pulse sequence protocol based on standard available MR pulse sequences for qMRS has been implemented, along with functionalities newly added to the free software package jMRUI-v5.0, to make qMRS attractive for clinical routine. This enables (a) easy and fast DICOM data transfer between the MR console and the qMRS computer, (b) visualization of combined MR spectroscopy and imaging, (c) creation and network transfer of spectroscopy reports in DICOM format, (d) integration of advanced water reference models for absolute quantification, and (e) setup of databases containing normal metabolite concentrations of healthy subjects. To demonstrate the workflow of qMRS using these implementations, databases of normal metabolite concentrations in different regions of brain tissue were created using spectroscopic data acquired from 55 normal subjects (age range 6-61 years) on 1.5 T and 3 T MR systems, and the workflow is illustrated in one clinical case of a typical brain tumor (primitive neuroectodermal tumor). The MR pulse sequence protocol and the newly implemented software functionalities facilitate the incorporation of qMRS and of reference normal metabolite concentration data into daily clinical routine.
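As an illustration of the water reference models mentioned in (d), a commonly used simplified form of water-referenced absolute quantification is sketched below; the exact relaxation and compartment corrections applied in jMRUI are not stated in the abstract, so f_relax stands in for them:

C_{\mathrm{met}} = C_{\mathrm{H_2O}} \cdot \frac{S_{\mathrm{met}}}{S_{\mathrm{H_2O}}} \cdot \frac{N_{\mathrm{H_2O}}}{N_{\mathrm{met}}} \cdot f_{\mathrm{relax}}

where S are the fitted signal amplitudes of the metabolite and of unsuppressed water, N the numbers of contributing protons (N_{H_2O} = 2), C_{H_2O} the water concentration in tissue, and f_{relax} a correction for relaxation attenuation.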
Abstract:
Currently, observations of space debris are primarily performed with ground-based sensors. These sensors have a detection limit of a few centimetres in diameter for objects in Low Earth Orbit (LEO) and of about two decimetres in diameter for objects in Geostationary Orbit (GEO). The few space-based debris observations stem mainly from in-situ measurements and from the analysis of returned spacecraft surfaces, both of which provide information about mostly sub-millimetre-sized debris particles. As a consequence, the population of centimetre- and millimetre-sized debris objects remains poorly understood. The development, validation and improvement of debris reference models drive the need for measurements covering the whole diameter range. In 2003 the European Space Agency (ESA) initiated a study entitled “Space-Based Optical Observation of Space Debris”. The first tasks of the study were to define user requirements and to develop an observation strategy for a space-based instrument capable of observing uncatalogued millimetre-sized debris objects. Only passive optical observations were considered, focussing on mission concepts for the LEO and GEO regions, respectively. Starting from the requirements and the observation strategy, an instrument system architecture and an associated operations concept were elaborated. The instrument system architecture covers the telescope, camera and onboard processing electronics. The proposed telescope is a folded Schmidt design, characterised by a 20 cm aperture and a large field of view of 6°. The camera design is based on the use of either a frame-transfer charge-coupled device (CCD) or a cooled hybrid sensor with fast read-out; a four-megapixel sensor is foreseen. For the onboard processing, a scalable architecture has been selected. Performance simulations have been executed for the system as designed, focussing on the orbit determination of observed debris particles and on the analysis of the object detection algorithms. In this paper we present some of the main results of the study. A short overview of the user requirements and observation strategy is given, the architectural design of the instrument is discussed, and the main trade-offs are outlined. An insight into the results of the performance simulations is provided.
Abstract:
The electroencephalogram (EEG) signal is one of the most widely used signals in the biomedical field because of the rich information it carries about human tasks. This study describes a new approach based on (i) building reference models from a set of time series through the analysis of the events they contain, which is suitable for domains where the relevant information is concentrated in specific regions of the series, known as events; to handle them, each event is characterized by a set of attributes; and (ii) applying the discrete wavelet transform to the EEG data to extract temporal information in the form of changes in the frequency domain over time, that is, to extract the non-stationary signals embedded in the noisy background of the human brain. The performance of the model was evaluated in terms of training performance and classification accuracy, and the results confirmed that the proposed scheme has potential for classifying EEG signals.
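A minimal Python sketch of the wavelet step, assuming the PyWavelets package, a Daubechies-4 mother wavelet and four decomposition levels (illustrative choices; the abstract does not state which were used):

import numpy as np
import pywt  # PyWavelets

def dwt_features(eeg_channel, wavelet="db4", level=4):
    """Decompose one EEG channel with the discrete wavelet transform and
    summarize each sub-band with simple statistics (mean absolute value,
    standard deviation, energy)."""
    coeffs = pywt.wavedec(eeg_channel, wavelet, level=level)  # [cA4, cD4, cD3, cD2, cD1]
    feats = []
    for band in coeffs:
        feats.extend([np.mean(np.abs(band)), np.std(band), np.sum(band ** 2)])
    return np.array(feats)

# Example: a 1-second synthetic segment sampled at 256 Hz.
segment = np.random.randn(256)
print(dwt_features(segment).shape)  # (15,) = 5 sub-bands x 3 statistics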
Abstract:
The Semantics Difficulty Model (SDM) is a model that measures the difficulty of introducing semantic technology into a company. SDM manages three descriptions of stages, which we will refer to as "snapshots": a company semantic snapshot, a data snapshot and a semantic application snapshot. Understanding a priori the complexity of introducing semantics into a company is important because it allows the organization to take early decisions, thus saving time and money, mitigating risks and improving innovation, time to market and productivity. SDM works by measuring the Euclidean distance between each initial snapshot and its reference model (the company semantic snapshot reference model, the data snapshot reference model, and the semantic application snapshot reference model). The difficulty level is "not at all difficult" when the distance is small and becomes "extremely difficult" when the distance is large. SDM has been tested experimentally with 2000 simulated companies with different arrangements and initial stages. The output is measured on five linguistic values: "not at all difficult", "slightly difficult", "averagely difficult", "very difficult" and "extremely difficult". As the preliminary results of our SDM simulation indicate, transforming a search application so that it integrates data from different sources with semantics is "slightly difficult", in contrast with data and opinion extraction applications, for which it is "very difficult".
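A minimal Python sketch of SDM's distance computation, assuming snapshots are encoded as numeric feature vectors; the encoding and the cut-offs between linguistic values are hypothetical, as the abstract does not specify them:

import numpy as np

LEVELS = ["not at all difficult", "slightly difficult", "averagely difficult",
          "very difficult", "extremely difficult"]

def difficulty(snapshot, reference, cutoffs=(0.5, 1.0, 2.0, 4.0)):
    """Euclidean distance between an initial snapshot and its reference
    model, mapped onto SDM's five linguistic difficulty values.
    The cut-offs are hypothetical; the paper does not publish them."""
    d = float(np.linalg.norm(np.asarray(snapshot, dtype=float)
                             - np.asarray(reference, dtype=float)))
    for level, cut in zip(LEVELS, cutoffs):
        if d <= cut:
            return level, d
    return LEVELS[-1], d

# Hypothetical company-semantic snapshot compared with its reference model.
print(difficulty([0.2, 0.8, 0.1], [0.3, 0.7, 0.2]))  # ('not at all difficult', ~0.17)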
Abstract:
This final-year project deals with Knowledge Discovery in numerical time series, addressing time series analysis from the viewpoint of the semantics of the series. Most of the research conducted to date in the field of time series analysis proposes analysing the values of the series numerically. This provides good results but prevents the conclusions from being formulated in a way that allows the results to be justified and interpreted. The purpose of this project is therefore to create an application that allows time series to be analysed from a qualitative point of view rather than a quantitative one. This way, all the relevant elements of the time series are captured and can serve as the basis for future studies. The first step towards this objective is the design of a mechanism to extract from the time series the information that is of interest for its analysis. To do this, the set of relevant behaviours of the domain is first formalized; these will be the symbols shown in the application's output. The method that has been designed and implemented thus transforms a numerical time series into a symbolic sequence that captures all the semantics of the original time series and is more intuitive and easier to interpret. Once a mechanism for transforming numerical series into symbolic sequences is available, any analysis task can be posed over those symbol sequences. Although this project does not cover the post-analysis of these series, it proposes different directions for future work: for instance, measuring the similarity between two symbolic sequences as a starting point for comparison tasks, or creating reference models for further analysis of time series.
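A minimal Python sketch of such a numeric-to-symbolic transformation, using a hypothetical three-symbol alphabet (rise, fall, stable) over fixed-length segments; the project's actual domain symbols and segmentation are not reproduced here:

import numpy as np

def symbolize(series, seg_len=10, eps=0.05):
    """Map each fixed-length segment of a numeric series to a symbol
    according to the sign of its least-squares slope:
    'R' (rise), 'F' (fall) or 'S' (stable)."""
    symbols = []
    for start in range(0, len(series) - seg_len + 1, seg_len):
        seg = series[start:start + seg_len]
        slope = np.polyfit(np.arange(seg_len), seg, 1)[0]
        if slope > eps:
            symbols.append("R")
        elif slope < -eps:
            symbols.append("F")
        else:
            symbols.append("S")
    return "".join(symbols)

# Two periods of a sine wave become an alternating rise/fall pattern.
t = np.linspace(0, 4 * np.pi, 200)
print(symbolize(np.sin(t)))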
Abstract:
The solutions for coping with the new challenges that societies face nowadays involve providing smarter everyday systems. To achieve this, technology has to evolve to leverage automatic interactions among physical systems, with less human intervention. Technological paradigms like the Internet of Things (IoT) and Cyber-Physical Systems (CPS) provide reference models, architectures, approaches and tools that support cross-domain solutions. Thus, CPS-based solutions will be applied in different application domains, such as e-Health, Smart Grid and Smart Transportation, to ensure the expected response from a complex system that relies on the smooth interaction and cooperation of diverse networked physical systems. Wireless Sensor Networks (WSNs) are a well-known wireless technology and often form part of larger CPSs. A WSN monitors a physical system or object (e.g., the environmental condition of a cargo container) and relays the data to the targeted processing element. Reliable communication and restrained energy consumption are expected features of a WSN. This paper presents the results obtained in a real WSN deployment, based on SunSPOT nodes, which carries out a fuzzy-based control strategy to improve energy consumption while keeping communication reliability and computational resource usage within bounds.
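A minimal pure-Python sketch of a fuzzy control step of the kind described, with hypothetical inputs (battery level and link quality in [0, 1]) and a radio duty cycle as output; the actual SunSPOT implementation and its rule base are not reproduced here:

def falling(x, a, b):
    """Shoulder membership: 1 below a, 0 above b, linear in between."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def rising(x, a, b):
    """Shoulder membership: 0 below a, 1 above b, linear in between."""
    return 1.0 - falling(x, a, b)

def duty_cycle(battery, link_quality):
    """One Mamdani-style step (all shapes and rules hypothetical): fuzzify
    both inputs, fire four rules, and defuzzify by the weighted average of
    each rule's recommended duty cycle. Low battery pushes the duty cycle
    down to save energy; a weak link pushes it up to preserve reliability."""
    rules = [
        # (firing strength, recommended duty cycle)
        (min(falling(battery, 0.2, 0.5), rising(link_quality, 0.5, 0.8)), 0.05),
        (min(falling(battery, 0.2, 0.5), falling(link_quality, 0.3, 0.6)), 0.30),
        (min(rising(battery, 0.4, 0.7), falling(link_quality, 0.3, 0.6)), 0.80),
        (min(rising(battery, 0.4, 0.7), rising(link_quality, 0.5, 0.8)), 0.40),
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.40  # fall back to a nominal duty cycle

# Low battery with a good link: the node sleeps most of the time.
print(duty_cycle(battery=0.25, link_quality=0.9))  # ~0.05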
Abstract:
In this article we address the meaning of the adoptive family through discourse analysis of the autobiographical accounts of Spanish adoptive mothers and fathers. In a context devoid of an adoptive culture, adoptive families publish narratives in order to be regarded as "normal" while, in the absence of reference models, they define their family model by blurring the established family archetype. Drawing on the biographical method, we apply a twofold sociological exercise of (1) ideological deconstruction of the hegemonic family model based on (2) the construction of the meaning that adoptive fathers and mothers give to their family. Postmodern family theories and poststructuralist feminist theories frame the gender-sensitive critical discourse analysis with which the study of these singular personal documents is approached.
Abstract:
Purpose – The purpose of this paper is to outline a seven-phase simulation conceptual modelling procedure that incorporates existing practice and embeds a process reference model (i.e. SCOR). Design/methodology/approach – An extensive review of the simulation and SCM literature identifies a set of requirements for a domain-specific conceptual modelling procedure. The associated design issues for each requirement are discussed, and the utility of SCOR in the process of conceptual modelling is demonstrated using two development cases. Ten key concepts are synthesised and aligned to a general process for conceptual modelling. Further work is outlined to detail, refine and test the procedure with different process reference models in different industrial contexts. Findings – Simulation conceptual modelling is often regarded as the most important yet least understood aspect of a simulation project (Robinson, 2008a). Even today, there has been little research into guidelines to aid in the creation of a conceptual model. Design issues are discussed for building an ‘effective’ conceptual model, and the domain-specific requirements for modelling supply chains are addressed. The ten key concepts are incorporated to aid in describing the supply chain problem (i.e. the components and relationships that need to be included in the model), the model content (i.e. rules for determining the simplest model boundary and level of detail with which to implement the model) and model validation. Originality/value – The paper addresses Robinson's (2008a) call for research into defining and developing new approaches for conceptual modelling and Manuj et al.'s (2009) discussion on improving the rigour of simulation studies in SCM. It is expected that more detailed guidelines will yield benefits to both expert modellers (i.e. averting typical modelling failures) and novice modellers (i.e. guided practice and less reliance on hopeful intuition).
Abstract:
The literature on preferences for redistribution has paid little attention to the effect of social mobility on the demand for redistribution, in contrast with the literature on class voting, where studies of the effect of social mobility have been very common. Some works have addressed this issue, but no systematic test of the hypotheses connecting social mobility and preferences has been carried out. In this paper we use diagonal reference models to estimate the effect of origin and destination class on preferences for redistribution in a sample of European countries, using data from the European Social Survey. Our findings indicate that social origin matters only to a small extent in explaining preferences, as newcomers tend to adopt the preferences of the destination class. Moreover, we have found only limited evidence supporting the acculturation hypothesis and no support for the status maximization hypothesis. Furthermore, the effect of social origin varies largely between countries. In a second step of the analysis we investigate which national factors explain this variation. The empirical evidence we present leads us to conclude that high rates of upward social mobility sharply reduce the effect of social origin on preferences for redistribution.
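For readers unfamiliar with diagonal reference models, a common specification (stated generically here; the paper's exact covariates are not reproduced) models the outcome of a mover from origin class i to destination class j as a weighted mixture of the two diagonal (immobile-class) means:

y_{ij} = p\,\mu_{ii} + (1 - p)\,\mu_{jj} + \sum_{k}\beta_{k}x_{k} + \varepsilon_{ij}

where \mu_{ii} and \mu_{jj} are the estimated preference levels of the stable members of classes i and j, p weights the influence of origin relative to destination, and the x_{k} are individual-level covariates.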
Abstract:
This research examines the production of the Italian bibliographic periodical press during the seventeenth and eighteenth centuries. On the one hand, it aims to reconstruct its historical course through the collection, selection and analysis of the main surviving evidence; on the other, to investigate the different forms and physiognomies it assumed over time, as well as the ways in which the notitia librorum was delivered. This first line of inquiry is complemented by a second one, based on the development of two descriptive models. The first is aimed at recording the main general data and formal features of a periodical. The second represents an attempt at an in-depth indexing and analysis of the contributions offered by two sample periodicals taken as reference models: La Galleria di Minerva, for the two-year period 1696-1697, and the Giornale della letteratura italiana (Mantua, 1793-1795). The intent is to reconstruct, also through a process of keyword formulation, the main themes and interests that emerged from these publications, and thereby to show the representative and identifying value of the bibliographic periodical for its erudite context, in its role as an information source in which the main scientific and cultural concerns of the period were mirrored.
Abstract:
Nowadays, a significant increase in the demand for interoperable systems for exchanging data in collaborative business environments has been observed, and cooperation agreements between the enterprises involved have consequently come to the fore. However, because even within a single community or domain there is a wide variety of knowledge representations that are not semantically coincident, interoperability problems arise in the enterprises' information systems that need to be addressed. Moreover, most organizations face further problems with their information systems: (1) domain knowledge is not easily accessible to all the stakeholders (even intra-enterprise); (2) domain knowledge is not represented in a standard format; and (3) even when it is available in a standard format, it is not supported by semantic annotations or described using a common and understandable lexicon. This dissertation proposes an approach for the establishment of an enterprise reference lexicon from business models. It addresses the automation of the mapping of information models for the construction of the reference lexicon. It aggregates a formal and conceptual representation of the business domain with a clear definition of the lexicon used, to facilitate an overall understanding by all the stakeholders involved, including non-IT personnel.