919 results for Data encryption (Computing)
Abstract:
Aquatic plants play a fundamental role in the balance of ecosystems, but their unbalanced growth can obstruct channels, dams, and reservoirs and affect multiple uses of water. For submerged aquatic plants, applying control measures becomes more complex because of the difficulty of mapping and volumetrically quantifying the colonized areas. In such situations, hydroacoustic data are considered capable of enabling the mapping and measurement of these areas, supporting the design of sustainable management proposals for this type of aquatic vegetation. Thus, the present work used acoustic data and the kriging technique to perform spatial inference of the biovolume of submerged aquatic plants. The data were obtained in three echosounder surveys carried out in a study area located on the Paraná River, characterized by conditions favorable to the proliferation of submerged aquatic vegetation and by difficult navigation. To delimit the areas characterized by the presence of submerged aquatic plants, a high-spatial-resolution WorldView-2 multispectral image was used. The biovolume of the submerged aquatic plants in the areas of occurrence of the phenomenon was mapped by inferring biovolume through kriging and slicing the inferred values into 15% intervals. From the resulting map, it was possible to identify the locations with the highest concentration of submerged macrophytes, with a predominance of biovolume values between 15-30% and 30-45%, confirming the feasibility of using kriging for spatial inference of biovolume from georeferenced echosounder measurements with the support of a high-spatial-resolution image.
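As a hedged illustration of the interpolation step described above, the sketch below performs ordinary kriging of point biovolume measurements and slices the resulting surface into 15% classes. It assumes the pykrige package and hypothetical arrays x, y, and biovolume standing in for georeferenced echosounder readings; it is a minimal sketch, not the authors' implementation.

import numpy as np
from pykrige.ok import OrdinaryKriging  # assumes pykrige is installed

# Hypothetical georeferenced echosounder readings: easting, northing, biovolume (%)
rng = np.random.default_rng(0)
x = rng.uniform(0, 1000, 200)
y = rng.uniform(0, 1000, 200)
biovolume = rng.uniform(0, 90, 200)

# Ordinary kriging with a spherical variogram model (an assumed choice)
ok = OrdinaryKriging(x, y, biovolume, variogram_model="spherical")

# Predict on a regular grid covering the study area
gridx = np.arange(0, 1000, 10.0)
gridy = np.arange(0, 1000, 10.0)
z, ss = ok.execute("grid", gridx, gridy)  # kriged surface and kriging variance

# Slice the inferred biovolume into 15% intervals (0-15, 15-30, ...)
classes = np.digitize(np.asarray(z), bins=np.arange(15, 105, 15))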
Abstract:
This paper proposes a methodology for the automatic extraction of building roof contours from a Digital Elevation Model (DEM), which is generated through the regularization of an available laser point cloud. The methodology is based on two steps. First, in order to detect high objects (buildings, trees, etc.), the DEM is segmented through a recursive splitting technique followed by a Bayesian merging technique. The recursive splitting technique uses the quadtree structure to subdivide the DEM into homogeneous regions. In order to minimize the fragmentation commonly observed in the results of recursive splitting segmentation, a region merging technique based on the Bayesian framework is applied to the previously segmented data. The high-object polygons are then extracted by vectorization and polygonization techniques. Second, the building roof contours are identified among all the high objects extracted previously. Taking into account some roof properties and feature measurements (e.g., area, rectangularity, and angles between principal axes of the roofs), an energy function was developed based on the Markov Random Field (MRF) model. The solution of this function is a polygon set corresponding to building roof contours and is found by a minimization technique, such as the Simulated Annealing (SA) algorithm. Experiments carried out with laser scanning DEMs showed that the methodology works properly, as it delivered roof contours with approximately 90% shape accuracy and no false positives.
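A hedged sketch of the recursive quadtree splitting step: a square DEM window is split into four quadrants whenever its height variation exceeds a homogeneity threshold. The standard-deviation criterion and the threshold value are illustrative assumptions, not the paper's exact homogeneity test.

import numpy as np

def quadtree_split(dem, r0, c0, size, threshold, leaves):
    """Recursively split a square DEM window until each leaf is homogeneous."""
    window = dem[r0:r0 + size, c0:c0 + size]
    # Homogeneity test: assumed here to be the standard deviation of heights
    if size <= 2 or window.std() <= threshold:
        leaves.append((r0, c0, size))  # homogeneous leaf region
        return
    half = size // 2
    for dr, dc in ((0, 0), (0, half), (half, 0), (half, half)):
        quadtree_split(dem, r0 + dr, c0 + dc, half, threshold, leaves)

# Hypothetical 256 x 256 DEM of heights in metres
dem = np.random.rand(256, 256) * 10
leaves = []
quadtree_split(dem, 0, 0, 256, threshold=0.5, leaves=leaves)
# A Bayesian region-merging pass would then regroup adjacent similar leaves.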
Abstract:
This research presents a methodology for predicting building shadows cast on urban roads in high-resolution aerial imagery. Shadow elements can be used in the modeling of contextual information, whose use has become increasingly common in complex image analysis processes. The proposed methodology consists of three sequential steps. First, the building roof contours are manually extracted from an intensity image generated by the transformation of a digital elevation model (DEM) obtained from airborne laser scanning data. Similarly, the roadside contours are extracted from the radiometric information of the laser scanning data. Second, the roof contour polygons are projected onto the adjacent roads by parallel projection straight lines, whose directions are computed from the solar ephemeris, which depends on the aerial image acquisition time. Finally, the parts of the shadow polygons that are free from building perspective obstructions are determined, giving rise to new shadow polygons. The results obtained in the experimental evaluation of the methodology showed that the method works properly, since it allowed the prediction of shadows in high-resolution imagery with high accuracy and reliability.
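As a hedged sketch of the parallel-projection step, the snippet below displaces each roof vertex along the sun direction by an amount proportional to the roof height. The solar azimuth and elevation are assumed to come from an ephemeris for the image acquisition time, and the azimuth convention (clockwise from north) is an assumption.

import math

def project_shadow(roof_vertices, roof_height, sun_azimuth_deg, sun_elevation_deg):
    """Project roof vertices onto the ground plane along the sun direction.

    roof_vertices: list of (easting, northing) ground coordinates of the outline.
    Horizontal shadow offset is height / tan(elevation), oriented away from
    the sun (azimuth measured clockwise from north, an assumption here).
    """
    az = math.radians(sun_azimuth_deg)
    offset = roof_height / math.tan(math.radians(sun_elevation_deg))
    # Shadow falls opposite to the direction of the sun
    dx = -offset * math.sin(az)
    dy = -offset * math.cos(az)
    return [(x + dx, y + dy) for x, y in roof_vertices]

# Hypothetical flat roof 10 m high, sun at azimuth 120 deg, elevation 55 deg
shadow = project_shadow([(0, 0), (20, 0), (20, 12), (0, 12)], 10.0, 120.0, 55.0)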
Abstract:
In this paper, a methodology is proposed for the geometric refinement of laser scanning building roof contours using high-resolution aerial images and Markov Random Field (MRF) models. The proposed methodology assumes that the 3D description of each building roof reconstructed from the laser scanning data (i.e., a polyhedron) is topologically correct and that it is only necessary to improve its accuracy. Since roof ridges are accurately extracted from laser scanning data, our main objective is to use high-resolution aerial images to improve the accuracy of the roof outlines. To meet this goal, the available roof contours are first projected onto the image space. After that, the projected polygons and the straight lines extracted from the image are used to establish an MRF description, which is based on relations (relative length, proximity, and orientation) between the two sets of straight lines. The energy function associated with the MRF is minimized by a modified version of the brute-force algorithm, resulting in a grouping of straight lines for each roof object. Finally, each grouping of straight lines is topologically reconstructed based on the topology of the corresponding laser scanning polygon projected onto the image space. Preliminary results showed that the proposed methodology is promising, since most sides of the refined polygons are geometrically better than the corresponding projected laser scanning straight lines.
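A hedged sketch of the line-matching idea behind the MRF description: each projected polygon side is scored against candidate image lines by a weighted combination of relative length, proximity, and orientation terms, and the lowest-energy candidate is kept by brute force. The weights and the exact form of each term are illustrative assumptions, not the paper's energy function.

import math

def line_energy(poly_line, img_line, w_len=1.0, w_prox=1.0, w_ori=1.0):
    """Energy between a projected polygon side and an image line (lower is better)."""
    (p1, p2), (q1, q2) = poly_line, img_line
    len_p, len_q = math.dist(p1, p2), math.dist(q1, q2)
    rel_length = abs(len_p - len_q) / max(len_p, len_q)        # relative length term
    mid_p = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
    mid_q = ((q1[0] + q2[0]) / 2, (q1[1] + q2[1]) / 2)
    proximity = math.dist(mid_p, mid_q)                        # midpoint distance term
    ang = lambda a, b: math.atan2(b[1] - a[1], b[0] - a[0])
    d = abs(ang(p1, p2) - ang(q1, q2)) % math.pi
    orientation = min(d, math.pi - d)                          # undirected angle term
    return w_len * rel_length + w_prox * proximity + w_ori * orientation

def best_match(poly_line, image_lines):
    """Brute-force selection of the image line minimizing the energy."""
    return min(image_lines, key=lambda l: line_energy(poly_line, l))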
Abstract:
In this work, a method is proposed to allow the indirect orientation of images using photogrammetric control extracted through the integration of data derived from Photogrammetry and the Light Detection and Ranging (LiDAR) system. The photogrammetric control is obtained by using an inverse photogrammetric model, which allows the projection of image-space straight lines onto the object space. This mathematical model is developed based on the intersection between the collinearity-based straight line and a DSM of the region, derived from LiDAR data. The mathematical model used in the indirect orientation of the image is known as the equivalent planes model. It is based on the equivalence between the vector normal to the projection plane in the image space and the vector normal to the rotated projection plane in the object space. The goal of this work is to verify the quality, efficiency, and potential of the photogrammetric control straight lines obtained with the proposed method when applied to the indirect orientation of images. The quality of the generated photogrammetric control was statistically evaluated, and the results showed that the proposed method is promising and has potential for the indirect orientation of images.
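A hedged sketch of the ray-DSM intersection behind the inverse model: a collinearity ray from the perspective center is marched forward until its height drops below the DSM surface, giving the object-space point. Nearest-cell DSM sampling, the step size, and the grid orientation are illustrative assumptions.

import numpy as np

def ray_dsm_intersection(origin, direction, dsm, cell_size, step=0.5, max_dist=2000.0):
    """March a collinearity ray until it passes below the DSM surface.

    origin: (X, Y, Z) perspective centre; direction: unit vector of the ray.
    dsm: 2D array of heights; cell_size: ground sample distance in metres.
    """
    origin, direction = np.asarray(origin), np.asarray(direction)
    for t in np.arange(0.0, max_dist, step):
        x, y, z = origin + t * direction
        col, row = int(x / cell_size), int(y / cell_size)  # nearest-cell sampling
        if 0 <= row < dsm.shape[0] and 0 <= col < dsm.shape[1]:
            if z <= dsm[row, col]:
                return np.array([x, y, dsm[row, col]])  # ground point on the ray
    return None  # no intersection within max_dist

# Projecting both endpoints of an image-space line this way yields an
# object-space control line for the indirect orientation step.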
Abstract:
This thesis studies the use of argumentation as a discursive element in digital media, particularly blogs. We analyzed the blog "Fatos e Dados" [Facts and Data], created by Petrobras in the context of allegations of corruption that culminated in the installation of a Parliamentary Commission of Inquiry in the Brazilian Congress to investigate the company. We seek to understand the influence that the discursive elements triggered by argumentation exert on blogs and on agenda-setting. To this end, we work with notions of argumentation in dialogue with questions of language and discourse drawn from the work of Charaudeau (2006), Citelli (2007), Perelman & Olbrechts-Tyteca (2005), Foucault (2007, 2008a), Bakhtin (2006), and Breton (2003). We also observe our subject from the perspective of social representations, seeking to clarify concepts such as public image and the use of representations as argumentative elements, drawing on the work of Moscovici (2007). We further consider reflections on hypertext and the context of cyberculture, with authors such as Levy (1993, 1999, 2003), Castells (2003), and Chartier (1999, 2002), and issues of discourse analysis, especially in Orlandi (1988, 1989, 1996, 2001) and Foucault (2008b). We examined the 118 posts published in the first 30 days of the blog "Fatos e Dados" (between 2 June and 1 July 2009) and analyzed the top ten in detail. A corporate blog aims to defend the points of view and the public image of the organization and therefore uses elements of social representations to build its arguments. The blog puts forward, as its main news criterion, including in the posts we reviewed, the credibility of Petrobras as the source of information; the news values of innovation and relevance also appear in the posts analyzed. The controversy between the blog and the press resulted from the media's inadequacy and lack of preparation in dealing with a corporate blog that was able to exploit the liberation of the emission pole characteristic of cyberculture. The blog is a discursive manifestation in a concrete historical situation, whose understanding and attribution of meaning take place through the social relations between subjects who, most of the time, are in discursive and ideological dispute with one another; this dispute also affects the movements of reading and the production of readings. We conclude that the intersubjective relationships that occur in blogs change, through the argumentative techniques used, the notions of news criteria, interfering with the news agenda and the organization of information in digital media outlets. The influence that the discursive elements triggered by argumentation exert on digital media is also clear, as they resize and reframe the frames of reality conveyed by those media for subject-readers. Blogs have become part of the information landscape with the emergence of the Internet and are able to interfere more effectively in the media agenda through the conscious use of argumentative elements in their posts.
Abstract:
The aim of this study is to analyze the effect of migration on the income differential between northeastern migrants and non-migrants and thereby verify whether immigrants constitute a positively selected group. The hypothesis tested is that the presence of these immigrants affects income inequality in the receiving region, which may explain part of the persistently high inequality in the Brazilian Northeast. The study is based on the migration selectivity literature introduced by Roy (1951), Borjas (1987), and Chiswick (1999). The Mincer (1974) wage equation is estimated by OLS, using information from the sample microdata of the 2010 Census of the Brazilian Institute of Geography and Statistics (IBGE). The results corresponding to the comparison of socioeconomic profiles showed that immigrants are more qualified and, on average, better paid than non-migrants. The estimated model shows that, holding all other variables constant, the income immigrants earn is 14.43% higher than that of non-migrants. Thus, there is evidence of positive selectivity in the migration directed to the Northeast.
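A hedged sketch of the estimation step: a Mincer-type log-wage regression with a migrant indicator, fitted by OLS with statsmodels. The variable names and the simulated DataFrame are hypothetical; the thesis's exact specification, controls, and census sampling weights are not reproduced here.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical microdata: schooling, experience, migrant dummy, log wage
n = 1000
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "schooling": rng.integers(0, 16, n),
    "experience": rng.integers(0, 40, n),
    "migrant": rng.integers(0, 2, n),
})
df["log_wage"] = (0.8 + 0.09 * df.schooling + 0.04 * df.experience
                  - 0.0006 * df.experience**2 + 0.135 * df.migrant
                  + rng.normal(0, 0.5, n))

# Mincer equation: ln(w) = b0 + b1*school + b2*exp + b3*exp^2 + d*migrant + u
model = smf.ols("log_wage ~ schooling + experience + I(experience**2) + migrant",
                data=df).fit()
# exp(coef) - 1 converts the dummy coefficient into a percentage differential,
# which is how a ~14.4% migrant premium would be read off.
print(np.exp(model.params["migrant"]) - 1)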
Abstract:
The objective of this work is to identify, map, and explain the evolution of soil occupation and the environmental vulnerability of the Canto do Amaro and Alto da Pedra areas, in the municipality of Mossoró-RN, based on multitemporal analysis of images from orbital remote sensors and extensive, integrated field work supported by a Geographic Information System (GIS). The use of spatial analysis techniques within the GIS, combined with the interpretation and analysis of Remote Sensing (RS) products, made it possible to achieve significant results toward the objectives of this work. To support information management, the data set obtained from varied sources and stored in a digital environment constitutes the geographic database of this research. Prior knowledge of the spectral behavior of natural and artificial targets, together with Digital Image Processing (DIP) algorithms, greatly facilitates interpretation and the search for new information at the spectral level. From these data, a varied thematic cartography was generated: maps of geology, geomorphological units, soils, vegetation, and land use and occupation. The crossing of the above-mentioned maps in the GIS environment generated the natural and environmental vulnerability maps of the Canto do Amaro and Alto da Pedra oil fields (RN), within a framework centered on the management of water and solid waste, as well as on the analysis of spatial data, thus enabling a more complex analysis of the studied area.
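A hedged sketch of the map-crossing step: co-registered thematic rasters are combined by a per-pixel weighted overlay into a vulnerability index. The class scores and weights are illustrative assumptions, not the study's legend or vulnerability model.

import numpy as np

# Hypothetical co-registered thematic rasters, coded as integer classes
rng = np.random.default_rng(3)
geology = rng.integers(1, 4, (100, 100))
soils   = rng.integers(1, 4, (100, 100))
landuse = rng.integers(1, 5, (100, 100))

# Assumed vulnerability scores per class (higher = more vulnerable)
geo_score  = np.array([0, 1, 3, 5])[geology]
soil_score = np.array([0, 2, 3, 4])[soils]
use_score  = np.array([0, 1, 2, 4, 5])[landuse]

# Weighted overlay: GIS map crossing reduced to per-pixel map algebra
weights = {"geology": 0.3, "soils": 0.3, "landuse": 0.4}
vulnerability = (weights["geology"] * geo_score
                 + weights["soils"] * soil_score
                 + weights["landuse"] * use_score)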
Abstract:
We present in this work two methods of estimation for accelerated failure time models with random effects for grouped survival data. The first method, implemented in the SAS software through the NLMIXED procedure, uses an adaptive Gauss-Hermite quadrature to determine the marginalized likelihood. The second method, implemented in the free software R, is based on the penalized likelihood method to estimate the parameters of the model. In the first case we describe the main theoretical aspects and, in the second, we briefly present the approach adopted, together with a simulation study to investigate the performance of the method. We illustrate the models using real data on the operating time of oil wells from the Potiguar Basin (RN/CE).
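A hedged sketch of the quadrature step: the random intercept of a log-normal AFT model is integrated out numerically with numpy's Gauss-Hermite nodes. The log-normal error, the single covariate, and the parameter values are illustrative assumptions, and censoring is omitted for brevity.

import numpy as np
from scipy.stats import norm

def group_marginal_loglik(log_t, x, beta, sigma, sigma_b, n_nodes=20):
    """Marginal log-likelihood of one group in a log-normal AFT model.

    log_t: log failure times of the group; x: covariate vector (same length).
    The random intercept b ~ N(0, sigma_b^2) is integrated out by Gauss-Hermite
    quadrature: int e^{-z^2} g(z) dz ~ sum_k w_k g(z_k), with b = sqrt(2)*sigma_b*z,
    so the Gaussian weight is absorbed and a 1/sqrt(pi) factor remains.
    """
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    total = 0.0
    for z, w in zip(nodes, weights):
        b = np.sqrt(2.0) * sigma_b * z
        resid = log_t - (beta[0] + beta[1] * x + b)
        total += w * np.prod(norm.pdf(resid, scale=sigma))  # likelihood given b
    return np.log(total / np.sqrt(np.pi))

# Hypothetical group: three wells, one covariate
ll = group_marginal_loglik(np.log([120.0, 90.0, 200.0]), np.array([1.0, 0.0, 1.0]),
                           beta=(4.5, 0.3), sigma=0.6, sigma_b=0.4)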
Abstract:
In this work we study the asymptotic unbiasedness and the strong and uniform strong consistency of a class of kernel estimators fn as estimators of a density function f taking values on a k-dimensional sphere.
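A hedged sketch of a kernel density estimator on the sphere, using a von Mises-Fisher-type kernel on the 2-sphere in R^3; the kernel choice and its normalizing constant are illustrative assumptions, not the exact class of estimators studied in the work.

import numpy as np

def vmf_kde(x, samples, kappa):
    """Kernel density estimate at unit vector x on the 2-sphere (in R^3).

    samples: (n, 3) array of unit vectors; kappa: concentration, which plays
    the role of an inverse bandwidth. Uses the von Mises-Fisher kernel with
    its exact normalizing constant on S^2.
    """
    c = kappa / (4.0 * np.pi * np.sinh(kappa))       # vMF normalizer on S^2
    return np.mean(c * np.exp(kappa * samples @ x))  # average of kernels

# Hypothetical sample: points clustered around the north pole
rng = np.random.default_rng(1)
pts = rng.normal(size=(500, 3)) + np.array([0.0, 0.0, 3.0])
pts /= np.linalg.norm(pts, axis=1, keepdims=True)    # project onto the sphere
density_at_pole = vmf_kde(np.array([0.0, 0.0, 1.0]), pts, kappa=20.0)

Roughly speaking, consistency of such estimators requires the concentration kappa to grow with n at a suitable rate, mirroring the usual bandwidth conditions in the Euclidean case.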
Abstract:
Among the various aspects of elderly health, oral health deserves special attention because, historically, dental services have not treated this population group as a priority. It is therefore necessary to produce a multidimensional indicator capable of measuring all the oral alterations found in an elderly person, facilitating the categorization of oral health as a whole. Such an indicator will be an important instrument for ranking care priorities for the elderly population. The present study therefore proposes the production and validation of an oral health indicator from the secondary data collected by the SB Brasil 2010 project for the 65-74 age group. The sample comprised the 7,619 individuals aged 65 to 74 who participated in the survey across the five regions of Brazil. These individuals underwent an epidemiological evaluation of their oral health conditions based on the CPO-d (DMFT), CPI, and PIP indices. In addition, the use of and need for dental prostheses were assessed, as well as social, economic, and demographic characteristics. A factor analysis identified a relatively small number of common factors through principal component analysis. After naming the factors, the factor scores were summed per individual. Finally, dichotomizing this sum yielded the proposed oral health indicator. Twelve oral health variables from the SB Brasil 2010 database and three socioeconomic and demographic variables were included in the factor analysis. Based on the Kaiser criterion, five factors were retained, explaining 70.28% of the total variance of the variables included in the model. Factor 1 alone explains 32.02% of this variance and factor 2 explains 14.78%, while factors 3, 4, and 5 explain 8.90%, 7.89%, and 6.68%, respectively. Based on the factor loadings, factor one was named "sound teeth and little prosthesis use", factor two "periodontal disease present", factor three "need for rehabilitation", and the fourth and fifth factors were named "caries" and "favorable social condition", respectively. To guarantee the representativeness of the proposed indicator, a second factor analysis was carried out on a subsample of the elderly population investigated. The applicability of the indicator was further tested through its association with other variables of the study. Finally, it is worth noting that the indicator produced here was able to aggregate diverse information about the oral health and social conditions of these individuals, translating several data points into a single simple piece of information that helps health managers see the real needs for oral health interventions in a given population.
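A hedged sketch of the factor-retention step: eigenvalues of the standardized data implement the Kaiser criterion (retain factors with eigenvalue greater than 1), and the summed factor scores are dichotomized to form the indicator. The simulated matrix, the median cut-off, and the unrotated extraction are illustrative assumptions, not the study's data or rotation choices.

import numpy as np
from sklearn.decomposition import PCA

# Hypothetical standardized matrix: rows = individuals, cols = 15 variables
rng = np.random.default_rng(2)
X = rng.normal(size=(7619, 15))

pca = PCA()
scores = pca.fit_transform(X)

# Kaiser criterion: keep components whose eigenvalue exceeds 1
eigenvalues = pca.explained_variance_
k = int(np.sum(eigenvalues > 1.0))
explained = pca.explained_variance_ratio_[:k].sum()  # share of total variance

# Sum the retained factor scores per individual and dichotomize at the median
score_sum = scores[:, :k].sum(axis=1)
indicator = (score_sum >= np.median(score_sum)).astype(int)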
Abstract:
One of the current challenges of Ubiquitous Computing is the development of complex applications, which are more than simple alarms triggered by sensors or simple systems that configure the environment according to user preferences. Such applications are hard to develop because they are composed of services provided by different middleware, requiring knowledge of the peculiarities of each of them, mainly their communication and context models. This thesis presents OpenCOPI, a platform that integrates various service providers, including context provision middleware. It provides a unified ontology-based context model, as well as an environment that enables the easy development of ubiquitous applications through the definition of semantic workflows containing an abstract description of the application. These semantic workflows are converted into concrete workflows, called execution plans. An execution plan is a workflow instance containing activities that are automated by a set of Web services. OpenCOPI supports automatic Web service selection and composition, enabling the use of services provided by distinct middleware in an independent and transparent way. Moreover, the platform supports execution adaptation in case of service failures, user mobility, and degradation of service quality. OpenCOPI is validated through the development of case studies, specifically applications for the oil industry. In addition, this work evaluates the overhead introduced by OpenCOPI, compares it with the benefits provided, and assesses the efficiency of OpenCOPI's selection and adaptation mechanisms.
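A hedged sketch of the service-selection idea: each abstract activity of a semantic workflow is bound to the concrete Web service with the best QoS score, and a failed service triggers re-selection, i.e., adaptation. The QoS attributes, weights, and class names are hypothetical illustrations, not OpenCOPI's actual API.

from dataclasses import dataclass

@dataclass
class Service:
    name: str
    availability: float   # 0..1
    latency_ms: float
    alive: bool = True

def qos_score(s: Service, w_avail=0.7, w_lat=0.3):
    """Higher is better; latency is normalized against a 1000 ms ceiling."""
    return w_avail * s.availability + w_lat * (1.0 - min(s.latency_ms, 1000) / 1000)

def select(candidates):
    """Bind an abstract activity to the best live concrete service."""
    live = [s for s in candidates if s.alive]
    return max(live, key=qos_score) if live else None

# Hypothetical candidates for one activity of an execution plan
candidates = [Service("TemperatureA", 0.99, 120), Service("TemperatureB", 0.95, 40)]
chosen = select(candidates)
chosen.alive = False            # simulated failure at runtime...
fallback = select(candidates)   # ...triggers adaptation by re-selection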