949 results for Cadastral updating


Relevance:

10.00%

Publisher:

Abstract:

This paper discusses an important issue related to the implementation and interpretation of the analysis scheme in the ensemble Kalman filter. It is shown that the observations must be treated as random variables at the analysis steps. That is, one should add random perturbations with the correct statistics to the observations and generate an ensemble of observations that is then used in updating the ensemble of model states. Traditionally, this has not been done in previous applications of the ensemble Kalman filter and, as will be shown, this has resulted in an updated ensemble with a variance that is too low. This simple modification of the analysis scheme results in a completely consistent approach if the covariance of the ensemble of model states is interpreted as the prediction error covariance, and there are no further requirements on the ensemble Kalman filter method, except for the use of an ensemble of sufficient size. Thus, there is a unique correspondence between the error statistics from the ensemble Kalman filter and the standard Kalman filter approach.
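The perturbed-observation analysis step described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the state dimension, the observation operator H, the error covariance R and the measurement values are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, N = 3, 2, 500            # state dim, obs dim, ensemble size (assumed)
H = np.array([[1., 0., 0.],
              [0., 1., 0.]])   # linear observation operator (assumed)
R = 0.25 * np.eye(m)           # observation-error covariance

# Forecast ensemble (an illustrative Gaussian sample)
A = rng.multivariate_normal(np.zeros(n), np.diag([1.0, 2.0, 0.5]), size=N).T

d = np.array([1.0, -0.5])      # the single actual measurement

# The paper's key point: perturb the observation for each member with
# noise drawn from R, so the updated ensemble carries the correct
# analysis variance instead of one that is too low.
D = d[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T

Pf = np.cov(A)                                  # ensemble forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain from the ensemble
A_upd = A + K @ (D - H @ A)                     # analysed ensemble

Pa = np.cov(A_upd)   # approximates (I - K H) Pf for large N
```

With the perturbations included, the trace of the analysed covariance matches the standard Kalman filter result up to sampling error; omitting them would shrink the spread too far.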

Relevance:

10.00%

Publisher:

Abstract:

Tensor clustering is an important tool that exploits the intrinsically rich structure of real-world multiway (tensor) datasets. In dealing with such datasets, the standard practice is to use subspace clustering based on vectorizing the multiway data. However, vectorization of tensorial data does not exploit the complete structural information. In this paper, we propose a subspace clustering algorithm that avoids any vectorization step. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between the different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates; updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms based on tensor factorization.
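A toy version of the alternating scheme can be sketched as follows. This is a simplification, not the paper's algorithm: the data and ranks are invented, a single SVD pass stands in for the closed-form mode updates, and plain 2-means stands in for the Riemannian trust-region step on the multinomial manifold.

```python
import numpy as np

rng = np.random.default_rng(1)

def unfold(T, mode):
    """Mode-n unfolding: tensor -> matrix with `mode` as rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Toy 3-way data: 20 samples stacked along the last mode, two clusters
X1 = rng.normal(0.0, 0.1, (5, 4, 10)) + 1.0
X2 = rng.normal(0.0, 0.1, (5, 4, 10)) - 1.0
T = np.concatenate([X1, X2], axis=2)           # shape (5, 4, 20)

# Closed-form (SVD) updates for the non-cluster modes; the paper
# alternates these with the membership update until convergence.
ranks = (2, 2)
U = []
for mode in range(2):
    u, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
    U.append(u[:, :ranks[mode]])

# Compress the first two modes, keeping the sample mode intact
core = np.einsum('ia,jb,ijk->abk', U[0], U[1], T)
Z = unfold(core, 2)                            # 20 samples x 4 features

# Membership update: plain 2-means here, standing in for the paper's
# trust-region optimization over the multinomial manifold
centers = Z[[0, -1]].copy()
for _ in range(10):
    labels = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([Z[labels == c].mean(axis=0) for c in (0, 1)])
```

The point of the sketch is the structure: the tensor is never vectorized, and clustering happens in the compressed last-mode unfolding.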

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this thesis is to show how vulnerability testing can be used to identify security flaws in computer networks. The goal is partly to give an accessible description of different vulnerability-testing methods and partly to present the method and results of an actual vulnerability test. A document containing the results of the test, together with solutions to the high-risk vulnerabilities found, was handed over to the organization. A further goal was to carry out and present this work in the form of a scholarly study. The problem was to perform vulnerability tests and identify vulnerabilities in the organization's network and systems. The programs had to be run under controlled circumstances so that they did not burden the network, and the vulnerability tests were conducted sequentially, since data from the survey was needed to continue the scan. A survey of the network was made, and data such as operating systems were collected in tables. A number of systems were selected from the tables and scanned with Nessus. The result was a table of the network and a table of discovered vulnerabilities, which helped the organization to remove these vulnerabilities by updating the affected computers. A wireless network with insecure WEP encryption was also detected and decrypted.
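The survey stage, finding which services answer before handing targets to a scanner such as Nessus, can be illustrated with a minimal TCP connect probe. This is a sketch, not Nessus: it probes only a listener created in the same script, in line with the thesis's requirement of controlled circumstances (never probe hosts without authorization).

```python
import socket
from contextlib import closing

def probe(host, ports, timeout=0.3):
    """TCP connect probe (the survey stage of a vulnerability test).
    Returns the subset of `ports` that accepted a connection."""
    open_ports = []
    for port in ports:
        with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Demonstrate against a listener we control on an OS-chosen free port
listener = socket.socket()
listener.bind(('127.0.0.1', 0))
listener.listen(1)
port = listener.getsockname()[1]

found = probe('127.0.0.1', [port])
listener.close()
```

A real survey would also record banners and operating-system fingerprints into the tables the thesis describes; a connect probe only establishes reachability.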

Relevance:

10.00%

Publisher:

Abstract:

Over the years, the academic literature has shown that a growing number of firms see internationalization as a means to gain and sustain competitive advantage and to increase economies of scale, and this has led many Western companies to emerging markets. In this paper we found that, among Swedish firms, mainly the MNEs have seen the Nigerian market as attractive to internationalize to, while only a few Swedish SMEs have expanded there. The research was conducted as a qualitative study using a phenomenological approach, investigating the functions of intermediaries in Swedish SMEs' internationalization to the Nigerian market. We were thus able to understand the importance and functions of the different marketing intermediaries in this process. These intermediaries equip the Swedish firms with the required objective knowledge of the Nigerian market, update them on recent developments in the opportunities and threats of the Nigerian marketing environment, and link them to the relevant government departments, distributors, agents/brokers, customers, middlemen and so on, thereby providing them with experiential knowledge. It is important for firms to have objective or pre-market knowledge of a particular market before entering it, but this knowledge is regarded as non-helpful to firms, whereas experiential knowledge, acquired over time in the market, is regarded as the helpful knowledge. It is evident that the intermediaries equip these firms with both objective and experiential knowledge. Although the opportunities in some emerging markets are very attractive, the threats in these markets are also factors firms consider before internationalizing. This is why thorough market research has to be done, so that firms can create effective marketing strategies when they expand their marketing activities to emerging markets. Despite the risks and uncertainties involved in doing business in foreign countries, companies selling global products have little choice but to internationalize their marketing operations.

Relevance:

10.00%

Publisher:

Abstract:

The rapid development of data transfer over the Internet has made it easier to send data quickly and accurately to its destination. There are many transmission media for transferring data, such as e-mail; at the same time, valuable information can easily be modified and misused through hacking. To transfer data securely, without modification, there are approaches such as cryptography and steganography. This paper deals with image steganography as well as different security issues, and gives a general overview of cryptography, steganography and digital watermarking. The problem of copyright violation of multimedia data has increased with the enormous growth of computer networks, which provide fast and error-free transmission of unauthorized, possibly manipulated, copies of multimedia information. To be effective for copyright protection, a digital watermark must be robust: difficult to remove from the object in which it is embedded despite a variety of possible attacks. To send the message safely and securely, we use invisible watermarking, embedding the message with the LSB (Least Significant Bit) steganographic technique. The standard LSB technique embeds the message in every pixel; the contribution of the proposed watermarking scheme is to embed the message only in the image edges. Even if an attacker knows that the system uses LSB, the correct message cannot be recovered. To make the system robust and secure, we add a cryptographic algorithm, the Vigenère square, so that the message is transmitted as ciphertext, which is an added advantage of the proposed system. The standard Vigenère square works only with lower-case or upper-case letters; the proposed algorithm extends it with numbers, so the crypto key can combine characters and numbers. With these modifications to the existing algorithm, and the combination of cryptography and steganography, we develop a secure and strong watermarking method. The performance of this watermarking scheme has been analyzed by evaluating the robustness of the algorithm with PSNR (Peak Signal to Noise Ratio) and MSE (Mean Square Error) against image quality for large amounts of data. The proposed encryption achieves a high PSNR of 89 dB with a small MSE of 0.0017. The proposed watermarking system therefore appears secure and robust for hiding information in any digital system, because it combines the properties of both steganography and cryptography.
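The two proposed modifications, a Vigenère square extended with digits and LSB embedding restricted to edge pixels, can be sketched as follows. This is an illustrative reconstruction: the alphabet, the stand-in pixel array and the hard-coded "edge" positions are assumptions, and a real implementation would take the edge positions from an edge detector such as Sobel or Canny.

```python
# Extended Vigenère over A-Z plus digits (the paper's extension)
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def vigenere(text, key, decrypt=False):
    """Shift each character by the key character, over the 36-symbol alphabet."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        k = ALPHABET.index(key[i % len(key)])
        out.append(ALPHABET[(ALPHABET.index(ch) + sign * k) % len(ALPHABET)])
    return "".join(out)

def embed_lsb(pixels, edge_indices, message):
    """Write message bits into the LSB of the listed 'edge' pixels only."""
    bits = [(b >> i) & 1 for b in message.encode() for i in range(8)]
    assert len(bits) <= len(edge_indices), "not enough edge pixels"
    px = list(pixels)
    for bit, idx in zip(bits, edge_indices):
        px[idx] = (px[idx] & ~1) | bit
    return px

def extract_lsb(pixels, edge_indices, nchars):
    """Read the LSBs back from the same edge positions, LSB-first per byte."""
    bits = [pixels[i] & 1 for i in edge_indices[:nchars * 8]]
    return bytes(sum(bit << i for i, bit in enumerate(bits[j:j + 8]))
                 for j in range(0, len(bits), 8)).decode()

cipher = vigenere("MEET2DAY", "KEY9")       # key mixes letters and a digit
pixels = list(range(200))                   # stand-in for grey-level pixels
edges = list(range(10, 200, 2))             # hypothetical edge positions
stego = embed_lsb(pixels, edges, cipher)
recovered = vigenere(extract_lsb(stego, edges, len(cipher)), "KEY9", decrypt=True)
```

An attacker reading LSBs from every pixel, or decrypting with the letters-only Vigenère table, would recover garbage, which is the combined protection the paper argues for.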

Relevance:

10.00%

Publisher:

Abstract:

Background: Many patients need palliative care before they die. In Sweden today there are several options for specialized palliative care, but these are not sufficient to care for all palliative patients, which means that nurses working on somatic hospital wards often meet these patients. Aim: To describe nurses' experiences of giving palliative care to patients cared for on somatic wards. Method: The study was carried out as a literature review in which 15 articles were included in the results. Data were collected from the databases Cinahl, PubMed and Web of Science. Results: Nurses experienced communication in the care of palliative patients as difficult in several ways. Fear of facing existential questions, lack of time, and dysfunctional communication between care staff were experienced as difficulties, which led nurses to feel a great need for education and professional development. Conclusion: Nurses have a great need for education, as they experience many difficulties in providing palliative care. To develop this form of care on somatic wards, more education and research in the area are needed.

Relevance:

10.00%

Publisher:

Abstract:

This thesis presents a system to recognise and classify road and traffic signs, for the purpose of developing an inventory of them that could assist highway engineers in their tasks of updating and maintaining them. It uses images taken by a camera from a moving vehicle. The system is based on three major stages: colour segmentation, recognition, and classification. Four colour segmentation algorithms are developed and tested: a shadow and highlight invariant algorithm, a dynamic threshold algorithm, a modification of de la Escalera's algorithm, and a fuzzy colour segmentation algorithm. All algorithms are tested on hundreds of images, and the shadow-highlight invariant algorithm is eventually chosen as the best performer, because it is immune to shadows and highlights and proved robust when tested under different lighting conditions, weather conditions, and times of day. A segmentation success rate of approximately 97% was achieved with this algorithm. Recognition of traffic signs is carried out using a fuzzy shape recogniser. Based on four shape measures (rectangularity, triangularity, ellipticity, and octagonality, of which octagonality is introduced in this research), fuzzy rules were developed to determine the shape of the sign. The final decision of the recogniser is based on the combination of both the colour and the shape of the sign. The recogniser was tested under a variety of conditions, giving an overall performance of approximately 88%. Classification was undertaken with a Support Vector Machine (SVM) classifier, carried out in two stages: classification of the rim's shape followed by classification of the interior of the sign. The classifier was trained and tested using binary images in addition to five different types of moments: geometric moments, Zernike moments, Legendre moments, orthogonal Fourier-Mellin moments, and binary Haar features. The performance of the SVM was tested with different features, kernels, SVM types, SVM parameters, and moment orders. The average classification rate achieved is about 97%. Binary images give the best test results, followed by Legendre moments; a linear kernel gives the best results, followed by RBF. C-SVM shows very good performance, but ν-SVM gives better results in some cases.
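Two of the four shape measures can be illustrated on synthetic binary masks. The crisp thresholds below are a stand-in for the thesis's fuzzy rules over all four measures, and the triangularity proxy (a filled triangle covers about half its bounding box) is a simplification invented for this sketch, not the thesis's measure.

```python
import numpy as np

def rectangularity(mask):
    """Region area divided by bounding-box area (1.0 for a full rectangle)."""
    ys, xs = np.nonzero(mask)
    box = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return mask.sum() / box

def triangularity(mask):
    # Crude proxy: peaks when the region fills half its bounding box
    return 1.0 - abs(rectangularity(mask) - 0.5) * 2.0

def classify(mask):
    """Crisp stand-in for the thesis's fuzzy shape rules."""
    if rectangularity(mask) > 0.85:
        return "rectangle"
    if triangularity(mask) > 0.7:
        return "triangle"
    return "other"

# Synthetic binary masks standing in for segmented sign regions
square = np.zeros((20, 20), int)
square[4:16, 4:16] = 1

triangle = np.zeros((20, 20), int)
for row in range(12):
    triangle[4 + row, 10 - row // 2:10 + row // 2 + 1] = 1
```

In the thesis, fuzzy membership functions over such measures, combined with the segmented colour, drive the final recognition decision rather than hard thresholds.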

Relevance:

10.00%

Publisher:

Abstract:

The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modelled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial, and its management is non-trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design where the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, to leverage numerous software libraries in a variety of languages, while retaining the legacy option of custom tab-separated text formats when that is the access arrangement a researcher prefers. By decoupling the data model from data persistence, it is much easier to interchangeably use, for instance, relational databases to provide stricter provenance and audit-trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text-based formats are usually inadequate; a schema derived from the CF conventions has been designed to handle time series efficiently for SWIFT.
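The decoupling of data model from on-disk persistence can be sketched as follows. The class and field names are hypothetical, not SWIFT's actual API; the point is only the design: one in-memory configuration object, several interchangeable persistence backends.

```python
import json
from dataclasses import dataclass, asdict

# The data model knows nothing about how it is stored
@dataclass
class SubareaConfig:
    name: str        # hypothetical fields for illustration
    area_km2: float
    routing: str

class JsonStore:
    """Research-friendly persistence: JSON, readable from many languages."""
    def dumps(self, cfg):
        return json.dumps(asdict(cfg))
    def loads(self, text):
        return SubareaConfig(**json.loads(text))

class TsvStore:
    """Legacy tab-separated persistence, kept as an option."""
    def dumps(self, cfg):
        return f"{cfg.name}\t{cfg.area_km2}\t{cfg.routing}"
    def loads(self, text):
        name, area, routing = text.split("\t")
        return SubareaConfig(name, float(area), routing)

cfg = SubareaConfig("upper_catchment", 42.5, "muskingum")
json_text = JsonStore().dumps(cfg)
tsv_text = TsvStore().dumps(cfg)
```

A relational-database backend for operational provenance would be a third `dumps`/`loads` pair against the same `SubareaConfig`, which is exactly the interchangeability the redesign targets.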

Relevance:

10.00%

Publisher:

Abstract:

We apply the concept of exchangeable random variables to the case of non-additive probability distributions exhibiting uncertainty aversion, in the class generated by a convex core (convex non-additive probabilities with a convex core). We are able to prove two versions of the law of large numbers (de Finetti's theorems). By making use of two definitions of independence we prove two versions of the strong law of large numbers. It turns out that we cannot assure the convergence of the sample averages to a constant. We then model the case where there is a "true" probability distribution behind the successive realizations of the uncertain random variable. In this case convergence occurs. This result is important because it renders true the intuition that it is possible "to learn" the "true" additive distribution behind an uncertain event if one observes it repeatedly (a sufficiently large number of times). We also provide a conjecture regarding the "learning" (or updating) process above, and prove a partial result for the case of the Dempster-Shafer updating rule and binomial trials.
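The learning claim can be illustrated numerically: when a "true" additive distribution sits behind the successive realizations, the sample average converges to the true probability, and the lower/upper envelope of an ε-contamination capacity (a standard example of a convex non-additive probability, used here as an assumed illustration, not the paper's specific class) brackets it. The Bernoulli parameter and contamination level are invented for the demonstration.

```python
import random

random.seed(42)

# A "true" additive distribution behind the uncertain event: Bernoulli(0.7)
p_true = 0.7
draws = [1 if random.random() < p_true else 0 for _ in range(20000)]

# Repeated observation "learns" the true distribution: the sample
# average settles near p_true as the horizon grows
averages = {n: sum(draws[:n]) / n for n in (100, 1000, 20000)}

# Epsilon-contamination capacity: its lower/upper probabilities of
# the event bracket the learned frequency
eps = 0.1
lower = (1 - eps) * averages[20000]
upper = (1 - eps) * averages[20000] + eps
```

Without a true additive distribution behind the draws, the paper shows convergence to a constant cannot be assured, which is what the interval [lower, upper] is gesturing at.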

Relevance:

10.00%

Publisher:

Abstract:

The popularization of most consumer durables in the State of São Paulo over the last twenty years has been clear and rapid. The universalization of household electricity, piped water and sewerage, and the trend toward cheaper goods, have all contributed to this. Among other effects, this popularization erodes the effectiveness of socio-economic classification scales based on the ownership of household comfort items, prompting frequent revisions and disagreements within the marketing community. This study systematizes statistics on household conditions and ownership of comfort items in the State of São Paulo, and shows how this affects material and cultural consumption. Finally, it argues that the time has come to make more and better use of variables such as education, occupation and dwelling quality, as is done in developed countries, for a fuller understanding of the changes establishing a modern consumer society in Brazil.

Relevance:

10.00%

Publisher:

Abstract:

This study analyzed how large Brazilian companies use Brazilian lato sensu graduate courses in Business Administration as a selection criterion and as a tool for training and updating executives, as well as the main characteristics expected of professionals coming from these courses. The results of the survey, conducted with Human Resources professionals at the largest Brazilian companies, indicated that most companies use these courses as a selection criterion when hiring professionals, with the institution where the course was taken even serving as a differentiator in obtaining the position. Likewise, it was concluded that most companies in this group frequently use these courses as a tool for updating and training professionals, with the course generally chosen by the professionals themselves.

Relevance:

10.00%

Publisher:

Abstract:

This dissertation seeks to introduce into the credit-analysis model of commercial banks some strategic factors that are fundamental for the sound evaluation and approval of a short-term credit operation. To this end, a rating model was developed that takes into account not only the cadastral, economic-financial and collateral analysis, but also a view of the company as a whole, giving fundamental weight to its strategy by analyzing its competitiveness and its internal and external environment, in order to arrive at a number that determines whether or not the operation is approved, within risk parameters defined by the financial institution holding the funds and in compliance with the rules of the National Financial System.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a study carried out with credit-card customers of a large retailer, to measure the risk of abandonment of a relationship that already has a purchase history. Two activities are the most important in this study: the theoretical review and the methodological procedures. The first step was understanding the problem, the importance of the theme and the definition of the research methods. The study includes a bibliographic survey covering several authors and shows that customer loyalty is the basis of sustainability and profitability for organizations in various market segments, and it examines satisfaction as the key to winning and, especially, retaining customers. To perform the study, logistic-linear models were fitted, and the best model was selected using the Kolmogorov-Smirnov (KS) test and the Receiver Operating Characteristic (ROC) curve. Cadastral and transactional data on 100,000 customers of a credit-card issuer were used; the software used was SPSS, a modern system for data manipulation, statistical analysis and graphical presentation. In the research, we identify through a score the risk of each customer abandoning the product.
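The two model-selection criteria can be illustrated on synthetic scores. The score distributions are invented and the computations below are the standard textbook forms of KS and ROC-AUC, not the study's SPSS output.

```python
import random

random.seed(7)

# Synthetic model scores: churners tend to score higher than retained customers
churners = [random.gauss(0.7, 0.15) for _ in range(400)]
retained = [random.gauss(0.4, 0.15) for _ in range(400)]

def ks_statistic(pos, neg):
    """Max distance between the two cumulative score distributions
    (the KS criterion used to compare candidate models)."""
    best = 0.0
    for t in sorted(pos + neg):
        tpr = sum(s >= t for s in pos) / len(pos)
        fpr = sum(s >= t for s in neg) / len(neg)
        best = max(best, abs(tpr - fpr))
    return best

def auc(pos, neg):
    """Area under the ROC curve via pairwise score comparisons."""
    wins = sum(p > q for p in pos for q in neg)
    ties = sum(p == q for p in pos for q in neg)
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

ks = ks_statistic(churners, retained)
roc_auc = auc(churners, retained)
```

A higher KS and a higher AUC both indicate that the fitted logistic-linear score separates future churners from retained customers more sharply, which is how the study ranks its candidate models.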

Relevance:

10.00%

Publisher:

Abstract:

XML has become an important medium for data exchange, and is frequently used as an interface to (i.e., a view of) a relational database. Although much work has been done on querying relational databases through XML views, the problem of updating relational databases through XML views has not received much attention. In this work, we take the first steps towards solving this problem. Using query trees to capture the notions of selection, projection, nesting, grouping, and heterogeneous sets found throughout most XML query languages, we show how XML views expressed as query trees can be mapped to a set of corresponding relational views. Thus, we transform the problem of updating relational databases through XML views into the classical problem of updating relational databases through relational views. We then show how updates on the XML view are mapped to updates on the corresponding relational views. Existing work on updating relational views can then be leveraged to determine whether or not the relational views are updatable with respect to the relational updates and, if so, to translate the updates to the underlying relational database. Since query trees are a formal characterization of view-definition queries, they are not well suited for end users. We therefore investigate how a subset of XQuery can be used as a top-level language, and show how query trees can serve as an intermediate representation of view definitions expressed in this subset.
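The final step, pushing an update on a view down to the base relation, can be illustrated in plain SQL with an INSTEAD OF trigger. The relational view here stands in for one of the views produced from a query tree; the schema is invented, and a trigger is just one concrete mechanism for the translation the paper delegates to classical view-update machinery.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE author(id INTEGER PRIMARY KEY, name TEXT, country TEXT);
INSERT INTO author VALUES (1, 'Ana', 'BR'), (2, 'Bo', 'SE');

-- Relational view standing in for one branch of a query tree
-- (a selection plus a projection over the base table)
CREATE VIEW br_authors AS
  SELECT id, name FROM author WHERE country = 'BR';

-- Translate updates on the view into updates on the base relation
CREATE TRIGGER br_authors_upd INSTEAD OF UPDATE ON br_authors
BEGIN
  UPDATE author SET name = NEW.name WHERE id = OLD.id;
END;
""")

# An update phrased against the view lands on the underlying table
con.execute("UPDATE br_authors SET name = 'Ana Maria' WHERE id = 1")
name = con.execute("SELECT name FROM author WHERE id = 1").fetchone()[0]
```

In the paper's pipeline the update would originate on the XML view, be mapped to the corresponding relational views, and only then be checked for updatability and translated as above.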