872 results for Systems information
Abstract:
In the recent past, hardly anyone could have predicted this course of GIS development: GIS is moving from the desktop to the cloud. Web 2.0 enabled people to feed data into the web, and these data are increasingly geolocated. The resulting volumes form what is called "Big Data", which scientists still do not fully know how to handle; various Data Mining tools are used to extract useful information from it. In our study, we deal with one part of these data: User Generated Geographic Content (UGGC). The Panoramio initiative allows people to upload photos and describe them with tags. These photos are geolocated, meaning they have an exact location on the Earth's surface according to a certain spatial reference system. Using Data Mining tools, we try to answer whether land use information can be extracted from Panoramio photo tags, and to what extent this information is accurate. Finally, we compared different Data Mining methods to determine which performs best on this kind of data, namely text. Our answers are encouraging: with more than 70% accuracy, we showed that extracting land use information is possible to some extent, and we found the Memory Based Reasoning (MBR) method to be the most suitable for this kind of data in all cases.
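Memory Based Reasoning is, in essence, k-nearest-neighbour classification over stored examples. A minimal sketch of the idea for tag-based land use prediction (the sample data, function names, and the choice of Jaccard similarity are illustrative assumptions, not the study's actual pipeline):

```python
from collections import Counter

def jaccard(a, b):
    """Jaccard similarity between two tag sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def mbr_classify(query_tags, labeled_photos, k=3):
    """Predict a land-use class for a photo from its k most similar
    tagged neighbours (majority vote)."""
    ranked = sorted(labeled_photos,
                    key=lambda p: jaccard(query_tags, p["tags"]),
                    reverse=True)
    votes = Counter(p["land_use"] for p in ranked[:k])
    return votes.most_common(1)[0][0]

photos = [
    {"tags": ["beach", "sand", "sea"],      "land_use": "coastal"},
    {"tags": ["harbour", "boats", "sea"],   "land_use": "coastal"},
    {"tags": ["office", "tower", "street"], "land_use": "urban"},
    {"tags": ["park", "trees", "street"],   "land_use": "urban"},
]
print(mbr_classify(["sea", "sand", "sunset"], photos, k=3))  # → coastal
```

A real pipeline would, of course, weight tags (e.g. by TF-IDF) rather than treat them as plain sets.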
Abstract:
This paper focuses on a PV system linked to the electric grid by power electronic converters, on the identification of the five-parameter model for photovoltaic systems, and on the assessment of the shading effect. Normally, the technical information for photovoltaic panels is too restricted to identify the five parameters. An undemanding heuristic method is used to find them, requiring only the open circuit, maximum power, and short circuit data. The I–V and P–V curves for monocrystalline, polycrystalline and amorphous photovoltaic systems are computed from the identified parameters and validated by comparison with experimental ones. The I–V and P–V curves under the effect of partial shading are also obtained from those parameters. The converter model emulates the association of a DC–DC boost converter with a two-level power inverter in order to follow the performance of a commercial inverter employed on an experimental test system.
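The five-parameter (single-diode) model mentioned above gives the panel current only implicitly. A rough numerical sketch of evaluating an I–V point from the five parameters (the parameter values, fixed-point solver, and 60-cell assumption are illustrative, not the panels identified in the paper):

```python
import math

def pv_current(V, Iph, I0, Rs, Rsh, n, Ns=60, T=298.15):
    """Solve the implicit single-diode equation
        I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh
    by fixed-point iteration (converges for small Rs)."""
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = k * T / q                      # thermal voltage of one cell
    I = Iph                             # initial guess: photogenerated current
    for _ in range(200):
        I = (Iph
             - I0 * (math.exp((V + I * Rs) / (n * Ns * Vt)) - 1)
             - (V + I * Rs) / Rsh)
    return I

# Illustrative parameters: Iph=8 A, I0=1e-10 A, Rs=5 mΩ, Rsh=200 Ω, n=1.2
I_sc = pv_current(0.0, 8.0, 1e-10, 0.005, 200, 1.2)   # ≈ Iph at short circuit
```

Sweeping V and plotting (V, I) and (V, V*I) reproduces the I–V and P–V curves the abstract refers to.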
Abstract:
The present paper is a personal reflection on a work project carried out to promote exports from Portugal to Germany in the IT area, taking into consideration the deliverables required by the clients CCILA and Anetie. The project outcome addresses the fact that most Portuguese market players are at a disadvantage in size and rarely coordinate activities with each other, which hinders them from exporting successfully on a broad scale. To bring together Portuguese delivery potential and German market demand, expert interviews were conducted. Based on the findings, a concept was developed to overcome the domestic collaboration issues in order to strengthen national exports in the identified sector: embedded systems implementation services for machinery and equipment companies.
Abstract:
Exosomes are small membrane vesicles secreted by most cell types, either normal or malignant, and are found in most body fluids such as saliva, plasma and breast milk. In the past decade, interest in these vesicles has been growing since it was found that, besides beneficial functions such as the removal of cellular debris and unnecessary proteins during the cell maturation process, they can also interact with other cells and transfer information between them, thus helping diseases like cancer to progress. The present work intended to use gold nanoparticles as vehicles for gene silencing in an attempt to reduce tumor-derived exosome secretion, regulated by the Rab27a protein, and also aimed to compare exosome secretion between two breast cell lines, MCF7 and MDA. Changes in RAB27A gene expression were measured by Real-time Quantitative PCR, which revealed a decrease in RAB27A expression, as expected. Exosomes were isolated and purified by two different methods, ultracentrifugation and the commercial kit ExoQuick™ Solution, and further characterized by Western Blot analysis. ExoQuick™ Solution proved to be the more efficient method for exosome isolation, and MDA cells were shown to secrete more exosomes. Furthermore, the isolated MCF7-derived exosomes were incubated with a normal bronchial/tracheal epithelial cell line (BTEC) in an additional assay aimed at observing the uptake of exosomes by other cells and the exosomes' capability of promoting cell-cell communication. This observation was based on alterations in the expression levels of the c-Myc and miR-21 genes; the fact that both show increased expression in BTEC cells incubated with tumor-derived exosomes, compared to control cells (without incubation with the exosomes), led us to conclude that exosome uptake and the exchange of information between the exosomes and the normal cells did occur.
Abstract:
Driven by internet growth through the semantic web, together with improvements in communication speed and the fast development of storage device capacity, the volume of data and information rises considerably every day. Because of this, in the last few years there has been a growing interest in structures for formal representation with suitable characteristics, such as the possibility to organize data and information, as well as the reuse of their contents for the generation of new knowledge. Controlled vocabularies, specifically ontologies, stand out as one such representation structure with high potential: they allow not only data representation but also the reuse of such data for knowledge extraction, coupled with subsequent storage through formalisms that are not overly complex. However, to ensure that the knowledge in an ontology is always up to date, ontologies need maintenance. Ontology Learning is the area that studies the details of updating and maintaining ontologies. It is worth noting that the relevant literature already presents first results on automatic maintenance of ontologies, but still at a very early stage; human-based processes are still the current way to update and maintain an ontology, which makes this a cumbersome task. The generation of new knowledge for ontology growth can be based on Data Mining techniques, an area that studies techniques for data processing, pattern discovery and knowledge extraction in IT systems. This work proposes a novel semi-automatic method for knowledge extraction from unstructured data sources, using Data Mining techniques, namely pattern discovery, focused on improving the precision of the concepts and their semantic relations present in an ontology. In order to verify the applicability of the proposed method, a proof of concept was developed and its results, applied to the building and construction sector, are presented.
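A common pattern discovery primitive for this kind of semi-automatic ontology enrichment is co-occurrence scoring of candidate terms. A small sketch using pointwise mutual information over sentence co-occurrence (the PMI choice, thresholds, and sample terms are assumptions for illustration, not the method actually proposed in the work):

```python
import math
from collections import Counter
from itertools import combinations

def candidate_relations(sentences, min_pmi=0.5):
    """Score term pairs by pointwise mutual information (PMI) over
    sentence co-occurrence; high-scoring pairs are suggested to a
    human curator as candidate semantic relations for the ontology."""
    term_counts, pair_counts = Counter(), Counter()
    for sent in sentences:
        terms = set(sent)
        term_counts.update(terms)
        pair_counts.update(frozenset(p) for p in combinations(sorted(terms), 2))
    n = len(sentences)
    scored = {}
    for pair, c in pair_counts.items():
        a, b = tuple(pair)
        pmi = math.log((c / n) / ((term_counts[a] / n) * (term_counts[b] / n)))
        if pmi >= min_pmi:
            scored[pair] = pmi
    return scored

# Toy tokenized sentences from a construction-domain corpus
sents = [
    ["mortar", "cement"],
    ["mortar", "cement", "sand"],
    ["window", "glass"],
    ["window", "glass", "frame"],
    ["brick", "wall"],
]
rel = candidate_relations(sents)
```

The semi-automatic part is precisely that `rel` is a ranked suggestion list: a curator accepts or rejects each pair before it enters the ontology.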
Abstract:
Information systems are widespread and used by anyone with computing devices, as well as by corporations and governments. It is often the case that security leaks are introduced during the development of an application. The reasons for these security bugs are multiple, but among them one can easily identify that it is very hard to define and enforce relevant security policies in modern software. This is because modern applications often rely on container sharing and multi-tenancy where, for instance, data can be stored in the same physical space but is logically mapped into different security compartments or data structures. In turn, these security compartments, into which data is classified by security policies, can also be dynamic and depend on runtime data. In this thesis we introduce and develop the novel notion of dependent information flow types, and focus on the problem of ensuring data confidentiality in data-centric software. Dependent information flow types fit within the standard framework of dependent type theory but, unlike usual dependent types, crucially allow the security level of a type, rather than just the structural data type itself, to depend on runtime values. Our dependent function and dependent sum information flow types provide a direct, natural and elegant way to express and enforce fine-grained security policies on programs, namely programs that manipulate structured data types in which the security level of a structure field may depend on values dynamically stored in other fields. The main contribution of this work is an efficient analysis that allows programmers to verify, during the development phase, whether programs have information leaks, that is, whether programs protect the confidentiality of the information they manipulate. We also implemented a prototype typechecker that can be found at http://ctp.di.fct.unl.pt/DIFTprototype/.
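The thesis develops this as a static type discipline checked at development time. Purely as a loose runtime analogue (all names, the three-level lattice, and the role/salary example are invented here for illustration), the key idea is a record in which the confidentiality label of one field is computed from another field's value:

```python
# Toy security lattice: higher number = more confidential.
LEVELS = {"public": 0, "user": 1, "admin": 2}

def salary_level(record):
    """The confidentiality level of the 'salary' field depends on a
    runtime value stored in another field ('role') of the same record."""
    return "admin" if record["role"] == "manager" else "user"

def read_salary(record, clearance):
    """Permit the read only when the reader's clearance dominates the
    field's (value-dependent) security level."""
    if LEVELS[clearance] < LEVELS[salary_level(record)]:
        raise PermissionError("information-flow violation")
    return record["salary"]

alice = {"role": "manager", "salary": 90000}
bob   = {"role": "staff",   "salary": 40000}
print(read_salary(bob, "user"))   # → 40000: staff salaries visible at 'user'
```

A dependent information flow type system rules out the failing read statically, with no runtime check, which is what the efficient analysis above provides.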
Abstract:
Search now goes beyond looking for factual information; people wish to search for the opinions of others to help them in their own decision-making. Sentiment or opinion expressions are used by users to express their opinions and embody important pieces of information, particularly in online commerce. The main problem addressed by the present dissertation is how to model text to find meaningful words that express a sentiment. In this context, I investigate the viability of automatically generating a sentiment lexicon for opinion retrieval and sentiment classification applications. For this research objective we propose to capture sentiment words derived from online users' reviews. In this approach, we tackle a major challenge in sentiment analysis: the detection of words that express subjective preference and of domain-specific sentiment words such as jargon. To this end we present a fully generative method that automatically learns a domain-specific lexicon and is fully independent of external sources. Sentiment lexicons can be applied in a broad set of applications; however, popular recommendation algorithms have somehow been disconnected from sentiment analysis. Therefore, we present a study that explores the viability of applying sentiment analysis techniques to infer ratings in a recommendation algorithm. Furthermore, entities' reputations are intrinsically associated with sentiment words that have a positive or negative relation with those entities; hence, a study is provided that examines the viability of using a domain-specific lexicon to compute entities' reputations. Finally, a recommendation algorithm is improved with the use of sentiment-based ratings and entities' reputations.
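The simplest way to learn a domain-specific lexicon directly from rated reviews is smoothed log-odds of word occurrence in positive versus negative reviews. A sketch of that flavour of learner (the toy reviews, the ≥4-star positivity cutoff, and add-one smoothing are assumptions, not the dissertation's actual generative model):

```python
import math
from collections import Counter

def learn_lexicon(reviews, smoothing=1.0):
    """Score each word by the smoothed log-odds of appearing in
    positive vs negative reviews; the sign gives its polarity and the
    magnitude its strength. Needs no external lexicon."""
    pos, neg = Counter(), Counter()
    for text, rating in reviews:
        (pos if rating >= 4 else neg).update(text.lower().split())
    vocab = set(pos) | set(neg)
    n_pos, n_neg, V = sum(pos.values()), sum(neg.values()), len(vocab)
    return {w: math.log((pos[w] + smoothing) / (n_pos + smoothing * V))
              - math.log((neg[w] + smoothing) / (n_neg + smoothing * V))
            for w in vocab}

reviews = [
    ("great battery excellent screen", 5),
    ("excellent value great camera", 4),
    ("terrible battery awful support", 1),
    ("awful screen terrible lag", 2),
]
lex = learn_lexicon(reviews)
```

Domain jargon picks up polarity automatically here ("lag" scores negative), which is exactly the property general-purpose lexicons lack.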
Abstract:
In the current global and competitive business context, it is essential that enterprises adapt their knowledge resources in order to smoothly interact and collaborate with others. However, due to the multiculturalism of people and enterprises, there are different representation views of business processes or products, even within the same domain. Consequently, one of the main problems found in the interoperability between enterprise systems and applications is related to semantics. The integration and sharing of enterprise knowledge to build a common lexicon plays an important role in the semantic adaptability of information systems. The author proposes a framework to support the development of systems to manage dynamic semantic adaptability resolution. It allows different organisations to participate in building a common knowledge base, while at the same time maintaining their own views of the domain, without compromising the integration between them. Thus, systems are able to become aware of new knowledge, and have the capacity to learn from it and to manage their semantic interoperability in a dynamic and adaptable way. The author endorses the vision that, in the near future, the semantic adaptability skills of enterprise systems will be the booster for enterprise collaboration and the emergence of new business opportunities.
Abstract:
Information security is concerned with the protection of information, which can be stored, processed or transmitted within the critical information systems of organizations, against loss of confidentiality, integrity or availability. Protection measures to prevent these problems result from the implementation of controls in several dimensions: technical, administrative or physical. A vital objective for military organizations is to ensure superiority in contexts of information warfare and competitive intelligence. Therefore, the problem of information security in military organizations has been a topic of intensive work at both national and transnational levels, and extensive conceptual and standardization work is being produced. A current effort is thus to develop automated decision support systems that assist military decision makers, at different levels in the command chain, in providing suitable control measures that can effectively deal with potential attacks and, at the same time, prevent, detect and contain vulnerabilities targeted at their information systems. The concepts and processes of the Case-Based Reasoning (CBR) methodology strikingly resemble classical military processes and doctrine, in particular the analysis of "lessons learned" and the definition of "modes of action". Therefore, the present paper addresses the modeling and design of a CBR system with two key objectives: to support an effective response in the context of information security for military organizations, and to allow for scenario planning and analysis for training and auditing processes.
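The first step of the classic CBR cycle (retrieve, reuse, revise, retain) is retrieving the stored case most similar to a new incident. A minimal sketch of that retrieve step (the case base, feature names, and weighted-overlap similarity are illustrative assumptions, not the paper's actual design):

```python
def retrieve(case_base, incident, weights):
    """CBR 'retrieve' step: return the stored case most similar to the
    new incident, using a weighted overlap of categorical features."""
    def sim(case):
        return sum(w for f, w in weights.items()
                   if case["features"].get(f) == incident.get(f))
    return max(case_base, key=sim)

# Toy case base of past incidents and the control measures applied
cases = [
    {"features": {"vector": "phishing", "target": "email"},
     "action": "quarantine mailbox, reset credentials"},
    {"features": {"vector": "malware", "target": "workstation"},
     "action": "isolate host, run forensic image"},
]
weights = {"vector": 2.0, "target": 1.0}
best = retrieve(cases, {"vector": "phishing", "target": "email"}, weights)
print(best["action"])  # → quarantine mailbox, reset credentials
```

The retained "lessons learned" grow the case base over time, which is the analogy to military doctrine the abstract draws.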
Abstract:
Biometric systems are increasingly being used as a means of authentication to provide system security in modern technologies. The performance of a biometric system depends on the accuracy, the processing speed, the template size, and the time necessary for enrollment. While much research has focused on the first three factors, enrollment time has not received as much attention. In this work, we present the findings of our research on users' behavior when enrolling in a biometric system. Specifically, we collected information about users' availability for enrollment with respect to hand recognition systems (e.g., hand geometry, palm geometry or any other requiring positioning the hand on an optical scanner). A sample of 19 participants, chosen randomly regardless of age, gender, profession and nationality, served as test subjects in an experiment to study the patience of users enrolling in a biometric hand recognition system.
Abstract:
Maturity models are adopted to minimise our perception of complexity regarding a truly complex phenomenon. In this sense, maturity models are tools that enable the assessment of the most relevant variables that impact the outputs of a specific system. Ideally, a maturity model should provide information concerning the qualitative and quantitative relationships between variables and how they affect the latent variable, that is, the maturity level. Management systems (MSs) are implemented worldwide and by an increasing number of companies. Integrated management systems (IMSs) consider the implementation of one or several MSs, usually coexisting with the quality management subsystem (QMS). This chapter reports a model, based on two components, that enables the assessment of IMS maturity, considering the key process agents (KPAs) identified through a systematic literature review and the results collected from two surveys.
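The relationship between observed variables and the latent maturity level can be pictured as a weighted aggregation of KPA scores mapped onto discrete levels. A toy sketch (the KPA names, weights, and thresholds are invented for illustration and are not the chapter's two-component model):

```python
def maturity_level(kpa_scores, weights, thresholds=(1.5, 2.5, 3.5, 4.5)):
    """Aggregate weighted KPA scores (each on a 1-5 scale) into a
    single latent maturity level (1-5) via fixed thresholds."""
    total = sum(weights.values())
    score = sum(kpa_scores[k] * w for k, w in weights.items()) / total
    level = 1 + sum(score >= t for t in thresholds)
    return score, level

scores  = {"leadership": 4, "integration": 3, "audit": 5}
weights = {"leadership": 0.5, "integration": 0.3, "audit": 0.2}
score, level = maturity_level(scores, weights)
```

In practice the weights themselves would come from the survey results, not be fixed a priori.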
Abstract:
Doctoral Thesis - Programa Doutoral em Engenharia Industrial e Sistemas (PDEIS)
Abstract:
When interacting with each other, people often spontaneously synchronize their movements, e.g. during pendulum swinging, chair rocking [5], walking [4][7], and when executing periodic forearm movements [3]. Although the spatiotemporal information that establishes the coupling leading to synchronization might be provided by several perceptual systems, the systematic study of the contribution of different sensory modalities has been widely neglected. Considering a) differences in sensory dominance on the spatial and temporal dimensions [5], b) different cue combination and integration strategies [1][2], and c) that sensory information might provide different aspects of the same event, synchronization should be moderated by the type of sensory modality. Here, 9 naïve participants placed a bottle periodically between two target zones, 40 times, in 12 conditions, while sitting in front of a confederate executing the same task. The participant could a) see and hear, b) see, or c) hear the confederate, or d) audiovisual information about the movements of the confederate was absent. The couple started in 3 different relative positions (i.e., in-phase, anti-phase, out of phase). A retro-reflective marker was attached to the top of each bottle, and bottle displacement was captured by a motion capture system. We analyzed the variability of the continuous relative phase, which reflects the degree of synchronization. Results indicate the emergence of spontaneous synchronization, an increase with bimodal information, and an influence of the initial phase relation on the particular synchronization pattern. The results have theoretical implications for studying cue combination in interpersonal coordination and are consistent with coupled oscillator models.
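Variability of the continuous relative phase is typically computed from the instantaneous phases of the two movement signals. A sketch using an FFT-based analytic signal and circular variance as the variability measure (the synthetic sinusoidal "movements" and the choice of circular variance are illustrative assumptions, not the study's exact processing):

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (a numpy-only Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def relative_phase_variability(x1, x2):
    """Circular variance (1 - mean resultant length) of the continuous
    relative phase: 0 = perfectly locked, near 1 = no coupling."""
    dphi = np.angle(analytic(x1)) - np.angle(analytic(x2))
    return 1.0 - np.abs(np.mean(np.exp(1j * dphi)))

t = np.linspace(0, 10, 1000, endpoint=False)
locked   = relative_phase_variability(np.sin(2*np.pi*t),
                                      np.sin(2*np.pi*t + 0.3))   # constant lag
drifting = relative_phase_variability(np.sin(2*np.pi*t),
                                      np.sin(2*np.pi*1.3*t))     # phase drift
```

A constant phase lag (as in stable anti-phase coordination) still yields low variability; only drift between the partners drives the measure up.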
Abstract:
To better understand the dynamic behavior of metabolic networks in a wide variety of conditions, the field of Systems Biology has increased its interest in the use of kinetic models. The databases available these days do not contain enough data on this topic. Given that a significant part of the relevant information for the development of such models is still spread across the literature, it becomes essential to develop specific and powerful text mining tools to collect these data. In this context, this work has as its main objective the development of a text mining tool to extract, from the scientific literature, kinetic parameters, their respective values, and their relations with enzymes and metabolites. The proposed approach integrates the development of a novel plug-in on top of the text mining framework @Note2. Finally, the pipeline developed was validated with a case study on Kluyveromyces lactis, spanning the analysis and results of 20 full-text documents.
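The core extraction step of such a pipeline can be sketched as pattern matching over free text, pulling (parameter, value, unit) triples that would then be linked to enzymes and metabolites. The pattern, the recognised units, and the example sentence below are illustrative assumptions, not the plug-in's actual rules:

```python
import re

# Hypothetical pattern: a parameter name, some filler words within the
# same clause, then a numeric value and a unit, e.g. "Km ... 2.5 mM".
PATTERN = re.compile(
    r"\b(Km|kcat|Vmax|Ki)\b[^.;]*?\b(\d+(?:\.\d+)?)\s*(mM|µM|uM|s-1|U/mg)")

def extract_kinetics(text):
    """Pull (parameter, value, unit) triples from free text."""
    return [(p, float(v), u) for p, v, u in PATTERN.findall(text)]

sent = ("The Km of beta-galactosidase for lactose was 2.5 mM, "
        "while the Vmax was 120 U/mg.")
print(extract_kinetics(sent))
# → [('Km', 2.5, 'mM'), ('Vmax', 120.0, 'U/mg')]
```

Real literature needs far more than a regex (ranges, negations, entity normalization), which is why the work builds on a full framework like @Note2 rather than patterns alone.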
Abstract:
Schizophrenia denotes a long-lasting state of mental uncertainty that may break the relation among behavior, thought, and emotion; that is, it may lead to unreliable perception, inappropriate actions and feelings, and a sense of mental fragmentation. Indeed, its diagnosis is made over a long period of time: continuous signs of the disturbance must persist for at least 6 (six) months. Once detected, the psychiatric diagnosis is made through a clinical interview and a series of psychic tests, aimed mainly at ruling out other mental states or diseases. Undeniably, the main problem in identifying schizophrenia is the difficulty of distinguishing its symptoms from those associated with other disorders or conditions. Therefore, this work focuses on the development of a diagnostic support system, in terms of its knowledge representation and reasoning procedures, based on a blend of Logic Programming and Artificial Neural Network approaches to computing, taking advantage of a novel approach to knowledge representation and reasoning that aims to solve the problems associated with handling (i.e., representing and reasoning over) defective information.