969 results for Databases - Duplicate tuples
Abstract:
The study of electricity market operation has gained increasing importance in recent years, as a result of the new challenges produced by electricity market restructuring. This restructuring increased the competitiveness of the market, but also its complexity. The growing complexity and unpredictability of the market's evolution consequently increase the difficulty of decision making. Therefore, the intervening entities are forced to rethink their behaviour and market strategies. Currently, a large amount of information concerning electricity markets is available. These data, covering numerous aspects of electricity market operation, are accessible free of charge and are essential for understanding and suitably modelling electricity markets. This paper proposes a tool that is able to handle, store and dynamically update such data. The proposed tool is expected to be of great importance in improving the comprehension of electricity markets and the interactions among the involved entities.
Abstract:
In many countries the use of renewable energy is increasing due to the introduction of new energy and environmental policies. Thus, the focus on the efficient integration of renewable energy into electric power systems is becoming extremely important. Several European countries have already achieved high penetration of wind-based electricity generation and are gradually evolving towards intensive use of this generation technology. The introduction of wind-based generation into power systems poses new challenges for power system operators, mainly due to the variability and uncertainty in weather conditions and, consequently, in wind-based generation. In order to deal with this uncertainty and to improve power system efficiency, adequate wind forecasting tools must be used. This paper proposes a data-mining-based methodology for very short-term wind forecasting, suitable for dealing with large real databases. The paper includes a case study based on a real database covering the last three years of wind speed data, and presents results for wind speed forecasting at 5-minute intervals.
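The abstract does not detail the paper's methodology; as an illustration only, the following is a minimal sketch of one common data-mining approach to very short-term forecasting: a k-nearest-neighbour search over historical lag windows of 5-minute wind speed samples. The window length, k, and the toy series are assumptions, not the paper's model.

```java
import java.util.*;

// Minimal sketch of very short-term wind speed forecasting (5-minute steps)
// via k-nearest-neighbour matching of lag windows. Window length, k, and the
// sample series are illustrative assumptions.
public class WindForecast {
    static double knnForecast(double[] series, int window, int k) {
        double[] query = Arrays.copyOfRange(series, series.length - window, series.length);
        // Collect (distance, next-value) pairs for every historical window.
        List<double[]> candidates = new ArrayList<>();
        for (int i = 0; i + window < series.length - window; i++) {
            double dist = 0;
            for (int j = 0; j < window; j++) {
                double d = series[i + j] - query[j];
                dist += d * d;
            }
            candidates.add(new double[]{dist, series[i + window]});
        }
        // Average the successors of the k closest historical windows.
        candidates.sort(Comparator.comparingDouble(c -> c[0]));
        double sum = 0;
        for (int i = 0; i < k; i++) sum += candidates.get(i)[1];
        return sum / k;
    }

    public static void main(String[] args) {
        double[] windSpeed = {5.1, 5.3, 5.0, 4.8, 5.2, 5.6, 5.9, 6.1, 5.8, 5.5, 5.7, 6.0};
        System.out.printf("5-minute-ahead forecast: %.2f m/s%n", knnForecast(windSpeed, 3, 2));
    }
}
```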
Abstract:
To determine the precision and agreement of hemoglobin (Hb) measurements in capillary and venous blood samples by the HemoCue® and an automated counter. Hb was determined by both devices in blood samples of 29 pregnant women. The HemoCue® showed low repeatability of duplicate Hb measurements in capillary (CR=0.53 g/dL, CV=13.6%) and venous blood (CR=0.53 g/dL, CV=13.6%). Hb measurements in capillary blood were higher than those in venous blood (12.4 and 11.7 g/dL, respectively; p<0.05). There was high agreement between Hb in capillary blood by the HemoCue® and in venous blood by the counter (ICC=0.86; p<0.01), and also between the diagnoses of anemia by both devices (k=0.81; p<0.01). The HemoCue® seems to be more appropriate for capillary blood and requires training of the measurers.
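The kappa statistic quoted above measures agreement between two binary diagnoses beyond what chance would produce. As a worked illustration only (the 2x2 counts below are hypothetical, not the study's data), a minimal sketch of its computation:

```java
// Minimal sketch: Cohen's kappa for agreement between two binary diagnoses
// (anemia / no anemia). The 2x2 counts below are hypothetical, not the
// study's data.
public class KappaDemo {
    public static void main(String[] args) {
        // rows: HemoCue diagnosis; columns: automated counter diagnosis
        double a = 10, b = 2;  // a: both anemic,   b: HemoCue only
        double c = 1,  d = 16; // c: counter only,  d: both non-anemic
        double n = a + b + c + d;
        double po = (a + d) / n;                                       // observed agreement
        double pe = ((a + b) * (a + c) + (c + d) * (b + d)) / (n * n); // chance agreement
        double kappa = (po - pe) / (1 - pe);
        System.out.printf("observed=%.3f expected=%.3f kappa=%.3f%n", po, pe, kappa);
    }
}
```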
Abstract:
This paper presents a methodology supported by the knowledge discovery in databases (KDD) process, in order to find the failure probability of electrical equipment belonging to a real high-voltage electrical network. Data mining (DM) techniques are used to discover a set of failure probabilities and, therefore, to extract knowledge concerning the unavailability of electrical equipment such as power transformers and high-voltage power lines. The framework includes several steps: the analysis of the real database, data pre-processing, the application of DM algorithms and, finally, the interpretation of the discovered knowledge. To validate the proposed methodology, a case study that includes real databases is used. These data carry heavy uncertainty due to climate conditions; for this reason, fuzzy logic was used to determine the set of failure probabilities of the electrical components in order to re-establish the service. The results reflect an interesting potential of this approach and encourage further research on the topic.
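The abstract does not specify the paper's fuzzy model; as an illustration of the general idea, the following is a minimal sketch that maps an uncertain climate variable through triangular membership functions and combines the rule outputs into a failure-probability estimate. All breakpoints, rule levels and the input value are assumptions.

```java
// Minimal sketch of the general idea of fuzzy-logic failure estimation under
// climate uncertainty: map an uncertain weather variable through triangular
// membership functions and combine rule outputs into a failure probability.
// All breakpoints and failure levels are assumptions, not the paper's model.
public class FuzzyFailureSketch {
    // Triangular membership function with feet at a and c, peak at b.
    static double tri(double x, double a, double b, double c) {
        if (x <= a || x >= c) return 0;
        return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
    }

    public static void main(String[] args) {
        double wind = 75; // km/h, measured near a high-voltage line
        double calm   = tri(wind, -1, 0, 40);
        double strong = tri(wind, 30, 60, 90);
        double storm  = tri(wind, 70, 110, 150);
        // One rule per fuzzy set: each set "votes" for a failure probability level.
        double[] memberships  = {calm, strong, storm};
        double[] failureLevel = {0.01, 0.10, 0.45};
        double num = 0, den = 0; // weighted-average (centroid-style) defuzzification
        for (int i = 0; i < 3; i++) { num += memberships[i] * failureLevel[i]; den += memberships[i]; }
        System.out.printf("estimated failure probability: %.3f%n", num / den);
    }
}
```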
Abstract:
Currently, power systems (PS) already accommodate a substantial penetration of distributed generation (DG) and operate in competitive environments. In the future, as a result of liberalisation and political regulations, PS will have to deal with large-scale integration of DG and other distributed energy resources (DER), such as storage, and provide market agents with the means to ensure flexible and secure operation. This cannot be done with the traditional PS operational tools used today, such as the rather restricted Supervisory Control and Data Acquisition (SCADA) information systems [1]. The trend towards using local generation in the active operation of the power system requires new solutions for the data management system. The relevant standards have been developed separately in recent years, so there is a need to unify them in order to obtain a common and interoperable solution. For distribution operation, the CIM models described in IEC 61968/70 are especially relevant. In Europe, dispersed and renewable energy resources (D&RER) are mostly operated without remote control mechanisms and feed the maximum amount of available power into the grid. To improve network operation performance, the idea of virtual power plants (VPP) will become a reality, and in the future the power generation of D&RER will be scheduled with high accuracy. In order to realise VPP decentralised energy management, communication facilities with standardised interfaces and protocols are needed. IEC 61850 is suitable to serve as a general standard for all communication tasks in power systems [2]. The paper deals with international activities and experiences in the implementation of a new data management and communication concept in the distribution system. The difficulties in coordinating the inconsistent communication and data management standards, developed in parallel, are addressed first. The upcoming unification work, taking into account the growing role of D&RER in the PS, is then presented. It is possible to overcome the lag in current practical experience using new tools for creating and maintaining CIM data and for simulating the IEC 61850 protocol, a prototype of which is presented in the paper. Since the origin and required accuracy of the data depend on their use (e.g. operation or planning), some remarks concerning the definition of the digital interface incorporated in the merging unit concept, from the power utility point of view, are also presented. Finally, required future work is identified.
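To ground the CIM reference, the following is a minimal sketch of what exchanging CIM model data can look like in practice: a tiny RDF/XML fragment containing a cim:PowerTransformer (a real CIM class) is read with the standard Java DOM API. The fragment and the namespace URI are illustrative simplifications (real CIM profiles use versioned namespaces), not the paper's prototype.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;
import org.xml.sax.InputSource;
import java.io.StringReader;

// Minimal sketch: reading a tiny CIM (IEC 61968/70) RDF/XML fragment with the
// standard Java DOM API. Fragment and namespace URI are illustrative only;
// real CIM exchanges use versioned schema namespaces.
public class CimReadSketch {
    public static void main(String[] args) throws Exception {
        String cimNs = "http://iec.ch/TC57/CIM#"; // simplified, assumed namespace
        String cim =
            "<rdf:RDF xmlns:rdf='http://www.w3.org/1999/02/22-rdf-syntax-ns#'" +
            "         xmlns:cim='" + cimNs + "'>" +
            "  <cim:PowerTransformer rdf:ID='T1'>" +
            "    <cim:IdentifiedObject.name>TR-01</cim:IdentifiedObject.name>" +
            "  </cim:PowerTransformer>" +
            "</rdf:RDF>";
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        Document doc = f.newDocumentBuilder().parse(new InputSource(new StringReader(cim)));
        NodeList transformers = doc.getElementsByTagNameNS(cimNs, "PowerTransformer");
        for (int i = 0; i < transformers.getLength(); i++) {
            Element t = (Element) transformers.item(i);
            String name = t.getElementsByTagNameNS(cimNs, "IdentifiedObject.name")
                           .item(0).getTextContent();
            System.out.println("PowerTransformer: " + name);
        }
    }
}
```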
Abstract:
Proteins are biochemical entities consisting of one or more blocks typically folded in a 3D pattern. Each block (a polypeptide) is a single linear sequence of amino acids that are biochemically bonded together. The amino acid sequence in a protein is defined by the sequence of a gene or several genes encoded in the DNA-based genetic code. This genetic code typically uses twenty amino acids, but in certain organisms it can also include two other amino acids. After the amino acids are linked during protein synthesis, each amino acid becomes a residue in the protein, which is then chemically modified, ultimately changing and defining the protein's function. In this study, the authors analyze the amino acid sequence using alignment-free methods, aiming to identify structural patterns in sets of proteins and in the proteome, without any other prior assumptions. The paper starts by analyzing amino acid sequence data by means of histograms of fixed-length amino acid words (tuples). After creating the initial relative frequency histograms, these are transformed and processed in order to generate quantitative results for information extraction and graphical visualization. Selected samples from two reference datasets are used, and the results reveal that the proposed method is able to generate relevant outputs in accordance with current scientific knowledge in domains such as protein sequence/proteome analysis.
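As an illustration of the histogram step described above, the following is a minimal sketch that counts fixed-length amino acid tuples in a sequence and derives their relative frequencies. The tuple length and the toy sequence are assumptions; the paper's subsequent transformation and visualization steps are not reproduced.

```java
import java.util.*;

// Minimal sketch of the first step described above: a relative-frequency
// histogram of fixed-length amino acid words (tuples) in a protein sequence.
// Tuple length and the toy sequence are illustrative assumptions.
public class TupleHistogram {
    static Map<String, Double> relativeFrequencies(String sequence, int tupleLength) {
        Map<String, Integer> counts = new HashMap<>();
        int total = sequence.length() - tupleLength + 1; // number of overlapping tuples
        for (int i = 0; i < total; i++)
            counts.merge(sequence.substring(i, i + tupleLength), 1, Integer::sum);
        Map<String, Double> freqs = new TreeMap<>(); // sorted for readable output
        for (Map.Entry<String, Integer> e : counts.entrySet())
            freqs.put(e.getKey(), e.getValue() / (double) total);
        return freqs;
    }

    public static void main(String[] args) {
        String protein = "MKVLAAGLLAAKVLMKVL"; // toy amino acid sequence
        relativeFrequencies(protein, 2)
            .forEach((tuple, f) -> System.out.printf("%s %.3f%n", tuple, f));
    }
}
```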
Abstract:
The emergence of new business models, namely the establishment of partnerships between organizations, and the chance that companies have of adding existing data on the web, especially in the semantic web, to their own information, have emphasized some problems existing in databases, particularly those related to data quality. Poor data can result in a loss of competitiveness for the organizations holding them, and may even lead to their disappearance, since many of their decision-making processes are based on these data. For this reason, data cleaning is essential. Current approaches to solving these problems are closely tied to database schemas and specific domains. For data cleaning to be usable across different repositories, computer systems must be able to understand these data, i.e., associated semantics are needed. The solution presented in this paper includes the use of ontologies: (i) for the specification of data cleaning operations and (ii) as a way of solving the semantic heterogeneity problems of data stored in different sources. With data cleaning operations defined at a conceptual level, and given mappings between domain ontologies and an ontology derived from a database, the operations may be instantiated and proposed to the expert/specialist to be executed over that database, thus enabling their interoperability.
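The following is a minimal sketch of the idea described above, under stated assumptions: a cleaning operation is defined once against a domain-ontology property, and a concept-to-schema mapping instantiates it for a concrete database before proposing it to the expert. The concept names, the mapping and the proposed operation are all hypothetical, not the paper's system.

```java
import java.util.*;

// Minimal sketch: a cleaning operation defined at the conceptual (ontology)
// level is instantiated for a concrete database via a concept-to-schema
// mapping. Concepts, mapping and operation names are hypothetical.
public class OntologyCleaningSketch {
    public static void main(String[] args) {
        // Operation defined over a domain-ontology property, not a schema.
        String conceptProperty = "Person.emailAddress";
        String operation = "NORMALIZE_CASE";

        // Mapping from domain-ontology properties to this database's schema.
        Map<String, String> mapping = new HashMap<>();
        mapping.put("Person.emailAddress", "clients.email");
        mapping.put("Person.fullName", "clients.name");

        // Instantiate the conceptual operation for the mapped column and
        // propose it to the expert/specialist before execution.
        String[] tableColumn = mapping.get(conceptProperty).split("\\.");
        System.out.printf("Proposed to expert: %s on column %s of table %s%n",
                operation, tableColumn[1], tableColumn[0]);
    }
}
```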
Abstract:
Purpose: Systematic review to identify the factors associated with the quality of life (QOL) of the caregivers of people with aphasia (PWA). Methods: Studies were searched using the Medline, Pubmed, Cochrane Library, CINAHL, PsycINFO and Web of Science databases. Peer-reviewed papers that studied the QOL of PWA's caregivers or the consequences of aphasia on caregivers' lives were included. Findings were extracted from the studies that met the inclusion criteria. Results: No data are available specifically reporting the QOL of PWA's caregivers or its predictors. Nevertheless, it was possible to extract aspects related to QOL from the studies reporting the consequences of aphasia and life changes in PWA's caregivers. Nine studies including PWA's caregivers were found, but only 5 reported data separately on them. Methodological heterogeneity impedes cross-study comparisons, although some considerations can be made. PWA's caregivers reported life changes such as loss of freedom, social isolation, new responsibilities, anxiety, emotional loneliness, and need for support and respite. Conclusions: Changes in social relationships and emotional status, increased burden, and need for support and respite were experienced by PWA's caregivers. Stroke QOL studies need to include PWA's caregivers and report separately on them. Further research is needed in this area in order to determine their QOL predictors and identify which interventions and referrals better suit their needs.
Abstract:
Master's dissertation presented to the Instituto Superior de Contabilidade e Administração do Porto to obtain the degree of Master in Digital Marketing, under the supervision of Mestre António da Silva Vieira.
Abstract:
Editors of scientific journals need to be conversant with the mechanisms by which scientific misconduct is amplified by publication practices. This paper provides definitions, ways to document the extent of the problem, and examples of editorial attempts to counter fraud. Fabrication, falsification, duplication, ghost authorship, gift authorship, lack of ethics approval, non-disclosure, 'salami' publication, conflicts of interest, auto-citation, duplicate submission, duplicate publications, and plagiarism are common problems. Editorial misconduct includes failure to observe due process, undue delay in reaching decisions and communicating these to authors, inappropriate review procedures, and confounding a journal's content with its advertising or promotional potential. Editors also can be admonished by their peers for failure to investigate suspected misconduct, failure to retract when indicated, and failure to abide voluntarily by the six main sources of relevant international guidelines on research, its reporting and editorial practice. Editors are in a good position to promulgate reasonable standards of practice, and can start by using consensus guidelines on publication ethics to state explicitly how their journals function. Reviewers, editors, authors and readers all then have a better chance to understand, and abide by, the rules of publishing.
Abstract:
The objective of this paper is to review and discuss the literature about volunteers' motivations to donate their time to NGOs (Non-Governmental Organisations). According to Parboteeah, Cullen & Lim (2004), management research has not paid much attention to voluntarism; however, voluntarism is a substantial part of productive work in many societies. Wilson & Pimm (1996) show that in Great Britain about 39% of the adult population has been involved in some volunteer activity for some period of time. In the U.S.A. this figure reaches 50% (Wilson & Pimm, 1996). Considering the benefits that voluntarism can bring to an organisation, we understand that more attention must be devoted to this phenomenon. The better an organisation knows its volunteers, the better it will be able to meet the needs and expectations of these individuals. We present a literature review that illustrates and compares the different motivations associated with volunteer work. The paper includes a search of bibliographical databases covering specialised journals. The search used the keywords "motivations" and "voluntarism" (in the heading and text body) and covered all issues between 2000 and 2007. We identify the existence of recurring motivations (Holmberg & Söderlung, 2005; Prouteau & Wolff, 2008; Soupourmas & Ironmonger, 2001; Yavas & Riecken, 1997), which allows the establishment of a typology of volunteers' motivations based on four categories: altruism, social needs, self-esteem, and learning and self-development. Finally, we identify three main gaps in the literature that justify further research. First, there is no research focusing on the differences between motivations related to volunteers' "Attraction" versus their "Retention" in NGOs. Second, the great majority of studies rely on the North American (USA and Canada) and Australian contexts, which calls for further research in European countries. Third, the majority of NGOs researched are related to sport, art or the environment, and it would be interesting to explore the relationship between motivation and NGO type. Answers to these questions may be of great interest for NGO management, in particular with regard to volunteer attraction and retention.
Abstract:
This paper reviews and discusses the literature on volunteers' motivations to donate their time to NGOs. The better an organisation knows its volunteers, the better it will be able to meet the needs and expectations of these individuals. Therefore, understanding the motivations that may lead an individual to donate their time to a given organisation is relevant to NGO management. First, the paper discusses the state of the art of formal voluntarism and the motivations of non-managing volunteers. A search of bibliographical databases is presented, including journals specialised in voluntarism research. The paper then presents and compares the different types of motivations associated with volunteer work and proposes a typology that groups volunteers' motivations into four types: altruism, belonging, ego and social recognition, and learning and development. Finally, an analysis is carried out that points to three gaps in the literature on volunteers' motivations which justify further research: (i) the omission of differences between motivations related to the "Attraction" versus the "Retention" of volunteers; (ii) the focus of research on the North American and Australian contexts; and (iii) the absence of comparative analyses relating motivations to types of NGOs.
Abstract:
Over time, the XML markup language has acquired considerable importance in application development, standards definition and the representation of large volumes of data, such as databases. Today, processing XML documents in a short period of time is a critical activity in a large range of applications, which makes it essential to choose the most appropriate mechanism to parse XML documents quickly and efficiently. When using a programming language for XML processing, such as Java, it becomes necessary to use effective mechanisms, e.g. APIs, which allow large documents to be read and processed appropriately. This paper presents a performance study of the main existing Java APIs that deal with XML documents, in order to identify the most suitable one for processing large XML files.
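As an illustration of the kind of comparison such a study involves, the following is a minimal sketch that parses the same document with two of the standard Java XML APIs (tree-based DOM and streaming SAX) and times both. The generated document and the element-counting workload are assumptions; the paper's actual benchmark is not reproduced.

```java
import javax.xml.parsers.*;
import org.w3c.dom.Document;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;
import java.io.StringReader;

// Minimal sketch: parse the same document with two standard Java XML APIs
// (DOM and SAX) and time them. Document and workload are assumptions.
public class XmlApiComparison {
    public static void main(String[] args) throws Exception {
        StringBuilder sb = new StringBuilder("<items>");
        for (int i = 0; i < 100_000; i++) sb.append("<item id='").append(i).append("'/>");
        String xml = sb.append("</items>").toString();

        long t0 = System.nanoTime(); // DOM: builds the whole tree in memory
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        long domMs = (System.nanoTime() - t0) / 1_000_000;

        final int[] count = {0};
        t0 = System.nanoTime(); // SAX: streaming callbacks, constant memory
        SAXParserFactory.newInstance().newSAXParser().parse(
                new InputSource(new StringReader(xml)),
                new DefaultHandler() {
                    @Override public void startElement(String uri, String local,
                            String qName, Attributes atts) { count[0]++; }
                });
        long saxMs = (System.nanoTime() - t0) / 1_000_000;

        System.out.printf("DOM: %d ms (root=%s), SAX: %d ms (%d elements)%n",
                domMs, doc.getDocumentElement().getTagName(), saxMs, count[0]);
    }
}
```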
Abstract:
Master's dissertation in Corporate Finance.