985 results for "Link information"
Abstract:
Accurate and reliable rain rate estimates are important for various hydrometeorological applications. Consequently, rain sensors of different types have been deployed in many regions. In this work, measurements from different instruments, namely rain gauges, weather radar, and microwave links, are combined for the first time to estimate the spatial distribution and intensity of rainfall with greater accuracy. The objective is to retrieve the rain rate that is consistent with all of these measurements while incorporating the uncertainty associated with the different sources of information. Assuming the problem is not strongly nonlinear, a variational approach is implemented and the Gauss–Newton method is used to minimize a cost function containing proper error estimates from all sensors. Furthermore, the method can be flexibly adapted to additional data sources. The proposed approach is tested using data from 14 rain gauges and 14 operational microwave links located in the Zürich area (Switzerland) to correct the prior rain rate provided by the operational radar rain product of the Swiss meteorological service (MeteoSwiss). A cross-validation approach demonstrates the improvement of the rain rate estimates when rain gauge and microwave link information is assimilated.
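The abstract does not spell out the formulation, but a variational cost function of the kind described, with the radar-based prior as the background term and the gauge and link observations as additional terms, typically takes the following standard form (a generic sketch; the notation and operators here are assumptions, not the paper's):

\[
J(\mathbf{x}) \;=\; \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
\;+\; \tfrac{1}{2}\sum_{k}\bigl(\mathbf{y}_k - H_k(\mathbf{x})\bigr)^{\mathsf T}\mathbf{R}_k^{-1}\bigl(\mathbf{y}_k - H_k(\mathbf{x})\bigr),
\]

where \(\mathbf{x}\) is the rain rate field, \(\mathbf{x}_b\) the radar prior with error covariance \(\mathbf{B}\), and \(\mathbf{y}_k\), \(H_k\), \(\mathbf{R}_k\) the observations, observation operators, and error covariances of the rain gauges and microwave links. A Gauss–Newton iteration on this cost function then reads

\[
\mathbf{x}^{(i+1)} = \mathbf{x}^{(i)} + \Bigl(\mathbf{B}^{-1} + \textstyle\sum_k \mathbf{H}_k^{\mathsf T}\mathbf{R}_k^{-1}\mathbf{H}_k\Bigr)^{-1}
\Bigl(\mathbf{B}^{-1}(\mathbf{x}_b-\mathbf{x}^{(i)}) + \textstyle\sum_k \mathbf{H}_k^{\mathsf T}\mathbf{R}_k^{-1}\bigl(\mathbf{y}_k - H_k(\mathbf{x}^{(i)})\bigr)\Bigr),
\]

with \(\mathbf{H}_k\) the Jacobian of \(H_k\) evaluated at \(\mathbf{x}^{(i)}\).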
Abstract:
The performance of high-speed network communications frequently rests on the distribution of data streams. In this paper, a dynamic data-stream balancing architecture based on link information is first introduced and discussed. Algorithms are then proposed for rapidly acquiring the nodes and links traversed by a path between any two source-destination nodes, together with a dynamic data-stream distribution plan. Related topics such as data-fragment handling and fair service are also studied and discussed. In addition, the performance and efficiency of the proposed algorithms, especially with respect to fair service and convergence, are evaluated through a demonstration based on the rate of bandwidth utilization. It is hoped that the discussion presented here can help application developers select an effective strategy for planning data-stream distribution.
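As a rough illustration of what a link-information-driven distribution plan can look like (a minimal sketch under assumed data structures, not the algorithms proposed in the paper), the following Python snippet splits a stream across candidate paths in proportion to each path's residual bandwidth:

# Illustrative sketch: split a data stream across candidate paths in proportion
# to each path's residual bandwidth, where a path's residual bandwidth is
# limited by its most utilized link.

def residual_bandwidth(path, links):
    """Bottleneck spare capacity on a path; 'links' maps link id -> (capacity, used)."""
    return min(cap - used for cap, used in (links[l] for l in path))

def plan_distribution(stream_rate, paths, links):
    """Return {path index: assigned rate}, proportional to residual path bandwidth."""
    residuals = [max(residual_bandwidth(p, links), 0.0) for p in paths]
    total = sum(residuals)
    if total == 0:
        raise ValueError("no spare capacity on any candidate path")
    return {i: stream_rate * r / total for i, r in enumerate(residuals)}

# Example: two paths sharing link 'a'; more traffic goes to the less-utilized path.
links = {"a": (100.0, 40.0), "b": (100.0, 10.0), "c": (50.0, 30.0)}
paths = [["a", "b"], ["a", "c"]]
print(plan_distribution(30.0, paths, links))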
Abstract:
Suitable computational tools make it possible to build applications that link information to its physical location and represent it in visual, interactive schemes, effectively harnessing the power of visual communication. This helps the user synthesize information in a simple and efficient way. Such applications fall under the definition of a Geographic Information System (GIS). GIS comprise many concepts and tools whose main purpose is to collect, store, view and process spatial data, obtaining the information needed for decision making. Within this context, this paper presents the conception and implementation of a control system for urban forestry through the integration of free and open-source software. The conception arose from the needs of an environmental project developed by the Agriculture House of the city of Regente Feijó, whose main objectives are the cataloging and management of the municipality's urban afforestation. Given this diversity of concepts, the challenge in building the system is the integration of the platforms involved in all stages: collection and storage of data, including maps and other spatial information; operations on the stored information; obtaining results; and their graphical visualization. After implementation, the system gave its users an improved capacity for perception in information analysis and facilitated the decision-making process.
Abstract:
Over the last few decades, the ever-increasing output of scientific publications has led to new challenges in keeping up to date with the literature. In the biomedical area, this growth has introduced new requirements for professionals, e.g., physicians, who have to locate the exact papers they need for their clinical and research work among a huge number of publications. Against this backdrop, novel information retrieval methods are ever more necessary. While web search engines are widespread in many areas, facilitating access to all kinds of information, additional tools are required to automatically link information retrieved from these engines to specific biomedical applications. In the case of clinical environments, this also means considering aspects such as patient data security and confidentiality or structured contents, e.g., electronic health records (EHRs). In this scenario, we have developed a new tool to facilitate query building to retrieve scientific literature related to EHRs. Results: We have developed CDAPubMed, an open-source web browser extension to integrate EHR features into biomedical literature retrieval approaches. Clinical users can use CDAPubMed to: (i) load patient clinical documents, i.e., EHRs based on the Health Level 7 Clinical Document Architecture standard (HL7-CDA); (ii) identify relevant terms for scientific literature search in these documents, i.e., Medical Subject Headings (MeSH), automatically driven by the CDAPubMed configuration, which advanced users can optimize to adapt to each specific situation; and (iii) generate and launch literature search queries to a major search engine, i.e., PubMed, to retrieve citations related to the EHR under examination. Conclusions: CDAPubMed is a platform-independent tool designed to facilitate literature searching using keywords contained in specific EHRs. CDAPubMed is visually integrated, as an extension of a widespread web browser, within the standard PubMed interface. It has been tested on a public dataset of HL7-CDA documents, returning significantly fewer citations since queries are focused on characteristics identified within the EHR. For instance, compared with more than 200,000 citations retrieved by "breast neoplasm" alone, fewer than ten citations were retrieved when ten patient features were added using CDAPubMed. This is an open-source tool that can be freely used for non-profit purposes and integrated with other existing systems.
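By way of illustration of the query narrowing described above (this is not CDAPubMed itself, which is implemented as a browser extension; the MeSH terms below are hypothetical), a PubMed query built from EHR-derived MeSH terms can be issued against the public NCBI E-utilities endpoint as follows:

import json
import urllib.parse
import urllib.request

def pubmed_count(mesh_terms):
    """Count PubMed citations matching all of the given MeSH terms (AND-combined)."""
    query = " AND ".join(f'"{t}"[MeSH Terms]' for t in mesh_terms)
    url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
           + urllib.parse.urlencode({"db": "pubmed", "term": query,
                                     "retmode": "json", "retmax": 0}))
    with urllib.request.urlopen(url) as resp:
        return int(json.load(resp)["esearchresult"]["count"])

# Hypothetical terms taken from a patient record: each added term narrows the result set.
print(pubmed_count(["breast neoplasms"]))
print(pubmed_count(["breast neoplasms", "tamoxifen", "postmenopause"]))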
Abstract:
Along with the growing complexity of logistics chains, the demand for information transparency has increased. The use of intelligent RFID technology offers the possibility of optimizing and controlling all capacities in use, since it enables the identification and tracking of goods along the entire supply chain. Every single product can be located at any given time, and a multitude of current and historical data can be transferred. The interaction of the flow of material and the flow of information between the various process steps can be optimized by using RFID technology, since it guarantees that all required data are available at the right time and at the right place. The local accessibility and convertibility of data allows a flexible, decentralised control of logistic systems. Additional advantages of RFID components are that they are individually writable and that they can be identified over considerable distances even when there is no line of sight between tag and reader. The use of RFID transponders opens up new potential regarding process security, reduction of logistics costs, and availability of products. The undisputed potential made accessible by RFID elements is only beneficial when the decentralised information attached to goods and loading equipment can be reliably retrieved at the required points. The communication between tag and reader can be influenced by different materials, such as metal, that can disturb or complicate the radio contact. Communication reliability is the subject of various tests and experiments that analyse the effects of different filling materials as well as different alignments of tags on the loading equipment.
Abstract:
Over the last couple of years there has been an ongoing debate on how sales managers contribute to organizational value. Direct measures linking sales-marketing interface quality to company performance are compromised, as company performance is influenced by a plethora of other factors. We advocate that the use of sales information is the missing link between sales-marketing relationship quality and organizational outcomes. We propose and empirically test a model of how sales-marketing interface quality affects managerial use of sales information, which in turn leads to enhanced organizational performance. We found that marketing managers rely on sales information if they think that their sales counterpart is trustworthy. Integration between the sales and marketing functions contributes to a trust-based relationship.
Abstract:
Background: A major goal in the post-genomic era is to identify and characterise disease susceptibility genes and to apply this knowledge to disease prevention and treatment. Rodents and humans have remarkably similar genomes and share closely related biochemical, physiological and pathological pathways. In this work we utilised the latest information on the mouse transcriptome, as revealed by the RIKEN FANTOM2 project, to identify novel human disease-related candidate genes. We define a new term, "patholog", to mean a homolog of a human disease-related gene encoding a product (transcript, anti-sense or protein) potentially relevant to disease. Rather than focusing only on Mendelian inheritance, we applied the analysis to all potential pathologs regardless of their inheritance pattern. Results: Bioinformatic analysis and human curation of 60,770 RIKEN full-length mouse cDNA clones produced 2,578 sequences that showed similarity (70-85% identity) to known human disease genes. Using a newly developed biological information extraction and annotation tool (FACTS) in parallel with human expert analysis of 17,051 MEDLINE scientific abstracts, we identified 182 novel potential pathologs. Of these, 36 were identified by computational tools only, 49 by human expert analysis only, and 97 by both methods. These pathologs were related to neoplastic (53%), hereditary (24%), immunological (5%), cardio-vascular (4%), or other (14%) disorders. Conclusions: Large-scale genome projects continue to produce a vast amount of data with potential application to the study of human disease. For this potential to be realised, we need intelligent strategies for data categorisation and the ability to link sequence data with relevant literature. This paper demonstrates the power of combining human expert annotation with FACTS, a newly developed bioinformatics tool, to identify novel pathologs from within large-scale mouse transcript datasets.
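A minimal sketch of the similarity filter described in the Results is given below; the hit records are hypothetical, and in the actual work the 70-85% identity band was applied to alignments of the RIKEN clones against known human disease genes:

# Illustrative filter: keep mouse cDNA hits whose identity to a known human
# disease gene falls within the 70-85% band used to flag candidate pathologs.

def candidate_pathologs(hits, low=70.0, high=85.0):
    """hits: iterable of (clone_id, human_gene, percent_identity) tuples."""
    return [(clone, gene, ident) for clone, gene, ident in hits
            if low <= ident <= high]

hits = [
    ("riken_0001", "BRCA2", 82.4),   # within the band -> candidate patholog
    ("riken_0002", "CFTR",  96.1),   # above the band -> excluded
    ("riken_0003", "MYH7",  61.0),   # below the band -> excluded
]
print(candidate_pathologs(hits))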
Abstract:
In the present research, a reconceptualisation of the role of norms in the link between prejudiced attitudes and discriminatory behaviour — along the lines suggested by the social identity perspective — was tested. In the first study, group salience and group norm were manipulated. As expected, participants ascribed negative traits to significantly fewer Asian university students when they had received consensus information along these lines from a salient ingroup rather than from a salient outgroup. These results were replicated on a measure of strength of motivation to appear nonprejudiced. In a second study, group salience and norm were once again manipulated and strength of attitude and perceived group threat were measured. As predicted, people's negative attitudes towards globalisation were more likely to predict congruent behavioural responses to the extent that the group norm supported the attitude and group salience was high, particularly when high levels of group threat were perceived.
Abstract:
Objective: To determine the degree of knowledge that cardiologists from São Paulo, Brazil, have regarding a low-prevalence entity associated with a high rate of sudden death: Brugada syndrome. Methods: Two hundred forty-four cardiologists were interviewed using an instrument divided into two parts: in the first, we recorded gender, age, and data related to academic profile. The second, answered only by professionals who reported having some degree of knowledge of the syndrome, had 28 questions that evaluated their knowledge. The answers were spontaneous, with no opportunity to consult references. We used uni- and multivariate analyses of the average percentage of right and wrong answers and of the influence of the academic profile. Results: Most respondents were male (61.1%), the average age was 44.32 ± 10.83 years, 40% had obtained their degree more than 20 years earlier, 44% were educated in public institutions, 69% had completed a residency in cardiology, 20% had practised overseas, 12% had postgraduate degrees, 41% were linked to an educational institution, 24% had publication(s) in an indexed journal, 17.2% were authors of chapters in books, 2.5% had edited books, and 10% were linked to the Brazilian Society of Cardiac Arrhythmias. The average percentage of right answers was 45.7%. Conclusion: The sample studied revealed little knowledge of the entity. A residency in cardiology was the most significant factor for the percentage of right answers. Other significant factors were the interviewee's link to an educational institution or to the Brazilian Society of Cardiac Arrhythmias, and having a specialist degree.
Abstract:
This paper examines the use of on-line discussion as a medium for learning in a pre-service teacher education program. As part of an Education Studies course, student teachers engaged in a discussion of issues related to technology and equity in schools. The design of the task and the subsequent analysis of the on-line text were part of a research project investigating whether and how communications technology can be used to integrate and extend the learning of teacher education students. The main argument developed in the paper is that the on-line activity created distinctive sets of writing practices. These practices enabled students to make connections between the often disparate parts of teacher education programs: theory and practice, campus and school, research and experience. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
To avoid additional hardware deployment, indoor localization systems have to be designed in such a way that they rely on existing infrastructure only. Besides processing measurements between nodes, the localization procedure can incorporate all available information about the environment. In order to enhance the performance of Wi-Fi-based localization systems, the innovative solution presented in this paper also considers negative information. An indoor tracking method inspired by Kalman filtering is also proposed.
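The abstract only states that the tracking method is inspired by Kalman filtering; as a point of reference, a minimal constant-velocity Kalman filter for noisy 2-D Wi-Fi position fixes is sketched below (the use of negative information described in the paper is not modelled here, and all noise parameters are assumed values):

import numpy as np

# Minimal constant-velocity Kalman filter for 2-D position fixes.
# State vector: [x, y, vx, vy].

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)      # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)      # only position is observed
Q = 0.05 * np.eye(4)                           # process noise (assumed tuning value)
R = 4.0 * np.eye(2)                            # measurement noise for Wi-Fi fixes (assumed)

x = np.zeros(4)            # initial state
P = 10.0 * np.eye(4)       # initial uncertainty

def step(x, P, z):
    """One predict/update cycle for a new position measurement z = [x, y]."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in ([2.0, 1.0], [2.9, 1.8], [4.1, 3.1]):
    x, P = step(x, P, np.array(z))
print(x[:2])   # smoothed position estimate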
Abstract:
Master's degree in Informatics Engineering, Specialization Area in Architectures, Systems and Networks.