945 results for Information-Analytical System


Relevance:

90.00%

Publisher:

Abstract:

Current commercial and academic OLAP tools do not process XML data that contains XLink. To overcome this limitation, this paper proposes an analytical system composed of LMDQL, an analytical query language, and the XLDM metamodel, which models cubes of XML documents with XLink and handles the syntactic, semantic, and structural heterogeneities commonly found in XML documents. As current W3C query languages for navigating XML documents do not support XLink, XLPath is discussed in this article to provide features for LMDQL query processing. A prototype system enabling the analytical processing of XML documents that use XLink is also detailed. This prototype includes a driver, named sql2xquery, which maps SQL queries into XQuery. To validate the proposed system, a case study and its performance evaluation are presented to analyze the impact of analytical processing on XML/XLink documents.
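The abstract does not give the mapping rules used by sql2xquery, but the general idea of translating relational queries into XQuery FLWOR expressions can be sketched. The grammar below is a minimal assumption covering only simple SELECT statements; the real driver is far more general.

```python
import re

def sql_to_xquery(sql):
    """Translate a restricted SELECT query into an XQuery FLWOR expression.

    Illustrative only: the sql2xquery driver described in the paper covers
    far more of SQL; the grammar and naming conventions here are assumptions.
    """
    m = re.match(
        r"SELECT\s+(?P<cols>[\w,\s*]+)\s+FROM\s+(?P<table>\w+)"
        r"(?:\s+WHERE\s+(?P<col>\w+)\s*=\s*'(?P<val>[^']*)')?\s*$",
        sql.strip(), re.IGNORECASE)
    if not m:
        raise ValueError("unsupported SQL")
    table = m.group("table")
    # SQL table -> XML document, rows -> repeated <row> elements (assumed layout)
    where = ""
    if m.group("col"):
        where = f"\nwhere $row/{m.group('col')} = '{m.group('val')}'"
    cols = [c.strip() for c in m.group("cols").split(",")]
    if cols == ["*"]:
        ret = "$row"
    else:
        ret = "<row>{" + ", ".join(f"$row/{c}" for c in cols) + "}</row>"
    return f"for $row in doc('{table}.xml')/{table}/row{where}\nreturn {ret}"

print(sql_to_xquery("SELECT name, price FROM products WHERE category = 'books'"))
```

The XML layout (one `<row>` element per tuple) is an assumption made so the projection and selection clauses have an obvious XPath counterpart.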

Relevance:

90.00%

Publisher:

Abstract:

Background: Recent advances in medical and biological technology have stimulated the development of new testing systems that provide huge and varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes.

Results: This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that supports constant changes in human genome testing and can provide patients with updated results based on the most recent, validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: 1) process scalability, achieved through a relational database implementation, and 2) process correctness, ensured by process algebra. Furthermore, the software allows end users to define genetic tests without requiring any knowledge of business process notation or process algebra.

Conclusions: This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have demonstrated the feasibility and shown the usability benefits of a rigorous approach that can specify, validate, and perform genetic testing through easy end-user interfaces.
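The combination described above, a process plan stored relationally plus a rigorous check of control flow, can be sketched minimally. The schema, process name, and task names below are invented for illustration; the CEGH system's actual ACP-based specifications are much richer than this sequential-order check.

```python
import sqlite3

# Hypothetical schema: a process plan stored in a relational table, with
# sequential control flow enforced by checking an executed trace against it.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE plan (process TEXT, step INTEGER, task TEXT)")
con.executemany("INSERT INTO plan VALUES (?, ?, ?)", [
    ("dna_test", 1, "extract_dna"),
    ("dna_test", 2, "run_pcr"),
    ("dna_test", 3, "sequence"),
    ("dna_test", 4, "report"),
])

def valid_trace(process, trace):
    """True iff the executed trace follows the planned sequential order."""
    rows = con.execute(
        "SELECT task FROM plan WHERE process = ? ORDER BY step", (process,))
    return [r[0] for r in rows] == list(trace)

print(valid_trace("dna_test", ["extract_dna", "run_pcr", "sequence", "report"]))
print(valid_trace("dna_test", ["run_pcr", "extract_dna", "sequence", "report"]))
```

Storing the plan in tables gives the scalability the paper claims, while the trace check stands in, very loosely, for the correctness guarantees that ACP provides.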

Relevance:

90.00%

Publisher:

Abstract:

The subject of this thesis is the development of a gas chromatography (GC) system for non-methane hydrocarbons (NMHCs) and the measurement of samples within the project CARIBIC (Civil Aircraft for the Regular Investigation of the atmosphere Based on an Instrument Container, www.caribic-atmospheric.com). Air samples collected at cruising altitude in the upper troposphere and lowermost stratosphere contain hydrocarbons at low levels (ppt range), which imposes substantial demands on detection limits. Full automation made it possible to maintain constant conditions during sample processing and analysis; it also allows overnight operation, saving time. Gas chromatography with flame ionization detection (FID), combined with a dual-column approach, enables simultaneous detection with almost equal per-carbon-atom response for all hydrocarbons except ethyne. The first part of this thesis presents technical descriptions of the individual parts of the analytical system; apart from the sample treatment and calibration procedures, the sample collector is described. The second part deals with the analytical performance of the GC system, discussing the tests that were made. Finally, the results of the measurement flights are assessed in terms of data quality, and two flights are discussed in detail. Analytical performance is characterized by detection limits and uncertainties for each compound, by tests of calibration-mixture conditioning and of the carbon dioxide trap to determine their influence on the analyses, and by comparing the responses of calibrated substances over the period when the flight analyses were made. A comparison of both systems shows good agreement. However, because of the insufficient capacity of the CO2 trap, the signal of one column was suppressed by carbon dioxide breakthrough to such an extent that its results appeared unreliable.
Plausibility tests for the internal consistency of the data sets are based on common patterns exhibited by tropospheric NMHCs. All tests show that samples from the first flights do not comply with the expected pattern. Additionally, detected alkene artefacts suggest potential problems with storage or contamination in all measurement flights. The last two flights, # 130-133 and # 166-169, comply with the tests and are therefore analyzed in detail. Samples were analyzed in terms of their origin (troposphere vs. stratosphere, backward trajectories) and their aging (NMHC ratios), and detected plumes were compared with the chemical signatures of Asian outflows. In the last chapter, future development of the presented system, with a focus on separation, is outlined. An extensive appendix documents all important aspects of the dissertation, from a theoretical introduction through illustrations of sample treatment to overview diagrams for the measured flights.
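The per-compound detection limits mentioned above are conventionally computed as three standard deviations of the blank signal divided by the calibration slope. A minimal sketch, with invented numbers (the thesis does not report its blank data here):

```python
import statistics

def detection_limit(blank_signals, slope):
    """Limit of detection as 3*sigma of the blank signal divided by the
    calibration slope -- a standard figure of merit. All values below are
    made up for illustration."""
    return 3 * statistics.stdev(blank_signals) / slope

blanks = [0.12, 0.15, 0.11, 0.14, 0.13]  # detector response for blank runs
slope = 2.5                              # response per ppt, from calibration
print(f"LOD = {detection_limit(blanks, slope):.3f} ppt")
```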

Relevance:

90.00%

Publisher:

Abstract:

The research project that is the object of this thesis focuses on the development of an advanced analytical system based on the combination of an improved thin-layer chromatography (TLC) plate with infrared (FTIR) and Raman microscopies for the detection of synthetic dyes. The characterization of organic colorants, which are commonly present in mixtures with other components and in very limited amounts, still represents a challenging task in the scientific analysis of cultural heritage materials. The approach provides selective spectral fingerprints for each compound, exploiting the complementary information obtained by micro ATR-RAIRS-FTIR and SERS-Raman analyses, which can be performed on the same separated spot. In particular, silver iodide (AgI) applied on a gold-coated slide is proposed as an efficient stationary phase for the discrimination of complex analyte mixtures, such as dyes present in samples of art-historical interest. The gold-AgI-TLC plate performs well both in the chromatographic separation of analytes and in the spectroscopic detection of components. The use of a mid-IR-transparent inorganic salt as the stationary phase avoids interference from background absorption in FTIR investigations. Moreover, in ATR microscopy measurements performed on the gold-AgI surface, a considerable enhancement in spectral intensity is observed. Complementary information can be obtained by Raman analyses, exploiting the SERS activity of the AgI substrate. The method has been tested for the characterization of a mixture of three synthetic organic colorants widely used in dyeing processes: Brilliant Green (BG1), Rhodamine B (BV10), and Methylene Blue (BB9).

Relevance:

90.00%

Publisher:

Abstract:

This paper describes the first participation of the IR-n system in Spoken Document Retrieval, focusing on the experiments we conducted before participation and the results we obtained. IR-n is a passage-based Information Retrieval system that uses sentence recognition to define passages. The main goal of this experiment is therefore to adapt the IR-n system to the structure of spoken documents by means of an utterance splitter and an overlapping-passage technique that allows utterances and sentences to be matched.
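The overlapping-passage idea can be sketched as a sliding window over recognized sentences, so that an utterance falling across a passage boundary is still fully contained in some passage. The window size and step below are assumptions; the abstract does not give IR-n's actual parameters.

```python
def overlapping_passages(sentences, size=4, step=2):
    """Split a document into fixed-size passages of consecutive sentences
    that overlap by (size - step) sentences. Parameter values are
    illustrative, not the real IR-n settings."""
    passages = []
    for start in range(0, max(len(sentences) - size + 1, 1), step):
        passages.append(sentences[start:start + size])
    return passages

doc = [f"s{i}" for i in range(1, 9)]  # eight recognized sentences/utterances
for p in overlapping_passages(doc):
    print(p)
```

Each passage shares half its sentences with its neighbour, which is what lets utterance boundaries from the speech recognizer be matched against sentence-based passages.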

Relevance:

90.00%

Publisher:

Abstract:

In this paper we explore the use of semantic classes in an existing information retrieval system in order to improve its results. We use two different ontologies of semantic classes (WordNet Domains and Basic Level Concepts) to re-rank the retrieved documents and obtain better recall and precision. Finally, we implement a new method for weighting the expanded terms that takes into account the weights of the original query terms and their WordNet relations to the new ones, which has been shown to improve the results. These approaches were evaluated in the CLEF Robust-WSD Task, obtaining an improvement of 1.8% in GMAP for the semantic-classes approach and 10% in MAP for the WordNet term-weighting approach.
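A sketch of the term-weighting idea: an expansion term inherits part of the original term's weight, scaled by the strength of its WordNet relation. The formula, the `alpha` damping factor, and the relation strengths are all assumptions for illustration; the paper's exact scheme is not given in the abstract.

```python
def expanded_term_weight(original_weight, relation_strength, alpha=0.6):
    """Hypothetical weight for a query-expansion term: a fraction of the
    original term's weight, scaled by how strongly the two terms are
    related in WordNet. Formula and alpha are assumptions."""
    return alpha * original_weight * relation_strength

query = {"car": 1.0}
# expansion terms proposed via WordNet, with a relation strength in [0, 1]
# (e.g. 1.0 for a synonym, lower for a hypernym) -- values invented here
expansions = {"automobile": ("car", 1.0), "vehicle": ("car", 0.5)}
expanded = dict(query)
for term, (src, strength) in expansions.items():
    expanded[term] = expanded_term_weight(query[src], strength)
print(expanded)
```

The effect is that a distant hypernym like "vehicle" contributes less to retrieval than a direct synonym, which is the intuition behind weighting expanded terms by their WordNet relation.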

Relevance:

90.00%

Publisher:

Abstract:

The construction industry is characterised by fragmentation and suffers from a lack of collaboration, often adopting adversarial working practices to achieve deliverables. For the UK Government and the construction industry, BIM is a game changer that aims to rectify this fragmentation and promote collaboration. However, it has become clear that better controls and definitions of both data deliverables and data classification are essential. Traditional methods and techniques for collating and inputting data have proven time-consuming and do little to improve or add value to the overall task of improving deliverables. Hence arose the need in the industry to develop a Digital Plan of Work (DPoW) toolkit that would aid the decision-making process, provide the required control over project workflows and data deliverables, and enable better collaboration through transparency of need and delivery. The specification for the existing DPoW was for an industry-standard method of describing geometry, requirements, and data deliveries at key stages of the project cycle, together with a structured and standardised information classification system. However, surveys and interviews conducted within this research indicate that the current DPoW resembles a digitised version of the pre-existing plans of work and does not push towards the data-enriched decision-making abilities that advancements in technology now offer. A digital framework is not simply the digitisation of current or historic standard methods and procedures; it is a new, intelligence-driven digital system that uses new tools, processes, procedures, and workflows to eradicate waste and increase efficiency. In addition to reporting on the surveys above, this research paper presents a theoretical investigation into the use of Intelligent Decision Support Systems within a digital plan of work framework.
Furthermore, this paper presents findings on the suitability of utilising advancements in intelligent decision-making system frameworks and Artificial Intelligence for a UK BIM Framework, which should form the foundations of decision-making for projects implemented at BIM Level 2. The gap identified in this paper is that the current digital toolkit does not incorporate the intelligent characteristics available in other industries through advancements in technology, nor exploit the vast amounts of data that a digital plan of work framework could access in order to develop, learn, and adapt its decision-making through the live interaction of project stakeholders.

Relevance:

90.00%

Publisher:

Abstract:

The project “Reference in Discourse” deals with the selection of a specific object from a visual scene in a natural language situation. The goal of this research is to explain this everyday discourse-reference task in terms of a concept generation process based on subconceptual visual and verbal information. The system OINC (Object Identification in Natural Communicators) aims at solving this problem in a psychologically adequate way. The system’s difficulties with incomplete and deviant descriptions correspond to data from experiments with human subjects. The results of these experiments are reported.

Relevance:

90.00%

Publisher:

Abstract:

At present, in large precast-concrete enterprises, the management of precast concrete components has been chaotic. Most enterprises use a labor-intensive manual input method, which is time-consuming, laborious, and error-prone. Other, slightly better, enterprises manage components through bar codes or manually printed serial numbers. However, this is also labor-intensive and is limited by the external environment, which can blur or even erase the serial numbers, causing serious problems for production traceability and quality accountability. Therefore, to achieve its own rapid development and meet the needs of the time, achieving automated production management has become a major challenge for a modern enterprise. To solve the problems of production inefficiency and product traceability, this thesis introduces RFID technology into the production of PHC tubular piles. By designing a production management system for precast concrete components, the enterprise gains control of the entire production process and realizes the informatization of production management. RFID technology is already widely used in fields such as access control, charge management, and logistics. The system adopts passive RFID tags, which are waterproof, shockproof, and interference-resistant, making them suitable for the actual working environment. A tag is bound to each precast-component steel cage (the structure of the PHC tubular pile before concrete placement), so each pile has a unique ID number. The precast component then passes through a series of production steps: placing the steel cage into the mold, mold clamping, concrete pouring (feeding), stretching, centrifuging, curing, mold removal, and splice welding.
At every step of the procedure, the information of the precast component can be read with an RFID reader. Using a portable smart device connected to the database, users can conveniently check, query, and manage production information. The system also records the production parameters and the person in charge, realizing information traceability. It overcomes the disadvantages common among precast-component manufacturers, such as inefficiency, error-proneness, long processing times, high labor intensity, and low information relevance. The system can therefore improve production management efficiency and produce good economic and social benefits, giving it practical value.
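The core traceability mechanism, logging each RFID read against the tag's unique ID and replaying the history per pile, can be sketched as follows. The table schema, tag ID format, and step names are assumptions for illustration, not the thesis's actual design.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical schema: every RFID read during production is logged against
# the tag's unique ID, yielding a traceable history per PHC tubular pile.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE read_event (
    tag_id TEXT, step TEXT, operator TEXT, read_at TEXT)""")

def log_read(tag_id, step, operator):
    """Record one RFID read: which pile, which production step, who was
    in charge, and when."""
    con.execute("INSERT INTO read_event VALUES (?, ?, ?, ?)",
                (tag_id, step, operator,
                 datetime.now(timezone.utc).isoformat()))

def trace(tag_id):
    """Return the production history for one pile, in logging order."""
    rows = con.execute(
        "SELECT step, operator FROM read_event WHERE tag_id = ? ORDER BY rowid",
        (tag_id,))
    return list(rows)

log_read("PILE-0001", "mold_clamping", "op_A")
log_read("PILE-0001", "concrete_pouring", "op_B")
print(trace("PILE-0001"))
```

Because the tag survives the casting process, the same query answers both quality-accountability questions (who performed a step) and traceability questions (which steps a given pile has passed).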

Relevance:

90.00%

Publisher:

Abstract:


Relevance:

90.00%

Publisher:

Abstract:

Most existing open-source search engines use keyword- or tf-idf-based techniques to find documents and web pages relevant to an input query. Although these methods, aided by page rank or knowledge graphs, have proven effective in some cases, they often fail to retrieve relevant results for more complicated queries that require semantic understanding. In this thesis, a self-supervised information retrieval system based on transformers is employed to build a semantic search engine over the library of the Gruppo Maggioli company. Semantic search, or search with meaning, refers to understanding the query instead of simply finding word matches; in general, it represents knowledge in a way suitable for retrieval. We investigate a new self-supervised strategy for handling the training of unlabeled data based on the creation of pairs of 'artificial' queries and their respective positive passages. We claim that by removing the reliance on labeled data, we can use the large volume of unlabeled material on the web without being limited to languages or domains where labeled data is abundant.
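The retrieval mechanics behind such a system, embed query and documents, then rank by cosine similarity, can be shown with a toy stand-in. A real system uses a transformer bi-encoder to produce the vectors; here a bag-of-words count vector replaces it so the ranking step is runnable without a model, and the corpus snippets are invented.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words counts. A transformer encoder would
    produce a dense vector here instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = ["municipal tax regulations", "public procurement rules",
          "school lunch menus"]
query = "rules for public procurement"
ranked = sorted(corpus, key=lambda d: cosine(embed(query), embed(d)),
                reverse=True)
print(ranked[0])
```

The self-supervised part of the thesis concerns how the encoder is trained (generated query-passage pairs as positives); the similarity-ranking step sketched here is unchanged regardless of how the embeddings are obtained.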

Relevance:

80.00%

Publisher:

Abstract:

OBJECTIVE: To analyze the characteristics of falls in the age group 60 years and over, with emphasis on same-level falls, among residents of the State of São Paulo, based on the analysis of different official information sources. METHODS: We analyzed the 1,328 deaths recorded in the SIM in 2007, the 20,726 hospitalizations recorded in the SIH/SUS in 2008, and the 359 visits to 24 emergency units (UEs) in the State of São Paulo in 2007. Logistic regression was used to test associations between variables in emergency visits. RESULTS: Males predominated among deaths (51.2%), while females predominated among hospitalizations (61.1%) and emergency visits (60.4%). The mortality rate was 31/100,000 inhabitants, increasing with age and reaching 110.7/100,000 inhabitants in the group aged 80 and over. Same-level falls accounted for the largest proportion of defined deaths (35%), hospitalizations (47.5%), and emergency visits (66%), growing in importance with increasing age. The home was the place of occurrence in 65.8% of cases seen in emergency units. Head injuries were prominent among deaths; femur fractures were the most frequent injuries in hospitalizations and emergency visits. In emergency units, women were significantly more likely (1.55 times) than men to be seen for a fall rather than for other external causes. Compared with the 60-69 age group, individuals aged 70-79 were 2.10 times and individuals aged 80 and over were 2.26 times significantly more likely to be seen for a fall rather than for other external causes. There was no statistically significant difference in sex or age group between individuals who suffered same-level falls and those who suffered other types of falls. CONCLUSION: It is recommended that the prevention of falls among the elderly be placed on the public policy agenda without further delay.

Relevance:

80.00%

Publisher:

Abstract:

The integration of optical detection methods into continuous-flow microsystems can greatly extend their range of application, as long as some negative effects of scaling down are minimized. Downsizing affects the sensitivity of absorbance-based systems to a greater extent than that of emission-based ones; in either case, however, careful design of the instrumental setup is needed to maintain the analytical figures of merit. In this work, we present the construction and evaluation of a simple miniaturized optical system that integrates a novel flow-cell configuration to carry out chemiluminescence (CL) measurements using a simple photodiode. It consists of a micro-mixer based on a vortex structure, constructed by means of low-temperature cofired ceramics (LTCC) technology. This mixer not only efficiently promotes the CL reaction, thanks to the high turbulence generated, but also allows detection to be carried out in the same area, avoiding intensity losses. As a demonstration, a flow injection system has been designed and optimized for the detection of cobalt(II) in water samples. It shows a linear response between 2 and 20 µM with a correlation of r > 0.993, a limit of detection of 1.1 µM, a repeatability of RSD = 12.4%, and an analysis time of 17 s. These results demonstrate the suitability of the proposal for the determination of compounds involved in CL reactions by means of an easily constructed, versatile device based on low-cost instrumentation.
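The linear response and correlation coefficient reported above come from an ordinary least-squares calibration of CL intensity against standard concentrations. A minimal sketch, with invented data points spanning the reported 2-20 µM range:

```python
def linear_fit(xs, ys):
    """Least-squares line and Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

conc = [2, 5, 10, 15, 20]                # cobalt(II) standards, uM (invented)
signal = [4.1, 10.2, 19.8, 30.5, 40.3]   # CL intensity, arbitrary units
slope, intercept, r = linear_fit(conc, signal)
print(f"slope={slope:.2f}, intercept={intercept:.2f}, r={r:.4f}")
```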