950 results for "Online services using open-source NLP tools"


Relevance:

100.00%

Publisher:

Abstract:

Over the last decade, advances and innovations from silicon photonics technology have been observed in the telecommunications and computing industries. This technology, which employs silicon as an optical medium, relies on current CMOS micro-electronics fabrication processes to enable medium-scale integration of many nano-photonic devices into photonic integrated circuitry. However, other fields of research, such as optical sensor processing, can also benefit from silicon photonics technology, especially for sensors in which the physical measurement is wavelength encoded. In this research work, we present the design and application of a thermally tuned silicon photonic device as an optical sensor interrogator. The main device is a micro-ring resonator filter 10 μm in diameter. A photonic design toolkit was developed based on open-source software from the research community. With those tools it was possible to estimate the resonance and spectral characteristics of the filter. From the resulting design parameters, a 7.8 × 3.8 mm optical chip was fabricated using standard micro-photonics techniques. In order to tune the ring resonance, Nichrome micro-heaters were fabricated on top of the device. Fabricated devices were systematically characterized and their tuning responses determined. From measurements, a ring resonator with a free spectral range of 18.4 nm and a bandwidth of 0.14 nm was obtained. Using just 5 mA it was possible to tune the device resonance by up to 3 nm. To apply the device as a sensor interrogator, a wavelength-estimation model based on measuring the time interval between peaks was developed, and simulations were carried out to assess its performance. To test the technique, an experiment using a fiber Bragg grating (FBG) optical sensor was set up; estimates of the sensor's wavelength shift under axial strain agreed with spectrum-analyzer measurements to within 22 pm. The results imply that signals from FBG sensors can be processed with good accuracy using a micro-ring device, with the advantages of compact size, scalability and versatility. The system also has further applications, such as processing wavelength shifts from integrated photonic sensors and tracking resonances from laser sources.
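A minimal sketch of the peak-interval idea described above, assuming the ring resonance is swept linearly in wavelength at a known rate; the sweep rate, reference wavelength, peak detector and synthetic signal are illustrative assumptions, not values or code from the thesis:

```python
import numpy as np

# Hypothetical values, for illustration only (not taken from the thesis).
SWEEP_RATE_NM_PER_S = 0.5   # assumed linear thermal tuning rate of the ring
LAMBDA_REF_NM = 1550.0      # assumed wavelength of a known reference peak

def peak_times(t, signal):
    """Return the times of local maxima that rise above a simple threshold."""
    thr = signal.mean() + 2 * signal.std()
    idx = np.where((signal[1:-1] > signal[:-2]) &
                   (signal[1:-1] > signal[2:]) &
                   (signal[1:-1] > thr))[0] + 1
    return t[idx]

def estimate_wavelength(t, signal):
    """Map the interval between the reference peak and the sensor peak to a
    wavelength, assuming the resonance sweep is linear in time."""
    peaks = peak_times(t, signal)
    if len(peaks) < 2:
        raise ValueError("need a reference peak and a sensor peak")
    return LAMBDA_REF_NM + SWEEP_RATE_NM_PER_S * (peaks[1] - peaks[0])

# Synthetic photocurrent trace: two peaks 2 s apart -> 1 nm above the reference.
t = np.linspace(0, 10, 10_000)
sig = np.exp(-((t - 3) / 0.05) ** 2) + np.exp(-((t - 5) / 0.05) ** 2)
print(round(estimate_wavelength(t, sig), 3))   # ~1551.0
```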

Relevance:

100.00%

Publisher:

Abstract:

© 2016 Springer Science+Business Media New York. Researchers studying mammalian dentitions from functional and adaptive perspectives have increasingly moved towards dental topography measures that can be estimated from 3D surface scans and do not require the identification of specific homologous landmarks. Here we present molaR, a new R package designed to assist researchers in calculating four commonly used topographic measures from surface scans of teeth: Dirichlet Normal Energy (DNE), Relief Index (RFI), Orientation Patch Count (OPC), and Orientation Patch Count Rotated (OPCR), enabling a unified application of these informative new metrics. In addition to the topographic measuring tools, molaR has complementary plotting functions enabling highly customizable visualization of results. This article gives a detailed description of the DNE measure, walks researchers through installing, operating, and troubleshooting molaR and its functions, and gives an example of a simple comparison measuring teeth of the primates Alouatta and Pithecia in molaR and other available software packages. molaR is a free and open-source software extension, which can be found at doi:10.13140/RG.2.1.3563.4961 (molaR v. 2.0) as well as on CRAN, the Internet repository for R packages.
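As a rough illustration of one of the measures named above: in one commonly used formulation, the Relief Index is ln(√(3D surface area)/√(2D projected area)). The toy sketch below computes it for a tiny triangle mesh in Python (molaR itself is an R package); approximating the occlusal footprint by a 2D convex hull is a simplification made only for this example.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Toy Relief Index computation on a triangle mesh; not molaR's implementation.
# The projected footprint is approximated by the convex hull of the vertices
# in the occlusal (x, y) plane, which is only adequate for simple surfaces.

def triangle_areas(vertices, faces):
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)

def relief_index(vertices, faces):
    area_3d = triangle_areas(vertices, faces).sum()
    area_2d = ConvexHull(vertices[:, :2]).volume   # 2D hull "volume" is an area
    return np.log(np.sqrt(area_3d) / np.sqrt(area_2d))

# Example: a pyramid-like "cusp" over a unit square base.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0], [0.5, 0.5, 1.0]])
faces = np.array([[0, 1, 4], [1, 2, 4], [2, 3, 4], [3, 0, 4]])
print(relief_index(verts, faces))   # > 0: surface relief exceeds its footprint
```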

Relevance:

100.00%

Publisher:

Abstract:

In Model-Driven Engineering (MDE), the developer creates a model using a language such as the Unified Modeling Language (UML) or UML for Real-Time (UML-RT) and uses tools such as Papyrus or Papyrus-RT to generate code from that model. Tracing gives developers insight into their application as it runs, such as which events occur and their timing. We add monitoring capabilities, using the Linux Trace Toolkit: next generation (LTTng), to models created in UML-RT with Papyrus-RT. The implementation requires changing the code generator so that tracing statements for the events the user wants to monitor are added to the generated code. We also change the makefile to automate the build process, and we create an Extensible Markup Language (XML) file that allows developers to view their traces visually in Trace Compass, an Eclipse-based trace viewing tool. Finally, we validate our results using three models that we create and trace.

Relevance:

100.00%

Publisher:

Abstract:

Modern software applications are becoming more dependent on database management systems (DBMSs). DBMSs are usually used as black boxes by software developers. For example, Object-Relational Mapping (ORM) is one of the most popular database abstraction approaches in use today. With ORM, objects in object-oriented languages are mapped to records in the database, and object manipulations are automatically translated to SQL queries. As a result of this conceptual abstraction, developers do not need deep knowledge of databases; however, all too often the abstraction leads to inefficient and incorrect database access code. This thesis therefore proposes a series of approaches to improve the performance of database-centric software applications that are implemented using ORM. Our approaches focus on troubleshooting and detecting inefficient database accesses (i.e., performance problems) in the source code, and we rank the detected problems based on their severity. We first conduct an empirical study on the maintenance of ORM code in both open-source and industrial applications. We find that ORM performance-related configurations are rarely tuned in practice, and that there is a need for tools that can help improve and tune the performance of ORM-based applications. We therefore propose approaches along two dimensions: 1) helping developers write more performant ORM code; and 2) helping developers tune ORM configurations. To provide tooling support, we first propose static analysis approaches to detect performance anti-patterns in the source code, and we automatically rank the detected anti-pattern instances according to their performance impact. Our study finds that by resolving the detected anti-patterns, application performance can be improved by 34% on average. We then discuss our experience and lessons learned when integrating our anti-pattern detection tool into industrial practice, in the hope that this experience can improve the industrial adoption of future research tools. However, as static analysis approaches are prone to false positives and lack runtime information, we also propose dynamic analysis approaches to further help developers improve the performance of their database access code. We propose automated approaches to detect redundant data access anti-patterns in the database access code; our study finds that resolving such redundant data accesses can improve application performance by an average of 17%. Finally, we propose an automated approach to tune performance-related ORM configurations using both static and dynamic analysis. Our study shows that this approach can help improve application throughput by 27-138%. Through case studies on real-world applications, we show that all of our proposed approaches provide valuable support to developers and help improve application performance significantly.
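For readers unfamiliar with the kind of inefficient ORM access the thesis targets, the sketch below shows one classic anti-pattern (the "N+1 select") and its fix. It uses SQLAlchemy 2.x in Python purely as an illustration; the thesis's own studies and detection tooling are not tied to this library or example.

```python
from sqlalchemy import ForeignKey, create_engine, select
from sqlalchemy.orm import (DeclarativeBase, Mapped, Session, mapped_column,
                            relationship, selectinload)

class Base(DeclarativeBase):
    pass

class Author(Base):
    __tablename__ = "author"
    id: Mapped[int] = mapped_column(primary_key=True)
    posts: Mapped[list["Post"]] = relationship(back_populates="author")

class Post(Base):
    __tablename__ = "post"
    id: Mapped[int] = mapped_column(primary_key=True)
    author_id: Mapped[int] = mapped_column(ForeignKey("author.id"))
    author: Mapped["Author"] = relationship(back_populates="posts")

engine = create_engine("sqlite://")          # in-memory database for the demo
Base.metadata.create_all(engine)

with Session(engine) as session:
    # Anti-pattern: lazy loading issues one extra SELECT per Author row.
    for author in session.scalars(select(Author)):
        _ = author.posts                     # N additional round trips

    # Fix: load the collections eagerly in a bounded number of queries.
    stmt = select(Author).options(selectinload(Author.posts))
    for author in session.scalars(stmt):
        _ = author.posts                     # already loaded, no extra queries
```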

Relevance:

100.00%

Publisher:

Abstract:

Here, we describe gene expression compositional assignment (GECA), a powerful, yet simple method based on compositional statistics that can validate the transfer of prior knowledge, such as gene lists, into independent data sets, platforms and technologies. Transcriptional profiling has been used to derive gene lists that stratify patients into prognostic molecular subgroups and assess biomarker performance in the pre-clinical setting. Archived public data sets are an invaluable resource for subsequent in silico validation, though their use can lead to data integration issues. We show that GECA can be used without the need for normalising expression levels between data sets and can outperform rank-based correlation methods. To validate GECA, we demonstrate its success in the cross-platform transfer of gene lists in different domains including: bladder cancer staging, tumour site of origin and mislabelled cell lines. We also show its effectiveness in transferring an epithelial ovarian cancer prognostic gene signature across technologies, from a microarray to a next-generation sequencing setting. In a final case study, we predict the tumour site of origin and histopathology of epithelial ovarian cancer cell lines. In particular, we identify and validate the commonly-used cell line OVCAR-5 as non-ovarian, being gastrointestinal in origin. GECA is available as an open-source R package.
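A toy sketch of the general compositional idea referred to above: reducing the expression values of a gene list to proportions (a composition) makes two profiles comparable without normalising absolute levels between data sets. This is not GECA's actual algorithm, only a Python illustration of why compositions sidestep platform scale differences; the expression values are invented.

```python
import numpy as np

def closure(x):
    """Scale a positive expression vector to proportions summing to 1."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def aitchison_distance(x, y):
    """Distance between two compositions via centred log-ratios."""
    clr = lambda v: np.log(v) - np.log(v).mean()
    return np.linalg.norm(clr(closure(x)) - clr(closure(y)))

microarray = np.array([120.0, 35.0, 560.0, 90.0])        # gene list, platform A
rnaseq = np.array([2400.0, 700.0, 11000.0, 1800.0])      # same genes, platform B

# The RNA-seq profile sits on a ~20x larger scale, yet the compositions are
# nearly identical, so the compositional distance is close to zero.
print(aitchison_distance(microarray, rnaseq))
```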

Relevance:

100.00%

Publisher:

Abstract:

Abstract: Decision support systems have been used for years in companies to gain insights from internal data and thus make successful decisions. Lately, thanks to the increasing availability of open data, these systems are also integrating open data to enrich the decision-making process with external data. On the other hand, within an open-data scenario, decision support systems can also be useful for deciding which data should be opened, considering not only technical or legal constraints but also other requirements, such as the "reuse potential" of the data. In this talk, we focus on both issues: (i) open data for decision making, and (ii) decision making for opening data. We will first briefly comment on some research problems regarding the use of open data for decision making. Then, we will outline a novel decision-making approach (based on how open data is actually being used in open-source projects hosted on GitHub) for supporting open data publication. Bio of the speaker: Jose-Norberto Mazón holds a PhD from the University of Alicante (Spain). He is head of the "Cátedra Telefónica" on Big Data and coordinator of the Computing degree at the University of Alicante. He is also a member of the WaKe research group at the University of Alicante. His research work focuses on open data management, data integration and business intelligence within "big data" scenarios, and their application to the tourism domain (smart tourism destinations). He has published his research in international journals such as Decision Support Systems, Information Sciences, Data & Knowledge Engineering and ACM Transactions on the Web. Finally, he is involved in the open data project at the University of Alicante, including its open data portal at http://datos.ua.es

Relevance:

100.00%

Publisher:

Abstract:

Every year, numerous recreational off-road (Todo-Terreno Turístico, TTT) events take place across the country, during which Global Positioning System (GPS) coordinates are automatically recorded by mobile-device applications. This type of information can be used both for tourism promotion and by other entities that need to travel along these rural tracks, typically in mountainous terrain. Among other data, the vehicle's position, speed and altitude are recorded, from which relevant information can be derived, such as whether a route is passable or what speed is recommended. For example, during firefighting operations, fire brigades and civil protection services could know whether these routes are usable when planning the response, with a reduced likelihood of vehicle-access complications, thereby improving response times. This document discusses how an open-source web-mapping application could be designed to allow the sharing, use and exploitation of data on the off-road routes travelled by TTT practitioners. It describes how the application developed in the context of this master's dissertation allows possible routes, including TTT routes, to be selected and ranked, presenting the characteristics of the terrain in order to support decision-making by members of fire brigades. The current interface of the application, which includes a dynamic map and a waypoint manager, is also presented.
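An illustrative sketch of the kind of information the abstract describes deriving from GPS logs: segment speeds and a rough passability flag computed from consecutive fixes. The thresholds, field layout and function names are assumptions made for this example, not the dissertation's implementation.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def summarise_track(fixes, min_speed_kmh=5.0, max_grade=0.35):
    """fixes: list of (lat, lon, elevation_m, time_s); thresholds are arbitrary."""
    speeds, passable = [], True
    for (la1, lo1, el1, t1), (la2, lo2, el2, t2) in zip(fixes, fixes[1:]):
        d = haversine_m(la1, lo1, la2, lo2)
        if t2 > t1 and d > 0:
            speeds.append(3.6 * d / (t2 - t1))          # km/h on this segment
            if abs(el2 - el1) / d > max_grade:
                passable = False                         # too steep for vehicles
    avg = sum(speeds) / len(speeds) if speeds else 0.0
    return {"recommended_kmh": round(avg, 1),
            "passable": passable and avg >= min_speed_kmh}

track = [(37.10, -8.00, 210.0, 0), (37.101, -8.001, 216.0, 30), (37.102, -8.002, 220.0, 70)]
print(summarise_track(track))
```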

Relevance:

100.00%

Publisher:

Abstract:

Purpose – The purpose of this study is to investigate the effect of non-audit services on auditor independence, and the importance of non-audit services as a source of income for audit firms in the United Kingdom. Design/method/approach – The study examines 11 companies in the food retail and wholesale industry during 2007-2014. Five indicators have been used: (1) appointed auditor and provision of non-audit services to audit clients; (2) auditor tenure; (3) non-audit services in relation to total services; (4) tax services in relation to non-audit services; and (5) the Big Four's revenue. Information has been collected using a quantitative approach through annual and transparency reports. The threshold used to measure possible independence threats (self-review, self-interest and familiarity threats) has been set at 18.5%. Findings – This study concludes that the joint provision of audit and non-audit services may impair auditor independence, and that non-audit services are an important source of income for audit firms. The findings showed that in 99% of cases the companies purchased non-audit services from their statutory auditor. Non-audit services in relation to total services surpassed the threshold in 78% of all financial years. Likewise, tax services in comparison to non-audit services exceeded the threshold in 65% of all financial years. The Big Four's revenue from non-audit services to audit clients in relation to total revenue is almost always below the threshold. However, in all financial years except one, total revenue from non-audit services surpassed revenue from audit services by far. Contribution – The study contributes to the ongoing discussion about the effect of non-audit services on auditor independence. Originality/value – This study is one of few that provide detailed information about non-audit services in the food retail and wholesale industry. It highlights social and ethical issues with regard to agency relationships.

Relevance:

100.00%

Publisher:

Abstract:

Background: Digital forensics is a rapidly expanding field, due to the continuing advances in computer technology and the increasing data storage capabilities of devices. However, the tools supporting digital forensics investigations have not kept pace with this evolution, often leaving the investigator to analyse large volumes of textual data and rely heavily on their own intuition and experience. Aim: This research proposes that, given the ability of information visualisation to provide an end user with an intuitive way to rapidly analyse large volumes of complex data, such approaches could be applied to digital forensics datasets. Such methods are investigated, supported by a review of the literature on the use of these techniques in other fields. The hypothesis of this body of research is that by utilising exploratory information visualisation techniques in a tool that supports digital forensic investigations, gains in investigative effectiveness can be realised. Method: To test the hypothesis, this research examines three case studies, each looking at a different form of information visualisation applied to a digital forensic dataset. Two of these case studies take the form of prototype tools developed by the researcher, and one uses a tool created by a third-party research group. A pilot study was conducted on each case, with the strengths and weaknesses of each feeding into the next case study. The culmination of these case studies is a prototype tool, named Insight, that presents a timeline visualisation of user behaviour on a device. This tool was evaluated in an experiment involving a class of university digital forensics students, who were given a number of questions about a synthetic digital forensic dataset. Approximately half were given the Insight prototype to use, and the others a common open-source tool. The assessed metrics included how long the participants took to complete all tasks, how accurate their answers were, and how easy the participants found the tasks to complete. They were also asked for feedback at multiple points throughout the task. Results: The results showed a statistically significant increase in accuracy for one of the six tasks for the participants using the Insight prototype. Participants also found completing two of the six tasks significantly easier when using the prototype tool. There was no statistically significant difference between the completion times of the two participant groups, and no statistically significant difference in the accuracy of participant answers for five of the six tasks. Conclusions: The results from this body of research suggest that there is potential for gains in investigative effectiveness when information visualisation techniques are applied to a digital forensic dataset. Specifically, in some scenarios, the investigator can draw conclusions that are more accurate than those drawn when using primarily textual tools, and there is evidence to suggest that investigators reach these conclusions significantly more easily when using a tool with a visual format. None of the scenarios placed the investigators at a significant disadvantage in terms of accuracy or usability when using the prototype visual tool instead of the textual tool. It is noted that this research did not show that the use of information visualisation techniques leads to any statistically significant difference in the time taken to complete a digital forensics investigation.
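A toy sketch of the kind of timeline view described above: user-behaviour events from a synthetic forensic dataset plotted against time and grouped by artefact type. The events, categories and plotting choices are invented for illustration; this is not the Insight prototype.

```python
from datetime import datetime
import matplotlib.pyplot as plt

# Synthetic forensic events: (timestamp, artefact type, description).
events = [
    ("2016-03-01 09:02", "web history", "visited webmail"),
    ("2016-03-01 09:15", "file system", "created report.docx"),
    ("2016-03-01 09:47", "usb", "removable drive attached"),
    ("2016-03-01 09:51", "file system", "copied report.docx to E:"),
]
categories = sorted({cat for _, cat, _ in events})

fig, ax = plt.subplots(figsize=(8, 2.5))
for ts, cat, label in events:
    t = datetime.strptime(ts, "%Y-%m-%d %H:%M")
    y = categories.index(cat)
    ax.scatter(t, y)                         # one marker per event on the timeline
    ax.text(t, y + 0.08, label, fontsize=8)  # short annotation next to the marker
ax.set_yticks(range(len(categories)))
ax.set_yticklabels(categories)
ax.set_xlabel("time")
plt.tight_layout()
plt.show()
```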

Relevance:

100.00%

Publisher:

Abstract:

A collaboration between dot.rural at the University of Aberdeen and the iSchool at Northumbria University, POWkist is a pilot study exploring potential uses of currently available linked datasets within the cultural heritage domain. Many privately held family history collections ('shoebox archives') remain vulnerable unless a sustainable, affordable and accessible model of citizen-archivist digital preservation can be offered. Citizen-historians have used the web as a platform to preserve cultural heritage; however, with no accessible or sustainable model, these digital footprints have been ad hoc and rarely connected to broader historical research. Similarly, current approaches to connecting material on the web by exploiting linked datasets do not take into account the data characteristics of the cultural heritage domain. Funded by Semantic Media, the POWkist project is investigating how best to capture, curate, connect and present the contents of citizen-historians' shoebox archives in an accessible and sustainable online collection. Using the Curios platform - an open-source digital archive - we have digitised a collection relating to a prisoner of war during WWII (1939-1945). Following a series of user-group workshops, POWkist is now connecting these 'made digital' items with the broader web using a semantic technology model and identifying appropriate linked datasets of relevant content, such as DBpedia (a linked-data extraction of Wikipedia) and Ordnance Survey Open Data. We are analysing the characteristics of cultural heritage linked datasets so that these materials are better visualised, contextualised and presented in an attractive and comprehensive user interface. Our paper will consider the issues we have identified and the solutions we are developing, and will include a demonstration of our work-in-progress.
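As a hedged illustration of what connecting items to a linked dataset such as DBpedia can look like in practice, the sketch below queries the public DBpedia SPARQL endpoint for the abstract of a well-known WWII POW camp. The camp name and query are generic examples chosen for this note; this is not the POWkist pipeline.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Example lookup against DBpedia's public SPARQL endpoint; the resource
# queried ("Stalag Luft III") is just an illustrative WWII POW camp.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?camp ?abstract WHERE {
      ?camp rdfs:label "Stalag Luft III"@en ;
            dbo:abstract ?abstract .
      FILTER (lang(?abstract) = "en")
    }
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["camp"]["value"])
    print(row["abstract"]["value"][:200], "...")
```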

Relevance:

100.00%

Publisher:

Abstract:

FEA simulation of thermal metal cutting is central to interactive design and manufacturing. It is therefore relevant to assess the applicability of open-source FEA software for simulating 2D heat transfer in laser cutting of metal sheets. Using open-source code (e.g. FreeFem++, FEniCS, MOOSE) makes additional scenarios possible (e.g. parallel or CUDA execution) at lower cost. However, a precise assessment is required of the scenarios in which open software is a sound alternative to a commercial package. This article contributes in this regard by presenting a comparison of the aforementioned open-source FEM packages for the simulation of heat transfer in thin (i.e. 2D) sheets subject to a gliding laser point source. We use the commercial ABAQUS software as the reference against which the open-source packages are compared. A convective, linear, thin-sheet heat-transfer model, with and without material removal, is used. This article does not attempt a full design of computer experiments. Our partial assessment shows that the thin-sheet approximation is adequate in terms of relative error for linear alumina sheets. For mesh resolutions finer than 10e−5 m, the temperatures predicted by the open-source and reference software differ by at most 1% of the temperature prediction. Ongoing work includes adaptive re-meshing, nonlinearities, sheet stress analysis and Mach (also called 'relativistic') effects.
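For orientation, the sketch below solves a crude version of the underlying problem with plain finite differences in Python: 2D transient conduction in a thin sheet with a lumped convective loss term and a gliding Gaussian "laser" source. It is not FEA and not any of the packages compared in the article; all material and process values are invented for the example.

```python
import numpy as np

# Illustrative explicit finite-difference model (not FEA, not the article's
# setup): thin-sheet 2D heat conduction with convective loss and a moving
# Gaussian heat source. All parameter values below are assumptions.
nx = ny = 101
L = 0.05                       # sheet side length [m]
dx = L / (nx - 1)
alpha = 1.2e-5                 # thermal diffusivity [m^2/s]
h_eff = 5.0                    # lumped convective loss coefficient [1/s]
T_inf = 300.0                  # ambient temperature [K]
q0 = 2.0e4                     # peak source strength, expressed in [K/s]
r0 = 1.0e-3                    # source radius [m]
v = 0.02                       # laser traverse speed [m/s]
dt = 0.2 * dx**2 / alpha       # stable explicit time step

x = np.linspace(0, L, nx)
X, Y = np.meshgrid(x, x, indexing="ij")
T = np.full((nx, ny), T_inf)

t = 0.0
while t < 1.0:                 # simulate one second of cutting
    xs = 0.005 + v * t         # current source position along the cut line
    src = q0 * np.exp(-((X - xs) ** 2 + (Y - L / 2) ** 2) / r0 ** 2)
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx ** 2
    T = T + dt * (alpha * lap + src - h_eff * (T - T_inf))
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = T_inf   # edges held at ambient
    t += dt

print("peak temperature [K]:", round(float(T.max()), 1))
```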

Relevance:

100.00%

Publisher:

Abstract:

Master's dissertation in Language Sciences, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2016


Relevance:

100.00%

Publisher:

Abstract:

Thesis (Ph.D., Computing) -- Queen's University, 2016-09-30.

Relevance:

100.00%

Publisher:

Abstract:

Purpose: Custom cranio-orbital implants have been shown to achieve better performance than their hand-shaped counterparts by restoring skull anatomy more accurately and by reducing surgery time. Designing a custom implant involves reconstructing a model of the patient's skull from their computed tomography (CT) scan. The healthy side of the skull model, contralateral to the damaged region, can then be used to design the implant plan. Designing implants for areas of thin bone, such as the orbits, is challenging due to the poor CT resolution of such bone structures; this makes preoperative design time-intensive, since thin bone structures in the CT data must be manually segmented. The objective of this thesis was to research methods to accurately and efficiently design cranio-orbital implant plans, with a focus on the orbits, and to develop software that integrates these methods. Methods: The software consists of modules that use image and surface restoration approaches to enhance both the quality of the CT data and the reconstructed model. It enables users to input CT data and use tools to output a skull model with restored anatomy, which can then be used to design the implant plan. The software was built on 3D Slicer, an open-source medical visualization platform, and was tested on CT data from thirteen patients. Results: The average time to create a skull model with restored anatomy using our software was 0.33 ± 0.04 hours (SD). In comparison, the manual segmentation method took between 3 and 6 hours. To assess the structural accuracy of the reconstructed models, CT data from the thirteen patients were used to compare the models created with our software to those created with the manual method. When the skull models were registered together, the difference between each pair of skulls was 0.4 ± 0.16 mm (SD). Conclusions: We have developed software to design custom cranio-orbital implant plans, with a focus on thin bone structures. The method decreases design time and is of similar accuracy to the manual method.
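A conceptual sketch of the contralateral-template idea mentioned above: mirroring the healthy half of a skull surface across the midsagittal plane so it can guide the implant plan, plus a crude point-to-point deviation metric in the spirit of the model-to-model comparison reported. This is plain Python/NumPy for illustration, not the thesis's 3D Slicer modules; the alignment assumption and the random point cloud are invented.

```python
import numpy as np

def mirror_healthy_side(vertices: np.ndarray) -> np.ndarray:
    """vertices: (N, 3) surface points in mm, assumed aligned so that x = 0 is
    the midsagittal plane and x > 0 is the intact (healthy) side."""
    healthy = vertices[vertices[:, 0] > 0.0]          # keep the intact side
    return healthy * np.array([-1.0, 1.0, 1.0])       # reflect across x = 0

def mean_surface_deviation(a: np.ndarray, b: np.ndarray) -> float:
    """Crude nearest-point distance between two point sets, in mm."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

rng = np.random.default_rng(0)
skull = rng.normal(scale=40.0, size=(500, 3))         # stand-in point cloud
template = mirror_healthy_side(skull)                 # mirrored healthy side
print(template.shape, round(mean_surface_deviation(template, skull), 2))
```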