950 results for Online services using open-source NLP tools


Relevance:

100.00%

Publisher:

Abstract:

The air-sea flux of greenhouse gases (e.g. carbon dioxide, CO2) is a critical part of the climate system and a major factor in the biogeochemical development of the oceans. More accurate and higher resolution calculations of these gas fluxes are required if we are to fully understand and predict our future climate. Satellite Earth observation is able to provide large spatial scale datasets that can be used to study gas fluxes. However, the large storage requirements needed to host such data can restrict their use by the scientific community. Fortunately, the development of cloud computing can provide a solution. Here we describe an open source air-sea CO2 flux processing toolbox called the ‘FluxEngine’, designed for use on a cloud-computing infrastructure. The toolbox allows users to easily generate global and regional air-sea CO2 flux data from model, in situ and Earth observation data, and its air-sea gas flux calculation is user configurable. Its current installation on the Nephalae cloud allows users to easily exploit more than 8 terabytes of climate-quality Earth observation data for the derivation of gas fluxes. The resultant NetCDF output files contain more than 20 data layers covering the various stages of the flux calculation, along with process indicator layers to aid interpretation of the data. This paper describes the toolbox design, verifies the air-sea CO2 flux calculations, demonstrates the use of the tools for studying global and shelf-sea air-sea fluxes, and outlines future developments.
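At the heart of any such toolbox is the bulk air-sea gas flux calculation, F = k·α·(pCO2,water − pCO2,air), where k is the gas transfer velocity and α the gas solubility. The sketch below illustrates that bulk formula only; it is not FluxEngine's actual, user-configurable implementation, and all input values are placeholders of plausible magnitude:

```python
# Bulk air-sea CO2 flux: F = k * alpha * (pCO2_water - pCO2_air).
# Simplified illustration; FluxEngine's configurable calculation and its
# >20 NetCDF output layers are not reproduced here. Values are placeholders.

def bulk_co2_flux(k_cm_per_hr, solubility_mol_m3_uatm, pco2_water, pco2_air):
    """Return flux in mol m^-2 s^-1 (positive = out of the ocean)."""
    k_m_per_s = k_cm_per_hr / 100.0 / 3600.0       # cm/hr -> m/s
    return k_m_per_s * solubility_mol_m3_uatm * (pco2_water - pco2_air)

# Placeholder inputs: transfer velocity, solubility, partial pressures (uatm)
flux = bulk_co2_flux(15.0, 3.3e-5, 380.0, 400.0)
print(f"{flux:.3e} mol m^-2 s^-1")  # negative sign: ocean takes up CO2
```

A negative result corresponds to net ocean uptake, as expected when the water-side partial pressure is below the atmospheric one.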

Relevance:

100.00%

Publisher:

Abstract:

Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques. Using models to replicate the behaviour of an actual system is called simulation. In this thesis, a software/data-structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to: • Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (section 5.3.1). • Verify that, on sufficiently large circuits, substantial gains can be made from multicore parallelism (section 5.3.2). • Show that a simulator using this approach outperforms an existing commercial simulator on a standard workstation (section 5.3.3). • Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (section 5.3.5). To evaluate ZSIM, two types of test circuits were used: 1. Circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators. 2. Circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits. The synthesizer allowed testing of a range of very large circuits, larger than those for which it was possible to obtain open source files.
The experimental results show that with SIMD acceleration and multicore parallelism, ZSIM achieved a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives simulation performance comparable to the IBM Blue Gene supercomputer at very much lower cost. The experimental results have shown that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, its automatic cache management handles the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself. In contrast, when targeting GPUs, explicit cache management in the program increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines, and the architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron. To conclude, the two main achievements are restated as follows. The primary achievement of this work was showing that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that went beyond the scale range previously publicly available, based on prior work showing that the synthesis technique is valid.
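The SIMD gains described above come from data parallelism across gate evaluations. As an illustrative sketch only (this is not ZSIM's lock-free data structure), bit-parallel logic simulation evaluates the same gate for many input vectors at once by packing one test vector per bit of a machine word:

```python
# Bit-parallel evaluation of a small gate-level circuit.
# Each integer packs up to 64 independent test vectors, one per bit, so a
# single bitwise operation simulates a gate for all of them in parallel.
# Illustrative only: ZSIM's real architecture is lock-free and SIMD-oriented.

MASK = (1 << 64) - 1  # work with 64-bit words

def nand(a, b):
    return ~(a & b) & MASK

def simulate(a, b):
    """Simulate a tiny circuit: XOR built from four NAND gates."""
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# Pack test vectors: bit i of `a` and `b` is the i-th test case.
a = 0b1100
b = 0b1010
print(bin(simulate(a, b)))  # → 0b110, the XOR truth table over the packed bits
```

The same idea, with SIMD gather instructions fetching gate inputs, underlies the kind of acceleration measured in the experiments.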

Relevance:

100.00%

Publisher:

Abstract:

Hypertension is a major risk factor for cardiovascular disease and mortality, and a growing global public health concern, with up to one-third of the world’s population affected. Despite the vast amount of evidence for the benefits of blood pressure (BP) lowering accumulated to date, elevated BP is still the leading risk factor for disease and disability worldwide. It is well established that hypertension and BP are common complex traits, where multiple genetic and environmental factors contribute to BP variation. Furthermore, family and twin studies confirm the genetic component of BP, with heritability estimates in the range of 30-50%. Contemporary genomic tools that enable the genotyping of millions of genetic variants across the human genome in an efficient, reliable, and cost-effective manner have transformed hypertension genetics research. This is accompanied by international consortia that have offered unprecedentedly large sample sizes for genome-wide association studies (GWASs). While GWASs for hypertension and BP have identified more than 60 loci, variants in these loci are associated with modest effects on BP and in aggregate explain less than 3% of the variance in BP. The aim of this thesis is to study the genetic and environmental factors that influence BP and hypertension traits in the Scottish population by performing several genetic epidemiological analyses. The first part of this thesis aims to study the burden of hypertension in the Scottish population, along with assessing the familial aggregation and heritability of BP and hypertension traits. The second part aims to validate the association of common SNPs reported in large GWASs and to estimate the variance explained by these variants. In this thesis, comprehensive genetic epidemiology analyses were performed on Generation Scotland: Scottish Family Health Study (GS:SFHS), one of the largest population-based family design studies.
The availability of clinical data, biological samples, self-reported information, and medical records for study participants allowed several assessments to be performed to evaluate factors that influence BP variation in the Scottish population. Of the 20,753 subjects genotyped in the study, a total of 18,470 individuals (grouped into 7,025 extended families) passed the stringent quality control (QC) criteria and were available for all subsequent analyses. Based on the BP-lowering treatment exposure sources, subjects were further classified into two groups: first, subjects with both a self-reported medication (SRM) history and electronic prescription records (EPRs; n = 12,347); second, all subjects with at least one medication history source (n = 18,470). In the first group, the analysis showed good concordance between SRMs and EPRs (kappa = 71%), indicating that SRMs can be used as a surrogate to assess exposure to BP-lowering medication in GS:SFHS participants. Although both sources suffer from some limitations, SRMs can be considered the best available source for estimating drug exposure history in those without EPRs. The prevalence of hypertension was 40.8%, with a higher prevalence in men (46.3%) than in women (35.8%). The prevalences of awareness, treatment and controlled hypertension, as defined by the study definition, were 25.3%, 31.2%, and 54.3%, respectively. These figures are lower than those reported in similar studies in other populations, with the exception of the prevalence of controlled hypertension, which compares favourably with other populations. Odds of hypertension were higher in men, obese or overweight individuals, people with a parental history of hypertension, and those living in the most deprived areas of Scotland.
On the other hand, deprivation was associated with higher odds of treatment, awareness and controlled hypertension, suggesting that people living in the most deprived areas may have been receiving better quality of care, or may have higher comorbidity levels requiring greater engagement with doctors. These findings highlight the need for further work to improve hypertension management in Scotland. The family design of GS:SFHS allowed family-based analyses to be performed to assess the familial aggregation and heritability of BP and hypertension traits. The familial correlations of BP traits ranged from 0.07 to 0.20 for parent-offspring pairs and from 0.18 to 0.34 for sibling pairs. A higher correlation of BP traits was observed among first-degree relatives than among other types of relative pairs. A variance-component model adjusted for sex, body mass index (BMI), age, and age-squared was used to estimate the heritability of BP traits, which ranged from 24% to 32%, with pulse pressure (PP) having the lowest estimates. The genetic correlations between BP traits showed a high correlation between systolic (SBP), diastolic (DBP) and mean arterial pressure (MAP) (rG: 81% to 94%), but lower correlations with PP (rG: 22% to 78%). The sibling recurrence risk ratios (λS) for hypertension and treatment were 1.60 and 2.04, respectively. These findings confirm the genetic component of BP traits in GS:SFHS and justify further work to investigate the genetic determinants of BP. Genetic variants reported in recent large GWASs of BP traits were selected for genotyping in GS:SFHS using a custom-designed TaqMan® OpenArray®. The genotyping plate included 44 single nucleotide polymorphisms (SNPs) previously reported to be associated with BP or hypertension at genome-wide significance. A linear mixed model adjusted for age, age-squared, sex, and BMI was used to test for association between the genetic variants and BP traits.
Of the 43 variants that passed QC, 11 showed a statistically significant association with at least one BP trait. The phenotypic variance explained by these variants was 1.4%, 1.5%, 1.6%, and 0.8% for SBP, DBP, MAP, and PP, respectively. A genetic risk score (GRS) constructed from the selected variants showed a positive association with BP level and hypertension prevalence, with an average effect of a one mmHg increase in BP for each 0.80-unit increase in the GRS across the different BP traits. The impact of BP-lowering medication on genetic association studies of BP traits is well established, the typical practice being to add a fixed value (i.e. 15/10 mmHg) to the measured BP values to adjust for treatment. Using the subset of participants with both treatment exposure sources (i.e. SRMs and EPRs), the influence of using either source to justify the addition of the fixed values on SNP association signals was analysed. BP phenotypes derived from EPRs were considered the true phenotypes, and those derived from SRMs were considered less accurate, with some phenotypic noise. Comparing SNP association signals between the four BP traits in the two models derived from the different adjustments showed that MAP was the least affected by the phenotypic noise. This was suggested by the identification of the same overlapping significant SNPs for the two models in the case of MAP, while the other BP traits showed some discrepancy between the two sources.
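A genetic risk score of the kind used above is typically a weighted sum of risk-allele counts. A minimal sketch follows; the effect sizes and SNP count below are made-up placeholders, not the values genotyped in the thesis:

```python
# Weighted genetic risk score (GRS): sum over SNPs of
# (number of risk alleles carried: 0, 1, or 2) x (per-allele effect size).
# Effect sizes here are illustrative placeholders, not the thesis's values.

def genetic_risk_score(allele_counts, effect_sizes):
    """allele_counts[i] in {0, 1, 2}; effect_sizes[i] in mmHg per allele."""
    if len(allele_counts) != len(effect_sizes):
        raise ValueError("one effect size per SNP is required")
    return sum(c * w for c, w in zip(allele_counts, effect_sizes))

# One individual genotyped at four hypothetical SNPs:
counts = [2, 1, 0, 1]
weights = [0.5, 0.4, 0.3, 0.2]   # placeholder per-allele effects (mmHg)
print(round(genetic_risk_score(counts, weights), 2))  # → 1.6
```

Regressing BP on such a score (in a mixed model, as in the thesis) is what yields the reported mmHg-per-unit effect.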

Relevance:

100.00%

Publisher:

Abstract:

Repeat photography is an efficient, effective and useful method to identify trends of change in landscapes, and has long been used to illustrate long-term landscape change. In the northeast of Portugal, landscape change is currently driven mostly by agricultural abandonment and by agriculture and energy policy. However, there is a need to monitor changes in the region using a multitemporal and multiscale approach. This project aimed to establish an online repository of oblique digital photography of the region, to be used to register the condition of the landscape as recorded in historical and contemporary photography over time, as well as to support qualitative and quantitative assessment of landscape change using repeat photography techniques and methods. It involved the development of a relational database and a series of web-based services written in the PHP (Hypertext Preprocessor) language, and the development of an interface, built with Joomla, for the uploading and downloading of pictures by users. The repository makes it possible to upload, store, search (by location, theme, or date), display, and download pictures for northeastern Portugal. The website is intended to help researchers quickly obtain, through the search engine developed, the photographs needed to apply repeat photography. It can be accessed at: http://esa.ipb.pt/digitalandscape/.

Relevance:

100.00%

Publisher:

Abstract:

Background: Understanding transcriptional regulation through genome-wide microarray studies can contribute to unravelling complex relationships between genes. Attempts to standardize the annotation of microarray data include the Minimum Information About a Microarray Experiment (MIAME) recommendations, the MAGE-ML format for data interchange, and the use of controlled vocabularies or ontologies. Existing software systems for microarray data analysis implement these standards only partially and are often hard to use and extend. Integration of genomic annotation data and other sources of external knowledge using open standards is therefore a key requirement for future integrated analysis systems. Results: The EMMA 2 software has been designed to resolve shortcomings with respect to full MAGE-ML and ontology support, and makes use of modern data integration techniques. We present a software system that features comprehensive data analysis functions for spotted arrays and for the most common synthesized oligo arrays, such as Agilent, Affymetrix and NimbleGen. The system is based on the full MAGE object model. Analysis functionality is based on R and Bioconductor packages and can make use of a compute cluster for distributed services. Conclusion: Our model-driven approach to automatically implementing a full MAGE object model provides high flexibility and compatibility. Data integration via SOAP-based web services is advantageous in a distributed client-server environment, as the collaborative analysis of microarray data is gaining more and more relevance in international research consortia. The adequacy of the EMMA 2 software design and implementation has been proven by its application in many distributed functional genomics projects. Its scalability makes the current architecture suited for extensions towards future transcriptomics methods based on high-throughput sequencing approaches, which have much higher computational requirements than microarrays.

Relevance:

100.00%

Publisher:

Abstract:

This work employed freely licensed hardware and software tools to set up a low-cost, easy-to-deploy cellular base station (BTS). Building on technical concepts that facilitate the installation of the OpenBTS system, and using the USRP N210 (Universal Software Radio Peripheral) hardware, a network analogous to the mobile telephony standard (GSM) was deployed. Mobile phones were used as SIP (Session Initiation Protocol) extensions from Asterisk, making it possible to place calls between terminals, send text messages (SMS), and place calls from an OpenBTS terminal to another mobile operator, among other services.

Relevance:

100.00%

Publisher:

Abstract:

Interest in the internationalization process of small and medium-sized enterprises (SMEs) is increasing as SMEs' contribution to GDP grows. The Internet offers an opportunity to provide a variety of services online and to reach market niches worldwide. The overlap between SME internationalization and online services is the main issue of this research. Most SMEs internationalize according to the intuitive decisions of the company's CEO and lose limited resources on unsuccessful attempts. The purpose of this research is to define effective approaches to online service internationalization and to the selection of the first international market. The research is a single holistic case study of a local massive open online course (MOOC) platform going global. It considers internationalization costs and the internationalization theories applicable to online services. The research includes a preliminary screening of markets and an in-depth analysis based on macro parameters of the market and specific characteristics of the customers, together with expert evaluation of the results. Specific issues such as the GILT (Globalization, Internationalization, Localization and Translation) approach and Internet-enabled internationalization are considered. The research results include recommendations on international market selection methodology for online services and on effective internationalization strategy development.

Relevance:

100.00%

Publisher:

Abstract:

This paper focuses on a variation of the Art Gallery problem that considers open edge guards and open mobile guards. A mobile guard can be placed on edges and diagonals of a polygon, and the ‘open’ prefix means that the endpoints of such an edge or diagonal are not taken into account for visibility purposes. This paper studies the number of guards that are sufficient, and sometimes necessary, to guard some classes of simple polygons, for both open edge guards and open mobile guards. A wide range of polygons is studied, including orthogonal polygons with or without holes, spirals, orthogonal spirals and monotone polygons. Moreover, the problem is also considered for planar triangulation graphs using open edge guards.

Relevance:

100.00%

Publisher:

Abstract:

Doctoral thesis, Universidade de Brasília, Faculdade de Educação, Programa de Pós-graduação em Educação, 2016.

Relevance:

100.00%

Publisher:

Abstract:

Phylogenetic inference consists in the search for an evolutionary tree that explains, as well as possible, the genealogical relationships of a set of species. Phylogenetic analysis has a large number of applications in areas such as biology, ecology, and paleontology. Several criteria have been defined in order to infer phylogenies, among which are maximum parsimony and maximum likelihood. The first tries to find the phylogenetic tree that minimizes the number of evolutionary steps needed to describe the evolutionary history among species, while the second tries to find the tree that has the highest probability of producing the observed data according to an evolutionary model. The search for a phylogenetic tree can be formulated as a multi-objective optimization problem, which aims to find trees that satisfy both the parsimony and the likelihood criteria simultaneously (and as far as possible). Because these criteria conflict, there will not be a single optimal solution (a single tree), but a set of compromise solutions, called the "Pareto optimal" set. To find these solutions, evolutionary algorithms are nowadays used with success. These algorithms are a family of inexact techniques inspired by the process of natural selection. They usually find high-quality solutions to difficult optimization problems. They work by manipulating a set of trial solutions (trees, in the case of phylogeny) using operators, some of which exchange information between solutions, simulating DNA crossover, while others apply random modifications, simulating mutation. The result of these algorithms is an approximation to the Pareto-optimal set, which can be shown in a graph so that the expert in the problem (the biologist, in the case of inference) can choose the compromise solution of greatest interest.
For multi-objective optimization applied to phylogenetic inference, there is an open-source software tool, called MO-Phylogenetics, designed to solve inference problems with both classic and state-of-the-art evolutionary algorithms. REFERENCES: [1] C.A. Coello Coello, G.B. Lamont, D.A. van Veldhuizen. Evolutionary Algorithms for Solving Multi-Objective Problems. Springer, August 2007. [2] C. Zambrano-Vega, A.J. Nebro, J.F. Aldana-Montes. MO-Phylogenetics: a phylogenetic inference software tool with multi-objective evolutionary metaheuristics. Methods in Ecology and Evolution. In press, February 2016.
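The notion of Pareto optimality used above can be made concrete with a small sketch: given candidate trees scored on two objectives to be minimized (a parsimony score and a negative log-likelihood), a tree is Pareto optimal if no other tree is at least as good on both objectives and strictly better on one. The scores below are hypothetical:

```python
# Extract the Pareto-optimal set from (parsimony, -log-likelihood) pairs,
# both to be minimized. All scores are hypothetical, for illustration only.

def dominates(a, b):
    """True if solution a is no worse than b everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(scores):
    return [s for s in scores
            if not any(dominates(other, s) for other in scores if other != s)]

trees = [(120, 45.0), (118, 47.5), (125, 44.0), (118, 46.0), (130, 50.0)]
print(sorted(pareto_front(trees)))
# → [(118, 46.0), (120, 45.0), (125, 44.0)]
```

The surviving pairs are exactly the compromise solutions a biologist would choose among; the dominated trees, such as (130, 50.0), are worse on both criteria than some other tree.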

Relevance:

100.00%

Publisher:

Abstract:

This work aims to develop an image analysis methodology based on image overlay that assists in identifying microstructural features of titanium surfaces that may be associated with their biological response. Titanium surfaces heat-treated in eight different ways were subjected to a cell culture test. A relationship was sought between the grain size, texture and grain shape of the (etched) titanium surfaces and the processes of cell proliferation and adhesion. Open-source software was used to count the cells adhered to the titanium surfaces. The juxtaposition of images taken before and after cell culture was achieved with the aid of micro-hardness indentations made on the surface of the samples. From the overlaid images, it is possible to study a possible relationship between cell growth and the microstructural characteristics of the titanium surface. The methodology proved efficient in describing a set of procedures that are useful in the analysis of titanium surfaces subjected to cell culture.
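Cell counting of the kind performed here, typically done with open-source tools such as ImageJ, amounts to thresholding an image and counting connected components of foreground pixels. A minimal sketch on a toy binary image, with no claim to match the software actually used in the study:

```python
# Count "cells" as 4-connected components of foreground pixels in a
# thresholded image. Toy stand-in for open-source counting tools (e.g. ImageJ).

def count_cells(binary):
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                count += 1                      # new component found
                stack = [(r, c)]                # flood-fill it
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and binary[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

image = [[1, 1, 0, 0],
         [0, 1, 0, 1],
         [0, 0, 0, 1],
         [1, 0, 0, 0]]
print(count_cells(image))  # → 3
```

Real cell images additionally need thresholding and size filtering before counting, but the component count is the quantity compared across the eight heat treatments.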

Relevance:

100.00%

Publisher:

Abstract:

Statistical approaches to study extreme events require, by definition, long time series of data. In many scientific disciplines, these series are often subject to variations at different temporal scales that affect the frequency and intensity of their extremes. Therefore, the assumption of stationarity is violated and alternative methods to conventional stationary extreme value analysis (EVA) must be adopted. Using the example of environmental variables subject to climate change, in this study we introduce the transformed-stationary (TS) methodology for non-stationary EVA. This approach consists of (i) transforming a non-stationary time series into a stationary one, to which the stationary EVA theory can be applied, and (ii) reverse transforming the result into a non-stationary extreme value distribution. As a transformation, we propose and discuss a simple time-varying normalization of the signal and show that it enables a comprehensive formulation of non-stationary generalized extreme value (GEV) and generalized Pareto distribution (GPD) models with a constant shape parameter. A validation of the methodology is carried out on time series of significant wave height, residual water level, and river discharge, which show varying degrees of long-term and seasonal variability. The results from the proposed approach are comparable with the results from (a) a stationary EVA on quasi-stationary slices of non-stationary series and (b) the established method for non-stationary EVA. However, the proposed technique comes with advantages in both cases. For example, in contrast to (a), the proposed technique uses the whole time horizon of the series for the estimation of the extremes, allowing for a more accurate estimation of large return levels. Furthermore, with respect to (b), it decouples the detection of non-stationary patterns from the fitting of the extreme value distribution. 
As a result, the steps of the analysis are simplified and intermediate diagnostics are possible. In particular, the transformation can be carried out by means of simple statistical techniques such as low-pass filters based on the running mean and the standard deviation, and the fitting procedure is a stationary one with a few degrees of freedom and is easy to implement and control. An open-source MATLAB toolbox has been developed to cover this methodology, which is available at https://github.com/menta78/tsEva/ (Mentaschi et al., 2016).
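The time-varying normalization at the heart of the TS approach can be sketched as follows (in Python rather than the toolbox's MATLAB, and with a plain moving window standing in for the paper's low-pass filters):

```python
# Transformed-stationary normalization: remove a slowly varying mean and
# standard deviation so that stationary EVA can be applied to the residual,
# then reverse the transform. The moving window is a simple stand-in for
# the low-pass filters described in the text.

import statistics

def running(stat, x, half_window):
    n = len(x)
    return [stat(x[max(0, i - half_window):min(n, i + half_window + 1)])
            for i in range(n)]

def transform_stationary(x, half_window):
    mu = running(statistics.fmean, x, half_window)
    sd = running(statistics.stdev, x, half_window)
    y = [(v - m) / s for v, m, s in zip(x, mu, sd)]   # ~stationary series
    return y, mu, sd

def inverse_transform(y, mu, sd):
    return [v * s + m for v, m, s in zip(y, mu, sd)]  # back to original scale

series = [1.0, 1.5, 2.0, 5.0, 2.5, 3.0, 3.5, 9.0, 4.0, 4.5]
y, mu, sd = transform_stationary(series, half_window=3)
restored = inverse_transform(y, mu, sd)
print(all(abs(a - b) < 1e-9 for a, b in zip(series, restored)))  # → True
```

The round trip illustrates step (ii) of the methodology: a non-stationary extreme value distribution is obtained by pushing the stationary fit back through the stored mean and standard deviation.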

Relevance:

100.00%

Publisher:

Abstract:

The analysis of the wind flow around buildings is of great interest from the point of view of wind energy assessment, pollutant dispersion control, natural ventilation, and pedestrian wind comfort and safety. Since LES turbulence models are computationally time-consuming when applied to real geometries, RANS models are still widely used. However, RANS models are very sensitive to the chosen turbulence parametrisation, and the results can vary according to the application. In this investigation, the simulation of the wind flow around an isolated building is performed using various types of RANS turbulence models in the open source code OpenFOAM, and the results are compared with benchmark experimental data. In order to confirm the numerical accuracy of the simulations, a grid dependency analysis is performed and the convergence index and rate are calculated. Hit rates are calculated for all the cases, the models that pass a validation criterion are analysed at different regions of the building roof, and the most accurate RANS models for modelling the flow at each region are identified. The characteristics of the wind flow at each region are also analysed from the point of view of wind energy generation, and the most adequate wind turbine model for wind energy exploitation at each region of the building roof is chosen.
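A hit rate of the kind used in such validations is commonly defined as the fraction of measurement points at which the simulated value falls within a relative or absolute tolerance of the experimental value. A generic sketch, whose tolerance values are placeholders rather than the study's actual validation criterion:

```python
# Hit rate: fraction of points where the simulation agrees with the
# experiment within a relative tolerance or a small absolute tolerance.
# Tolerance values below are placeholders, not those used in the study.

def hit_rate(simulated, observed, rel_tol=0.25, abs_tol=0.05):
    hits = sum(
        1 for s, o in zip(simulated, observed)
        if abs(s - o) <= rel_tol * abs(o) or abs(s - o) <= abs_tol
    )
    return hits / len(observed)

obs = [1.00, 0.80, 0.50, 0.20, 0.10]   # e.g. measured velocity ratios
sim = [0.90, 0.85, 0.70, 0.21, 0.30]   # corresponding RANS predictions
print(hit_rate(sim, obs))  # → 0.6
```

A model passes the validation criterion when its hit rate exceeds a chosen threshold, after which its predictions can be examined region by region.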

Relevance:

100.00%

Publisher:

Abstract:

In this report, we develop an intelligent adaptive neuro-fuzzy controller using adaptive neuro-fuzzy inference system (ANFIS) techniques. We begin with a standard proportional-derivative (PD) controller and use the PD controller's data to train the ANFIS system to develop a fuzzy controller. We then propose and validate a method to implement this control strategy on commercial off-the-shelf (COTS) hardware. An analysis is made of the choice of filters for attitude estimation. These choices are limited by the complexity of the filter and by the computing ability and memory constraints of the microcontroller. Simplified Kalman filters are found to be good at attitude estimation given the above constraints. Using model-based design techniques, the models are implemented on an embedded system. This enables the deployment of fuzzy controllers on enthusiast-grade controllers. We evaluate the feasibility of the proposed control strategy in a model-in-the-loop simulation. We then propose a rapid prototyping strategy, allowing us to deploy these control algorithms on a system consisting of a combination of an ARM-based microcontroller and two Arduino-based controllers. We then use the code generation capabilities within MATLAB/Simulink in combination with multiple open-source projects to deploy code to an ARM Cortex-M4 based controller board. We also evaluate this strategy on an ARM A8-based board and on a much less powerful Arduino-based flight controller. We conclude by demonstrating the feasibility of fuzzy controllers on COTS hardware; we also point out the limitations of the current hardware and make suggestions for hardware that we think would be better suited for memory-heavy controllers.
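The training pipeline starts from a conventional PD law, u = Kp·e + Kd·de/dt, whose logged input-output pairs then serve as training data for the ANFIS stage. A minimal sketch of that first stage (the gains, setpoint and sample data are illustrative, not the report's values):

```python
# PD controller: u = Kp * error + Kd * d(error)/dt.
# The logged (error, derror, control) tuples are the kind of data later
# used to train an ANFIS controller. Gains and setpoint are illustrative.

def pd_step(error, prev_error, dt, kp=2.0, kd=0.5):
    derror = (error - prev_error) / dt
    return kp * error + kd * derror

def log_training_data(setpoint, measurements, dt=0.01):
    data, prev_error = [], setpoint - measurements[0]
    for m in measurements[1:]:
        error = setpoint - m
        u = pd_step(error, prev_error, dt)
        data.append((error, (error - prev_error) / dt, u))
        prev_error = error
    return data

samples = log_training_data(setpoint=1.0, measurements=[0.0, 0.2, 0.5, 0.8, 0.95])
for error, derror, u in samples:
    print(f"e={error:+.2f}  de={derror:+.1f}  u={u:+.2f}")
```

ANFIS training then fits fuzzy membership functions and rule consequents so that the learned controller reproduces, and can adapt beyond, this (error, derror) → u mapping.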

Relevance:

100.00%

Publisher:

Abstract:

This panel presentation provided several use cases that detail the complexity of large-scale digital library system (DLS) migration from the perspective of three university libraries and a statewide academic library services consortium. Each described the methodologies developed at the beginning of their migration process, the unique challenges that arose along the way, how issues were managed, and the outcomes of their work. Florida Atlantic University, Florida International University, and the University of Central Florida are members of the state's academic library services consortium, the Florida Virtual Campus (FLVC). In 2011, the Digital Services Committee members began exploring alternatives to DigiTool, their shared FLVC hosted DLS. After completing a review of functional requirements and existing systems, the universities and FLVC began the implementation process of their chosen platforms. Migrations began in 2013 with limited sets of materials. As functionalities were enhanced to support additional categories of materials from the legacy system, migration paths were created for the remaining materials. Some of the challenges experienced with the institutional and statewide collaborative legacy collections were due to gradual changes in standards, technology, policies, and personnel. This was manifested in the quality of original digital files and metadata, as well as collection and record structures. Additionally, the complexities involved with multiple institutions collaborating and compromising throughout the migration process, as well as the move from a consortial support structure with a vendor solution to open source systems (both locally and consortially supported), presented their own sets of unique challenges. Following the presentation, the speakers discussed commonalities in their migration experience, including learning opportunities for future migrations.