929 results for location-dependent data query
Abstract:
The 5,280 km2 Sian Ka’an Biosphere Reserve includes pristine wetlands fed by ground water from the karst aquifer of the Yucatan Peninsula, Mexico. The inflow through underground karst structures is hard to observe, making it difficult to understand, quantify, and predict the wetland dynamics. Remotely sensed Synthetic Aperture Radar (SAR) amplitude and phase observations offer new opportunities to obtain information on hydrologic dynamics useful for wetland management. The backscatter amplitude of SAR data can be used to map flooding extent. Interferometric processing of the backscattered SAR phase data (InSAR) produces temporal phase changes that can be related to relative water level changes in vegetated wetlands. We used 56 RADARSAT-1 SAR acquisitions to calculate 38 interferograms and 13 flooding maps with 24-day and 48-day time intervals covering July 2006 to March 2008. Flooding extent varied between 1,067 km2 and 2,588 km2 during the study period, and the main water input was seen to take place in sloughs during October–December. We propose that the main water input areas are associated with water-filled faults that transport ground water from the catchment to the wetlands. InSAR and Landsat data revealed local-scale water divides and surface water flow directions within the wetlands.
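As a hedged illustration not taken from the abstract itself, the standard repeat-pass InSAR relation converting an interferometric phase change Δφ into a relative vertical water level change Δh, assuming a radar wavelength λ (about 5.6 cm for the C-band RADARSAT-1) and a local incidence angle θ, is approximately:

```latex
\Delta h \;\approx\; \frac{\lambda\,\Delta\phi}{4\pi \cos\theta}
```

The sign convention depends on the ordering of the two acquisitions in the interferogram.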
Abstract:
The deployment of wireless communications coupled with the popularity of portable devices has led to significant research in the area of mobile data caching. Prior research has focused on the development of solutions that allow applications to run in wireless environments using proxy-based techniques. Most of these approaches are semantic-based and do not provide adequate support for representing the context of a user (i.e., the interpreted human intention). Although the context may be treated implicitly, it is still crucial to data management. In order to address this challenge, this dissertation focuses on two characteristics: how to predict (i) the future location of the user and (ii) the locations in which a fetched data item remains a valid answer to the query. Using this approach, more complete information about the dynamics of an application environment is maintained. The contribution of this dissertation is a novel data caching mechanism for pervasive computing environments that can adapt dynamically to a mobile user's context. In this dissertation, we design and develop a conceptual model and context-aware protocols for wireless data caching management. Our replacement policy uses the validity of the data fetched from the server and the neighboring locations to decide which of the cache entries is least likely to be needed in the future, and is therefore a good candidate for eviction when cache space is needed. The context-aware prefetching algorithm exploits the query context to effectively guide the prefetching process. The query context is defined using a mobile user's movement pattern and requested information context. Numerical results and simulations show that the proposed prefetching and replacement policies significantly outperform conventional ones. Anticipated applications of these solutions include biomedical engineering, tele-health, medical information systems, and business.
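A minimal sketch of the idea behind such a replacement policy is shown below; all names and the circular "valid scope" model are hypothetical and only illustrate how a predicted future location could drive eviction decisions, not the dissertation's actual protocol.

```python
import math


class LocationAwareCache:
    """Sketch of a location-dependent cache replacement policy.

    Each cached answer carries a "valid scope": a circle (center, radius_km)
    within which the answer remains correct. On eviction, the entry whose
    valid scope is least likely to cover the user's predicted next location
    is dropped first.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # key -> (answer, center, radius_km)

    @staticmethod
    def _distance(a, b):
        # Planar distance for simplicity; a real system would use
        # great-circle distance on latitude/longitude.
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def _utility(self, entry, predicted_location):
        _, center, radius_km = entry
        # Positive if the predicted location falls inside the valid scope,
        # increasingly negative the farther outside it lies.
        return radius_km - self._distance(center, predicted_location)

    def put(self, key, answer, center, radius_km, predicted_location):
        if len(self.entries) >= self.capacity and key not in self.entries:
            # Evict the entry least useful at the predicted future location.
            victim = min(self.entries,
                         key=lambda k: self._utility(self.entries[k],
                                                     predicted_location))
            del self.entries[victim]
        self.entries[key] = (answer, center, radius_km)

    def get(self, key, current_location):
        entry = self.entries.get(key)
        if entry is None:
            return None
        answer, center, radius_km = entry
        # Only return the answer while the user is inside its valid scope.
        if self._distance(center, current_location) <= radius_km:
            return answer
        return None
```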
Abstract:
The exponential growth of studies on the biological response to ocean acidification over the last few decades has generated a large amount of data. To facilitate data comparison, a data compilation hosted at the data publisher PANGAEA was initiated in 2008 and is updated on a regular basis (doi:10.1594/PANGAEA.149999). By January 2015, a total of 581 data sets (over 4 000 000 data points) from 539 papers had been archived. Here we present the developments of this data compilation in the five years since its first description by Nisumaa et al. (2010). Most of the study sites from which data have been archived are still in the Northern Hemisphere, and the number of archived data sets from studies in the Southern Hemisphere and polar oceans is still relatively low. Data from 60 studies that investigated the response of a mix of organisms or natural communities were all added after 2010, indicating a welcome shift from the study of individual organisms to communities and ecosystems. The initial imbalance of considerably more data archived on calcification and primary production than on other processes has improved. There is also a clear tendency towards more data archived from multifactorial studies after 2010. For easier and more effective access to ocean acidification data, the ocean acidification community is strongly encouraged to contribute to the data archiving effort, to help develop standard vocabularies describing the variables, and to define best practices for archiving ocean acidification data.
Abstract:
In Germany, the upscaling algorithm is currently the standard approach for evaluating the PV power produced in a region. This method involves spatially interpolating the normalized power of a set of reference PV plants to estimate the power produced by another set of unknown plants. As little information on the performance of this method could be found in the literature, the first goal of this thesis is to analyse the uncertainty associated with this method. It was found that this method can lead to large errors when the set of reference plants has different characteristics or weather conditions than the set of unknown plants and when the set of reference plants is small. Based on these preliminary findings, an alternative method is proposed for calculating the aggregate power production of a set of PV plants. A probabilistic approach has been chosen, by which a power production is calculated at each PV plant from corresponding weather data. The probabilistic approach consists of evaluating the power for each frequently occurring value of the parameters and estimating the most probable value by averaging these power values weighted by their frequency of occurrence. The most frequent parameter sets (e.g. module azimuth and tilt angle) and their frequencies of occurrence have been assessed on the basis of a statistical analysis of the parameters of approx. 35 000 PV plants. It has been found that the plant parameters are statistically dependent on the size and location of the PV plants. Accordingly, separate statistical values have been assessed for 14 classes of nominal capacity and 95 regions in Germany (two-digit zip-code areas). The performance of the upscaling and probabilistic approaches has been compared on the basis of 15 min power measurements from 715 PV plants provided by the German distribution system operator LEW Verteilnetz. It was found that the error of the probabilistic method is smaller than that of the upscaling method when the number of reference plants is sufficiently large (>100 reference plants in the case study considered). When the number of reference plants is limited (<50 reference plants for the considered case study), the proposed approach provides a noticeable gain in accuracy with respect to the upscaling method.
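The frequency-weighted averaging step described above can be sketched as follows; the function and parameter names, and the toy PV model, are illustrative assumptions and not the thesis's actual performance model.

```python
import numpy as np


def expected_plant_power(irradiance_wm2, temperature_c, parameter_sets,
                         frequencies, pv_model):
    """Evaluate the power for each frequently occurring parameter set
    (e.g. module tilt and azimuth) and average the results weighted by
    how often that parameter set occurs among comparable plants.

    `pv_model` is a placeholder for any function mapping (weather,
    parameters) to power; the real study would use a full PV model.
    """
    frequencies = np.asarray(frequencies, dtype=float)
    weights = frequencies / frequencies.sum()   # normalise to probabilities
    powers = np.array([pv_model(irradiance_wm2, temperature_c, p)
                       for p in parameter_sets])
    return float(np.dot(weights, powers))       # frequency-weighted mean


# Toy usage with a crude placeholder model:
def toy_model(ghi, temp, params):
    tilt_factor = np.cos(np.radians(abs(params["tilt_deg"] - 35)))
    return ghi * 0.15 * params["area_m2"] * tilt_factor


params = [{"tilt_deg": 30, "area_m2": 40}, {"tilt_deg": 10, "area_m2": 40}]
freqs = [70, 30]   # occurrences per 100 plants of that size class and region
print(expected_plant_power(800, 25, params, freqs, toy_model))
```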
Abstract:
In the last thirty years, the emergence and progression of biologging technology has led to great advances in marine predator ecology. Large databases of location and dive observations from biologging devices have been compiled for an increasing number of diving predator species (such as pinnipeds, sea turtles, seabirds and cetaceans), enabling complex questions about animal activity budgets and habitat use to be addressed. Central to answering these questions is our ability to correctly identify and quantify the frequency of essential behaviours, such as foraging. Despite technological advances that have increased the quality and resolution of location and dive data, accurately interpreting behaviour from such data remains a challenge, and analytical methods are only beginning to unlock the full potential of existing datasets. This review evaluates both traditional and emerging methods and presents a starting platform of options for future studies of marine predator foraging ecology, particularly from location and two-dimensional (time-depth) dive data. We outline the different devices and data types available, discuss the limitations and advantages of commonly used analytical techniques, and highlight key areas for future research. We focus our review on pinnipeds - one of the most studied taxa of marine predators - but offer insights that will be applicable to other air-breathing marine predator tracking studies. We highlight that traditionally used methods for inferring foraging from location and dive data, such as first-passage time and dive shape analysis, have important caveats and limitations depending on the nature of the data and the research question. We suggest that more holistic statistical techniques, such as state-space models, which can synthesise multiple track, dive and environmental metrics whilst simultaneously accounting for measurement error, offer more robust alternatives. Finally, we identify a need for more research to elucidate the role of physical oceanography, device effects, study animal selection, and developmental stages in predator behaviour and data interpretation.
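For readers unfamiliar with first-passage time (FPT), a simplified sketch is given below: for each track point, FPT is the time the animal takes to leave a circle of a given radius centred on that point, summed over the forward and backward crossings. The code assumes planar coordinates and no interpolation of the exact crossing moment, so it is only an illustration of the concept, not a ready-to-use ecological tool.

```python
import math


def first_passage_time(track, radius):
    """Illustrative first-passage time along a movement track.

    `track` is a list of (t_seconds, x, y) fixes in planar coordinates.
    For each fix, search forward and backward for the first fix farther
    than `radius` away; the FPT is the sum of both elapsed times, or None
    if the circle is never left in one of the directions (track ends).
    """
    def dist(a, b):
        return math.hypot(a[1] - b[1], a[2] - b[2])

    fpt = []
    for i, origin in enumerate(track):
        t_forward = t_backward = None
        for j in range(i + 1, len(track)):          # forward crossing
            if dist(track[j], origin) > radius:
                t_forward = track[j][0] - origin[0]
                break
        for j in range(i - 1, -1, -1):              # backward crossing
            if dist(track[j], origin) > radius:
                t_backward = origin[0] - track[j][0]
                break
        if t_forward is None or t_backward is None:
            fpt.append(None)
        else:
            fpt.append(t_forward + t_backward)
    return fpt
```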
Abstract:
Understanding how aquatic species grow is fundamental in fisheries because stock assessment often relies on growth-dependent statistical models. Length-frequency-based methods become important when more applicable data for growth model estimation are either not available or very expensive. In this article, we develop a new framework for growth estimation from length-frequency data using a generalized von Bertalanffy growth model (VBGM) that allows time-dependent covariates to be incorporated. A finite mixture of normal distributions is used to model the length-frequency cohorts of each month, with the means constrained to follow a VBGM. The variances of the finite mixture components are constrained to be a function of mean length, reducing the number of parameters and allowing for an estimate of the variance at any length. To optimize the likelihood, we use a minorization–maximization (MM) algorithm with a Nelder–Mead sub-step. This work was motivated by the decline in catches of the blue swimmer crab (BSC) (Portunus armatus) off the east coast of Queensland, Australia. We test the method with a simulation study and then apply it to the BSC fishery data.
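As a hedged sketch of the model structure described above (the article's exact parameterisation may differ), the standard VBGM mean-length curve and a month-specific finite normal mixture, with component means constrained to the curve and variances a function of mean length, can be written as:

```latex
% Standard von Bertalanffy growth model (mean length at age t):
L(t) = L_\infty \left(1 - e^{-K (t - t_0)}\right)

% Finite normal mixture for the length frequencies of month m, with the
% component means constrained to lie on the growth curve and the variances
% modelled as a function of mean length:
f_m(x) = \sum_{c=1}^{C} \pi_{m,c}\,
         \mathcal{N}\!\left(x;\; \mu_{m,c},\; \sigma^2(\mu_{m,c})\right),
\qquad \mu_{m,c} = L(t_{m,c})
```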
Abstract:
Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation. Hence, a huge single graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web and the graph queries of other graph DBMSs can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models consist of a practically important subset of the SPARQL query language augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from the matched vertices' properties in each answer in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a great amount of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that our algorithms are far more efficient than popular triple stores.
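To make the ranked-answer idea concrete, the brute-force sketch below enumerates all label-preserving subgraph matches with networkx and keeps the k highest-scoring ones. It is only a stand-in for the pruning-based algorithms of the proposal: the scoring function, attribute names, and toy graph are assumptions for illustration.

```python
import heapq

import networkx as nx
from networkx.algorithms import isomorphism as iso


def top_k_matches(graph, pattern, score_fn, k=10):
    """Enumerate subgraphs of `graph` isomorphic to `pattern` (respecting
    edge labels), score each mapping with `score_fn`, and return the k
    highest-scoring answers. Brute force; real systems prune instead of
    materialising every answer."""
    edge_match = iso.categorical_edge_match("label", None)
    matcher = iso.GraphMatcher(graph, pattern, edge_match=edge_match)
    scored = ((score_fn(m), m) for m in matcher.subgraph_isomorphisms_iter())
    return heapq.nlargest(k, scored, key=lambda pair: pair[0])


# Toy usage: find 'knows' pairs and rank them by the matched vertices' ages.
G = nx.Graph()
G.add_edge("alice", "bob", label="knows")
G.add_edge("bob", "carol", label="knows")
nx.set_node_attributes(G, {"alice": 34, "bob": 29, "carol": 41}, "age")

P = nx.Graph()
P.add_edge("x", "y", label="knows")

print(top_k_matches(G, P, lambda m: sum(G.nodes[v]["age"] for v in m), k=2))
```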
Abstract:
In recent years, technological improvements have enabled internet users to retrieve and analyze data regarding Internet searches, and such data have been used in several fields of study. Some authors have used search engine query data to forecast economic variables, to detect areas of influenza activity, or to demonstrate that it is possible to capture some patterns in stock market indexes. In this paper, an investment strategy is presented using Google Trends' weekly query data for the constituents of major global stock market indexes. The results suggest that it is indeed possible to achieve higher Info Sharpe ratios, especially for the major European stock market indexes, than those provided by a buy-and-hold strategy for the period considered.
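For reference, a generic annualised Sharpe ratio computed from weekly strategy returns is sketched below; it is a rough stand-in for the "Info Sharpe" ratios reported in the paper, and the sample returns are purely illustrative.

```python
import numpy as np


def annualized_sharpe(weekly_returns, weekly_risk_free=0.0, periods_per_year=52):
    """Annualised Sharpe ratio from weekly returns: mean excess return over
    its standard deviation, scaled by sqrt(periods per year)."""
    excess = np.asarray(weekly_returns) - weekly_risk_free
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)


# Toy comparison of a query-driven strategy against buy-and-hold:
strategy = [0.004, -0.002, 0.006, 0.001, 0.003]
buy_hold = [0.002, -0.004, 0.005, 0.000, 0.001]
print(annualized_sharpe(strategy), annualized_sharpe(buy_hold))
```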
Abstract:
During the last semester of the Master's Degree in Artificial Intelligence, I carried out my internship at TXT e-Solution on the ADMITTED project. This paper describes the work done in those months. The thesis is divided into two parts, representing the two different tasks I was assigned during the course of my experience. The first part introduces the project and the work done on the admittedly library: maintaining the code base and writing the test suites. This work is more closely connected to the software engineering role, developing features, fixing bugs, and testing. The second part describes the experiments done on the anomaly detection task using a deep learning technique called an autoencoder; this task is, on the other hand, more closely connected to the data science role. The two tasks were not done simultaneously but were dealt with one after the other, which is why I preferred to divide them into two separate parts of this paper.
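A minimal autoencoder-based anomaly detection sketch in Keras is shown below; the layer sizes, training data, and the percentile threshold are illustrative assumptions and not the architecture or data used in the ADMITTED experiments.

```python
import numpy as np
import tensorflow as tf


def build_autoencoder(n_features, code_size=8):
    """Small dense autoencoder: compress the input to a bottleneck and
    reconstruct it; reconstruction error is later used as an anomaly score."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(code_size, activation="relu"),   # bottleneck
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(n_features, activation="linear"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model


# Train on (mostly) normal data, then flag samples whose reconstruction error
# exceeds a percentile-based threshold as anomalies.
x_train = np.random.rand(1000, 20).astype("float32")   # placeholder data
x_test = np.random.rand(100, 20).astype("float32")

ae = build_autoencoder(n_features=20)
ae.fit(x_train, x_train, epochs=10, batch_size=32, verbose=0)

errors = np.mean((ae.predict(x_test, verbose=0) - x_test) ** 2, axis=1)
threshold = np.percentile(errors, 95)
anomalies = np.where(errors > threshold)[0]
print(anomalies)
```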
Abstract:
The aim of this study was to evaluate the performance of the Centers for Dental Specialties (CDS) in the country and its associations with sociodemographic indicators of the municipalities, structural variables of the services, and the organization of primary health care in the years 2004-2009. The study used secondary data from procedures performed in the CDS for the specialties of periodontics, endodontics, surgery, and primary care. Bivariate analysis with the χ2 test was used to test the association between the dependent variable (performance of the CDS) and the independent variables. Then, Poisson regression analysis was performed. With regard to the overall achievement of targets, it was observed that the performance of the majority of CDS (69.25%) was considered poor/fair. The independent factors associated with poor/fair CDS performance were: municipalities belonging to the Northeast, South, and Southeast regions, with a lower Human Development Index (HDI), lower population density, and shorter time since implementation. HDI and population density are important for the performance of the CDS in Brazil. Similarly, the peculiarities of less-populated areas, as well as regional location and the time since CDS implementation, should be taken into account in the planning of these services.
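As a hedged illustration of the regression step described above, a generic Poisson regression with statsmodels is sketched below; the data frame, variable names, and values are hypothetical and only mirror the kinds of covariates discussed in the study.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical municipal-level data: the outcome is a count (e.g. number of
# targets met by a CDS) and the covariates mirror those discussed above.
df = pd.DataFrame({
    "targets_met":   [3, 1, 4, 0, 2, 5],
    "hdi":           [0.71, 0.65, 0.80, 0.62, 0.74, 0.83],
    "pop_density":   [120, 45, 300, 20, 95, 410],
    "years_running": [5, 2, 6, 1, 4, 7],
})

X = sm.add_constant(df[["hdi", "pop_density", "years_running"]])
model = sm.GLM(df["targets_met"], X, family=sm.families.Poisson()).fit()
print(model.summary())   # coefficients are log incidence-rate ratios
```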
Abstract:
To characterize the relationship of the recently described SCI1 (stigma/style cell cycle inhibitor 1) gene with the auxin pathway, we have taken advantage of the Arabidopsis model system and its available tools. First, we analyzed the At1g79200 T-DNA insertion mutants and constructed various transgenic plants. The loss- and gain-of-function plants displayed cell number alterations in the upper pistils that were controlled by the amino-terminal domain of the protein. These data also confirmed that this locus holds the functional homolog (AtSCI1) of the Nicotiana tabacum SCI1 gene. We then provided several lines of evidence that the auxin synthesis/signaling pathways are required for the proper downstream control of cell number by AtSCI1: (a) its expression is downregulated in the yuc2yuc6 and npy1 auxin-deficient mutants, (b) the triple (yuc2yuc6sci1) and double (npy1sci1) mutants mimicked the auxin-deficient phenotypes, with no synergistic interactions, and (c) the increased upper pistil phenotype in these last mutants, which is a consequence of an increased cell number, could be complemented by AtSCI1 overexpression. Taken together, our data strongly suggest that SCI1 is a component of the auxin signal transduction pathway that controls cell proliferation/differentiation in the stigma/style, representing a molecular effector of this hormone on pistil development.
Abstract:
The aim of this study was to investigate whether β-adrenoceptor (β-AR) overstimulation induced by in vivo treatment with isoproterenol (ISO) alters vascular reactivity and nitric oxide (NO) production and signaling in pulmonary arteries. Vehicle or ISO (0.3 mg kg(-1) day(-1)) was administered daily to male Wistar rats. After 7 days, the jugular vein was cannulated to assess right ventricular (RV) systolic pressure (SP) and end diastolic pressure (EDP). The extralobar pulmonary arteries were isolated to evaluate the relaxation responses, protein expression (Western blot), NO production (diaminofluorescein-2 fluorescence), and cyclic guanosine 3',5'-monophosphate (cGMP) levels (enzyme immunoassay kit). ISO treatment induced RV hypertrophy; however, no differences in RV-SP and EDP were observed. The pulmonary arteries from the ISO-treated group showed enhanced relaxation to acetylcholine that was abolished by the NO synthase (NOS) inhibitor N(ω)-nitro-l-arginine methyl ester (l-NAME); whereas relaxation elicited by sodium nitroprusside, ISO, metaproterenol, mirabegron, or KCl was not affected by ISO treatment. ISO-treated rats displayed enhanced endothelial NOS (eNOS) and vasodilator-stimulated phosphoprotein (VASP) expression in the pulmonary arteries, while phosphodiesterase-5 protein expression decreased. ISO treatment increased NO and cGMP levels and did not induce eNOS uncoupling. The present data indicate that β-AR overactivation enhances the endothelium-dependent relaxation of pulmonary arteries. This effect was linked to an increase in eNOS-derived NO production, cGMP formation and VASP content and to a decrease in phosphodiesterase-5 expression. Therefore, elevated NO bioactivity through cGMP/VASP signaling could represent a protective mechanism of β-AR overactivation on pulmonary circulation.
Abstract:
The caffeine solubility in supercritical CO2 was studied by assessing the effects of pressure and temperature on the extraction of green coffee oil (GCO). The Peng-Robinson¹ equation of state was used to correlate the solubility of caffeine with a thermodynamic model, and two mixing rules were evaluated: the classical mixing rule of van der Waals with two adjustable parameters (PR-VDW) and a density-dependent one proposed by Mohamed and Holder² with two (PR-MH, two parameters adjusted to the attractive term) and three (PR-MH3, two parameters adjusted to the attractive term and one to the repulsive term) adjustable parameters. The best results were obtained with the mixing rule of Mohamed and Holder² with three parameters.
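For context, the standard Peng-Robinson equation of state and the classical van der Waals mixing rules with two adjustable binary parameters (the PR-VDW variant mentioned above) are sketched below; the density-dependent Mohamed and Holder rule differs and is not reproduced here.

```latex
% Peng-Robinson equation of state:
P = \frac{RT}{v - b} - \frac{a(T)}{v(v + b) + b(v - b)}

% Classical van der Waals mixing rules with two adjustable binary parameters
% (k_{ij} on the attractive term, l_{ij} on the co-volume):
a = \sum_i \sum_j x_i x_j \sqrt{a_i a_j}\,(1 - k_{ij}), \qquad
b = \sum_i \sum_j x_i x_j \frac{b_i + b_j}{2}\,(1 - l_{ij})
```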