929 results for "location-dependent data query"
Abstract:
A crucial step in lysosomal biogenesis is catalyzed by the “uncovering” enzyme (UCE), which removes a covering N-acetylglucosamine from the mannose 6-phosphate (Man-6-P) recognition marker on lysosomal hydrolases. This study shows that UCE resides in the trans-Golgi network (TGN) and cycles between the TGN and the plasma membrane. The cytosolic domain of UCE contains two potential endocytosis motifs: 488YHPL and the C-terminal 511NPFKD. YHPL is shown to be the more potent of the two in the retrieval of UCE from the plasma membrane. A green fluorescent protein fusion to the UCE transmembrane and cytosolic domains colocalizes with TGN46, as does endogenous UCE in HeLa cells, showing that the transmembrane and cytosolic domains determine intracellular location. These data imply that the Man-6-P recognition marker is formed in the TGN, the compartment where Man-6-P receptors bind cargo and are packaged into clathrin-coated vesicles.
Abstract:
The physical location of data in cloud storage is a problem that has gained considerable attention, not only from cloud providers but also from end users, who increasingly raise concerns about the privacy of their data. It is common practice for cloud service providers to replicate users' data across multiple physical locations. However, moving data between countries effectively transfers access rights along with it: when a cloud service provider stores users' data in a different country, the transferred data becomes subject to the data protection laws of the country where the servers are located. In this paper, we propose LocLess, a protocol based on a symmetric searchable encryption scheme for protecting users' data from unauthorized access even if the data is transferred to different locations. The idea behind LocLess is that "once data is placed on the cloud in an unencrypted form, or encrypted with a key that is known to the cloud service provider, data privacy becomes an illusion". Hence, the proposed solution is based solely on encrypting data with a key that is known only to the data owner.
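A minimal sketch of the core principle follows. This is not the LocLess protocol itself (whose symmetric searchable encryption details are in the paper); it only illustrates the underlying idea that data encrypted client-side with an owner-held key can be replicated to any jurisdiction without exposing plaintext. Key and payload names are illustrative.

```python
# Sketch: client-side encryption with an owner-held key, so replicated
# ciphertext reveals nothing to the storage provider. Illustrative only;
# LocLess additionally layers *searchable* symmetric encryption on this idea.
from cryptography.fernet import Fernet

# The owner generates and keeps the key; it is never sent to the provider.
owner_key = Fernet.generate_key()
cipher = Fernet(owner_key)

plaintext = b"sensitive user record"
ciphertext = cipher.encrypt(plaintext)   # upload *this* to the cloud

# Wherever the provider replicates the ciphertext, only the owner can read it.
assert cipher.decrypt(ciphertext) == plaintext
```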
Abstract:
Direct alcohol fuel cells (DAFCs) mostly use low molecular weight alcohols such as methanol and ethanol as fuels. However, short-chain alcohol molecules have a relatively high membrane crossover rate in DAFCs and a low energy density. Longer-chain alcohols such as butanol have a higher energy density as well as a lower membrane crossover rate than methanol and ethanol. Although a significant number of studies have been dedicated to low molecular weight alcohols in DAFCs, very few studies are available for longer-chain alcohols such as butanol. Significant developments in biobutanol production over the past decade, and its proposed application as an alternative fuel to gasoline, make butanol an interesting candidate fuel for fuel cells. In this study, the butanol isomers were compared on various Pt and bimetallic PtSn catalysts with respect to their electro-oxidation activities in acidic media. Clearly distinct behaviors were observed for the different butanol isomers using cyclic voltammetry (CV), indicating differences in activity and in the mechanism of oxidation. The voltammograms of n-butanol and iso-butanol showed similar characteristic features, indicating a similar reaction mechanism, whereas 2-butanol showed completely different features; for example, it did not show any indication of poisoning. tert-Butanol was found to be inactive for oxidation on Pt. In situ FTIR and CV analysis showed that OHads, which forms on Pt only at high potentials, was essential for the oxidation of the primary butanol isomers. In order to enhance water oxidation and produce OHads at lower potentials, Pt was modified with the oxophilic metal Sn, and the bimetallic PtSn was studied for the oxidation of the butanol isomers. A significant enhancement in the oxidation of the primary butanol isomers was observed upon addition of Sn to Pt, resulting in an oxidation peak at a potential ∼520 mV lower than that found on pure Pt. The higher activity of PtSn was attributed to the bifunctional mechanism on the PtSn catalyst. The positive influence of Sn was also confirmed for PtSn nanoparticle catalysts prepared by modifying a commercial Pt/C nanoparticle catalyst, with the highest activity observed for the PtSn (3:1) composition. The temperature-dependent data showed that the activation energy for the butanol oxidation reaction over PtSn/C is lower than that over Pt/C.
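The activation-energy comparison in the last sentence rests on an Arrhenius analysis of temperature-dependent currents. As a hedged illustration (the current values below are invented placeholders, not the study's data), the activation energy can be extracted from such measurements like this:

```python
# Sketch: Arrhenius estimate of activation energy from temperature-dependent
# oxidation currents. Currents below are invented placeholders, not the
# study's measurements.
import numpy as np

R = 8.314                                        # gas constant, J/(mol*K)
T = np.array([293.0, 303.0, 313.0, 323.0])       # temperatures, K
j = np.array([0.10, 0.18, 0.31, 0.52])           # peak currents, arbitrary units

# ln j = ln A - Ea/(R*T)  =>  the slope of ln j vs 1/T equals -Ea/R.
slope, _intercept = np.polyfit(1.0 / T, np.log(j), 1)
Ea = -slope * R
print(f"Estimated activation energy: {Ea / 1000:.1f} kJ/mol")
```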
Abstract:
This database contains 30,738 bibliographic records of the Costa Rican religious press, in CEPAL format, created with the Microisis software at the Escuela Ecuménica de la Religión of the Universidad Nacional. The paper describes the experience and the procedures used to normalize the Cabsys database: identifying and verifying the articles entered since 1986 directly from the original sources (the religious newspapers), completing titles, building an author authority file, checking the referential location data, normalizing the descriptors, and building a subject authority file. Illustrative tables on the articles included are presented, for example: distribution of works by author, distribution by genre, distribution by year of publication, most representative topics according to the assigned descriptors, and the correlation between these descriptors and the words in the article titles.
Abstract:
Several promising models have been developed for the digital capture of mobility data, with applications in urban, transport, and land-use planning. The objective of this work is therefore to develop a methodology that collects mobility information from which origin-destination (OD) and travel-time matrices can be generated, and that identifies points of interest (POIs), travel modes, and frequent routes, through the development and implementation of an application for Android mobile devices. Methodology: An application for Android devices was built, based on existing models. The application collected mobility data from the location sensors (GPS) built into the phones, migrated the data to a cloud database, and post-processed it with analysis tools such as KNIME, Python, and QuantumGIS. The application was tested by 68 student volunteers at the Universidad de Cuenca over 14 days in January 2016. Results: From the complete information of 44 participants, OD and travel-time matrices were obtained for different periods of the day, which made it possible to identify variations in the interaction between zones and variations in the number and duration of trips. Transport modes such as walking, cycling, and motorized transport were also recognized for a subsample (n=6). The POIs Home (91%), Work/Study (74%), and intermediate points (20% of all POIs) were detected, and atypical mobility behaviors were observed. Finally, the routes most frequented by users were compared with the computed theoretical optimal routes, and 63.6% of users' routes coincided with the latter. Conclusions: The proposed method is consistent with previous work, showing comparable confidence levels. The main challenge is the large-scale deployment of the model to collect data useful for mobility plans.
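As a hedged illustration of the OD-matrix step (the column names and zone labels are hypothetical, not the paper's schema), trips labeled with origin and destination zones can be aggregated into a matrix like this:

```python
# Sketch: building an origin-destination matrix from trip records.
# The DataFrame columns ("origin_zone", "dest_zone") are hypothetical.
import pandas as pd

trips = pd.DataFrame({
    "origin_zone": ["A", "A", "B", "C", "B", "A"],
    "dest_zone":   ["B", "C", "A", "A", "B", "B"],
})

# Count trips per (origin, destination) pair and pivot into a matrix.
od_matrix = (
    trips.groupby(["origin_zone", "dest_zone"]).size()
         .unstack(fill_value=0)
)
print(od_matrix)
```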
Abstract:
The Earth we know today was not always so. Over millions of years it has undergone significant changes brought about by numerous geological phenomena tending towards its equilibrium, some of internal origin, creating new geological formations, and others of external origin, smoothing the formations previously created. From the tectonic standpoint, Angola is located in a relatively stable area, which gives it a certain privilege compared with some Asian or American countries where earthquakes and volcanic eruptions occur quite often. However, the same cannot be said of external geodynamic phenomena such as ravines, which in recent years have taken shape in many provinces, especially due to anthropogenic activity, giving rise to geological hazards: an increased risk of damage to buildings and other infrastructure, direct or indirect losses in economic activities, and loss of human life. We understand that reducing these risks starts, in particular, with their identification, so that preventive measures can then be taken. This work is the result of research carried out by the authors through courses on soil erosion and on the stabilization of soils subject to erosion phenomena, conducted by the Engineering Laboratory of Angola (LEA). For this work, we drew on cartographic data, the literature, and interviews with some provincial representatives and local residents, as well as in loco observation of some affected areas. The results allow us to infer that the provinces most affected by the ravine phenomenon are located in the Central and Northern highlands, as well as in the eastern region and, more recently, in Cuando-Cubango province, without, however, ruling out other regions such as Luanda and Cabinda [1]. Regarding the causes, we can say that the ravines in Angola are primarily due to the combination of three natural factors: climate, topography, and soil type [2]. When anthropogenic activity is added, namely construction works, obstruction of drainage systems, mineral exploitation, agriculture, and fires, the phenomenon intensifies, often requiring immediate action. These interventions can take the form of structural or engineering measures and of stabilization measures on the degraded soil cover [3]. We present an example of stabilization through the planting of a local vegetation species, Pennisetum purpureum. It is expected that the results may contribute to a better understanding of the causes of the ravine phenomenon in Angola, and that the adopted stabilization method can be adapted to other affected provinces in order to prevent and contain the ravines.
Abstract:
We propose a geoadditive negative binomial model (Geo-NB-GAM) for regional count data that allows us to address simultaneously some important methodological issues, such as spatial clustering, nonlinearities, and overdispersion. This model is applied to the study of location determinants of inward greenfield investments that occurred during 2003–2007 in 249 European regions. After presenting the data set and showing the presence of overdispersion and spatial clustering, we review the theoretical framework that motivates the choice of the location determinants included in the empirical model, and we highlight some reasons why the relationship between some of the covariates and the dependent variable might be nonlinear. The subsequent section first describes the solutions proposed by previous literature to tackle spatial clustering, nonlinearities, and overdispersion, and then presents the Geo-NB-GAM. The empirical analysis shows the good performance of Geo-NB-GAM. Notably, the inclusion of a geoadditive component (a smooth spatial trend surface) permits us to control for spatial unobserved heterogeneity that induces spatial clustering. Allowing for nonlinearities reveals, in keeping with theoretical predictions, that the positive effect of agglomeration economies fades as the density of economic activities reaches some threshold value. However, no matter how dense the economic activity becomes, our results suggest that congestion costs never overcome positive agglomeration externalities.
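One plausible way to write the Geo-NB-GAM described above (notation ours, inferred from the abstract rather than quoted from the paper): counts are negative binomial, with smooth terms for covariates that may act nonlinearly and a smooth spatial trend surface over region coordinates.

```latex
% Sketch of a geoadditive negative binomial specification (notation ours).
\[
  y_i \sim \mathrm{NB}(\mu_i, \theta), \qquad
  \operatorname{Var}(y_i) = \mu_i + \mu_i^2/\theta, \qquad
  \log \mu_i = \mathbf{x}_i^{\top}\boldsymbol{\beta}
    + \sum_{j} f_j(z_{ij})
    + f_{\mathrm{spat}}(\mathrm{lon}_i, \mathrm{lat}_i),
\]
% The f_j are smooth functions capturing nonlinear covariate effects (e.g.,
% agglomeration density), and f_spat is the smooth spatial trend surface
% absorbing spatially clustered unobserved heterogeneity.
```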
Abstract:
Geographic Data Warehouses (GDW) are one of the main technologies used in decision-making processes and spatial analysis, and the literature proposes several conceptual and logical data models for GDW. However, little effort has been focused on studying how spatial data redundancy affects SOLAP (Spatial On-Line Analytical Processing) query performance over GDW. In this paper, we investigate this issue. Firstly, we compare redundant and non-redundant GDW schemas and conclude that redundancy is associated with high performance losses. We also analyze the issue of indexing, aiming at improving SOLAP query performance on a redundant GDW. Comparisons of the SB-index approach, the star-join aided by the R-tree, and the star-join aided by GiST indicate that the SB-index reduces elapsed query-processing time by 25% to 99% for SOLAP queries defined over the spatial predicates of intersection, enclosure, and containment and applied to roll-up and drill-down operations. We also investigate the impact of increasing data volume on performance. The increase did not impair the performance of the SB-index, which continued to greatly reduce elapsed query-processing time. Performance tests also show that the SB-index is far more compact than the star-join, requiring at most 0.20% of its volume. Moreover, we propose a specific enhancement of the SB-index to deal with spatial data redundancy. This enhancement improved performance by 80% to 91% for redundant GDW schemas.
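As a hedged, generic illustration (not the SB-index itself, whose structure is the paper's contribution), a SOLAP-style query combines a spatial predicate filter with a conventional roll-up aggregation; all table columns and geometries below are hypothetical:

```python
# Sketch: a SOLAP-style query = spatial predicate filter + roll-up aggregation.
# Generic illustration only; the SB-index internals are not reproduced here.
import pandas as pd
from shapely.geometry import Point, box

# Hypothetical fact table: sales with one spatial attribute per row.
facts = pd.DataFrame({
    "city":   ["a", "b", "c", "d"],
    "region": ["north", "north", "south", "south"],
    "sales":  [10.0, 20.0, 5.0, 7.5],
    "geom":   [Point(1, 1), Point(2, 2), Point(8, 8), Point(9, 1)],
})

query_window = box(0, 0, 5, 5)  # spatial predicate: intersection with a window

# Filter by the spatial predicate, then roll up from city to region level.
hits = facts[facts["geom"].apply(query_window.intersects)]
rollup = hits.groupby("region")["sales"].sum()
print(rollup)
```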
Abstract:
This paper presents an investigation into cloud-to-ground lightning activity over the continental territory of Portugal, based on data collected by the national Lightning Location System. The Lightning Location System in Portugal is first presented. Analyses of the geographical, seasonal, and polarity distributions of cloud-to-ground lightning activity, and of the cumulative probability of peak current, are then carried out. An overall ground flash density map is constructed from the database, which covers more than five years and almost four million records. This map is compared with the thunderstorm-days map produced by the Portuguese Institute of Meteorology and with the orographic map of Portugal. Finally, conclusions are drawn.
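A ground flash density map of the kind described is essentially a 2-D histogram of flash locations normalized by cell area and observation period. A minimal sketch follows; the coordinates, grid resolution, and record counts are invented placeholders, not the paper's data:

```python
# Sketch: ground flash density (flashes / km^2 / year) by gridding flash
# locations. Coordinates, grid, and period below are invented placeholders.
import numpy as np

years = 5.0
rng = np.random.default_rng(0)
lon = rng.uniform(-9.5, -6.2, size=10_000)   # fake flash longitudes
lat = rng.uniform(37.0, 42.2, size=10_000)   # fake flash latitudes

# 0.2-degree grid; a real map would compute each cell's area per latitude.
counts, lon_edges, lat_edges = np.histogram2d(
    lon, lat, bins=[np.arange(-9.5, -6.0, 0.2), np.arange(37.0, 42.4, 0.2)]
)
cell_area_km2 = (0.2 * 111.0) * (0.2 * 111.0 * np.cos(np.radians(39.5)))
density = counts / (cell_area_km2 * years)
print(density.max())
```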
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
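For reference, the linear mixing model that underlies both approaches can be written compactly as follows (standard notation, stated here for convenience rather than quoted from the chapter):

```latex
% Linear mixing model (standard notation; stated for convenience).
\[
  \mathbf{y} = \mathbf{M}\,\boldsymbol{\alpha} + \mathbf{n},
  \qquad \boldsymbol{\alpha} \succeq \mathbf{0},
  \qquad \mathbf{1}^{\top}\boldsymbol{\alpha} = 1,
\]
% y is the observed pixel spectrum, the columns of M are the endmember
% signatures, alpha collects the abundance fractions (nonnegative and
% summing to one), and n is additive noise.
```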
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, the source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets; in any case, these algorithms find the set of most nearly pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which the abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 gives a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents ICA results based on real data. Section 6.7 describes the new blind unmixing scheme and gives some illustrative examples. Section 6.8 concludes with some remarks.
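To make the constrained least-squares formulation mentioned earlier concrete, here is a hedged sketch of fully constrained unmixing of a single pixel; the endmember matrix and pixel are synthetic, and this is a generic solver rather than the chapter's algorithm:

```python
# Sketch: fully constrained least-squares unmixing of one pixel under the
# linear mixing model (abundances nonnegative and summing to one).
# The endmember matrix M and pixel y below are synthetic placeholders.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
M = rng.uniform(0.0, 1.0, size=(50, 3))      # 50 bands, 3 endmembers
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true + 0.01 * rng.standard_normal(50)

objective = lambda a: np.sum((y - M @ a) ** 2)
result = minimize(
    objective,
    x0=np.full(3, 1.0 / 3.0),
    bounds=[(0.0, 1.0)] * 3,                                       # positivity
    constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],  # full additivity
)
print(result.x)   # recovered abundances, close to a_true
```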
Abstract:
Dissertation submitted to obtain the Degree of Master in Informatics Engineering.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
ABSTRACT: BACKGROUND: Although smokers tend to have a lower body-mass index than non-smokers, smoking may favour abdominal body fat accumulation. To our knowledge, no population-based studies have assessed the relationship between smoking and body fat composition. We assessed the association between cigarette smoking and waist circumference, body fat, and body-mass index. METHODS: Height, weight, and waist circumference were measured among 6,123 Caucasians (ages 35-75) from a cross-sectional population-based study in Switzerland. Abdominal obesity was defined as waist circumference>=102 cm for men and >=88 cm for women. Body fat (percent total body weight) was measured by electrical bioimpedance. Age- and sex-specific body fat cut-offs were used to define excess body fat. Cigarettes smoked per day were assessed by self-administered questionnaire. Age-adjusted means and odds ratios were calculated using linear and logistic regression. RESULTS: Current smokers (29% of men and 24% of women) had lower mean waist circumference, body fat percentage, and body-mass index compared with non-smokers. Age-adjusted mean waist circumference and body fat increased with cigarettes smoked per day among smokers. The association between cigarettes smoked per day and body-mass index was non-significant. Compared with light smokers, the adjusted odds ratio (OR) for abdominal obesity in men was 1.28 (0.78-2.10) for moderate smokers and 1.94 (1.15-3.27) for heavy smokers (P=0.03 for trend), and 1.07 (0.72-1.58) and 2.15 (1.26-3.64) in female moderate and heavy smokers, respectively (P<0.01 for trend). Compared with light smokers, the OR for excess body fat in men was 1.05 (95% CI: 0.58-1.92) for moderate smokers and 1.15 (0.60-2.20) for heavy smokers (P=0.75 for trend) and 1.34 (0.89-2.00) and 2.11 (1.25-3.57), respectively in women (P=0.07 for trend). CONCLUSION: Among smokers, cigarettes smoked per day were positively associated with central fat accumulation, particularly in women.
Abstract:
PURPOSE: To describe the characteristics of reversible focal pleural thickenings (PTs) that mimic real plaques and thus initially suggest asbestos exposure or pleural metastasis; to propose an imaging strategy; and to propose an explanation of their mechanism of formation. PATIENTS AND METHODS: Retrospective review of data from 19 patients with PTs fitting the description of pleural plaques at chest computed tomography (CT) and presenting modification (clearance or appearance) of at least one PT at an additional chest examination in the prone position. RESULTS: A total of 152 PTs were recorded on the first chest CT examinations, with two to 19 pleural opacities per patient. All PTs had a posterior distribution in the lower lobes. On the additional acquisitions, 144 PTs disappeared: 17 patients showed complete regression of their PTs, and two patients showed persistence of eight PTs. CONCLUSION: An additional low-dose acquisition in the prone position should be performed in all patients presenting with a focal PT in a dependent and basal location. This makes it possible to exclude a pleural plaque in cases of asbestos exposure, and also a pleural metastasis in oncologic patients. These reversible dependent PTs may be related to physiological focal accumulation of lymphatic fluid in the subpleural area.
Abstract:
The classical binary classification problem is investigated when it is known in advance that the posterior probability function (or regression function) belongs to some class of functions. We introduce and analyze a method which effectively exploits this knowledge. The method is based on minimizing the empirical risk over a carefully selected "skeleton" of the class of regression functions. The skeleton is a covering of the class based on a data-dependent metric, especially fitted for classification. A new scale-sensitive dimension is introduced which is more useful for the studied classification problem than other, previously defined, dimension measures. This fact is demonstrated by performance bounds for the skeleton estimate in terms of the new dimension.
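Schematically (notation ours, not necessarily the paper's), the skeleton estimate is empirical risk minimization restricted to a finite covering of the class:

```latex
% Skeleton estimate, schematically (notation ours): given a sample
% (X_1, Y_1), ..., (X_n, Y_n) with labels in {-1, +1},
\[
  \hat{f}_n = \operatorname*{arg\,min}_{f \in S_\varepsilon}
  \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{\operatorname{sign} f(X_i) \neq Y_i\},
\]
% where S_eps is a finite eps-covering ("skeleton") of the class of regression
% functions in a data-dependent metric tailored to classification.
```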