953 results for proposed solutions
Abstract:
With hundreds of millions of users reporting locations and embracing mobile technologies, Location Based Services (LBSs) are raising new challenges. In this dissertation, we address three emerging problems in location services, where geolocation data plays a central role. First, to handle the unprecedented growth of generated geolocation data, existing location services rely on geospatial database systems. However, their inability to leverage combined geographical and textual information in analytical queries (e.g., spatial similarity joins) remains an open problem. To address this, we introduce SpsJoin, a framework for computing spatial set-similarity joins. SpsJoin handles combined similarity queries that involve textual and spatial constraints simultaneously. LBSs use this system to tackle different types of problems, such as deduplication, geolocation enhancement and record linkage. We define the spatial set-similarity join problem in the general case and propose an algorithm for its efficient computation. Our solution utilizes parallel computing with MapReduce to handle scalability issues in large geospatial databases. Second, applications that use geolocation data are seldom concerned with ensuring the privacy of participating users. To motivate participation and address privacy concerns, we propose iSafe, a privacy-preserving algorithm for computing safety snapshots of co-located mobile devices as well as geosocial network users. iSafe combines geolocation data extracted from crime datasets and geosocial networks such as Yelp. In order to enhance iSafe's ability to compute safety recommendations, even when crime information is incomplete or sparse, we need to identify relationships between Yelp venues and crime indices at their locations. To achieve this, we use SpsJoin on two datasets (Yelp venues and geolocated businesses) to find venues that have not been reviewed and to further compute the crime indices of their locations. Our results show a statistically significant dependence between location crime indices and Yelp features. Third, review-centered LBSs (e.g., Yelp) are increasingly becoming targets of malicious campaigns that aim to bias the public image of represented businesses. Although Yelp actively attempts to detect and filter fraudulent reviews, our experiments showed that Yelp is still vulnerable. Fraudulent LBS information also impacts the ability of iSafe to provide correct safety values. We take steps toward addressing this problem by proposing SpiDeR, an algorithm that takes advantage of the richness of information available in Yelp to detect abnormal review patterns. We propose a fake venue detection solution that applies SpsJoin on Yelp and U.S. housing datasets. We validate the proposed solutions using ground truth data extracted from our experiments and reviews filtered by Yelp.
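For illustration only, here is a minimal sketch of the general idea behind a spatial set-similarity join in a map/reduce style, using grid-cell spatial blocking and Jaccard similarity on token sets; the cell size, threshold and records are assumptions for the example, and this is not the SpsJoin implementation itself.

```python
from collections import defaultdict

def grid_cell(lat, lon, cell_size=0.01):
    """Spatial blocking key: candidate pairs are only formed inside a cell."""
    return (round(lat / cell_size), round(lon / cell_size))

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def spatial_set_similarity_join(records, threshold=0.6):
    """records: iterable of (record_id, lat, lon, token_set).
    'Map' phase: assign each record to a grid cell.
    'Reduce' phase: compare pairs within a cell and keep similar ones."""
    buckets = defaultdict(list)
    for rid, lat, lon, tokens in records:
        buckets[grid_cell(lat, lon)].append((rid, tokens))
    matches = []
    for group in buckets.values():
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                score = jaccard(group[i][1], group[j][1])
                if score >= threshold:
                    matches.append((group[i][0], group[j][0], score))
    return matches

venues = [("a", 25.7617, -80.1918, {"joes", "stone", "crab"}),
          ("b", 25.7619, -80.1917, {"joes", "stone", "crab", "restaurant"}),
          ("c", 25.7907, -80.1300, {"beach", "bar"})]
print(spatial_set_similarity_join(venues))   # [('a', 'b', 0.75)]
```

In a MapReduce deployment, the bucketing loop would correspond to mappers emitting (cell, record) pairs and the pairwise comparison to the reducer for each cell.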
Abstract:
The increasing demand for Internet data traffic in wireless broadband access networks requires both the development of efficient, novel wireless broadband access technologies and the allocation of new spectrum bands for that purpose. The introduction of a large number of small cells in cellular networks, allied to the complementary adoption of Wireless Local Area Network (WLAN) technologies in unlicensed spectrum, is one of the most promising concepts to meet this demand. One alternative is the aggregation of Industrial, Scientific and Medical (ISM) unlicensed spectrum to licensed bands, using wireless networks defined by the Institute of Electrical and Electronics Engineers (IEEE) and the Third Generation Partnership Project (3GPP). While IEEE 802.11 (Wi-Fi) networks are aggregated to Long Term Evolution (LTE) small cells via LTE/WLAN Aggregation (LWA), in proposals such as Unlicensed LTE (LTE-U) and Licensed-Assisted Access (LAA) the LTE air interface itself is used for transmission in the unlicensed band. Wi-Fi technology is widespread and operates in the same 5 GHz ISM spectrum bands as the LTE proposals, which may degrade performance due to the coexistence of both technologies in the same spectrum bands. Besides, there is a need to improve Wi-Fi operation to support scenarios with a large number of neighboring Overlapping Basic Service Set (OBSS) networks, i.e., dense deployments with many Wi-Fi nodes. It is long known that overall Wi-Fi performance falls sharply as the number of Wi-Fi nodes sharing the channel increases, so mechanisms to increase its spectral efficiency are needed. This work is dedicated to the study of coexistence between different wireless broadband access systems operating in the same unlicensed spectrum bands, and of how to solve the coexistence problems via distributed coordination mechanisms. The problem of coexistence between different technologies (i.e., LTE and Wi-Fi) and the problem of coexistence between different networks of the same technology (i.e., multiple Wi-Fi OBSSs) are analyzed both qualitatively and quantitatively via system-level simulations, and the main issues to be faced are identified from these results. From that, distributed coordination mechanisms are proposed and evaluated via system-level simulations, both for the inter-technology and the intra-technology coexistence problem. Results indicate that the proposed solutions provide significant gains when compared to the situation without distributed coordination.
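As a toy illustration of why aggregate Wi-Fi efficiency degrades when many nodes contend for the same channel (a crude p-persistent contention model with an assumed access probability, not the system-level simulator used in this work):

```python
def contention_stats(n_nodes, p=0.05):
    """Crude p-persistent CSMA approximation with assumed access probability p:
    returns (collision probability seen by a transmitting node,
             probability that a given node transmits successfully in a slot)."""
    p_collision = 1 - (1 - p) ** (n_nodes - 1)
    p_success_per_node = p * (1 - p) ** (n_nodes - 1)
    return p_collision, p_success_per_node

for n in (2, 5, 10, 25, 50):
    coll, succ = contention_stats(n)
    print(f"{n:3d} nodes: collision prob. {coll:.2f}, per-node success prob. {succ:.4f}")
```

The per-node success probability falls and the collision probability rises quickly with the number of contending nodes, which is the effect that distributed coordination mechanisms aim to mitigate.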
Abstract:
The development of the Internet of Services will be hampered by heterogeneous Internet of Things infrastructures, with issues such as inconsistency in communicating with participating objects, connectivity between them, topology definition and data transfer, and access via cloud computing for data storage. Our proposed solutions apply to a random-topology scenario and allow establishing multi-operational sensor networks out of single networks, and/or single-service networks with the participation of multiple networks, thus allowing virtual links to be created and resources to be shared. The designed layers are context-aware, application-oriented, and capable of representing physical objects to a management system, along with discovery of services. The reliability issue is addressed by deploying the IETF-supported IEEE 802.15.4 network model for low-rate wireless personal area networks. The flow-sensor achieved better results than the typical sensor in terms of reachability, throughput, energy consumption and diversity gain, and performance can be further improved by allowing the maximum number of multicast groups.
Abstract:
This report covers an 8-month internship in a codfish-processing company, with the objective of improving its competitiveness. The analysis revealed informal, experience-based planning with associated costs, such as excessive or insufficient stocks, lack of medium/long-term planning and waiting times, among others, producing waste to be minimized or even corrected, as well as a set of problems in the production process that were duly identified. The proposed solutions included a strategic analysis using Porter's model and a SWOT analysis. Regarding the identified waste, the Lean philosophy, specifically 5S, was used to suggest and implement a broad set of improvements.
Abstract:
This work was carried out at Renault CACIA and is based on the implementation of a supply flow with 4 hours of autonomy for all assembly lines in the mechanical-components manufacturing department. These lines must, however, be able to store that supply, so structures to support it have to be implemented. With the goals of eliminating the excess stock on the oil-pump assembly line, the most critical in the plant, eliminating the activities that add no value to the final product, organizing the available space and improving ergonomic conditions, solutions are proposed that will be an asset to manufacturing companies. During the work, an in-depth study of the assembly line and of the problems in the supply process was carried out and, subsequently, the number of component packages required for the intended autonomy was determined. The 3D CAD tool Solidworks® was used to design the structures, and the Arena® simulation software was used to test the operation of the assembly line with the supply structures in place. Improvements were achieved through the implementation of the suggested solutions: the assembly line became more organized and tidy, and overall stock was reduced by about 86.96%. Associated with this stock, there were activities performed by the assembly operator that added no value to the final product, and a production increase on the order of 1% was obtained.
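As an illustration of the package-count calculation mentioned above, a minimal sketch follows; the consumption rate and package size are hypothetical figures, not values from this work.

```python
import math

def packages_for_autonomy(parts_per_hour, parts_per_package, autonomy_hours=4):
    """Number of packages the line-side structure must hold to cover the
    requested autonomy, rounded up to whole packages."""
    return math.ceil(parts_per_hour * autonomy_hours / parts_per_package)

# Hypothetical component consumed at 120 parts/hour, delivered in packages
# of 50 parts: 4 hours of autonomy require 10 packages at the line.
print(packages_for_autonomy(parts_per_hour=120, parts_per_package=50))  # -> 10
```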
Abstract:
Images acquired from unmanned aerial vehicles (UAVs) can provide data with unprecedented spatial and temporal resolution for three-dimensional (3D) modeling. Solutions developed for this purpose mainly operate based on photogrammetry concepts, namely UAV-Photogrammetry Systems (UAV-PS). Such systems are used in applications where both geospatial and visual information of the environment is required. These applications include, but are not limited to, natural resource management such as precision agriculture, military and police-related services such as traffic-law enforcement, precision engineering such as infrastructure inspection, and health services such as epidemic emergency management. UAV-photogrammetry systems can be differentiated based on their spatial characteristics in terms of accuracy and resolution. That is, some applications, such as precision engineering, require high-resolution and high-accuracy information of the environment (e.g., 3D modeling with less than one centimeter accuracy and resolution). In other applications, lower levels of accuracy might be sufficient (e.g., wildlife management needing a few decimeters of resolution). However, even in those applications, the specific characteristics of UAV-PSs should be well considered in the steps of both system development and application in order to yield satisfying results. In this regard, this thesis presents a comprehensive review of the applications of unmanned aerial imagery, where the objective was to determine the challenges that remote-sensing applications of UAV systems currently face. This review also allowed recognizing the specific characteristics and requirements of UAV-PSs, which are mostly ignored or not thoroughly assessed in recent studies. Accordingly, the focus of the first part of this thesis is on exploring the methodological and experimental aspects of implementing a UAV-PS. The developed system was extensively evaluated for precise modeling of an open-pit gravel mine and for performing volumetric-change measurements. This application was selected for two main reasons. Firstly, this case study provided a challenging environment for 3D modeling, in terms of scale changes, terrain relief variations as well as structure and texture diversities. Secondly, open-pit-mine monitoring demands high levels of accuracy, which justifies our efforts to improve the developed UAV-PS to its maximum capacities. The hardware of the system consisted of an electric-powered helicopter, a high-resolution digital camera, and an inertial navigation system. The software of the system included in-house programs specifically designed for camera calibration, platform calibration, system integration, onboard data acquisition, flight planning and ground control point (GCP) detection. The detailed features of the system are discussed in the thesis, and solutions are proposed in order to enhance the system and its photogrammetric outputs. The accuracy of the results was evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points. Additionally, the effects of imaging configuration and network stability on modeling accuracy were assessed. The second part of this thesis concentrates on improving the techniques of sparse and dense reconstruction.
The proposed solutions are alternatives to traditional aerial photogrammetry techniques, properly adapted to the specific characteristics of unmanned, low-altitude imagery. Firstly, a method was developed for robust sparse matching and epipolar-geometry estimation. The main achievement of this method was its capacity to handle a very high percentage of outliers (errors among corresponding points) with remarkable computational efficiency compared to state-of-the-art techniques. Secondly, a block bundle adjustment (BBA) strategy was proposed based on the integration of intrinsic camera calibration parameters as pseudo-observations into a Gauss-Helmert model. The principal advantage of this strategy was controlling the adverse effect of unstable imaging networks and noisy image observations on the accuracy of self-calibration. A sparse implementation of this strategy was also developed, which allowed its application to data sets containing large numbers of tie points. Finally, the concepts of intrinsic curves were revisited for dense stereo matching. The proposed technique could achieve a high level of accuracy and efficiency by searching only through a small fraction of the whole disparity search space, as well as by internally handling occlusions and matching ambiguities. These photogrammetric solutions were extensively tested using synthetic data, close-range images and the images acquired from the gravel-pit mine. Achieving an absolute 3D mapping accuracy of 11±7 mm illustrates the success of this system for high-precision modeling of the environment.
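As a rough illustration of robust epipolar-geometry estimation in the presence of many outliers (a generic RANSAC-based fundamental-matrix fit on synthetic correspondences, not the estimator developed in this thesis), one could write:

```python
import numpy as np
import cv2  # opencv-python

rng = np.random.default_rng(0)

# Synthetic correspondences: inliers related by a pure horizontal shift,
# plus an equal number of gross outliers (purely illustrative data).
n = 200
pts1 = rng.uniform(0, 1000, size=(n, 2)).astype(np.float32)
pts2 = pts1 + np.float32([30.0, 0.0])
out1 = rng.uniform(0, 1000, size=(n, 2)).astype(np.float32)
out2 = rng.uniform(0, 1000, size=(n, 2)).astype(np.float32)
p1, p2 = np.vstack([pts1, out1]), np.vstack([pts2, out2])

# RANSAC-based estimation of the fundamental matrix and an inlier mask.
F, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC,
                                 ransacReprojThreshold=1.0, confidence=0.999)
if F is not None:
    print("estimated F:\n", F)
    print("fraction of points kept as inliers:", float(mask.mean()))
```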
Abstract:
The last decades have been characterized by a continuous adoption of IT solutions in the healthcare sector, which has resulted in the proliferation of tremendous amounts of data over heterogeneous systems. Distinct data types are currently generated, manipulated, and stored in the several institutions where patients are treated. Sharing these data and providing integrated access to this information will allow extracting relevant knowledge that can lead to better diagnostics and treatments. This thesis proposes new integration models for gathering information and extracting knowledge from multiple and heterogeneous biomedical sources. The complexity of the scenario led us to split the integration problem according to the data type and to the usage specificity. The first contribution is a cloud-based architecture for exchanging medical imaging services. It offers a simplified registration mechanism for providers and services, promotes remote data access, and facilitates the integration of distributed data sources. Moreover, it is compliant with international standards, ensuring the platform's interoperability with current medical imaging devices. The second proposal is a sensor-based architecture for the integration of electronic health records. It follows a federated integration model and aims to provide a scalable solution to search and retrieve data from multiple information systems. The last contribution is an open architecture for gathering patient-level data from dispersed and heterogeneous databases. All the proposed solutions were deployed and validated in real-world use cases.
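As a rough sketch of the federated search-and-retrieve pattern described above (the sources, adapters and record format here are hypothetical, not the interfaces defined in the thesis), each source is queried concurrently and the results are merged per source for a given patient identifier:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical in-memory "sources"; in a federated deployment each entry would
# be a remote information system reached through its own adapter (REST, DICOM, HL7, ...).
SOURCES = {
    "hospital_a": [{"patient_id": "p1", "obs": "hemoglobin 13.2 g/dL"}],
    "clinic_b":   [{"patient_id": "p1", "obs": "blood pressure 120/80"},
                   {"patient_id": "p2", "obs": "allergy: penicillin"}],
}

def query_source(name, patient_id):
    """Stand-in for a per-source adapter returning this source's records."""
    return [r for r in SOURCES[name] if r["patient_id"] == patient_id]

def federated_query(patient_id):
    """Fan out to every source concurrently, then collect results per source."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(query_source, name, patient_id)
                   for name in SOURCES}
    return {name: fut.result() for name, fut in futures.items()}

print(federated_query("p1"))
```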
Abstract:
This paper examines, through case studies, the organization of the production process of architectural projects in architecture offices in the city of Natal, specifically in relation to building projects. The specifics of the design process in architecture, and the production of the project in a professional field in Natal, are studied in light of theories of design and its production process. The survey, in its different phases, was conducted between March 2010 and September 2012 and aimed to identify, understand, and analyze comparatively, by mapping the design process, the organization of the production of building projects in two offices in Natal, also examining the relationships among their agents during the process. The project was based on desk and exploratory research, adopting, for both, data collection tools such as forms, questionnaires, and interviews. With the specific aim of mapping the design process, we adopted a technique that allows obtaining information directly from the employee agents involved in the production process. The technique consisted of registering information by completing daily, during or at the end of the workday, an individual virtual agenda in which all collaborating agents described the tasks performed. The data collected allowed for the identification of the organizational structure of each office, its hierarchy, the responsibilities of the agents, and the tasks performed by them during the two months of monitoring at each office. The research findings were based on analyses of the data collected in the two offices and on comparative studies between the results of these analyses. The end result is a diagnostic evaluation of the level of organization from this perspective, together with proposed solutions aimed at improving both the organization of the process and the relationships between the agents under the lens analyzed.
Abstract:
Ye’elimite-based cements have been studied since the 1970s in China due to their relevant characteristics from a hydraulic and environmental point of view. One of them is the reduced fuel consumption, related to the lower reaction temperature required for this kind of cement production as compared to Ordinary Portland Cement (OPC); another is the reduced requirement of carbonates as a typical raw material, compared to OPC, with the consequent reduction in CO2 releases (~22%) from combustion. Thus, Belite-Ye’elimite-Ferrite (BYF) cements have been developed as potential OPC substitutes. BYF cements contain belite as the main phase (>50 wt%) and ye’elimite as the second most abundant phase (~30 wt%). However, an important technological problem is associated with them, related to the low mechanical strengths developed at intermediate hydration ages (3, 7 and 28 days). One of the proposed solutions to this problem is the activation of BYF clinkers by preparing clinkers with a high percentage of coexisting alite and ye’elimite. These clinkers are known as Belite-Alite-Ye’elimite (BAY) cements. Their manufacture would produce ~15% less CO2 than OPC. Alite is the main component of OPC and is responsible for early mechanical strengths. The reaction of alite and ye’elimite with water will develop cements with high mechanical strengths at early ages, while belite will contribute at later curing times. Moreover, the high alkalinity of BAY cement pastes/mortars/concretes may facilitate the use of supplementary cementitious materials with pozzolanic activity, which also contributes to decreasing the CO2 footprint of these eco-cements. The main objective of this work was the design and optimization of all the parameters involved in the preparation of a BAY eco-cement that develops higher mechanical strengths than BYF cements. These parameters include the selection of the raw materials (lime, gypsum, kaolin and sand), milling, clinkering conditions (temperature and holding time), and clinker characterization. The addition of fly ash was also studied. All BAY clinkers and pastes (at different hydration ages) were mineralogically characterized through laboratory X-ray powder diffraction (LXRPD) in combination with the Rietveld methodology to obtain the full phase assemblage, including amorphous and crystalline non-quantified (ACn) contents. The pastes were also characterized through rheological measurements, thermal analyses (TA), scanning electron microscopy (SEM) and nuclear magnetic resonance (NMR). The compressive strengths were also measured at different hydration times and compared to those of BYF cements.
Abstract:
Image and video compression play a major role in the world today, allowing the storage and transmission of large multimedia content volumes. However, the processing of this information requires high computational resources, hence improving the computational performance of these compression algorithms is very important. The Multidimensional Multiscale Parser (MMP) is a pattern-matching-based compression algorithm for multimedia contents, namely images, achieving high compression ratios while maintaining good image quality (Rodrigues et al. [2008]). However, in comparison with other existing algorithms, this algorithm takes a long time to execute. Therefore, two parallel implementations for GPUs were proposed by Ribeiro [2016] and Silva [2015], in CUDA and OpenCL-GPU, respectively. In this dissertation, to complement the referred work, we propose two parallel versions that run the MMP algorithm on the CPU: one resorting to OpenMP and another that converts the existing OpenCL-GPU version into OpenCL-CPU. The proposed solutions are able to improve the computational performance of MMP by 3× and 2.7×, respectively. High Efficiency Video Coding (HEVC/H.265) is the most recent standard for image and video compression. Its impressive compression performance makes it a target for many adaptations, particularly for holoscopic image/video processing (or light field). Some of the proposed modifications to encode this new multimedia content are based on geometry-based disparity compensation (SS), developed by Conti et al. [2014], and a Geometric Transformations (GT) module, proposed by Monteiro et al. [2015]. These compression algorithms for holoscopic images based on HEVC implement a specific search for similar micro-images that is more efficient than the one performed by HEVC, but their implementation is considerably slower than HEVC. In order to enable better execution times, we chose to use the OpenCL API as the GPU-enabling language in order to increase the module's performance. With its most costly setting, we are able to reduce the GT module execution time from 6.9 days to less than 4 hours, effectively attaining a speedup of 45×.
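As a quick back-of-the-envelope check of the reported figures (not part of the dissertation itself), converting the 6.9-day baseline and the 45× speedup into the resulting execution time:

```python
baseline_hours = 6.9 * 24        # GT module baseline: 6.9 days
speedup = 45.0                   # reported speedup of the OpenCL version
accelerated_hours = baseline_hours / speedup
print(f"{baseline_hours:.1f} h / {speedup:.0f} = {accelerated_hours:.2f} h")
# -> 165.6 h / 45 = 3.68 h, consistent with "less than 4 hours".
```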
Abstract:
STUDY BY MASS SPECTROMETRY OF SOLUTIONS OF [HYDROXY(TOSYLOXY)IODO]BENZENE: PROPOSED DISPROPORTIONATION MECHANISMS. Solutions of [hydroxy(tosyloxy)iodo]benzene (HTIB, or Koser's reagent) in acetonitrile were analyzed using high-resolution electrospray ionization mass spectrometry (ESI-MS) and electrospray ionization tandem mass spectrometry (ESI-MS/MS) under different conditions. Several species were characterized in these analyses. Based on these data, mechanisms were proposed for the disproportionation of the iodine(III) compounds into iodine(V) and iodine(I) species.