916 results for sensor-Cloud system


Relevance:

30.00%

Publisher:

Abstract:

Nowadays, RGB-D sensors are the focus of a great deal of research in computer vision and robotics. Sensors of this kind, such as the Kinect, provide 3D data together with color information. However, their working range is limited to less than 10 meters, which makes them unsuitable for some robotics applications, such as outdoor mapping. In these environments 3D lasers, with working ranges of 20-80 meters, are a better fit, but 3D lasers do not usually provide color information. A simple 2D camera can be used to supply color information for the point cloud, but a calibration between the camera and the laser must first be performed. In this paper we present a portable calibration system for calibrating any conventional camera with a 3D laser in order to assign color information to the 3D points, so that the laser's precision and the camera's color information can be exploited simultaneously. Unlike other techniques that rely on a three-dimensional body of known dimensions during calibration, this system is highly portable because it uses small catadioptrics that can be placed in the environment in a simple manner. We use our calibration system in a 3D mapping pipeline, including Simultaneous Localization and Mapping (SLAM), to obtain a colored 3D map that can be used in different tasks. We show that an additional problem arises: the colors reported by a 2D camera change with the lighting conditions, so when we merge 3D point clouds from two different views, several points in a given neighborhood may carry different color information. A new method for color fusion is presented, yielding correctly colored maps. The system is tested by applying it to 3D reconstruction.
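
As an illustration of the colorization step described above: once calibration has produced the laser-to-camera extrinsics and the camera intrinsics, each laser point can be projected into the image to pick up a color. A minimal sketch in Python/NumPy, assuming a plain pinhole model; the names K, R, t are placeholders, and the paper's actual calibration pipeline and catadioptric detection are not shown:

```python
import numpy as np

def colorize_point_cloud(points, image, K, R, t):
    """Assign RGB colors to 3D laser points by projecting them
    into a calibrated camera image (pinhole model, illustrative).

    points: (N, 3) array of 3D points in the laser frame
    image:  (H, W, 3) RGB image
    K:      (3, 3) camera intrinsic matrix
    R, t:   laser-to-camera rotation (3, 3) and translation (3,)
    """
    cam = points @ R.T + t          # transform into the camera frame
    in_front = cam[:, 2] > 0        # keep only points in front of the camera
    cam = cam[in_front]
    uv = (K @ cam.T).T              # project with the pinhole model
    uv = uv[:, :2] / uv[:, 2:3]     # perspective division
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    idx = np.flatnonzero(in_front)[valid]
    colors[idx] = image[v[valid], u[valid]]   # sample pixel colors
    return colors
```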

Relevance:

30.00%

Publisher:

Abstract:

Paper submitted to the 43rd International Symposium on Robotics (ISR2012), Taipei, Taiwan, Aug. 29-31, 2012.

Relevance:

30.00%

Publisher:

Abstract:

This study analyzes the repeatability, reproducibility and accuracy of a new hyperspectral system based on a pushbroom sensor as a means of measuring the spectral features and color of materials and objects. The hyperspectral system consists of a CCD camera, a spectrograph and an objective lens; an additional linear translation stage allows mechanical scanning of the complete scene. A uniform overhead luminaire with a daylight configuration irradiates the scene using d:45 geometry. We followed the guidelines of ASTM E2214-08, Standard Practice for Specifying and Verifying the Performance of Color-Measuring Instruments, which defines the current standards and multidimensional procedures. The results are analyzed in depth and compared with those recently reported by other authors for spectrophotometers and multispectral systems. It can be concluded that hyperspectral systems are reliable and can be used in industry to perform spectral and color readings with high spatial resolution.
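
For context on how such repeatability figures are typically computed: ASTM E2214-style assessments rest on color differences between repeated measurements of the same sample. The sketch below uses the simple CIE76 ΔE*ab formula for brevity; the standard's multidimensional procedures (and the paper's own analysis) are more involved:

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)

def repeatability(measurements):
    """Mean ΔE*ab of repeated CIELAB measurements of one sample
    against their own average: a simple repeatability figure.

    measurements: (N, 3) array of L*, a*, b* readings
    """
    measurements = np.asarray(measurements, dtype=float)
    mean_lab = measurements.mean(axis=0)
    return delta_e_cie76(measurements, mean_lab).mean()
```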

Relevance:

30.00%

Publisher:

Abstract:

The current trend in the evolution of sensor systems seeks to provide more accuracy and resolution while decreasing size and power consumption. Field Programmable Gate Arrays (FPGAs) provide reprogrammable hardware that can be exploited to build reconfigurable sensor systems; this adaptation capability enables complex applications to be implemented through partial reconfiguration at very low power consumption. For highly demanding tasks, FPGAs have been favored for the efficiency afforded by their architectural flexibility (parallelism, on-chip memory, etc.), their reconfigurability and their strong performance in algorithm implementation. FPGAs have improved the performance of sensor systems and have triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable, lower-power sensors based on FPGAs is being developed in Spain. In this paper we review these developments, describing the FPGA technologies employed by the different research groups and providing an overview of future research in this field.

Relevance:

30.00%

Publisher:

Abstract:

In this work we present a multi-camera surveillance system based on self-organizing neural networks that represent events in video. The system processes several tasks in parallel on GPUs (graphics processing units). It addresses vision tasks at various levels, such as segmentation, representation or characterization, and motion analysis and monitoring. These features allow the system to build a robust representation of the environment and to interpret the behavior of mobile agents in the scene. The vision module must also be integrated into a global system that operates in a complex environment, receiving images from multiple acquisition devices at video rate. To offer relevant information to higher-level systems and to monitor and make decisions in real time, it must satisfy a set of requirements: time constraints, high availability, robustness, high processing speed and reconfigurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.
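
The abstract does not fix a specific network model, but the classic update rule behind self-organizing representations of this kind can be sketched briefly. A minimal, illustrative SOM step over tracked object positions; the names and parameters are placeholders, not the paper's implementation:

```python
import numpy as np

def som_update(weights, x, lr=0.1, sigma=1.0):
    """One self-organizing map update: pull the best-matching unit
    (and, more weakly, its grid neighbors) toward the input sample.

    weights: (rows, cols, d) SOM codebook, float array
    x:       (d,) input sample, e.g. a tracked object's 2D position
    """
    rows, cols, _ = weights.shape
    dist = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dist), dist.shape)   # best-matching unit
    ii, jj = np.indices((rows, cols))
    grid_d2 = (ii - bmu[0]) ** 2 + (jj - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2 * sigma ** 2))               # neighborhood kernel
    weights += lr * h[..., None] * (x - weights)          # move toward sample
    return weights
```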

Relevance:

30.00%

Publisher:

Abstract:

The use of 3D data in mobile robotics provides valuable information about the robot's environment, but the huge amount of 3D information is usually unmanageable given the robot's storage and computing capabilities. Compression is therefore necessary to store and manage this information while preserving as much of it as possible. In this paper we propose a lossy 3D compression system based on plane extraction, which represents the points of each planar region of the scene as a Delaunay triangulation plus a set of point/area information. The compression system can be tuned to achieve different compression or accuracy ratios. It also supports a color segmentation stage that preserves the original scene's color information and provides a realistic scene reconstruction. The design of the method allows fast scene reconstruction, useful for further visualization or processing tasks.
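
A rough sketch of the plane-extraction stage such a compressor depends on: RANSAC isolates a dominant plane whose inlier points can then be re-encoded compactly, e.g. as a 2D Delaunay triangulation within the plane (scipy.spatial.Delaunay would serve). This is an illustrative reconstruction, not the paper's exact algorithm:

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, rng=None):
    """Fit a dominant plane to a point cloud with RANSAC. The inliers
    can then be stored compactly (plane parameters plus a triangulated
    boundary) instead of as raw points.

    points: (N, 3) array; returns (normal, d, inlier_mask) for n.x + d = 0.
    """
    rng = np.random.default_rng(rng)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate sample, skip
            continue
        n /= norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol   # distance-to-plane test
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best
```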

Relevance:

30.00%

Publisher:

Abstract:

Today, with the constant rise of smart cities around the world, the use and deployment of information technologies in cities is increasing exponentially. The intensive use of Information Technology (IT) in these ecosystems facilitates and improves the quality of life of citizens, but these digital communities also include individuals whose health is affected, developing or aggravating conditions such as electromagnetic hypersensitivity. In this paper we present a monitoring, detection and prevention system to help this group: it reports the levels of electromagnetic radiation in given areas based on the information that the Smart City itself provides. This work also offers a platform for building predictive models that detect future states of risk for humans.
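
The abstract does not specify the system's data model, so purely as an illustration, a risk report over zone-level radiation readings might look like this; the exposure limit, units and names are hypothetical placeholders:

```python
from statistics import mean

# Hypothetical reference limit, not taken from the paper.
EXPOSURE_LIMIT_V_PER_M = 6.0

def zones_at_risk(readings_by_zone):
    """Flag city zones whose average E-field reading exceeds the limit.

    readings_by_zone: dict mapping zone name -> list of E-field
    readings in V/m collected from the Smart City's sensor network.
    Returns {zone: average reading} for the zones over the limit.
    """
    return {
        zone: mean(values)
        for zone, values in readings_by_zone.items()
        if mean(values) > EXPOSURE_LIMIT_V_PER_M
    }
```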

Relevance:

30.00%

Publisher:

Abstract:

The development of applications and services for mobile systems must cope with a wide range of devices with very heterogeneous capabilities whose response times are difficult to predict. The research described in this work addresses this issue by developing a computational model that formalizes the problem and defines adaptive computing methods. The proposal combines imprecise-computation strategies with cloud computing paradigms to provide flexible implementation frameworks for embedded and mobile devices. As a result, applying imprecise-computation scheduling to the embedded system's workload becomes the mechanism for moving computation to the cloud according to the priority and response time of the tasks to be executed, thereby meeting the desired productivity and quality of service. A technique for estimating network delays and scheduling tasks more accurately is illustrated in this paper, together with an application example in which the technique is exercised under heterogeneous workloads to check the validity of the proposed model.
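
A toy version of the offloading decision the abstract describes, under the imprecise-computation model in which each task has a mandatory part and an optional refinement, might look as follows. The policy and names are illustrative, not the paper's actual scheduling algorithm:

```python
from dataclasses import dataclass

@dataclass
class Task:
    """Imprecise-computation task: a mandatory part that must always
    run and an optional part that refines the result if time allows."""
    mandatory_s: float   # local execution time of the mandatory part
    optional_s: float    # local execution time of the optional part
    cloud_s: float       # execution time of the full task in the cloud
    deadline_s: float

def schedule(task, est_network_delay_s):
    """Offload when the cloud, including the estimated network round
    trip, still meets the deadline; otherwise run locally, dropping
    the optional part if the full local version would miss it."""
    cloud_total = task.cloud_s + est_network_delay_s
    if cloud_total <= task.deadline_s:
        return "cloud"
    if task.mandatory_s + task.optional_s <= task.deadline_s:
        return "local-full"
    return "local-mandatory-only"   # degrade gracefully (imprecise result)
```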

Relevance:

30.00%

Publisher:

Abstract:

Dissertation presented to the Instituto Politécnico de Castelo Branco in fulfillment of the requirements for the degree of Master in Software Development and Interactive Systems, carried out under the scientific supervision of Dr. Osvaldo Arede dos Santos, Adjunct Professor in the Informatics Technical-Scientific Unit of the Escola Superior de Tecnologia, Instituto Politécnico de Castelo Branco.

Relevance:

30.00%

Publisher:

Abstract:

With the development of embedded applications and driving-assistance systems, it becomes relevant to develop parallel mechanisms to check and diagnose these new systems. In this thesis we focus our research on one such mechanism, analytical redundancy, for fault diagnosis of an automotive suspension system. We consider a quarter-car passive suspension model and use a parameter-estimation method based on an ARX model to detect faults occurring in the damper and spring of the system. We then deploy a neural network classifier to isolate the faults and identify where each fault is happening, so that safety measures and redundancies can take effect to prevent failure of the system. It is shown that the ARX estimator can quickly detect faults online using vertical acceleration and displacement data from sensors that are common in today's vehicles. The clear divergence in the ARX response makes it easy to set a threshold that raises an alarm in the vehicle's intelligent system, and the neural classifier can quickly indicate where the fault occurred.
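
The ARX detection step can be sketched compactly: estimate the model parameters by least squares and raise an alarm when they drift from their nominal values. A minimal illustration, not the thesis's exact estimator:

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares estimate of ARX parameters for
    y(t) = -a1*y(t-1) - ... - a_na*y(t-na)
           + b1*u(t-1) + ... + b_nb*u(t-nb) + e(t).

    y: 1-D output array (e.g. vertical acceleration)
    u: 1-D input array (e.g. road displacement)
    Returns (a, b); a drift of these parameters away from nominal
    values can be thresholded to raise a fault alarm.
    """
    n = max(na, nb)
    rows = []
    for t in range(n, len(y)):
        rows.append(np.concatenate([-y[t - na:t][::-1],   # past outputs
                                    u[t - nb:t][::-1]]))  # past inputs
    phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(phi, y[n:], rcond=None)
    return theta[:na], theta[na:]
```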

Relevance:

30.00%

Publisher:

Abstract:

Otto-von-Guericke-Universität Magdeburg, Faculty of Computer Science, doctoral dissertation, 2016.

Relevance:

30.00%

Publisher:

Abstract:

Most global ocean models are based on the assumption of a "steady state" ocean. Here, we investigate the validity of this hypothesis for the anthropized Mediterranean Sea. To do so, we calculated the mixing coefficients of the water masses detected in this sea via an optimum multiparameter analysis referred to as the MIX approach, using data from the BOUM (2008) and MedSeA (2013) cruises. Comparing the mixing coefficients of each water mass between 2008 and 2013 indicates that some of their proportions have changed significantly. Surface water-mass proportions did not change significantly (Δ ≈ 0.05-0.1), while the intermediate and deep water-mass mixing coefficients of both the Eastern and Western basins were significantly modified (Δ ≈ 0.35). This study clearly shows that Mediterranean seawater is not in a "steady state".
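
For readers unfamiliar with optimum multiparameter (OMP) analysis: at each sample point it solves for nonnegative water-mass fractions that sum to one and best explain the observed properties. A bare-bones sketch; real OMP analyses also normalize and weight the properties, so this is illustrative only:

```python
import numpy as np
from scipy.optimize import nnls

def omp_fractions(source_props, observed, mass_weight=100.0):
    """OMP-style estimate of water-mass mixing fractions: nonnegative
    least squares on the source-water property matrix, with a heavily
    weighted extra row enforcing that the fractions sum to one.

    source_props: (n_properties, n_water_masses) matrix, e.g. rows for
                  temperature, salinity, oxygen of each source water type
    observed:     (n_properties,) properties measured at one sample point
    """
    n_masses = source_props.shape[1]
    A = np.vstack([source_props, mass_weight * np.ones(n_masses)])
    b = np.concatenate([observed, [mass_weight]])   # last row: sum == 1
    fractions, _residual = nnls(A, b)
    return fractions
```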

Relevance:

30.00%

Publisher:

Abstract:

Cybercrime and related malicious activity in our increasingly digital world have become more prevalent and sophisticated, evading traditional security mechanisms. Digital forensics has been proposed to help investigate, understand and eventually mitigate such attacks. The practice of digital forensics, however, is still fraught with challenges, the most prominent of which include the increasing amounts of data and the diversity of digital evidence sources appearing in investigations. Mobile devices and cloud infrastructures are an interesting specimen, as they inherently exhibit these challenging circumstances and are becoming more prevalent in digital investigations today. They also embody further characteristics: large volumes of data from multiple sources, dynamic sharing of resources, limited individual device capabilities and the presence of sensitive data. This combined set of circumstances makes digital investigations in mobile and cloud environments particularly challenging. Nor does it help that digital forensics today still involves manual, time-consuming tasks when identifying evidence, acquiring it and correlating multiple diverse sources of evidence in the analysis phase. Furthermore, industry-standard tools are largely evidence-oriented, have limited support for evidence integration and automate only certain precursory tasks, such as indexing and text searching. In this study, efficiency, in the form of reduced time and human labour, is sought in digital investigations in highly networked environments through the automation of certain activities in the digital forensic process. To this end, requirements are outlined and an architecture is designed for an automated system that performs digital forensics in highly networked mobile and cloud environments. Part of the remote evidence acquisition activity of this architecture is built and tested on several mobile devices in terms of speed and reliability. A method for integrating multiple diverse evidence sources in an automated manner, supporting correlation and automated reasoning, is developed and tested. Finally, the proposed architecture is reviewed and enhancements are proposed to further automate it by introducing decentralization, particularly within the storage and processing functionality. This decentralization also improves machine-to-machine communication, supporting several of the digital investigation processes enabled by the architecture by harnessing the properties of various peer-to-peer overlays. Remote evidence acquisition improves the efficiency (time and effort involved) of digital investigations by removing the need for proximity to the evidence. Experiments show that a single-TCP-connection client-server paradigm does not offer the required scalability and reliability for remote evidence acquisition and that a multi-TCP-connection paradigm is required. The automated integration, correlation and reasoning over multiple diverse evidence sources demonstrated in the experiments improves speed and reduces the human effort needed in the analysis phase by removing the need for time-consuming manual correlation. Finally, informed by published scientific literature, the proposed enhancements for further decentralizing the Live Evidence Information Aggregator (LEIA) architecture offer a platform for increased machine-to-machine communication, enabling automation and reducing the need for manual human intervention.
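
The multi-TCP-connection finding can be illustrated with a short sketch: split the evidence image into byte ranges and fetch them over parallel connections. The wire protocol and function names below are hypothetical, not the LEIA implementation:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def fetch_chunk(host, port, offset, size):
    """Request one byte range of a remote evidence image over its own
    TCP connection (the request format here is illustrative only)."""
    with socket.create_connection((host, port), timeout=30) as s:
        s.sendall(f"GET {offset} {size}\n".encode())
        buf = bytearray()
        while len(buf) < size:
            data = s.recv(min(65536, size - len(buf)))
            if not data:
                raise ConnectionError("peer closed before chunk completed")
            buf.extend(data)
    return offset, bytes(buf)

def acquire(host, port, total_size, chunk=1 << 20, workers=4):
    """Acquire an evidence image in parallel chunks over multiple TCP
    connections, the paradigm the experiments found to scale better
    than a single long-lived connection."""
    image = bytearray(total_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        jobs = [pool.submit(fetch_chunk, host, port, o,
                            min(chunk, total_size - o))
                for o in range(0, total_size, chunk)]
        for job in jobs:
            off, data = job.result()
            image[off:off + len(data)] = data
    return bytes(image)
```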

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance:

30.00%

Publisher:

Abstract:

Background: Although both strength training (ST) and endurance training (ET) seem to be beneficial in type 2 diabetes mellitus (T2D), little is known about post-exercise glucose profiles. The objective of the study was to report changes in blood glucose (BG) values after a 4-month ET and ST programme, now that a device for continuous glucose monitoring has become available. Materials and methods: Fifteen participants with T2D, comprising four men aged 56.5 +/- 0.9 years and 11 women aged 57.4 +/- 0.9 years, were monitored with the MiniMed (Northridge, CA, USA) continuous glucose monitoring system (CGMS) for 48 h before and after 4 months of ET or ST. The ST consisted of three sets per week at the beginning, increasing to six sets per week at the end of the training period, covering all major muscle groups; the ET was performed at an intensity of 60% of maximal oxygen uptake, with a volume beginning at 15 min and advancing to a maximum of 30 min, three times a week. Results: A total of 17,549 single BG measurements were recorded pretraining (619.7 +/- 39.8) and post-training (550.3 +/- 30.1), corresponding to an average of 585 +/- 25.3 potential measurements per participant at the beginning and at the end of the study. The change in BG value between the beginning (132 mg dL(-1)) and the end (118 mg dL(-1)) for all participants was significant (P = 0.028). The improvement in BG value for the ST programme was significant (P = 0.02), but no significant change was measured for the ET (P = 0.48). Glycaemic control improved in the ST group, and the mean BG was reduced by 15.6% (CI 3-25%). Conclusion: The CGMS may be a useful tool for monitoring improvements in glycaemic control after different exercise programmes. Additionally, the CGMS may help to identify asymptomatic hypoglycaemia or hyperglycaemia after training programmes.
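
As an illustration of the kind of pre/post comparison reported here, a paired t-test on per-participant mean BG values could be run as follows; the arrays below are hypothetical placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant mean BG values (mg/dL), illustrative only.
pre  = np.array([130, 141, 125, 138, 129, 135])
post = np.array([115, 128, 112, 121, 118, 117])

t_stat, p_value = stats.ttest_rel(pre, post)      # paired t-test
reduction_pct = 100 * (pre.mean() - post.mean()) / pre.mean()
print(f"mean BG {pre.mean():.0f} -> {post.mean():.0f} mg/dL "
      f"({reduction_pct:.1f}% reduction), P = {p_value:.3f}")
```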