960 results for Psychology Data processing


Relevance:

80.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

80.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

80.00%

Publisher:

Abstract:

This work aims at visualizing weather information by building isosurfaces, exploiting the advantages of three-dimensional geometric models to communicate the meaning of the data clearly and efficiently. Evolving data-processing technology makes it possible to interpret ever-growing masses of data through robust algorithms. Meteorology in particular can benefit from this, given the large amount of data required for analysis and statistics. The choice of algorithm and tools in this work makes the manipulation of the data easier for users from other areas. The project was further developed as distinct modules, increasing their flexibility and reusability for future studies.
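
The abstract does not name the isosurfacing algorithm or toolkit used; the sketch below illustrates the general idea with marching cubes from scikit-image applied to a synthetic scalar field (the field, grid, and iso-level are illustrative assumptions, not values from the work).

```python
import numpy as np
from skimage import measure

# Synthetic 3D "temperature" field standing in for real weather data
# (illustrative assumption; the original work used meteorological data).
x, y, z = np.mgrid[-2:2:64j, -2:2:64j, -2:2:64j]
temperature = np.exp(-(x**2 + y**2 + z**2))  # smooth scalar volume

# Extract the isosurface where temperature == 0.5 via marching cubes.
verts, faces, normals, values = measure.marching_cubes(temperature, level=0.5)

print(f"isosurface: {len(verts)} vertices, {len(faces)} triangles")
```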

Relevance:

80.00%

Publisher:

Abstract:

With the rapid growth of Web applications in various fields of knowledge, the term Web service has gained prominence. It refers to services of diverse origins and purposes, offered through local networks and, in some cases, also available on the Internet. Since this kind of architecture performs data processing on the server side, it is well suited to running complex and slow processes, which is the case for most visualization algorithms. VTK is a library intended for visualization, featuring a large variety of methods and algorithms for this purpose, but with a graphics engine that demands considerable processing capacity. The union of these two resources can bring interesting results and contribute to performance improvements in the use of the VTK library. This combination is studied in this project through testing and communication-overhead analysis.
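
The abstract does not describe the service interface; as a minimal sketch of the idea, the hypothetical endpoint below runs a VTK contouring pipeline on the server and returns only a lightweight summary, so the heavy graphics work never leaves the server (the use of Flask, the route name, and the synthetic wavelet source are assumptions).

```python
import vtk
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/isosurface")
def isosurface():
    # Server-side VTK pipeline: the expensive processing stays here.
    level = float(request.args.get("level", 150.0))

    source = vtk.vtkRTAnalyticSource()   # built-in synthetic volume (assumption)
    contour = vtk.vtkContourFilter()
    contour.SetInputConnection(source.GetOutputPort())
    contour.SetValue(0, level)
    contour.Update()

    poly = contour.GetOutput()
    # Return a small summary instead of the full geometry.
    return jsonify(points=poly.GetNumberOfPoints(), cells=poly.GetNumberOfCells())

if __name__ == "__main__":
    app.run()
```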

Relevance:

80.00%

Publisher:

Abstract:

X-ray fluorescence analysis (XRF) is an important technique for the qualitative and quantitative determination of the chemical components of a sample. It is based on measuring the intensity of the characteristic radiation emitted by the elements of the sample after they have been properly excited. One modality of this technique is total-reflection X-ray fluorescence (TXRF). In TXRF, the refraction angle of the incident beam tends to zero and the refracted beam becomes tangent to the sample-support interface. Thus there is a minimum angle of incidence below which there is no refracted beam and all the incident radiation undergoes total reflection. Since the technique is applied to very small samples in a thin-film format, self-absorption effects should not be very relevant. In this study, we evaluated the feasibility of using the MCNPX (Monte Carlo N-Particle eXtended) code to simulate a measurement performed with the TXRF technique. We verified the quality of the response of a TXRF spectroscopy system using synchrotron radiation as the excitation beam in a simple setup, by retrieving the characteristic energies and the concentrations of the elements in the sample. The data-processing steps after obtaining the excitation spectra were the same as in a real experiment and included obtaining the sensitivity curve for the simulated system. The difference between the theoretical and simulated values of the Kα characteristic energies for different elements was below 1%. The obtained concentrations of the elements in the sample had relatively high errors (between 6 and 60%), mainly due to the lack of knowledge of some realistic physical parameters of the sample, such as its density. Even so, this result does not preclude the use of the MCNPX code for this type of application.
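
The abstract mentions a sensitivity curve and element concentrations but not the quantification formula; a common TXRF approach is internal-standard quantification, sketched below with made-up intensities and sensitivities (all numbers, and the choice of an internal standard, are hypothetical, not values from the study).

```python
# Minimal sketch of TXRF internal-standard quantification:
#   C_i = C_IS * (N_i / S_i) / (N_IS / S_IS)
# where N is the measured net Kα intensity and S the element sensitivity
# read off the sensitivity curve. All values below are hypothetical.

C_IS = 10.0                # internal-standard concentration, ug/mL
N_IS, S_IS = 5.2e4, 1.00   # its net intensity and relative sensitivity

elements = {
    # element: (net Kα intensity, relative sensitivity) -- illustrative
    "Fe": (3.1e4, 0.62),
    "Cu": (1.8e4, 0.81),
    "Zn": (2.4e4, 0.90),
}

for elem, (N_i, S_i) in elements.items():
    C_i = C_IS * (N_i / S_i) / (N_IS / S_IS)
    print(f"{elem}: {C_i:.2f} ug/mL")
```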

Relevance:

80.00%

Publisher:

Abstract:

Estimation of tropospheric gradients in GNSS data processing is a well-known technique to improve positioning (e.g. Bar-Sever et al., 1998; Chen and Herring, 1997). More recently, several authors have also focused on the estimation of such parameters for meteorological studies and demonstrated their potential benefits (e.g. Champollion et al., 2004). Today they are routinely estimated by several global and regional GNSS analysis centres, but they are still not used for operational meteorology. This paper discusses the physical meaning of tropospheric gradients estimated from GPS observations recorded in 2011 by 13 permanent stations located on Corsica, a French island west of the Italian Peninsula. Corsica is a particularly interesting location for such a study as it presents a significant environmental contrast between land and sea, as well as steep topography. We estimated Zenith Total Delays (ZTD) and tropospheric gradients using two software packages: GAMIT/GLOBK (GAMIT version 10.5) and GIPSY-OASIS II version 6.1. Our results are compared to radiosonde observations and to the IGS final troposphere products. For all stations we found good agreement between the ZWD estimated by the two packages (the mean of the ZWD differences is 1 mm with a standard deviation of 6 mm), but the tropospheric gradients agree less well (the mean of the gradient differences is 0.1 mm with a standard deviation of 0.7 mm), despite the differences in processing strategy (double differences for GAMIT/GLOBK versus zero differences for GIPSY-OASIS). We also observe that gradient amplitudes are correlated with the seasonal behaviour of humidity: like the ZWD estimates, they are larger in summer than in winter. Their directions are stable over time but not correlated with the IWV anomaly observed by ERA-Interim. Tropospheric gradients observed at many sites point inland throughout the year. These preferred directions are almost opposite to the largest slope of the local topography as derived from the global digital elevation model ASTER GDEM v2. These first results give a physical meaning to the gradients, but the origin of the observed directions needs further investigation.
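
As context for what these gradient parameters represent, the sketch below evaluates a slant wet delay using the Chen and Herring (1997) azimuthal gradient mapping function cited above; the ZWD and gradient values are illustrative, the simple 1/sin(e) wet mapping is a crude stand-in, and C = 0.0032 is one commonly quoted coefficient.

```python
import numpy as np

def gradient_mapping(elev_rad, C=0.0032):
    """Chen & Herring (1997) azimuthal gradient mapping function."""
    return 1.0 / (np.sin(elev_rad) * np.tan(elev_rad) + C)

def slant_wet_delay(elev_deg, az_deg, zwd_m, gn_m, ge_m):
    """Slant wet delay = mapped ZWD + mapped north/east gradient term.
    All input values below are illustrative, not from the paper."""
    e, a = np.radians(elev_deg), np.radians(az_deg)
    m_w = 1.0 / np.sin(e)          # crude wet mapping function (assumption)
    m_g = gradient_mapping(e)
    return m_w * zwd_m + m_g * (gn_m * np.cos(a) + ge_m * np.sin(a))

# Example: ZWD = 0.15 m, ~1 mm gradients, satellite at 15 deg elevation.
print(slant_wet_delay(15.0, 45.0, zwd_m=0.15, gn_m=0.001, ge_m=0.001))
```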

Relevance:

80.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

80.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Publisher:

Abstract:

The collection of prices for the basic goods basket is very important for the population: from the collection and processing of these data, the CLI (Cost of Living Index), among other indicators, is calculated, helping consumers to shop more rationally and with a clearer view of each product's impact on their household budget, covering not only food but also cleaning and personal hygiene products. Currently, the basic goods price survey is conducted weekly in Botucatu, SP, using a paper spreadsheet. The aim of this work was to develop software that uses mobile devices in the collection and storage of data on the basic goods supply in Botucatu, SP. It was created to eliminate the need for notes on paper spreadsheets, increasing efficiency and accelerating data processing. The work drew on mobile technology and its development tools: the .NET Compact Framework platform and the Visual Basic .NET programming language were used in the handheld phase, making it possible to develop the system with object-oriented programming techniques, with greater speed and reliability in writing the code. An HP Pavilion dv3 personal computer and an E-TEN Glofiish X500+ handheld computer were used. With the software complete, covering collection, data storage, and processing into a report, the in loco paper-spreadsheet phase was eliminated, and it was possible to verify that the whole process became faster, more consistent, safer, and more efficient, and that the data were more readily available.
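
The abstract does not give the index formula used; a minimal sketch assuming a Laspeyres-type cost-of-living computation over collected prices is shown below (all product names, prices, and base-period quantities are hypothetical).

```python
# Laspeyres-type index: CLI = 100 * sum(p_t * q_0) / sum(p_0 * q_0),
# comparing current prices p_t against base-period prices p_0 with
# fixed base-period quantities q_0. All figures are hypothetical.

basket = {
    # item: (base price, current price, base-period quantity)
    "rice (5 kg)":     (18.50, 19.90, 1.0),
    "beans (1 kg)":    (7.20, 6.90, 2.0),
    "soap (unit)":     (2.10, 2.30, 4.0),
    "detergent (1 L)": (3.40, 3.80, 2.0),
}

base_cost = sum(p0 * q0 for p0, _, q0 in basket.values())
curr_cost = sum(pt * q0 for _, pt, q0 in basket.values())
cli = 100.0 * curr_cost / base_cost
print(f"CLI = {cli:.2f} (base period = 100)")
```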

Relevance:

80.00%

Publisher:

Abstract:

Degeneration of tendon tissue is a common cause of tendon dysfunction, with symptoms of repeated episodes of pain and a palpable increase in tendon thickness. Tendon mechanical properties are directly related to its physiological composition and to the structural organization of its interior collagen fibers, which can be altered by tendon degeneration due to overuse or injury. Thus, measuring the mechanical properties of tendon tissue may provide a quantitative measurement of pain, reduced function, and tissue health. Ultrasound elasticity imaging has been developed over the last two decades and has proved to be a promising tool for tissue elasticity imaging. To date, however, there are no well-established protocols of tendinopathy elasticity imaging for diagnosing tendon degeneration in its early or late stages. This thesis describes the re-creation of a dynamic ultrasound elasticity imaging method and the development of an ultrasound transient shear wave elasticity imaging platform for tendon and other musculoskeletal tissue imaging. An experimental mechanical stage with proper supporting systems and accurate translation stages was designed and built. A variety of high-quality tissue-mimicking phantoms were made to simulate homogeneous and heterogeneous soft tissues as well as tendon tissue. A series of data acquisition and data processing programs were developed to collect displacement data from the phantoms and calculate the shear modulus and Young's modulus of the target. The imaging platform was found to be capable of performing comparative measurements of the elastic parameters of the phantoms and of quantitatively mapping elasticity onto ultrasound B-mode images. This suggests the system has great potential not only to benefit individuals with tendinopathy through earlier detection, intervention, and better rehabilitation, but also to provide a medical tool for quantifying musculoskeletal tissue dysfunction in other regions of the body such as the shoulder, elbow, and knee.
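
The thesis computes shear and Young's moduli from tracked displacement data; the sketch below shows the standard relations μ = ρc² and E ≈ 3μ (incompressible tissue), with a time-of-flight estimate of the shear wave speed from hypothetical arrival times at lateral positions (the numbers are illustrative, not measurements from the work).

```python
import numpy as np

# Hypothetical shear-wave arrival times (s) at lateral positions (m),
# as would be extracted from tracked axial displacement data.
positions = np.array([2e-3, 4e-3, 6e-3, 8e-3])     # 2..8 mm from the push
arrivals  = np.array([0.8e-3, 1.6e-3, 2.4e-3, 3.2e-3])

# Shear wave speed = slope of position vs. arrival time (least squares).
c_s = np.polyfit(arrivals, positions, 1)[0]        # ~2.5 m/s here

rho = 1000.0                 # tissue density, kg/m^3 (common assumption)
mu = rho * c_s**2            # shear modulus, Pa
E = 3.0 * mu                 # Young's modulus for incompressible media

print(f"c_s = {c_s:.2f} m/s, mu = {mu/1e3:.1f} kPa, E = {E/1e3:.1f} kPa")
```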

Relevance:

80.00%

Publisher:

Abstract:

This paper provides a brief but comprehensive guide to creating, preparing and dissecting a 'virtual' fossil, using a worked example to demonstrate some standard data-processing techniques. Computed tomography (CT) is a 3D imaging modality for producing 'virtual' models of an object on a computer. In the last decade, CT technology has greatly improved, allowing bigger and denser objects to be scanned increasingly rapidly. The technique has now reached a stage where systems can facilitate large-scale, non-destructive comparative studies of extinct organisms and their living relatives. Consequently, the main limiting factor in CT-based analyses is no longer scanning, but the hurdles of data processing (see disclaimer). The latter comprises the techniques required to convert a 3D CT volume (a stack of digital slices) into a virtual image of the fossil that can be prepared (separated) from the matrix and 'dissected' into its anatomical parts. This technique can be applied to specimens, or parts of specimens, embedded in rock matrix that until now have been impossible to visualise. This paper presents a suggested workflow explaining the steps required, using as an example a fossil tooth of Sphenacanthus hybodoides (Egerton), a shark from the Late Carboniferous of England. The original NHMUK copyrighted CT slice stack can be downloaded for practice of the described techniques, which include segmentation, rendering, movie animation, stereo-anaglyphy, data storage and dissemination. Fragile, rare specimens and type materials in university and museum collections can therefore be virtually processed for a variety of purposes, including virtual loans, website illustrations, publications and digital collections. Micro-CT and other 3D imaging techniques are increasingly utilised to facilitate data sharing among scientists and in education and outreach projects. Hence there is the potential to usher in a new era of global scientific collaboration and public communication using specimens in museum collections.
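
The segmentation step described above, converting a slice stack into a surface mesh, can be sketched with scikit-image; the filename, the global Otsu threshold, and the voxel spacing are placeholders, and the interactive segmentation tools used in the paper are not reproduced here.

```python
import numpy as np
from skimage import filters, io, measure

# Load a CT slice stack as a 3D array (path is a placeholder).
stack = io.imread("ct_slices.tif").astype(np.float32)

# Separate fossil from matrix with a global Otsu threshold (a simple
# stand-in for the segmentation workflow described in the paper).
mask = stack > filters.threshold_otsu(stack)

# Convert the binary volume into a triangle mesh via marching cubes;
# spacing encodes the voxel size, e.g. in millimetres (placeholders).
verts, faces, normals, values = measure.marching_cubes(
    mask.astype(np.float32), level=0.5, spacing=(0.05, 0.02, 0.02)
)
print(f"mesh: {len(verts)} vertices, {len(faces)} faces")
```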

Relevance:

80.00%

Publisher:

Abstract:

The project contains simulation, data-processing, mapping, and localization modules, developed in C++ using ROS (Robot Operating System) and PCL (Point Cloud Library). It was developed within the AVORA underwater robotics project. The vehicle and the sensor were characterized, and different sensor and mapping technologies were analyzed. The data pass through three stages: conversion to a point cloud, threshold filtering, removal of spurious points and, optionally, shape detection. These data are used to build a multi-level surface map. The other tool developed is a modified Iterative Closest Point (ICP) algorithm that takes into account the operating mode of the imaging sonar used.
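
The modified, sonar-aware ICP itself is not detailed in the abstract; below is a minimal classic point-to-point ICP in Python with NumPy/SciPy to illustrate the baseline algorithm the project builds on (the original code is C++ with ROS and PCL).

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """Least-squares rigid transform mapping points A onto B (Kabsch/SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(src, dst, iters=30, tol=1e-6):
    """Classic point-to-point ICP: match, align, repeat until converged."""
    tree = cKDTree(dst)
    P, prev = src.copy(), np.inf
    for _ in range(iters):
        d, idx = tree.query(P)           # nearest-neighbour correspondences
        R, t = best_fit_transform(P, dst[idx])
        P = P @ R.T + t                  # apply the incremental transform
        err = d.mean()
        if abs(prev - err) < tol:
            break
        prev = err
    return P
```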

Relevance:

80.00%

Publisher:

Abstract:

The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments, improving the quality of life of the occupants and enhancing the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligence systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms.

Since a large number of fixed and mobile sensors are deployed in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes, deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontroller, FPGA, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). WSNs promise to revolutionize the interaction between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs.

To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems, with very low power consumption and small size requirements that reduce their interference with the sensed physical phenomena and allow easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena.

A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts, such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In video surveillance in particular, it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must, however, be carefully minimized. Imaging could play, and already plays, an important role in sensing devices for ambient intelligence.
Computer vision can, for instance, be used for recognising persons and objects and for recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis: many eyes see more than one, and a camera system that can observe a scene from multiple directions can overcome occlusion problems and describe objects in their true 3D appearance. Real-time versions of these approaches are a recently opened field of research.

In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. Although the design of a sensor network and its nodes is strictly application-dependent, the physical implementation of an individual wireless node is constrained by a number of metrics that should almost always be considered. Among them:
• Small form factor, to reduce node intrusiveness.
• Low power consumption, to reduce battery size and extend node lifetime.
• Low cost, for widespread diffusion.
These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, on which only simple data-processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through ad-hoc techniques. Furthermore, by fusing information from the dense mesh of sensors, even complex phenomena can be monitored.

In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: low-power video sensor nodes and video processing algorithms, and multimodal surveillance.

Low-power video sensor nodes and video processing algorithms: In comparison to scalar sensors, such as temperature, pressure, humidity, velocity, and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA + microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented.
Featuring such intelligence, these nodes are able to cope with tasks such as recognising unattended bags in airports or persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed-object detection, are implemented, described and illustrated on real-world data.

Multimodal surveillance: In several setups the use of wired video cameras may not be possible, so building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric infrared (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and presenting a limited form factor, PIR sensors are well suited for WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller, Model Predictive Control (MPC), to help the system improve its performance, outperforming naive power management policies.
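
As a flavour of the PIR-triggered processing described above, the sketch below gates an OpenCV HOG + linear-SVM person detector on a PIR flag, so the expensive vision step runs only when motion is sensed; the `pir_triggered` stub and the camera index are placeholders, and this is an illustrative sketch, not the thesis's actual node firmware.

```python
import time
import cv2

def pir_triggered() -> bool:
    """Placeholder for reading the PIR sensor (a GPIO pin on a real node)."""
    return True

# OpenCV's stock HOG descriptor with its pretrained linear-SVM people
# detector -- a classic SVM-based human detection pipeline.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)              # camera index is a placeholder
while True:
    if not pir_triggered():
        time.sleep(0.05)               # camera and vision stay idle: energy saved
        continue
    ok, frame = cap.read()
    if not ok:
        break
    # Run the detector only on PIR-triggered frames.
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```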