936 results for Thread safe parallel run-time
Abstract:
Master's dissertation, Food Technology, Instituto Superior de Engenharia, Universidade do Algarve, 2016
Abstract:
How can we calculate earthquake magnitudes when the signal is clipped and over-run? When a volcano is very active, the seismic record may saturate (i.e., the full amplitude of the signal is not recorded) or be over-run (i.e., the end of one event is covered by the start of a new event). The duration, and sometimes the amplitude, of an earthquake signal are necessary for determining event magnitudes; thus, it may be impossible to calculate earthquake magnitudes when a volcano is very active. This problem is most likely to occur at volcanoes with limited networks of short period seismometers. This study outlines two methods for calculating earthquake magnitudes when events are clipped and over-run. The first method entails modeling the shape of earthquake codas as a power law function and extrapolating duration from the decay of the function. The second method draws relations between clipped duration (i.e., the length of time a signal is clipped) and the full duration. These methods allow for magnitudes to be determined within 0.2 to 0.4 units of magnitude. This error is within the range of analyst hand-picks and is within the acceptable limits of uncertainty when quickly quantifying volcanic energy release during volcanic crises. Most importantly, these estimates can be made when data are clipped or over-run. These methods were developed with data from the initial stages of the 2004-2008 eruption at Mount St. Helens. Mount St. Helens is a well-studied volcano with many instruments placed at varying distances from the vent. This fact makes the 2004-2008 eruption a good place to calibrate and refine methodologies that can be applied to volcanoes with limited networks.
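For illustration, a minimal sketch of the first method, assuming a simple coda envelope of the form A(t) = a0·t^(-p); the fitting window, noise threshold, and synthetic values below are illustrative, not those calibrated in the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a0, p):
    """Power-law coda envelope A(t) = a0 * t**(-p), t in seconds after onset."""
    return a0 * t ** (-p)

def extrapolate_duration(t_obs, env_obs, noise_level):
    """Fit the usable (unclipped) part of the coda envelope and extrapolate
    the time at which it decays to the pre-event noise level, taken here
    as the event duration."""
    p0 = (env_obs[0] * t_obs[0], 1.0)           # rough initial guess
    (a0, p), _ = curve_fit(power_law, t_obs, env_obs, p0=p0)
    duration = (a0 / noise_level) ** (1.0 / p)  # solve a0 * t**(-p) = noise
    return duration, (a0, p)

# Synthetic coda sampled after the clipped portion of the record ends.
rng = np.random.default_rng(0)
t = np.linspace(20.0, 60.0, 200)                        # s after onset
env = 5000.0 * t ** (-1.3) + rng.normal(0.0, 5.0, t.size)
dur, params = extrapolate_duration(t, env, noise_level=10.0)
print(f"extrapolated duration ~ {dur:.0f} s")           # ~120 s here
```

A duration magnitude would then follow from a network-specific duration-magnitude relation, which the study calibrates against events that are not clipped or over-run.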
Abstract:
Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat; it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe-houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative and geospatial variables that differ in terms of scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically weighted regression analysis produced the most accurate result because it accommodates non-stationary coefficient behavior, demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism. This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality of life.
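As an illustration of the technique named above, a minimal geographically weighted regression sketch; the kernel choice, bandwidth, and toy variables are placeholder assumptions, not the dissertation's predictors or calibration:

```python
import numpy as np

def gwr_fit(coords, X, y, bandwidth):
    """Geographically weighted regression: one weighted least-squares fit
    per location, with Gaussian distance-decay weights, so the coefficients
    are allowed to vary over space (non-stationary behavior).

    coords    : (n, 2) array of point locations
    X         : (n, k) design matrix (include a column of ones)
    y         : (n,) response
    bandwidth : kernel bandwidth in the same units as coords
    """
    n, k = X.shape
    betas = np.empty((n, k))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel
        W = np.diag(w)
        betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return betas

# Toy example with a slope that drifts across space.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
x = rng.normal(size=200)
y = (1.0 + 0.3 * coords[:, 0]) * x + rng.normal(scale=0.1, size=200)
X = np.column_stack([np.ones(200), x])
local_betas = gwr_fit(coords, X, y, bandwidth=2.0)
print(local_betas[:3])        # local intercept/slope estimates
```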
Abstract:
Advances in the diagnosis of Mycobacterium bovis infection in wildlife hosts may benefit the development of sustainable approaches to the management of bovine tuberculosis in cattle. In the present study, three laboratories from two different countries participated in a validation trial to evaluate the reliability and reproducibility of a real-time PCR assay in the detection and quantification of M. bovis from environmental samples. The sample panels consisted of negative badger faeces spiked with a dilution series of M. bovis BCG Pasteur and of field samples of faeces from badgers of unknown infection status taken from badger latrines in areas with high and low incidence of bovine TB (bTB) in cattle. Samples were tested with a previously optimised methodology. The experimental design involved rigorous testing which highlighted a number of potential pitfalls in the analysis of environmental samples using real-time PCR. Despite minor variation between operators and laboratories, the validation study demonstrated good concordance between the three laboratories: on the spiked panels, the test showed high levels of agreement in terms of positive/negative detection, with high specificity (100%) and high sensitivity (97%) at levels of 10^5 cells g^-1 and above. Quantitative analysis of the data revealed low variability in recovery of BCG cells between laboratories and operators. On the field samples, the test showed high reproducibility both in terms of positive/negative detection and in the number of cells detected, despite low numbers of samples identified as positive by any laboratory. Use of a parallel PCR inhibition control assay revealed negligible PCR-interfering chemicals co-extracted with the DNA. This is the first example of a multi-laboratory validation of a real-time PCR assay for the detection of mycobacteria in environmental samples. Field studies are now required to determine how best to apply the assay for population-level bTB surveillance in wildlife.
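As a small worked illustration of how agreement figures of this kind are derived from a spiked panel, assuming invented counts rather than the trial's data:

```python
def sensitivity_specificity(results):
    """Compute sensitivity and specificity from (truth, call) pairs,
    where truth/call are booleans: True = M. bovis present/detected."""
    tp = sum(1 for truth, call in results if truth and call)
    fn = sum(1 for truth, call in results if truth and not call)
    tn = sum(1 for truth, call in results if not truth and not call)
    fp = sum(1 for truth, call in results if not truth and call)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

# Illustrative panel: 30 negative faeces, 30 spiked at >= 10^5 cells/g.
panel = [(False, False)] * 30 + [(True, True)] * 29 + [(True, False)]
print(sensitivity_specificity(panel))   # -> (~0.97, 1.0)
```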
Abstract:
Image and video compression play a major role in the world today, allowing the storage and transmission of large multimedia content volumes. However, the processing of this information requires high computational resources, hence improving the computational performance of these compression algorithms is very important. The Multidimensional Multiscale Parser (MMP) is a pattern-matching-based compression algorithm for multimedia contents, namely images, achieving high compression ratios while maintaining good image quality (Rodrigues et al. [2008]). However, in comparison with other existing algorithms, this algorithm takes some time to execute. Therefore, two parallel implementations for GPUs were proposed by Ribeiro [2016] and Silva [2015] in CUDA and OpenCL-GPU, respectively. In this dissertation, to complement the referred work, we propose two parallel versions that run the MMP algorithm on the CPU: one resorting to OpenMP and another that converts the existing OpenCL-GPU version into OpenCL-CPU. The proposed solutions improve the computational performance of MMP by 3× and 2.7×, respectively. High Efficiency Video Coding (HEVC/H.265) is the most recent standard for compression of image and video. Its impressive compression performance makes it a target for many adaptations, particularly for holoscopic image/video processing (or light field). Some of the proposed modifications to encode this new multimedia content are based on geometry-based disparity compensations (SS), developed by Conti et al. [2014], and a Geometric Transformations (GT) module, proposed by Monteiro et al. [2015]. These HEVC-based compression algorithms for holoscopic images implement a specific search for similar micro-images that compresses more efficiently than standard HEVC, but runs considerably slower. In order to enable better execution times, we chose the OpenCL API as the GPU enabling language to increase the module performance. With its most costly setting, we are able to reduce the GT module execution time from 6.9 days to less than 4 hours, effectively attaining a speedup of 45×.
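As a rough illustration of the block-level data parallelism such a CPU port exploits: the actual implementations use OpenMP and OpenCL in C/C++, and MMP's adaptive dictionary creates dependencies between blocks that this Python multiprocessing sketch ignores; encode_block is a hypothetical stand-in for the per-block pattern matching:

```python
from multiprocessing import Pool
import numpy as np

def encode_block(block):
    """Stand-in for per-block MMP pattern matching (hypothetical):
    returns a trivial 'cost' so the sketch is runnable."""
    return float(np.abs(block).sum())

def encode_image_parallel(image, block_size=16, workers=4):
    """Split the image into blocks and encode them in parallel worker
    processes, the same data-parallel structure an OpenMP 'parallel for'
    over blocks would use on the CPU."""
    h, w = image.shape
    blocks = [image[i:i + block_size, j:j + block_size]
              for i in range(0, h, block_size)
              for j in range(0, w, block_size)]
    with Pool(workers) as pool:
        return pool.map(encode_block, blocks)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (256, 256)).astype(np.float32)
    costs = encode_image_parallel(img)
    print(len(costs), "blocks encoded")
```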
Abstract:
BACKGROUND: Errors in the decision-making process are probably the main threat to patient safety in the prehospital setting. The reason can be the change of focus in prehospital care from the traditional "scoop and run" practice to a more complex assessment, and this new focus imposes real demands on clinical judgment. The use of Clinical Guidelines (CG) is a common strategy for cognitively supporting the prehospital providers. However, there are studies that suggest that compliance with CG in some cases is low in the prehospital setting. One possible way to increase compliance with guidelines could be to introduce guidelines in a Computerized Decision Support System (CDSS). There is limited evidence relating to the effect of CDSS in a prehospital setting. The present study aimed to evaluate the effect of CDSS on compliance with the basic assessment process described in the prehospital CG and the effect on On Scene Time (OST). METHODS: In this time-series study, data from prehospital medical records were collected on a weekly basis during the study period. Medical records were rated with the guidance of a rating protocol and data on OST were collected. The difference between baseline and the intervention period was assessed by a segmented regression. RESULTS: In this study, 371 patients were included. Compliance with the assessment process described in the prehospital CG was stable during the baseline period. Following the introduction of the CDSS, compliance rose significantly. The post-intervention slope was stable. The CDSS had no significant effect on OST. CONCLUSIONS: The use of CDSS in prehospital care has the ability to increase compliance with the assessment process of patients with a medical emergency. This study was unable to demonstrate any effect on OST.
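A minimal sketch of a segmented (interrupted time-series) regression of this kind, assuming a simple level-and-slope-change specification; the study's actual model specification and data differ:

```python
import numpy as np
import statsmodels.api as sm

def segmented_regression(y, intervention_week):
    """Interrupted time-series (segmented) regression:
    y ~ b0 + b1*time + b2*post + b3*time_since_intervention,
    where b2 is the level change and b3 the slope change after
    the CDSS introduction."""
    t = np.arange(len(y))
    post = (t >= intervention_week).astype(float)
    time_since = np.where(post == 1, t - intervention_week, 0.0)
    X = sm.add_constant(np.column_stack([t, post, time_since]))
    return sm.OLS(y, X).fit()

# Illustrative weekly compliance scores with a level jump at week 20.
rng = np.random.default_rng(1)
weeks = 40
y = 60 + 0.1 * np.arange(weeks) + 10 * (np.arange(weeks) >= 20) \
    + rng.normal(0, 2, weeks)
res = segmented_regression(y, intervention_week=20)
print(res.params)   # intercept, baseline slope, level change, slope change
```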
Abstract:
This thesis tries to further our understanding of why some countries today are more prosperous than others. It establishes that part of today's observed variation in several proxies, such as income or gender inequality, has been determined in the distant past. Chapter one shows that 450 years of (Catholic) Portuguese colonisation had a long-lasting impact in India when it comes to education and female emancipation. Furthermore, I use a historical quasi-experiment that happened 250 years ago in order to show that different outcomes have different degrees of persistence over time. Educational gaps between males and females seemingly wash out a few decades after the public provision of schools. The male-biased sex ratios, on the other hand, stay virtually unchanged despite governmental efforts. This provides evidence that deep-rooted son preferences are much harder to overcome, suggesting that a differential approach is needed to tackle sex-selective abortion and female neglect. The second chapter proposes improvements for the execution of Spatial Regression Discontinuity Designs. These suggestions are accompanied by a full-fledged spatial statistical package written in R. Chapter three introduces a quantitative economic geography model in order to study the peculiar evolution of the European urban system on its way to the Industrial Revolution. It can explain the shift of economic gravity from the Mediterranean towards the North Sea ("little divergence"). The framework provides novel insights on the importance of agricultural trade costs and the peculiar geography of Europe, with its extended coastline and dense network of navigable rivers.
Abstract:
This thesis focuses on the dynamics of underactuated cable-driven parallel robots (UACDPRs), including various aspects of robotic theory and practice, such as workspace computation, parameter identification, and trajectory planning. After a brief introduction to CDPRs, UACDPR kinematic and dynamic models are analyzed, under the relevant assumption of inextensible cables. The free oscillatory motion of the end-effector (EE), which is a unique feature of underactuated mechanisms, is studied in detail, from both a kinematic and a dynamic perspective. The free (small) oscillations of the EE around equilibria are proved to be harmonic and the corresponding natural oscillation frequencies are analytically computed. UACDPR workspace computation and analysis are then performed. A new performance index is proposed for the analysis of the influence of actuator errors on cable tensions around equilibrium configurations, and a new type of workspace, called tension-error-insensitive, is defined as the set of poses that a UACDPR EE can statically attain even in the presence of actuation errors, while preserving tensions between assigned (positive) bounds. EE free oscillations are then employed to conceive a novel procedure aimed at identifying the EE inertial parameters. This approach does not require the use of force or torque measurements. Moreover, a self-calibration procedure for the experimental determination of UACDPR initial cable lengths is developed, which consequently enables the robot to automatically infer the EE initial pose at machine start-up. Lastly, trajectory planning of UACDPRs is investigated. Two alternative methods are proposed, which aim at (i) reducing EE oscillations even when model parameters are uncertain or (ii) eliminating EE oscillations when model parameters are perfectly known. EE oscillations are reduced in real time by dynamically scaling a nominal trajectory and filtering it with an input shaper, whereas they can be eliminated if an off-line trajectory is computed that accounts for the system's internal dynamics.
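As an illustration of the input-shaping idea in the first trajectory-planning method, a minimal zero-vibration (ZV) shaper sketch; the shaper type, natural frequency, and damping ratio below are assumptions for illustration, whereas in the thesis the oscillation frequencies are obtained analytically from the UACDPR model:

```python
import numpy as np

def zv_shaper(freq_hz, zeta, dt):
    """Zero-Vibration (ZV) input shaper for a mode with natural frequency
    freq_hz and damping ratio zeta: two impulses whose amplitudes and
    spacing cancel the residual oscillation of that mode."""
    wd = 2 * np.pi * freq_hz * np.sqrt(1 - zeta ** 2)   # damped frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta ** 2))
    amps = np.array([1.0, K]) / (1.0 + K)
    delay = np.pi / wd                                   # half damped period
    shaper = np.zeros(int(round(delay / dt)) + 1)
    shaper[0], shaper[-1] = amps[0], amps[1]
    return shaper

def shape_trajectory(q_ref, shaper):
    """Filter a reference trajectory with the shaper (discrete convolution);
    the shaped command excites far less free oscillation of the EE."""
    return np.convolve(q_ref, shaper)[: len(q_ref)]

dt = 0.001
t = np.arange(0, 2, dt)
q_ref = np.clip(t / 1.0, 0, 1)                 # 1-second ramp to the target
shaped = shape_trajectory(q_ref, zv_shaper(freq_hz=1.5, zeta=0.02, dt=dt))
print(shaped[-1])                              # settles at the same target
```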
Abstract:
Nowadays, the production of increasingly complex and electrified vehicles requires the implementation of new control and monitoring systems. This, together with the tendency to move rapidly from the test bench to the vehicle, leads to a landscape that requires the development of embedded hardware and software to address the application effectively and efficiently. The development of application-based software on real-time/FPGA hardware can be a good answer to these challenges: the FPGA grants parallel low-level and high-speed calculation/timing, while the real-time processor can handle high-level calculation layers, logging and communication functions with determinism. Thanks to their software flexibility and small dimensions, these architectures fit naturally as engine RCP (Rapid Control Prototyping) units and as smart data loggers/analysers, both for test bench and on-vehicle applications. Efforts have been made to build a base architecture with common functionalities capable of easily hosting application-specific control code. Several case studies originating in this scenario will be shown; dedicated solutions for prototype applications have been developed exploiting a real-time/FPGA architecture as an ECU (Engine Control Unit) and custom RCP functionalities, such as water injection and hydraulic brake control testing.
Abstract:
Safe collaboration between a robot and a human operator is a critical requirement for deploying a robotic system into a manufacturing and testing environment. In this dissertation, this safety requirement is developed and implemented for the navigation system of mobile manipulators. A methodology for human-robot co-existence through 3D scene analysis is also investigated. The proposed approach exploits advances in computing capability by relying on graphics processing units (GPUs) for volumetric predictive human-robot contact checking. Apart from guaranteeing the safety of operators, human-robot collaboration is also fundamental when cooperative activities are required, as on an appliance test automation floor. To achieve this, a generalized hierarchical task controller scheme for collision avoidance is developed. This allows the robotic arm to safely approach and inspect the interior of the appliance without collision during the testing procedure. The unpredictable presence of the operators also forms a dynamic obstacle that changes very quickly, thereby requiring a fast reaction from the robot side. In this respect, a GPU-accelerated distance field is computed to speed up the reaction time for avoiding collisions between the human operator and the robot. Automated appliance testing also involves robotized laundry loading and unloading during life-cycle testing. This task involves laundry detection, grasp pose estimation and manipulation in a container, inside the drum and during recovery grasping. Wrinkle and blob detection algorithms for grasp pose estimation are developed, and grasp poses are calculated along the wrinkles and blobs to perform the grasping task efficiently. By ranking the estimated laundry grasp poses according to a predefined cost function, the robotic arm attempts the poses that are more comfortable from the robot kinematics side as well as collision-free on the appliance side. This is achieved through appliance detection, full-model registration and collision-free trajectory execution using online collision avoidance.
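A minimal sketch of ranking grasp poses by a cost function under a collision check; the cost terms, weights, and data structures are illustrative assumptions, not the dissertation's actual formulation:

```python
import numpy as np

def rank_grasp_poses(poses, robot_base, collision_checker):
    """Rank candidate grasp poses (e.g. along detected wrinkles/blobs)
    by a simple cost, keeping only collision-free ones.

    poses             : list of dicts with 'position' (3,) and a unit
                        'approach' vector (3,)
    robot_base        : (3,) base position, used for a reachability term
    collision_checker : callable(pose) -> True if the pose is in collision
    """
    ranked = []
    for pose in poses:
        if collision_checker(pose):
            continue                        # skip poses hitting the appliance
        reach = np.linalg.norm(pose["position"] - robot_base)
        # Prefer near-vertical approaches into the drum (illustrative term).
        vertical = 1.0 - abs(np.dot(pose["approach"], [0.0, 0.0, -1.0]))
        cost = 1.0 * reach + 0.5 * vertical
        ranked.append((cost, pose))
    ranked.sort(key=lambda c: c[0])
    return [p for _, p in ranked]

poses = [{"position": np.array([0.6, 0.0, 0.4]),
          "approach": np.array([0.0, 0.0, -1.0])},
         {"position": np.array([0.9, 0.2, 0.1]),
          "approach": np.array([1.0, 0.0, 0.0])}]
best = rank_grasp_poses(poses, robot_base=np.zeros(3),
                        collision_checker=lambda p: False)
print([p["position"] for p in best])        # closest, vertical grasp first
```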
Abstract:
This paper investigates the use of iPads in the assessment of predominantly second year Bachelor of Education (Primary/Early Childhood) pre-service teachers undertaking a physical education and health unit. Within this unit, practical assessment tasks are graded by tutors in a variety of indoor and outdoor settings. The main barriers for the lecturer or tutor for effective assessment in these contexts include limited time to assess and the provision of explicit feedback for large numbers of students, complex assessment procedures, overwhelming record-keeping and assessing students without distracting from the performance being presented. The purpose of this pilot study was to investigate whether incorporating mobile technologies such as iPads to access online rubrics within the Blackboard environment would enhance and simplify the assessment process. Results from the findings indicate that using iPads to access online rubrics was successful in streamlining the assessment process because it provided pre-service teachers with immediate and explicit feedback. In addition, tutors experienced a reduction in the amount of time required for the same workload by allowing quicker forms of feedback via the iPad dictation function. These outcomes have future implications and potential for mobile paperless assessment in other disciplines such as health, environmental science and engineering.
Abstract:
The dynamics and geometry of the material inflowing and outflowing close to the supermassive black hole in active galactic nuclei are still uncertain. X-rays are the most suitable way to study the AGN innermost regions because of the Fe Kα emission line, a proxy of accretion, and Fe absorption lines produced by outflows. Winds are typically classified as Warm Absorbers (slow and mildly ionized) and Ultra Fast Outflows (fast and highly ionized). Transient obscurers, optically thick winds that produce strong spectral hardening in X-rays and last from days to months, have been observed recently. Emission and absorption features vary on time-scales from hours to years, probing phenomena at different distances from the SMBH. In this work, we use time-resolved spectral analysis to investigate the accretion and ejection flows, to characterize them individually and search for correlations. We analyzed XMM-Newton data of a set of the brightest Seyfert 1 galaxies that went through an obscuration event: NGC 3783, NGC 3227, NGC 5548, and NGC 985. Our aim is to search for emission/absorption lines in short-duration spectra (∼10 ks), to explore regions as close to the SMBH as the statistics allow, and possibly catch transient phenomena. First, we run a blind search to detect emission/absorption features; then we analyze their evolution with Residual Maps: we visualize simultaneously positive and negative residuals from the continuum in the time-energy plane, looking for patterns and their time-scales. In NGC 3783 we were able to ascribe variations of the Fe Kα emission line to absorption at the same energy due to clumps in the obscurer, whose presence is detected at >3σ, and to determine the size of the clumps. In NGC 3227 we detected a wind at ∼0.2c at ∼2σ, briefly appearing during an obscuration event.
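A minimal sketch of how a Residual Map of this kind can be built; the flat continuum, binning, and injected feature are illustrative, whereas the actual analysis fits a continuum model to each short-duration spectrum:

```python
import numpy as np

def residual_map(counts, model, errors):
    """Residual map in the time-energy plane: (data - continuum)/sigma for
    each (time bin, energy bin), so coherent positive patterns flag transient
    emission lines and coherent negative patterns flag absorption."""
    return (counts - model) / errors

# Illustrative grid: 50 time slices (~10 ks each) x 30 energy bins.
rng = np.random.default_rng(2)
model = np.full((50, 30), 100.0)                 # flat continuum counts
counts = rng.poisson(model).astype(float)
counts[10:20, 18] += 40.0                        # transient excess in one band
sigma = np.sqrt(model)
res = residual_map(counts, model, sigma)
print(res[10:20, 18].mean())                     # strongly positive residuals
```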
Abstract:
The Short Baseline Neutrino Program at Fermilab aims to confirm or definitively rule out the existence of sterile neutrinos at the eV mass scale. The program will perform the most sensitive search in both the νe appearance and νμ disappearance channels along the Booster Neutrino Beamline. The far detector, ICARUS-T600, is a high-granularity Liquid Argon Time Projection Chamber located 600 m from the Booster neutrino source and at shallow depth, thus exposed to a large flux of cosmic particles. Additionally, ICARUS is located 6 degrees off axis with respect to the neutrino beam from the Main Injector. This thesis presents the construction, installation and commissioning of the ICARUS Cosmic Ray Tagger system, which provides 4π coverage of the active liquid argon volume. By exploiting only the precise nanosecond-scale synchronization between the cosmic tagger and the PMT optical flashes, it is possible to determine whether an event was likely triggered by a cosmic particle. The results show that, using the Top Cosmic Ray Tagger alone, a conservative rejection of more than 65% of the cosmic-induced background can be achieved. Additionally, by requiring the absence of hits in the whole cosmic tagger system, it is possible to perform a pre-selection of contained neutrino events ahead of the full event reconstruction.
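A minimal sketch of the kind of time-coincidence matching between tagger hits and PMT optical flashes described above; the coincidence window and timestamps are illustrative assumptions:

```python
import numpy as np

def tag_cosmic_flashes(flash_times_ns, crt_hit_times_ns, window_ns=100.0):
    """Flag PMT optical flashes that have at least one cosmic-tagger hit
    within a nanosecond-scale coincidence window, so they can be rejected
    as cosmic-induced; the window size here is an illustrative value."""
    crt = np.sort(np.asarray(crt_hit_times_ns))
    tagged = []
    for t in flash_times_ns:
        i = np.searchsorted(crt, t)
        neighbours = crt[max(i - 1, 0): i + 1]     # nearest hits around t
        tagged.append(len(neighbours) > 0 and
                      np.min(np.abs(neighbours - t)) <= window_ns)
    return np.array(tagged)

flashes = np.array([1_000.0, 50_000.0, 120_000.0])
crt_hits = np.array([980.0, 119_950.0])
print(tag_cosmic_flashes(flashes, crt_hits))       # [ True False  True]
```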
Abstract:
Rail transportation has significant importance in the future world. This importance is tightly bound to accessible, sustainable, efficient and safe railway systems. Precise positioning in railway applications is essential for increasing railway traffic, train-track control, collision avoidance, train management and autonomous train driving. Hence, precise train positioning is a safety-critical application. Nowadays, positioning in railway applications depends heavily on a cellular-based system called GSM-R, a railway-specific version of the Global System for Mobile Communications (GSM). However, GSM-R is a relatively outdated technology and does not provide the capacity and precision demanded by future railway networks. One option for positioning is mounting Global Navigation Satellite System (GNSS) receivers on trains as a low-cost solution. Nevertheless, GNSS cannot provide continuous service due to signal interruption in harsh environments, tunnels, etc. Another option is exploiting cellular-based positioning methods. The most recent cellular technology, 5G, provides the high network capacity, low latency, high accuracy and high availability suitable for train positioning. In this thesis, an approach to 5G-based positioning for railway systems is discussed and simulated. The Observed Time Difference of Arrival (OTDOA) method and the 5G Positioning Reference Signal (PRS) are used. Simulations are run in MATLAB, based on existing code developed for 5G positioning, extended with Non-Line-of-Sight (NLOS) link detection and base-station exclusion algorithms. A performance analysis for different configurations is completed. The results show that efficient NLOS detection improves positioning accuracy, and that implementing a base-station exclusion algorithm increases it further.
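A minimal sketch of OTDOA positioning solved by nonlinear least squares; the geometry and noise-free time differences are illustrative, and the thesis additionally detects NLOS links and excludes the corresponding base stations before solving:

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0   # speed of light, m/s

def otdoa_position(bs_positions, tdoa_to_ref, x0):
    """Estimate a 2-D receiver position from observed time differences of
    arrival (each relative to base station 0) via nonlinear least squares."""
    ref = bs_positions[0]

    def residuals(x):
        d_ref = np.linalg.norm(x - ref)
        d = np.linalg.norm(bs_positions[1:] - x, axis=1)
        return (d - d_ref) - C * tdoa_to_ref       # range-difference misfit

    return least_squares(residuals, x0).x

# Illustrative geometry: four gNBs around a track, train at (400, 30) m.
bs = np.array([[0.0, 0.0], [1000.0, 0.0], [500.0, 800.0], [-300.0, 600.0]])
true_pos = np.array([400.0, 30.0])
tdoa = (np.linalg.norm(bs[1:] - true_pos, axis=1)
        - np.linalg.norm(true_pos - bs[0])) / C
print(otdoa_position(bs, tdoa, x0=np.array([0.0, 0.0])))   # ~ [400, 30]
```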
Abstract:
In recent years, energy modernization has focused on smart engineering advancements. This entails designing complicated software and hardware for variable-voltage digital substations. A digital substation consists of electrical and auxiliary devices, control and monitoring devices, computers, and control software. Intelligent measurement systems use digital instrument transformers and IEC 61850-compliant information exchange protocols in digital substations. Digital instrument transformers used for real-time high-voltage measurements should combine advanced digital, measuring, information, and communication technologies. Digital instrument transformers should be cheap, small, light, and fire- and explosion-safe. These smaller and lighter transformers allow long-distance transmission of an optical signal that gauges direct or alternating current. The high cost of optical converters, however, remains a problem. To improve measurement accuracy, amorphous alloys are used in the magnetic circuits, together with compensating feedback. Large-scale voltage converters can be made cheaper by using resistive, capacitive, or hybrid voltage dividers. In known electronic voltage transformers, the voltage divider output is generally on the low-voltage side, which facilitates power supply organization. Combining current and voltage transformers reduces equipment size, installation, and maintenance costs: the two devices cost less together than individually. To increase commercial power metering accuracy, current and voltage converters should be integrated into digital instrument transformers so that simultaneous analogue-to-digital samples are obtained. Multichannel ADC microcircuits with a synchronous conversion start allow samples to be drawn in parallel naturally. Digital instrument transformers are designed to adapt to substation operating conditions and environmental variables, especially ambient temperature. An embedded microprocessor auto-diagnoses and auto-calibrates the proposed digital instrument transformer.
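A small sketch of why synchronous voltage/current sampling matters for metering accuracy; the signal values and the sampling skew below are illustrative:

```python
import numpy as np

def active_power(v_samples, i_samples):
    """Active power from simultaneously sampled voltage and current:
    P = mean(v[n] * i[n]); a synchronous ADC conversion start keeps the
    two channels aligned so no extra phase error enters the metering."""
    return float(np.mean(v_samples * i_samples))

# Illustrative 50 Hz signals, 10 kHz sampling, 30 degree load phase shift.
fs, f, n = 10_000, 50.0, 2_000
t = np.arange(n) / fs
v = 230.0 * np.sqrt(2) * np.sin(2 * np.pi * f * t)
i = 10.0 * np.sqrt(2) * np.sin(2 * np.pi * f * t - np.pi / 6)
p_sync = active_power(v, i)                  # ~ 230*10*cos(30 deg) ~ 1992 W
p_skewed = active_power(v, np.roll(i, 5))    # one channel skewed by 0.5 ms
print(p_sync, p_skewed)                      # the skew biases the reading
```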