993 results for Ticking Time Bomb Scenario
Abstract:
Several recent works deal with 3D data in mobile robotics problems, e.g., mapping. The data come from various kinds of sensors (time-of-flight cameras, Kinect, or 3D lasers) that provide huge amounts of unorganized 3D data. In this paper we detail an efficient approach to building complete 3D models using a soft computing method, the Growing Neural Gas (GNG). As neural models deal easily with noise, imprecision, uncertainty, and partial data, the GNG provides better results than other approaches. The resulting GNG is then applied to a sequence. We present a comprehensive study of the GNG parameters to ensure the best result at the lowest time cost. From this GNG structure, we propose to compute planar patches, thus obtaining a fast method to estimate the movement performed by a mobile robot by means of a 3D model registration algorithm. Final results of 3D mapping are also shown.
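The GNG adaptation loop referred to above can be summarised in a few lines. The following is a minimal, illustrative Python sketch, not the paper's implementation: the parameter names (eps_b, eps_n, max_age) follow common GNG conventions, the error-decay step is omitted for brevity, and the point cloud is random data standing in for sensor output.

```python
import numpy as np

class GrowingNeuralGas:
    """Minimal GNG sketch for 3D point-cloud data (illustrative only)."""

    def __init__(self, eps_b=0.05, eps_n=0.006, max_age=50):
        self.eps_b = eps_b          # learning rate of the winning node
        self.eps_n = eps_n          # learning rate of its topological neighbours
        self.max_age = max_age      # edges older than this are pruned
        self.nodes = np.random.default_rng(0).random((2, 3))  # two random 3D nodes
        self.error = np.zeros(2)    # accumulated error per node
        self.edges = {}             # (i, j) -> age, with i < j

    def adapt(self, x):
        """Present one 3D sample x and update the graph."""
        d = np.linalg.norm(self.nodes - x, axis=1)
        s1, s2 = (int(v) for v in np.argsort(d)[:2])   # two closest nodes
        self.error[s1] += d[s1] ** 2                   # accumulate error at the winner

        # Move the winner and its neighbours towards the sample; age the winner's edges.
        self.nodes[s1] += self.eps_b * (x - self.nodes[s1])
        for (i, j) in list(self.edges):
            if s1 in (i, j):
                n = j if i == s1 else i
                self.nodes[n] += self.eps_n * (x - self.nodes[n])
                self.edges[(i, j)] += 1

        # Refresh (or create) the edge between the two winners, then prune old edges.
        self.edges[tuple(sorted((s1, s2)))] = 0
        self.edges = {e: a for e, a in self.edges.items() if a <= self.max_age}

    def grow(self):
        """Insert a node between the highest-error node and its worst neighbour."""
        q = int(np.argmax(self.error))
        neighbours = [j if i == q else i for (i, j) in self.edges if q in (i, j)]
        if not neighbours:
            return
        f = max(neighbours, key=lambda n: self.error[n])
        self.error[q] *= 0.5
        self.error[f] *= 0.5
        self.nodes = np.vstack([self.nodes, 0.5 * (self.nodes[q] + self.nodes[f])])
        self.error = np.append(self.error, self.error[q])
        r = len(self.nodes) - 1
        del self.edges[tuple(sorted((q, f)))]
        self.edges[tuple(sorted((q, r)))] = 0
        self.edges[tuple(sorted((r, f)))] = 0

# Usage sketch: feed 3D points (here random, standing in for a depth sensor) and grow periodically.
cloud = np.random.default_rng(1).random((5000, 3))
gng = GrowingNeuralGas()
for step, p in enumerate(cloud, start=1):
    gng.adapt(p)
    if step % 100 == 0:
        gng.grow()
```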
Abstract:
Comprehensive published radiocarbon data from selected atmospheric records, tree rings, and recent organic matter were analyzed and grouped into 4 different zones (three for the Northern Hemisphere and one for the whole Southern Hemisphere). These C-14 data for the summer season of each hemisphere were employed to construct zonal, hemispheric, and global data sets for use in regional and global carbon model calculations, including the calibration and comparison of carbon cycle models. In addition, extended monthly atmospheric C-14 data sets for the 4 zones were compiled for age calibration purposes. This is the first time such data sets have been constructed to facilitate the dating of recent organic material using the bomb C-14 curves. The distribution of bomb C-14 reflects the major zones of atmospheric circulation.
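To illustrate how such bomb C-14 curves are used to date recent organic material, here is a hedged Python sketch: a measured F14C value is intersected with a zonal atmospheric curve, usually yielding one candidate date on the rising limb and one on the falling limb of the 1963/64 peak. The curve values below are hypothetical placeholders, not the compiled data sets described in the abstract.

```python
import numpy as np

# Hypothetical fragment of a zonal bomb C-14 curve (year, F14C); the compiled
# zonal curves from the abstract's data sets would be substituted here.
years = np.array([1955.0, 1960.0, 1963.5, 1970.0, 1980.0, 1990.0, 2000.0, 2010.0])
f14c  = np.array([1.00,   1.20,   1.90,   1.55,   1.30,   1.15,   1.08,   1.04])

def bomb_carbon_dates(sample_f14c, years, curve):
    """Return every calendar date at which the curve crosses the measured F14C.

    Because the bomb curve rises to the 1963/64 peak and then declines, a single
    measurement usually yields two candidate dates (rising and falling limb).
    """
    dates = []
    for i in range(len(curve) - 1):
        lo, hi = sorted((curve[i], curve[i + 1]))
        if lo <= sample_f14c <= hi:
            # Linear interpolation between the two bracketing curve points.
            t = (sample_f14c - curve[i]) / (curve[i + 1] - curve[i])
            dates.append(years[i] + t * (years[i + 1] - years[i]))
    return dates

print(bomb_carbon_dates(1.25, years, f14c))   # one date before and one after the peak
```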
Abstract:
Automatically generating maps of a measured variable of interest can be problematic. In this work we focus on the monitoring network context, where observations are collected and reported by a network of sensors and are then transformed into interpolated maps for use in decision making. Using traditional geostatistical methods, estimating the covariance structure of data collected in an emergency situation can be difficult. Variogram determination, whether by method-of-moments estimators or by maximum likelihood, is very sensitive to extreme values. Even when a monitoring network is in a routine mode of operation, sensors can sporadically malfunction and report extreme values. If this extreme data destabilises the model, causing the covariance structure of the observed data to be incorrectly estimated, the generated maps will be of little value, and the uncertainty estimates in particular will be misleading. Marchant and Lark [2007] propose a REML estimator for the covariance, which is shown to work on small data sets with a manual selection of the damping parameter in the robust likelihood. We show how this can be extended to allow treatment of large data sets together with an automated approach to all parameter estimation. The projected process kriging framework of Ingram et al. [2007] is extended to allow the use of robust likelihood functions, including the two-component Gaussian and the Huber function. We show how our algorithm is further refined to reduce the computational complexity while at the same time minimising any loss of information. To show the benefits of this method, we use data collected from radiation monitoring networks across Europe. We compare our results to those obtained from traditional kriging methodologies and include comparisons with Box-Cox transformations of the data. We discuss the issue of whether to treat or ignore extreme values, making the distinction between the robust methods, which ignore outliers, and the transformation methods, which treat them as part of the (transformed) process. Using a case study based on an extreme radiological event over a large area, we show how radiation data collected from monitoring networks can be analysed automatically and then used to generate reliable maps to inform decision making. We show the limitations of the methods and discuss potential extensions to remedy these.
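As one concrete ingredient of such robust estimation, the Huber function down-weights standardised residuals beyond a cutoff so that a sporadically malfunctioning sensor cannot dominate the covariance estimate. The sketch below is generic and not the authors' REML code; the cutoff c = 1.345 is the conventional default, and the observations are invented.

```python
import numpy as np

def huber_weights(residuals, scale, c=1.345):
    """Down-weighting of standardised residuals via the Huber function.

    Residuals within c scale units get full weight; larger residuals are
    down-weighted proportionally to 1/|r|, so an extreme sensor reading
    cannot dominate the fit.
    """
    r = np.asarray(residuals) / scale
    return np.where(np.abs(r) <= c, 1.0, c / np.abs(r))

# Example: one malfunctioning sensor reporting an extreme value.
obs = np.array([0.9, 1.1, 1.0, 0.95, 25.0])
w = huber_weights(obs - np.median(obs), scale=0.1)
robust_mean = np.sum(w * obs) / np.sum(w)   # close to 1.0 despite the outlier
print(w, robust_mean)
```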
Abstract:
Liposomes have been imaged using a plethora of techniques. However, few of these methods offer the ability to study these systems in their natural hydrated state without requiring drying, staining, and fixation of the vesicles. Yet the ability to image a liposome in its hydrated state is the ideal scenario for visualizing these dynamic lipid structures, and environmental scanning electron microscopy (ESEM), with its ability to image wet systems without prior sample preparation, offers potential advantages over the above methods. In our studies, we have used ESEM not only to investigate the morphology of liposomes and niosomes but also to dynamically follow the changes in structure of lipid films and liposome suspensions as water condenses onto or evaporates from the sample. In particular, changes in liposome morphology were studied using ESEM in real time to investigate the resistance of liposomes to coalescence during dehydration, thereby providing an alternative assay of liposome formulation and stability. Based on this protocol, we have also studied niosome-based systems and cationic liposome/DNA complexes. Copyright © Informa Healthcare.
Abstract:
One of the simplest ways to create nonlinear oscillations is the Hopf bifurcation. The spatiotemporal dynamics observed in an extended medium with diffusion (e.g., a chemical reaction) undergoing this bifurcation are governed by the complex Ginzburg-Landau equation, one of the best-studied generic models for pattern formation, in which, besides uniform oscillations, spiral waves, coherent structures, and turbulence are found. The presence of time-delay terms in this equation changes the pattern formation scenario, and different kinds of travelling waves have been reported. In particular, we study the complex Ginzburg-Landau equation containing local and global time-delay feedback terms, and we focus our attention on plane wave solutions in this model. The first novel result is the derivation of the plane wave solution in the presence of time-delay feedback with global and local contributions. The second and more important result of this study is a linear stability analysis of plane waves in that model. Evaluation of the eigenvalue equation does not show stabilisation of plane waves for the parameters studied. We discuss these results and compare them to results of other models.
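For reference, one commonly studied form of the complex Ginzburg-Landau equation with combined local and global time-delay feedback, together with the plane-wave ansatz used in such linear stability analyses, is sketched below; the notation (coefficients b and c, feedback strength mu, phase xi, delay tau, weights m_l and m_g) is generic and not necessarily that of the study.

```latex
% Complex Ginzburg-Landau equation with local and global time-delay feedback
% (generic form; coefficients and notation are illustrative).
\begin{align}
  \partial_t A &= A + (1 + i b)\,\nabla^2 A - (1 + i c)\,|A|^2 A + F, \\
  F &= \mu\, e^{i\xi} \bigl[\, m_l\, A(\mathbf{x}, t - \tau)
        + m_g\, \bar{A}(t - \tau) \bigr],
  \qquad
  \bar{A}(t) = \frac{1}{V} \int_V A(\mathbf{x}, t)\, d\mathbf{x}.
\end{align}
Plane waves are sought with the ansatz
\begin{equation}
  A(\mathbf{x}, t) = a\, e^{\, i (\mathbf{k} \cdot \mathbf{x} - \omega t)},
\end{equation}
and their linear stability follows from perturbing the amplitude and phase and
examining the eigenvalues of the resulting linearised delay system.
```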
Abstract:
The aims of this thesis were to investigate the neuropsychological, neurophysiological, and cognitive contributors to mobility changes with increasing age. In a series of studies with adults aged 45-88 years, unsafe pedestrian behaviour and falls were investigated in relation to i) cognitive functions (including response time variability, executive function, and visual attention tests), ii) mobility assessments (including gait and balance, using motion capture cameras), iii) motor initiation and pedestrian road crossing behaviour (using a simulated pedestrian road scene), iv) neuronal and functional brain changes (using a computer-based crossing task with magnetoencephalography), and v) quality of life questionnaires (including fear of falling and restricted range of travel). Older adults are more likely to be fatally injured at the far-side of the road compared to the near-side of the road; however, the underlying mobility and cognitive processes related to lane-specific (i.e. near-side or far-side) pedestrian crossing errors in older adults are currently unknown. The first study explored cognitive, motor initiation, and mobility predictors of unsafe pedestrian crossing behaviours. The purpose of this first study (Chapter 2) was to determine whether collisions at the near-side and far-side would be differentially predicted by mobility indices (such as walking speed and postural sway), motor initiation, and cognitive function (including spatial planning, visual attention, and within-participant variability) with increasing age. The results suggest that near-side unsafe pedestrian crossing errors are related to processing speed, whereas far-side errors are related to spatial planning difficulties. Both near-side and far-side crossing errors were related to walking speed and motor initiation measures (specifically motor initiation variability). The salient mobility predictors of unsafe pedestrian crossings identified in the above study were examined in Chapter 3 in conjunction with the presence of a history of falls. The purpose of this study was to determine the extent to which walking speed (indicated as a salient predictor of unsafe crossings and start-up delay in Chapter 2) and previous falls can be predicted and explained by age-related changes in mobility and cognitive function (specifically within-participant variability and spatial ability). 53.2% of walking speed variance was found to be predicted by self-rated mobility score, sit-to-stand time, motor initiation, and within-participant variability. Although a significant model was not found to predict fall history variance, postural sway and attentional set-shifting ability were found to be strongly related to the occurrence of falls within the last year. Next, in Chapter 4, unsafe pedestrian crossing behaviour and the pedestrian predictors (both mobility and cognitive measures) from Chapter 2 were explored in terms of increasing hemispheric laterality of attentional functions and inter-hemispheric oscillatory beta power changes associated with increasing age. Elevated beta (15-35 Hz) power in the motor cortex prior to movement and reduced beta power post-movement have been linked to age-related changes in mobility. In addition, increasing recruitment of both hemispheres has been shown to occur and to help older adults perform similarly to younger adults in cognitive tasks (Cabeza, Anderson, Locantore, & McIntosh, 2002).
It has been hypothesised that changes in hemispheric neural beta power may explain the presence of more pedestrian errors at the far-side of the road in older adults. The purpose of the study was to determine whether age-related changes in cortical oscillatory beta power and hemispheric laterality are linked to unsafe pedestrian behaviour in older adults. Results indicated that pedestrian errors at the near-side are linked to hemispheric bilateralisation and neural overcompensation post-movement, whereas far-side unsafe errors are linked to not employing neural compensation methods (hemispheric bilateralisation). Finally, in Chapter 5, fear of falling, life space mobility, and quality of life in old age were examined to determine their relationships with cognition, mobility (including fall history and pedestrian behaviour), and motor initiation. In addition to death and injury, mobility decline (such as the pedestrian errors in Chapter 2 and the falls in Chapter 3) and cognition can negatively affect quality of life and result in activity avoidance. Further, the number of falls in Chapter 3 was not significantly linked to mobility and cognition alone, and may be further explained by a fear of falling. The objective of the above study (Study 2, Chapter 3) was to determine the role of mobility and cognition in fear of falling and life space mobility, and their impact on quality of life measures. Results indicated that missing safe pedestrian crossing gaps (potentially indicating crossing anxiety) and mobility decline were consistent predictors of fear of falling, reduced life space mobility, and quality of life variance. Social community (total number of close family and friends) was also linked to life space mobility and quality of life. Lower cognitive function (particularly processing speed and reaction time) was found to predict variance in fear of falling and quality of life in old age. Overall, the findings indicated that mobility decline (particularly walking speed or walking difficulty), processing speed, and intra-individual variability in attention (including motor initiation variability) are salient predictors of participant safety (mainly pedestrian crossing errors) and wellbeing with increasing age. More research is required to produce a significant model to explain the number of falls.
Abstract:
The aim of the case study is to quantify, in numbers, the impact of delayed repair time on revenues and profit, using the example of power plant unit outages. The main steps of the risk assessment are:
• creating a project plan suitable for risk assessment
• identification of the risk factors for each project activity
• scenario-analysis-based evaluation of the risk factors
• selection of the critical risk factors based on the results of the quantitative risk analysis
• formulating risk response actions for the critical risks
• running a Monte-Carlo simulation [1] using the results of the scenario analysis (see the sketch below)
• building a macro which creates the connection among the results of the risk assessment, the production plan, and the business plan.
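A minimal sketch of the Monte-Carlo step follows, assuming each critical risk factor is summarised by three-point scenario estimates (optimistic, most likely, pessimistic additional repair delay in days) turned into triangular distributions, and a fixed lost margin per outage day; the figures and distribution choice are illustrative, not those of the case study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Scenario-analysis estimates for each critical risk factor:
# (optimistic, most likely, pessimistic) additional repair delay in days.
risk_factors = {
    "spare part delivery":      (0.0, 2.0, 10.0),
    "contractor availability":  (0.0, 1.0, 5.0),
    "unexpected damage found":  (0.0, 3.0, 15.0),
}
lost_margin_per_day = 120_000.0   # lost profit per outage day (illustrative figure)

def simulate_outage_cost(n_runs=100_000):
    """Monte-Carlo simulation of the profit impact of delayed repair time."""
    total_delay = np.zeros(n_runs)
    for low, mode, high in risk_factors.values():
        # Triangular distributions are a common way to turn three-point
        # scenario estimates into a sampling distribution.
        total_delay += rng.triangular(low, mode, high, size=n_runs)
    return total_delay * lost_margin_per_day

costs = simulate_outage_cost()
print(f"expected profit impact: {costs.mean():,.0f}")
print(f"95th percentile:        {np.percentile(costs, 95):,.0f}")
```

The resulting distribution of cost impacts is what the macro would then feed into the production plan and the business plan.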
Abstract:
Climate change in the Arctic is predicted to increase plant productivity through decomposition-related enhanced nutrient availability. However, the extent of the increase will depend on whether the increased nutrient availability can be sustained. To address this uncertainty, I assessed the response of plant tissue nutrients, litter decomposition rates, and soil nutrient availability to experimental climate warming manipulations (extended growing season and soil warming) over a 7-year period. Overall, the most consistent effect was the year-to-year variability in the measured parameters, probably a result of large differences in weather and the timing of snowmelt. The results of this study emphasize that although plants of arctic environments are specifically adapted to low nutrient availability, they also possess a suite of traits that help to reduce nutrient losses, such as slow growth, low tissue nutrient concentrations, and low tissue turnover, which result in subtle responses to environmental changes.
Abstract:
Improving the representation of the hydrological cycle in Atmospheric General Circulation Models (AGCMs) is one of the main challenges in modeling the Earth's climate system. One way to evaluate model performance is to simulate the transport of water isotopes. Among those available, tritium (HTO) is an extremely valuable tracer, because its content in the different reservoirs involved in the water cycle (stratosphere, troposphere, ocean) varies by orders of magnitude. Previous work incorporated natural tritium into LMDZ-iso, a version of the LMDZ general circulation model enhanced by water isotope diagnostics. Here, for the first time, the anthropogenic tritium injected by each of the atmospheric nuclear-bomb tests between 1945 and 1980 has been estimated and implemented in the model; this creates an opportunity to evaluate certain aspects of LMDZ over several decades by following the bomb-tritium transient signal through the hydrological cycle. Simulations of tritium in water vapor and precipitation for the period 1950-2008, with both natural and anthropogenic components, are presented in this study. LMDZ-iso satisfactorily reproduces the general shape of the temporal evolution of tritium. However, LMDZ-iso simulates too high a bomb-tritium peak followed by too strong a decrease of tritium in precipitation. The overly diffusive vertical advection in AGCMs crucially affects the residence time of tritium in the stratosphere. This insight into model performance demonstrates that the implementation of tritium in an AGCM provides a new and valuable test of the modeled atmospheric transport, complementing water stable isotope modeling.
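The link between stratospheric residence time and the post-peak decline can be illustrated with a simple one-box budget (not the LMDZ-iso formulation): the tritium content of a reservoir decreases both by radioactive decay and by exchange with other reservoirs on a timescale tau, so an overly diffusive vertical advection, which effectively shortens the stratospheric tau, steepens the simulated decrease in precipitation.

```latex
% Illustrative one-box tritium budget (not the LMDZ-iso formulation):
% C is the reservoir's tritium content and S(t) its source.
\begin{equation}
  \frac{dC}{dt} = S(t) - \Bigl(\lambda + \frac{1}{\tau}\Bigr) C,
  \qquad
  \lambda = \frac{\ln 2}{T_{1/2}}, \quad T_{1/2} \approx 12.32\ \text{yr},
\end{equation}
where $\tau$ is the residence (exchange) time of the reservoir.
```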
Abstract:
This deliverable outlines the implementation plan for each of the first-round studies of the RAGE pilots. The main goal of these pilots is to perform a small-scale test of the RAGE games with end-users and intermediary stakeholders in five different non-leisure domains, to guide the further development of the games for the final validation studies. At the same time, the pilots implement the pre-testing of the research instruments and methodology for answering the main evaluation questions in the five areas of investigation identified in D8.1: 1) usability, 2) game experience, 3) learning effectiveness, 4) transfer effect, and 5) pedagogical costs and benefits. Finally, the pilots are aimed at collecting preliminary results for a first formative evaluation of the games and game technologies, with the goal of feeding back useful information to development for the final versions of the games and assets. The results of the first pilot will be compared with the results of the final evaluation studies to demonstrate improvements of the games and game effects from the first to the final version. A revision of the deliverable will be produced in the next few months as the final arrangement document (D5.1, due at M21).
Abstract:
In this letter, we consider wireless powered communication networks which can operate perpetually, as the base station (BS) broadcasts energy to multiple energy harvesting (EH) information transmitters. These employ the “harvest-then-transmit” mechanism: they spend all of the energy harvested during the previous BS energy broadcast to transmit information towards the BS. Assuming time division multiple access (TDMA), we propose a novel transmission scheme for jointly optimal allocation of the BS broadcasting power and the time sharing among the wireless nodes, which maximizes the overall network throughput under average and maximum transmit power constraints at the BS. The proposed scheme significantly outperforms state-of-the-art schemes that employ only optimal time allocation. For the case of a single EH transmitter, we generalize the optimal solutions to account for fixed circuit power consumption, which corresponds to a much more practical scenario.
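A generic statement of this kind of joint allocation problem is sketched below, assuming N EH users, a downlink energy-broadcast fraction tau_0 with BS power P_0, uplink slots tau_i, and coefficients eta_i lumping harvesting efficiency and channel gains; the notation is illustrative rather than the letter's.

```latex
% Generic joint power/time allocation for a harvest-then-transmit WPCN
% (notation illustrative; \eta_i lumps harvesting efficiency and channel gains).
\begin{align}
  \max_{P_0,\ \tau_0, \ldots, \tau_N} \quad
    & \sum_{i=1}^{N} \tau_i \log_2\!\Bigl(1 + \eta_i \frac{P_0\, \tau_0}{\tau_i}\Bigr) \\
  \text{s.t.} \quad
    & \tau_0 + \sum_{i=1}^{N} \tau_i \le 1, \qquad \tau_i \ge 0, \\
    & P_0\, \tau_0 \le P_{\mathrm{avg}}, \qquad 0 \le P_0 \le P_{\mathrm{max}}.
\end{align}
```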
Abstract:
The proliferation of new mobile communication devices, such as smartphones and tablets, has led to an exponential growth in network traffic. The demand for supporting the fast-growing consumer data rates urges wireless service providers and researchers to seek a new efficient radio access technology, the so-called 5G technology, beyond what current 4G LTE can provide. On the other hand, ubiquitous RFID tags, sensors, actuators, mobile phones, and so on cut across many areas of modern-day living, offering the ability to measure, infer, and understand environmental indicators. The proliferation of these devices has given rise to the term Internet of Things (IoT). For researchers and engineers in the field of wireless communication, the exploration of new effective techniques to support 5G communication and the IoT has become an urgent task, which not only leads to fruitful research but also enhances the quality of our everyday life. Massive MIMO, which has shown great potential in improving the achievable rate with a very large number of antennas, has become a popular candidate. However, the requirement of deploying a large number of antennas at the base station may not be feasible in indoor scenarios. Does there exist a good alternative that can achieve system performance similar to massive MIMO in indoor environments? In this dissertation, we address this question by proposing the time-reversal (TR) technique as a counterpart of massive MIMO in indoor scenarios with a massive multipath effect. It is well known that radio signals experience many multipaths due to reflection from various scatterers, especially in indoor environments. The traditional TR waveform is able to create a focusing effect at the intended receiver with very low transmitter complexity in a severe multipath channel (a minimal sketch of this waveform follows the abstract). TR's focusing effect is in essence a spatial-temporal resonance effect that brings all the multipaths to arrive at a particular location at a specific moment. We show that by using time-reversal signal processing, with a sufficiently large bandwidth, one can harvest the massive multipaths naturally existing in a rich-scattering environment to form a large number of virtual antennas and achieve the desired massive multipath effect with a single antenna. Further, we explore the optimal bandwidth for the TR system to achieve maximal spectral efficiency. Through evaluating the spectral efficiency, the optimal bandwidth for the TR system is found to be determined by the system parameters, e.g., the number of users and the backoff factor, rather than the waveform type. Moreover, we investigate the tradeoff between complexity and performance by establishing a generalized relationship between system performance and waveform quantization in a practical communication system. It is shown that 4-bit quantized waveforms can be used to achieve a bit-error rate similar to that of a TR system with full-precision waveforms. Besides 5G technology, the Internet of Things (IoT) is another area that has recently attracted more and more attention from both academia and industry. In the second part of this dissertation, the heterogeneity issue within the IoT is explored. One of the most significant forms of heterogeneity, considering the massive number of devices in the IoT, is device heterogeneity, i.e., heterogeneous bandwidths and associated radio-frequency (RF) components.
The traditional middleware techniques result in the fragmentation of the whole network, hampering the interoperability of objects and slowing down the development of a unified reference model for the IoT. We propose a novel TR-based heterogeneous system, which can address the bandwidth heterogeneity while maintaining the benefits of TR. The increased complexity of the proposed system lies in the digital processing at the access point (AP), instead of at the devices' end, and can easily be handled with a more powerful digital signal processor (DSP). Meanwhile, the complexity of the terminal devices stays low and therefore satisfies the low-complexity and scalability requirements of the IoT. Since there is no middleware in the proposed scheme and the additional physical-layer complexity is concentrated on the AP side, the proposed heterogeneous TR system better satisfies the low-complexity and energy-efficiency requirements of the terminal devices (TDs) compared with the middleware approach.
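The basic TR signature referred to above is the conjugated, time-reversed, energy-normalised channel impulse response; convolving it with the channel makes every multipath tap add coherently at a single delay. A minimal Python sketch with a hypothetical rich-scattering channel (random, exponentially decaying taps) illustrates this focusing effect; it is an illustration of the basic principle, not the dissertation's system.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical rich-scattering channel impulse response (many multipath taps).
L = 64
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) * np.exp(-np.arange(L) / 20.0)

# Basic time-reversal signature: conjugated, time-reversed, energy-normalised CIR.
g = np.conj(h[::-1]) / np.linalg.norm(h)

# The equivalent channel seen by the intended receiver is the convolution h * g;
# all multipath taps add coherently at the centre tap (spatio-temporal focusing).
equiv = np.convolve(h, g)
peak = np.abs(equiv[L - 1])                       # focusing peak at delay L-1
sidelobe = np.abs(np.delete(equiv, L - 1)).max()  # strongest residual tap
print(f"focusing gain over strongest sidelobe: {20 * np.log10(peak / sidelobe):.1f} dB")
```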
Abstract:
The premise of automated alert correlation is to accept that false alerts from a low-level intrusion detection system are inevitable and to use attack models to explain the output in an understandable way. Several algorithms exist for this purpose which use attack graphs to model the ways in which attacks can be combined. These algorithms can be classified into two broad categories: scenario-graph approaches, which create an attack model starting from a vulnerability assessment, and type-graph approaches, which rely on an abstract model of the relations between attack types. Some research into improving the efficiency of type-graph correlation has been carried out, but this research has ignored the hypothesizing of missing alerts. We present a novel type-graph algorithm which unifies correlation and hypothesizing into a single operation. Our experimental results indicate that the approach is extremely efficient in the face of intensive alerts and produces compact output graphs comparable to other techniques.
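To make the unification concrete, the toy sketch below (not the paper's algorithm) shows the core idea: a type graph maps each attack type to the types that may follow it, and two alerts are correlated if their types are connected either directly or through a bounded number of hypothesised alerts that the low-level IDS may have missed. The attack types and graph are invented for illustration.

```python
from collections import deque

# Toy type graph: attack type -> attack types that can plausibly follow it.
TYPE_GRAPH = {
    "scan":                 ["exploit"],
    "exploit":              ["privilege_escalation", "backdoor"],
    "privilege_escalation": ["backdoor"],
    "backdoor":             ["data_exfiltration"],
    "data_exfiltration":    [],
}

def correlate(prev_type, new_type, max_hypothesised=1):
    """Return the list of hypothesised intermediate types linking two alerts,
    or None if no path with at most max_hypothesised missing steps exists."""
    queue = deque([(prev_type, [])])
    while queue:
        t, hypothesised = queue.popleft()
        for nxt in TYPE_GRAPH.get(t, []):
            if nxt == new_type:
                return hypothesised          # correlated, possibly via missing alerts
            if len(hypothesised) < max_hypothesised:
                queue.append((nxt, hypothesised + [nxt]))
    return None

# Example: the IDS missed the exploit alert between a scan and an escalation.
print(correlate("scan", "privilege_escalation"))   # -> ['exploit']
```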
Abstract:
When it comes to information sets in real life, pieces of the whole set are often not available. This problem can originate from various causes and therefore follows different patterns. In the literature, this problem is known as Missing Data. The issue can be addressed in various ways: by discarding incomplete observations, by estimating what the missing values originally were, or by simply ignoring the fact that some values are missing. The methods used to estimate missing data are called Imputation Methods. The work presented in this thesis has two main goals. The first is to determine whether any kind of interaction exists between Missing Data, Imputation Methods, and Supervised Classification algorithms when they are applied together. For this first problem we consider a scenario in which the databases used are discrete, understanding discrete to mean that no relation between observations is assumed. These datasets underwent processes involving different combinations of the three components mentioned. The outcome showed that the missing data pattern strongly influences the output produced by a classifier. Also, in some cases, the complex imputation techniques investigated in the thesis were able to obtain better results than simple ones. The second goal of this work is to propose a new imputation strategy, this time constraining the specifications of the previous problem to a special kind of dataset, the multivariate time series. We designed new imputation techniques for this particular domain and combined them with some of the contrasted strategies tested in the previous chapter of this thesis. The time series were also subjected to processes involving missing data and imputation, in order to finally propose an overall better imputation method. In the final chapter of this work, a real-world example is presented, describing a water quality prediction problem. The databases that characterize this problem had their own original latent values, which provides a real-world benchmark to test the algorithms developed in this thesis.
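As a small illustration of the kind of imputation the thesis studies (not its proposed methods), the sketch below contrasts a pattern-blind column-mean imputation with a time-aware linear interpolation on a toy multivariate time series, assuming pandas; the sensor names and values are invented.

```python
import numpy as np
import pandas as pd

# Toy multivariate time series with missing values (NaN), e.g. water-quality sensors.
idx = pd.date_range("2020-01-01", periods=6, freq="h")
df = pd.DataFrame(
    {"ph":        [7.1, np.nan, 7.3, np.nan, 7.0, 7.2],
     "turbidity": [1.0, 1.2, np.nan, np.nan, 1.1, 1.0]},
    index=idx,
)

# Simple imputation: replace each gap by the column mean (ignores temporal order).
mean_imputed = df.fillna(df.mean())

# Time-aware imputation: linear interpolation along the time index, which better
# suits series where neighbouring observations are correlated.
interp_imputed = df.interpolate(method="time")

print(mean_imputed)
print(interp_imputed)
```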
Abstract:
Nowadays, the production of increasingly complex and electrified vehicles requires the implementation of new control and monitoring systems. This, together with the tendency to move rapidly from the test bench to the vehicle, leads to a landscape that requires the development of embedded hardware and software to address the application effectively and efficiently. The development of application-based software on real-time/FPGA hardware can be a good answer to these challenges: the FPGA grants parallel, low-level, high-speed calculation and timing, while the real-time processor can handle high-level calculation layers, logging, and communication functions with determinism. Thanks to their software flexibility and small dimensions, these architectures are well suited as engine RCP (Rapid Control Prototyping) units and as smart data loggers/analysers, both for the test bench and for on-vehicle applications. Efforts have been made to build a base architecture with common functionalities capable of easily hosting application-specific control code. Several case studies originating in this scenario will be shown; dedicated solutions for prototype applications have been developed exploiting a real-time/FPGA architecture as an ECU (Engine Control Unit) and custom RCP functionalities, such as water injection and hydraulic brake control testing.