77 results for dynamic time warping (DTW)
Abstract:
Early warning based on real-time prediction of rain-induced instability of natural residual slopes helps to minimise human casualties from such slope failures. Slope instability prediction is complicated, as it is influenced by many factors, including soil properties, soil behaviour, slope geometry, and the location and size of deep cracks in the slope. These deep cracks can facilitate rainwater infiltration into deep soil layers and reduce the unsaturated shear strength of the residual soil. Subsequently, a slip surface can form, triggering a landslide even in partially saturated soil slopes. Although past research has shown the effects of surface cracks on slope stability, research examining the influence of deep cracks is very limited. This study aimed to develop methodologies for predicting the real-time rain-induced instability of natural residual soil slopes with deep cracks. The results can be used to warn against potential rain-induced slope failures. The literature review conducted on rain-induced slope instability of unsaturated residual soil associated with soil cracks reveals that only limited studies have been done in the following areas related to this topic:
- Methods for detecting deep cracks in residual soil slopes.
- Practical application of unsaturated soil theory in slope stability analysis.
- Mechanistic methods for real-time prediction of rain-induced instability of critical residual soil slopes with deep cracks.
Two natural residual soil slopes at Jombok Village, Ngantang City, Indonesia, which are located near a residential area, were investigated to obtain the parameters required for the stability analysis of the slope. A survey first identified all related field geometrical information, including the slope, roads, rivers, buildings, and the boundaries of the slope. Second, the electrical resistivity tomography (ERT) method was used on the slope to identify the location and geometrical characteristics of deep cracks. The two ERT array models employed in this research were dipole-dipole and azimuthal. Next, borehole tests were conducted at different locations on the slope to identify the soil layers and to collect undisturbed soil samples for laboratory measurement of the soil parameters required for the stability analysis. Standard Penetration Tests (SPT) were undertaken at the same borehole locations. Undisturbed soil samples taken from the boreholes were tested in the laboratory to determine the variation of the following soil properties with depth:
- Classification and physical properties such as grain size distribution, Atterberg limits, water content, dry density and specific gravity.
- Saturated and unsaturated shear strength properties using a direct shear apparatus.
- Soil water characteristic curves (SWCC) using the filter paper method.
- Saturated hydraulic conductivity.
The following three methods were used to detect and simulate the location and orientation of cracks in the investigated slope:
(1) The electrical resistivity distribution of the sub-soil obtained from ERT.
(2) The profile of classification and physical properties of the soil, based on laboratory testing of soil samples collected from boreholes and visual observations of cracks on the slope surface.
(3) The stress distribution obtained from 2D dynamic analysis of the slope using QUAKE/W software, together with the laboratory-measured soil parameters and earthquake records of the area.
It was assumed that the deep crack in the slope under investigation was generated by earthquakes. Good agreement was obtained when comparing the location and orientation of the cracks detected by Method-1 and Method-2. However, the simulated cracks in Method-3 were not in good agreement with the output of Method-1 and Method-2. This may have been due to the material properties used and the assumptions made for the analysis. From Method-1 and Method-2, it can be concluded that the ERT method can be used to detect the location and orientation of a crack in a soil slope when the ERT is conducted under very dry or very wet soil conditions. In this study, the cracks detected by the ERT were used for the stability analysis of the slope. The stability of the slope was determined using the factor of safety (FOS) of a critical slip surface obtained by SLOPE/W using the limit equilibrium method. Pore-water pressure values for the stability analysis were obtained from a coupled transient seepage analysis of the slope using the finite-element-based software SEEP/W. A parametric study on the stability of the investigated slope revealed that the existence of deep cracks and their location in the soil slope are critical to its stability. The following two steps are proposed to predict the rain-induced instability of a residual soil slope with cracks. (a) Step-1: A transient stability analysis of the slope is conducted from the date of the investigation (initial conditions are based on the investigation) to the current date, using measured rainfall data. The stability analyses are then continued for the next 12 months using the predicted annual rainfall, based on the previous five years' rainfall data for the area. (b) Step-2: The stability of the slope is calculated in real time using real-time measured rainfall. In this calculation, rainfall is predicted for the next hour or 24 hours and the stability of the slope is calculated one hour or 24 hours in advance. If the Step-1 analysis shows critical stability for the forthcoming year, it is recommended that Step-2 be used for more accurate warning against future failure of the slope. In this research, the application of Step-1 to an investigated slope (Slope-1) showed that its stability was not approaching a critical value for the year 2012 (until 31st December 2012) and therefore the application of Step-2 was not necessary for that year. A case study (Slope-2) was used to verify the applicability of the complete proposed predictive method. A landslide event occurred at Slope-2 on 31st October 2010. Transient seepage and stability analyses of the slope, using data obtained from field tests (borehole, SPT and ERT) and laboratory tests, were conducted for 12th June 2010 following Step-1 and showed that the slope was in a critical condition on that date. It was then shown that the application of Step-2 could have predicted this failure with sufficient warning time.
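The two-step workflow described above relies on the coupled SEEP/W and SLOPE/W analyses to produce factors of safety; the minimal Python sketch below only illustrates the surrounding decision logic, assuming those FOS values are already available. The threshold of 1.0 and the example values are illustrative assumptions, not results from the thesis.

```python
def needs_step2(step1_fos_series, fos_critical=1.0):
    """Step-1 screening: step1_fos_series holds factors of safety from the
    transient analysis (investigation date -> current date -> +12 months with
    predicted rainfall). Step-2 real-time analysis is recommended only if any
    value reaches the critical threshold."""
    return min(step1_fos_series) <= fos_critical

def step2_warning(fos_forecast, fos_critical=1.0, lead_hours=24):
    """Step-2: fos_forecast is the factor of safety computed 1 h or 24 h in
    advance from real-time measured and short-term predicted rainfall."""
    if fos_forecast <= fos_critical:
        return f"WARNING: slope predicted to reach critical stability within {lead_hours} h"
    return "Slope stable at the current forecast horizon"

# Illustrative use only (the FOS values below are made up):
if needs_step2([1.8, 1.5, 1.1, 0.98]):
    print(step2_warning(0.95))
```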
Abstract:
The balance between player competence and the challenge presented by a task has been acknowledged as a major factor in providing optimal experience in video games. While Dynamic Difficulty Adjustment (DDA) provides methods for adjusting difficulty in real time during single-player games, little research has explored its application in competitive multiplayer games, where challenge is dictated by the competence of human opponents. By conducting a formal review of 180 existing competitive multiplayer games, it was found that a large number of modern games utilize DDA techniques to balance challenge between human opponents. From these data, we propose a preliminary framework for classifying Multiplayer Dynamic Difficulty Adjustment (mDDA) instances.
Abstract:
Purpose: PTK787/ZK 222584 (PTK/ZK), an orally active inhibitor of vascular endothelial growth factor (VEGF) receptor tyrosine kinases, inhibits VEGF-mediated angiogenesis. The pharmacodynamic effects of PTK/ZK were evaluated by assessing changes in contrast-enhancement parameters of metastatic liver lesions using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in patients with advanced colorectal cancer treated in two ongoing, dose-escalating phase I studies. Patients and Methods: Twenty-six patients had DCE-MRI performed at baseline, day 2, and at the end of each 28-day cycle. Doses of oral PTK/ZK ranged from 50 to 2000 mg once daily. Tumor permeability and vascularity were assessed by calculating the bidirectional transfer constant (Ki). The percentage of baseline Ki (% of baseline Ki) at each time point was compared with pharmacokinetic and clinical end points. Results: A significant negative correlation was found between the % of baseline Ki and increasing PTK/ZK oral dose and plasma levels (P = .01 for oral dose; P = .0001 for area under the plasma concentration curve at day 2). Patients with a best response of stable disease had a significantly greater reduction in Ki at both day 2 and the end of cycle 1 compared with progressors (mean difference in % of baseline Ki: 47%, P = .004; and 51%, P = .006, respectively). The difference in % of baseline Ki remained statistically significant after adjusting for baseline WHO performance status. Conclusion: These findings should help to define a biologically active dose of PTK/ZK. These results suggest that DCE-MRI may be a useful biomarker for defining the pharmacological response and dose of angiogenesis inhibitors, such as PTK/ZK, for further clinical development. © 2003 by American Society of Clinical Oncology.
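For clarity, the pharmacodynamic end point used above can be written as below; this is simply the definition implied by the abstract, and the tracer-kinetic model used to estimate Ki is not detailed here.

```latex
\%\ \text{of baseline } K_i \;=\; 100 \times \frac{K_i(\text{day 2, or end of cycle 1})}{K_i(\text{baseline})}
```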
Abstract:
Background: Diabetic peripheral neuropathy is an important cause of foot ulceration and limb loss. This systematic review and meta-analysis investigated the effect of diabetic peripheral neuropathy on gait, dynamic electromyography and dynamic plantar pressures. Methods: Electronic databases were searched systematically for articles reporting the effect of diabetic peripheral neuropathy on gait, dynamic electromyography and plantar pressures. Searches were restricted to articles published between January 2000 and April 2012. Outcome measures assessed included spatiotemporal parameters, lower limb kinematics, kinetics, muscle activation and plantar pressure. Meta-analyses were carried out on all outcome measures reported by ≥3 studies. Findings: Sixteen studies were included, consisting of 382 neuropathy participants, 216 diabetes controls without neuropathy and 207 healthy controls. Meta-analysis was performed on 11 gait variables. A high level of heterogeneity was noted between studies. Meta-analysis results suggested a longer stance time and moderately higher plantar pressures in diabetic peripheral neuropathy patients at the rearfoot, midfoot and forefoot compared to controls. Systematic review of the studies suggested potential differences in the biomechanical characteristics (kinematics, kinetics, EMG) of diabetic neuropathy patients. However, these findings were inconsistent and limited by small sample sizes. Interpretation: Current evidence suggests that patients with diabetic peripheral neuropathy have elevated plantar pressures and spend a longer duration of time in the stance phase during gait. Firm conclusions are hampered by the heterogeneity and small sample sizes of available studies.
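As a reference for how such pooled estimates are typically obtained when heterogeneity is high, a standard inverse-variance random-effects formulation is sketched below; the abstract does not state which estimator the authors used, so this is an assumption rather than their method.

```latex
\hat{\theta}_{\mathrm{RE}} \;=\; \frac{\sum_{i=1}^{k} w_i^{*}\,\hat{\theta}_i}{\sum_{i=1}^{k} w_i^{*}},
\qquad w_i^{*} \;=\; \frac{1}{\hat{\sigma}_i^{2} + \hat{\tau}^{2}}
```
Here θ̂ᵢ is the effect estimate of study i (for example a mean difference in stance time or peak plantar pressure), σ̂ᵢ² its within-study variance, and τ̂² the between-study heterogeneity variance (for example the DerSimonian-Laird estimate).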
Abstract:
Lean strategies have been developed to eliminate or reduce waste and thus improve operational efficiency in a manufacturing environment. However, in practice, manufacturers encounter difficulties in selecting appropriate lean strategies within their resource constraints and in quantitatively evaluating the perceived value of manufacturing waste reduction. This paper presents a methodology developed to quantitatively evaluate the contribution of lean strategies selected to reduce manufacturing wastes within the manufacturer's resource (time) constraints. A mathematical model has been developed for evaluating the perceived value of lean strategies for manufacturing waste reduction, and a step-by-step methodology is provided for selecting appropriate lean strategies to improve manufacturing performance within these resource constraints. A computer program was developed in MATLAB to find the optimum solution. The proposed methodology and developed model have been validated with the help of a case study. A 'lean strategy-wastes' correlation matrix has been proposed to establish the relationship between manufacturing wastes and lean strategies. Using the correlation matrix and applying the proposed methodology and mathematical model, the authors obtained the optimised perceived value of reducing a manufacturer's wastes by implementing appropriate lean strategies within the manufacturer's resource constraints. The results also demonstrate that the perceived value of reducing manufacturing wastes can change significantly depending on the policies and product strategy adopted by a manufacturer. The proposed methodology can also be used in dynamic situations by changing the input to the program developed in MATLAB. By identifying appropriate lean strategies for specific manufacturing wastes, a manufacturer can better prioritise implementation efforts and resources to maximise the success of implementing lean strategies in their organisation.
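The paper's mathematical model and its MATLAB implementation are not reproduced in the abstract; the sketch below is only an illustration of the kind of selection problem described, choosing lean strategies to maximise a perceived waste-reduction value under a time budget using a hypothetical 'lean strategy-wastes' correlation matrix. All names and numbers are assumptions for the example, not the authors' data.

```python
from itertools import combinations

# Hypothetical inputs: rows = lean strategies, columns = manufacturing wastes.
# correlation[s][w] is the assumed contribution of strategy s to reducing waste w.
strategies = ["5S", "Kanban", "TPM", "SMED"]
time_cost = {"5S": 2, "Kanban": 3, "TPM": 5, "SMED": 4}   # e.g. months of implementation effort
waste_weight = [0.4, 0.35, 0.25]                          # relative importance of each waste
correlation = {
    "5S":     [0.6, 0.2, 0.3],
    "Kanban": [0.1, 0.7, 0.2],
    "TPM":    [0.3, 0.4, 0.6],
    "SMED":   [0.2, 0.5, 0.4],
}

def perceived_value(subset):
    """Weighted sum of waste-reduction contributions of the chosen strategies."""
    return sum(waste_weight[w] * correlation[s][w]
               for s in subset for w in range(len(waste_weight)))

def best_within_budget(time_budget):
    """Exhaustive search (fine for a handful of strategies) for the subset with
    the highest perceived value whose total time cost fits the budget."""
    best, best_value = (), 0.0
    for r in range(1, len(strategies) + 1):
        for subset in combinations(strategies, r):
            if sum(time_cost[s] for s in subset) <= time_budget:
                value = perceived_value(subset)
                if value > best_value:
                    best, best_value = subset, value
    return best, best_value

print(best_within_budget(time_budget=8))
```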
Abstract:
Bluetooth technology is being increasingly used, among Automated Vehicle Identification Systems, to retrieve important information about urban networks. Because the movement of Bluetooth-equipped vehicles can be monitored throughout the network of Bluetooth sensors, this technology represents an effective means of acquiring accurate time-dependent Origin-Destination information. In order to obtain reliable estimations, however, a number of issues need to be addressed through data filtering and correction techniques. Among the main challenges inherent to Bluetooth data are, first, that Bluetooth sensors may fail to detect all of the nearby Bluetooth-enabled vehicles; as a consequence, the exact journey of some vehicles becomes a latent pattern that needs to be estimated. Second, sensors that are in close proximity to each other may have overlapping detection areas, making the task of retrieving the correct travelled path even more challenging. The aim of this paper is twofold: to give an overview of the issues inherent to Bluetooth technology, through the analysis of data available from the Bluetooth sensors in Brisbane, and to propose a method for retrieving the itineraries of individual Bluetooth-equipped vehicles. We argue that estimating these latent itineraries accurately is a crucial step toward the retrieval of accurate dynamic Origin-Destination matrices.
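The paper's itinerary-retrieval method is not spelled out in the abstract. As an illustration of the underlying idea, the sketch below fills in missed detections by routing through a road/sensor graph between consecutive observations of the same device, which is one simple way to handle sensors that fail to detect a passing vehicle. The graph, travel times and sensor IDs are hypothetical.

```python
import networkx as nx

# Hypothetical sensor graph: nodes are Bluetooth sensor sites,
# edge weights are typical travel times in seconds.
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 60), ("B", "C", 90), ("C", "D", 45),
    ("A", "E", 120), ("E", "D", 80), ("B", "E", 70),
])

def reconstruct_itinerary(detections):
    """detections: ordered list of sensor IDs at which a device was seen.
    Missed detections are filled in with the fastest path between each
    consecutive pair of observed sensors."""
    itinerary = [detections[0]]
    for a, b in zip(detections, detections[1:]):
        leg = nx.shortest_path(G, a, b, weight="weight")
        itinerary.extend(leg[1:])  # drop the repeated start node of each leg
    return itinerary

# A vehicle detected only at A and D most plausibly passed B and C as well:
print(reconstruct_itinerary(["A", "D"]))
```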
Abstract:
Floods are among the most devastating events affecting primarily tropical, archipelagic countries such as the Philippines. With current predictions of climate change set to include rising sea levels, intensification of typhoon strength and a general increase in mean annual precipitation throughout the Philippines, it has become paramount to prepare for the future so that the increased flood risk to the country does not translate into greater economic and human loss. Field work and data gathering were done within the framework of an internship at the former German Technical Cooperation (GTZ), in cooperation with the Local Government Unit (LGU) of Ormoc City, Leyte, The Philippines, in order to develop a dynamic computer-based flood model for the basin of the Pagsangaan River. To this end, different geo-spatial analysis tools such as PCRaster and ArcGIS, hydrological analysis packages and basic engineering techniques were assessed and implemented. The aim was to develop a dynamic flood model and to use the development process to determine the required data, their availability and their impact on the results, as a case study for flood early warning systems in the Philippines. The hope is that such projects can help to reduce flood risk by including the results of worst-case scenario analyses and current climate change predictions in city planning for municipal development, monitoring strategies and early warning systems. The project was developed using a 1D-2D coupled model in SOBEK (Deltares hydrological modelling software package) and was also used as a case study to analyze and understand the influence of different factors such as land use, schematization, time step size and tidal variation on the flood characteristics. Several sources of relevant satellite data were compared, such as Digital Elevation Models (DEMs) from ASTER and SRTM data, as well as satellite rainfall data from the GIOVANNI server (NASA) and field gauge data. Different methods were used in the attempt to partially calibrate and validate the model, and finally to simulate and study two climate change scenarios based on scenario A1B predictions. It was observed that large areas currently considered not prone to floods will become low flood risk areas (0.1-1 m water depth). Furthermore, larger sections of the floodplains upstream of the Lilo-an's Bridge will become moderate flood risk areas (1-2 m water depth). The flood hazard maps created during this project will be presented to the LGU, and the model will be used by GTZ's Local Disaster Risk Management Department to create a larger set of possible flood-prone areas related to rainfall intensity and to study possible improvements to the current early warning system and monitoring of the basin section belonging to Ormoc City. Recommendations on further enhancement of the geo-hydro-meteorological data to improve the model's accuracy, mainly in areas of interest, will also be presented to the LGU.
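The depth thresholds quoted above (0.1-1 m low risk, 1-2 m moderate risk) can be turned into a hazard map with a short classification step like the sketch below. The raster values and the "high" class above 2 m are illustrative assumptions, not outputs of the SOBEK model.

```python
import numpy as np

# Hypothetical simulated maximum water depths (metres) on a small grid.
depth = np.array([[0.0, 0.05, 0.3],
                  [0.8, 1.2,  1.9],
                  [2.4, 0.6,  0.0]])

# Class boundaries follow the abstract: <0.1 m none, 0.1-1 m low, 1-2 m moderate;
# depths above 2 m are labelled "high" here as an assumption.
bounds = [0.1, 1.0, 2.0]
labels = np.array(["none", "low", "moderate", "high"])

hazard = labels[np.digitize(depth, bounds)]
print(hazard)
```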
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a 'noisy' environment such as contemporary social media, is to collect the pertinent information, be that information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or information that gives an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the data sets collected and analyzed are preformed, that is, they are built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as an authoritative source (a minimal scoring sketch follows this abstract). Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and women create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis.
A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss the opportunities and methods to extract smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme amongst these papers is that of constructing a data set, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and the subsequent discussion, the panel will inform the wider research community not only about the objectives and limitations of data collection, live analytics, and filtering, but also about current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
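As a concrete illustration of the content-analysis and user-profiling filtering described in the first panel paper, the sketch below scores incoming tweets on keyword relevance, urgency terms and author authority, and keeps only as many as responders can handle. The keyword lists, weights, user names and capacity are hypothetical, not taken from the panel.

```python
# Hypothetical keyword lists and weights; in practice these would be refined
# iteratively as the event's hashtags and vocabulary change over time.
TOPIC_TERMS = {"flood", "evacuate", "bridge", "#floodwatch"}
URGENCY_TERMS = {"trapped", "help", "urgent", "injured"}
AUTHORITATIVE_USERS = {"local_police", "weather_bureau"}

def score_tweet(tweet):
    """tweet: dict with 'text' and 'user' keys. Higher scores indicate tweets
    more likely to be relevant and urgent, or from an authoritative source."""
    words = set(tweet["text"].lower().split())
    relevance = len(words & TOPIC_TERMS)
    urgency = 2 * len(words & URGENCY_TERMS)          # urgency weighted more heavily
    authority = 3 if tweet["user"].lower() in AUTHORITATIVE_USERS else 0
    return relevance + urgency + authority

def filter_for_responders(tweets, capacity=50):
    """Keep only the top-scoring tweets, matched to responder capacity."""
    ranked = sorted(tweets, key=score_tweet, reverse=True)
    return [t for t in ranked[:capacity] if score_tweet(t) > 0]

sample = [{"text": "Trapped on roof need help flood rising", "user": "resident1"},
          {"text": "Nice weather today", "user": "resident2"}]
print(filter_for_responders(sample, capacity=10))
```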
Abstract:
BACKGROUND: Dengue fever (DF) is one of the most important emerging arboviral human diseases. Globally, DF incidence has increased 30-fold over the last fifty years, and the geographic range of the virus and its vectors has expanded. The disease is now endemic in more than 120 countries in tropical and subtropical parts of the world. This study examines the spatiotemporal trends of DF transmission in the Asia-Pacific region over a 50-year period and identifies the disease's cluster areas. METHODOLOGY AND FINDINGS: The World Health Organization's DengueNet provided the annual number of DF cases in 16 countries in the Asia-Pacific region for the period 1955 to 2004. This fifty-year dataset was divided into five ten-year periods as the basis for the investigation of DF transmission trends. Space-time cluster analyses were conducted using scan statistics to detect the disease clusters. The study shows an increasing trend in the spatiotemporal distribution of DF in the Asia-Pacific region over the study period. Thailand, Vietnam, Laos, Singapore and Malaysia are identified as the most likely cluster (relative risk = 13.02) of DF transmission in this region in the most recent period studied (1995 to 2004). The study also indicates that, for the most part, DF transmission has expanded southwards in the region. CONCLUSIONS: This information will help improve DF prevention and control strategies in the Asia-Pacific region by prioritizing control efforts and directing them where they are most needed.
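The relative risk reported for the detected cluster (13.02) is, in the usual space-time scan-statistic formulation, the ratio of the observed-to-expected case rate inside the cluster to that outside it; the definition below is standard notation given here only for clarity, not a formula quoted from the study.

```latex
\mathrm{RR} \;=\; \frac{c \,/\, E[c]}{(C - c)\,/\,(C - E[c])}
```
Here c is the number of DF cases observed inside the candidate cluster, C the total number of cases, and E[c] the number expected inside the cluster under the null hypothesis of uniform risk.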
Abstract:
This paper presents a new method to determine a feeder reconfiguration scheme considering a variable load profile. The objective function consists of system losses, reliability costs and switching costs. In order to achieve an optimal solution, the proposed method compares these costs dynamically and determines when and how it is reasonable to perform a switching operation. The proposed method divides a year into several equal time periods; then, using particle swarm optimization (PSO), optimal candidate configurations for each period are obtained. The system losses and customer interruption cost of each configuration during each period are also calculated. Then, considering the switching cost from one configuration to another, a dynamic programming algorithm (DPA) is used to determine the annual reconfiguration scheme. Several test systems were used to validate the proposed method. The results show that, to obtain an optimum solution, it is necessary to compare operation costs dynamically.
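A minimal sketch of the dynamic-programming stage described above: given, for each period, a set of candidate configurations (as would be produced by the PSO stage) with their operating costs (losses plus interruption cost), and a switching-cost function between configurations, the cheapest annual reconfiguration schedule is recovered below. The cost values and the flat switching cost are hypothetical, not from the paper's test systems.

```python
# candidates[p] maps configuration id -> operating cost (losses + interruption)
# for period p; these values would come from the PSO stage and load-flow studies.
candidates = [
    {"cfg1": 10.0, "cfg2": 12.0},
    {"cfg1": 11.0, "cfg2":  9.0},
    {"cfg1": 10.5, "cfg2":  9.5},
]

def switching_cost(prev_cfg, next_cfg):
    """Hypothetical flat cost per switching operation."""
    return 0.0 if prev_cfg == next_cfg else 1.5

def annual_schedule(candidates):
    """Dynamic programming over periods: best[cfg] is the minimum total cost of
    ending the current period in configuration cfg; parents allow the optimal
    schedule to be reconstructed."""
    best = dict(candidates[0])
    parents = [{cfg: None for cfg in candidates[0]}]
    for period in candidates[1:]:
        new_best, parent = {}, {}
        for cfg, op_cost in period.items():
            prev = min(best, key=lambda p: best[p] + switching_cost(p, cfg))
            new_best[cfg] = best[prev] + switching_cost(prev, cfg) + op_cost
            parent[cfg] = prev
        best = new_best
        parents.append(parent)
    # Trace the optimal schedule backwards from the cheapest final configuration.
    cfg = min(best, key=best.get)
    total_cost = best[cfg]
    schedule = [cfg]
    for parent in reversed(parents[1:]):
        cfg = parent[cfg]
        schedule.append(cfg)
    return list(reversed(schedule)), total_cost

print(annual_schedule(candidates))  # e.g. (['cfg1', 'cfg2', 'cfg2'], 30.0)
```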
Abstract:
Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant technique for developing emulators has been to use priors in the form of Gaussian stochastic processes (GASP) conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there is a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept for developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state-space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, due to the consideration of our knowledge about dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least in cases in which the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by application to a simple hydrological model.
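The abstract describes the prior only verbally. One way to write the kind of construction it describes, a simplified linear state-space model with Gaussian-process innovation terms in the parameters/inputs θ, conditioned by Kalman smoothing, is sketched below; the exact parameterisation used in the paper may differ.

```latex
y_{t+1}(\boldsymbol{\theta}) \;=\; A\,y_t(\boldsymbol{\theta}) + b + \varepsilon_t(\boldsymbol{\theta}),
\qquad \varepsilon_t(\cdot) \sim \mathcal{GP}\big(0,\; k_t(\boldsymbol{\theta}, \boldsymbol{\theta}')\big)
```
Here y_t(θ) approximates the simulation model's output at time step t for parameters/inputs θ, the linear state-space part (A, b) encodes the simplified knowledge of the model structure, and the GP innovation terms εₜ correct the error of the linear model at each output step; conditioning on the design data set is then performed by Kalman smoothing.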
Abstract:
Covertly tracking mobile targets, either animal or human, in previously unmapped outdoor natural environments using off-road robotic platforms requires both visual and acoustic stealth. Whilst the use of robots for stealthy surveillance is not new, the majority consider navigation only for visual covertness. However, most fielded robotic systems have a non-negligible acoustic footprint arising from the onboard sensors, motors, computers and cooling systems, and from the wheels interacting with the terrain during motion. This time-varying acoustic signature can jeopardise any visual covertness and needs to be addressed in any stealthy navigation strategy. In previous work, we addressed the initial concepts for acoustically masking a tracking robot's movements as it travels between observation locations selected to minimise its detectability by a dynamic natural target while ensuring continuous visual tracking of the target. This work extends the overall concept by examining the utility of real-time acoustic signature self-assessment and the exploitation of shadows as hiding locations within a combined visual and acoustic stealth framework.
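As a loose illustration of the combined framework described above (not the authors' actual formulation), a candidate observation location might be scored by weighting visual exposure, the robot's self-assessed acoustic signature relative to ambient masking noise, and the availability of shadow as a hiding spot. All fields, weights and numbers below are hypothetical.

```python
def detectability(candidate, w_visual=0.5, w_acoustic=0.4, w_shadow=0.1):
    """candidate: dict with hypothetical fields
    'visual_exposure'  - 0..1, how visible the location is to the target,
    'robot_noise_db'   - self-assessed acoustic signature while moving/idle,
    'ambient_noise_db' - masking noise (wind, water, the target's own noise),
    'in_shadow'        - True if the location offers shadow as a hiding spot."""
    acoustic_excess = max(0.0, candidate["robot_noise_db"] - candidate["ambient_noise_db"]) / 30.0
    shadow_penalty = 0.0 if candidate["in_shadow"] else 1.0
    return (w_visual * candidate["visual_exposure"]
            + w_acoustic * min(1.0, acoustic_excess)
            + w_shadow * shadow_penalty)

candidates = [
    {"visual_exposure": 0.2, "robot_noise_db": 55, "ambient_noise_db": 60, "in_shadow": True},
    {"visual_exposure": 0.1, "robot_noise_db": 58, "ambient_noise_db": 45, "in_shadow": False},
]
best = min(candidates, key=detectability)  # pick the least detectable observation location
print(best)
```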
Abstract:
The motion response of marine structures in waves can be studied using finite-dimensional linear time-invariant approximating models. These models, obtained using system identification with data computed by hydrodynamic codes, find application in offshore training simulators, hardware-in-the-loop simulators for positioning control testing, and in initial designs of wave-energy conversion devices. Different proposals have appeared in the literature to address the identification problem in both the time and frequency domains, and recent work has highlighted the superiority of the frequency-domain methods. This paper summarises practical frequency-domain estimation algorithms that use constraints on model structure and parameters to refine the search for approximating parametric models. Practical issues associated with the identification are discussed, including the influence of radiation model accuracy on force-to-motion models, which are usually the ultimate modelling objective. The illustrative examples in the paper are obtained using a freely available MATLAB toolbox developed by the authors, which implements the estimation algorithms described.
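In general terms, the estimation problem summarised here can be written as a weighted frequency-domain least-squares fit subject to structural and parameter constraints; the notation below is generic and not taken verbatim from the paper or its toolbox.

```latex
\hat{\boldsymbol{\theta}} \;=\; \arg\min_{\boldsymbol{\theta} \in \Theta}\;
\sum_{k=1}^{N} w_k \,\big|\, H(\mathrm{j}\omega_k) - \hat{H}(\mathrm{j}\omega_k;\boldsymbol{\theta}) \,\big|^{2}
```
Here H(jωₖ) is the frequency response computed by the hydrodynamic code (for example a radiation or force-to-motion response), Ĥ(jωₖ; θ) is a rational parametric model, wₖ are frequency weights, and Θ encodes the constraints on model structure and parameters used to refine the search (for example assumed relative degree or stability requirements).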
Abstract:
This article deals with the time-domain hydroelastic analysis of a marine structure. The convolution terms associated with fluid-memory effects are replaced by an alternative state-space representation, the parameters of which are obtained by using realization theory. The mathematical model established is validated by comparison with experimental results for a very flexible barge. Two types of time-domain simulation are performed: the dynamic response of the initially inert structure to incident regular waves, and the transient response of the structure after it is released from a displaced condition in still water. The accuracy and efficiency of the simulations based on the state-space model representations are compared with those that directly integrate the convolutions.
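The replacement of the fluid-memory convolution by a state-space model, as described above, can be written in the standard form below; the notation is generic, and the article's hydroelastic formulation additionally includes the flexible modes as generalised coordinates.

```latex
\boldsymbol{\mu}(t) \;=\; \int_0^{t} \mathbf{K}(t-\tau)\,\dot{\boldsymbol{\xi}}(\tau)\,\mathrm{d}\tau
\;\;\approx\;\;
\begin{cases}
\dot{\mathbf{x}}(t) = \mathbf{A}'\,\mathbf{x}(t) + \mathbf{B}'\,\dot{\boldsymbol{\xi}}(t)\\[2pt]
\boldsymbol{\mu}(t) \approx \mathbf{C}'\,\mathbf{x}(t)
\end{cases}
```
Here μ(t) is the radiation (fluid-memory) force, K the retardation-function matrix, ξ̇ the vector of generalised velocities, and (A′, B′, C′) the state-space parameters identified from K using realization theory.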