467 results for Prediction techniques
Abstract:
The increased adoption of business process management approaches, tools, and practices has led organizations to accumulate large collections of business process models. These collections can easily include hundreds to thousands of models, especially in the context of multinational corporations or as a result of organizational mergers and acquisitions. A concrete problem is thus how to maintain these large repositories in such a way that their complexity does not hamper their practical usefulness as a means to describe and communicate business operations. This paper proposes a technique to automatically infer suitable names for business process models and fragments thereof. This technique is useful in model abstraction scenarios, for instance when user-specific views of a repository are required, or as part of a refactoring initiative aimed at reducing the repository's complexity. The technique is grounded in an adaptation of the theory of meaning to the realm of business process models. We implemented the technique in a prototype tool and conducted an extensive evaluation using three process model collections from practice and a case study involving process modelers with different levels of experience.
Abstract:
This paper proposes a practical prediction procedure for the vertical displacement of a Rotary-wing Unmanned Aerial Vehicle (RUAV) landing deck in the presence of stochastic sea-state disturbances. A time series model that captures the characteristics of the dynamic relationship between an observer and a landing deck is constructed, with model orders determined by a novel principle based on the Bayesian Information Criterion (BIC) and coefficients identified using the Forgetting-Factor Recursive Least Squares (FFRLS) method. In addition, a fast-converging online multi-step predictor is developed, which can be implemented more rapidly than the Auto-Regressive (AR) predictor as it requires fewer memory allocations when updating coefficients. Simulation results demonstrate that the proposed prediction approach exhibits satisfactory performance, making it suitable for integration into ship-helicopter approach and landing guidance systems given the limited computational capacity of the flight computer.
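As a rough illustration of the identification step, the following Python sketch implements a forgetting-factor RLS update for AR coefficients and an iterated multi-step predictor. The signal, model order, and forgetting factor are illustrative assumptions, and the BIC-based order selection described in the abstract is omitted.

```python
import numpy as np

def ffrls_ar(y, order, lam=0.98):
    """Identify AR coefficients online with a forgetting-factor RLS update."""
    theta = np.zeros(order)              # AR coefficient estimates
    P = np.eye(order) * 1e3              # inverse correlation matrix
    for k in range(order, len(y)):
        x = y[k - order:k][::-1]         # regressor: [y_{k-1}, ..., y_{k-p}]
        K = P @ x / (lam + x @ P @ x)    # gain vector
        theta = theta + K * (y[k] - x @ theta)
        P = (P - np.outer(K, x @ P)) / lam
    return theta

def multi_step_predict(history, theta, steps):
    """Iterate the AR model, feeding each prediction back as an input."""
    hist = list(history)
    preds = []
    for _ in range(steps):
        x = np.array(hist[-len(theta):][::-1])
        preds.append(theta @ x)
        hist.append(preds[-1])
    return np.array(preds)

# Synthetic heave signal standing in for deck motion (assumption, not ship data).
t = np.arange(0, 60, 0.1)
deck = 0.5 * np.sin(0.6 * t) + 0.2 * np.sin(1.3 * t) + 0.02 * np.random.randn(t.size)
theta = ffrls_ar(deck, order=6)
print(multi_step_predict(deck, theta, steps=20))   # 2 s ahead at 10 Hz
```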
Abstract:
Genomic DNA obtained from patient whole blood samples is a key element for genomic research. The advantages and disadvantages of the available nucleic acid isolation procedures, in terms of time-efficiency, cost-effectiveness and laboratory requirements, need to be considered before choosing any particular method. These characteristics have not been fully evaluated for some laboratory techniques, such as the salting out method for DNA extraction, which has been excluded from comparison in the studies published to date. We compared three different protocols (a traditional salting out method, a modified salting out method and a commercially available kit method) to determine the most cost-effective and time-efficient method to extract DNA. We extracted genomic DNA from whole blood samples obtained from breast cancer patient volunteers and compared the results in terms of the quantity (concentration of DNA extracted and DNA obtained per ml of blood used) and quality (260/280 ratio and polymerase chain reaction product amplification) of the yield obtained. On average, the three methods showed no statistically significant differences in the final result, but when we accounted for the time and cost of each method, the differences were highly significant. The modified salting out method resulted in a seven- and twofold reduction in cost compared to the commercial kit and the traditional salting out method, respectively, and reduced the processing time from 3 days to 1 hour compared to the traditional salting out method. This highlights the modified salting out method as a suitable choice for laboratories and research centres, particularly when dealing with a large number of samples.
Abstract:
Results of an interlaboratory comparison on the size characterization of SiO2 airborne nanoparticles using on-line and off-line measurement techniques are discussed. This study was performed in the framework of Technical Working Area (TWA) 34, "Properties of Nanoparticle Populations", of the Versailles Project on Advanced Materials and Standards (VAMAS), in project no. 3, "Techniques for characterizing size distribution of airborne nanoparticles". Two types of nano-aerosols, consisting of (1) one population of nanoparticles with a mean diameter between 30.3 and 39.0 nm and (2) two populations of non-agglomerated nanoparticles with mean diameters between, respectively, 36.2-46.6 nm and 80.2-89.8 nm, were generated for the characterization measurements. Scanning mobility particle size spectrometers (SMPS) were used for on-line measurements of the size distributions of the produced nano-aerosols. Transmission electron microscopy, scanning electron microscopy, and atomic force microscopy were used as off-line measurement techniques for nanoparticle characterization. Samples were deposited on appropriate supports such as grids, filters, and mica plates by electrostatic precipitation and a filtration technique using SMPS-controlled generation upstream. The results for the main size distribution parameters (mean and mode diameters), obtained from several laboratories, were compared based on metrological approaches including metrological traceability, calibration, and evaluation of the measurement uncertainty. Internationally harmonized measurement procedures for the characterization of airborne SiO2 nanoparticles are proposed.
Abstract:
A method for predicting the radiation pattern of N strongly coupled antennas with mismatched sources is presented. The method facilitates the fast and accurate design of compact arrays. The prediction is based on the measured N-port S-parameters of the coupled antennas and the N active element patterns measured in a 50 Ω environment. By introducing equivalent power sources, the radiation pattern under excitation by sources with arbitrary impedances and various decoupling and matching networks (DMN) can be accurately predicted without the need for additional measurements. Two experiments were carried out for verification: pattern prediction for parasitic antennas with different loads and for antennas with a DMN. The difference between measured and predicted patterns was within 1 to 2 dB.
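A toy numpy sketch of the underlying superposition idea, not necessarily the authors' exact formulation: solve for the port waves that the mismatched sources actually deliver, then weight the measured 50 Ω active element patterns accordingly. All names and values below are illustrative assumptions.

```python
import numpy as np

def predict_pattern(S, aep, Z_src, b_src, Z0=50.0):
    """Combine measured 50-ohm active element patterns (aep: N x n_angles,
    complex) with the port waves excited by mismatched sources."""
    gamma = np.diag((Z_src - Z0) / (Z_src + Z0))        # source reflections
    N = S.shape[0]
    a = np.linalg.solve(np.eye(N) - gamma @ S, b_src)   # incident port waves
    return aep.T @ a                                    # combined far field

# Illustrative 2-port example with arbitrary (non-50-ohm) source impedances.
S = np.array([[0.1 + 0.2j, 0.4 - 0.1j], [0.4 - 0.1j, 0.1 + 0.2j]])
aep = np.exp(1j * np.outer([0.0, np.pi / 4], np.linspace(0, np.pi, 181)))
pattern = predict_pattern(S, aep, Z_src=np.array([30.0, 75.0]),
                          b_src=np.array([1.0, 0.5j]))
```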
Abstract:
A significant amount of speech is typically required for speaker verification system development and evaluation, especially in the presence of large intersession variability. This paper introduces source- and utterance-duration-normalized linear discriminant analysis (SUN-LDA) approaches to compensate for session variability in short-utterance i-vector speaker verification systems. Two variations of SUN-LDA are proposed in which normalization techniques are used to capture source variation from both short and full-length development i-vectors, one based upon pooling (SUN-LDA-pooled) and the other on concatenation (SUN-LDA-concat) across the duration- and source-dependent session variation. Both the SUN-LDA-pooled and SUN-LDA-concat techniques are shown to improve over traditional LDA on the NIST 08 truncated 10sec-10sec evaluation conditions, with the highest improvement obtained with the SUN-LDA-concat technique, which achieves a relative improvement of 8% in EER for mismatched conditions and over 3% for matched conditions over traditional LDA approaches.
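A toy numpy sketch of the pooling idea only, under the assumption of speaker-labeled development i-vectors from two sources; function names and dimensions are illustrative, and the concatenation variant is omitted.

```python
import numpy as np

def lda_projection(X, labels, dims):
    """Plain LDA: maximize between-speaker over within-speaker scatter."""
    mu = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    Sb = np.zeros_like(Sw)
    for spk in np.unique(labels):
        Xs = X[labels == spk]
        Xc = Xs - Xs.mean(axis=0)
        Sw += Xc.T @ Xc                       # within-speaker scatter
        d = (Xs.mean(axis=0) - mu)[:, None]
        Sb += len(Xs) * (d @ d.T)             # between-speaker scatter
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)
    return vecs[:, order[:dims]].real

def sun_lda_pooled(X_full, y_full, X_short, y_short, dims=100):
    """Pool full-length and truncated i-vectors so duration/source variation
    enters the scatter matrices before the LDA projection is learned."""
    X = np.vstack([X_full, X_short])
    y = np.concatenate([y_full, y_short])
    return lda_projection(X, y, dims)
```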
Abstract:
Organisations constantly seek efficiency gains for their business processes in terms of time and cost. Management accounting enables detailed cost reporting of business operations for decision-making purposes, although significant effort is required to gather accurate operational data. Process mining, on the other hand, may provide valuable insight into processes through the analysis of events recorded in logs by IT systems, but its primary focus is not on cost implications. In this paper, a framework is proposed which aims to exploit the strengths of both fields in order to better support management decisions on cost control. This is achieved by automatically merging cost data with historical data from event logs for the purposes of monitoring, predicting, and reporting process-related costs. The on-demand generation of accurate, relevant and timely cost reports, in a style akin to reports in the area of management accounting, is also illustrated. This is achieved by extending the open-source process mining framework ProM.
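The framework itself extends ProM (a Java toolkit), but the core merge-and-aggregate idea can be sketched in a few lines of Python with pandas; the event log, cost-rate table, and column names below are illustrative assumptions, not the paper's data model.

```python
import pandas as pd

# Hypothetical event log (one row per activity instance) and a cost-rate table.
events = pd.DataFrame({
    "case_id": [1, 1, 2],
    "activity": ["Register", "Approve", "Register"],
    "resource": ["clerk", "manager", "clerk"],
    "duration_h": [0.5, 1.0, 0.4],
})
rates = pd.DataFrame({"resource": ["clerk", "manager"],
                      "hourly_rate": [40.0, 90.0]})

# Merge cost data with the historical event data, then report per case.
enriched = events.merge(rates, on="resource")
enriched["cost"] = enriched["duration_h"] * enriched["hourly_rate"]
print(enriched.groupby("case_id")["cost"].sum())   # cost per process instance
```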
Abstract:
Travel time prediction has long been a topic of transportation research, but most prediction models in the literature are limited to motorways. Travel time prediction on arterial networks is challenging because it involves traffic signals and significant variability in individual vehicle travel times. The limited availability of traffic data from arterial networks makes travel time prediction even more challenging. Recently, there has been significant interest in exploiting Bluetooth data for travel time estimation. This research analysed real travel time data collected by the Brisbane City Council using Bluetooth technology on arterials. Databases of experienced average daily travel times, covering approximately 8 months, were created and classified. Thereafter, based on the data characteristics, Seasonal Auto-Regressive Integrated Moving Average (SARIMA) modelling was applied to the database for short-term travel time prediction. The SARIMA model not only takes the previous continuous lags into account, but also uses values from the same time of day on previous days for travel time prediction. This is carried out by defining a seasonality coefficient, which improves the accuracy of travel time prediction in linear models. The accuracy, robustness and transferability of the model are evaluated by comparing the real and predicted values on three sites within the Brisbane network. The results contain detailed validation for different prediction horizons (5 to 90 minutes). The model performance is evaluated mainly on congested periods and compared to the naive technique of using the historical average.
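A minimal sketch of this style of seasonal modelling using statsmodels' SARIMAX on a synthetic 5-minute series with a daily period of 288 observations. The orders, data, and horizon are assumptions for illustration, not those identified in the paper, and a seasonal period this long makes the fit slow in practice.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic 5-minute travel-time series with a daily pattern (288 obs/day).
idx = pd.date_range("2013-01-01", periods=288 * 14, freq="5min")
daily = 10 + 3 * np.sin(2 * np.pi * np.arange(len(idx)) / 288)
travel_time = pd.Series(daily + np.random.randn(len(idx)), index=idx)

# Seasonal ARIMA: non-seasonal (p,d,q) plus seasonal (P,D,Q) at period s=288,
# so the model also draws on the same time of day on previous days.
model = SARIMAX(travel_time, order=(1, 0, 1), seasonal_order=(1, 1, 1, 288))
fitted = model.fit(disp=False)
print(fitted.forecast(steps=18))       # 18 x 5 min = a 90-minute horizon
```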
Abstract:
Nanowires (NWs) have attracted broad interest and application owing to their remarkable mechanical, optical, electrical, thermal and other properties. To unlock the revolutionary characteristics of NWs, a considerable body of experimental and theoretical work has been conducted. However, due to the extremely small dimensions of NWs, in situ experiments involve inherent complexities and considerable challenges in application and manipulation. For the same reason, the presence of defects is one of the most dominant factors determining their properties. Hence, given the limitations of experiments and the need to investigate the influence of different defects, numerical simulation and modelling become increasingly important for characterizing the properties of NWs. Despite the number of numerical studies of NWs, significant work still lies ahead in terms of problem formulation, interpretation of results, identification and delineation of deformation mechanisms, and constitutive characterization of behaviour. Therefore, the primary aim of this study was to characterize both perfect and defected metal NWs. Large-scale molecular dynamics (MD) simulations were utilized to assess the mechanical properties and deformation mechanisms of different NWs under diverse loading conditions, including tension, compression, bending, vibration and torsion. The target samples include different FCC metal NWs (e.g., Cu, Ag and Au NWs), either in a perfect crystal structure or constructed with different defects (e.g., pre-existing surface/internal defects, grain/twin boundaries). The tensile deformation results show that Young's modulus was insensitive to the different styles of pre-existing defects, whereas the yield strength showed a considerable reduction. The deformation mechanisms were found to be greatly influenced by the presence of defects, i.e., different defects acted as dislocation sources, and a rich variety of deformation mechanisms was triggered. Similar conclusions were obtained from the compressive deformation, i.e., Young's modulus was insensitive to different defects, but the critical stress showed an evident reduction. Results from the bending deformation revealed that the current modified beam models accounting for the surface effect, or for both the surface effect and the axial extension effect, still suffer from a certain inaccuracy, especially for NWs with ultra-small cross-sectional size. Additionally, the flexural rigidity of the NW was found to be insensitive to different pre-existing defects, while the yield strength showed an evident decrease. In the resonance study, the first-order natural frequency of NWs with pre-existing surface defects was almost the same as that of the perfect NW, whereas a lower first-order natural frequency and a significantly degraded quality factor were observed for NWs with grain boundaries. Most importantly, <110> FCC NWs were found to exhibit a novel beat phenomenon driven by a single actuation, which resulted from the asymmetry of the lattice spacing in the (110) plane of the NW cross-section and is expected to have crucial impacts on in situ nanomechanical measurements. In particular, <110> Ag NWs with rhombic, truncated-rhombic, and triangular cross-sections were found to naturally possess two first-mode natural frequencies, which are envisioned for applications in NEMS that could operate in a non-planar regime.
The torsion results revealed that the torsional rigidity of the NW was insensitive to the presence of pre-existing defects and twin boundaries, but was evidently reduced by grain boundaries. Meanwhile, the critical angle decreased considerably for defected NWs. This study has provided a comprehensive and deep investigation of the mechanical properties and deformation mechanisms of perfect and defected NWs, which will greatly extend and enhance the existing knowledge and understanding of the properties and performance of NWs, and eventually benefit the realization of their full potential applications. All the MD models and theoretical analysis techniques established for the target NWs in this research are also applicable to future studies of other kinds of NWs. The results suggest that MD simulation is an effective tool, not only for characterizing the properties of NWs, but also for predicting novel or unexpected properties.
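For context on the resonance results, the classical Euler-Bernoulli estimate (the baseline that the surface-effect-corrected beam models above refine) already explains the beat behaviour. The first-mode frequency of a cantilever of length L is

f_1 = \frac{(\beta_1 L)^2}{2\pi L^2}\sqrt{\frac{EI}{\rho A}}, \qquad \beta_1 L \approx 1.875,

so a cross-section with unequal second moments of area I about its two principal bending axes possesses two distinct first-mode frequencies, one per bending plane, and their superposition under a single actuation produces the beat.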
Abstract:
Early warning based on real-time prediction of rain-induced instability of natural residual slopes helps to minimise human casualties from such slope failures. Slope instability prediction is complicated, as it is influenced by many factors, including soil properties, soil behaviour, slope geometry, and the location and size of deep cracks in the slope. These deep cracks can facilitate rainwater infiltration into the deep soil layers and reduce the unsaturated shear strength of residual soil. Subsequently, a slip surface can form, triggering a landslide even in partially saturated soil slopes. Although past research has shown the effects of surface cracks on soil stability, research examining the influence of deep cracks is very limited. This study aimed to develop methodologies for predicting the real-time rain-induced instability of natural residual soil slopes with deep cracks. The results can be used to warn against potential rain-induced slope failures. The literature review on rain-induced slope instability of unsaturated residual soil associated with soil cracks reveals that only limited studies have been done in the following areas:
- Methods for detecting deep cracks in residual soil slopes.
- Practical application of unsaturated soil theory in slope stability analysis.
- Mechanistic methods for real-time prediction of rain-induced residual soil slope instability in critical slopes with deep cracks.
Two natural residual soil slopes at Jombok Village, Ngantang City, Indonesia, which are located near a residential area, were investigated to obtain the parameters required for the stability analysis of the slope. A survey first identified all related field geometrical information, including slopes, roads, rivers, buildings, and boundaries of the slope. Second, the electrical resistivity tomography (ERT) method was used on the slope to identify the location and geometrical characteristics of deep cracks. The two ERT array models employed in this research were dipole-dipole and azimuthal. Next, bore-hole tests were conducted at different locations on the slope to identify soil layers and to collect undisturbed soil samples for laboratory measurement of the soil parameters required for the stability analysis. At the same bore-hole locations, Standard Penetration Tests (SPT) were undertaken. Undisturbed soil samples taken from the bore-holes were tested in the laboratory to determine the variation of the following soil properties with depth:
- Classification and physical properties such as grain size distribution, Atterberg limits, water content, dry density and specific gravity.
- Saturated and unsaturated shear strength properties, using a direct shear apparatus.
- Soil water characteristic curves (SWCC), using the filter paper method.
- Saturated hydraulic conductivity.
The following three methods were used to detect and simulate the location and orientation of cracks in the investigated slope:
(1) The electrical resistivity distribution of the sub-soil obtained from ERT.
(2) The profile of classification and physical properties of the soil, based on laboratory testing of soil samples collected from the bore-holes, together with visual observations of the cracks on the slope surface.
(3) The stress distribution obtained from 2D dynamic analysis of the slope using QUAKE/W software, together with the laboratory-measured soil parameters and earthquake records of the area.
It was assumed that the deep crack in the slope under investigation was generated by earthquakes. A good agreement was obtained when comparing the location and orientation of the cracks detected by Method-1 and Method-2. However, the cracks simulated by Method-3 were not in good agreement with the output of Method-1 and Method-2. This may have been due to the material properties used and the assumptions made for the analysis. From Method-1 and Method-2, it can be concluded that the ERT method can be used to detect the location and orientation of a crack in a soil slope when the ERT is conducted in very dry or very wet soil conditions. In this study, the cracks detected by the ERT were used for the stability analysis of the slope. The stability of the slope was determined using the factor of safety (FOS) of a critical slip surface obtained by SLOPE/W using the limit equilibrium method. Pore-water pressure values for the stability analysis were obtained by coupling a transient seepage analysis of the slope using the finite element based software SEEP/W. A parametric study on the stability of the investigated slope revealed that the existence of deep cracks and their location in the soil slope are critical for its stability. The following two steps are proposed to predict the rain-induced instability of a residual soil slope with cracks:
(a) Step-1: A transient stability analysis of the slope is conducted from the date of the investigation (initial conditions are based on the investigation) to the current date, using measured rainfall data. The stability analyses are then continued for the next 12 months using predicted annual rainfall based on the previous five years' rainfall data for the area.
(b) Step-2: The stability of the slope is calculated in real time using real-time measured rainfall. In this calculation, rainfall is predicted for the next hour or 24 hours, and the stability of the slope is calculated one hour or 24 hours in advance using real-time rainfall data.
If the Step-1 analysis shows critical stability for the forthcoming year, it is recommended that Step-2 be used for more accurate warning against future failure of the slope. In this research, the application of Step-1 to an investigated slope (Slope-1) showed that its stability was not approaching a critical value for the year 2012 (until 31st December 2012); therefore, the application of Step-2 was not necessary for that year. A case study (Slope-2) was used to verify the applicability of the complete proposed predictive method. A landslide occurred at Slope-2 on 31st October 2010. Transient seepage and stability analyses of the slope, using data obtained from field tests such as bore-hole, SPT, ERT and laboratory tests, were conducted on 12th June 2010 following Step-1, and found that the slope was in a critical condition on that date. It was then shown that the application of Step-2 could have predicted this failure with sufficient warning time.
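The thesis couples SEEP/W transient seepage results with SLOPE/W limit-equilibrium searches. As a much simpler illustration of why rainfall drives the FOS down, the sketch below evaluates an infinite-slope factor of safety with a Fredlund-type unsaturated strength term (pore-air pressure taken as zero); all parameter values are assumptions.

```python
import numpy as np

def infinite_slope_fos(c_eff, phi_eff, phi_b, gamma, z, beta_deg, suction):
    """Factor of safety of an infinite unsaturated slope: Fredlund-type
    strength with matric suction (u_a - u_w) in kPa and u_a = 0."""
    beta = np.radians(beta_deg)
    normal = gamma * z * np.cos(beta) ** 2            # normal stress, kPa
    shear = gamma * z * np.sin(beta) * np.cos(beta)   # driving shear stress
    strength = (c_eff
                + normal * np.tan(np.radians(phi_eff))
                + suction * np.tan(np.radians(phi_b)))
    return strength / shear

# Rainfall infiltration erodes suction, so the FOS drops as suction -> 0.
for s in (60.0, 30.0, 0.0):
    print(s, infinite_slope_fos(c_eff=5.0, phi_eff=30.0, phi_b=15.0,
                                gamma=18.0, z=3.0, beta_deg=35.0, suction=s))
```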
Abstract:
A people-to-people matching system (or match-making system) is a system in which users join with the objective of meeting other users with a common need. Real-world examples of these systems include employer-employee matching (in job search networks), mentor-student matching (in university social networks), consumer-to-consumer matching (in marketplaces) and male-female matching (in an online dating network). The network underlying these systems consists of two groups of users, and the relationships between users need to be captured for developing an efficient match-making system. Most existing studies utilize information either about each user in isolation or about their interactions separately, and develop recommender systems using one form of information only. It is imperative to understand the linkages among the users in the network and use them in developing a match-making system. This study utilizes several social network analysis methods, such as graph theory, the small-world phenomenon, centrality analysis and density analysis, to gain insight into the entities and their relationships present in this network. This paper also proposes a new type of graph called an "attributed bipartite graph". Using these analyses and the proposed type of graph, an efficient hybrid recommender system is developed which generates recommendations for new users and shows improved accuracy over the baseline methods.
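A minimal sketch of what an attributed bipartite graph might look like in networkx, together with the density and centrality analyses mentioned above; the node names, attributes, and edges are illustrative assumptions, not the paper's data model.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Toy attributed bipartite graph: two user groups, each node carrying
# profile attributes, edges recording interactions.
B = nx.Graph()
B.add_nodes_from([("e1", {"bipartite": 0, "sector": "IT"}),
                  ("e2", {"bipartite": 0, "sector": "finance"})])
B.add_nodes_from([("c1", {"bipartite": 1, "skill": "java"}),
                  ("c2", {"bipartite": 1, "skill": "sql"})])
B.add_edges_from([("e1", "c1"), ("e1", "c2"), ("e2", "c2")])

employers = {n for n, d in B.nodes(data=True) if d["bipartite"] == 0}
print(bipartite.density(B, employers))     # density analysis
print(nx.degree_centrality(B))             # centrality analysis
```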
Abstract:
A qualitative analysis of the expected dilatation strain field in the vicinity of an array of grain-boundary (GB) dislocations is presented. The analysis provides a basis for the prediction of the critical current densities (jc) across low-angle YBa2Cu3O7-δ (YBCO) GBs as a function of their energy. The introduction of the GB energy allows the extension of the analysis to high-angle GBs using established models which predict the GB energy as a function of the misorientation angle. The results are compared to published data for jc across [001]-tilt YBCO GBs for the full range of misorientations, showing a good fit. Since the GB energy is directly related to the GB structure, the analysis may allow a generalization of the scaling behavior of jc with the GB energy. © 1995 The American Physical Society.
Abstract:
Objectives: The purpose of the study was to establish regression equations that could be used to predict muscle thickness and pennation angle at different contraction intensities from electromyography (EMG) based measures of muscle activation during isometric contractions. Design: Cross-sectional study. Methods: Simultaneous ultrasonography and EMG were used to measure the pennation angle, muscle thickness and muscle activity of the rectus femoris and vastus lateralis muscles during graded isometric knee extension contractions performed on a Cybex dynamometer. Data from fifteen male soccer players were collected in increments of approximately 25% of the maximum voluntary contraction (MVC) intensity, ranging from rest to MVC. Results: There was a significant correlation (P < 0.05) between the ultrasound predictors and EMG measures for the muscle thickness of the rectus femoris, with an R2 value of 0.68. There was no significant correlation (P > 0.05) between the ultrasound-measured pennation angle of the vastus lateralis and EMG muscle activity, with an R2 value of 0.40. Conclusions: The regression equations can be used to characterise muscle thickness more accurately and to determine how it changes with contraction intensity; this provides improved estimates of muscle force when using musculoskeletal models.
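A minimal sketch of fitting and scoring one such regression equation (muscle thickness as a linear function of normalized EMG amplitude); the data points below are synthetic placeholders, not the study's measurements.

```python
import numpy as np

# Synthetic data: muscle thickness (mm) at increasing EMG intensity (%MVC).
emg = np.array([0, 25, 50, 75, 100], dtype=float)       # %MVC
thickness = np.array([22.1, 23.0, 24.2, 25.1, 26.3])    # mm

# Ordinary least-squares line and its coefficient of determination.
slope, intercept = np.polyfit(emg, thickness, 1)
pred = slope * emg + intercept
ss_res = np.sum((thickness - pred) ** 2)
ss_tot = np.sum((thickness - thickness.mean()) ** 2)
print(f"thickness = {slope:.3f}*EMG + {intercept:.2f}, "
      f"R^2 = {1 - ss_res / ss_tot:.2f}")
```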
Abstract:
Lean body mass (LBM) and muscle mass remain difficult to quantify in large epidemiological studies due to the non-availability of inexpensive methods. We therefore developed anthropometric prediction equations to estimate LBM and appendicular lean soft tissue (ALST) using dual-energy X-ray absorptiometry (DXA) as the reference method. Healthy volunteers (n = 2220; 36% females; age 18-79 y) representing a wide range of body mass index (14-44 kg/m2) participated in this study. Their LBM, including ALST, was assessed by DXA along with anthropometric measurements. The sample was divided into prediction (60%) and validation (40%) sets. In the prediction set, a number of prediction models were constructed using the DXA-measured LBM and ALST estimates as dependent variables and combinations of anthropometric indices as independent variables. These equations were cross-validated in the validation set. Simple equations using age, height and weight explained > 90% of the variation in LBM and ALST in both men and women. Additional variables (hip and limb circumferences and the sum of skinfold thicknesses) increased the explained variation by 5-8% in the fully adjusted models predicting LBM and ALST. More complex equations using all of the above anthropometric variables could predict the DXA-measured LBM and ALST accurately, as indicated by a low standard error of the estimate (LBM: 1.47 kg and 1.63 kg for men and women, respectively) as well as good agreement in Bland-Altman analyses. These equations could be a valuable tool in large epidemiological studies assessing these body compartments in Indians and other population groups with similar body composition.
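A minimal sketch of the prediction/validation workflow: fit a simple age-height-weight equation on a 60% split, then report the standard error of the estimate and Bland-Altman limits of agreement on the held-out 40%. The data are synthetic and the coefficients are not the published equations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2220
height = rng.normal(165, 10, n)
weight = rng.normal(65, 12, n)
age = rng.uniform(18, 79, n)
# Synthetic "DXA-measured" LBM standing in for the reference values.
lbm_dxa = 0.3 * height + 0.4 * weight - 0.05 * age + rng.normal(0, 1.5, n)

# 60/40 prediction/validation split, mirroring the study design.
idx = rng.permutation(n)
tr, va = idx[:int(0.6 * n)], idx[int(0.6 * n):]
X = np.column_stack([np.ones(n), age, height, weight])
beta, *_ = np.linalg.lstsq(X[tr], lbm_dxa[tr], rcond=None)
pred = X[va] @ beta

see = np.sqrt(np.mean((lbm_dxa[va] - pred) ** 2))   # standard error of estimate
diff = pred - lbm_dxa[va]                           # Bland-Altman differences
print(f"SEE = {see:.2f} kg, bias = {diff.mean():.2f} kg, "
      f"LoA = {diff.mean() - 1.96 * diff.std():.2f} "
      f"to {diff.mean() + 1.96 * diff.std():.2f} kg")
```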
Abstract:
In this paper, a refined classical noise prediction method based on VISSIM and the FHWA noise prediction model is formulated to analyze the sound level contributed by traffic on the Nanjing Lukou airport connecting freeway before and after widening. The aims of this research are to (i) assess the traffic noise impact on the Nanjing University of Aeronautics and Astronautics (NUAA) campus before and after freeway widening, (ii) compare the prediction results with field data to test the accuracy of the method, and (iii) analyze the relationship between traffic characteristics and sound level. The results indicate that the mean difference between model predictions and field measurements is acceptable. The traffic composition impact study indicates that buses (including mid-sized trucks) and heavy goods vehicles contribute a significant proportion of the total noise power despite their low traffic volume. In addition, speed analysis offers an explanation for the minor differences in noise level across time periods. Future work will aim at reducing model error by focusing on noise barrier analysis using the FEM/BEM method and modifying the vehicle noise emission equation through field experimentation.
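A heavily simplified sketch of the FHWA-style reasoning behind the traffic-composition finding: each vehicle class contributes a reference emission level adjusted for flow and distance, and the class contributions are energy-summed. This is a textbook simplification with assumed values, not the calibrated FHWA model or the paper's refined method.

```python
import numpy as np

def leq_contribution(l_ref, volume_per_h, speed_kmh, dist_m, d_ref=15.0):
    """Per-class hourly level: reference emission level plus traffic-flow
    and distance-divergence adjustments (simplified, line-source form)."""
    return (l_ref
            + 10 * np.log10(volume_per_h / speed_kmh)
            + 10 * np.log10(d_ref / dist_m))

def combine_levels(levels_db):
    """Energy-sum individual class contributions into a total level."""
    return 10 * np.log10(np.sum(10 ** (np.asarray(levels_db) / 10)))

# Heavy vehicles can dominate despite low volume: their reference emission
# level is far higher, and decibels combine on an energy basis.
cars = leq_contribution(l_ref=68.0, volume_per_h=1800, speed_kmh=80, dist_m=60)
trucks = leq_contribution(l_ref=84.0, volume_per_h=120, speed_kmh=70, dist_m=60)
print(cars, trucks, combine_levels([cars, trucks]))
```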