900 results for Multi-scheme ensemble prediction system


Relevance: 40.00%

Publisher:

Abstract:

As congestion management strategies begin to put more emphasis on person trips than vehicle trips, the need for vehicle occupancy data has become more critical. The traditional methods of collecting these data include the roadside windshield method and the carousel method. These methods are labor-intensive and expensive. An alternative to these traditional methods is to make use of the vehicle occupancy information in traffic accident records. This method is cost-effective and may provide better spatial and temporal coverage than the traditional methods. However, this method is subject to potential biases resulting from under- and over-involvement of certain population sectors and certain types of accidents in traffic accident records. In this dissertation, three such potential biases, i.e., accident severity, driver's age, and driver's gender, were investigated and the corresponding bias factors were developed as needed. The results show that although multi-occupant vehicles are involved in higher percentages of severe accidents than are single-occupant vehicles, multi-occupant vehicles in the whole accident vehicle population were not overrepresented in the accident database. On the other hand, a significant difference was found between the distributions of the ages and genders of drivers involved in accidents and those of the general driving population. An information system that incorporates adjustments for the potential biases was developed to estimate the average vehicle occupancies (AVOs) for different types of roadways on the Florida state roadway system. A reasonableness check of the results from the system shows AVO estimates that are highly consistent with expectations. In addition, comparisons of AVOs from accident data with the field estimates show that the two data sources produce relatively consistent results.
While accident records can be used to obtain historical AVO trends and field data can be used to estimate current AVOs, no known methods have been developed to project future AVOs. Four regression models for predicting weekday AVOs at different levels of geographic area and roadway type were developed as part of this dissertation. The models show that socioeconomic factors such as income, vehicle ownership, and employment have a significant impact on AVOs.
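The bias-adjustment idea in the abstract can be illustrated with a short sketch. This is not the dissertation's actual procedure: the function name, the driver groups, and all numbers are hypothetical, and a single combined age/gender group stands in for the separate bias factors.

```python
from collections import Counter

def avo_with_bias_adjustment(records, population_share):
    """Estimate average vehicle occupancy (AVO) from accident records,
    reweighting each record so that over- or under-represented driver
    groups match their share of the general driving population."""
    n = len(records)
    accident_count = Counter(group for _, group in records)
    weight_sum = weighted_occ = 0.0
    for occupancy, group in records:
        # bias factor: population share divided by accident-record share
        w = population_share[group] / (accident_count[group] / n)
        weight_sum += w
        weighted_occ += w * occupancy
    return weighted_occ / weight_sum

# Hypothetical data: young male drivers over-represented in accidents.
records = [(1, "young_male"), (2, "young_male"),
           (1, "older_female"), (3, "older_female")]
shares = {"young_male": 0.3, "older_female": 0.7}
avo = avo_with_bias_adjustment(records, shares)  # ~1.85 vs. 1.75 unadjusted
```

Reweighting shifts the estimate because the under-represented group here happens to travel with higher occupancy.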

Relevance: 40.00%

Publisher:

Abstract:

With the rapid growth of the Internet, computer attacks are increasing at a fast pace and can easily cause millions of dollars in damage to an organization. Detecting these attacks is an important issue in computer security. There are many types of attacks, and they fall into four main categories: Denial of Service (DoS) attacks, Probe, User to Root (U2R) attacks, and Remote to Local (R2L) attacks. Within these categories, DoS and Probe attacks appear with high frequency over short periods of time when they strike a system. They differ from normal traffic data and can easily be separated from normal activities. On the contrary, U2R and R2L attacks are embedded in the data portions of the packets and normally involve only a single connection, which makes it difficult to achieve satisfactory detection accuracy for these two attack types. Therefore, we focus on studying the ambiguity problem between normal activities and U2R/R2L attacks. The goal is to build a detection system that can accurately and quickly detect these two attacks. In this dissertation, we design a two-phase intrusion detection approach. In the first phase, a correlation-based feature selection algorithm is proposed to increase the speed of detection. Features with poor ability to predict the signatures of attacks and features inter-correlated with one or more other features are considered redundant. Such features are removed so that only indispensable information about the original feature space remains. In the second phase, we develop an ensemble intrusion detection system to achieve accurate detection performance. The proposed method includes multiple feature-selecting intrusion detectors and a data mining intrusion detector. The former consist of a set of detectors, each of which uses a fuzzy clustering technique and belief theory to solve the ambiguity problem.
The latter applies data mining techniques to automatically extract computer users’ normal behavior from training network traffic data. The final decision is a combination of the outputs of the feature-selecting and data mining detectors. The experimental results indicate that our ensemble approach not only significantly reduces the detection time but also effectively detects U2R and R2L attacks that contain degrees of ambiguous information.
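The correlation-based redundancy filter described in the first phase can be sketched as follows. This is an illustrative reconstruction, not the dissertation's exact algorithm; the threshold value and function names are assumptions.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length feature columns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_features(columns, threshold=0.9):
    """Greedy correlation filter: a feature is kept only if it is not
    strongly inter-correlated with any feature already kept."""
    kept = []
    for i, col in enumerate(columns):
        if all(abs(pearson(col, columns[j])) < threshold for j in kept):
            kept.append(i)
    return kept
```

A column that is (nearly) a linear copy of an earlier one is dropped; a real detector would also score each feature's predictive power for attack signatures, which this sketch omits.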

Relevance: 40.00%

Publisher:

Abstract:

The successful performance of a hydrological model is usually challenged by the quality of the sensitivity analysis, calibration, and uncertainty analysis carried out in the modeling exercise and the subsequent simulation results. This is especially important under changing climatic conditions, where additional uncertainties associated with climate models and downscaling processes increase the complexity of the hydrological modeling system. In response to these challenges, and to improve the performance of hydrological models under changing climatic conditions, this research proposed five new methods for supporting hydrological modeling. First, a design-of-experiment-aided sensitivity analysis and parameterization (DOE-SAP) method was proposed to identify the significant parameters and provide more reliable sensitivity analysis for improving parameterization during hydrological modeling. In the case study, it achieved better calibration results along with an advanced sensitivity analysis of the significant parameters and their interactions. Second, a comprehensive uncertainty evaluation scheme was developed to evaluate three uncertainty analysis methods: the sequential uncertainty fitting version 2 (SUFI-2), generalized likelihood uncertainty estimation (GLUE), and parameter solution (ParaSol) methods. The results showed that SUFI-2 performed better than the other two methods based on the calibration and uncertainty analysis results, and the proposed evaluation scheme proved capable of selecting the most suitable uncertainty method for a given case study. Third, a novel sequential multi-criteria based calibration and uncertainty analysis (SMC-CUA) method was proposed to improve the efficiency of calibration and uncertainty analysis and to control the phenomenon of equifinality.
The results showed that the SMC-CUA method was able to provide better uncertainty analysis results with high computational efficiency compared to the SUFI-2 and GLUE methods and control parameter uncertainty and the equifinality effect without sacrificing simulation performance. Fourth, an innovative response based statistical evaluation method (RESEM) was proposed for estimating the uncertainty propagated effects and providing long-term prediction for hydrological responses under changing climatic conditions. By using RESEM, the uncertainty propagated from statistical downscaling to hydrological modeling can be evaluated. Fifth, an integrated simulation-based evaluation system for uncertainty propagation analysis (ISES-UPA) was proposed for investigating the effects and contributions of different uncertainty components to the total propagated uncertainty from statistical downscaling. Using ISES-UPA, the uncertainty from statistical downscaling, uncertainty from hydrological modeling, and the total uncertainty from two uncertainty sources can be compared and quantified. The feasibility of all the methods has been tested using hypothetical and real-world case studies. The proposed methods can also be integrated as a hydrological modeling system to better support hydrological studies under changing climatic conditions. The results from the proposed integrated hydrological modeling system can be used as scientific references for decision makers to reduce the potential risk of damages caused by extreme events for long-term water resource management and planning.
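The GLUE method evaluated above can be sketched in a few lines. This is a generic illustration under assumed choices (Nash-Sutcliffe efficiency as the likelihood measure, a fixed behavioural threshold, a toy linear model), not the evaluation scheme's actual configuration.

```python
def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 for a perfect fit, <= 0 for a model
    no better than the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    err = sum((s - o) ** 2 for s, o in zip(sim, obs))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - err / var

def glue(model, param_samples, obs, threshold=0.5):
    """Keep 'behavioural' parameter sets (likelihood above the threshold)
    and return them with likelihood weights normalised to sum to one."""
    scored = [(p, nse(model(p), obs)) for p in param_samples]
    behavioural = [(p, s) for p, s in scored if s > threshold]
    total = sum(s for _, s in behavioural)
    return [(p, s / total) for p, s in behavioural]

# Toy linear model whose true parameter is 2.0:
posterior = glue(lambda k: [k * t for t in range(1, 5)],
                 [1.0, 2.0, 2.1, 5.0], [2.0, 4.0, 6.0, 8.0])
```

The surviving weighted parameter sets are what GLUE uses to form prediction bounds; several near-optimal sets surviving together is exactly the equifinality effect the SMC-CUA method aims to control.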

Relevance: 40.00%

Publisher:

Abstract:

First-order transitions of systems in which both lattice site occupancy and lattice spacing fluctuate, such as cluster crystals, cannot be efficiently studied by traditional simulation methods, which necessarily fix one of these two degrees of freedom. The difficulty, however, can be surmounted by the generalized [N]pT ensemble [J. Chem. Phys. 136, 214106 (2012)]. Here we show that histogram reweighting and the [N]pT ensemble can be used to study an isostructural transition between cluster crystals of different occupancy in the generalized exponential model of index 4 (GEM-4). Extending this scheme to finite-size scaling studies also allows us to accurately determine the critical point parameters and to verify that the transition belongs to the Ising universality class.
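Histogram reweighting, as used here, extrapolates averages sampled at one inverse temperature to a nearby one. A minimal single-histogram sketch follows; the energies and temperatures are hypothetical, and the paper applies the same idea to the full [N]pT histogram rather than to energies alone.

```python
import math

def reweight_mean_energy(energies, beta_sim, beta_new):
    """Single-histogram reweighting: estimate <E> at beta_new from samples
    drawn at beta_sim, weighting each sample by exp(-(beta_new - beta_sim)*E).
    The maximum log-weight is subtracted for numerical stability."""
    dbeta = beta_new - beta_sim
    log_w = [-dbeta * e for e in energies]
    shift = max(log_w)
    w = [math.exp(l - shift) for l in log_w]
    z = sum(w)
    return sum(wi * e for wi, e in zip(w, energies)) / z
```

Reweighting to the sampled temperature returns the plain sample mean, while moving to larger beta (lower temperature) shifts the estimate toward lower energies, which is what lets a single simulation scan the neighbourhood of a transition.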

Relevance: 40.00%

Publisher:

Abstract:

We know now from radial velocity surveys and transit space missions that planets only a few times more massive than our Earth are frequent around solar-type stars. Fundamental questions about their formation history, physical properties, internal structure, and atmosphere composition are, however, still to be solved. We present here the detection of a system of four low-mass planets around the bright (V = 5.5) and close-by (6.5 pc) star HD 219134. This is the first result of the Rocky Planet Search programme with HARPS-N on the Telescopio Nazionale Galileo in La Palma. The inner planet orbits the star in 3.0935 ± 0.0003 days, on a quasi-circular orbit with a semi-major axis of 0.0382 ± 0.0003 AU. Spitzer observations allowed us to detect the transit of the planet in front of the star, making HD 219134 b the nearest known transiting planet to date. From the amplitude of the radial velocity variation (2.25 ± 0.22 m s-1) and the observed depth of the transit (359 ± 38 ppm), the planet mass and radius are estimated to be 4.36 ± 0.44 M⊕ and 1.606 ± 0.086 R⊕, leading to a mean density of 5.76 ± 1.09 g cm-3, suggesting a rocky composition. One additional planet with a minimum mass of 2.78 ± 0.65 M⊕ moves on a close-in, quasi-circular orbit with a period of 6.767 ± 0.004 days. The third planet in the system has a period of 46.66 ± 0.08 days and a minimum mass of 8.94 ± 1.13 M⊕, at 0.233 ± 0.002 AU from the star. Its eccentricity is 0.46 ± 0.11. The period of this planet is close to the rotational period of the star estimated from variations of activity indicators (42.3 ± 0.1 days). The planetary origin of the signal is, however, the preferred solution, as no indication of variation at the corresponding frequency is observed for activity-sensitive parameters.
Finally, a fourth additional longer-period planet of mass 71 M⊕ orbits the star in 1842 days, on an eccentric orbit (e = 0.34 ± 0.17) at a distance of 2.56 AU. The photometric time series and radial velocities used in this work are available in electronic form at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/584/A72
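The quoted mean density of HD 219134 b follows directly from the mass and radius. A short check, assuming standard Earth constants (the paper's adopted constants may differ slightly):

```python
import math

M_EARTH_KG = 5.972e24   # assumed Earth mass
R_EARTH_M = 6.371e6     # assumed mean Earth radius

def mean_density_cgs(mass_earths, radius_earths):
    """Bulk density in g cm^-3 from a mass in Earth masses and a radius
    in Earth radii: rho = M / (4/3 * pi * R^3)."""
    m = mass_earths * M_EARTH_KG
    r = radius_earths * R_EARTH_M
    rho_si = m / (4.0 / 3.0 * math.pi * r ** 3)  # kg m^-3
    return rho_si / 1000.0

# HD 219134 b: 4.36 M_Earth, 1.606 R_Earth -> about 5.8 g cm^-3
rho_b = mean_density_cgs(4.36, 1.606)
```

This reproduces the quoted 5.76 ± 1.09 g cm-3 to within the rounding of the adopted Earth constants.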

Relevance: 40.00%

Publisher:

Abstract:

The multiuser selection scheduling concept has recently been proposed in the literature in order to increase the multiuser diversity gain and overcome the significant feedback requirements of opportunistic scheduling schemes. The main idea is that reducing the feedback overhead saves per-user power that could potentially be added to the data transmission. In this work, the authors propose to integrate the principle of multiuser selection with the proportional fair scheduling scheme. This is aimed especially at power-limited, multi-device systems in non-identically distributed fading channels. For the performance analysis, they derive closed-form expressions for the outage probabilities and the average system rate of the delay-sensitive and the delay-tolerant systems, respectively, and compare them with full feedback multiuser diversity schemes. The discrete rate region is analytically presented, where the maximum average system rate can be obtained by properly choosing the number of partial devices. They jointly optimise the number of partial devices and the per-device power saving in order to maximise the average system rate under the power requirement. Finally, the results demonstrate that the proposed scheme, which leverages the saved feedback power for data transmission, can outperform full feedback multiuser diversity in non-identical Rayleigh fading of the devices’ channels.
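Proportional fair scheduling, the baseline integrated here, can be sketched in one step. This is the textbook PF rule, not the authors' selection scheme; the averaging time constant tc and all rate values are hypothetical.

```python
def proportional_fair_step(rates, avg_throughput, tc=100.0):
    """One proportional fair decision: schedule the device with the largest
    ratio of instantaneous rate to exponentially averaged throughput, then
    update the averages with time constant tc."""
    metric = [r / max(t, 1e-12) for r, t in zip(rates, avg_throughput)]
    chosen = metric.index(max(metric))
    new_avg = [
        (1 - 1 / tc) * t + (1 / tc) * (r if i == chosen else 0.0)
        for i, (r, t) in enumerate(zip(rates, avg_throughput))
    ]
    return chosen, new_avg
```

A device with a modest instantaneous rate but a starved average can still win the slot, which is the fairness/throughput balance the multiuser selection scheme builds on.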

Relevance: 40.00%

Publisher:

Abstract:

In this paper, we consider a multiuser downlink wiretap network consisting of one base station (BS) equipped with A_A antennas, N_B single-antenna legitimate users, and N_E single-antenna eavesdroppers over Nakagami-m fading channels. In particular, we introduce a joint secure transmission scheme that adopts transmit antenna selection (TAS) at the BS and explores threshold-based selection diversity (tSD) scheduling over the legitimate users to achieve good secrecy performance while maintaining low implementation complexity. More specifically, in an effort to quantify the secrecy performance of the considered system, two practical scenarios are investigated, i.e., Scenario I: the eavesdropper’s channel state information (CSI) is unavailable at the BS, and Scenario II: the eavesdropper’s CSI is available at the BS. For Scenario I, novel exact closed-form expressions for the secrecy outage probability are derived, which are valid for general networks with an arbitrary number of legitimate users, antenna configuration, number of eavesdroppers, and switched threshold. For Scenario II, we take the ergodic secrecy rate as the principal performance metric and derive novel closed-form expressions for the exact ergodic secrecy rate. Additionally, we provide simple asymptotic expressions for the secrecy outage probability and ergodic secrecy rate under two distinct cases, i.e., Case I: the legitimate user is located close to the BS, and Case II: both the legitimate user and the eavesdropper are located close to the BS. Our findings reveal that the secrecy diversity order is A_A m_A and the slope of the secrecy rate is one under Case I, while the secrecy diversity order and the slope of the secrecy rate collapse to zero under Case II, where a secrecy performance floor occurs.
Finally, when the switched threshold is carefully selected, the considered scheduling scheme outperforms other well-known existing schemes in terms of the secrecy performance and complexity tradeoff.
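The secrecy outage metric analysed in Scenario I can be illustrated with a small Monte Carlo sketch. This is not the paper's closed-form analysis: it assumes unit-mean Nakagami-m power gains (Gamma-distributed), ideal TAS on the legitimate link, a single eavesdropper, and hypothetical SNRs and target secrecy rate.

```python
import math
import random

def secrecy_outage_tas(n_antennas, m_b, m_e, snr_b, snr_e, rate_s,
                       trials=20000, seed=1):
    """Monte Carlo secrecy outage estimate with transmit antenna selection:
    the BS uses the antenna with the best legitimate channel gain.
    Nakagami-m power gains are modelled as Gamma(m, 1/m) (unit mean)."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        g_b = max(rng.gammavariate(m_b, 1.0 / m_b) for _ in range(n_antennas))
        g_e = rng.gammavariate(m_e, 1.0 / m_e)
        c_s = max(0.0, math.log2(1 + snr_b * g_b) - math.log2(1 + snr_e * g_e))
        outages += c_s < rate_s
    return outages / trials
```

Adding transmit antennas lowers the estimated outage, consistent with TAS providing diversity on the legitimate link.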

Relevance: 40.00%

Publisher:

Abstract:

Abstract not available

Relevance: 40.00%

Publisher:

Abstract:

By proposing a numerically based method built on PCA-ANFIS (Adaptive Neuro-Fuzzy Inference System), this paper focuses on solving the problem of the uncertain cycle of water injection in oilfields. After the dimension of the original data is reduced by PCA, ANFIS is applied to train and test the new data. The correctness of the PCA-ANFIS models is verified against injection statistics collected from 116 wells inside an oilfield; the average absolute testing error is 1.80 months. Compared with non-PCA models, whose average error of 4.33 months is far larger, the testing accuracy is greatly enhanced by our approach. These tests show that the PCA-ANFIS method is robust in predicting the effective cycle of water injection, which helps oilfield developers design water injection schemes.
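The PCA front end of the approach can be illustrated in the two-dimensional case, where the principal axis has a closed form. This is a generic sketch, not the paper's implementation; the ANFIS stage is omitted, and the data and function names are hypothetical.

```python
import math

def pca_2d(points):
    """First principal component of 2-D data via the closed-form
    eigendecomposition of the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
    if abs(sxy) > 1e-12:
        v = (lam - syy, sxy)   # corresponding eigenvector
    else:                      # covariance already axis-aligned
        v = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(v[0], v[1])
    return (v[0] / norm, v[1] / norm), lam

def project(points, axis):
    """Reduce each 2-D point to its single coordinate along the axis."""
    return [x * axis[0] + y * axis[1] for x, y in points]
```

In the paper's setting, the reduced coordinates (for many more dimensions) become the inputs that ANFIS is trained and tested on.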

Relevance: 40.00%

Publisher:

Abstract:

One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied that solve the problems of memory storage and huge matrix inversion needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids filter inbreeding problems, which emerge when the ensemble spread underestimates the true error covariance. In VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate the 30 171-element model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by the VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we have presented a non-intrusive approach to coupling the model and a DA scheme, in which an external program is used to send and receive information between the model and the DA procedure using files.
The advantage of this method is that the model code changes needed are minimal, only a few lines that facilitate input and output. Apart from being simple to couple, the approach can be employed even if the two were written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead increase caused by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to a multi-purpose hydrodynamic model, COHERENS, to assimilate Total Suspended Matter (TSM) in lake Säkylän Pyhäjärvi. The lake has an area of 154 km2 with an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images for 7 days between May 16 and July 6, 2009 were available. The effect of the organic matter has been computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF have been compared with the measurements recorded at an automatic station located in the north-western part of the lake. However, due to the sparsity of the TSM data in both time and space, they could not be well matched. The use of multiple automatic stations with real-time data is important to overcome the time sparsity problem; with DA, this will help in better understanding, for instance, environmental hazard variables. We have found that using a very large ensemble size does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance.
The successful implementation of the non-intrusive VEnKF and the ensemble size limit for performance lead to the emerging area of Reduced Order Modeling (ROM). To save computational resources, running the full-blown model is avoided in ROM. When ROM is applied with the non-intrusive DA approach, it may result in a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
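The ensemble estimate of the error covariance that such filters rely on can be sketched directly. This is the plain sample covariance over ensemble members, not VEnKF itself; the state vectors here are tiny and hypothetical.

```python
def ensemble_covariance(ensemble):
    """Sample mean and covariance of the model state estimated from an
    ensemble of state vectors (lists of equal length)."""
    n = len(ensemble)
    d = len(ensemble[0])
    mean = [sum(member[i] for member in ensemble) / n for i in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for member in ensemble:
        dev = [member[i] - mean[i] for i in range(d)]
        for i in range(d):
            for j in range(d):
                cov[i][j] += dev[i] * dev[j] / (n - 1)
    return mean, cov
```

With few members the estimate is rank-deficient and can underestimate the spread, which is the inbreeding problem VEnKF counters by resampling the ensemble whenever measurements arrive.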

Relevance: 40.00%

Publisher:

Abstract:

In this contribution, a system identification procedure for a two-input Wiener model suitable for the analysis of the disturbance behavior of integrated nonlinear circuits is presented. The identified block model comprises two linear dynamic blocks and one static nonlinear block, which are determined using a parameterized approach. In order to characterize the linear blocks, a correlation analysis using a white noise input in combination with a model reduction scheme is adopted. After the linear blocks have been characterized, a linear set of equations is set up from the output spectrum under single-tone excitation at each input, whose solution gives the coefficients of the nonlinear block. By this data-based black-box approach, the distortion behavior of a nonlinear circuit under the influence of an interfering signal at an arbitrary input port can be determined. Such an interfering signal can be, for example, an electromagnetic interference signal which conductively couples into the port of consideration. © 2011 Author(s).

Relevance: 40.00%

Publisher:

Abstract:

The purpose of this dissertation is to evaluate the potential downstream influence of the Indian Ocean (IO) on El Niño/Southern Oscillation (ENSO) forecasts through the oceanic pathway of the Indonesian Throughflow (ITF), atmospheric teleconnections between the IO and Pacific, and assimilation of IO observations. The impact of sea surface salinity (SSS) in the Indo-Pacific region is also assessed to address known problems with operational coupled model precipitation forecasts. The ITF normally drains warm fresh water from the Pacific, reducing the mixed layer depth (MLD). A shallower MLD amplifies large-scale oceanic Kelvin/Rossby waves, thus giving a ~10% larger response and more realistic ENSO sea surface temperature (SST) variability compared to observations when the ITF is open. In order to isolate the impact of the IO sector atmospheric teleconnections on ENSO, experiments are contrasted that selectively couple/decouple the interannual forcing in the IO. The interannual variability of IO SST forcing is responsible for 3-month-lagged widespread downwelling in the Pacific, assisted by off-equatorial curl, leading to a warmer NINO3 SST anomaly and improved ENSO validation (significant from 3-9 months). Isolating the impact of observations in the IO sector using regional assimilation identifies large-scale warming in the IO that acts to intensify the easterlies of the Walker circulation and increases pervasive upwelling across the Pacific, cooling the eastern Pacific and improving ENSO validation (r ~ 0.05, RMS ~ 0.08 °C). Lastly, the positive impact of more accurate fresh water forcing is demonstrated to address inadequate precipitation forecasts in operational coupled models. Aquarius SSS assimilation improves the mixed layer density and enhances mixing, setting off upwelling that eventually cools the eastern Pacific after 6 months, counteracting the pervasive warming of most coupled models and significantly improving ENSO validation from 5-11 months.
In summary, the ITF oceanic pathway, the atmospheric teleconnection, the impact of observations in the IO, and improved Indo-Pacific SSS are all responsible for ENSO forecast improvements, and so each aspect of this study contributes to a better overall understanding of ENSO. Therefore, the upstream influence of the IO should be thought of as integral to the functioning of ENSO phenomenon.

Relevance: 40.00%

Publisher:

Abstract:

The first goal of this study is to analyse a real-world multiproduct onshore pipeline system in order to verify its hydraulic configuration and operational feasibility by constructing a simulation model, step by step, from its elementary building blocks, so that it reproduces the operation of the real system as precisely as possible. The second goal is to develop this simulation model into a user-friendly tool that one could use to find an “optimal” or “best” product batch schedule for a one-year time period. Such a batch schedule could change dynamically as perturbations occur during operation that influence the behaviour of the entire system. The ‘best’ batch schedule resulting from the simulation is the one that minimizes the operational costs of the system. The costs involved in the simulation are inventory costs, interface costs, pumping costs, and penalty costs assigned to any unforeseen situations. The key factor determining the performance of the simulation model is the way time is represented. In our model, an event-based discrete time representation is selected as most appropriate for our purposes. This means that the time horizon is divided into intervals of unequal length based on events that change the state of the system. These events are the arrival/departure of the tanker ships, the openings and closures of the loading/unloading valves of storage tanks at both terminals, and the arrivals/departures of trains/trucks at the Delivery Terminal. In the feasibility study we analyse the system’s operational performance with different Head Terminal storage capacity configurations. For these alternative configurations we evaluated the effect of different tanker ship delay magnitudes on the number of critical events and product interfaces generated, on the duration of pipeline stoppages, on the satisfaction of product demand, and on the operating costs. Based on the results and the bottlenecks identified, we propose modifications to the original setup.
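The event-based time representation described above can be sketched with a priority queue: the clock jumps from one event to the next rather than advancing in fixed steps. The event names and the toy handler are hypothetical; this is not the study's actual model.

```python
import heapq
from itertools import count

def run_events(initial_events):
    """Minimal event-driven clock: time jumps between scheduled events
    (ship arrivals, valve openings, ...) instead of fixed steps.  Each
    event is (time, name, handler); a handler may schedule new events."""
    tie = count()  # tiebreaker so simultaneous events never compare handlers
    queue = [(t, next(tie), name, h) for t, name, h in initial_events]
    heapq.heapify(queue)
    log = []
    while queue:
        time, _, name, handler = heapq.heappop(queue)
        log.append((time, name))
        for t, n, h in (handler(time) if handler else []):
            heapq.heappush(queue, (t, next(tie), n, h))
    return log
```

A ship arrival at t = 2 whose handler schedules a valve opening one time unit later simply returns a new (3.0, 'valve_open', ...) event; the state of the system changes only at such instants, exactly as in the unequal-interval representation described above.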

Relevance: 40.00%

Publisher:

Abstract:

In multi-unit organisations, such as a bank and its branches or a national body delivering publicly funded health or education services through local operating units, the need arises to incentivize the units to operate efficiently. In such instances, it is generally accepted that units found to be inefficient can be encouraged to make efficiency savings. However, units which are found to be efficient need to be incentivized in a different manner. It has been suggested that efficient units could be incentivized by some reward compatible with the level to which their attainment exceeds that of the best of the rest, normally referred to as “super-efficiency”. A recent approach to this issue (Varmaz et al. 2013) has used Data Envelopment Analysis (DEA) models to measure the super-efficiency of the whole system of operating units with and without the involvement of each unit in turn in order to provide incentives. We identify shortcomings in this approach and use it as a starting point to develop a new DEA-based system for incentivizing operating units to operate efficiently for the benefit of the aggregate system of units. Data from a small German retail bank are used to illustrate our method.

Relevance: 40.00%

Publisher:

Abstract:

A recommender system is a specific type of intelligent system which exploits historical user ratings on items and/or auxiliary information to make recommendations on items to users. It plays a critical role in a wide range of online shopping, e-commerce, and social networking applications. Collaborative filtering (CF) is the most popular approach used for recommender systems, but it suffers from the complete cold start (CCS) problem, where no rating records are available, and the incomplete cold start (ICS) problem, where only a small number of rating records are available for some new items or users in the system. In this paper, we propose two recommendation models to solve the CCS and ICS problems for new items, based on a framework that tightly couples a CF approach with a deep learning neural network. A specific deep neural network, SADE, is used to extract the content features of the items. The state-of-the-art CF model timeSVD++, which models and utilizes the temporal dynamics of user preferences and item features, is modified to take the content features into the prediction of ratings for cold start items. Extensive experiments on a large Netflix rating dataset of movies show that our proposed recommendation models largely outperform the baseline models for rating prediction of cold start items. The two proposed recommendation models are also evaluated and compared on ICS items, and a flexible scheme of model retraining and switching is proposed to handle the transition of items from cold start to non-cold start status. The experimental results on Netflix movie recommendation show that the tight coupling of the CF approach and the deep learning neural network is feasible and very effective for cold start item recommendation. The design is general and can be applied to many other recommender systems for online shopping and social networking applications.
Solving the cold start item problem can largely improve user experience and trust in recommender systems, and effectively promote cold start items.
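The content-feature route to cold start prediction can be illustrated with a minimal sketch: rate a new item by the similarity of its content features to items the user has already rated. This is not the proposed SADE/timeSVD++ model; feature vectors, item names, and ratings are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity of two content-feature vectors."""
    num = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return num / (na * nb)

def predict_cold_item(user_ratings, item_features, cold_features):
    """Predict a user's rating for a brand-new (cold start) item as the
    similarity-weighted average of their ratings on content-similar items."""
    num = den = 0.0
    for item, rating in user_ratings.items():
        sim = max(cosine(item_features[item], cold_features), 0.0)
        num += sim * rating
        den += sim
    return num / den if den else None
```

The proposed models go further by feeding learned content features into a modified timeSVD++ rather than using raw similarity, but the principle, content standing in for the missing ratings, is the same.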