25 results for digital delay-line interpolation
Abstract:
The limited literature on parameter estimation of dynamic systems has been identified as the central reason that parametric bounds are unavailable for chaotic time series. However, the literature suggests that a chaotic system displays a sensitive dependence on initial conditions, and our study reveals that the behavior of a chaotic system is also sensitive to changes in parameter values. Therefore, a parameter estimation technique could make it possible to establish parametric bounds on the nonlinear dynamic system underlying a given time series, which in turn can improve predictability. By extracting the relationship between parametric bounds and predictability, we implemented chaos-based models for improving prediction in time series.

This study describes work done to establish bounds on a set of unknown parameters. Our results reveal that by establishing parametric bounds, it is possible to improve the predictability of a time series even when the dynamics or the mathematical model of that series is not known a priori. In our attempt to improve the predictability of various time series, we have established bounds for a set of unknown parameters: (i) the embedding dimension used to unfold a set of observations in the phase space, (ii) the time delay to use for a series, (iii) the number of neighborhood points to use to avoid detecting false neighbors, and (iv) the local polynomial used to build numerical interpolation functions from one region to another. Using these bounds, we are able to obtain better predictability in chaotic time series than previously reported. In addition, the developments of this dissertation can establish a theoretical framework for investigating predictability in time series from the system-dynamics point of view.

In closing, our procedure significantly reduces computer resource usage, as the search method is refined and efficient. Finally, the uniqueness of our method lies in its ability to extract the chaotic dynamics inherent in a nonlinear time series by observing its values.
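As a minimal sketch of the phase-space reconstruction governed by parameters (i) and (ii), the Python fragment below implements a time-delay embedding; the function name delay_embed, the logistic-map test series, and the choices m = 3 and tau = 1 are illustrative assumptions, not the dissertation's actual method or bounds.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Reconstruct phase-space states from a scalar series x using
    embedding dimension m and time delay tau."""
    n = len(x) - (m - 1) * tau
    if n <= 0:
        raise ValueError("series too short for chosen m and tau")
    # Each row is one reconstructed state vector
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

# Illustrative chaotic series: the logistic map at r = 3.9
x = np.empty(500)
x[0] = 0.4
for t in range(499):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])

states = delay_embed(x, m=3, tau=1)
print(states.shape)  # (498, 3)
```

False-neighbor tests and local polynomial interpolation (parameters (iii) and (iv)) would then operate on the rows of this reconstructed state matrix.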
Abstract:
As traffic congestion continues to worsen in large urban areas, solutions are urgently sought. However, transportation planning models, which estimate traffic volumes on transportation network links, are often unable to realistically consider travel time delays at intersections. Introducing signal controls in models often results in significant and unstable changes in network attributes, which, in turn, leads to model instability. Ignoring the effect of delays at intersections makes the model output inaccurate and unable to predict travel time. To represent traffic conditions in a network more accurately, planning models should be capable of arriving at a network solution based on travel costs that are consistent with the intersection delays due to signal controls. This research attempts to achieve this goal by optimizing signal controls and estimating intersection delays accordingly, which are then used in traffic assignment. Simultaneous optimization of traffic routing and signal controls has not previously been accomplished in real-world applications of traffic assignment. To this end, a delay model dealing with five major types of intersections has been developed using artificial neural networks (ANNs). An ANN architecture consists of interconnected artificial neurons and may be used either to gain an understanding of biological neural networks or to solve artificial intelligence problems without necessarily creating a model of a real biological system. The ANN delay model has been trained using extensive simulations based on TRANSYT-7F signal optimizations. The delay estimates produced by the ANN delay model have percentage root-mean-squared errors (%RMSE) of less than 25.6%, which is satisfactory for planning purposes. Larger prediction errors are typically associated with severely oversaturated conditions. A combined system has also been developed that includes the ANN delay estimating model and a user-equilibrium (UE) traffic assignment model. The combined system employs the Frank-Wolfe method to reach a convergent solution. Because the ANN delay model provides no derivatives of the delay function, a Mesh Adaptive Direct Search (MADS) method is applied to assist in and expedite the iterative process of the Frank-Wolfe method. The performance of the combined system confirms that convergence of the solution is achieved, although the global optimum may not be guaranteed.
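For illustration only, the sketch below runs the Frank-Wolfe iteration on a hypothetical two-link network; the BPR-style delay functions, demand, and constants are assumptions, and a simple golden-section search stands in for the MADS-assisted, derivative-free line search used in the research.

```python
# Two parallel links serving demand d; t1, t2 are illustrative
# BPR-style stand-ins for the ANN delay model, which supplies no
# derivatives, hence the derivative-free line search below.
d = 10.0
t1 = lambda f: 1.0 + 0.15 * (f / 4.0) ** 4
t2 = lambda f: 2.0 + 0.15 * (f / 6.0) ** 4
# Closed-form Beckmann integrals of t1 and t2, used as the objective
I1 = lambda f: f + 0.15 * f ** 5 / (5 * 4.0 ** 4)
I2 = lambda f: 2 * f + 0.15 * f ** 5 / (5 * 6.0 ** 4)
obj = lambda f1: I1(f1) + I2(d - f1)

f1 = d / 2                                   # start from an even split
for _ in range(50):                          # Frank-Wolfe iterations
    y1 = d if t1(f1) < t2(d - f1) else 0.0   # all-or-nothing target
    lo, hi = 0.0, 1.0                        # golden-section line search
    for _ in range(40):
        a, b = lo + 0.382 * (hi - lo), lo + 0.618 * (hi - lo)
        if obj(f1 + a * (y1 - f1)) < obj(f1 + b * (y1 - f1)):
            hi = b
        else:
            lo = a
    f1 += 0.5 * (lo + hi) * (y1 - f1)

print(f1, t1(f1), t2(d - f1))  # travel times nearly equal at equilibrium
```

At the user-equilibrium solution the travel times on the used routes equalize, which is the consistency between route flows and signal-induced delays that the combined system seeks.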
Abstract:
Traffic incidents are a major source of traffic congestion on freeways. Freeway traffic diversion using pre-planned alternate routes has been used as a strategy to reduce traffic delays due to major traffic incidents. However, it is not always beneficial to divert traffic when an incident occurs: route diversion may adversely impact traffic on the alternate routes and may not result in an overall benefit. This dissertation research applies Artificial Neural Network (ANN) and Support Vector Regression (SVR) techniques to predict the percent of delay reduction from route diversion, to help determine whether traffic should be diverted under given conditions. The DYNASMART-P mesoscopic traffic simulation model was applied to generate the simulated data used to develop the ANN and SVR models. A sample network that comes with the DYNASMART-P package was used as the base simulation network. Combinations of different levels of incident duration, capacity loss, percent of drivers diverted, VMS (variable message sign) messaging duration, and network congestion were simulated to represent different incident scenarios. The resulting percent of delay reduction, average speed, and queue length from each scenario were extracted from the simulation output. The ANN and SVR models were then calibrated for percent of delay reduction as a function of all of the simulated input and output variables. The results show that both calibrated models, when applied to the same location used to generate the calibration data, were able to predict delay reduction with relatively high accuracy in terms of mean square error (MSE) and regression correlation. It was also found that the performance of the ANN model was superior to that of the SVR model. Moreover, when the models were applied to a new location, only the ANN model could produce comparatively good delay reduction predictions under a high network congestion level.
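As a hedged sketch of the SVR calibration step, the fragment below fits an RBF-kernel support vector regressor to synthetic incident scenarios; the five features mirror the simulated inputs named above, but the data, the response function, and the hyperparameters are illustrative assumptions rather than the dissertation's calibrated model.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-ins for the simulated scenarios; columns:
# [incident duration (min), capacity loss (%), drivers diverted (%),
#  VMS messaging duration (min), network congestion level]
rng = np.random.default_rng(1)
X = rng.uniform([10, 10, 0, 0, 0.2], [120, 100, 50, 60, 1.0], size=(200, 5))
# Hypothetical response: percent of delay reduction from diversion
y = 0.4 * X[:, 2] - 0.1 * X[:, 4] * X[:, 0] + rng.normal(0, 2, 200)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:150], y[:150])            # calibrate on 150 scenarios
pred = model.predict(X[150:])          # evaluate on held-out scenarios
print("test MSE:", np.mean((pred - y[150:]) ** 2))
```

An ANN counterpart (e.g., a small multilayer perceptron regressor) could be swapped in and compared on the same MSE and correlation measures.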
Abstract:
This dissertation explores the role of artillery forward observation teams during the battle of Okinawa (April–June 1945). It addresses a variety of questions associated with this front-line artillery support. First, it examines the role of artillery itself in the American victory over the Japanese on Okinawa. Second, it traces the history of the forward observer in the three decades before the end of World War II. Third, it defines the specific role of the forward observation teams during the battle: what they did and how they did it during this three-month duel. Fourth, it deals with the particular problems of the forward observer. These included coordination with the local infantry commander, adjusting to the periodic rotation between the front lines and the artillery battery behind the line of battle, responding to occasional problems with "friendly fire" (American artillery falling on American ground forces), dealing with personnel turnover in the teams (due to death, wounds, and illness), and finally, developing a more informal relationship between officers and enlisted men to accommodate the reality of this recently created combat assignment. Fifth, it explores the experiences of a select group of men who served on (or in proximity to) forward observation teams on Okinawa. Previous scholars and popular historians of the battle have emphasized the role of Marines, infantrymen, and flame-throwing armor. This work offers a different perspective on the battle and draws on new sources as well. A pre-existing archive of interviews with Okinawan campaign forward observer team members, conducted in the 1990s, forms the core of the oral history component of this research project. The verbal accounts were checked against and supplemented by a review of unit reports obtained from the U.S. National Archives and various secondary sources. The dissertation concludes that an understanding of American artillery observation is critical to a more complete comprehension of the battle of Okinawa. These mid-ranking (and largely middle-class) soldiers proved capable of adjusting to the demands of combat conditions. They provide a unique and understudied perspective on the entire battle.
Abstract:
This research is motivated by a practical application observed at a printed circuit board (PCB) manufacturing facility. After assembly, the PCBs (or jobs) are tested in environmental stress screening (ESS) chambers (or batch processing machines) to detect early failures. Several PCBs can be tested simultaneously as long as the total size of all the PCBs in the batch does not violate the chamber capacity. PCBs from different production lines arrive dynamically to a queue in front of a set of identical ESS chambers, where they are grouped into batches for testing. Each line delivers PCBs that vary in size and require different testing (or processing) times. Once a batch is formed, its processing time is the longest processing time among the PCBs in the batch, and its ready time is given by the PCB arriving last to the batch. ESS chambers are expensive and constitute a bottleneck; consequently, the makespan has to be minimized.

A mixed-integer formulation is proposed for the problem under study and compared to a recently published formulation. The proposed formulation is better in terms of the number of decision variables, the number of linear constraints, and run time. A procedure to compute a lower bound is also proposed. For sparse problems (i.e., when job ready times are widely dispersed), the lower bounds are close to the optimum.

The problem under study is NP-hard. Consequently, five heuristics, two metaheuristics (simulated annealing (SA) and a greedy randomized adaptive search procedure (GRASP)), and a decomposition approach (column generation) are proposed, especially to solve problem instances that require prohibitively long run times when a commercial solver is used. An extensive experimental study was conducted to evaluate the different solution approaches in terms of solution quality and run time.

The decomposition approach improved the lower bounds (i.e., the linear relaxation solution) of the mixed-integer formulation. At least one of the proposed heuristics outperforms the Modified Delay heuristic from the literature. For sparse problems, almost all the heuristics report a solution close to the optimum. GRASP outperforms SA at a higher computational cost. The proposed approaches are viable to implement, as their run times are very short.
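For illustration, the sketch below shows a simple greedy batching heuristic of the same general kind (sort by ready time, fill each batch up to chamber capacity, assign batches to the earliest-free identical chamber); it is a hypothetical stand-in, not one of the five proposed heuristics.

```python
from dataclasses import dataclass

@dataclass
class Job:
    ready: float   # arrival time of the PCB
    size: float    # space it occupies in the chamber
    proc: float    # required testing time

def greedy_makespan(jobs, capacity, machines):
    """Greedy sketch: batch jobs in ready-time order up to capacity,
    then schedule each batch on the earliest-free chamber. A batch
    starts no earlier than its last-arriving PCB and runs for the
    longest processing time among its PCBs."""
    jobs = sorted(jobs, key=lambda j: j.ready)
    batches, cur, used = [], [], 0.0
    for j in jobs:
        if used + j.size > capacity:        # close the batch when full
            batches.append(cur)
            cur, used = [], 0.0
        cur.append(j)
        used += j.size
    if cur:
        batches.append(cur)
    free = [0.0] * machines                 # next-free time per chamber
    for b in batches:
        ready = max(j.ready for j in b)     # batch ready time
        proc = max(j.proc for j in b)       # batch processing time
        m = min(range(machines), key=lambda i: free[i])
        free[m] = max(free[m], ready) + proc
    return max(free)                        # makespan

jobs = [Job(0, 2, 5), Job(1, 3, 4), Job(2, 4, 6), Job(8, 2, 3)]
print(greedy_makespan(jobs, capacity=6, machines=2))  # 14
```

The proposed heuristics, metaheuristics, and column generation refine exactly these batching and assignment decisions.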
Abstract:
This dissertation introduces a new system for handwritten text recognition based on an improved neural network design. Most existing neural networks treat the mean square error function as the standard error function. The system proposed in this dissertation instead utilizes the mean quartic error function, whose third and fourth derivatives are non-zero. Consequently, many improvements to the training methods were achieved. The training results are carefully assessed before and after each update. To evaluate the performance of a training system, three essential factors are considered, in decreasing order of importance: (1) the error rate on the testing set, (2) the processing time needed to recognize a segmented character, and (3) the total training time and, subsequently, the total testing time. It is observed that bounded training methods accelerate the training process, while semi-third-order training methods, next-minimal training methods, and preprocessing operations reduce the error rate on the testing set. Empirical observations suggest that two different combinations of training methods are needed for the two character cases. Since character segmentation is required for word and sentence recognition, this dissertation also provides an effective rule-based segmentation method, which differs from conventional adaptive segmentation methods. Dictionary-based correction is utilized to correct mistakes resulting from the recognition and segmentation phases. The integration of the segmentation methods with the handwritten character recognition algorithm yielded an accuracy of 92% for lower-case characters and 97% for upper-case characters. The testing database consists of 20,000 handwritten characters, with 10,000 for each case. Recognizing 10,000 handwritten characters in the testing phase required 8.5 seconds of processing time.
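A minimal sketch of the mean quartic error and its gradient follows; the NumPy formulation is an illustrative assumption, but it makes concrete why the third and fourth derivatives with respect to the residual are non-zero (24r and 24), unlike the mean square error.

```python
import numpy as np

def mean_quartic_error(y_pred, y_true):
    """e = mean(r**4) with r = y_pred - y_true. Its derivatives in r
    are 4*r**3, 12*r**2, 24*r, and 24, so third- and fourth-order
    information is available to the training methods."""
    r = np.asarray(y_pred) - np.asarray(y_true)
    return np.mean(r ** 4)

def mean_quartic_grad(y_pred, y_true):
    """Gradient with respect to y_pred: 4*r**3 / n."""
    r = np.asarray(y_pred) - np.asarray(y_true)
    return 4.0 * r ** 3 / r.size

print(mean_quartic_error([0.9, 0.2], [1.0, 0.0]))  # mean(0.1**4, 0.2**4)
```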
Abstract:
Just about everyone who ranks cruise lines puts Seabourn first on the list. The readers of Conde Nast Traveler ranked it the world's top cruise line for three consecutive years and fifth in their survey of the top 100 overall travel experiences. Of special interest to hospitality professionals is Seabourn's 98.5 percent score for service, higher than any other vacation experience in the world.
Abstract:
Strategic planning is the key to producing a realistic, attractive rate of growth and a respectable return on investment. The author analyzes the steps in the planning process and looks at the environmental and cultural values which influence the strategic planner in his/her work.
Abstract:
E-commerce is an approach to achieving business goals through information technology and is quickly changing the way hospitality business is planned, monitored, and conducted. No longer do buyers and sellers need to engage in interpersonal communications for transactions to occur. The future of transaction processing, which includes cyber cash and digital checking, is directly attributable to e-commerce, which provides an efficient, reliable, secure, and effective platform for conducting hospitality business on the web.
Abstract:
Physiological signals, which are controlled by the autonomic nervous system (ANS), could be used to detect the affective state of computer users and therefore find applications in medicine and engineering. The Pupil Diameter (PD) appears to provide a strong indication of the affective state, as found by previous research, but it has not yet been fully investigated.

In this study, new approaches based on monitoring and processing the PD signal for off-line and on-line affective assessment ("relaxation" vs. "stress") are proposed. Wavelet denoising and Kalman filtering methods are first used to remove abrupt changes in the raw PD signal. Three features (PDmean, PDmax and PDWalsh) are then extracted from the preprocessed signal for affective state classification. In order to select more relevant and reliable physiological data for further analysis, two types of data selection methods are applied, based on the paired t-test and on subject self-evaluation, respectively. In addition, five different classifiers are implemented on the selected data, achieving average accuracies of up to 86.43% and 87.20% on the two data subsets, respectively. Finally, the receiver operating characteristic (ROC) curve is utilized to investigate the discriminating potential of each individual feature by evaluating the area under the ROC curve, which reaches values above 0.90.

For the on-line affective assessment, a hard threshold is first applied to remove eye blinks from the PD signal, and a moving-average window is then utilized to obtain a representative value PDr for every one-second interval of PD. The on-line affective assessment algorithm consists of three main steps: preparation, feature-based decision voting, and affective determination. The final results show accuracies of 72.30% and 73.55% for the data subsets chosen with the two data selection methods (paired t-test and subject self-evaluation, respectively).

To further analyze the efficiency of affective recognition through the PD signal, the Galvanic Skin Response (GSR) was also monitored and processed. The highest affective assessment classification rate obtained from GSR processing was only 63.57% (based on the off-line processing algorithm). The overall results confirm that the PD signal should be considered one of the most powerful physiological signals to involve in future automated real-time affective recognition systems, especially for detecting the "relaxation" vs. "stress" states.
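The fragment below sketches the on-line preprocessing described above: a hard threshold removes eye-blink dropouts and a one-second moving-average window yields the representative value PDr. The sampling rate, the threshold value, and the interpolation across blink gaps are illustrative assumptions.

```python
import numpy as np

def pd_representatives(pd_signal, fs, blink_threshold=1.0):
    """Hard-threshold blink removal followed by one-second averaging.
    Samples at or below blink_threshold are treated as blink dropouts
    and bridged by linear interpolation (an assumed repair strategy)."""
    pd = np.asarray(pd_signal, dtype=float)
    valid = pd > blink_threshold
    pd = np.interp(np.arange(pd.size), np.flatnonzero(valid), pd[valid])
    n_sec = pd.size // fs
    # Average each one-second window to obtain PDr
    return pd[: n_sec * fs].reshape(n_sec, fs).mean(axis=1)

# 10 s of synthetic 60 Hz pupil data with a blink dropout at t = 3 s
rng = np.random.default_rng(2)
sig = 4.0 + 0.1 * rng.standard_normal(600)
sig[180:195] = 0.0
print(pd_representatives(sig, fs=60))  # ten PDr values
```

The PDr sequence then feeds the preparation, feature-based decision voting, and affective determination steps.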
Abstract:
Climate change is estimated to be responsible for 400,000 deaths per year, mostly because of hunger and communicable diseases affecting children in the Global South. Using the sociology of W.E.B. Du Bois, I attempt to demonstrate how and why climate change occurs along the color line. I conclude by arguing why it is important to think about climate change as a human rights issue.
Abstract:
Hearing the news of the death of Diana, Princess of Wales, in a traffic accident is taken as an analogue for being a percipient but uninvolved witness to a crime, or a witness to another person's sudden confession to some illegal act. Such an event (known in the literature as a "reception event") has previously been hypothesized to cause one to form a special type of memory commonly known as a "flashbulb memory" (FB) (Brown and Kulik, 1977). FBs are hypothesized to be especially resilient against forgetting, highly detailed (including peripheral details), clear, and inspiring great confidence in the individual in their accuracy. FBs are dependent for their formation upon surprise, emotional valence, and impact, or consequentiality to the witness, of the initiating event, and are thought to be enhanced by frequent rehearsal. FBs are very important in the context of criminal investigation and litigation in that investigators and jurors usually place great store in witnesses who claim to have a clear and complete recollection of an event and who express this confidently, regardless of their actual accuracy. Therefore, the lives, or at least the freedom, of criminal defendants, and the fortunes of civil litigants, hang on the testimony of witnesses professing to have FBs.

In this study, which includes a large and diverse sample (N = 305), participants were surveyed within 2–4 days after hearing of the fatal accident, and again at intervals of 2 and 4 weeks and 6, 12, and 18 months. Contrary to the FB hypothesis, I found that participants' FBs degraded over time, beginning at least as early as two weeks post-event. At about 12 months the memory trace stabilized, resisting further degradation. Repeated interviewing did not have any negative effect upon accuracy, contrary to concerns in the literature. Analysis by correlation and regression indicated no effect or predictive power for participant age, emotionality, confidence, or student status as related to accuracy of recall; nor was participant confidence in accuracy predicted by emotional impact, as hypothesized. The results also indicate that, contrary to the notions of investigators and jurors, witnesses become more inaccurate over time regardless of their confidence in their memories, even for highly emotional events.
Abstract:
We present our approach to real-time service-oriented scheduling problems with the objective of maximizing the total system utility. Unlike traditional utility accrual scheduling problems, in which each task is associated with only a single time utility function (TUF), we associate two different TUFs with each task, a profit TUF and a penalty TUF, to model real-time services that need not only to reward early completions but also to penalize abortions or deadline misses. The scheduling heuristics we propose in this paper judiciously accept, schedule, and abort real-time services when necessary to maximize the accrued utility. Our extensive experimental results show that our proposed algorithms can significantly outperform traditional scheduling algorithms such as Earliest Deadline First (EDF), the traditional utility accrual (UA) scheduling algorithms, and an earlier scheduling approach based on a similar model.
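A minimal sketch of the dual-TUF accounting follows; the linear profit TUF and the flat penalty TUF are hypothetical shapes chosen for illustration, not the paper's benchmark functions.

```python
def accrued_utility(completion, deadline, profit_tuf, penalty_tuf,
                    aborted=False):
    """Dual-TUF model: a task completed by its deadline accrues the
    profit TUF at its completion time; an aborted task or a deadline
    miss accrues the (negative) penalty TUF instead."""
    if aborted or completion > deadline:
        return penalty_tuf(completion)
    return profit_tuf(completion)

# Hypothetical TUFs for one service with deadline 10
profit = lambda t: max(0.0, 100.0 - 5.0 * t)   # early completion rewarded
penalty = lambda t: -30.0                      # flat abort/miss penalty

print(accrued_utility(4.0, 10.0, profit, penalty))   # 80.0
print(accrued_utility(12.0, 10.0, profit, penalty))  # -30.0 (missed)
```

A scheduler under this model weighs the expected profit of admitting a service against the penalty it would accrue if later aborted.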
Abstract:
A combination of statistical and interpolation methods and Geographic Information System (GIS) spatial analysis was used to evaluate the spatial and temporal changes in groundwater Cl− concentrations in Collier and Lee Counties (southwestern Florida), and Miami-Dade and Broward Counties (southeastern Florida), since 1985. In southwestern Florida, the average Cl− concentrations in the shallow wells (0–43 m) in Collier and Lee Counties increased from 132 mg L−1 in 1985 to 230 mg L−1 in 2000. The average Cl− concentrations in the deep wells (>43 m) of southwestern Florida increased from 392 mg L−1 in 1985 to 447 mg L−1 in 2000. Results also indicated a positive correlation between the mean sea level and Cl− concentrations and between the mean sea level and groundwater levels for the shallow wells. Concentrations in the Biscayne Aquifer (southeastern Florida) were significantly higher than those of southwestern Florida. The average Cl− concentrations increased from 159 mg L−1 in 1985 to 470 mg L−1 in 2010 for the shallow wells (<33 m) and from 1360 mg L−1 in 1985 to 2050 mg L−1 in 2010 for the deep wells (>33 m). In the Biscayne Aquifer, wells showed a positive or negative correlation between mean sea level and Cl− concentrations according to their location with respect to the saltwater intrusion line. Wells located inland behind canal control structures and west of the saltwater intrusion line showed negative correlation values, whereas wells located east of the saltwater intrusion line showed positive values. Overall, the results indicated that since 1985, there was a potential decline in the available freshwater resources estimated at about 12–17% of the available drinking-quality groundwater of the southeastern study area located in the Biscayne Aquifer.
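As a small worked example of the correlation analysis described above, the sketch below computes a Pearson correlation between sea level and Cl− concentration; the series are synthetic stand-ins for illustration and do not reproduce the study's well records.

```python
import numpy as np

# Synthetic annual series for a hypothetical shallow well east of the
# saltwater intrusion line (where a positive correlation is expected)
rng = np.random.default_rng(3)
years = np.arange(1985, 2011)
sea_level = 0.003 * (years - 1985) + 0.01 * rng.standard_normal(years.size)
chloride = 150 + 9000 * sea_level + 20 * rng.standard_normal(years.size)

# Pearson correlation between mean sea level and Cl- concentration
r = np.corrcoef(sea_level, chloride)[0, 1]
print(f"r = {r:.2f}")  # strongly positive for this synthetic well
```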