21 results for Multi-Choice mixed integer goal programming
Abstract:
This research is motivated by the need to consider lot sizing while accepting customer orders in a make-to-order (MTO) environment, in which each customer order must be delivered by its due date. The job shop is the typical operation model used in an MTO operation, where the production planner must make three concurrent decisions: order selection, lot sizing, and job scheduling. These decisions are usually treated separately in the literature and are mostly addressed with heuristic solutions. The first phase of the study is focused on a formal definition of the problem. Mathematical programming techniques are applied to model this problem in terms of its objective, decision variables, and constraints. A commercial solver, CPLEX, is applied to solve the resulting mixed-integer linear programming model on small instances to validate the mathematical formulation. The computational results show that solving problems of industrial size with a commercial solver is not practical. The second phase of this study is focused on the development of an effective solution approach to the large-scale version of this problem. The proposed solution approach is an iterative process involving three sequential decision steps: order selection, lot sizing, and lot scheduling. A range of simple sequencing rules is identified for each of the three subproblems. Using computer simulation as the tool, an experiment is designed to evaluate their performance against a set of system parameters. For order selection, the proposed weighted most profit rule performs best. The shifting bottleneck and earliest operation finish time rules are jointly the best scheduling rules. For lot sizing, the proposed minimum cost increase heuristic, based on the Dixon-Silver method, performs best when the demand-to-capacity ratio at the bottleneck machine is high. The proposed minimum cost heuristic, based on the Wagner-Whitin algorithm, is the best lot-sizing heuristic for shops with a low demand-to-capacity ratio. The proposed heuristic is applied to an industrial case to further evaluate its performance; the result shows it improves total profit by an average of 16.62%. This research contributes to the production planning research community a complete mathematical definition of the problem and an effective solution approach for problems of industrial scale.
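As background on the lot-sizing heuristic referenced above, the Wagner-Whitin algorithm is a classical dynamic program for single-item uncapacitated lot sizing; the minimal sketch below illustrates it with made-up demand and cost figures and is not the dissertation's minimum cost heuristic itself.

```python
def wagner_whitin(demand, setup_cost, holding_cost):
    """Single-item uncapacitated lot sizing by dynamic programming.

    demand[t]    -- demand in period t
    setup_cost   -- fixed cost incurred whenever an order is placed
    holding_cost -- cost of carrying one unit for one period
    Returns the minimum total cost over the horizon.
    """
    T = len(demand)
    best = [0.0] + [float("inf")] * T  # best[t] = min cost to cover periods 0..t-1
    for t in range(1, T + 1):
        for j in range(t):             # last order placed in period j covers periods j..t-1
            cost = best[j] + setup_cost
            # holding cost: demand of period k is carried from j to k
            cost += sum(holding_cost * (k - j) * demand[k] for k in range(j, t))
            best[t] = min(best[t], cost)
    return best[T]

# Illustrative data only (not from the study).
print(wagner_whitin(demand=[20, 50, 10, 40], setup_cost=100, holding_cost=1))
```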
Abstract:
Bus stops are key links in the journeys of transit patrons with disabilities. Inaccessible bus stops prevent people with disabilities from using fixed-route bus services, thus limiting their mobility. The Americans with Disabilities Act (ADA) of 1990 prescribes the minimum requirements for bus stop accessibility by riders with disabilities. Due to limited budgets, transit agencies can select only a limited number of bus stop locations for ADA improvements annually. These locations should preferably be selected such that they maximize the overall benefits to patrons with disabilities. In addition, transit agencies may also choose to implement the universal design paradigm, which involves higher design standards than current ADA requirements and can provide amenities that are useful to all riders, such as shelters and lighting. Many factors can affect the decision to improve a bus stop, including rider-based aspects such as the number of riders with disabilities, total ridership, customer complaints, accidents, and deployment costs, as well as locational aspects such as the location of employment centers, schools, and shopping areas. These interlacing factors make it difficult to identify optimal improvement locations without the aid of an optimization model. This dissertation proposes two integer programming models to help identify a priority list of bus stops for accessibility improvements. The first is a binary integer programming model designed to identify bus stops that need improvements to meet the minimum ADA requirements. The second is a multi-objective nonlinear mixed integer programming model that attempts to achieve an optimal compromise between the two accessibility design standards. Geographic Information System (GIS) techniques were used extensively both to prepare the model input and to examine the model output. An analytic hierarchy process (AHP) was applied to combine all of the factors affecting the benefits to patrons with disabilities. An extensive sensitivity analysis was performed to assess the reasonableness of the model outputs in response to changes in model constraints. Based on a case study using data from Broward County Transit (BCT) in Florida, the models were found to produce a list of bus stops that, upon close examination, was determined to be highly logical. Compared to traditional approaches that rely on staff experience, requests from elected officials, customer complaints, and the like, these optimization models offer a more objective and efficient platform on which to make bus stop improvement suggestions.
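At its simplest, choosing which stops to improve under a fixed budget resembles a 0-1 knapsack selection: maximize a benefit score subject to a cost limit. The sketch below is a deliberately simplified illustration of that idea, not the dissertation's binary or multi-objective model; the benefit scores, costs, and budget are hypothetical.

```python
def select_stops(benefit, cost, budget):
    """Pick a subset of bus stops maximizing total benefit within the budget.

    benefit[i] -- AHP-style benefit score of improving stop i (hypothetical)
    cost[i]    -- integer improvement cost of stop i (hypothetical units)
    budget     -- total funds available (same units as cost)
    Returns (best_total_benefit, list_of_selected_stop_indices).
    """
    n = len(benefit)
    # dp[b] = (best benefit achievable with at most budget b, chosen stops)
    dp = [(0.0, [])] * (budget + 1)
    for i in range(n):
        new_dp = dp[:]
        for b in range(cost[i], budget + 1):
            candidate = dp[b - cost[i]][0] + benefit[i]
            if candidate > new_dp[b][0]:
                new_dp[b] = (candidate, dp[b - cost[i]][1] + [i])
        dp = new_dp
    return dp[budget]

# Hypothetical data: five stops with benefit scores and costs in arbitrary units.
print(select_stops(benefit=[8, 5, 9, 3, 6], cost=[4, 2, 5, 1, 3], budget=8))
```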
Abstract:
The increasing emphasis on mass customization, shortened product lifecycles, and synchronized supply chains, coupled with advances in information systems, is driving most firms toward make-to-order (MTO) operations. Increasing global competition, lower profit margins, and higher customer expectations force MTO firms to plan their capacity by managing the effective demand. The goal of this research was to maximize the operational profit of a make-to-order operation by selectively accepting incoming customer orders and simultaneously allocating capacity for them at the sales stage. To integrate the two decisions, a Mixed-Integer Linear Program (MILP) was formulated that can aid an operations manager in an MTO environment in selecting a set of potential customer orders such that all selected orders are fulfilled by their deadlines. The proposed model combines the order acceptance/rejection decision with detailed scheduling. Experiments with the formulation indicate that for larger problem sizes, the computational time required to determine an optimal solution is prohibitive. The formulation has a block diagonal structure and can be decomposed into one or more subproblems (one subproblem for each customer order) and a master problem by applying Dantzig-Wolfe decomposition principles. To solve the original MILP efficiently, an exact Branch-and-Price algorithm was developed. Various approximation algorithms were developed to further improve the runtime. The experiments conducted clearly show the efficiency of these algorithms compared to a commercial optimization solver. The existing literature addresses the static order acceptance problem for a single-machine environment with regular capacity, with an objective of maximizing profit subject to a penalty for tardiness. This dissertation solves the order acceptance and capacity planning problem for a job shop environment with multiple resources; both regular and overtime resources are considered. The Branch-and-Price algorithms developed in this dissertation are faster and can be incorporated in a decision support system that can be used on a daily basis to help make intelligent decisions in an MTO operation.
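To make the order acceptance trade-off concrete, the tiny sketch below enumerates accept/reject decisions over a handful of orders and keeps the most profitable set that fits a single aggregate capacity; it illustrates the decision being optimized, not the MILP or the Branch-and-Price algorithm developed in the dissertation, and all numbers are hypothetical.

```python
from itertools import product

def best_accept_set(profit, load, capacity):
    """Enumerate accept/reject decisions for a handful of orders.

    profit[i] -- profit if order i is accepted (hypothetical)
    load[i]   -- capacity consumed by order i on a single aggregate resource
    capacity  -- available regular-time capacity
    Returns (max_profit, accepted_order_indices).
    """
    n = len(profit)
    best = (0.0, [])
    for decisions in product([0, 1], repeat=n):
        used = sum(l for l, d in zip(load, decisions) if d)
        if used <= capacity:
            total = sum(p for p, d in zip(profit, decisions) if d)
            if total > best[0]:
                best = (total, [i for i, d in enumerate(decisions) if d])
    return best

# Hypothetical orders: profits, loads, and one aggregate capacity.
print(best_accept_set(profit=[40, 25, 55, 10], load=[3, 2, 5, 1], capacity=7))
```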
Abstract:
Since the seminal works of Markowitz (1952), Sharpe (1964), and Lintner (1965), numerous studies on portfolio selection and performance measurement have been based upon the mean-variance framework. However, several researchers (e.g., Arditti (1967, 1971), Samuelson (1970), and Rubinstein (1973)) argue that the higher moments cannot be neglected unless there is reason to believe that (i) the asset returns are normally distributed and the investor's utility function is quadratic, or (ii) the empirical evidence demonstrates that higher moments are irrelevant to the investor's decision. Based on the same argument, this dissertation investigates the impact of higher moments of return distributions on three issues concerning 14 international stock markets. First, portfolio selection with skewness is determined using Polynomial Goal Programming, in which investor preferences for skewness can be incorporated. The empirical findings suggest that the return distributions of international stock markets are not normally distributed, and that incorporating skewness into an investor's portfolio decision causes a major change in the construction of the optimal portfolio. The evidence also indicates that an investor will trade expected return of the portfolio for skewness. Moreover, when short sales are allowed, investors are better off, as they attain higher expected return and skewness simultaneously. Second, the performance of international stock markets is evaluated using two types of performance measures: (i) the two-moment performance measures of Sharpe (1966) and Treynor (1965), and (ii) the higher-moment performance measures of Prakash and Bear (1986) and Stephens and Proffitt (1991). The empirical evidence indicates that higher moments of return distributions are significant and relevant to the investor's decision; thus, higher-moment performance measures are more appropriate for evaluating the performance of international stock markets. The evidence also indicates that the various measures provide vastly different performance rankings of the markets, albeit in the same direction. Finally, the inter-temporal stability of the international stock markets is investigated using the Parhizgari and Prakash (1989) algorithm for the Sen and Puri (1968) test, which accounts for non-normality of return distributions. The empirical findings provide strong evidence supporting the stability of international stock market movements. However, when the Anderson test, which assumes normality of return distributions, is employed, stability in the correlation structure is rejected. This suggests that non-normality of the return distribution is an important factor that cannot be ignored when investigating the inter-temporal stability of international stock markets.
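For readers unfamiliar with the higher moments discussed above, the snippet below computes the mean, variance, and skewness of a portfolio's return series for a given weight vector using NumPy; it is a minimal sketch on simulated returns, not the Polynomial Goal Programming model itself.

```python
import numpy as np

def portfolio_moments(returns, weights):
    """Mean, variance, and skewness of a portfolio return series.

    returns -- (T, N) array of asset returns over T periods
    weights -- (N,) array of portfolio weights summing to 1
    """
    port = returns @ weights                  # portfolio return in each period
    mean = port.mean()
    var = port.var(ddof=1)
    centered = port - mean
    skew = (centered ** 3).mean() / port.std(ddof=0) ** 3  # standardized third moment
    return mean, var, skew

# Illustrative data: simulated returns for 3 assets over 60 periods.
rng = np.random.default_rng(0)
rets = rng.normal(0.01, 0.05, size=(60, 3))
print(portfolio_moments(rets, np.array([0.4, 0.4, 0.2])))
```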
Abstract:
This research is motivated by a practical application observed at a printed circuit board (PCB) manufacturing facility. After assembly, the PCBs (or jobs) are tested in environmental stress screening (ESS) chambers (or batch processing machines) to detect early failures. Several PCBs can be tested simultaneously as long as the total size of the PCBs in the batch does not exceed the chamber capacity. PCBs from different production lines arrive dynamically to a queue in front of a set of identical ESS chambers, where they are grouped into batches for testing. Each line delivers PCBs that vary in size and require different testing (or processing) times. Once a batch is formed, its processing time is the longest processing time among the PCBs in the batch, and its ready time is given by the PCB arriving last to the batch. ESS chambers are expensive and constitute a bottleneck; consequently, their makespan has to be minimized. A mixed-integer formulation is proposed for the problem under study and compared to a recently published formulation. The proposed formulation is better in terms of the number of decision variables, the number of linear constraints, and run time. A procedure to compute a lower bound is proposed. For sparse problems (i.e., when job ready times are widely dispersed), the lower bounds are close to the optimum. The problem under study is NP-hard. Consequently, five heuristics, two metaheuristics (simulated annealing (SA) and a greedy randomized adaptive search procedure (GRASP)), and a decomposition approach (column generation) are proposed, especially to solve problem instances that require prohibitively long run times when a commercial solver is used. An extensive experimental study was conducted to evaluate the different solution approaches based on solution quality and run time. The decomposition approach improved the lower bounds (i.e., the linear relaxation solution) of the mixed-integer formulation. At least one of the proposed heuristics outperforms the Modified Delay heuristic from the literature. For sparse problems, almost all the heuristics report a solution close to the optimum. GRASP outperforms SA at a higher computational cost. The proposed approaches are viable to implement, as their run time is very short.
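As a simple point of reference for the batching heuristics evaluated above, the sketch below forms batches with a first-fit rule after sorting jobs by processing time and then assigns batches greedily to identical chambers, using the conventions described in the abstract (batch time = longest job time, batch ready time = latest arrival). It is a generic heuristic with hypothetical data, not one of the dissertation's heuristics.

```python
def schedule_batches(jobs, chamber_capacity, n_chambers):
    """Greedy batching and scheduling sketch for parallel batch machines.

    jobs -- list of (size, processing_time, ready_time) tuples (hypothetical)
    Returns the resulting makespan.
    """
    # Form batches: longest processing time first, first-fit on remaining capacity.
    batches = []  # each batch is [used_size, proc_time, ready_time]
    for size, proc, ready in sorted(jobs, key=lambda j: -j[1]):
        for b in batches:
            if b[0] + size <= chamber_capacity:
                b[0] += size
                b[1] = max(b[1], proc)    # batch time = longest job in it
                b[2] = max(b[2], ready)   # batch ready = last arrival
                break
        else:
            batches.append([size, proc, ready])
    # Assign batches to chambers: earliest-ready batch to earliest-free chamber.
    free_at = [0.0] * n_chambers
    for b in sorted(batches, key=lambda b: b[2]):
        i = min(range(n_chambers), key=lambda k: free_at[k])
        free_at[i] = max(free_at[i], b[2]) + b[1]
    return max(free_at)

# Hypothetical jobs: (size, processing time, ready time).
jobs = [(2, 5, 0), (3, 4, 1), (1, 6, 2), (2, 3, 0), (4, 7, 3)]
print(schedule_batches(jobs, chamber_capacity=5, n_chambers=2))
```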
Abstract:
This research studies the hybrid flow shop problem with parallel batch-processing machines in one stage and discrete-processing machines in the other stages, processing jobs of arbitrary sizes. The objective is to minimize the makespan for a set of jobs. The problem is denoted FF | batch(1), s_j | C_max. The problem is formulated as a mixed-integer linear program. The commercial solver AMPL/CPLEX is used to solve problem instances to optimality. Experimental results show that AMPL/CPLEX requires considerable time to find the optimal solution even for small problems; for example, a 6-job instance requires 2 hours on average. A bottleneck-first-decomposition (BFD) heuristic is proposed in this study to overcome the computational time problem encountered when using the commercial solver. The proposed BFD heuristic is inspired by the shifting bottleneck heuristic. It decomposes the entire problem into three subproblems and schedules the subproblems one by one. The proposed BFD heuristic consists of four major steps: formulating subproblems, prioritizing subproblems, solving subproblems, and re-scheduling. For solving the subproblems, two heuristic algorithms are proposed: one for scheduling a hybrid flow shop with discrete processing machines, and the other for scheduling parallel batching machines (a single stage). Both consider job arrival and delivery times. A designed experiment is conducted to evaluate the effectiveness of the proposed BFD heuristic, which is further compared against a set of common heuristics including a randomized greedy heuristic and five dispatching rules. The results show that the proposed BFD heuristic outperforms all of these algorithms. To evaluate the quality of the heuristic solution, a procedure is developed to calculate a lower bound on the makespan for the problem under study. The lower bound obtained is tighter than other bounds developed for related problems in the literature. A meta-search approach based on the Genetic Algorithm concept is developed to evaluate the value of further improving the solution obtained from the proposed BFD heuristic. The experiment indicates that it reduces the makespan by 1.93% on average within negligible time when the problem size is fewer than 50 jobs.
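A simple, generic makespan lower bound for a single parallel-batch stage can be sketched as follows: no schedule can finish before the earliest ready time plus the total job "area" (size x processing time) divided by the combined chamber capacity, nor before the latest job-wise ready time plus processing time. This is a much cruder bound than the one developed in the study, and the data are hypothetical.

```python
def batch_stage_lower_bound(jobs, chamber_capacity, n_chambers):
    """Simple area-based makespan lower bound for one parallel-batch stage.

    jobs -- list of (size, processing_time, ready_time) tuples (hypothetical)
    """
    total_area = sum(size * proc for size, proc, _ in jobs)
    area_bound = min(r for _, _, r in jobs) + total_area / (chamber_capacity * n_chambers)
    job_bound = max(r + proc for _, proc, r in jobs)  # no single job can finish earlier
    return max(area_bound, job_bound)

# Hypothetical jobs: (size, processing time, ready time).
jobs = [(2, 5, 0), (3, 4, 1), (1, 6, 2), (2, 3, 0), (4, 7, 3)]
print(batch_stage_lower_bound(jobs, chamber_capacity=5, n_chambers=2))
```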
Abstract:
This research focuses on developing a capacity planning methodology for emerging concurrent engineer-to-order (ETO) operations. The primary focus is placed on capacity planning at the sales stage. This study examines the characteristics of capacity planning in a concurrent ETO operation environment, models the problem analytically, and proposes a practical capacity planning methodology for concurrent ETO operations in industry. A computer program that mimics a concurrent ETO operation environment was written to validate the proposed methodology and to test a set of rules that affect the performance of a concurrent ETO operation. This study takes a systems engineering approach to the problem and employs systems engineering concepts and tools for modeling and analyzing the problem, as well as for developing a practical solution. The study depicts a concurrent ETO environment in which capacity is planned. The capacity planning problem is modeled as a mixed integer program and then solved for smaller-sized applications to evaluate its validity and solution complexity. The objective is to select the best set of available jobs to maximize profit while having sufficient capacity to meet each due date expectation. The nature of capacity planning for concurrent ETO operations differs from that of other operation modes, and the search for an effective solution to this problem is an emerging research field. This study characterizes the capacity planning problem and proposes a solution approach. The mathematical model relates work requirements to capacity over the planning horizon, and the methodology is proposed for solving industry-scale problems. Along with the capacity planning methodology, a set of heuristic rules was evaluated for improving concurrent ETO planning.
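To give a flavor of such a selection model, below is a minimal mixed-integer sketch using the open-source PuLP library: binary variables select jobs, continuous variables allocate each selected job's work to periods no later than its due period, and per-period capacity is respected. The single aggregate resource, the data, and the library choice are assumptions made for illustration; this is not the dissertation's model.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

# Hypothetical jobs: (profit, total work content, due period), all made up.
jobs = {"A": (40, 6, 2), "B": (25, 4, 3), "C": (55, 9, 3), "D": (10, 2, 1)}
capacity = {1: 5, 2: 5, 3: 5}  # hypothetical capacity per planning period

m = LpProblem("job_selection", LpMaximize)
x = {j: LpVariable(f"accept_{j}", cat=LpBinary) for j in jobs}
w = {(j, t): LpVariable(f"work_{j}_{t}", lowBound=0)
     for j, (_, _, due) in jobs.items() for t in capacity if t <= due}

m += lpSum(jobs[j][0] * x[j] for j in jobs)  # objective: profit of accepted jobs
for j, (_, work, due) in jobs.items():
    # an accepted job must have all its work done no later than its due period
    m += lpSum(w[j, t] for t in capacity if t <= due) == work * x[j]
for t in capacity:
    # per-period capacity across all jobs allowed to run in that period
    m += lpSum(w[j, t] for j, (_, _, due) in jobs.items() if t <= due) <= capacity[t]

m.solve()
print("accepted:", [j for j in jobs if x[j].value() > 0.5], "profit:", value(m.objective))
```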
Abstract:
There is a growing body of literature that provides evidence for the efficacy of positive youth development programs in general, and preliminary empirical support for the efficacy of the Changing Lives Program (CLP) in particular. This dissertation sought to extend previous efforts to develop and preliminarily examine the Transformative Goal Attainment Scale (TGAS) as a measure of participant empowerment in the promotion of positive development. Consistent with recent advances in the use of qualitative research methods, it also sought to further investigate the utility of Relational Data Analysis (RDA) for providing categorizations of qualitative open-ended response data. In particular, a qualitative index of Transformative Goals (TG) was developed to complement the previously developed quantitative index of Transformative Goal Attainment (TGA), and RDA procedures for calculating reliability and content validity were refined. Second, as a Stage I pilot/feasibility study, this study preliminarily examined the potentially mediating role of empowerment, as indexed by the TGAS, in the promotion of positive development. Fifty-seven participants took part in this study: forty CLP intervention participants and seventeen control condition participants. All 57 participants were administered the study's measures just prior to and just following the fall 2003 semester; the study thus used a short-term longitudinal quasi-experimental research design with a comparison control group. RDA procedures were refined and applied to the categorization of open-ended response data regarding participants' transformative goals (TG) and future possible selves (PSQ-QE). These analyses revealed relatively strong, indirect evidence for the construct validity of the categories as well as their theoretically meaningful structural organization, thereby providing sufficient support for the utility of RDA procedures in the categorization of qualitative open-ended response data. In addition, transformative goals (TG), future possible selves (PSQ-QE), and the quantitative index of perceived goal attainment (TGA) were evaluated as potential mediators of positive development by testing their relationships to other indices of positive intervention outcome within a four-step method involving both analysis of variance (ANOVA and repeated-measures ANOVA) and regression analysis. Though more limited in scope than the efforts at developing and refining the measures of these mediators, these results were also promising.
Abstract:
The major barrier to practical optimization of pavement preservation programming has always been that, for formulations in which the identity of individual projects is preserved, the solution space grows exponentially with problem size to the point where it becomes unmanageable by traditional analytical optimization techniques within reasonable limits. This is attributable to combinatorial explosion, that is, exponential growth in the number of combinations. The relatively large number of constraints often present in real-life pavement preservation programming problems, together with the trade-off considerations required among preventive maintenance, rehabilitation, and reconstruction, is yet another factor that contributes to the solution complexity. In this research study, a new integrated multi-year optimization procedure was developed to solve network-level pavement preservation programming problems through cost-effectiveness based evolutionary programming analysis, using the Shuffled Complex Evolution (SCE) algorithm. A case study problem was analyzed to illustrate the robustness and consistency of the SCE technique in solving network-level pavement preservation problems. The output of this program is a list of maintenance and rehabilitation (M&R) treatment strategies for each identified segment of the network in each programming year, together with the impact on the overall performance of the network, in terms of the performance levels achieved by the recommended optimal M&R strategy. The results show that SCE is very efficient and consistent in simultaneously considering the trade-off between various pavement preservation strategies while preserving the identity of the individual network segments. The flexibility of the technique is also demonstrated in the sense that, by suitably coding the problem parameters, it can be used to solve several forms of pavement management programming problems. It is recommended that for large networks some form of decomposition technique be applied to aggregate sections that exhibit similar performance characteristics into links, such that whatever M&R alternative is recommended for a link can be applied to all the sections belonging to it. In this way the problem size, and hence the solution time, can be greatly reduced to a more manageable solution space. The study concludes that the robust search characteristics of SCE are well suited to solving the combinatorial problems in long-term network-level pavement M&R programming and provide a rich area for future research.
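Shuffled Complex Evolution partitions a population into complexes, evolves each with simplex-like steps, and reshuffles; the sketch below is only a bare-bones evolutionary loop on a toy treatment-selection objective, meant to convey the population-based search idea rather than reproduce the SCE algorithm or the study's cost-effectiveness model. All treatment costs and effectiveness values are invented.

```python
import random

# Hypothetical per-segment treatment options: (cost, effectiveness gain).
TREATMENTS = [(0, 0.0), (10, 2.0), (40, 6.0), (90, 10.0)]  # do-nothing, PM, rehab, reconstruction
N_SEGMENTS, BUDGET = 12, 300

def score(plan):
    """Total effectiveness, heavily penalized if the plan exceeds the budget."""
    cost = sum(TREATMENTS[t][0] for t in plan)
    effectiveness = sum(TREATMENTS[t][1] for t in plan)
    return effectiveness - 100.0 * max(0, cost - BUDGET)

def evolve(pop_size=30, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(TREATMENTS)) for _ in range(N_SEGMENTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        survivors = pop[: pop_size // 2]           # keep the better half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(N_SEGMENTS)] = rng.randrange(len(TREATMENTS))  # mutate one segment
            children.append(child)
        pop = survivors + children
    return max(pop, key=score)

best = evolve()
print(best, round(score(best), 2))
```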
Abstract:
Although providing calorie information at the point of purchase in fast food restaurants has been proposed as a method to decrease calorie choices and combat obesity, research results have been mixed. Much of the supportive research has weak methodology and is limited, so there is a demonstrated need to develop better techniques to help consumers make lower-calorie food choices. Eating at fast food restaurants has been positively associated with weight gain. The current study explored the possibility of adding exercise equivalents (EE) (the physical activity required to burn off the calories in the food) alongside calorie information as a possible way to facilitate lower-calorie choices at the point of choice in fast food restaurants. This three-group experimental study of 18- to 34-year-old overweight and obese women examined whether presenting caloric information in the form of EE at the point of choice in fast food restaurants leads to lower-calorie food choices compared to presenting simple caloric information or no information at all. Methods: A randomized repeated-measures experiment was conducted. Participants ordered a fast food meal from Burger King with menus that contained only the names of the food choices (Lunch 1). One week later (Lunch 2), study participants were given one of three menus that varied: no information, calorie information, or calorie information plus EE. Study participants included 62 college-aged students. Additionally, the study controlled for dietary restraint by blocking participants, before randomization, into the three groups. Results: A repeated-measures analysis of variance was conducted. The study was not sufficiently powered: while it was designed to detect large effect sizes, a small effect size of .026 was observed. No significant differences were found in the foods ordered among the various menu conditions. Conclusion: Menu labeling alone may not be enough to reduce calories chosen at the point of choice in restaurants. Additional research is necessary to determine whether calorie information plus EE at the point of choice would lead to fewer calories chosen at a meal. Studies should also examine long-term, repeated exposure to determine the effectiveness of calorie and/or EE information at the point of choice in fast food restaurants.
Abstract:
The primary goal of this dissertation is to develop point-based rigid and non-rigid image registration methods that are more accurate than existing methods. We first present point-based PoIRe, which provides the framework for point-based global rigid registrations. It allows a choice of different search strategies, including (a) branch-and-bound, (b) probabilistic hill-climbing, and (c) a novel hybrid method that takes advantage of the best characteristics of the other two. We use a robust similarity measure that is insensitive to noise, which is often introduced during feature extraction. We show the robustness of PoIRe by using it to register images obtained with an electronic portal imaging device (EPID), which have large amounts of scatter and low contrast. To evaluate PoIRe we used (a) simulated images and (b) images with fiducial markers; PoIRe was extensively tested with 2D EPID images and with images generated from 3D Computed Tomography (CT) and Magnetic Resonance (MR) images. PoIRe was also evaluated using benchmark data sets from the blind retrospective evaluation project (RIRE). We show that PoIRe is better than existing methods such as Iterative Closest Point (ICP) and methods based on mutual information. We also present a novel point-based local non-rigid shape registration algorithm. We extend the robust similarity measure used in PoIRe to non-rigid registrations, adapting it to a free-form deformation (FFD) model and making it robust to local minima, a drawback common to existing non-rigid point-based methods. For non-rigid registrations we show that it performs better than existing methods and that it is less sensitive to starting conditions. We test our non-rigid registration method using available benchmark data sets for shape registration. Finally, we also explore the extraction of features invariant to changes in perspective and illumination, and how they can help improve the accuracy of multi-modal registration. For multi-modal registration of EPID-DRR images we present a method based on a local descriptor defined by a vector of complex responses to a circular Gabor filter.
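For orientation, the snippet below performs one iteration of the classic point-based rigid alignment step that ICP repeats: match each point to its nearest neighbour, then recover the rotation and translation with an SVD-based (Kabsch/Procrustes) fit. It illustrates the baseline family of methods PoIRe is compared against, not PoIRe's robust similarity measure, and runs on synthetic 2-D points.

```python
import numpy as np

def rigid_align_once(source, target):
    """One ICP-style step: nearest-neighbour matching + SVD Procrustes fit.

    source, target -- (N, 2) and (M, 2) arrays of 2-D points
    Returns the rotation matrix R and translation t mapping source toward target.
    """
    # Match every source point to its nearest target point.
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[d.argmin(axis=1)]
    # Procrustes: optimal rigid transform between the matched point sets.
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Synthetic example: a rotated, shifted copy of a small point cloud.
rng = np.random.default_rng(0)
src = rng.random((20, 2))
theta = np.deg2rad(10)
true_R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
tgt = src @ true_R.T + np.array([0.3, -0.1])
R, t = rigid_align_once(src, tgt)
print(np.round(R, 3), np.round(t, 3))
```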
Abstract:
The extant literature has studied the determinants of firms' location decisions with the help of host country characteristics and distances between home and host countries. Firm resources and internationalization strategies have received limited attention in this literature. To address this gap, the research question in this dissertation was whether and how firms' resources and internationalization strategies affect the international location decisions of emerging market firms. To explore the research question, data were hand-collected from Indian software firms on their location decisions taken between April 2000 and March 2009. To analyze the multi-level longitudinal dataset, hierarchical linear modeling was used. The results showed that the internationalization strategies, namely market-seeking or labor-seeking, had a direct impact on firms' location decisions. This direct relationship was moderated by a firm resource which, in the case of Indian software firms, was appraisal at CMMI level 5. Indian software firms located in developed countries with a market-seeking strategy and in emerging markets with a labor-seeking strategy. However, software firms with a resource such as CMMI level-5 appraisal, when in a labor-seeking mode, were more likely to locate in a developed country over an emerging market than firms without the appraisal. Software firms with CMMI level-5 appraisal, when in market-seeking mode, were likewise more likely to locate in a developed country over an emerging market than firms without the appraisal. It was concluded that the internationalization strategies and resources of companies predicted their location choices, over and above the variables studied in the theoretical field of location determinants.
Abstract:
Environmentally conscious construction has received a significant amount of research attention during the last few decades. Even though the construction literature is rich in studies that emphasize the importance of environmental impact during the construction phase, most previous studies have failed to combine environmental analysis with other project performance criteria, mainly because they have overlooked the multi-objective nature of construction projects. In order to achieve environmentally conscious construction, multiple objectives and their relationships need to be analyzed successfully in the complex construction environment. The complex construction system is composed of changing project conditions that affect the relationship between the time, cost and environmental impact (TCEI) of construction operations, yet this impact is still unknown to construction professionals. Studying this impact is vital to fulfilling multiple project objectives and achieving environmentally conscious construction. This research proposes an analytical framework to analyze the impact of changing project conditions on the relationship of TCEI. The study includes greenhouse gas (GHG) emissions as an environmental impact category. The methodology utilizes multi-agent systems, multi-objective optimization, the analytic network process, and system dynamics tools to study the relationships of TCEI and support decision-making under the influence of project conditions. Life cycle assessment (LCA) is applied to the evaluation of environmental impact in terms of GHG. The mixed-method approach allowed for the collection and analysis of qualitative and quantitative data. Structured interviews with professionals in the highway construction field were conducted to obtain their perspectives on decision-making under the influence of certain project conditions, while the quantitative data were collected from the Florida Department of Transportation (FDOT) for highway resurfacing projects. The data collected were used to test the framework. The framework yielded statistically significant results in simulating project conditions and optimizing TCEI. The results showed that changes in project conditions had a significant impact on the TCEI optimal solutions. The correlations among TCEI suggested that they affect each other positively, but with different strengths. The findings of the study will assist contractors in visualizing the impact of their decisions on the relationship of TCEI.
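A basic building block of any such multi-objective analysis is identifying the non-dominated (Pareto-optimal) alternatives over time, cost, and emissions; the short sketch below does only that, with hypothetical (time, cost, GHG) alternatives, and is not the dissertation's multi-agent/ANP/system-dynamics framework.

```python
def pareto_front(alternatives):
    """Return the alternatives not dominated on (time, cost, GHG), all minimized.

    alternatives -- list of (time, cost, ghg) tuples in hypothetical units
    """
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    return [a for a in alternatives
            if not any(dominates(b, a) for b in alternatives if b != a)]

# Hypothetical resurfacing alternatives: (days, cost in $k, tonnes CO2e).
alts = [(30, 500, 120), (32, 510, 125), (35, 450, 150), (28, 540, 110), (40, 430, 160)]
print(pareto_front(alts))  # the (32, 510, 125) option is dominated and drops out
```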
Abstract:
Ensemble stream modeling and data cleaning are sensor information processing systems that have different training and testing methods by which their goals are cross-validated. This research examines a mechanism that seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events so as to eliminate uncorrelated noise and choose the most likely model without overfitting, thus obtaining higher model confidence. Higher quality streams can be realized by combining many short streams into an ensemble of the desired quality. The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction for a bush or natural forest-fire event, we take the burnt area (BA*), the sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing, for two reasons: first, the histogram of fire activity is highly skewed; second, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate the averaging effects; the resulting histogram is more satisfactory, and conceptual knowledge is learned from the sensor streams. Second is the process of feature induction by cross-validating attributes with single or multi-target variables to minimize training error. We use the F-measure score, which combines precision and recall, to determine the false alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features. A sensitive variance measure such as the f-test is performed during each node's split to select the best attribute. The ensemble stream model approach proved to improve when using complicated features with a simpler tree classifier. The ensemble framework for data cleaning, together with the enhancements to quantify quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction) of sensors, led to the formation of streams for sensor-enabled applications. This further motivates the novelty of stream quality labeling and its importance in handling the vast amounts of real-time mobile streams generated today.
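Since the evaluation above relies on the F-measure, the short sketch below computes precision, recall, and F1 from predicted versus actual fire/no-fire labels; the labels are invented for illustration only.

```python
def f_measure(actual, predicted, positive="fire"):
    """Precision, recall, and F1 for a binary event label.

    actual, predicted -- equal-length sequences of labels (illustrative data below)
    """
    tp = sum(a == positive and p == positive for a, p in zip(actual, predicted))
    fp = sum(a != positive and p == positive for a, p in zip(actual, predicted))
    fn = sum(a == positive and p != positive for a, p in zip(actual, predicted))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Invented stream labels: detected fire events vs. ground truth.
actual    = ["fire", "none", "fire", "none", "fire", "none", "none", "fire"]
predicted = ["fire", "fire", "none", "none", "fire", "none", "fire", "fire"]
print(f_measure(actual, predicted))
```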