19 results for Engineering, Industrial|Engineering, System Science|Operations Research

in Digital Commons at Florida International University


Relevance: 100.00%

Abstract:

This dissertation delivers a framework to diagnose the Bull-Whip Effect (BWE) in supply chains and then identify methods to minimize it. Such a framework is needed because, in spite of the significant literature discussing the bull-whip effect, many companies continue to experience the wide variations in demand that are indicative of it. While the theory of the bull-whip effect is well established, there is still no engineering framework and method to systematically identify the problem, diagnose its causes, and identify remedies.

The present work seeks to fill this gap by providing a holistic, systems perspective on bull-whip identification and diagnosis. The framework employs the SCOR reference model to examine the supply chain processes with a baseline measure of demand amplification. The supply chain's structural and behavioral features are then investigated by means of system dynamics modeling.

The diagnostic framework, called the Demand Amplification Protocol (DAMP), relies not only on improvements to existing methods but also on original developments introduced to accomplish successful diagnosis. DAMP contributes a comprehensive methodology that captures the dynamic complexities of supply chain processes, a BWE measurement method suitable for actual supply chains because of its low data requirements, and a BWE scorecard for relating established causes to a central BWE metric. In addition, the dissertation makes a methodological contribution to the analysis of system dynamics models with a statistical screening technique called SS-Opt, which determines the inputs with the greatest impact on the bull-whip effect by means of perturbation analysis and subsequent multivariate optimization. The dissertation describes the implementation of the DAMP framework in an actual case study that presents the approach, analysis, results, and conclusions. The case study suggests that a solution balancing costs against demand amplification can better serve both the firms' and the supply chain's interests. Insights point to supplier network redesign, postponement in manufacturing operations, and collaborative forecasting agreements with main distributors.
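
The abstract does not reproduce DAMP's measurement method; as an illustration only, the most common bull-whip metric in the literature is the ratio of order variance to demand variance at a supply chain link. A minimal Python sketch with hypothetical weekly data:

    import statistics

    def bwe_ratio(orders, demand):
        """Bull-whip ratio: variance of orders placed upstream divided by
        variance of demand received. Values > 1 indicate amplification."""
        return statistics.variance(orders) / statistics.variance(demand)

    # Hypothetical weekly data for one retailer-distributor link.
    demand = [100, 104, 98, 101, 97, 105, 99, 102]
    orders = [100, 112, 85, 108, 88, 118, 90, 110]
    print(f"BWE ratio: {bwe_ratio(orders, demand):.2f}")  # > 1 => amplification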

Relevance: 100.00%

Abstract:

This research studies a hybrid flow shop problem in which one stage has parallel batch-processing machines and the other stages have discrete-processing machines, and in which jobs have arbitrary sizes. The objective is to minimize the makespan for a set of jobs. The problem is denoted as FF: batch1, sj: Cmax. The problem is formulated as a mixed-integer linear program, and the commercial solver AMPL/CPLEX is used to solve problem instances to optimality. Experimental results show that AMPL/CPLEX requires considerable time to find the optimal solution for even a small problem; a 6-job instance requires 2 hours on average. A bottleneck-first-decomposition (BFD) heuristic is proposed in this study to overcome the prohibitive computation time encountered with the commercial solver. The proposed BFD heuristic is inspired by the shifting bottleneck heuristic. It decomposes the entire problem into three sub-problems and schedules the sub-problems one by one. The BFD heuristic consists of four major steps: formulating sub-problems, prioritizing sub-problems, solving sub-problems, and re-scheduling. For solving the sub-problems, two heuristic algorithms are proposed: one for scheduling a hybrid flow shop with discrete-processing machines, and the other for scheduling parallel batching machines (single stage). Both consider job arrival and delivery times. A designed experiment is conducted to evaluate the effectiveness of the proposed BFD, which is further evaluated against a set of common heuristics, including a randomized greedy heuristic and five dispatching rules. The results show that the proposed BFD heuristic outperforms all of these algorithms. To evaluate the quality of the heuristic solution, a procedure is developed to calculate a lower bound on the makespan for the problem under study; the bound obtained is tighter than other bounds developed for related problems in the literature. A meta-search approach based on the genetic algorithm concept is developed to evaluate the significance of further improving the solution obtained from the BFD heuristic. The experiment indicates that it reduces the makespan by 1.93% on average, within negligible time, when the problem size is less than 50 jobs.
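
The BFD heuristic itself involves several interacting steps; the sketch below illustrates only the flavor of the single-stage batching sub-problem, using a simple first-fit-decreasing rule. The function name, job data, and capacity are hypothetical, not the dissertation's algorithm:

    def form_batches(jobs, capacity):
        """First-fit-decreasing batching: sort jobs by size (descending) and
        place each into the first batch with enough residual capacity.
        jobs: list of (job_id, size); returns list of batches (lists of ids)."""
        batches, loads = [], []
        for job_id, size in sorted(jobs, key=lambda j: -j[1]):
            for i, load in enumerate(loads):
                if load + size <= capacity:
                    batches[i].append(job_id)
                    loads[i] += size
                    break
            else:  # no existing batch fits: open a new one
                batches.append([job_id])
                loads.append(size)
        return batches

    print(form_batches([("J1", 4), ("J2", 3), ("J3", 2), ("J4", 5)], capacity=6))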

Relevance: 100.00%

Abstract:

This research aimed to develop a research framework for the emerging field of enterprise systems engineering (ESE). The framework consists of an ESE definition, an ESE classification scheme, and an ESE process. This study views an enterprise as a system that creates value for its customers; the framework was therefore developed using systems theory and IDEF methodologies. The study defined ESE as an engineering discipline that develops and applies systems theory and engineering techniques to the specification, analysis, design, and implementation of an enterprise over its life cycle. The proposed ESE classification scheme breaks an enterprise system down into four elements: work, resources, decision, and information. Each enterprise element is specified with four system facets: strategy, competency, capacity, and structure. Each element-facet combination is subject to the engineering process of specification, analysis, design, and implementation, to achieve its pre-specified performance with respect to cost, time, quality, and benefit to the enterprise. This framework is intended for identifying research voids in the ESE discipline. It also helps apply engineering and systems tools to this emerging field, captures the relationships among various enterprise aspects, and bridges the gap between engineering and management practices in an enterprise. The proposed ESE process is generic. It consists of a hierarchy of engineering activities presented in an IDEF0 model, with each activity defined by its inputs, outputs, constraints, and mechanisms. The output of an ESE effort can be a partial or whole enterprise system design for its physical, managerial, and/or informational layers. The proposed ESE process is applicable to a new enterprise system design or to an engineering change in an existing system. The long-term goal of this study is the development of a scientific foundation for ESE research and development.
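
As a reading aid only, the 4x4 classification scheme can be enumerated directly; the sketch below lists the sixteen element-facet combinations, each of which passes through the four engineering activities named in the abstract:

    from itertools import product

    elements = ["work", "resources", "decision", "information"]
    facets = ["strategy", "competency", "capacity", "structure"]
    process = ["specification", "analysis", "design", "implementation"]

    # Each element-facet cell is engineered through all four activities.
    for element, facet in product(elements, facets):
        print(f"{element:<12} x {facet:<11} -> {' -> '.join(process)}")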

Relevance: 100.00%

Abstract:

This research focuses on developing a capacity planning methodology for emerging concurrent engineer-to-order (ETO) operations, with the primary focus on capacity planning at the sales stage. The study examines the characteristics of capacity planning in a concurrent ETO operation environment, models the problem analytically, and proposes a practical capacity planning methodology for concurrent ETO operations in industry. A computer program that mimics a concurrent ETO operation environment was written to validate the proposed methodology and to test a set of rules that affect the performance of a concurrent ETO operation.

This study takes a systems engineering approach to the problem and employs systems engineering concepts and tools for modeling and analysis, as well as for developing a practical solution. The study depicts a concurrent ETO environment in which capacity is planned. The capacity planning problem is modeled as a mixed-integer program and then solved for smaller-sized instances to evaluate its validity and solution complexity. The objective is to select the best set of available jobs to maximize profit while having sufficient capacity to meet each due-date expectation.

The nature of capacity planning for concurrent ETO operations differs from other operation modes, and the search for an effective solution to this problem is an emerging research field. This study characterizes the capacity planning problem and proposes a solution approach. The mathematical model relates work requirements to capacity over the planning horizon, and the methodology is proposed for solving industry-scale problems. Along with the capacity planning methodology, a set of heuristic rules was evaluated for improving concurrent ETO planning.
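
A hedged, single-resource simplification of the order-selection idea can be written as a 0-1 knapsack with the PuLP modeling package; all order data are hypothetical, and the dissertation's full mixed-integer program additionally handles due dates and multiple stages:

    import pulp

    profit = {"A": 40, "B": 55, "C": 30}   # hypothetical order profits
    hours = {"A": 20, "B": 35, "C": 15}    # hypothetical capacity usage
    available_hours = 40

    model = pulp.LpProblem("order_selection", pulp.LpMaximize)
    accept = pulp.LpVariable.dicts("accept", list(profit), cat="Binary")
    model += pulp.lpSum(profit[o] * accept[o] for o in profit)  # maximize profit
    model += pulp.lpSum(hours[o] * accept[o] for o in profit) <= available_hours
    model.solve(pulp.PULP_CBC_CMD(msg=False))
    print([o for o in profit if accept[o].value() > 0.5])       # accepted orders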

Relevance: 100.00%

Abstract:

The span of control is the single most discussed concept in classical and modern management theory. In specifying conditions for organizational effectiveness, the span of control has generally been regarded as a critical factor. Existing research has focused mainly on qualitative methods for analyzing this concept, for example, heuristic rules based on experience and/or intuition. This research takes a quantitative approach and formulates the problem as a binary integer model, which is used as a tool to study organizational design. The model considers a range of requirements affecting the management and supervision of a given set of jobs in a company. The decision variables include the allocation of jobs to workers, considering the complexity and compatibility of each job with respect to each worker, and the management requirements for planning, execution, training, and control activities in a hierarchical organization. The objective of the model is to minimize operations cost, defined as the sum of supervision costs at each level of the hierarchy and the costs of workers assigned to jobs. The model is intended for application in make-to-order industries as a design tool. It could also be applied to make-to-stock companies as an evaluation tool, to assess the optimality of their current organizational structure. Extensive experiments were conducted to validate the model, to study its behavior, and to evaluate the impact of changing parameters on practical problems. This research proposes a meta-heuristic approach to solving large-size problems, based on the concept of greedy algorithms and the Meta-RaPS algorithm. The proposed heuristic was evaluated with two measures of performance: solution quality and computational speed. Quality is assessed by comparing the obtained objective function value to that of the optimal solution; computational efficiency is assessed by comparing the computer time used by the proposed heuristic to the time taken by a commercial software system. Test results show that the proposed heuristic generates good solutions in a time-efficient manner.
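
The Meta-RaPS construction phase can be sketched generically: with some priority percentage, take the greedy-best choice; otherwise choose randomly among candidates within a restriction percentage of the best. The job/worker data and parameter values below are hypothetical, not the dissertation's model:

    import random

    def meta_raps_assign(jobs, workers, cost, priority_pct=70, restriction_pct=15):
        """One Meta-RaPS construction pass: assign each job to a worker.
        With probability priority_pct, take the cheapest worker; otherwise
        pick randomly among workers within restriction_pct of the cheapest."""
        assignment = {}
        for job in jobs:
            ranked = sorted(workers, key=lambda w: cost[job, w])
            best = cost[job, ranked[0]]
            if random.uniform(0, 100) <= priority_pct:
                assignment[job] = ranked[0]
            else:
                candidates = [w for w in ranked
                              if cost[job, w] <= best * (1 + restriction_pct / 100)]
                assignment[job] = random.choice(candidates)
        return assignment

    cost = {("j1", "w1"): 3, ("j1", "w2"): 5, ("j2", "w1"): 4, ("j2", "w2"): 2}
    print(meta_raps_assign(["j1", "j2"], ["w1", "w2"], cost))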

Relevance: 100.00%

Abstract:

Research on the adoption of innovations by individuals has been criticized for focusing on the factors that lead to adoption or rejection of an innovation while ignoring important aspects of the dynamic process that takes place. Theoretical process-based models hypothesize that individuals go through consecutive stages of information gathering and decision making, but they do not clearly explain the mechanisms that cause an individual to leave one stage and enter the next. Research on the dynamics of the adoption process has lacked a structurally formal, quantitative description of the process.

This dissertation addresses the adoption process of technological innovations from a systems theory perspective and assumes that individuals move through different, not necessarily consecutive, states determined by the levels of quantifiable state variables. It is proposed that the levels of these state variables determine the state a potential adopter is in, and that events that alter these levels can cause individuals to migrate to different states.

It was believed that systems theory could provide the required infrastructure to model the innovation adoption process, particularly as applied to information technologies, in a formal, structured fashion. This dissertation assumed that an individual progressing through an adoption process could be considered a system, where the occurrence of different events affects the system's overall behavior and ultimately the adoption outcome. The research effort aimed at identifying the states of such a system and the significant events that could lead the system from one state to another. By mapping these attributes onto an “innovation adoption state space,” the adoption process could be fully modeled and used to assess the status, history, and possible outcomes of a specific adoption process.

A group of Executive MBA students was observed as they adopted Internet-based technological innovations. The data collected were used to identify clusters in the values of the state variables and consequently define significant system states. Additionally, events were identified across the student sample that systematically moved the system from one state to another. The compilation of identified states and change-related events enabled the definition of an innovation adoption state-space model.
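
One plausible way to operationalize "clusters in the values of the state variables" is k-means clustering; the sketch below is illustrative only, with hypothetical state variables and data rather than the dissertation's:

    # Illustrative only: cluster observed levels of the state variables into
    # candidate adoption states. Variable names are hypothetical.
    import numpy as np
    from sklearn.cluster import KMeans

    # rows: observations of (knowledge, interest, perceived_usefulness)
    observations = np.array([
        [0.1, 0.2, 0.1], [0.2, 0.1, 0.2],   # low on all variables
        [0.8, 0.7, 0.3], [0.7, 0.8, 0.2],   # informed but unconvinced
        [0.9, 0.9, 0.9], [0.8, 0.9, 0.8],   # near adoption
    ])
    states = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(observations)
    print(states)  # cluster label = candidate adoption state per observation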

Relevance: 100.00%

Abstract:

In recent years, a surprising new phenomenon has emerged in which globally distributed online communities collaborate to create useful and sophisticated computer software. These open source software groups are composed of generally unaffiliated individuals and organizations who work in a seemingly chaotic fashion and who participate on a voluntary basis without direct financial incentive.

The purpose of this research is to investigate the relationship between the social network structure of these intriguing groups and their level of output and activity, where social network structure is defined as (1) closure or connectedness within the group, (2) bridging ties that extend outside the group, and (3) leader centrality within the group. Based on well-tested theories of social capital and centrality in teams, propositions were formulated suggesting that the social network structures of successful open source software project communities will exhibit high levels of bridging and moderate levels of closure and leader centrality.

The research setting was the SourceForge hosting organization, and a study population of 143 project communities was identified. Independent variables included measures of closure and leader centrality defined over conversational ties, along with measures of bridging defined over membership ties. Dependent variables included source code commits and software releases for community output, and software downloads and project site page views for community activity. A cross-sectional study design was used, and archival data were extracted and aggregated for the two-year period following the first release of project software. The resulting compiled variables were analyzed using multiple linear and quadratic regressions, controlling for group size and conversational volume.

Contrary to theory-based expectations, the results showed that successful project groups exhibited low levels of closure and that the levels of bridging and leader centrality were not important factors of success. These findings suggest that the creation and use of open source software may represent a fundamentally new socio-technical development process, one that disrupts the team paradigm and triggers the need for new theories of collaborative development. Such theories could point toward the broader application of open source methods to the creation of knowledge-based products other than software.
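
The three structural constructs have standard graph-theoretic analogues; below is a sketch using networkx on a toy conversation graph (the study's SourceForge data and exact operationalizations are not reproduced here):

    import networkx as nx

    g = nx.Graph()
    g.add_edges_from([("lead", "a"), ("lead", "b"), ("lead", "c"),
                      ("a", "b"), ("b", "c")])

    closure = nx.transitivity(g)                      # closure: triangle density
    leader_centrality = nx.degree_centrality(g)["lead"]
    # Bridging is measured over membership ties in the study; ties reaching
    # nodes outside the group would be the analogue on a two-mode graph.
    print(f"closure={closure:.2f}, leader centrality={leader_centrality:.2f}")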

Relevance: 100.00%

Abstract:

In recent years, there has been enormous growth in location-aware devices, such as GPS-embedded cell phones, mobile sensors, and radio-frequency identification tags. The age of combining sensing, processing, and communication in one device gives rise to a vast number of applications, leading to endless possibilities and a realization of mobile Wireless Sensor Network (mWSN) applications. As computing, sensing, and communication become more ubiquitous, trajectory privacy becomes a critical piece of information and an important factor for commercial success. While on the move, sensor nodes continuously transmit data streams of sensed values and spatiotemporal information, known as "trajectory information." If adversaries can intercept this information, they can monitor the trajectory path and capture the location of the source node.

This research stems from the recognition that the wide applicability of mWSNs will remain elusive unless a trajectory privacy preservation mechanism is developed. The outcome seeks to lay a firm foundation for trajectory privacy preservation in mWSNs against both external and internal trajectory privacy attacks. First, to prevent external attacks, we investigated a context-based, trajectory privacy-aware routing protocol to prevent eavesdropping attacks. Traditional shortest-path routing algorithms give adversaries the ability to locate the target node within a certain area. We designed a novel privacy-aware routing phase and utilized the trajectory dissimilarity between mobile nodes to mislead adversaries about the location where a message started its journey. Second, to detect internal attacks, we developed a software-based attestation solution to detect compromised nodes. We created a dynamic attestation node chain among neighboring nodes to examine the memory checksums of suspicious nodes; the computation time for memory traversal is improved compared with previous work. Finally, we revisited the trust issue in the design of trajectory privacy preservation mechanisms, using Bayesian game theory to model and analyze the behaviors of cooperative, selfish, and malicious nodes in trajectory privacy preservation activities.
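
Software-based attestation commonly works by having a verifier issue a fresh nonce and the suspect node compute a checksum over its memory in a nonce-derived pseudorandom order, so a compromised node cannot precompute or replay a valid answer. The sketch below illustrates that general pattern, not the dissertation's specific attestation chain; all values are hypothetical:

    import hashlib
    import random

    def attest(memory: bytes, nonce: int, block=64) -> str:
        """Checksum memory blocks in a nonce-seeded pseudorandom order."""
        order = list(range(0, len(memory), block))
        random.Random(nonce).shuffle(order)
        h = hashlib.sha256()
        for offset in order:
            h.update(memory[offset:offset + block])
        return h.hexdigest()

    good_image = bytes(1024)               # known-good firmware image
    nonce = 424242                         # fresh challenge per attestation
    expected = attest(good_image, nonce)   # computed by the verifier
    reported = attest(good_image, nonce)   # would come from the suspect node
    print("node trusted" if reported == expected else "node compromised")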

Relevance: 100.00%

Abstract:

The promise of Wireless Sensor Networks (WSNs) is the autonomous collaboration of a collection of sensors to accomplish specific goals that a single sensor cannot. Sensor networking serves a range of applications by providing raw data as the foundation for further analyses and actions. Imprecision in the collected data can seriously mislead the decision-making process of sensor-based applications, resulting in ineffectiveness or failure to meet application objectives. Because inherent WSN characteristics commonly corrupt the raw sensor readings, many research efforts attempt to improve the accuracy of this corrupted, or "dirty," sensor data, which needs to be cleaned or corrected. However, the data cleaning solutions developed so far restrict themselves to static WSNs, in which deployed sensors rarely move during operation. Many emerging applications relying on WSNs need sensor mobility to enhance application efficiency and usage flexibility: the locations of deployed sensors need to be dynamic, and each sensor functions independently while contributing its resources. Sensors mounted on vehicles for monitoring traffic conditions are one prospective example. Sensor mobility makes the network topology and the correlations among sensor streams transient; because they rely on static relationships among sensors, the existing methods for cleaning sensor data in static WSNs are invalid in such mobile scenarios. A data cleaning solution that considers sensor movement is therefore actively needed.

This dissertation aims to improve the quality of sensor data by considering the consequences of the various trajectory relationships of autonomous mobile sensors in the system. First, we address the dynamic network topology due to sensor mobility: the concept of a virtual sensor is presented and used for the spatio-temporal selection of neighboring sensors to help clean sensor data streams. This is one of the first methods to clean data in mobile sensor environments. We also study the mobility patterns of moving sensors relative to the boundaries of sub-areas of interest, and develop a belief-based analysis to determine reliable sets of neighboring sensors and improve cleaning performance, especially when node density is relatively low. Finally, we design a novel sketch-based technique to clean data from internal sensors where spatio-temporal relationships among sensors cannot yield data correlations among sensor streams.
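
A minimal sketch of neighbor-based cleaning follows, assuming a suspect reading is replaced by a distance-weighted average over spatio-temporally selected neighbors; the weighting rule and data are illustrative, not the dissertation's virtual-sensor method:

    def clean_reading(raw, neighbors):
        """Replace a suspect reading with a distance-weighted average of
        readings from spatio-temporally selected neighbors.
        neighbors: list of (reading, distance) pairs; closer weighs more."""
        if not neighbors:
            return raw
        weights = [1.0 / (d + 1e-9) for _, d in neighbors]
        return sum(r * w for (r, _), w in zip(neighbors, weights)) / sum(weights)

    # A wildly off raw reading (95.0) corrected toward nearby neighbors (~20).
    print(clean_reading(95.0, [(20.1, 5.0), (19.8, 8.0), (20.4, 12.0)]))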

Relevance: 100.00%

Abstract:

Recent intervention efforts to promote positive identity in troubled adolescents have begun to draw on the potential for integrating the self-construction and self-discovery perspectives in conceptualizing identity processes, as well as integrating quantitative and qualitative data analytic strategies. This study reports an investigation of the Changing Lives Program (CLP) using an Outcome Mediation (OM) evaluation model, an integrated model for evaluating targets of intervention, that theoretically incorporates a Self-Transformative Model of Identity Development (STM), a proposed integration of self-discovery and self-construction identity processes. The study also used a Relational Data Analysis (RDA) integration of quantitative and qualitative analysis strategies and a structural equation modeling (SEM) approach to construct and evaluate the hypothesized OM/STM model. The CLP is a community-supported positive youth development intervention targeting multi-problem youth in alternative high schools in the Miami-Dade County Public Schools (M-DCPS). The 259 participants for this study were drawn from the CLP's archival data file. The model evaluated in this study utilized three indices of core identity processes, (1) personal expressiveness, (2) identity conflict resolution, and (3) informational identity style, which were conceptualized as mediators of the effects of participation in the CLP on change in two qualitative outcome indices of participants' sense of self and identity. Findings indicated that the model fit the data (χ2(10) = 3.638, p = .96; RMSEA = .00; CFI = 1.00; WRMR = .299). The pattern of findings supported the use of the STM in conceptualizing identity processes and provided support for the OM design. The findings also suggested the need for methods capable of detecting and rendering unique, sample-specific free-response data, to increase the likelihood of identifying emergent core developmental research concepts and constructs in studies of intervention/developmental change over time in ways not possible using fixed-response methods alone.
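
The outcome-mediation logic can be illustrated, in a much-reduced form, by a product-of-coefficients mediation check on simulated data; the study itself fit a full SEM (not reproduced here), and the variable names below are stand-ins:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    treatment = rng.integers(0, 2, 200)                  # CLP participation
    mediator = 0.5 * treatment + rng.normal(size=200)    # e.g., identity style
    outcome = 0.7 * mediator + rng.normal(size=200)      # change in sense of self

    # a: treatment -> mediator; b: mediator -> outcome (controlling treatment)
    a = sm.OLS(mediator, sm.add_constant(treatment)).fit().params[1]
    b = sm.OLS(outcome,
               sm.add_constant(np.column_stack([mediator, treatment]))).fit().params[1]
    print(f"indirect (mediated) effect a*b = {a * b:.3f}")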

Relevance: 100.00%

Abstract:

Parameter design is an experimental design and analysis methodology for developing robust processes and products, where robustness implies insensitivity to noise disturbances. Subtle experimental realities, such as the joint effect of process knowledge and analysis methodology, may affect the effectiveness of parameter design in precision engineering, where the objective is to detect minute variations in product and process performance. In this thesis, approaches to statistical forced-noise design and analysis methodologies were investigated with respect to detecting performance variations. Given a low degree of process knowledge, Taguchi's signal-to-noise ratio analysis was found to be more suitable for detecting minute performance variations than the classical approach based on polynomial decomposition. Comparison of the inner-array noise (IAN) and outer-array noise (OAN) structuring approaches showed that OAN is the more efficient design for precision engineering.
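
For reference, one common form of Taguchi's signal-to-noise ratio is the nominal-the-best statistic, SN = 10*log10(mean^2/variance), in decibels; the sketch below computes it for hypothetical measurements (higher SN means less sensitivity to noise at the same target level):

    import math

    def sn_nominal_the_best(values):
        """Taguchi nominal-the-best signal-to-noise ratio (dB):
        SN = 10*log10(mean^2 / sample variance)."""
        n = len(values)
        mean = sum(values) / n
        var = sum((v - mean) ** 2 for v in values) / (n - 1)
        return 10 * math.log10(mean ** 2 / var)

    print(f"{sn_nominal_the_best([10.02, 9.98, 10.01, 9.99]):.1f} dB")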

Relevance: 100.00%

Abstract:

Enterprise Resource Planning (ERP) systems are software programs designed to integrate the functional requirements and operational information needs of a business. Competitive pressures and entry standards for participation in major manufacturing supply chains are creating greater demand for small-business ERP systems. The proliferation of new ERP offerings adds complexity to the process of identifying the right ERP software for a small or medium-sized enterprise (SME). ERP selection is a process in which a faulty conclusion poses a significant risk of failure to SMEs. The literature reveals that failure rates in ERP implementation remain very high and that faulty selection processes contribute to them; however, the literature lacks a systematic methodology for ERP system selection by SMEs. This study provides a methodological approach to selecting the right ERP system for a small or medium-sized enterprise. The study employs Thomann's meta-methodology for methodology development: a survey of SMEs is conducted to inform the development of the methodology, and a case study is employed to test and revise the new methodology. The study shows that a rigorously developed, effective methodology that incorporates benchmarking experiences has been created and successfully employed. It is verified that the methodology may be applied to the domain of users it was developed to serve, and the test results are validated by expert users and stakeholders. Future research should investigate in greater detail the application of meta-methodologies to supplier selection and evaluation processes for services and software; additional research into the purchasing practices of small firms is also clearly needed.
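
Selection methodologies of this kind typically culminate in some weighted scoring of candidates against criteria; the sketch below shows only that generic step, with hypothetical criteria, weights, and ratings rather than the methodology's own:

    # Hypothetical weighted-criteria scoring of ERP candidates.
    criteria = {"fit_to_process": 0.4, "cost": 0.3, "vendor_support": 0.3}
    scores = {   # 1-5 ratings per candidate, per criterion (illustrative)
        "ERP-X": {"fit_to_process": 4, "cost": 3, "vendor_support": 5},
        "ERP-Y": {"fit_to_process": 5, "cost": 2, "vendor_support": 3},
    }
    for name, s in scores.items():
        total = sum(criteria[c] * s[c] for c in criteria)
        print(f"{name}: weighted score {total:.2f}")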

Relevance: 100.00%

Abstract:

Construction organizations typically deal with large volumes of project data containing valuable information, yet these organizations do not use the data effectively for planning and decision-making, for two reasons. First, the information systems in construction organizations are designed to support day-to-day construction operations; the data stored in them are often non-validated and non-integrated, and are available in formats that make it difficult for decision makers to use them for timely decisions. Second, the organizational structure and the IT infrastructure are often incompatible with the information systems, resulting in higher operational costs and lower productivity. These two issues were investigated in this research with the objective of developing systems structured for effective decision-making.

A framework was developed to guide the storage and retrieval of validated, integrated data for timely decision-making and to enable construction organizations to redesign their organizational structure and IT infrastructure to match their information system capabilities. The research focused on construction owner organizations continuously involved in multiple construction projects. Action research and data warehousing techniques were used to develop the framework.

One hundred sixty-three construction owner organizations were surveyed to assess their data needs, data management practices, and extent of use of information systems in planning and decision-making. For in-depth analysis, Miami-Dade Transit (MDT), which is in charge of all transportation-related construction projects in Miami-Dade County, was selected. A functional model and a prototype system were developed to test the framework. The results revealed significant improvements in data management and decision-support operations, examined through various qualitative (ease of data access, data quality, response time, productivity improvement, etc.) and quantitative (time savings and operational cost savings) measures. The research results were validated first by MDT and then by a representative group of twenty construction owner organizations involved in various types of construction projects.
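
The validate-and-integrate step at the heart of such a framework can be sketched as a small ETL pass with pandas; the field names and validation rules below are hypothetical:

    import pandas as pd

    raw = pd.DataFrame({
        "project_id": ["P1", "P1", "P2", None],
        "cost": [120.0, 120.0, -5.0, 300.0],  # duplicate, negative, orphan rows
    })
    clean = (raw.dropna(subset=["project_id"])   # reject rows missing keys
                .query("cost >= 0")              # reject invalid measures
                .drop_duplicates())              # reject exact duplicates
    summary = clean.groupby("project_id", as_index=False)["cost"].sum()
    print(summary)                               # integrated, decision-ready view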

Relevance: 100.00%

Abstract:

The increasing emphasis on mass customization, shortened product lifecycles, and synchronized supply chains, coupled with advances in information systems, is driving most firms toward make-to-order (MTO) operations. Increasing global competition, lower profit margins, and higher customer expectations force MTO firms to plan their capacity by managing effective demand. The goal of this research was to maximize the operational profits of a make-to-order operation by selectively accepting incoming customer orders and simultaneously allocating capacity for them at the sales stage.

To integrate the two decisions, a Mixed-Integer Linear Program (MILP) was formulated that can aid an operations manager in an MTO environment in selecting a set of potential customer orders such that all selected orders are fulfilled by their deadlines. The proposed model combines the order acceptance/rejection decision with detailed scheduling. Experiments with the formulation indicate that for larger problem sizes, the computational time required to determine an optimal solution is prohibitive. The formulation has a block-diagonal structure, however, and can be decomposed into a master problem and one or more sub-problems (one sub-problem per customer order) by applying Dantzig-Wolfe decomposition principles. To solve the original MILP efficiently, an exact branch-and-price algorithm was developed, and various approximation algorithms were developed to further improve the runtime. The experiments conducted unequivocally show the efficiency of these algorithms compared with a commercial optimization solver.

The existing literature addresses the static order acceptance problem for a single-machine environment with regular capacity, with the objective of maximizing profit under a tardiness penalty. This dissertation solves the order acceptance and capacity planning problem for a job shop environment with multiple resources, considering both regular and overtime resources.

The branch-and-price algorithms developed in this dissertation are faster and can be incorporated into a decision support system that can be used on a daily basis to help make intelligent decisions in an MTO operation.
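
The branch-and-price machinery is beyond a short sketch, but the underlying acceptance decision can be illustrated with a greedy stand-in on a single regular-capacity resource; the order data and priority rule below are hypothetical simplifications of the MILP:

    def accept_orders(orders, horizon):
        """Greedy stand-in for the MILP: consider orders by profit per hour,
        schedule earliest-deadline-first (EDF), and accept an order only if
        every accepted order still meets its deadline.
        orders: list of (name, profit, hours, deadline)."""
        accepted = []
        for order in sorted(orders, key=lambda o: -o[1] / o[2]):
            trial = sorted(accepted + [order], key=lambda o: o[3])  # EDF order
            t, feasible = 0, True
            for _, _, hours, deadline in trial:
                t += hours
                if t > deadline or t > horizon:
                    feasible = False
                    break
            if feasible:
                accepted = trial
        return [o[0] for o in accepted]

    print(accept_orders([("A", 50, 10, 15), ("B", 80, 12, 12), ("C", 30, 4, 25)], 30))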

Relevance: 100.00%

Abstract:

Childhood lead poisoning is a major consequence of contamination at the local population level as it relates to environmental health and environmental engineering. Environmental contamination is one of the pressing environmental concerns facing the world today, yet current approaches often focus on large, industrial-size contaminated sites designated by regulatory agencies for remediation. Prior to this study, there were no known published studies conducted at the smaller, local scale, such as neighborhoods, where much of the contamination to be remediated is often present. An environmental health study of local lead-poisoning data found that Liberty City, Little Haiti, and eastern Little Havana in Miami-Dade County, Florida accounted for a disproportionately high number of the county's reported childhood lead poisoning cases. An engineering system was designed and developed for a comprehensive risk management methodology distinctively applicable to the geographical and environmental conditions of Miami-Dade County, Florida. Furthermore, a scientific approach for interpreting environmental health concerns, involving detailed environmental engineering control measures and methods for site remediation in contaminated media, was developed for implementation. Test samples were obtained from residents and sites in those specific communities in Miami-Dade County, Florida (Gasana and Chamorro 2002). Lead currently has no oral assessment, inhalation assessment, or oral slope factor, variables that are required to run a quantitative risk assessment. However, various institutional controls from federal agencies' standards and regulations for lead-contaminated media yield adequate maximum concentration limits (MCLs); for this study, an MCL of 0.0015 mg/L was used. A risk management approach to lead-contaminated media demonstrates that linking environmental health and environmental engineering can yield a feasible solution.
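
The screening comparison against an MCL reduces to a hazard-quotient-style ratio; the sketch below uses the MCL stated in the abstract with hypothetical sample concentrations:

    MCL_LEAD = 0.0015  # mg/L, the maximum concentration limit used in this study

    samples = {"site_1": 0.0009, "site_2": 0.0031, "site_3": 0.0015}  # hypothetical
    for site, conc in samples.items():
        hq = conc / MCL_LEAD   # hazard-quotient-style ratio: > 1 flags an exceedance
        flag = "exceeds MCL" if hq > 1 else "within MCL"
        print(f"{site}: {conc:.4f} mg/L (ratio {hq:.2f}) -> {flag}")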