928 results for cost-aware process design
Abstract:
As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability (Bandilla et al., 2003; Dillman, 2000; Kwak and Radler, 2002). Online questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns, making such patterns virtually invisible to respondents. Despite this, and the introduction of numerous tools to support online-questionnaire creation, current electronic survey design typically replicates that of paper-based questionnaires, failing to harness the full power of the electronic delivery medium. Worse, a recent environmental scan of online-questionnaire design tools found that little, if any, support is incorporated within these tools to guide questionnaire designers according to best practice (Lumsden and Morgan, 2005). This article introduces a comprehensive set of guidelines, a practical reference guide, for the design of online questionnaires.
Abstract:
This thesis explores the interaction between Micros (<10 employees) from non-creative sectors and website designers ("Creatives") that occurred when creating a website of a higher order than a basic template site. The research used Straussian Grounded Theory Method with a longitudinal design in order to identify what knowledge transferred to the Micros during the collaboration, how it transferred, what factors affected the transfer, and the outcomes of the transfer, including behavioural additionality. To identify whether the research could be extended beyond this, five other design areas were also examined, as well as five Small to Medium Enterprises (SMEs) engaged in website and branding projects. The findings were that, at the start of the design process, many Micros could not articulate their customer knowledge and had poor marketing and visual language skills; such skills are core knowledge for web design, enabling targeted communication to customers through images. Despite these gaps, most Micros still tried to lead the process. To overcome this disconnect, the majority of the designers used a knowledge transfer strategy termed in this thesis 'Bi-Modal Knowledge Transfer', where the Creative was aware of the transfer but the Micro was unaware, both for drawing out customer knowledge from the Micro and for transferring visual language skills to the Micro. Two models were developed to represent this process. Two models were also created to map changes in the knowledge landscapes of customer knowledge and visual language: the Knowledge Placement Model and the Visual Language Scale. The Knowledge Placement Model was used to map the placement of customer knowledge within the consciousness, extending the known Automatic-Unconscious-Conscious model by adding two more locations: Peripheral Consciousness and Occasional Consciousness. Peripheral Consciousness is where potential knowledge is held, but not used.
Occasional Consciousness is where potential knowledge is held but used only for specific tasks. The Visual Language Scale was created to measure visual language ability, from visually responsive, where the participant only responds personally to visual symbols, to visually multi-lingual, where the participant can use visual symbols to communicate with multiple thought-worlds. With successful Bi-Modal Knowledge Transfer, the outcome included not only an effective website but also changes in the knowledge landscape for the Micros and ongoing behavioural changes, especially in marketing. These effects were not seen in the other design projects and appeared in only two of the SME projects. The key factors for this difference between SMEs and Micros appeared to be an expectation of knowledge by the Creatives and a failure by the SMEs to transfer knowledge within the company.
Abstract:
An automated cognitive approach for the design of Information Systems is presented. It is intended for use at the very beginning of the design process, spanning the stages of requirements determination and analysis. Within the approach, either UML or ERD notation may be used for model representation. The approach provides the opportunity of using natural language text documents as a source of knowledge for automated problem domain model generation. It also simplifies the process of modelling by assisting the human user throughout the whole period of working on the model (using UML or ERD notation).
Abstract:
Workflows are sets of activities that implement and realise business goals. Modern business goals add extra requirements on workflow systems and their management. Workflows may cross many organisations and utilise services on a variety of devices and/or supported by different platforms. Current workflows are therefore inherently context-aware. Each context is governed and constrained by its own policies and rules to prevent unauthorised participants from executing sensitive tasks and also to prevent tasks from accessing unauthorised services and/or data. We present a sound, multi-layered design language for the design and analysis of secure, context-aware workflow systems.
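The policy-and-rule idea described in the abstract, that a task may only be executed by participants a context permits, can be illustrated with a minimal sketch. The class names, rule structure, and task names below are hypothetical illustrations, not the thesis's design language:

```python
# Minimal sketch of a context-aware authorization check for workflow
# tasks. All names and the rule structure are hypothetical, assumed
# for illustration only.
from dataclasses import dataclass, field

@dataclass
class Context:
    """Execution context of a workflow task (organisation and user role)."""
    organisation: str
    role: str

@dataclass
class Policy:
    """Maps each sensitive task to the set of roles allowed to execute it."""
    allowed_roles: dict = field(default_factory=dict)

    def permits(self, task: str, ctx: Context) -> bool:
        # A task with no rule is treated as denied by default.
        return ctx.role in self.allowed_roles.get(task, set())

policy = Policy(allowed_roles={"approve_payment": {"manager"}})
clerk = Context(organisation="org-a", role="clerk")
manager = Context(organisation="org-a", role="manager")

print(policy.permits("approve_payment", clerk))    # False: role not authorised
print(policy.permits("approve_payment", manager))  # True: role authorised
```

In a multi-layered design language of the kind the abstract describes, such checks would be expressed declaratively per context layer rather than in code; the sketch only conveys the deny-by-default semantics.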
Abstract:
Hazardous radioactive liquid waste is the legacy of more than 50 years of plutonium production associated with the United States' nuclear weapons program. It is estimated that more than 245,000 tons of nitrate wastes are stored at facilities such as the single-shell tanks (SST) at the Hanford Site in the state of Washington and the Melton Valley storage tanks at Oak Ridge National Laboratory (ORNL) in Tennessee. In order to develop an innovative new technology for the destruction and immobilization of nitrate-based radioactive liquid waste, the United States Department of Energy (DOE) initiated the research project that resulted in the technology known as the Nitrate to Ammonia and Ceramic (NAC) process. However, inasmuch as the nitrate anion is highly mobile and difficult to immobilize, especially in the relatively porous cement-based grout that has been used to date as a method for the immobilization of liquid waste, it presents a major obstacle to environmental clean-up initiatives. Thus, in an effort to contribute to the existing body of knowledge and enhance the efficacy of the NAC process, this research involved the experimental measurement of the rheological and heat transfer behaviors of the NAC product slurry and the determination of the optimal operating parameters for the continuous NAC chemical reaction process. Test results indicate that the NAC product slurry exhibits typical non-Newtonian flow behavior. Correlation equations for the slurry's rheological properties and heat transfer rate in a pipe flow have been developed; these should prove valuable in the design of a full-scale NAC processing plant. The 20-percent slurry exhibited typical dilatant (shear-thickening) behavior and was in the turbulent flow regime due to its lower viscosity. The 40-percent slurry exhibited typical pseudoplastic (shear-thinning) behavior and remained in the laminar flow regime throughout its experimental range.
The reactions were found to be more efficient in the lower temperature range investigated. With respect to leachability, the experimental final NAC ceramic waste form is comparable to the final product of vitrification, the technology chosen by DOE to treat these wastes. As the NAC process has the potential of reducing the volume of nitrate-based radioactive liquid waste by as much as 70 percent, it not only promises to enhance environmental remediation efforts but also to effect substantial cost savings.
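The dilatant and pseudoplastic behaviors named above are commonly captured by the Ostwald–de Waele power-law model, τ = K·γ̇ⁿ, with flow index n > 1 for shear thickening and n < 1 for shear thinning. A minimal sketch, where the values of K and n are illustrative placeholders and not the measured NAC correlations:

```python
# Ostwald-de Waele power-law model for non-Newtonian slurries:
#   tau = K * gamma_dot**n   (shear stress vs. shear rate)
# n > 1 -> dilatant (shear thickening), like the 20-percent slurry;
# n < 1 -> pseudoplastic (shear thinning), like the 40-percent slurry.
# K and n below are illustrative, NOT the thesis's fitted NAC values.

def apparent_viscosity(K: float, n: float, gamma_dot: float) -> float:
    """Apparent viscosity mu_app = tau / gamma_dot = K * gamma_dot**(n - 1)."""
    return K * gamma_dot ** (n - 1)

# Dilatant: apparent viscosity rises with shear rate.
assert apparent_viscosity(K=0.1, n=1.4, gamma_dot=100.0) > \
       apparent_viscosity(K=0.1, n=1.4, gamma_dot=10.0)

# Pseudoplastic: apparent viscosity falls with shear rate.
assert apparent_viscosity(K=5.0, n=0.6, gamma_dot=100.0) < \
       apparent_viscosity(K=5.0, n=0.6, gamma_dot=10.0)
```

At n = 1 the model reduces to a Newtonian fluid with constant viscosity K, which is why the flow index is the natural knob for describing both slurry concentrations with one equation.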
Abstract:
Antenna design is an iterative process in which structures are analyzed and changed to comply with certain required performance parameters. The classic approach starts with analyzing a "known" structure, obtaining the value of its performance parameter, and changing this structure until the "target" value is achieved. This process relies on having an initial structure which follows some known or "intuitive" patterns already familiar to the designer. The purpose of this research was to develop a method of designing UWB antennas. What is new in this proposal is that the design process is reversed: the designer starts with the target performance parameter and obtains a structure as the result of the design process. This method provided a new way to replicate and optimize existing performance parameters. The base of the method was the use of a Genetic Algorithm (GA) adapted to the format of the chromosome that is evaluated by the Electromagnetic (EM) solver. For the electromagnetic study we used the XFDTD™ program, based on the Finite-Difference Time-Domain technique. The programming portion of the method was created in the MATLAB environment, which serves as the interface for converting chromosomes, handling file formats, and transferring data between XFDTD™ and the GA. A high level of customization had to be written into the code to work with the specific files generated by the XFDTD™ program. Two types of cost functions were evaluated: the first seeking broadband performance within the UWB band, and the second searching for curve replication of a reference geometry. The performance of the method was evaluated considering the speed provided by the computer resources used. Balance between accuracy, data file size, and speed of execution was achieved by defining parameters in the GA code as well as changing the internal parameters of the XFDTD™ projects.
The results showed that the GA produced geometries that were analyzed by the XFDTD™ program and changed following the search criteria until reaching the target value of the cost function. Results also showed how the parameters can change the search criteria and influence the running of the code to provide a variety of geometries.
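The loop the abstract describes, generate chromosomes, evaluate each through the EM solver, and iterate toward the target cost, can be sketched generically. Here a toy cost function stands in for the XFDTD™ simulation, and all parameter values (population size, mutation rate, the target) are hypothetical:

```python
# Sketch of the GA search loop described above. The cost() function is a
# toy stand-in for the EM-solver evaluation (in the thesis, each cost came
# from an XFDTD simulation run); all GA parameters are illustrative.
import random

random.seed(0)  # deterministic run for illustration

TARGET = 0.5
def cost(chromosome):
    """Toy cost: distance of the gene mean from the target value."""
    return abs(sum(chromosome) / len(chromosome) - TARGET)

def evolve(pop_size=20, genes=8, generations=50, mutation_rate=0.1):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                    # rank by (simulated) cost
        parents = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:
                child[random.randrange(genes)] = random.random()
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
print(cost(best))  # residual cost after the search, near the target
```

The key design point mirrors the thesis's setup: the GA never needs to understand the geometry, it only needs a scalar cost per chromosome, which is what makes the "reversed" design process (target first, structure second) possible.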
Abstract:
This research addresses the problem of cost estimation for product development in engineer-to-order (ETO) operations. An ETO operation starts the product development process with a product specification and ends with delivery of a rather complicated, highly customized product. ETO operations are practiced in various industries such as engineering tooling, factory plants, industrial boilers, pressure vessels, shipbuilding, bridges, and buildings. ETO views each product as a delivery item in an industrial project and needs an accurate estimate of its development cost at the bidding and/or planning stage, before any design or manufacturing activity starts. Many ETO practitioners rely on an ad hoc approach to cost estimation, using past projects as reference and adapting them to the new requirements. This process is often carried out on a case-by-case basis and in a non-procedural fashion, thus limiting its applicability to other industry domains and its transferability to other estimators. In addition to being time consuming, this approach usually does not lead to an accurate cost estimate, with errors ranging from 30% to 50%. This research proposes a generic cost modeling methodology for application in ETO operations across various industry domains. Using the proposed methodology, a cost estimator will be able to develop a cost estimation model for use in a chosen ETO industry in a more expeditious, systematic, and accurate manner. The development of the proposed methodology was carried out by following the meta-methodology outlined by Thomann. Deploying the methodology, cost estimation models were created in two industry domains (building construction and steel milling equipment manufacturing). The models were then applied to real cases; the resulting cost estimates were significantly more accurate than the practitioners' actual estimates, with a mean absolute error rate of 17.3%.
This research fills an important need for quick and accurate cost estimation across various ETO industries. It differs from existing approaches to the problem in that a methodology is developed that can be used to quickly customize a cost estimation model for a chosen application domain. In addition to more accurate estimation, the major contributions are its transferability to other users and its applicability to different ETO operations.
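The 17.3% figure quoted above is a mean absolute error rate across projects. A minimal sketch of the metric, using hypothetical cost figures rather than the case-study data:

```python
# Mean absolute error rate between estimated and actual project costs,
# as used to summarize estimation accuracy. The numbers below are
# hypothetical illustrations, not figures from the dissertation's cases.

def mean_absolute_error_rate(estimates, actuals):
    """Average of |estimate - actual| / actual over all projects."""
    return sum(abs(e - a) / a for e, a in zip(estimates, actuals)) / len(actuals)

estimates = [110.0, 95.0, 130.0]   # model's estimates (arbitrary cost units)
actuals   = [100.0, 100.0, 120.0]  # realized project costs
rate = mean_absolute_error_rate(estimates, actuals)
print(f"{rate:.1%}")  # -> 7.8%
```

Expressing the error relative to the actual cost (rather than as a raw difference) is what makes the rate comparable across projects of very different sizes, which matters when one model spans domains as different as building construction and equipment manufacturing.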
Abstract:
Over the past few decades, we have been enjoying tremendous benefits thanks to the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly complicated processor architectures. However, the exponentially increased transistor density has directly led to exponentially increased power consumption and dramatically elevated system temperature, which not only adversely impact the system's cost, performance, and reliability, but also increase leakage and thus the overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow down the continued evolution of computer technology. Effective power/thermal-aware design techniques are urgently needed at all design abstraction levels, from the circuit level and the logic level to the architectural and system levels. In this dissertation, we present our research efforts to employ real-time scheduling techniques to solve resource-constrained power/thermal-aware design-optimization problems. In our research, we developed a set of simple yet accurate system-level models to capture the processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature, and supply voltage. Based on these models, we investigated the fundamental principles of power/thermal-aware scheduling and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction, and performance maximization. The novelty of this work is that we integrate cutting-edge research on power and thermal behavior at the circuit and architectural levels into a set of accurate yet simplified system-level models, and are able to conduct system-level analysis and design based on these models.
The theoretical study in this work serves as a solid foundation for guiding the development of power/thermal-aware scheduling algorithms in practical computing systems.
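The leakage-temperature interdependency the dissertation models can be illustrated with a common system-level abstraction: a lumped RC thermal model in which leakage power grows with die temperature, creating a positive feedback loop. The constants below are illustrative assumptions, not the dissertation's fitted model:

```python
# Lumped RC thermal model with temperature-dependent leakage, a common
# system-level abstraction of the interdependency described above:
#   C * dT/dt = P_dyn + P_leak(T) - (T - T_amb) / R
#   P_leak(T) linearized as p0 + p1 * T
# All constants are illustrative placeholders, not the work's parameters.

def simulate(P_dyn, T_amb=25.0, R=2.0, C=10.0, p0=1.0, p1=0.05,
             dt=0.1, steps=5000):
    """Forward-Euler integration of die temperature T (degrees C)."""
    T = T_amb
    for _ in range(steps):
        P_leak = p0 + p1 * T                     # leakage rises with temperature
        dT = (P_dyn + P_leak - (T - T_amb) / R) / C
        T += dt * dT
    return T

# Higher dynamic power yields a higher steady-state temperature, and the
# leakage feedback amplifies the difference.
print(simulate(P_dyn=5.0))
print(simulate(P_dyn=10.0))
```

The steady state solves P_dyn + p0 + p1·T = (T − T_amb)/R, which is stable only while 1/R > p1; when leakage growth outpaces the package's ability to remove heat, the model exhibits the thermal-runaway risk that motivates peak-temperature-aware scheduling.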
Design optimization of modern machine drive systems for maximum fault tolerance and optimal operation
Abstract:
Modern electric machine drives, particularly three-phase permanent magnet machine drive systems, represent an indispensable part of high-power-density products. Such products include hybrid electric vehicles, large propulsion systems, and automation products. The reliability and cost of these products are directly related to the reliability and cost of these systems. The compatibility of the electric machine and its drive system for optimal cost and operation has been a large challenge in industrial applications. The main objective of this dissertation is to find a design and control scheme for the best compromise between the reliability and optimality of the electric machine-drive system. The effort presented here is motivated by the need to find new techniques to connect the design and control of electric machines and drive systems. A highly accurate and computationally efficient modeling process was developed to monitor the magnetic, thermal, and electrical aspects of the electric machine in its operational environments. The modeling process was also utilized in the design process in the form of a finite-element-based optimization process, including a hardware-in-the-loop variant. It was later employed in the design of very accurate and highly efficient physics-based customized observers that are required for fault diagnosis as well as sensorless rotor position estimation. Two test setups with different ratings and topologies were numerically and experimentally tested to verify the effectiveness of the proposed techniques. The modeling process was also employed in the real-time demagnetization control of the machine. Various real-time scenarios were successfully verified. It was shown that this process gives the potential to optimally redefine the assumptions made in sizing the permanent magnets of the machine and the DC bus voltage of the drive for the worst operating conditions.
The mathematical development and stability criteria of the physics-based modeling of the machine, the design optimization, the physics-based fault diagnosis, and the physics-based sensorless technique are described in detail. To investigate the performance of the developed design test-bed, software and hardware setups were constructed first. Several topologies of the permanent magnet machine were optimized inside the optimization test-bed. To investigate the performance of the developed sensorless control, a test-bed including a 0.25 kW surface-mounted permanent magnet synchronous machine example was created. The verification of the proposed technique in a range from medium to very low speed effectively shows the intelligent design capability of the proposed system. Additionally, to investigate the performance of the developed fault diagnosis system, a test-bed including a 0.8 kW surface-mounted permanent magnet synchronous machine example with trapezoidal back electromotive force was created. The results verify that the proposed technique, under dynamic eccentricity, DC bus voltage variations, and harmonic loading conditions, makes the system an ideal candidate for propulsion systems.
Abstract:
E=MC³: Energy Equals Management's Continued Cost Concern is an essay written by Fritz G. Hagenmeyer, Associate Professor, School of Hospitality Management at Florida International University. In the writing, Hagenmeyer initially tenders: "Energy problems in the hospitality industry can be contained or reduced, yielding elevated profits as a result of applied, quality management principles. The concepts, processes and procedures presented in this article are intended to aid present and future managers to become more effective with a sharpened focus on profitability." This article is an overview of energy efficiency and its management. In an expanding energy consumption market with its escalating costs, energy management has become an ever-increasing concern and component of responsible hospitality management, Hagenmeyer will have you know. "In endeavoring to "manage" on a day-to-day basis a functioning hospitality building's energy system, the person in charge must take on the role of Justice with her scales, attempting to balance the often varying comfort needs of guests and occupants with the invariable rising costs of energy utilized to generate and maintain such comfort conditions, since comfort is seen as an integral part of the "service," "product," or "price/value" perception of patrons," says Hagenmeyer. In contrast to what was thought at the midpoint of this century, that energy would be abundant and cheap, the reality has set in that this is not the case; not by a long shot. The author wants you to be aware that energy costs in buildings are a force to be reckoned with; a major expense to be sure. "Since 1973, "energy-conscious design" has begun to become part of the repertoire of architects, design engineers, and construction companies," Hagenmeyer states.
"For instance, whereas office buildings of the early 1970s might have used 400,000 British Thermal Units (BTUs) per square foot year, new buildings are going up that use 55,000 to 65,000 BTUs per square foot year," Hagenmeyer, like an incandescent bulb, illuminates you. Hagenmeyer references Robert E. Aulbach's article, "Energy Management," when informing you that the hospitality manager should not become complacent in addressing the energy cost issue, but should and must maintain a diligent focus on the problem. Hagenmeyer also makes reference to the Middle East War and to OPEC, and their influence on energy prices. In closing, Hagenmeyer suggests an "Energy Management Action Plan," which he outlines for you.
Abstract:
The study of the private management of public housing is an important topic to be critically analyzed as governments search for ways to increase efficiency in providing housing for the poor. Public housing authorities must address the cost of repairing or replacing the deteriorating housing stock, the increase in the need for affordable housing, and the lack of supply. Growing pressure for the efficient use of public funds has heightened the need for profound structural reform. An important strategy for carrying out such reform is privatization. Although privatization does not work in every case, the majority position in the traditional privatization literature is that reliance on private organizations normally, but not always, results in cost savings. The primary purpose of this dissertation is to determine whether a consensus exists among decision-makers on the efficiency of privatizing the management of public housing. A secondary purpose is to review the techniques (best practices) used by the private sector that result in cost-efficiencies in the management of public housing. The study employs a triangulated research design utilizing cross-sectional survey methodology, including a survey instrument to solicit responses from the private managers. The study consists of qualitative methods using interviews with key informants of private-sector management firms and public housing agencies, case studies, focus groups, archival records, and housing authority documents. Results indicated that important decision-makers perceive that private managers made a positive contribution to cost-efficiencies in the management of public housing. The performance of private contractors served as a yardstick for comparing the efficiency of services that are produced in-house.
The study concluded that private managers made the benefits of their management techniques well known, creating a sense of competition between public and private managers. Competition from private contractors spurred municipal worker and management productivity improvements, creating better management results for the public housing authorities. The study results are in concert with recent research and studies that also concluded that private managers have some distinct advantages in controlling costs in the management of public housing.
Abstract:
The deployment of wireless communications coupled with the popularity of portable devices has led to significant research in the area of mobile data caching. Prior research has focused on the development of solutions that allow applications to run in wireless environments using proxy-based techniques. Most of these approaches are semantic-based and do not provide adequate support for representing the context of a user (i.e., the interpreted human intention). Although the context may be treated implicitly, it is still crucial to data management. In order to address this challenge, this dissertation focuses on two characteristics: how to predict (i) the future location of the user and (ii) the locations of the fetched data where the queried data item has valid answers. Using this approach, more complete information about the dynamics of an application environment is maintained. The contribution of this dissertation is a novel data caching mechanism for pervasive computing environments that can adapt dynamically to a mobile user's context. In this dissertation, we design and develop a conceptual model and context-aware protocols for wireless data caching management. Our replacement policy uses the validity of the data fetched from the server and the neighboring locations to decide which of the cache entries is least likely to be needed in the future, and is therefore a good candidate for eviction when cache space is needed. The context-aware prefetching algorithm exploits the query context to effectively guide the prefetching process. The query context is defined using a mobile user's movement pattern and the requested information context. Numerical results and simulations show that the proposed prefetching and replacement policies significantly outperform conventional ones. Anticipated applications of these solutions include biomedical engineering, tele-health, medical information systems, and business.
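The replacement idea described above, evict the cached answer least likely to be needed given where the user is headed and where each answer remains valid, can be sketched as follows. The class names, the circular validity regions, and the distance-based score are illustrative assumptions, not the dissertation's exact protocol:

```python
# Sketch of a context-aware cache replacement policy: each cached answer
# is valid inside a spatial region, and the victim is the entry whose
# validity region lies farthest from the user's predicted next location.
# Names, region shape, and scoring are hypothetical illustrations.
import math
from dataclasses import dataclass

@dataclass
class CacheEntry:
    key: str
    valid_center: tuple   # (x, y) center of the region where the answer is valid
    valid_radius: float   # radius of that validity region

def distance_outside(entry, point):
    """Distance from a predicted location to the validity region (0 if inside)."""
    dx = entry.valid_center[0] - point[0]
    dy = entry.valid_center[1] - point[1]
    return max(0.0, math.hypot(dx, dy) - entry.valid_radius)

def choose_victim(cache, predicted_location):
    """Evict the entry least likely to answer queries at the predicted location."""
    return max(cache, key=lambda e: distance_outside(e, predicted_location))

cache = [
    CacheEntry("nearby_restaurants", (0.0, 0.0), 1.0),
    CacheEntry("gas_stations", (10.0, 10.0), 1.0),
]
victim = choose_victim(cache, predicted_location=(0.5, 0.5))
print(victim.key)  # -> gas_stations (its validity region is farthest away)
```

This is the sense in which the policy is "context-aware": the eviction decision depends on the predicted movement pattern, not just recency or frequency as in conventional LRU/LFU policies.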
Abstract:
With the developments in computing and communication technologies, wireless sensor networks have become popular in a wide range of application areas such as health, military, environment, and habitat monitoring. Moreover, wireless acoustic sensor networks have been widely used for target tracking applications due to their passive nature, reliability, and low cost. Traditionally, acoustic sensor arrays built in linear, circular, or other regular shapes are used for tracking acoustic sources. Maintaining the relative geometry of the acoustic sensors in the array is vital for accurate target tracking, but this requirement greatly reduces the flexibility of the sensor network. To overcome this limitation, we propose using only a single acoustic sensor at each sensor node. This design greatly improves the flexibility of the sensor network and makes it possible to deploy the sensor network in remote or hostile regions through air-drop or other stealth approaches. Acoustic arrays are capable of performing target localization or generating bearing estimations on their own. However, with only a single acoustic sensor, the sensor nodes are not able to generate such measurements. Thus, self-organization of sensor nodes into virtual arrays to perform target localization is essential. We developed an energy-efficient and distributed self-organization algorithm for target tracking using wireless acoustic sensor networks. The major error sources of the localization process were studied, and an energy-aware node selection criterion was developed to minimize target localization errors. Using this node selection criterion, the self-organization algorithm selects a near-optimal localization sensor group to minimize target tracking errors. In addition, a message passing protocol was developed to implement the self-organization algorithm in a distributed manner.
In order to achieve extended sensor network lifetime, energy conservation was built into the self-organization algorithm through a sleep-wakeup management mechanism with a novel cross-layer adaptive wakeup probability adjustment scheme. The simulation results confirm that the developed self-organization algorithm provides satisfactory target tracking performance. Moreover, the energy saving analysis confirms the effectiveness of the cross-layer power management scheme in achieving extended sensor network lifetime without degrading target tracking performance.
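An energy-aware node selection criterion of the kind described above can be sketched as a score that trades expected localization error (which grows with distance from the estimated target) against a penalty for low residual battery. The score, weights, and node data below are hypothetical illustrations, not the dissertation's derived criterion:

```python
# Sketch of an energy-aware node selection criterion for forming a
# virtual array: prefer nodes near the estimated target (lower expected
# localization error) but penalize nearly depleted nodes to extend
# network lifetime. Score and weights are illustrative assumptions.
import math

def score(node, target_estimate, w_energy=0.5):
    """Lower is better: distance term plus a penalty for low residual energy."""
    d = math.dist(node["pos"], target_estimate)
    return d + w_energy * (1.0 / max(node["energy"], 1e-9))

def select_group(nodes, target_estimate, group_size=3):
    """Pick the group_size best-scoring nodes to act as a localization group."""
    return sorted(nodes, key=lambda n: score(n, target_estimate))[:group_size]

nodes = [
    {"id": 1, "pos": (0.0, 0.0), "energy": 0.9},
    {"id": 2, "pos": (1.0, 0.0), "energy": 0.1},  # close, but nearly depleted
    {"id": 3, "pos": (2.0, 1.0), "energy": 0.8},
    {"id": 4, "pos": (9.0, 9.0), "energy": 1.0},  # far from the target
]
group = select_group(nodes, target_estimate=(1.0, 1.0))
print([n["id"] for n in group])  # -> [3, 1, 2]
```

Note how node 2 ranks below nodes 1 and 3 despite being closest to the target: the energy penalty defers its use, which is exactly the lifetime-versus-accuracy trade-off the selection criterion is meant to balance.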