891 results for Process Modelling, Process Management, Risk Modelling
Abstract:
Over the last decade the English planning system has placed greater emphasis on the financial viability of development. ‘Calculative’ practices have been used to quantify and capture land value uplifts. Development viability appraisal (DVA) has become a key part of the evidence base used in planning decision-making and informs both ‘site-specific’ negotiations about the level of land value capture for individual schemes and ‘area-wide’ planning policy formation. This paper investigates how implementation of DVA is governed in planning policy formation. It is argued that the increased use of DVA raises important questions about how planning decisions are made and operationalised, not least because DVA is often poorly understood by some key stakeholders. The paper uses the concept of governance to thematically analyse semi-structured interviews conducted with the producers of DVAs and considers key procedural issues including (in)consistencies in appraisal practices, levels of stakeholder consultation and the potential for client and producer bias. Whilst stakeholder consultation is shown to be integral to the appraisal process in order to improve the quality of the appraisals and to legitimise the outputs, participation is restricted to industry experts and excludes some interest groups, including local communities. It is concluded that, largely because of its recent adoption and knowledge asymmetries between local planning authorities and appraisers, DVA is a weakly governed process characterised by emerging and contested guidance and is therefore ‘up for grabs’.
Abstract:
Nutrient enrichment and drought conditions are major threats to lowland rivers, causing ecosystem degradation and composition changes in plant communities. The controls on primary producer composition in chalk rivers are investigated using a new model and existing data from the River Frome (UK) to explore abiotic and biotic interactions. The growth and interaction of four primary producer functional groups (suspended algae, macrophytes, epiphytes, sediment biofilm) were successfully linked with flow, nutrients (N, P), light and water temperature such that the modelled biomass dynamics of the four groups matched those observed. Simulated growth of suspended algae was limited mainly by the residence time of the river rather than by in-stream phosphorus concentrations. The simulated growth of the fixed vegetation (macrophytes, epiphytes, sediment biofilm) was overwhelmingly controlled by incoming solar radiation and light attenuation in the water column. Nutrients and grazing exerted little control compared with the other physical controls in the simulations. A number of environmental threshold values were identified in the model simulations for the different producer types. The simulation results highlighted the importance of the pelagic–benthic interactions within the River Frome and indicated that the behaviour of the primary producers was defined by process interactions rather than by a single, dominant driver. The model simulations pose interesting questions to be considered in the next iteration of field- and laboratory-based studies.
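A minimal sketch of the kind of limitation-factor growth model the abstract describes, coupling growth to light, nutrients and temperature. All rate constants, the Monod form and the minimum-law coupling below are illustrative assumptions, not the River Frome model's actual formulation:

```python
import math

def growth_rate(mu_max, light, k_light, nutrient, k_nutrient, temp,
                temp_ref=20.0, theta=1.066):
    """Specific growth rate (1/day) limited by light, nutrients and temperature.

    Monod terms for light and nutrients, a theta-model temperature correction;
    all parameter values are illustrative placeholders.
    """
    f_light = light / (k_light + light)                  # Monod light limitation
    f_nutrient = nutrient / (k_nutrient + nutrient)      # Monod nutrient limitation
    f_temp = theta ** (temp - temp_ref)                  # temperature scaling
    return mu_max * min(f_light, f_nutrient) * f_temp    # most limiting factor wins

def step_biomass(biomass, mu, loss, dt=1.0):
    """Advance biomass one time step: growth minus first-order losses (grazing, washout)."""
    return biomass * math.exp((mu - loss) * dt)

# One simulated day for a macrophyte-like group under light-limited conditions
mu = growth_rate(mu_max=0.5, light=50.0, k_light=100.0,
                 nutrient=0.5, k_nutrient=0.02, temp=15.0)
b1 = step_biomass(100.0, mu, loss=0.05)
```

Because growth takes the minimum of the limitation factors, a simulation built this way can exhibit the behaviour reported above: when light limitation is strongest, changing nutrient concentrations barely moves the biomass trajectory.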
Abstract:
The personalised conditioning system (PCS) is widely studied. Potentially, it is able to reduce energy consumption while securing occupants’ thermal comfort requirements. It has been suggested that automatic optimised operation schemes for PCS should be introduced to avoid energy wastage and discomfort caused by inappropriate operation. In certain automatic operation schemes, personalised thermal sensation models are applied as key components to help in setting targets for PCS operation. In this research, a novel personal thermal sensation modelling method based on the C-Support Vector Classification (C-SVC) algorithm has been developed for PCS control. The personal thermal sensation modelling has been regarded as a classification problem. During the modelling process, the method ‘learns’ an occupant’s thermal preferences from his/her feedback, environmental parameters and personal physiological and behavioural factors. The modelling method has been verified by comparing the actual thermal sensation vote (TSV) with the modelled one based on 20 individual cases. Furthermore, the accuracy of each individual thermal sensation model has been compared with the outcomes of the PMV model. The results indicate that the modelling method presented in this paper is an effective tool to model personal thermal sensations and could be integrated within the PCS for optimised system operation and control.
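As a hedged illustration of treating personal thermal sensation as a classification problem, a C-SVC model can be trained on feedback records pairing environmental and personal variables with votes. The features, values and labels below are invented for the sketch, not the study's data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Toy feedback records: [air temperature (C), relative humidity (%), clothing (clo)]
# Labels use a simplified vote scale: -1 = cool, 0 = neutral, 1 = warm.
X = np.array([
    [19.0, 40, 1.0], [20.0, 45, 1.0], [21.0, 50, 0.8],
    [23.0, 50, 0.6], [24.0, 55, 0.6], [25.0, 50, 0.5],
    [27.0, 60, 0.4], [28.0, 55, 0.4], [29.0, 60, 0.3],
])
y = np.array([-1, -1, -1, 0, 0, 0, 1, 1, 1])

# C-SVC (the C-parameterised classification variant of the SVM),
# with feature scaling as is standard practice.
model = make_pipeline(StandardScaler(), SVC(C=1.0, kernel="rbf"))
model.fit(X, y)

vote = model.predict([[26.5, 55, 0.4]])[0]  # predicted thermal sensation vote
```

In an operational PCS, the predicted vote would feed the controller's set-point logic; the model is retrained as new occupant feedback accumulates.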
Abstract:
In this work, thermodynamic models for fitting the phase equilibrium of binary systems were applied, aiming to predict the high-pressure phase equilibrium of multicomponent systems of interest in the food engineering field, comparing the results generated by the models with new experimental data and with those from the literature. Two mixing rules were used with the Peng-Robinson equation of state: the van der Waals mixing rule and the composition-dependent mixing rule of Mathias et al. The systems chosen are of fundamental importance in food industries, such as the binary systems CO₂-limonene, CO₂-citral and CO₂-linalool, and the ternary systems CO₂-limonene-citral and CO₂-limonene-linalool, where knowledge of high-pressure phase equilibrium is important for extracting and fractionating citrus fruit essential oils. For the CO₂-limonene system, some experimental data were also measured in this work. The results showed the high capability of the model using the composition-dependent mixing rule to describe the phase equilibrium behavior of these systems.
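A minimal sketch of the Peng-Robinson pure-component parameters combined with the van der Waals one-fluid mixing rule, the simpler of the two rules compared above. The critical constants are approximate literature values for CO₂ and limonene, and the binary interaction parameter k12 is an illustrative placeholder rather than a fitted value:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def pr_pure(T, Tc, Pc, omega):
    """Peng-Robinson pure-component parameters a(T) and b."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc)))**2
    a = 0.45724 * (R * Tc)**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    return a, b

def vdw_mix(x, a, b, k):
    """Van der Waals one-fluid mixing rules for a binary mixture."""
    a_mix = sum(
        x[i] * x[j] * math.sqrt(a[i] * a[j]) * (1.0 - k[i][j])
        for i in range(2) for j in range(2)
    )
    b_mix = x[0] * b[0] + x[1] * b[1]
    return a_mix, b_mix

# CO2 (1) + limonene (2) at 323.15 K; critical data are approximate table values
T = 323.15
a1, b1 = pr_pure(T, Tc=304.13, Pc=7.377e6, omega=0.225)   # CO2
a2, b2 = pr_pure(T, Tc=660.0,  Pc=2.75e6,  omega=0.313)   # limonene (approximate)
k12 = 0.1  # illustrative binary interaction parameter, not a regressed value
a_mix, b_mix = vdw_mix([0.9, 0.1], [a1, a2], [b1, b2],
                       [[0.0, k12], [k12, 0.0]])
```

The composition-dependent rule of Mathias et al. extends this by adding an asymmetric, composition-weighted term to a_mix, which is what gives it the extra flexibility reported in the abstract.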
Abstract:
The subject of this paper is the secular behaviour of a pair of planets evolving under dissipative forces. In particular, we investigate the case when dissipative forces affect the planetary semimajor axes and the planets move towards or away from the central star, in a process known as planet migration. To perform this investigation, we introduce fundamental concepts of conservative and dissipative dynamics of the three-body problem. Based on these concepts, we develop a qualitative model of the secular evolution of the migrating planetary pair. Our approach is based on the analysis of the energy and orbital angular momentum exchange between the two-planet system and an external medium; thus no specific kind of dissipative force is invoked. We show that, under the assumption that dissipation is weak and slow, the evolutionary routes of the migrating planets are traced by the Mode I and Mode II stationary solutions of the conservative secular problem. The ultimate convergence to, and evolution of the system along, one of these secular modes of motion is determined uniquely by the condition that the dissipation rate is sufficiently smaller than the proper secular frequency of the system. We show that it is possible to reconstruct the starting configurations and the migration history of the systems on the basis of their final states, and consequently to constrain the parameters of the physical processes involved.
Abstract:
Canonical Monte Carlo simulations for the Au(210)/H₂O interface, using a force field recently proposed by us, are reported. The results exhibit the main features normally observed in simulations of water molecules in contact with different noble metal surfaces. The calculations also assess the influence of the surface topography on the structural aspects of the adsorbed water and on the distribution of the water molecules in the direction normal to the metal surface plane. The adsorption process is preferential at sites in the first layer of the metal. The analysis of the density profiles and dipole moment distributions points to two predominant orientations. Most of the molecules are adsorbed with the molecular plane parallel to the surface, while others adsorb with one of the O-H bonds parallel to the surface and the other bond pointing towards the bulk liquid phase. There is also evidence of hydrogen bond formation between the first and second solvent layers at the interface.
Abstract:
The study reported here is part of a large project for evaluation of the Thermo-Chemical Accumulator (TCA), a technology under development by the Swedish company ClimateWell AB. The studies concentrate on the use of the technology for comfort cooling. This report concentrates on measurements in the laboratory, modelling and system simulation. The TCA is a three-phase absorption heat pump that stores energy in the form of crystallised salt, in this case Lithium Chloride (LiCl), with water being the other substance. The process requires vacuum conditions, as with standard absorption chillers using LiBr/water. Measurements were carried out in the laboratories at the Solar Energy Research Center SERC, at Högskolan Dalarna, as well as at ClimateWell AB. The measurements at SERC were performed on prototype version 7:1 and showed that this prototype had several problems resulting in poor and unreliable performance. The main results were that: there was significant corrosion leading to non-condensable gases that in turn caused very poor performance; unwanted crystallisation caused blockages as well as inconsistent behaviour; and poor wetting of the heat exchangers resulted in relatively high temperature drops there. A measured thermal COP for cooling of 0.46 was found, which is significantly lower than the theoretical value. These findings resulted in a thorough redesign for the new prototype, called ClimateWell 10 (CW10), which was tested briefly by the authors at ClimateWell. The data collected here were limited, but sufficient to show that the machine worked consistently with no noticeable vacuum problems. They were also sufficient for identifying the main parameters in a simulation model developed for the TRNSYS simulation environment, but not enough to verify the model properly. This model was shown to be able to simulate the dynamic as well as static performance of the CW10, and was then used in a series of system simulations.
A single system model was developed as the basis of the system simulations, consisting of a CW10 machine, 30 m2 of flat plate solar collectors with a backup boiler, and an office with a design cooling load in Stockholm of 50 W/m2, resulting in a 7.5 kW design load for the 150 m2 floor area. Two base cases were defined based on this: one for Stockholm using a dry cooler with a design cooling rate of 30 kW, and one for Madrid with a cooling tower with a design cooling rate of 34 kW. A number of parametric studies were performed based on these two base cases. These showed that the temperature lift is a limiting factor for cooling at higher ambient temperatures and for charging with a fixed-temperature source such as district heating. The simulated evacuated tube collector performs only marginally better than a good flat plate collector when considering the gross area, the margin being greater for larger solar fractions. For the 30 m2 collector, solar fractions of 49% and 67% were achieved for the Stockholm and Madrid base cases respectively. The average annual efficiency of the collector in Stockholm (12%) was much lower than that in Madrid (19%). The thermal COP was simulated to be approximately 0.70, but it has not been possible to verify this with measured data. The annual electrical COP was shown to be very dependent on the cooling load, as a large proportion of electricity use is for components that are permanently on. For the cooling loads studied, the annual electrical COP ranged from 2.2 for a 2000 kWh cooling load to 18.0 for a 21000 kWh cooling load. There is, however, potential to reduce the electricity consumption of the machine, which would improve these figures significantly. It was shown that a cooling tower is necessary for the Madrid climate, whereas a dry cooler is sufficient for Stockholm, although a cooling tower does improve performance. The simulation study was only preliminary, and has shown a number of areas that are important to study in more depth.
One such area is advanced control strategy, which is necessary to mitigate the weakness of the technology (low temperature lift for cooling) and to optimally use its strength (storage).
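The strong load-dependence of the annual electrical COP quoted above is consistent with a simple two-parameter decomposition of electricity use into a fixed, permanently-on share plus a load-proportional share. The fit below is our illustration through the two quoted operating points, not the report's own model:

```python
# Two operating points quoted in the study: (annual cooling load kWh, annual electrical COP)
points = [(2000.0, 2.2), (21000.0, 18.0)]

# Implied annual electricity use E = Q / COP at each point
(q1, cop1), (q2, cop2) = points
e1, e2 = q1 / cop1, q2 / cop2

# Fit E = E_fixed + k * Q through the two points
k = (e2 - e1) / (q2 - q1)    # load-proportional electricity per kWh of cooling
e_fixed = e1 - k * q1        # consumption of permanently-on components (kWh/yr)

def electrical_cop(q_cool):
    """Annual electrical COP predicted by the two-parameter sketch."""
    return q_cool / (e_fixed + k * q_cool)
```

With these fitted values, the fixed share dominates the electricity use, which matches the report's observation that reducing the permanently-on consumption would improve the electrical COP figures significantly.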
Abstract:
One of the first questions to consider when designing a new roll forming line is the number of forming steps required to produce a profile. The number depends on material properties, the cross-section geometry and tolerance requirements, but the tool designer also wants to minimize the number of forming steps in order to reduce the investment costs for the customer. There are several computer aided engineering systems on the market that can assist the tool designing process. These include more or less simple formulas to predict deformation during forming as well as the number of forming steps. In recent years it has also become possible to use finite element analysis for the design of roll forming processes. The objective of the work presented in this thesis was to answer the following question: How should the roll forming process be designed for complex geometries and/or high strength steels? The work approach included literature studies as well as experimental and modelling work. The experimental part gave direct insight into the process and was also used to develop and validate models of the process. Starting with simple geometries and standard steels, the work progressed to more complex profiles of variable depth and width, made of high strength steels. The results obtained are published in seven papers appended to this thesis. In the first study (see paper 1) a finite element model for investigating the roll forming of a U-profile was built. It was used to investigate the effect on longitudinal peak membrane strain and deformation length when the yield strength increases, see papers 2 and 3. The simulations showed that the peak strain decreases whereas the deformation length increases when the yield strength increases. The studies described in papers 4 and 5 measured roll load, roll torque, springback and strain history during the U-profile forming process. The measurement results were used to validate the finite element model from paper 1.
The results presented in paper 6 show that the formability of stainless steel (e.g. AISI 301), which in the cold rolled condition has a large martensite fraction, can be substantially increased by heating the bending zone. The heated area will then become austenitic and ductile before the roll forming. Thanks to the phenomenon of strain-induced martensite formation, the steel will regain its martensite content and its strength during the subsequent plastic straining. Finally, a new tooling concept for profiles with variable cross-sections is presented in paper 7. The overall conclusions of the present work are that, today, it is possible to successfully develop profiles of complex geometries (3D roll forming) in high strength steels, and that finite element simulation can be a useful tool in the design of the roll forming process.
Abstract:
In a global economy, manufacturers mainly compete on the cost efficiency of production, as the prices of raw materials are similar worldwide. Heavy industry has two big issues to deal with. On the one hand there is a lot of data which needs to be analyzed in an effective manner; on the other hand, making big improvements via investments in corporate structure or new machinery is neither economically nor physically viable. Machine learning offers a promising way for manufacturers to address both these problems, as they are in an excellent position to employ learning techniques with their massive resource of historical production data. However, choosing a modelling strategy in this setting is far from trivial, and this is the objective of this article. The article investigates the characteristics of the most popular classifiers used in industry today. Support Vector Machines, Multilayer Perceptrons, Decision Trees, Random Forests, and the meta-algorithms Bagging and Boosting are the main learners investigated in this work. Lessons from real-world implementations of these learners are also provided, together with guidance on when the different learners can be expected to perform well. The importance of feature selection, and relevant selection methods in an industrial setting, are further investigated. Performance metrics are also discussed for the sake of completeness.
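A minimal comparison harness in the spirit of the article, fitting the six families of learners it names on a common dataset. The synthetic data and default hyperparameters are stand-ins; real historical production data and per-learner tuning would differ:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import (
    RandomForestClassifier, BaggingClassifier, AdaBoostClassifier,
)

# Stand-in for historical production data: 20 features, binary pass/fail outcome
X, y = make_classification(n_samples=600, n_features=20,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

classifiers = {
    "SVM": SVC(),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "Bagging": BaggingClassifier(random_state=0),
    "Boosting": AdaBoostClassifier(random_state=0),
}

# Held-out accuracy per learner; in practice cross-validation and
# domain-appropriate metrics would replace plain accuracy.
scores = {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
          for name, clf in classifiers.items()}
```

The same skeleton extends naturally to the feature-selection and performance-metric questions the article raises, by wrapping each classifier in a selection step and swapping the scoring function.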
Abstract:
Architecture description languages (ADLs) are used to specify high-level, compositional views of a software application. ADL research focuses on software composed of prefabricated parts, so-called software components. ADLs usually come equipped with rigorous state-transition style semantics, facilitating verification and analysis of specifications. Consequently, ADLs are well suited to configuring distributed and event-based systems. However, additional expressive power is required for the description of enterprise software architectures – in particular, those built upon newer middleware, such as implementations of Java’s EJB specification, or Microsoft’s COM+/.NET. The enterprise requires distributed software solutions that are scalable, business-oriented and mission-critical. We can make progress toward attaining these qualities at various stages of the software development process. In particular, progress at the architectural level can be leveraged through use of an ADL that incorporates trust and dependability analysis. Also, current industry approaches to enterprise development do not address several important architectural design issues. The TrustME ADL is designed to meet these requirements, through combining approaches to software architecture specification with rigorous design-by-contract ideas. In this paper, we focus on several aspects of TrustME that facilitate specification and analysis of middleware-based architectures for trusted enterprise computing systems.
Abstract:
Determining the provenance of data, i.e. the process that led to that data, is vital in many disciplines. For example, in science, the process that produced a given result must be demonstrably rigorous for the result to be deemed reliable. A provenance system supports applications in recording adequate documentation about process executions to answer queries regarding provenance, and provides functionality to perform those queries. Several provenance systems are being developed, but all focus on systems in which the components are reactive, for example Web Services that act on the basis of a request, job submission systems, etc. This limitation means that questions regarding the motives of autonomous actors, or agents, in such systems remain unanswerable in the general case. Such questions include: who was ultimately responsible for a given effect, what was their reason for initiating the process, and does the effect of a process match what was intended to occur by those initiating the process? In this paper, we address this limitation by integrating two solutions: a generic, re-usable framework for representing the provenance of data in service-oriented architectures, and a model for describing the goal-oriented delegation and engagement of agents in multi-agent systems. Using these solutions, we present algorithms to answer common questions regarding the responsibility for and success of a process, and evaluate the approach with a simulated healthcare example.
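A sketch of the kind of responsibility query described above: given recorded delegation relationships among agents, walk back from the agent that caused an effect to the agent ultimately responsible. The graph shape, agent names and single-delegator assumption are our illustration, not the paper's actual provenance model:

```python
def ultimately_responsible(delegations, actor):
    """Follow recorded delegated-by edges from the acting agent back to the root.

    `delegations` maps each agent to the agent that delegated the task to it;
    agents absent from the map acted on their own initiative.
    """
    seen = {actor}
    while actor in delegations:
        actor = delegations[actor]
        if actor in seen:  # guard against cyclic delegation records
            raise ValueError("cyclic delegation chain")
        seen.add(actor)
    return actor

# Toy healthcare-style scenario: a doctor delegates to a nurse,
# who in turn invokes a lab service that produced the effect.
delegations = {"lab_service": "nurse", "nurse": "doctor"}
root = ultimately_responsible(delegations, "lab_service")  # -> "doctor"
```

Success-of-process queries can be answered similarly by comparing the goal recorded at delegation time against the documented outcome of the process execution.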
Abstract:
In the UK, urban river basins are particularly vulnerable to flash floods due to short and intense rainfall. This paper presents potential flood resilience approaches for the highly urbanised Wortley Beck river basin, south-west of Leeds city centre. The reach of Wortley Beck is approximately 6 km long, with a contributing catchment area of 30 km2 that drains into the River Aire. Lower Wortley has experienced regular flooding over the last few years from a range of sources, including Wortley Beck and surface and ground water, affecting properties both upstream and downstream of Farnley Lake as well as Wortley Ring Road. This has serious implications for society, the environment and economic activity in the City of Leeds. The first stage of the study involves systematically incorporating Wortley Beck's landscape features on an Arc-GIS platform to identify existing green features in the region. This process also enables the exploration of potential blue-green features (green spaces, green roofs, water retention ponds and swales) at appropriate locations, connecting them with existing green corridors to maximize their productivity. The next stage involves developing a detailed 2D urban flood inundation model for the Wortley Beck region using the CityCat model. CityCat is capable of modelling the effects of permeable/impermeable ground surfaces and buildings/roofs to generate flood depth and velocity maps at 1 m resolution for design storm events. The final stage of the study involves simulating a range of rainfall and flood event scenarios through the CityCat model with different blue-green features. The installation of hard-engineering individual property protection measures, such as water butts and flood walls, is also incorporated in the CityCat model. This enables an integrated sustainable flood resilience strategy for the region.
Abstract:
An underwater gas pipeline is the portion of a pipeline that crosses a river beneath its bottom. Underwater gas pipelines are subject to increasing dangers as time goes by. An accident at an underwater gas pipeline can lead to technological and environmental disaster on the scale of an entire region. Therefore, timely troubleshooting of all underwater gas pipelines in order to prevent any potential accidents will remain a pressing task for the industry. The most important aspect of resolving this challenge is the quality of the automated system in question. Currently, the industry has no automated system that fully meets the needs of the experts working in the field maintaining underwater gas pipelines. Principal aim of this research: this work aims to develop a new system of automated monitoring which would simplify the process of evaluating the technical condition and of decision making on planning, preventive maintenance and repair work on underwater gas pipelines. Objectives: creation of a shared model for a new, automated system via IDEF3; development of a new database system to store all information about underwater gas pipelines; development of a new application that works with database servers and provides an explanation of the results obtained from the server; calculation of MTBF values for specified pipelines based on quantitative data obtained from tests of this system. Conclusions: the new automated system PodvodGazExpert has been developed for timely and qualitative determination of the physical condition of underwater gas pipelines; the mathematical analysis in this new automated system is based on the principal component analysis method; determining the physical condition of an underwater gas pipeline with this new automated system increases the MTBF by a factor of 8.18 over the existing system used in the industry today.
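A hedged sketch of principal-component-based condition assessment of the kind the conclusion mentions: fit a principal subspace to records from healthy pipeline sections, then flag records with a large reconstruction error. The indicator names, data and deviation below are invented for illustration, not drawn from PodvodGazExpert:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in inspection data: rows = survey records for healthy pipeline sections,
# columns = measured indicators (wall thickness, burial depth, coating resistance, ...)
baseline = rng.normal(size=(200, 6))

# Principal subspace describing normal condition
pca = PCA(n_components=3).fit(baseline)

def reconstruction_error(record):
    """Distance between a record and its projection onto the principal subspace.

    A large error flags a record inconsistent with normal pipeline condition.
    """
    rec = pca.inverse_transform(pca.transform(record.reshape(1, -1)))[0]
    return float(np.linalg.norm(record - rec))

normal = baseline[0]
anomalous = normal + np.array([0.0, 0.0, 8.0, 0.0, -8.0, 0.0])  # gross deviation

err_normal = reconstruction_error(normal)
err_anomalous = reconstruction_error(anomalous)
```

In a monitoring application, the error threshold separating acceptable from suspect condition would be calibrated against historical inspection outcomes rather than chosen by eye.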
Abstract:
This study contributes detailed analysis and findings on a special case of a discontinuous product development process, seeking to answer how the discontinuous product development process takes place and which main factors influence this process. Additionally, it explores some explanations for the difficulties companies generally face in sustaining innovation. The case concerns the Motorola cell phone RAZR V3, launched in 2004. The RAZR V3 was noted by industry experts as a game-changing feat of design and engineering, selling more than 110 million units by the end of 2008 and recognized as one of the fastest selling products in the industry. The study uses a single-case methodology, which is appropriate given the access to a phenomenon that happened inside the corporate domain and is not easily accessed for academic studies, besides being a rare case of success in the cellular phone industry. To deepen the understanding of the phenomenon, the exploration was extended to contrast the RAZR development process with the standard product development process at Motorola. Additionally, a longitudinal reflection on the evolution of the company's product development was integrated, up to the next breakthrough product to hit the cellular phone industry. The result of the analysis shows that discontinuous products do not fit well with a traditional product development process (in this case, stage-gate). This result reinforces the results obtained in previous studies of discontinuous product development conducted by other authors. Therefore, it is clear that the dynamics of discontinuous product development are different from those of continuous product development, requiring different treatment to succeed. Moreover, this study highlighted the importance of management influence in all phases of the process as one of the most important factors, suggesting a key component to be carefully observed in future research.
Some other findings of the study were considered very important for a discontinuous product development process: have champions (who believe in and protect the project), and not only one champion; create the right atmosphere to let the creative process flow; question paradigms to create discontinuous products; provide a simple guiding light to focus the team; build a company culture that accepts and knows how to deal with risks; and, undoubtedly, have a company strategy that understands the different dynamics of continuous and discontinuous product development processes and treats them accordingly.
Abstract:
This thesis presents a JML-based strategy that incorporates formal specifications into the software development process of object-oriented programs. The strategy evolves functional requirements into a “semi-formal” requirements form, and then expresses them as JML formal specifications. The strategy is implemented as a formal-specification pseudo-phase that runs in parallel with the other phases of software development. What makes our strategy different from other software development strategies in the literature is the particular use we make of JML specifications all along the way from requirements to validation and verification.