874 results for Process parameters
Abstract:
Quality has been an important factor for shopping centers under competitive conditions; however, there is no standard for measuring it. In Surabaya, only two regional shopping centers were measured in this research. The objective is to assess the quality of shopping center buildings using the Analytical Hierarchy Process (AHP) method and to calculate a Building Quality Index (BQI). The AHP analysis produced an overall ranking of the priorities of the quality criteria. Access and Circulation emerged as the highest priority affecting the quality of shopping center buildings, according to respondents' perception of quality. The weighted values resulting from the comparison of the two shopping centers are as follows: Tunjungan Plaza received 0.732 points and Surabaya Plaza 0.268 points, so the first shopping center received a higher weight than the second. The BQI is 66% for Tunjungan Plaza and 64% for Surabaya Plaza.
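The abstract does not reproduce the pairwise-comparison judgments behind the AHP ranking, so the following is only a hedged sketch of the general technique: priority weights derived from a hypothetical pairwise-comparison matrix using the row geometric mean, a standard approximation to AHP's principal-eigenvector method.

```python
import numpy as np

def ahp_priorities(M):
    """Priority weights from an AHP pairwise-comparison matrix via the
    row geometric mean (a standard approximation to the principal
    eigenvector method)."""
    M = np.asarray(M, dtype=float)
    g = np.prod(M, axis=1) ** (1.0 / M.shape[1])  # row geometric means
    return g / g.sum()                            # normalise to sum to 1

# Hypothetical 3-criteria matrix: the first criterion (say, Access and
# Circulation) judged 3x more important than each of the other two,
# which are judged equally important.
M = [[1, 3, 3],
     [1/3, 1, 1],
     [1/3, 1, 1]]
w = ahp_priorities(M)  # first criterion receives the largest weight
```

In a full AHP study, these criterion weights would then be combined with per-criterion scores for each alternative to produce overall weights such as the 0.732 and 0.268 reported for the two centers.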
Abstract:
An algorithm based on the concept of Kalman filtering is proposed in this paper for the estimation of power system signal attributes such as amplitude, frequency and phase angle. This technique can be used in protection relays, digital AVRs, DSTATCOMs, FACTS and other power electronics applications. Furthermore, this algorithm is particularly suitable for the integration of distributed generation sources into power grids when fast and accurate detection of small variations in signal attributes is needed. Practical considerations such as the effect of noise, higher-order harmonics, and computational issues of the algorithm are considered and tested in the paper. Several computer simulations are presented to highlight the usefulness of the proposed approach. Simulation results show that the proposed technique can simultaneously estimate the signal attributes, even if the signal is highly distorted by the presence of non-linear loads and noise.
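The abstract does not detail the estimator, so the following is only a minimal sketch of the underlying idea, not the paper's algorithm: a linear Kalman filter tracking the in-phase/quadrature components [a, b] of a sinusoid of known frequency, from which amplitude and phase follow. Joint frequency tracking, as in the paper, would require an extended or nonlinear formulation; all parameter values here are illustrative.

```python
import numpy as np

def estimate_sinusoid(z, t, omega, r=0.01, q=1e-6):
    """Estimate amplitude/phase of s(t) = a*cos(omega*t) + b*sin(omega*t)
    from noisy samples z at times t, via a linear Kalman filter on x = [a, b]."""
    x = np.zeros(2)        # state estimate [a, b]
    P = np.eye(2)          # state covariance
    Q = np.eye(2) * q      # small process noise: lets attributes drift slowly
    for zk, tk in zip(z, t):
        P = P + Q                                            # predict (quasi-static state)
        H = np.array([np.cos(omega * tk), np.sin(omega * tk)])  # measurement row
        S = H @ P @ H + r                                    # innovation variance
        K = P @ H / S                                        # Kalman gain
        x = x + K * (zk - H @ x)                             # measurement update
        P = P - np.outer(K, H @ P)                           # covariance update
    amp = np.hypot(x[0], x[1])
    phase = np.arctan2(-x[1], x[0])  # so that s(t) = amp*cos(omega*t + phase)
    return amp, phase
```

Because the filter is recursive, each new sample refines the estimate in constant time, which is what makes this family of estimators attractive for relays and grid-connected power electronics.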
Abstract:
Games and related virtual environments have been a much-hyped area of the entertainment industry. The classic quote is that games are now approaching the size of Hollywood box office sales [1]. Books are now appearing that talk up the influence of games on business [2], and games are one of the key drivers of present hardware development. Some of this 3D technology is now embedded right down at the operating system level via the Windows Presentation Foundation – hit Windows/Tab on your Vista box to find out... In addition to this continued growth in the area of games, a number of factors affect its development in the business community. Firstly, the average age of gamers is approaching the mid-thirties, so many people in management positions in large enterprises are experienced in using 3D entertainment environments. Secondly, due to the demand for more computational power in both CPUs and Graphics Processing Units (GPUs), the average desktop, and any decent laptop, can run a game or virtual environment. In fact, the demonstrations at the end of this paper were developed at the Queensland University of Technology (QUT) on a standard Software Operating Environment, with an Intel Dual Core CPU and a basic Intel graphics option. This means the potential exists for the easy uptake of such technology because: 1. a broad range of workers is regularly exposed to 3D virtual environment software via games; and 2. present desktop computing power is now strong enough to potentially roll out a virtual environment solution across an entire enterprise. We believe such visual simulation environments can have a great impact in the area of business process modeling.
Accordingly, in this article we outline the communication capabilities of such environments, which offer fantastic possibilities for business process modeling applications, where enterprises need to create, manage, and improve their business processes, and then communicate those processes to stakeholders, both process-cognizant and non-process-cognizant. The article concludes with a demonstration of the work we are doing in this area at QUT.
Abstract:
Despite more than three decades of research, there is a limited understanding of the transactional processes of appraisal, stress and coping. This has led to calls for more focused research on the entire process that underlies these variables. To date, there remains a paucity of such research. The present study examined Lazarus and Folkman's (1984) transactional model of stress and coping. One hundred and twenty-nine Australian participants in full-time employment (i.e. nurses and administration employees) were recruited. There were 49 male (mean age = 34, SD = 10.51) and 80 female (mean age = 36, SD = 10.31) participants. The analysis of three path models indicated that, in addition to the original paths found in Lazarus and Folkman's transactional model (primary appraisal-->secondary appraisal-->stress-->coping), there were also direct links between primary appraisal and stress level at time one, and between stress level at time one and stress level at time two. This study provides additional insights into the transactional process that will extend our understanding of how individuals appraise, cope with and experience occupational stress.
Abstract:
Transition metal oxides are functional materials with advanced applications in many areas, owing to their diverse properties (optical, electrical, magnetic, etc.), hardness, thermal stability and chemical resistance. Novel applications of the nanostructures of these oxides are attracting significant interest as new synthesis methods are developed and new structures are reported. Hydrothermal synthesis is an effective process for preparing delicate metal oxide structures on scales from a few to tens of nanometres, in particular the highly dispersed intermediate structures that are hardly obtainable through pyro-synthesis. In this thesis, a range of new metal oxide (stable and metastable titanate and niobate) nanostructures, namely nanotubes and nanofibres, were synthesised via a hydrothermal process. Further structure modifications were conducted, and potential applications in catalysis, photocatalysis, adsorption and the construction of ceramic membranes were studied. The morphology evolution during the hydrothermal reaction between Nb2O5 particles and concentrated NaOH was monitored. The study demonstrates that by optimising the reaction parameters (temperature, amount of reactants), one can obtain a variety of nanostructured solids, from intermediate-phase niobate bars and fibres to stable-phase cubes. Trititanate (Na2Ti3O7) nanofibres and nanotubes were obtained by the hydrothermal reaction between TiO2 powders or a titanium compound (e.g. TiOSO4·xH2O) and concentrated NaOH solution, by controlling the reaction temperature and NaOH concentration. The trititanate possesses a layered structure, and the Na ions that exist between the negatively charged titanate layers are exchangeable with other metal ions or H+ ions. This ion exchange has a crucial influence on the phase transition of the exchanged products.
Exchanging the sodium ions in the titanate with H+ ions yields protonated titanate (H-titanate), and subsequent phase transformation of the H-titanate enables various TiO2 structures with retained morphology. H-titanate, either nanofibres or nanotubes, can be converted to pure TiO2(B), pure anatase, or mixed TiO2(B) and anatase phases by controlled calcination and by a two-step process of acid treatment and subsequent calcination. Meanwhile, controlled calcination of the sodium titanate yields a new titanate structure (a metastable titanate with formula Na1.5H0.5Ti3O7 and retained fibril morphology) that can be used for the removal of radioactive ions and heavy metal ions from water. The structures and morphologies of the metal oxides were characterised by advanced techniques. Titania nanofibres of mixed anatase and TiO2(B) phases, pure anatase and pure TiO2(B) were obtained by calcining H-titanate nanofibres at different temperatures between 300 and 700 °C. The fibril morphology was retained after calcination, which is convenient for transmission electron microscopy (TEM) analysis. TEM analysis revealed that in the mixed-phase structure the interfaces between the anatase and TiO2(B) phases are not random contacts between crystals of the two phases, but form from well-matched lattice planes of the two phases. For instance, (101) planes in anatase and (101) planes in TiO2(B) are similar in d-spacing (~0.18 nm), and they join together to form a stable interface. The interfaces between the two phases act as a one-way valve that permits the transfer of photogenerated charge from anatase to TiO2(B). This reduces the recombination of photogenerated electrons and holes in anatase, enhancing the activity for photocatalytic oxidation.
Therefore, the mixed-phase nanofibres exhibited higher photocatalytic activity for the degradation of sulforhodamine B (SRB) dye under ultraviolet (UV) light than nanofibres of either pure phase alone, or than mechanical mixtures (which have no interfaces) of the two pure-phase nanofibres with a similar phase composition. This verifies the theory that the difference between the conduction band edges of the two phases may drive charge transfer from one phase to the other, which effectively separates the photogenerated charges and thus facilitates the redox reactions involving them. Such an interface structure facilitates charge transfer across the interfaces. The knowledge acquired in this study is important not only for the design of efficient TiO2 photocatalysts but also for understanding the photocatalysis process. Moreover, fibril titania photocatalysts have a great advantage over nanoparticles of the same scale in that they can be separated from a liquid for reuse by filtration, sedimentation or centrifugation. The surface structure of TiO2 also plays a significant role in catalysis and photocatalysis. Four types of large-surface-area TiO2 nanotubes with different phase compositions (labelled NTA, NTBA, NTMA and NTM) were synthesised by calcination and acid treatment of the H-titanate nanotubes. Using in situ FTIR emission spectroscopy (IES), the desorption and re-adsorption of OH groups on the oxide surface can be tracked. In this work, the surface OH-group regeneration ability of the TiO2 nanotubes was investigated. This ability differed distinctly among the four samples, in the order NTA > NTBA > NTMA > NTM.
The same order was observed for the catalytic activity when the samples served as photocatalysts for the decomposition of the synthetic dye SRB under UV light, as supports for gold (Au) catalysts (where gold particles were loaded by a colloid-based method) for the photodecomposition of formaldehyde under visible light, and for the catalytic oxidation of CO at low temperatures. Therefore, the ability of TiO2 nanotubes to generate surface OH groups is an indicator of their catalytic activity. The reason behind the correlation is that oxygen vacancies at bridging O2- sites of the TiO2 surface can generate surface OH groups, and these groups facilitate the adsorption and activation of O2 molecules, which is the key step of the oxidation reactions. A structure for the oxygen vacancies at bridging O2- sites is proposed. A new mechanism for photocatalytic formaldehyde decomposition with the Au-TiO2 catalysts is also proposed: the visible light absorbed by the gold nanoparticles, through the surface plasmon resonance effect, induces transitions of the 6sp electrons of gold to high energy levels. These energetic electrons can migrate to the conduction band of TiO2, where they are seized by oxygen molecules. Meanwhile, the gold nanoparticles capture electrons from the formaldehyde molecules adsorbed on them, because of gold's high electronegativity. O2 adsorbed on the TiO2 support surface is the major electron acceptor; the more O2 adsorbed, the higher the oxidation activity the photocatalyst will exhibit. The last part of this thesis demonstrates two innovative applications of the titanate nanostructures. Firstly, trititanate and metastable titanate (Na1.5H0.5Ti3O7) nanofibres are used as intelligent absorbents for the removal of radioactive cations and heavy metal ions, utilising their ion-exchange ability, deformable layered structure, and fibril morphology.
Environmental contamination with radioactive ions and heavy metal ions can pose a serious threat to the health of a large part of the population. Treatment of the wastes is needed to produce a waste product suitable for long-term storage and disposal. The ion-exchange ability of the layered titanate structure permits adsorption of bivalent toxic cations (Sr2+, Ra2+, Pb2+) from aqueous solution. More importantly, the adsorption is irreversible: the deformation of the structure, induced by the strong interaction between the adsorbed bivalent cations and the negatively charged TiO6 octahedra, results in permanent entrapment of the toxic cations in the fibres, so that the toxic ions can be safely deposited. Compared with conventional clay and zeolite sorbents, the fibril absorbents have the great advantage that they can be readily dispersed into, and separated from, a liquid. Secondly, new-generation membranes were constructed by using large titanate nanofibres and small γ-alumina nanofibres as intermediate and top layers, respectively, on a porous alumina substrate via a spin-coating process. Compared with conventional ceramic membranes constructed from spherical particles, a ceramic membrane constructed from fibres permits high flux because of the large porosity of its separation layers. The voids in the separation layer determine the selectivity and flux of a separation membrane. When the sizes of the voids are similar (which means a similar selectivity of the separation layer), the flux passing through the membrane increases with the volume of the voids, which are the filtration passages. For the ideal and simplest texture, a mesh constructed from nanofibres 10 nm thick with a uniform pore size of 60 nm, the porosity is greater than 73.5%. In contrast, the porosity of a separation layer with the same pore size but constructed from spherical metal oxide particles, as in conventional ceramic membranes, is 36% or less.
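The 73.5% figure is consistent with a simple open-area estimate for an idealised square mesh (this reading is an assumption; the thesis may derive it differently). With fibre thickness $d_f = 10$ nm and uniform pore width $d_p = 60$ nm, the repeating cell is $(d_p + d_f)$ wide, so the open-area fraction is

$$\varepsilon = \left(\frac{d_p}{d_p + d_f}\right)^2 = \left(\frac{60\ \text{nm}}{70\ \text{nm}}\right)^2 \approx 0.735,$$

i.e. about 73.5%. For comparison, a layer packed from equal spheres has a porosity of roughly 26–36% depending on the packing, which matches the "36% or less" quoted for conventional particle-based separation layers.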
The membrane constructed from titanate nanofibres and a layer of randomly oriented alumina nanofibres was able to filter out 96.8% of 60 nm latex spheres while maintaining a high flux of 600–900 L m–2 h–1, more than 15 times that of the conventional membrane reported in the most recent study.
Abstract:
Recent observations of particle size distributions and particle concentrations near a busy road cannot be explained by the conventional mechanisms of particle evolution in combustion aerosols. Specifically, these mechanisms appear inadequate to explain the observed particle transformation and the evolution of the total number concentration. This led to the development of a new mechanism, based on thermal fragmentation, for the evolution of combustion aerosol nano-particles. A complex and comprehensive pattern of combustion aerosol evolution, involving particle fragmentation, was then proposed and justified. In that model it was suggested that thermal fragmentation occurs in aggregates of primary particles, each of which contains a solid graphite/carbon core surrounded by volatile molecules bonded to the core by strong covalent bonds. Because of these strong covalent bonds between the core and the volatile (frill) molecules, such primary composite particles can be regarded as solid, despite a significant (possibly dominant) volatile component. Fragmentation occurs when the weak van der Waals forces between such primary particles are overcome by their thermal (Brownian) motion. In this work, the accepted concept of thermal fragmentation is extended to determine whether fragmentation is likely in liquid composite nano-particles. It has been demonstrated that, at least at some stages of their evolution, combustion aerosols contain a large number of composite liquid particles presumably containing several components, such as water, oil, volatile compounds, and minerals. Such composite liquid particles may also experience thermal fragmentation and thus contribute to, for example, the evolution of the total number concentration as a function of distance from the source.
Therefore, the aim of this project is to examine theoretically the possibility of thermal fragmentation of composite liquid nano-particles consisting of immiscible liquid components. The specific focus is on ternary systems comprising two immiscible liquid droplets surrounded by another medium (e.g., air). The analysis shows that three different structures are possible: complete encapsulation of one liquid by the other, partial encapsulation of the two liquids in a composite particle, and the two droplets separated from each other. The probability of thermal fragmentation of two coagulated liquid droplets is examined for different volumes of the immiscible fluids in a composite liquid particle and for different surface and interfacial tensions, through the determination of the Gibbs free energy difference between the coagulated and fragmented states and comparison of this energy difference with the typical thermal energy kT. The analysis reveals that fragmentation is much more likely for a partially encapsulated particle than for a completely encapsulated one. In particular, thermal fragmentation was found to be much more likely when the volumes of the two liquid droplets that constitute the composite particle are very different. Conversely, when the two liquid droplets are of similar volumes, the probability of thermal fragmentation is small. It is also demonstrated that the Gibbs free energy difference between the coagulated and fragmented states is not the only important factor determining the probability of thermal fragmentation of composite liquid particles. The second essential factor is the actual structure of the composite particle. The probability of thermal fragmentation is shown to depend strongly on the distance that each of the liquid droplets must travel to reach the fragmented state.
In particular, if this distance is larger than the mean free path of the considered droplets in air, the probability of thermal fragmentation should be negligible. It follows from this that fragmentation of a composite particle in the completely encapsulated state is highly unlikely, because of the larger distance the two droplets must travel in order to separate. The analysis of composite liquid particles with the interfacial parameters expected in combustion aerosols demonstrates that thermal fragmentation of these particles may occur, and that this mechanism may play a role in the evolution of combustion aerosols. Conditions for thermal fragmentation to play a significant role (for aerosol particles other than those from motor vehicle exhaust) are determined and examined theoretically. Conditions for spontaneous transformation between the completely and partially encapsulated states of composite particles are also examined, demonstrating the possibility of such transformation in combustion aerosols. Indeed, it was shown that for some typical components found in aerosols this transformation could take place on time scales of less than 20 s. The analysis showed that factors influencing surface and interfacial tension played an important role in this transformation process. Such transformation may, for example, result in delayed evaporation of composite particles with a significant water component, leading to observable effects in the evolution of combustion aerosols (including possible local humidity maxima near a source such as a busy road). The results obtained will be important for the further development and understanding of aerosol physics and technologies, including combustion aerosols and their evolution near a source.
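The thesis decides among the three droplet structures via Gibbs free energies; a complementary quick check, standard for ternary liquid systems (the Torza–Mason spreading-coefficient classification), is sketched below. This is not the thesis's method, and the tension values in the example are illustrative, not data from the thesis.

```python
def classify(sigma_a, sigma_b, sigma_ab):
    """Equilibrium configuration of two immiscible droplets a and b in a
    third medium (e.g. air), from spreading coefficients.
    sigma_a, sigma_b: droplet/medium surface tensions;
    sigma_ab: the a-b interfacial tension."""
    S_a = sigma_b - sigma_a - sigma_ab  # tendency of a to spread over b
    S_b = sigma_a - sigma_b - sigma_ab  # tendency of b to spread over a
    S_m = sigma_ab - sigma_a - sigma_b  # tendency of the medium to part the drops
    if S_a > 0:
        return "complete engulfment of b by a"
    if S_b > 0:
        return "complete engulfment of a by b"
    if S_m > 0:
        return "separated droplets"
    return "partial engulfment"

# Illustrative values in mN/m: an oil-like drop (25) and a water-like drop (72).
print(classify(25.0, 72.0, 50.0))  # partial engulfment
print(classify(25.0, 72.0, 20.0))  # complete engulfment of b by a
```

Lowering the interfacial tension between the two liquids tips the system from partial towards complete engulfment, which is consistent with the abstract's finding that fragmentation likelihood depends strongly on the composite particle's structure.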
Abstract:
This paper presents a qualitative framework for discussing the heterogeneous burning of metallic materials, through the parameters and factors that influence the melting rate of the solid metallic fuel (either in a standard test or in service). During burning, the melting rate is related to the burning rate and is therefore an important parameter for describing and understanding the burning process, especially since the melting rate is commonly recorded during standard flammability testing of metallic materials and is incorporated into many relative flammability ranking schemes. However, whilst the factors that influence melting rate (such as oxygen pressure or specimen diameter) have been well characterized, an improved understanding is needed of how these parameters interact as part of the overall melting and burning of the system. Proposed here is the ‘Melting Rate Triangle’, which aims to provide this focus through a conceptual framework for understanding how the melting rate (of solid fuel) is determined and regulated during heterogeneous burning. In the paper, the proposed conceptual model is shown to be both (a) consistent with known trends and previously observed results, and (b) capable of being expanded to incorporate new data. Examples are also given of how the Melting Rate Triangle can improve the interpretation of flammability test results. Slusser and Miller previously published an ‘Extended Fire Triangle’ as a useful conceptual model of ignition and the factors affecting it, providing industry with a framework for discussion. In this paper it is shown that a ‘Melting Rate Triangle’ provides a similar qualitative framework for burning, leading to an improved understanding of the factors affecting fire propagation and extinguishment.
Abstract:
Adiabatic compression testing of components in gaseous oxygen is a test method utilized worldwide and commonly required to qualify a component for ignition tolerance under its intended service. This testing is required by many industry standards organizations and government agencies; however, a thorough evaluation of the test parameters and of test system influences on the thermal energy produced during the test has not yet been performed. This paper presents a background for adiabatic compression testing and discusses an approach to estimating potential differences in the thermal profiles produced by different test laboratories. A “Thermal Profile Test Fixture” (TPTF) is described that is capable of measuring and characterizing the thermal energy of a typical pressure shock for any test system. The test systems at Wendell Hull & Associates, Inc. (WHA) in the USA and at the BAM Federal Institute for Materials Research and Testing in Germany are compared in this manner, and some of the data obtained are presented. The paper also introduces a new way of comparing the test method to idealized processes in order to perform system-by-system comparisons. Thus, the paper introduces an “Idealized Severity Index” (ISI) of the thermal energy to characterize a rapid pressure surge. From the TPTF data a “Test Severity Index” (TSI) can also be calculated, so that the thermal energies developed by different test systems can be compared to each other and to the ISI of the equivalent isentropic process. Finally, a “Service Severity Index” (SSI) is introduced to characterize the thermal energy of actual service conditions. This paper is the second in a series of publications planned on the subject of adiabatic compression testing.
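The “equivalent isentropic process” referenced by the ISI can be made concrete with the textbook ideal-gas relation; the numbers below are illustrative, not values from the paper.

```python
def isentropic_final_temp(T1, P1, P2, gamma=1.40):
    """End temperature of an ideal-gas isentropic compression:
    T2 = T1 * (P2/P1)**((gamma - 1)/gamma).
    gamma ~ 1.40 for oxygen near ambient conditions."""
    return T1 * (P2 / P1) ** ((gamma - 1.0) / gamma)

# Illustrative surge: oxygen at 293 K shocked from 0.1 MPa to 20 MPa.
T2 = isentropic_final_temp(293.0, 0.1e6, 20.0e6)  # roughly 1330 K
```

Real pressure surges are slower and lossier than this ideal, which is precisely the gap the TSI-to-ISI comparison is designed to quantify.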
Abstract:
A configurable process model describes a family of similar process models in a given domain. Such a model can be configured to obtain a specific process model that is subsequently used to handle individual cases, for instance, to process customer orders. Process configuration is notoriously difficult, as there may be all kinds of interdependencies between configuration decisions. In fact, an incorrect configuration may lead to behavioral issues such as deadlocks and livelocks. To address this problem, we present a novel verification approach inspired by the "operating guidelines" used for partner synthesis. We view the configuration process as an external service, and compute a characterization of all such services which meet particular requirements using the notion of a configuration guideline. As a result, we can characterize all feasible configurations (i.e., configurations without behavioral problems) at design time, instead of repeatedly checking each individual configuration while configuring a process model.
Abstract:
Recent years have seen an increased uptake of business process management technology in industry. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business process model repositories. For example, in some cases new process models may be derived from existing models, so finding and adapting those models may be more effective than developing them from scratch. As process model repositories may be large, query evaluation may be time-consuming. Hence, we investigate the use of indexes to speed up this evaluation process. Experiments are conducted to demonstrate that our proposal achieves a significant reduction in query evaluation time.
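The abstract does not specify the index structures used; as one hedged illustration of the general idea, an inverted index from task labels to models lets a query inspect only candidate models rather than scanning the whole repository. All model and label names below are hypothetical.

```python
from collections import defaultdict

def build_index(models):
    """models: mapping model_id -> set of task labels in that process model.
    Returns an inverted index: label -> set of model ids containing it."""
    index = defaultdict(set)
    for mid, labels in models.items():
        for label in labels:
            index[label].add(mid)
    return index

def query(index, labels):
    """Ids of models containing every query label, via set intersection."""
    sets = [index.get(label, set()) for label in labels]
    return set.intersection(*sets) if sets else set()

# Hypothetical repository of two process models.
repo = {"order_fulfilment": {"receive order", "ship goods"},
        "claim_handling": {"receive claim", "ship goods"}}
idx = build_index(repo)
hits = query(idx, ["ship goods", "receive order"])  # {"order_fulfilment"}
```

Real process-model indexes must also account for structure and behaviour, not just labels, but the principle is the same: the index prunes the candidate set before any expensive model comparison runs.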
Abstract:
The literature identifies several models that describe inter-phase mass transfer, which is key to the emission process. While the emission process is complex and these models may be more or less successful at predicting mass transfer rates, they identify three key variables for a system involving a liquid phase and an air phase in contact with it: • a concentration (or partial pressure) gradient driving force; • the fluid dynamic characteristics within the liquid and air phases; and • the chemical properties of the individual components within the system. In three applied research projects conducted prior to this study, samples collected with two well-known sampling devices resulted in very different odour emission rates. It was not possible to adequately explain the differences observed. It appeared likely, however, that the sample collection device had artefact effects on the emission of odorants, i.e. the sampling device appeared to have altered the mass transfer process. This raised the obvious question: where two different emission rates are reported for a single source (differing only in the selection of sampling device), and a credible explanation for the difference cannot be provided, which emission rate is correct? This research project aimed to identify the factors that determine odour emission rates, the impact that the characteristics of a sampling device may exert on the key mass transfer variables, and ultimately the impact of the sampling device on the emission rate itself. To meet these objectives, a series of targeted reviews and laboratory and field investigations were conducted. Two widely used, representative devices were chosen to investigate the influence of various parameters on the emission process. These investigations provided insight into the odour emission process generally, and into the influence of the sampling device specifically.
Abstract:
In this thesis we are interested in financial risk, and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist, and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task, and it is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, at times controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness: is skewness constant, or is there significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach for modelling time-varying volatility (conditional variance) and skewness simultaneously. The new tools are modifications of the Generalised Lambda Distributions (GLDs). These are four-parameter distributions which allow the first four moments to be modelled nearly independently; in particular we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs will be used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs.
Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We used local linear regression to estimate the conditional mean and conditional variance semi-parametrically. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the DGP underlying the process and helps in choosing a good technique to model the data. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness while implicitly assuming the existence of the third moment. The GLDs further suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, the ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
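The GLD estimators themselves are beyond the scope of an abstract, but the historical-simulation benchmark mentioned at the end can be sketched as a rolling empirical quantile. The window length and confidence level below are conventional choices, not the thesis's settings.

```python
import numpy as np

def rolling_var(returns, window=250, level=0.99):
    """Historical-simulation VaR: for each day, the empirical (1 - level)
    quantile of the previous `window` returns, sign-flipped so that VaR
    is reported as a positive loss."""
    returns = np.asarray(returns, dtype=float)
    out = np.full(returns.shape, np.nan)  # undefined until a full window exists
    for i in range(window, len(returns)):
        out[i] = -np.quantile(returns[i - window:i], 1.0 - level)
    return out
```

Because the window moves, this estimator adapts only slowly to time-varying volatility and skewness, which is exactly the weakness that the GLD-based approach targets.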
Abstract:
Business Process Modelling is a fast-growing field in business and information technology which uses visual grammars to model and execute the processes within an organisation. However, many analysts present such models in a static, iconic 2D manner that is difficult for many stakeholders to understand. Difficulties in understanding such grammars can impede the improvement of processes within an enterprise due to communication problems. In this chapter we present a novel framework for intuitively visualising animated business process models in interactive Virtual Environments. We also show that virtual environment visualisations can be performed with present 2D business process modelling technology, thus providing a low barrier to entry for business process practitioners. Two case studies are presented, from the film production and healthcare domains, that illustrate the ease with which these visualisations can be created. This approach can be generalised to other executable workflow systems and to any application domain being modelled.
Abstract:
Process modeling is a complex organizational task that requires many iterations and communication between the business analysts and the domain specialists involved. The challenge is exacerbated when modeling has to be performed in a cross-organizational, distributed environment. Some systems have been developed to support collaborative process modeling, all of which use traditional 2D interfaces. We present an environment for collaborative process modeling using 3D virtual environment technology. We make use of avatar instantiations of users' ego centres to allow for the spatial embodiment of the user with reference to the process model. We describe an innovative prototype collaborative process modeling approach, implemented as a modeling environment in Second Life. This approach leverages virtual environments to provide user context for editing and collaborative exercises. We present a positive preliminary report on a case study in which a test group modelled a business process using the system in Second Life.