883 results for Minimization of open stack problem
Abstract:
This paper argues for a model of open-system design for sustainable architecture, based on a thermodynamic framework of entropy as an evolutionary paradigm. The framework can be simplified to stating that an open system evolves in a non-linear pattern from a far-from-equilibrium state towards a non-equilibrium state of entropy balance, a highly ordered organization of the system in which order comes out of chaos. This paper is work in progress on a PhD research project which aims to propose building information modelling for the optimization and adaptation of buildings' environmental performance as an alternative sustainable design program in architecture. It will be used for the efficient distribution and consumption of energy and material resources over a building's life cycle, with the active involvement of the end-users and subject to the physical constraints of the natural environment.
Abstract:
To obtain minimum-time or minimum-energy trajectories for robots it is necessary to employ planning methods which adequately consider the platform's dynamic properties. A variety of sampling, graph-based or local receding-horizon optimisation methods have previously been proposed. These typically use simplified kinodynamic models to avoid the significant computational burden of solving this problem in a high-dimensional state-space. In this paper we investigate solutions from the class of pseudospectral optimisation methods, which have grown in favour amongst the optimal control community in recent years. These methods have high computational efficiency and rapid convergence properties. We present a practical application of such an approach to the robot path planning problem to provide a trajectory considering the robot's dynamic properties. We extend the existing literature by augmenting the path constraints with sensed obstacles rather than predefined analytical functions to enable real-world application.
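For readers unfamiliar with the approach, the following is a minimal sketch of the pseudospectral recipe on a toy minimum-time problem (a double integrator driven from rest at 0 to rest at 1 with bounded control): the trajectory is represented at Chebyshev-Gauss-Lobatto nodes, the dynamics are enforced through the spectral differentiation matrix, and the final time is minimised with an off-the-shelf NLP solver. This illustrates the method class only; it is not the authors' implementation, and every setting in it is illustrative.

```python
# Minimal pseudospectral (Chebyshev collocation) sketch: minimum-time control
# of a double integrator, p'' = u, |u| <= 1, from (p, v) = (0, 0) to (1, 0).
# Illustrative only; the analytic optimum for this problem is tf = 2.
import numpy as np
from scipy.optimize import minimize

N = 16  # polynomial degree; N + 1 collocation nodes

def cheb(N):
    """Chebyshev-Gauss-Lobatto nodes on [-1, 1] and the differentiation
    matrix (Trefethen, Spectral Methods in MATLAB), flipped so tau increases."""
    j = np.arange(N + 1)
    tau = np.cos(np.pi * j / N)                    # decreasing from 1 to -1
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** j
    T = np.tile(tau, (N + 1, 1)).T
    dT = T - T.T + np.eye(N + 1)
    D = np.outer(c, 1.0 / c) / dT
    D -= np.diag(D.sum(axis=1))                    # negative-sum trick
    return tau[::-1], D[::-1, ::-1]

tau, D = cheb(N)
n = N + 1

def unpack(z):
    # decision vector: position, velocity, control at the nodes, final time
    return z[:n], z[n:2 * n], z[2 * n:3 * n], z[-1]

def dynamics(z):
    p, v, u, tf = unpack(z)
    # t = tf * (tau + 1) / 2, so d/dt = (2 / tf) d/dtau at the nodes
    return np.hstack([D @ p - 0.5 * tf * v, D @ v - 0.5 * tf * u])

def boundary(z):
    p, v, _, _ = unpack(z)
    return np.array([p[0], v[0], p[-1] - 1.0, v[-1]])

z0 = np.hstack([np.linspace(0, 1, n), np.ones(n), np.zeros(n), 3.0])
bounds = [(None, None)] * 2 * n + [(-1, 1)] * n + [(1e-2, None)]
res = minimize(lambda z: unpack(z)[3], z0, method="SLSQP", bounds=bounds,
               constraints=[{"type": "eq", "fun": dynamics},
                            {"type": "eq", "fun": boundary}])
print("minimum time ~", res.x[-1])                 # should approach 2.0
```

In a robotics setting the double-integrator dynamics would be replaced by the platform's dynamic model and the control bounds by actuator limits, with sensed-obstacle constraints of the kind described in the abstract entering as additional inequality constraints at the nodes.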
Abstract:
The Saffman-Taylor finger problem is to predict the shape and, in particular, width of a finger of fluid travelling in a Hele-Shaw cell filled with a different, more viscous fluid. In experiments the width is dependent on the speed of propagation of the finger, tending to half the total cell width as the speed increases. To predict this result mathematically, nonlinear effects on the fluid interface must be considered; usually surface tension is included for this purpose. This makes the mathematical problem sufficiently difficult that asymptotic or numerical methods must be used. In this paper we adapt numerical methods used to solve the Saffman-Taylor finger problem with surface tension to instead include the effect of kinetic undercooling, a regularisation effect important in Stefan melting-freezing problems, for which Hele-Shaw flow serves as a leading-order approximation when the specific heat of a substance is much smaller than its latent heat. We find the existence of a solution branch where the finger width tends to zero as the propagation speed increases, disagreeing with some aspects of the asymptotic analysis of the same problem. We also find a second solution branch, supporting the idea of a countably infinite number of branches as with the surface tension problem.
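Schematically, the surface tension and kinetic undercooling regularisations differ only in the dynamic condition imposed on the moving interface. Writing phi for the velocity potential, v_n for the normal velocity of the interface, kappa for its curvature, and sigma and c for the small surface-tension and kinetic-undercooling parameters, the conditions take the following form (a schematic statement; signs and scalings vary with the non-dimensionalisation used):

```latex
\frac{\partial \phi}{\partial n} = v_n
  \quad \text{(kinematic condition, both problems)}, \qquad
\phi = -\sigma \kappa \ \text{(surface tension)}
  \quad \text{vs.} \quad
\phi = c\, v_n \ \text{(kinetic undercooling)}
  \quad \text{on the interface.}
```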
Abstract:
Cognitive load theory was used to generate a series of three experiments to investigate the effects of various worked example formats on learning orthographic projection. Experiments 1 and 2 investigated the benefits of presenting problems, conventional worked examples incorporating the final 2-D and 3-D representations only, and modified worked examples with several intermediate stages of rotation between the 2-D and 3-D representations. Modified worked examples proved superior to conventional worked examples without intermediate stages while conventional worked examples were, in turn, superior to problems. Experiment 3 investigated the consequences of varying the number and location of intermediate stages in the rotation trajectory and found three stages to be superior to one. A single intermediate stage was superior when nearer the 2-D than the 3-D end of the trajectory. It was concluded that (a) orthographic projection is learned best using worked examples with several intermediate stages and that (b) a linear relation between angle of rotation and problem difficulty did not hold for orthographic projection material. Cognitive load theory could be used to suggest the ideal location of the intermediate stages.
Abstract:
This paper details the development of, and the perceived role and effectiveness of, an innovative intervention designed to ultimately improve the safety of a group of community care (CC) nurses while driving. Recruiting participants from an Australian CC nursing car fleet, qualitative responses to a series of open-ended questions were obtained from drivers (n = 36), supervisors (n = 22), and managers (n = 6). The findings supported the effectiveness of the intervention in reducing self-reported speeding and promoting greater insight into one's behaviour on the road. This research has important practical implications in that it highlights the value of developing an intervention that is based on a sound theoretical framework and aligned with the needs and beliefs of personnel within a particular organisation.
Abstract:
Realisation of the importance of strategic real estate asset decision making has inspired a burgeoning corporate real estate management (CREM) literature. Much of this criticises the poor alignment between strategic business direction and the 'enabling' physical environment. This is based on the understanding that corporate real estate assets represent the physical resource base that supports business, and can either complement or impede that business. In the hope of resolving this problem, CRE authors advocate a deeper integration of strategic and corporate real estate decisions. However, this recommendation appears to be based on a relatively simplistic theoretical approach to organization, where decision making tends to be viewed as a rationally managed event rather than a complex process. Defining decision making as an isolated event has led to an uncritical acceptance of two basic assumptions: ubiquitous, conflict-free rationality and profit maximisation. These assumptions have encouraged prescriptive solutions that clearly lack the sophistication necessary to come to grips with the complexity of the built and organizational environment. Alternatively, approaching CREM decision making from a more sophisticated perspective, such as that of the "Carnegie School", leads to conceptualising it as a 'process', creating room for bounded rationality, multiple goals, intra-organizational conflict, environmental matching, uncertainty avoidance and problem searching. It is reasonable to expect that such an approach will result in a better understanding of the organizational context, which will facilitate the creation of organizational objectives, assist with the formation of strategies, and ultimately aid decision making.
Abstract:
Two-stroke outboard boat engines using total-loss lubrication deposit a significant proportion of their lubricant and fuel directly into the water. The purpose of this work is to document the velocity and concentration field characteristics of a submerged swirling water jet emanating from a propeller in order to provide information on its fundamental characteristics. The properties of the jet were examined far enough downstream to be relevant to the eventual modelling of the mixing problem. Measurements of the velocity and concentration field were performed in a turbulent jet generated by a model boat propeller (0.02 m diameter) operating at 1500 rpm and 3000 rpm in a weak co-flow of 0.04 m/s. The measurements were carried out in the Zone of Established Flow up to 50 propeller diameters downstream of the propeller, which was placed in a glass-walled flume 0.4 m wide with a free-surface depth of 0.15 m. The jet and scalar plume development were compared to those of a classical free round jet. Further, results pertaining to radial distribution, self-similarity, standard deviation growth, maximum value decay and integral fluxes of velocity and concentration were presented and fitted with empirical correlations. Furthermore, propeller-induced mixing and the pollutant source concentration from a two-stroke engine were estimated.
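For context, the classical free round jet used as the benchmark is self-similar in the far field: centreline velocity and concentration decay inversely with downstream distance and the radial profiles are approximately Gaussian. In a common form of the empirical correlations (with x the downstream distance, x_0 a virtual origin, d the source diameter, b_u the local jet half-width, and B, K fitted decay constants, none of which are the values measured in this work):

```latex
\frac{U_c(x)}{U_0} = B\,\frac{d}{x - x_0}, \qquad
\frac{C_c(x)}{C_0} = K\,\frac{d}{x - x_0}, \qquad
\frac{U(x, r)}{U_c(x)} = \exp\!\left[-\left(\frac{r}{b_u(x)}\right)^{2}\right],
\qquad b_u(x) \propto x .
```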
Abstract:
A model for drug diffusion from a spherical polymeric drug delivery device is considered. The model contains two key features. The first is that solvent diffuses into the polymer, which then transitions from a glassy to a rubbery state. The interface between the two states of polymer is modelled as a moving boundary, whose speed is governed by a kinetic law; the same moving boundary problem arises in the one-phase limit of a Stefan problem with kinetic undercooling. The second feature is that drug diffuses only through the rubbery region, with a nonlinear diffusion coefficient that depends on the concentration of solvent. We analyse the model using both formal asymptotics and numerical computation, the latter by applying a front-fixing scheme with a finite volume method. Previous results are extended and comparisons are made with linear models that work well under certain parameter regimes. Finally, a model for a multi-layered drug delivery device is suggested, which allows for more flexible control of drug release.
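As a one-dimensional illustration of the front-fixing idea (the paper's model is spherical with a nonlinear, solvent-dependent diffusivity; this is the plain one-phase Stefan problem with kinetic undercooling parameter epsilon), the substitution xi = x/s(t) maps the moving domain 0 < x < s(t) onto the fixed interval 0 < xi < 1, at the cost of an extra advective term:

```latex
\frac{\partial u}{\partial t}
  = \frac{1}{s^{2}} \frac{\partial^{2} u}{\partial \xi^{2}}
  + \frac{\xi \dot{s}}{s} \frac{\partial u}{\partial \xi},
  \quad 0 < \xi < 1, \qquad
u\big|_{\xi = 1} = -\varepsilon\,\dot{s}, \qquad
\dot{s} = -\frac{1}{s} \frac{\partial u}{\partial \xi}\bigg|_{\xi = 1},
```

after which a standard finite volume discretisation can be applied on the fixed grid, with the front speed updated from the flux at xi = 1.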
Abstract:
We consider the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation. Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs). Its convexity ensures that the sample average of this surrogate loss can be efficiently minimized. We study its statistical properties. We show that minimizing the expected surrogate loss—the φ-risk—also minimizes the risk. We also study the rate at which the φ-risk approaches its minimum value. We show that fast rates are possible when the conditional probability P(Y=1|X) is unlikely to be close to certain critical values.
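A sketch of the construction follows: a piecewise-linear convex surrogate whose slope on the misclassified side is steepened by a factor tied to the rejection cost d, minimised over a linear class, with rejection declared when |f(x)| falls below a threshold. The specific constants, the threshold delta and the toy data are illustrative assumptions, not the paper's definitive construction.

```python
# Sketch: convex surrogate for binary classification with a reject option.
# The surrogate is hinge-like but with slope -(1 - d) / d for negative
# margins; d is the cost of abstaining. All settings here are illustrative.
import numpy as np
from scipy.optimize import minimize

def surrogate(z, d):
    """Piecewise-linear convex loss: 1 - ((1-d)/d) z for z < 0,
    max(0, 1 - z) for z >= 0; convex since (1-d)/d >= 1 when d <= 1/2."""
    a = (1.0 - d) / d
    return np.where(z < 0, 1.0 - a * z, np.maximum(0.0, 1.0 - z))

def fit_linear(X, y, d, lam=1e-3):
    """Minimise the sample average of the surrogate over f(x) = X @ w."""
    def objective(w):
        return surrogate(y * (X @ w), d).mean() + lam * w @ w
    return minimize(objective, np.zeros(X.shape[1]), method="Powell").x

def predict_with_reject(X, w, delta=0.5):
    """Predict sign(f); abstain (output 0) when |f| <= delta
    (an illustrative rejection rule, not the paper's calibrated one)."""
    f = X @ w
    out = np.sign(f).astype(int)
    out[np.abs(f) <= delta] = 0
    return out

# toy usage: two Gaussian blobs, labels in {-1, +1}, rejection cost d = 0.2
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
X = np.hstack([X, np.ones((200, 1))])            # bias column
y = np.hstack([-np.ones(100), np.ones(100)])
w = fit_linear(X, y, d=0.2)
counts = np.bincount(predict_with_reject(X, w) + 1, minlength=3)
print("predicted -1 / reject / +1:", counts)
```

Convexity of the surrogate is what makes minimising its sample average tractable, in contrast to the discontinuous 0-1-reject cost itself.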
Abstract:
Binary classification is a well studied special case of the classification problem. Statistical properties of binary classifiers, such as consistency, have been investigated in a variety of settings. Binary classification methods can be generalized in many ways to handle multiple classes. It turns out that one can lose consistency in generalizing a binary classification method to deal with multiple classes. We study a rich family of multiclass methods and provide a necessary and sufficient condition for their consistency. We illustrate our approach by applying it to some multiclass methods proposed in the literature.
Abstract:
Most learning paradigms impose a particular syntax on the class of concepts to be learned; the chosen syntax can dramatically affect whether the class is learnable or not. For classification paradigms, where the task is to determine whether the underlying world does or does not have a particular property, how that property is represented has no bearing on the power of a classifier that just outputs 1's or 0's. But is it possible to give a canonical syntactic representation of the class of concepts that are classifiable according to the particular criteria of a given paradigm? We provide a positive answer to this question for classification-in-the-limit paradigms in a logical setting, with ordinal mind change bounds as a measure of complexity. The syntactic characterization that emerges enables one to derive that if a possibly noncomputable classifier can perform the task assigned to it by the paradigm, then a computable classifier can also perform the same task. The syntactic characterization is strongly related to the difference hierarchy over the class of open sets of some topological space; this space is naturally defined from the class of possible worlds and possible data of the learning paradigm.
Abstract:
Open-source software systems have become a viable alternative to proprietary systems. We collected data on the usage of an open-source workflow management system developed by a university research group, and examined this data with a focus on how three different user cohorts – students, academics and industry professionals – develop behavioral intentions to use the system. Building upon a framework of motivational components, we examined the group differences in extrinsic versus intrinsic motivations on continued usage intentions. Our study provides a detailed understanding of the use of open-source workflow management systems in different user communities. Moreover, it discusses implications for the provision of workflow management systems, the user-specific management of open-source systems and the development of services in the wider user community.
Abstract:
The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers and one poster presentation, of which five have been published and the other two are under review. This project is financially supported by the QUTPRA Grant. The twenty-first century started with the resurrection of lignocellulosic biomass as a potential substitute for petrochemicals. Petrochemicals, which underpinned economic growth during the past century, have begun to reach, or have reached, their peak. The world energy situation is complicated by political uncertainty and by the environmental impact associated with petrochemical import and usage. In particular, greenhouse gases and toxic emissions produced by petrochemicals have been implicated as a significant cause of climate change. Lignocellulosic biomass (e.g. sugarcane biomass and bagasse), which potentially enjoys a more abundant, widely distributed, and cost-effective resource base, can play an indispensable role in the paradigm transition from a fossil-based to a carbohydrate-based economy. Poly(3-hydroxybutyrate) (PHB) has attracted much commercial interest as a biodegradable plastic because some of its physical properties are similar to those of polypropylene (PP), even though the two polymers have quite different chemical structures. PHB exhibits a high degree of crystallinity, has a high melting point of approximately 180°C and, most importantly, unlike PP, is rapidly biodegradable. Two major factors currently inhibit the widespread use of PHB: its high cost and its poor mechanical properties. The production costs of PHB are significantly higher than for plastics produced from petrochemical resources (e.g. PP costs US$1 kg-1, whereas PHB costs US$8 kg-1), and its stiff and brittle nature makes processing difficult and impedes its ability to handle high impact. Lignin, together with cellulose and hemicellulose, is one of the three main components of every lignocellulosic biomass. It is a natural polymer occurring in the plant cell wall and, after cellulose, is the most abundant polymer in nature. It is extracted mainly as a by-product in the pulp and paper industry. Although lignin is traditionally burnt in industry for energy, it has many value-added properties. Lignin, which to date has not been fully exploited, is an amorphous polymer with hydrophobic behaviour; these properties make it a good candidate for blending with PHB, and blending can be a viable route to cost reduction and enhanced product properties. Theoretically, lignin and PHB affect the physicochemical properties of each other when they become miscible in a composite. A comprehensive study of the structural, thermal, rheological and environmental properties of lignin/PHB blends, together with neat lignin and PHB, is the targeted scope of this thesis. An introduction to this research, including a description of the research problem, a literature review and an account of the research progress linking the research papers, is presented in Chapter 1. In this research, lignin was obtained from bagasse through extraction with sodium hydroxide. A novel two-step pH precipitation procedure was used to recover soda lignin with a purity of 96.3 wt% from the black liquor (i.e. the spent sodium hydroxide solution).
The precipitation process is presented in Chapter 2. A sequential solvent extraction process was used to fractionate the soda lignin into three fractions. These fractions, together with the soda lignin, were characterised to determine elemental composition, purity, carbohydrate content, molecular weight, and functional group content. The thermal properties of the lignins were also determined. The results are presented and discussed in Chapter 2. On the basis of the type and quantity of functional groups, attempts were made to identify potential applications for each of the individual lignins. As an addendum to the general section on the development of lignin composite materials, which includes Chapters 1 and 2, studies on the kinetics of bagasse thermal degradation are presented in Appendix 1. The work showed that distinct stages of mass loss depend on residual sucrose. As the development of value-added products from lignin will improve the economics of cellulosic ethanol, a review of lignin applications, including lignin/PHB composites, is presented in Appendix 2. Chapters 3, 4 and 5 are dedicated to investigations of the properties of soda lignin/PHB composites. Chapter 3 reports on the thermal stability and miscibility of the blends. Although the addition of soda lignin shifts the onset of PHB decomposition to lower temperatures, the lignin/PHB blends are thermally more stable over a wider temperature range. The results from the thermal study also indicated that blends containing up to 40 wt% soda lignin were miscible. The Tg data for these blends fitted the Gordon-Taylor and Kwei models well. Fourier transform infrared spectroscopy (FT-IR) evaluation showed that the miscibility of the blends was due to specific hydrogen bonding (and similar interactions) between the reactive phenolic hydroxyl groups of lignin and the carbonyl group of PHB. The thermophysical and rheological properties of soda lignin/PHB blends are presented in Chapter 4. In this chapter, the kinetics of thermal degradation of the blends is studied using thermogravimetric analysis (TGA). This preliminary investigation is limited to the processing temperature range of blend manufacturing. Of significance in the study is the drop in the apparent activation energy, Ea, from 112 kJ mol-1 for pure PHB to half that value for the blends. This means that the addition of lignin to PHB reduces the thermal stability of PHB, and that the comparatively reduced weight loss observed in the TGA data is associated with the slower rate of lignin degradation in the composite. The Tg of PHB, as well as its melting temperature, melting enthalpy and crystallinity, decreases with increasing lignin content. Results from the rheological investigation showed that at low lignin content (≤ 30 wt%) lignin acts as a plasticiser for PHB, while at high lignin content it acts as a filler. Chapter 5 is dedicated to the environmental study of soda lignin/PHB blends. The biodegradability of the lignin/PHB blends is compared to that of PHB using the standard soil burial test. To obtain acceptable biodegradation data, samples were buried for 12 months under controlled conditions. Gravimetric analysis, TGA, optical microscopy, scanning electron microscopy (SEM), differential scanning calorimetry (DSC), FT-IR, and X-ray photoelectron spectroscopy (XPS) were used in the study.
The results clearly demonstrated that lignin retards the biodegradation of PHB, and that the miscible blends were more resistant to degradation than the immiscible blends. To understand the relationship between the structure of lignin and the properties of the blends, a methanol-soluble lignin, which contains 3× fewer phenolic hydroxyl groups than the parent soda lignin used in preparing the blends reported in Chapters 3 and 4, was blended with PHB and the properties of the blends investigated. The results are reported in Chapter 6. At up to 40 wt% methanol-soluble lignin, the experimental data fitted the Gordon-Taylor and Kwei models, similar to the results obtained for the soda lignin-based blends. However, the values obtained for the interaction parameters of the methanol-soluble lignin blends were slightly lower than those of the soda lignin blends, indicating a weaker association between methanol-soluble lignin and PHB. FT-IR data confirmed that hydrogen bonding is the main interactive force between the reactive functional groups of lignin and the carbonyl group of PHB. In summary, the structural differences between the two lignins did not manifest themselves in the properties of their blends.
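For reference, the Gordon-Taylor and Kwei equations used in Chapters 3, 4 and 6 predict the glass-transition temperature of a miscible blend from the component weight fractions w1, w2 and glass transitions Tg1, Tg2, with a fitted parameter k (plus a quadratic interaction term q·w1·w2 in the Kwei model). A minimal fitting sketch follows; the component Tg values and blend data in it are placeholders, not the thesis's measurements.

```python
# Fitting the Gordon-Taylor and Kwei equations for blend Tg.
# Tg = (w1*Tg1 + k*w2*Tg2) / (w1 + k*w2)     (Gordon-Taylor)
# Tg = Gordon-Taylor term + q*w1*w2          (Kwei)
# All numbers below are placeholders, not measured values from the thesis.
import numpy as np
from scipy.optimize import curve_fit

TG_PHB, TG_LIGNIN = 4.0, 140.0          # component Tg in degrees C (assumed)

def gordon_taylor(w2, k):
    w1 = 1.0 - w2                        # w2 = lignin weight fraction
    return (w1 * TG_PHB + k * w2 * TG_LIGNIN) / (w1 + k * w2)

def kwei(w2, k, q):
    # the q*w1*w2 term is attributed to specific interactions, e.g. the
    # hydrogen bonding between lignin OH and PHB carbonyl groups noted above
    return gordon_taylor(w2, k) + q * (1.0 - w2) * w2

# illustrative blend data: lignin weight fraction vs measured blend Tg
w2_data = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
tg_data = np.array([4.0, 12.0, 22.0, 35.0, 50.0])
(k_gt,), _ = curve_fit(gordon_taylor, w2_data, tg_data, p0=[0.5])
(k_kw, q_kw), _ = curve_fit(kwei, w2_data, tg_data, p0=[0.5, 10.0])
print(f"Gordon-Taylor k = {k_gt:.2f};  Kwei k = {k_kw:.2f}, q = {q_kw:.2f}")
```

A weaker association between the components, as reported for the methanol-soluble lignin blends, would show up as smaller fitted interaction parameters.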
Abstract:
A better understanding of Open Source Innovation in Physical Products (OSIP) might allow project managers to mitigate risks associated with this innovation model and process, while developing the right strategies to maximise OSIP outputs. In the software industry, firms have been highly successful using Open Source Innovation (OSI) strategies. However, OSI in the physical world has not been studied, leading to the research question: what advantages and disadvantages do organisations incur from using OSI in physical products? An exploratory research methodology supported by thirteen semi-structured interviews helped us build a seven-theme framework to categorise the advantage and disadvantage elements linked with the use of OSIP. In addition, factors impacting advantage and disadvantage elements for firms using OSIP were identified as: the degree of openness in OSIP projects; the time of release of OSIP into the public domain; the use of Open Source Innovation in Software (OSIS) in conjunction with OSIP; project management elements (project oversight, scope and modularity); firms' Corporate Social Responsibility (CSR) values; and the value of the OSIP project to the community. This thesis contributes to the body of innovation theory by identifying the advantage and disadvantage elements of OSIP. Then, from a contingency perspective, it identifies factors which enhance or decrease the advantages, or mitigate or increase the disadvantages, of OSIP. Finally, the research clarifies the understanding of OSI by clearly setting OSIP apart from OSIS. The main practical contribution of this thesis is to provide managers with a framework to better understand OSIP, as well as a model which identifies the contingency factors that increase advantages and decrease disadvantages. Overall, the research allows managers to make informed decisions about when they can use OSIP and how they can develop strategies to make OSIP a viable proposition. In addition, it demonstrates that the advantages identified in OSIS cannot all be transferred to OSIP; thus OSIP decisions should not be based upon OSIS knowledge.
Abstract:
The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three accepted for publication and the other three under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters. Distributed Generation (DG) has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system as a compensator or as a power source with high-quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak-source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices that inject high-frequency components rather than the desired current alone. Also, noise and harmonic distortion can impact the performance of the control strategies. To be able to mitigate the negative effects of high-frequency, harmonic and noise distortion and achieve satisfactory performance of DG systems, new methods of signal parameter estimation have been proposed in this thesis. These methods are based on processing the digital samples of power system signals. Thus, proposing advanced techniques for the digital estimation of signal parameters and methods for the generation of DG reference currents using the estimates provided is the targeted scope of this thesis. An introduction to this research, including a description of the research problem, the literature review and an account of the research progress linking the research papers, is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the well-established and advanced techniques used for the estimation of power system frequency. Chapter 2 focuses on an in-depth analysis conducted on the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other, novel techniques proposed in this thesis are compared with the PM technique comprehensively studied in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3.
The algorithm is intended to estimate signal parameters such as amplitude, frequency and phase angle online. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed as a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is given to the Kalman filter to be used in building the transition matrices. The initial settings for the modified Kalman filter are obtained through a trial-and-error exercise. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is also modified to operate on the output signal of the same FIR filter explained above. Nevertheless, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated and it interacts with the Kalman filter. The estimated frequency is given to the Kalman filter, and other parameters, such as the amplitudes and phase angles estimated by the Kalman filter, are fed back to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements where the noise level is reduced on the sample vector. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a similar structure to a basic Kalman filter, except that the initial settings are computed through extensive mathematical analysis of the matrix arrangement utilised. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for the better performance of the modified Kalman filter are calculated instead of being guessed through trial-and-error exercises. The simulation results for the estimated signal parameters are enhanced due to the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the parameters of the signal at these critical transients are not very well tracked by Kalman filtering. However, the proposed LES technique is found to be much faster in tracking these changes. Therefore, an appropriate combination of the LES technique and modified Kalman filtering is proposed in Chapter 6. Also, this time the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 7 proposes a further algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. Again, the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 in the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme.
Several simulation tests provided in this chapter show that the proposed scheme can handle source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents distortion in the voltage at the point of common coupling in weak-source cases, balances the source currents, and brings the supply-side power factor to a desired value.
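As a minimal illustration of the Kalman filtering stage shared by Chapters 3 to 7, the sketch below estimates the amplitude and phase of a noisy power-system sinusoid, assuming its frequency has already been estimated. The thesis's algorithms add FIR pre-filtering, coupled frequency estimation and carefully derived initial settings; here those are replaced by guessed values, and all numbers are illustrative.

```python
# Linear Kalman filter for the phasor [A*cos(phi), A*sin(phi)] of a sinusoid
# with known (pre-estimated) frequency. Illustrative settings throughout.
import numpy as np

fs, f0 = 5000.0, 50.0                   # sampling rate and frequency, Hz
dt = 1.0 / fs
A_true, phi_true = 1.0, 0.4             # ground truth for the demo
k = np.arange(1000)
rng = np.random.default_rng(1)
y = A_true * np.cos(2 * np.pi * f0 * k * dt + phi_true) \
    + 0.05 * rng.standard_normal(k.size)

x = np.zeros(2)                         # state: [A cos(phi), A sin(phi)]
P = np.eye(2) * 10.0                    # initial covariance (a guess)
Q = np.eye(2) * 1e-6                    # process noise: allows slow drift
R = 0.05 ** 2                           # measurement noise variance

for i in k:
    theta = 2 * np.pi * f0 * i * dt
    H = np.array([np.cos(theta), -np.sin(theta)])   # y_i = H @ x + noise
    P = P + Q                           # predict step (identity dynamics)
    S = H @ P @ H + R                   # innovation variance
    K = P @ H / S                       # Kalman gain
    x = x + K * (y[i] - H @ x)          # measurement update
    P = P - np.outer(K, H @ P)

amp, phase = np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])
print(f"estimated amplitude {amp:.3f}, phase {phase:.3f} rad")
```

The measurement model follows from A cos(theta + phi) = A cos(phi) cos(theta) - A sin(phi) sin(theta); with a near-constant phasor state the transition matrix is the identity, which is where a frequency estimate feeding time-varying transition matrices (as in Chapters 3 and 4 of the thesis) would enter.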