891 results for Design methods
Abstract:
Tapered tubular steel masts are commonly used to support floodlights in a range of applications. Because both the diameter and the wall thickness of these slender masts vary with height, their design requires a rational elastic flexural buckling analysis. A series of finite element analyses of tapered masts with varying geometry parameters was therefore conducted to develop an elastic flexural buckling load formula. This paper briefly discusses the design methods and then presents the details of the finite element analyses and the results.
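For context, the elastic flexural buckling load of a uniform cantilever column of height L, Young's modulus E and second moment of area I is given by the classical Euler expression below; a design formula for a tapered mast with varying wall thickness would typically replace the constant with a coefficient depending on the taper and thickness ratios. This is background only, not the formula developed in the paper.

```latex
P_{cr} = \frac{\pi^{2} E I}{(2L)^{2}} = \frac{\pi^{2} E I}{4 L^{2}}
```

The effective length 2L corresponds to the fixed-free (cantilever) boundary condition of a mast anchored at its base.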
Abstract:
Big Datasets are endemic, but they are often notoriously difficult to analyse because of their size, heterogeneity, history and quality. The purpose of this paper is to open a discourse on the use of modern experimental design methods to analyse Big Data in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has wide generality and advantageous inferential and computational properties. In particular, the principled experimental design approach is shown to provide a flexible framework for analysis that, for certain classes of objectives and utility functions, delivers answers nearly equivalent to those from analyses of the full dataset, under a controlled error rate. It can also provide a formalised method for iterative parameter estimation, model checking, identification of data gaps and evaluation of data quality. Finally, it has the potential to add value to other Big Data sampling algorithms, in particular divide-and-conquer strategies, by determining efficient sub-samples.
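As an illustration of how an experimental-design criterion can drive sub-sampling of a large dataset, the sketch below greedily selects rows of a design matrix to maximise the determinant of the information matrix (D-optimality). The function name and the toy data are purely illustrative and not from the paper.

```python
import numpy as np

def d_optimal_subsample(X, k, ridge=1e-8):
    """Greedily pick k rows of X that maximise log det(X_S' X_S).
    A toy sketch; practical implementations use exchange algorithms
    and candidate filtering rather than a full scan."""
    n, p = X.shape
    selected = []
    M = ridge * np.eye(p)                 # regularised information matrix
    for _ in range(k):
        Minv = np.linalg.inv(M)
        best_i, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            x = X[i]
            # log-det gain of adding row x (matrix determinant lemma)
            gain = np.log1p(x @ Minv @ x)
            if gain > best_gain:
                best_i, best_gain = i, gain
        selected.append(best_i)
        M += np.outer(X[best_i], X[best_i])
    return selected

rng = np.random.default_rng(0)
X_big = rng.normal(size=(5000, 4))        # stand-in for a big dataset
rows = d_optimal_subsample(X_big, k=50)   # indices of an informative sub-sample
```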
Abstract:
A global framework for linear stability analyses of traffic models, based on the dispersion relation root locus method, is presented and applied to a broad class of car-following (CF) models. This approach is able to analyse all aspects of the dynamics: long-wave and short-wave behaviour, phase velocities and stability features. The methodology is applied to investigate the potential benefits of connected vehicles, i.e. V2V communication enabling a vehicle to send information to and receive information from surrounding vehicles. We focus on the design of the cooperation coefficients, which weight the information from downstream vehicles. The tuning of these coefficients is carried out, and different ways of implementing an efficient cooperative strategy are discussed. This paper therefore provides design methods for obtaining robust stability of traffic models, with application to cooperative CF models.
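To make the flavour of such a stability analysis concrete (this is a generic long-wave string-stability check, not the paper's root-locus framework), the sketch below numerically linearises a placeholder car-following model dv/dt = f(s, v, Δv) and evaluates the standard condition f_v²/2 − f_v·f_Δv − f_s ≥ 0 at equilibrium. The model and its parameters are illustrative assumptions.

```python
import numpy as np

def cf_accel(s, v, dv, a=1.0, s0=2.0, T=1.5, v0=30.0):
    """Toy car-following acceleration f(s, v, dv): relaxation towards an
    optimal velocity plus a relative-speed (cooperative) term."""
    v_opt = v0 * np.tanh(max(s - s0, 0.0) / (v0 * T))
    return a * (v_opt - v) + 0.3 * dv

def string_stable(f, s_eq, v_eq, eps=1e-5):
    """Long-wave string stability of the linearised model:
    stable iff f_v**2 / 2 - f_v * f_dv - f_s >= 0 at equilibrium."""
    f_s  = (f(s_eq + eps, v_eq, 0.0) - f(s_eq - eps, v_eq, 0.0)) / (2 * eps)
    f_v  = (f(s_eq, v_eq + eps, 0.0) - f(s_eq, v_eq - eps, 0.0)) / (2 * eps)
    f_dv = (f(s_eq, v_eq, eps) - f(s_eq, v_eq, -eps)) / (2 * eps)
    return f_v**2 / 2.0 - f_v * f_dv - f_s >= 0.0

s_eq = 30.0                                         # equilibrium spacing (m)
v_eq = 30.0 * np.tanh((s_eq - 2.0) / (30.0 * 1.5))  # matching equilibrium speed
print("string stable:", string_stable(cf_accel, s_eq, v_eq))
```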
Abstract:
This paper proposes and explores the Deep Customer Insight Innovation Framework in order to develop an understanding of how design can be integrated within existing innovation processes. The Deep Customer Insight Innovation Framework synthesises the work of Beckman and Barry (2007) as a theoretical foundation, with the framework explored within a case study of Australian Airport Corporation seeking to drive airport innovations in operations and retail performance. The integration of a deep customer insight approach develops customer-centric and highly integrated solutions as a function of concentrated problem exploration and design-led idea generation. Businesses facing complex innovation challenges or seeking to make sense of future opportunities will be able to integrate design into existing innovation processes, anchoring the new approach between existing market research and business development activities. This paper contributes a framework and a novel understanding of how design methods are integrated into existing innovation processes for operationalization within industry.
Connecting the space between design and research: Explorations in participatory research supervision
Abstract:
In this article we offer a single case study, using an action research method for gathering and analysing data, that offers insights valuable to both design and research supervision practice. We do not attempt to generalise from this single case, but offer it as an instance that can improve our understanding of research supervision practice. We question the conventional 'dyadic' models of research supervision and outline a more collaborative model, based on the signature pedagogy of architecture: the design studio. A novel approach to the supervision of creatively oriented post-graduate students is proposed, including new approaches to design methods and participatory supervision that draw on established design studio practices. This model collapses the distance between design and research activities. Our case study, involving Research Masters student supervision in the discipline of Architecture, shows how 'connected learning' emerges from this approach. This type of learning builds strong elements of creativity and fun, which promote and enhance student engagement. The results of our action research suggest that students learn to research more easily in such an environment and that supervisory practices are enhanced when we apply the techniques and characteristics of design studio pedagogy to the more conventional research pedagogies imported from the humanities. We believe that other creative disciplines can apply similar tactics to enrich both the creative practice of research and the supervision of HDR students.
Abstract:
Ongoing habitat loss and fragmentation threaten much of the biodiversity that we know today. As such, conservation efforts are required if we want to protect biodiversity. Conservation budgets are typically tight, making the cost-effective selection of protected areas difficult. Therefore, reserve design methods have been developed to identify sets of sites that together represent the species of conservation interest in a cost-effective manner. To be able to select reserve networks, data on species distributions are needed. Such data are often incomplete, but species habitat distribution models (SHDMs) can be used to link the occurrence of a species at surveyed sites to the environmental conditions at these locations (e.g. climatic, vegetation and soil conditions). The probability of the species occurring at unvisited locations is then predicted by the model, based on the environmental conditions of those sites. The spatial configuration of reserve networks is important, because habitat loss around reserves can influence the persistence of species inside the network. Since species differ in their requirements for network configuration, the spatial cohesion of networks needs to be species-specific. One way to account for species-specific requirements is to use spatial variables in SHDMs. Spatial SHDMs allow the evaluation of the effect of reserve network configuration on the probability of occurrence of the species inside the network. Even though reserves are important for conservation, they are not the only option available to conservation planners. To enhance or maintain habitat quality, restoration or maintenance measures are sometimes required. As a result, the number of conservation options per site increases. Currently available reserve selection tools, however, do not offer the ability to handle multiple, alternative options per site. This thesis extends the existing methodology for reserve design by offering methods to identify cost-effective conservation planning solutions when multiple, alternative conservation options are available per site. Although restoration and maintenance measures are beneficial to certain species, they can be harmful to other species with different requirements. This introduces trade-offs between species when identifying which conservation action is best applied to which site. The thesis describes how the strength of such trade-offs can be identified, which is useful for assessing the consequences of conservation decisions regarding species priorities and budget. Furthermore, the results of the thesis indicate that spatial SHDMs can be successfully used to account for species-specific requirements for spatial cohesion, in the reserve selection (single-option) context as well as in the multi-option context. Accounting for the spatial requirements of multiple species while allowing for several conservation options is, however, complicated, due to trade-offs in species requirements. It is also shown that spatial SHDMs can be successfully used for gaining information on the factors that drive a species' spatial distribution. Such information is valuable to conservation planning, as better knowledge of species requirements facilitates the design of networks for species persistence. The methods and results described in this thesis aim to improve species' probabilities of persistence by taking better account of species' habitat and spatial requirements.
Many real-world conservation planning problems are characterised by a variety of conservation options related to protection, restoration and maintenance of habitat. Planning tools therefore need to be able to incorporate multiple conservation options per site, in order to continue the search for cost-effective conservation planning solutions. Simultaneously, the spatial requirements of species need to be considered. The methods described in this thesis offer a starting point for combining these two relevant aspects of conservation planning.
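To make the multi-option selection problem concrete, here is a minimal, hypothetical sketch of a greedy cost-effectiveness heuristic in which each site offers several mutually exclusive conservation options (e.g. protect, restore, maintain), each with a cost and species-specific occurrence probabilities. It illustrates the problem structure only; it is not the optimisation method developed in the thesis, and all numbers are placeholders.

```python
# Greedy site-and-option selection under a budget. Illustrative only:
# real planning tools use integer programming or stochastic optimisation.
sites = {
    "A": {"protect": (3.0, {"sp1": 0.8, "sp2": 0.1}),
          "restore": (5.0, {"sp1": 0.9, "sp2": 0.6})},
    "B": {"protect": (2.0, {"sp1": 0.2, "sp2": 0.7})},
    "C": {"protect": (4.0, {"sp1": 0.5, "sp2": 0.5}),
          "maintain": (1.0, {"sp1": 0.3, "sp2": 0.2})},
}
targets = {"sp1": 1.0, "sp2": 1.0}   # required expected occurrences per species
budget = 8.0

plan, achieved = {}, {s: 0.0 for s in targets}
while budget > 0:
    best = None
    for site, options in sites.items():
        if site in plan:
            continue                              # at most one option per site
        for option, (cost, probs) in options.items():
            if cost > budget:
                continue
            # marginal contribution towards unmet species targets
            gain = sum(min(probs[s], targets[s] - achieved[s])
                       for s in targets if achieved[s] < targets[s])
            if gain > 0 and (best is None or gain / cost > best[0]):
                best = (gain / cost, site, option, cost, probs)
    if best is None:
        break
    _, site, option, cost, probs = best
    plan[site] = option
    budget -= cost
    for s in targets:
        achieved[s] += probs[s]

print(plan, achieved)
```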
Abstract:
Few design methods are available for geocell-supported embankments. Two of the earlier methods are considered in this paper, and a third method is proposed and compared with them. The first method is the slip line method proposed by earlier researchers. The second method, proposed earlier by this author, is based on slope stability analysis, and the new method is based on finite element analysis. In the first method, plastic bearing failure of the soil is assumed and the additional resistance due to the geocell layer is calculated using a non-symmetric slip line field in the soft foundation soil. In the second method, a general-purpose slope stability program is used to design the geocell mattress of required strength for the embankment, using a composite model to represent the shear strength of the geocell layer. In the third method, proposed in this paper, the geocell reinforcement is designed based on plane strain finite element analysis of the embankment. The geocell layer is modelled as an equivalent composite layer with modified strength and stiffness values, and its strength and dimensions are estimated for the required bearing capacity or permissible deformations. These three design methods are compared through a design example.
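As a rough illustration of the equivalent-composite idea used in the second and third methods, the sketch below converts an assumed increase in confining stress provided by the geocell into an apparent cohesion via the commonly cited relation c_r = (Δσ₃/2)·√K_p. The input values are placeholders and not taken from the paper's design example.

```python
import math

def apparent_cohesion(delta_sigma3_kpa, phi_deg):
    """Apparent cohesion of a geocell-soil composite layer from an assumed
    confining-stress increment: c_r = (d_sigma3 / 2) * sqrt(Kp),
    with Kp the Rankine passive earth pressure coefficient."""
    kp = math.tan(math.radians(45.0 + phi_deg / 2.0)) ** 2
    return 0.5 * delta_sigma3_kpa * math.sqrt(kp)

# placeholder inputs: 30 kPa confinement increase, infill friction angle 32 deg
print(f"c_r = {apparent_cohesion(30.0, 32.0):.1f} kPa")
```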
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
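A much simplified sketch of the adaptive idea follows: a small Bayesian loop that, at each stage, picks the test with the largest expected reduction in posterior entropy over the candidate theories and updates beliefs from the observed choice. It uses plain information gain rather than the EC2 objective, and random toy likelihoods in place of real decision models, so it illustrates adaptive design rather than BROAD itself.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def choose_test(prior, like):
    """Pick the test with the largest expected posterior-entropy reduction.
    like[t, h] = P(subject chooses option A on test t | theory h)."""
    best_t, best_gain = None, -np.inf
    for t in range(like.shape[0]):
        gain = entropy(prior)
        for resp_prob in (like[t], 1.0 - like[t]):      # response A or B
            p_resp = float(np.dot(prior, resp_prob))
            if p_resp > 0:
                post = prior * resp_prob / p_resp
                gain -= p_resp * entropy(post)
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t

def update(prior, like, t, chose_a):
    resp_prob = like[t] if chose_a else 1.0 - like[t]
    post = prior * resp_prob
    return post / post.sum()

rng = np.random.default_rng(1)
like = rng.uniform(0.05, 0.95, size=(40, 4))     # 40 candidate tests, 4 theories
belief = np.full(4, 0.25)
true_theory = 2                                  # simulated subject's theory
for _ in range(10):                              # simulated adaptive session
    t = choose_test(belief, like)
    chose_a = rng.random() < like[t, true_theory]
    belief = update(belief, like, t, chose_a)
print("posterior over theories:", np.round(belief, 3))
```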
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models of quasi-hyperbolic (α, β) discounting and fixed-cost discounting, and generalized hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
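For reference, the discount functions being compared can be written in their standard textbook forms (the parameter symbols here are generic, not the thesis's notation):

```latex
\begin{aligned}
\text{exponential:} \quad & D(t) = \delta^{t} \\
\text{hyperbolic:} \quad & D(t) = \frac{1}{1 + k t} \\
\text{quasi-hyperbolic:} \quad & D(0) = 1, \qquad D(t) = \beta\,\delta^{t} \quad (t > 0) \\
\text{generalized hyperbolic:} \quad & D(t) = (1 + \alpha t)^{-\beta/\alpha}
\end{aligned}
```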
In these models the passage of time is treated as linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
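A minimal sketch of the kind of reference-dependent, loss-averse utility term that can enter such a discrete choice model is given below, using a piecewise (Tversky-Kahneman style) value function inside a multinomial logit. The parameter values and the two-item example are illustrative assumptions, not the estimated model.

```python
import numpy as np

def pt_value(x, alpha=0.88, lam=2.25):
    """Reference-dependent value of a gain/loss x: concave for gains,
    steeper (loss-averse) for losses. Parameters are illustrative."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** alpha)

def logit_choice_prob(utilities):
    """Multinomial logit choice probabilities over alternatives."""
    u = np.asarray(utilities, dtype=float)
    e = np.exp(u - u.max())
    return e / e.sum()

# After a discount ends: the reference price of item 0 is the old discounted
# price, so paying the full price again registers as a loss and demand shifts
# towards the close substitute (item 1).
ref_prices = np.array([8.0, 10.0])   # item 0 was recently discounted
prices = np.array([10.0, 10.0])      # current shelf prices
base_utility = np.array([1.2, 1.0])  # item 0 slightly preferred intrinsically
u = base_utility + 0.2 * pt_value(ref_prices - prices)
print("choice probabilities:", np.round(logit_choice_prob(u), 3))
```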
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Current design codes for floating offshore structures are based on measures of short-term reliability. That is, a design storm is selected via an extreme value analysis of the environmental conditions and the reliability of the vessel in that design storm is computed. Although this approach yields valuable information on the vessel motions, it does not produce a statistically rigorous assessment of the lifetime probability of failure. An alternative approach is to perform a long-term reliability analysis in which consideration is taken of all sea states potentially encountered by the vessel during its design life. Although this approach is permitted in current design codes, the associated computational expense generally prevents its use in practice. A new, efficient approach to long-term reliability analysis is presented here, the results of which are compared with a traditional short-term analysis for the surge motion of a representative moored FPSO in head seas. This serves to illustrate the failure probabilities actually embedded within current design code methods, and the way in which design methods might be adapted to achieve a specified target safety level.
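Schematically, a long-term analysis weights the short-term (sea-state-conditional) failure probability by the long-term joint distribution of sea-state parameters such as significant wave height H_s and peak period T_p. The expression below is the generic formulation, not the paper's specific efficient estimator:

```latex
P_{f,\mathrm{LT}} \;=\; \iint P_{f}^{\,\mathrm{ST}}(h_s, t_p)\, f_{H_s, T_p}(h_s, t_p)\; \mathrm{d}h_s\, \mathrm{d}t_p
```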
Abstract:
Soil liquefaction continues to be a major source of damage to buildings and infrastructure after major earthquake events. Ground improvement methods are widely used at many sites worldwide as a way of mitigating liquefaction damage. The relative success of these ground improvement methods in preventing damage after a liquefaction event, and the mechanisms by which they mitigate liquefaction, continue to be areas of active research. In this paper the emphasis is on the use of dynamic centrifuge modelling as a tool to investigate the effectiveness of ground improvement methods in mitigating liquefaction risk. Three different ground improvement methods will be considered. First, the effectiveness of in situ densification as a liquefaction mitigation measure will be investigated. It will be shown that the mechanism by which soil densification mitigates liquefaction risk can be studied at a fundamental level using dynamic centrifuge modelling. Second, the use of drains to relieve excess pore pressures generated during an earthquake event will be considered. It will be shown that current design methods can be further improved by incorporating the understanding obtained from dynamic centrifuge tests. Finally, the use of soil grouting to mitigate liquefaction risk will be investigated. It will be shown that by grouting the foundation soil, the settlement of a building following earthquake loading can be reduced. However, the grouting must extend over the whole depth of the liquefiable layer to achieve this reduction in settlement.
Abstract:
A host of methods and tools to support designing are being developed at the Cambridge EDC. These range from tools for design management to those for the generation and selection of design ideas, layouts, materials and production processes. A project to develop a device to improve the arm mobility of muscular dystrophy sufferers is being undertaken as a test-bed to evaluate and improve these methods and tools, as well as to observe and modify its design and management processes. This paper presents the difficulties and advantages of using design methods and tools within this rehabilitation design context, with special focus on the evolution of the designs, tools, and management processes.
Abstract:
This paper proposes a magnetic circuit model (MCM) for the design of a brushless doubly-fed machine (BDFM). The BDFM possesses advantages in terms of high reliability and reduced gearbox stages, and it requires only a fractionally rated power converter. This makes it suitable for use in offshore wind turbines. It is difficult for conventional design methods to calculate the flux in the stator because the two sets of stator windings, which have different pole numbers, form a complex flux pattern that is not easily determined using common analytical approaches. However, it is advantageous to predict the flux density in the teeth and air gap at the initial design stage for sizing purposes, without recourse to finite element analysis. Therefore a magnetic circuit model is developed in this paper to calculate the flux density. A BDFM is used as a case study, with FEA validation.
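By way of illustration of the magnetic-circuit idea in general (not the BDFM model developed in the paper), a lumped magnetic circuit computes flux from a winding MMF and the series reluctances of the flux path, and then flux density from the relevant cross-sectional area. All values below are placeholders.

```python
import math

MU0 = 4 * math.pi * 1e-7             # permeability of free space (H/m)

def reluctance(length_m, area_m2, mu_r=1.0):
    """Reluctance of a uniform magnetic path: R = l / (mu0 * mu_r * A)."""
    return length_m / (MU0 * mu_r * area_m2)

# placeholder geometry: one air gap in series with an iron path
R_gap  = reluctance(0.8e-3, 2.0e-3)               # 0.8 mm gap, 20 cm^2 area
R_iron = reluctance(0.30, 2.0e-3, mu_r=4000.0)    # 30 cm iron path

mmf = 800.0                          # ampere-turns supplied by the winding
flux = mmf / (R_gap + R_iron)        # Wb, series magnetic circuit
B_gap = flux / 2.0e-3                # T, flux density over the gap area
print(f"flux = {flux * 1e3:.2f} mWb, B_gap = {B_gap:.2f} T")
```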
Abstract:
Most traditional satellite constellation design methods address simple zonal or global, continuous or discontinuous coverage based on the visibility of points on the Earth's surface. A new geometric approach for more complex coverage of a geographic region is proposed. Full and partial coverage of regions is considered: at any time, the region is completely or partially within the instantaneous access area of a satellite of the constellation. The key idea of the method is the use of a two-dimensional space to map both the satellite constellation and the coverage requirements. The space dimensions are the right ascension of the ascending node and the argument of latitude. The visibility requirements of each region can be represented as a polygon, and the satellite constellation as a uniform moving grid. At any time, at least one grid vertex must belong to the polygon. The optimal configuration of the satellite constellation corresponds to the sparsest such grid. The method is suitable for continuous and discontinuous coverage; in the latter case, whether a vertex belongs to the polygon is examined against the required revisit time. Examples of continuous coverage for a space communication network and for the United States are considered. Examples of discontinuous coverage are also presented.
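A minimal sketch of the grid-versus-polygon test in the (right ascension of ascending node, argument of latitude) plane is given below: a standard ray-casting point-in-polygon check applied to every vertex of a uniform constellation grid. The polygon, grid parameters and phasing rule are illustrative assumptions, not the paper's construction.

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: is pt = (raan, arg_lat) inside the polygon
    given as a list of (raan, arg_lat) vertices?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def grid_covers(poly, n_planes, n_sats, phase_deg, offset=(0.0, 0.0)):
    """Does at least one vertex of a uniform constellation grid fall inside
    the requirement polygon? Angles in degrees."""
    for p in range(n_planes):
        for s in range(n_sats):
            raan = (offset[0] + p * 360.0 / n_planes) % 360.0
            u = (offset[1] + s * 360.0 / n_sats + p * phase_deg) % 360.0
            if point_in_polygon((raan, u), poly):
                return True
    return False

# toy requirement polygon in the (RAAN, argument-of-latitude) plane
requirement = [(40.0, 10.0), (80.0, 10.0), (80.0, 60.0), (40.0, 60.0)]
print(grid_covers(requirement, n_planes=6, n_sats=8, phase_deg=7.5))
```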
Abstract:
Two complementary wireless sensor nodes for building two-tiered heterogeneous networks are presented. A larger node, 25 mm by 25 mm in size, acts as the backbone of the network and can handle complex data processing. A smaller, cheaper node, 10 mm by 10 mm in size, can perform simpler sensor-interfacing tasks. The 25 mm node is based on previous work at the Tyndall National Institute that created a modular wireless sensor node. In this work, a new 25 mm module is developed operating in the 433/868 MHz frequency bands, with a range of 3.8 km. The 10 mm node is highly miniaturised while retaining a high level of modularity. It has been designed to support very energy-efficient operation for applications with low duty cycles, with a sleep current of 3.3 μA. Both nodes use commercially available components and have low manufacturing costs to allow the construction of large networks. In addition, interface boards for communicating with the nodes have been developed for both the 25 mm and 10 mm nodes; these provide a USB connection and support recharging of a Li-ion battery from the USB power supply. This paper discusses the design goals, the design methods, and the resulting implementation.
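To see why a low sleep current matters for low-duty-cycle applications, the average supply current can be estimated as a duty-cycle-weighted sum of the active and sleep currents. Only the 3.3 μA sleep current is taken from the paper; the duty cycle and active-mode current below are assumed for illustration:

```latex
I_{\mathrm{avg}} = d \, I_{\mathrm{active}} + (1 - d)\, I_{\mathrm{sleep}},
\qquad
d = 0.1\%,\ I_{\mathrm{active}} = 20\ \mathrm{mA}
\;\Rightarrow\;
I_{\mathrm{avg}} \approx 20\ \mu\mathrm{A} + 3.3\ \mu\mathrm{A} \approx 23\ \mu\mathrm{A}.
```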