915 results for conservative scenario
Abstract:
We introduce a new image-based visual navigation algorithm that allows the Cartesian velocity of a robot to be defined with respect to a set of visually observed features corresponding to previously unseen and unmapped world points. The technique is well suited to mobile robot tasks such as moving along a road or flying over the ground. We describe the algorithm in general form and present detailed simulation results for an aerial robot scenario using a spherical camera and a wide-angle perspective camera, as well as experimental results for a mobile ground robot.
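The abstract does not reproduce the control law, but the classical image-based visual servoing scheme this family of techniques builds on maps feature errors to a camera velocity through the interaction matrix. A minimal sketch in Python (the point depths `Z`, the gain, and the function names are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian of one normalised image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity (vx, vy, vz, wx, wy, wz) that drives the observed
    features towards their desired image positions."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```

With three or more well-spread points the stacked interaction matrix has full column rank, and the pseudo-inverse gives the least-squares velocity command.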
Abstract:
We study a political economy model that aims to explain the diversity of growth and technology-adoption experiences across economies. In this model the cost of technology adoption is endogenous and varies across heterogeneous agents, who vote on the proportion of government revenue allocated towards adoption-cost-reducing expenditures. In the early stages of development, the political-economy outcome of the model ensures that a sub-optimal proportion of government revenue is used to finance these expenditures. This sub-optimality is due to the presence of inequality: agents at the lower end of the distribution favor a larger amount of revenue allocated towards redistribution in the form of lump-sum transfers. Eventually all individuals make the switch to the better technology and their incomes converge. The outcomes of the model therefore explain why public choice is more likely to be conservative in nature: it represents the majority choice given conflicting preferences among agents. Consequently, the transition path towards growth and technology adoption varies across countries depending on initial levels of inequality.
Abstract:
Language has been of interest to numerous economists since the late 20th century, with the majority of studies focusing on its effects on immigrants’ labour market outcomes, earnings in particular. However, language is an endogenous variable, and this, along with its susceptibility to measurement error, biases ordinary-least-squares estimates. The instrumental variables method overcomes the shortcomings of ordinary least squares in modelling endogenous explanatory variables. In this dissertation, age at arrival combined with country of origin forms an instrument creating a difference-in-differences scenario, to address the issues of endogeneity and attenuation error in language proficiency. The first half of the study investigates the extent to which the English-speaking ability of immigrants improves their labour market outcomes and social assimilation in Australia, using the 2006 Census. The findings provide evidence that supports the earlier studies. As expected, immigrants in Australia with better language proficiency earn higher incomes, attain higher levels of education, have a higher probability of completing tertiary studies, and work more hours per week. Language proficiency also improves social integration, leading to a higher probability of marriage to a native and of obtaining citizenship. The second half of the study further investigates whether language proficiency has similar effects on a migrant’s physical and mental wellbeing, health care access and lifestyle choices, using three National Health Surveys. However, only limited evidence has been found for the hypothesised causal relationship between language and health for Australian immigrants.
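The instrumental-variables strategy described here is usually estimated by two-stage least squares. A minimal sketch with simulated data (the variable names and the data-generating process are illustrative assumptions, not the dissertation's Census variables):

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """2SLS: regress y on X, instrumenting X with Z.

    First stage:  fitted values X_hat = Z (Z'Z)^-1 Z'X.
    Second stage: beta = (X_hat'X)^-1 X_hat'y."""
    X_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
    return np.linalg.solve(X_hat.T @ X, X_hat.T @ y)

# Simulated example: x is endogenous (correlated with the error u),
# z is a valid instrument; the true causal effect of x on y is 2.
rng = np.random.default_rng(0)
n = 20000
z = rng.standard_normal(n)
u = rng.standard_normal(n)
x = z + 0.5 * u + 0.1 * rng.standard_normal(n)
y = 2.0 * x + u
X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
beta = two_stage_least_squares(y, X, Z)
```

OLS on the same data overestimates the slope because of the correlation between x and u; 2SLS recovers a slope close to the true value of 2.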
Abstract:
Controlled drug delivery is a key topic in modern pharmacotherapy, where controlled drug delivery devices are required to prolong the period of release, maintain a constant release rate, or release the drug with a predetermined release profile. In the pharmaceutical industry, the development process of a controlled drug delivery device may be facilitated enormously by the mathematical modelling of drug release mechanisms, directly decreasing the number of necessary experiments. Such mathematical modelling is difficult because several mechanisms are involved during the drug release process. The main drug release mechanisms of a controlled release device are based on the device’s physicochemical properties, and include diffusion, swelling and erosion. In this thesis, four controlled drug delivery models are investigated. These four models selectively involve the solvent penetration into the polymeric device, the swelling of the polymer, the polymer erosion and the drug diffusion out of the device, but all share two common key features. The first is that the solvent penetration into the polymer causes the transition of the polymer from a glassy state into a rubbery state. The interface between the two states of the polymer is modelled as a moving boundary and the speed of this interface is governed by a kinetic law. The second feature is that drug diffusion only happens in the rubbery region of the polymer, with a nonlinear diffusion coefficient which is dependent on the concentration of solvent. These models are analysed using both formal asymptotics and numerical computation, where front-fixing methods and the method of lines with finite difference approximations are used to solve the models numerically. This numerical scheme is conservative, accurate and easily applied to moving boundary problems, and is thoroughly explained in Section 3.2.
From the small time asymptotic analysis in Sections 5.3.1, 6.3.1 and 7.2.1, these models exhibit the non-Fickian behaviour referred to as Case II diffusion, and an initial constant rate of drug release which is appealing to the pharmaceutical industry because it indicates zero-order release. The numerical results of the models qualitatively confirm the experimental behaviour identified in the literature. The knowledge obtained from investigating these models can help to develop more complex multi-layered drug delivery devices in order to achieve sophisticated drug release profiles. A multi-layer matrix tablet, which consists of a number of polymer layers designed to provide sustained and constant drug release or bimodal drug release, is also discussed in this research. The moving boundary problem describing the solvent penetration into the polymer also arises in melting and freezing problems, which have been modelled as the classical one-phase Stefan problem. The classical one-phase Stefan problem exhibits unphysical singularities at the complete melting time. Hence we investigate the effect of including kinetic undercooling in the melting problem; the resulting problem is called the one-phase Stefan problem with kinetic undercooling. Interestingly, we discover that the unphysical singularities of the classical one-phase Stefan problem at the complete melting time are regularised, and the small time asymptotic analysis in Section 3.3 shows that the small time behaviour of the one-phase Stefan problem with kinetic undercooling differs from that of the classical problem. In the case of melting very small particles, it is known that surface tension effects are important. The effect of including surface tension in the melting problem for nanoparticles (without kinetic undercooling) has been investigated in the past; however, the one-phase Stefan problem with surface tension exhibits finite-time blow-up.
Therefore we investigate the effect of including both surface tension and kinetic undercooling in the melting problem for nanoparticles, and find that the solution continues to exist until complete melting. The investigation of including kinetic undercooling and surface tension in the melting problem reveals more insight into the regularisation of unphysical singularities in the classical one-phase Stefan problem. This investigation gives a better understanding of melting a particle, and contributes to the current body of knowledge related to melting and freezing due to heat conduction.
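The front-fixing and method-of-lines machinery used throughout the thesis can be illustrated on the classical one-phase Stefan problem. The sketch below is a deliberately simple explicit version (the grid size, time step, initial front position and linear initial profile are assumptions for illustration; the thesis's conservative scheme in Section 3.2 is more sophisticated):

```python
import numpy as np

def stefan_front_fixing(n=41, s0=0.5, t_end=0.05, dt=1e-5):
    """Classical one-phase Stefan problem via front-fixing + method of lines.

    u_t = u_xx on 0 < x < s(t), with u(0, t) = 1, u(s, t) = 0, and the
    Stefan condition ds/dt = -u_x(s, t). The substitution xi = x / s(t)
    fixes the moving boundary on [0, 1], at the cost of an extra
    advection-like term xi * (s'/s) * u_xi in the heat equation."""
    xi = np.linspace(0.0, 1.0, n)
    h = xi[1] - xi[0]
    u = 1.0 - xi                    # initial profile (assumed linear)
    s = s0
    t = 0.0
    while t < t_end:
        u_xi = np.gradient(u, h)
        u_xixi = np.zeros_like(u)
        u_xixi[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
        s_dot = -(u[-1] - u[-2]) / h / s   # one-sided u_x at the front
        u[1:-1] += dt * (u_xixi[1:-1] / s**2
                         + xi[1:-1] * (s_dot / s) * u_xi[1:-1])
        u[0], u[-1] = 1.0, 0.0             # re-impose boundary values
        s += dt * s_dot                    # advance the melting front
        t += dt
    return s, u
```

The front advances monotonically; with a roughly linear temperature profile, s(t)² grows approximately like s0² + 2t.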
Abstract:
This study seeks insights into the economic consequences of accounting conservatism by examining the relation between conservatism and cost of equity capital. Appealing to the analytical and empirical literatures, we posit an inverse relation. Importantly, we also posit that the strength of the relation is conditional on the firm’s information environment, being the strongest for firms with high information asymmetry and the weakest (potentially negligible) for firms with low information asymmetry. Based on a sample of US-listed entities, we find, as predicted, an inverse relation between conservatism and the cost of equity capital, but further, that this relation is diminished for firms with low information asymmetry environments. This evidence indicates that there are economic benefits associated with the adoption of conservative reporting practices and leads us to conclude that conservatism has a positive role in accounting principles and practices, despite its increasing rejection by accounting standard setters.
Abstract:
This paper presents the details of experimental studies on the shear behaviour and strength of lipped channel beams (LCBs). The LCB sections are commonly used as flexural members in residential, industrial and commercial buildings. To ensure safe and efficient designs of LCBs, many research studies have been undertaken on the flexural behaviour of LCBs. To date, however, limited research has been conducted into the strength of LCB sections subject to shear actions. Therefore a detailed experimental study involving 20 tests was undertaken to investigate the shear behaviour and strength of LCBs. This research has shown the presence of increased shear capacity of LCBs due to the additional fixity along the web to flange juncture, but the current design rules (AS/NZS 4600 and AISI) ignore this effect and were thus found to be conservative. Therefore they were modified by including a higher elastic shear buckling coefficient. Ultimate shear capacity results obtained from the shear tests were compared with the modified shear capacity design rules. It was found that they are still conservative as they ignore the presence of post-buckling strength. Hence the AS/NZS 4600 and AISI design rules were further modified to include the available post-buckling strength. Suitable design rules were also developed under the direct strength method (DSM) format. This paper presents the details of this study and the results including the modified shear design rules.
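The elastic shear buckling coefficient referred to above enters through the classical plate-buckling expression for the critical shear stress of the web. A sketch of that relationship (the coefficient values and section dimensions below are generic textbook illustrations, not the calibrated values proposed in the paper):

```python
import math

def elastic_shear_buckling_stress(k_v, E, nu, d, t):
    """Elastic critical shear buckling stress of a web plate:
    tau_cr = k_v * pi^2 * E / (12 * (1 - nu^2) * (d / t)^2),
    with E in MPa, web depth d and thickness t in mm."""
    return k_v * math.pi**2 * E / (12.0 * (1.0 - nu**2) * (d / t) ** 2)

# Simply supported web edges versus additional fixity at the
# web-to-flange juncture (a higher k_v gives a higher capacity).
tau_simply_supported = elastic_shear_buckling_stress(5.34, 200e3, 0.3, 200.0, 1.5)
tau_with_fixity = elastic_shear_buckling_stress(8.98, 200e3, 0.3, 200.0, 1.5)
```

Raising k_v from the simply supported value towards the clamped value is the kind of modification the paper makes to the AS/NZS 4600 and AISI rules; 5.34 and 8.98 are the long-plate limits for simply supported and clamped edges, used here only for illustration.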
Abstract:
Current design rules for determining the member strength of cold-formed steel columns are based on the effective length of the member and a single column capacity curve for both pin-ended and fixed-ended columns. This research has reviewed the use of AS/NZS 4600 design rules for their accuracy in determining the member compression capacities of slender cold-formed steel columns using detailed numerical studies. It has shown that AS/NZS 4600 design rules accurately predicted the capacities of pinned and fixed ended columns undergoing flexural buckling. However, for fixed ended columns undergoing flexural-torsional buckling, it was found that current AS/NZS 4600 design rules did not include the beneficial effect of warping fixity. Therefore AS/NZS 4600 design rules were found to be excessively conservative and hence uneconomical in predicting the failure loads obtained from tests and finite element analyses of fixed-ended lipped channel columns. Based on this finding, suitable recommendations have been made to modify the current AS/NZS 4600 design rules to more accurately reflect the results obtained from the numerical and experimental studies conducted in this research. This paper presents the details of this research on cold-formed steel columns and the results.
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which gives centimetre-precision positioning results if all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can result in large positioning offsets of up to several metres without notice. Hence, ambiguity validation is essential to control the quality of ambiguity resolution. Currently, the most popular ambiguity validation method is the ratio test. The ratio test criterion is often determined empirically, which can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and user requirements. Missed detection of incorrect integers leads to hazardous results and should be strictly controlled; in ambiguity resolution the missed-detection rate is often known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied in the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory, which is theoretically rigorous. A criteria table for the ratio test is computed based on extensive data simulations, and real-time users can determine the ratio test criterion by looking it up in this table. This method has been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not been discussed so far. In this paper, a general ambiguity validation model is derived based on hypothesis test theory, the fixed failure rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined.
Finally, the factors that influence the ratio test threshold under the fixed failure rate approach are discussed based on extensive data simulations. The results show that the fixed failure rate approach is a more reasonable ambiguity validation method, provided a proper stochastic model is used.
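The ratio test at the heart of this discussion compares the best and second-best integer candidates in the metric of the ambiguity covariance matrix. A minimal sketch (the fixed threshold of 3.0 is the conventional empirical value the paper argues against; the fixed failure rate method would instead look the threshold up in the computed criteria table):

```python
import numpy as np

def ratio_test(a_float, candidates, Q_inv, threshold=3.0):
    """Accept the best integer ambiguity candidate only if the runner-up's
    weighted squared distance is at least `threshold` times larger.

    a_float:    float ambiguity estimate
    candidates: integer candidate vectors
    Q_inv:      inverse of the ambiguity covariance matrix"""
    a = np.asarray(a_float, dtype=float)
    d = [float((a - np.asarray(c)) @ Q_inv @ (a - np.asarray(c)))
         for c in candidates]
    order = np.argsort(d)
    ratio = d[order[1]] / d[order[0]]
    return ratio >= threshold, np.asarray(candidates[order[0]])
```

When the test fails, a receiver would fall back to the float solution rather than risk a metre-level offset from a wrongly fixed integer.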
Abstract:
Background: Many people will consult a medical practitioner about lower bowel symptoms, and the demand for access to general practitioners (GPs) is growing. We do not know if people recognise the symptoms of lower bowel cancer when advising others about the need to consult a doctor. A structured vignette survey was conducted in Western Australia. Method: Participants were recruited from the waiting rooms at five general practices. Respondents were invited to complete self-administered questionnaires containing nine vignettes chosen at random from a pool of 64 based on six clinical variables. Twenty-seven vignettes described high-risk bowel cancer scenarios. Respondents were asked if they would recommend a medical consultation for the case described and whether they believed the scenario was a cancer presentation. Logistic regression was used to estimate the independent effect of each variable on the respondent's judgement. Two hundred and sixty-eight completed responses were collected over eight weeks. Results: The majority (61%) of respondents were female, aged 40 years and older. A history of rectal bleeding, six weeks of symptoms, and weight loss independently increased the odds of recommending a consultation with a medical practitioner by factors of 7.64, 4.11 and 1.86, respectively. Most cases identified as cancer (75.2%) would not be classified as such on current research evidence. Factors that predict recognition of cancer presentations include rectal bleeding, weight loss and diarrhoea.
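The reported effects come from exponentiating logistic-regression coefficients: an odds ratio of 7.64 corresponds to a coefficient of ln 7.64 ≈ 2.03. A minimal sketch of fitting such a model by Newton's method and recovering odds ratios (the simulated prevalence, sample size, and single binary predictor are assumptions, not the survey's data):

```python
import numpy as np

def logistic_odds_ratios(X, y, iters=25):
    """Fit logistic regression by Newton's method (IRLS) and return the
    odds ratio exp(beta_j) for each predictor column of X."""
    A = np.column_stack([np.ones(len(y)), X])   # prepend an intercept
    beta = np.zeros(A.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-A @ beta))     # fitted probabilities
        W = p * (1.0 - p)                       # IRLS weights
        grad = A.T @ (y - p)
        H = A.T @ (A * W[:, None])              # observed information
        beta += np.linalg.solve(H, grad)
    return np.exp(beta[1:])

# Simulated data with a true odds ratio of 7.64 for one binary exposure.
rng = np.random.default_rng(1)
n = 4000
exposure = rng.integers(0, 2, n).astype(float)
p_true = 1.0 / (1.0 + np.exp(-(-1.0 + np.log(7.64) * exposure)))
outcome = (rng.random(n) < p_true).astype(float)
odds_ratio = logistic_odds_ratios(exposure.reshape(-1, 1), outcome)
```

With a few thousand observations the recovered odds ratio lands close to the true value, up to sampling noise.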
Abstract:
Cold-formed steel members are increasingly used as primary structural elements in the building industries around the world due to the availability of thin and high strength steels and advanced cold-forming technologies. Cold-formed lipped channel beams (LCB) are commonly used as flexural members such as floor joists and bearers. However, their shear capacities are determined based on conservative design rules. Current practice in flooring systems is to include openings in the web element of floor joists or bearers so that building services can be located within them. Shear behaviour of LCBs with web openings is more complicated while their shear strengths are considerably reduced by the presence of web openings. However, limited research has been undertaken on the shear behaviour and strength of LCBs with web openings. Hence a detailed experimental study involving 40 shear tests was undertaken to investigate the shear behaviour and strength of LCBs with web openings. Simply supported test specimens of LCBs with aspect ratios of 1.0 and 1.5 were loaded at midspan until failure. This paper presents the details of this experimental study and the results of their shear capacities and behavioural characteristics. Experimental results showed that the current design rules in cold-formed steel structures design codes are very conservative for the shear design of LCBs with web openings. Improved design equations have been proposed for the shear strength of LCBs with web openings based on the experimental results from this study.
Abstract:
Cold-formed steel Lipped Channel Beams (LCB) with web openings are commonly used as floor joists and bearers in building structures. The shear behaviour of these beams is more complicated and their shear capacities are considerably reduced by the presence of web openings. However, limited research has been undertaken on the shear behaviour and strength of LCBs with web openings. Hence a detailed numerical study was undertaken to investigate the shear behaviour and strength of LCBs with web openings. Finite element models of simply supported LCBs under a mid-span load with aspect ratios of 1.0 and 1.5 were developed and validated by comparing their results with test results. They were then used in a detailed parametric study to investigate the effects of various influential parameters. Experimental and numerical results showed that the current design rules in cold-formed steel structures design codes are very conservative. Improved design equations were therefore proposed for the shear strength of LCBs with web openings based on both experimental and numerical results. This paper presents the details of finite element modelling of LCBs with web openings, validation of finite element models, and the development of improved shear design rules. The proposed shear design rules in this paper can be considered for inclusion in the future versions of cold-formed steel design codes.
Abstract:
Recent advances in the area of ‘Transformational Government’ position the citizen at the centre of focus. This paradigm shift from a department-centric to a citizen-centric focus requires governments to re-think their approach to service delivery, thereby decreasing costs and increasing citizen satisfaction. The introduction of franchises as a virtual business layer between the departments and their citizens is intended to provide a solution. Franchises are structured to address the needs of citizens independently of internal departmental structures. For delivering services online, governments pursue the development of a One-Stop Portal, which structures information and services through those franchises. Thus, each franchise can be mapped to a specific service bundle, which groups together services deemed to be of relevance to a specific citizen need. This study focuses on the development and evaluation of these service bundles. In particular, two research questions guide the investigation: Research Question 1): What methods can be used by governments to identify service bundles as part of governmental One-Stop Portals? Research Question 2): How can the quality of service bundles in governmental One-Stop Portals be evaluated? The first research question addresses the identification of suitable service bundling methods. A literature review was conducted to initially conceptualise the service bundling task in general. As a consequence, a 4-layer model of service bundling and a morphological box were created, detailing characteristics that are of relevance when identifying service bundles. Furthermore, a literature review of Decision-Support Systems was conducted to identify approaches of relevance in different bundling scenarios. These initial findings were complemented by targeted studies of multiple leading governments in the e-government domain, as well as consultation with a local expert in the field.
Here, the aim was to identify the current status of online service delivery and service bundling in practice. These findings led to the conceptualisation of two service bundle identification methods applicable in the context of Queensland Government: on the one hand, a provider-driven approach based on service description languages, attributes, and relationships between services; on the other, a citizen-driven approach based on analysing the outcomes of content identification and grouping workshops with citizens. Both methods were then applied and evaluated in practice. The conceptualisation of the provider-driven method required the initial specification of relevant attributes that could be used to identify similarities between services, called relationships; these relationships then formed the basis for the identification of service bundles. This study conceptualised and defined seven relationships, namely ‘Co-location’, ‘Resource’, ‘Co-occurrence’, ‘Event’, ‘Consumer’, ‘Provider’, and ‘Type’. The relationships, and the bundling method itself, were applied and refined as part of six Action Research cycles in collaboration with the Queensland Government. The findings show that attributes and relationships can be used effectively as a means of bundle identification if distinct decision rules are in place to prescribe how services are to be identified. For the conceptualisation of the citizen-driven method, insights from the case studies led to the decision to involve citizens through card sorting activities. Based on an initial list of services relevant to a certain franchise, participating citizens grouped services according to their liking. The card sorting activity, as well as the required analysis and aggregation of the individual card sorting results, was analysed in depth as part of this study.
A framework was developed that can be used as a decision-support tool to assist with the decision of which card sorting analysis method should be utilised in a given scenario. The characteristic features associated with card sorting in a government context led to the decision to utilise statistical analysis approaches, such as cluster analysis and factor analysis, to aggregate card sorting results. The second research question asks how the quality of service bundles can be assessed. An extensive literature review was conducted focussing on bundle, portal, and e-service quality. It was found that different studies use different constructs, terminology, and units of analysis, which makes comparing these models a difficult task. As a direct result, a framework was conceptualised that can be used to position past and future studies in this research domain. Complementing the literature review, interviews conducted as part of the case studies with leaders in e-government indicated that, typically, satisfaction is evaluated for the overall portal once the portal is online, but quality tests are not conducted during the development phase. Consequently, a research model which appropriately defines perceived service bundle quality needed to be developed from scratch. Based on existing theory, such as the Theory of Reasoned Action, Expectation Confirmation Theory, and the Theory of Affordances, perceived service bundle quality was defined as an inferential belief and positioned within the nomological net of services. Based on the literature analysis on quality, and on the subsequent work of a focus group, the hypothesised antecedents (descriptive beliefs) of the construct and the associated question items were defined and the research model conceptualised. The model was then tested, refined, and finally validated during six Action Research cycles.
Results show no significant difference in quality or satisfaction among users between the provider-driven and the citizen-driven methods. The decision on which method to choose, it was found, should be based on contextual factors, such as objectives, resources, and the need for visibility. The constructs of the bundle quality model were examined. While the quality of bundles identified through the citizen-centric approach could be explained through the constructs ‘Navigation’, ‘Ease of Understanding’, and ‘Organisation’, bundles identified through the provider-driven approach could be explained solely through the constructs ‘Navigation’ and ‘Ease of Understanding’. An active labelling style for bundles, as part of the provider-driven Information Architecture, had a larger impact on ‘Quality’ than the topical labelling style used in the citizen-centric Information Architecture. However, ‘Organisation’, reflecting the internal, logical structure of the Information Architecture, was a significant factor impacting on ‘Quality’ only in the citizen-driven Information Architecture. Hence, it was concluded that active labelling can compensate for a lack of logical structure. Additional studies are needed to test this conjecture; such studies may involve building alternative models and conducting further empirical research (e.g. use of an active labelling style for the citizen-driven Information Architecture). This thesis contributes to the body of knowledge in several ways. Firstly, it presents an empirically validated model of the factors explaining and predicting a citizen’s perception of service bundle quality. Secondly, it provides two alternative methods that can be used by governments to identify service bundles in structuring the content of a One-Stop Portal.
Thirdly, this thesis provides a detailed narrative to suggest how the recent paradigm shift in the public domain, towards a citizen-centric focus, can be pursued by governments; the research methodology followed by this study can serve as an exemplar for governments seeking to achieve a citizen-centric approach to service delivery.
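The aggregation of individual card sorting results described above can be sketched with a co-occurrence matrix: services that many participants place in the same group end up in the same bundle. The simple union-find grouping below is an illustrative stand-in for the cluster-analysis approaches used in the study (the threshold and the function name are assumptions):

```python
import numpy as np

def bundle_by_cooccurrence(sorts, n_services, threshold=0.5):
    """Group services that most participants sorted together.

    sorts: one list of group labels per participant (a label per service).
    Returns a bundle id for every service; two services share a bundle
    when their co-occurrence fraction reaches the threshold."""
    co = np.zeros((n_services, n_services))
    for labels in sorts:
        labels = np.asarray(labels)
        co += labels[:, None] == labels[None, :]
    co /= len(sorts)                              # co-occurrence fractions

    parent = list(range(n_services))              # union-find over services
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]         # path halving
            i = parent[i]
        return i
    for i in range(n_services):
        for j in range(i + 1, n_services):
            if co[i, j] >= threshold:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n_services)]
```

A hierarchical cluster analysis over the same co-occurrence-derived distance matrix would generalise this by letting the analyst choose the number of bundles rather than a threshold.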
Abstract:
The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties that are best described using unstructured meshes. The finite volume method is a popular method for the solution of the conservation laws that describe sea water intrusion, and is well-suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems, which use a combination of computational hardware such as CPUs and GPUs, are attractive for scientific computing due to the potential advantages offered by GPUs for accelerating data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function.
This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time stepping scheme can approximate the action of a matrix on a vector using residual evaluations. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally-intensive aspects of the implicit time stepping scheme are implemented on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the numerical efficiency and the accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, where much finer meshes are required to obtain solutions of equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
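The claim that the Newton-Krylov solver needs only residual evaluations rests on the matrix-free finite-difference approximation of the Jacobian-vector product. A minimal sketch (the toy residual in the example is an assumption; production codes also scale `eps` by the norms of `u` and `v`):

```python
import numpy as np

def jacobian_vector_product(residual, u, v, eps=1e-7):
    """Matrix-free approximation J(u) @ v ~= (F(u + eps*v) - F(u)) / eps.

    This identity is what lets an inexact Newton-Krylov time stepper run
    entirely from residual evaluations (e.g. on a GPU) without ever
    assembling the Jacobian matrix."""
    return (residual(u + eps * v) - residual(u)) / eps

# Toy residual F(u) = (u0^2, u0*u1), whose Jacobian is [[2u0, 0], [u1, u0]].
F = lambda u: np.array([u[0] ** 2, u[0] * u[1]])
jv = jacobian_vector_product(F, np.array([1.0, 2.0]), np.array([1.0, 1.0]))
```

At u = (1, 2) and v = (1, 1) the exact product is (2, 3); the finite-difference result matches to roughly the size of `eps`.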
Abstract:
Conventionally, design has played a compartmentalised role in the innovation process within most conservative companies around the world. Generally, companies have focused on product design execution or the manufacturing and production arenas, and in some instances design is seen as merely a stylistic afterthought. Gradually, design is being regarded as a dynamic and central tactical business resource, and consequently organisations globally look to design to help them innovate, differentiate and compete in a changing economic climate. Considering this, a question is raised: how can the specific knowledge and skills of designers be better articulated, understood, implemented and valued as a core component of strategic innovation in businesses? In seeking to answer this question, this paper proposes the new frontier of the design profession, coined the ‘Design Innovation Catalyst’. This paper outlines the role of this new design professional and discusses the subsequent implications for design education. Furthermore, questions surrounding how designers will develop these new capabilities, and how the design-led innovation framework in application can contribute to the future of design, are also presented. It is anticipated that the findings from this research will help to better equip designers to play a more central role in business and strategic innovation now and in the future.
Abstract:
Critical road infrastructure (such as tunnels and overpasses) is of major significance to society and constitutes major components of interdependent systems and networks. Failure in critical components of these wide-area infrastructure systems can often result in cascading disturbances with secondary and tertiary impacts, some of which may become initiating sources of failure in their own right, triggering further systems failures across wider networks. Perrow [1] considered the impact of our increasing use of technology in high-risk fields, analysed the implications for everyday life, and argued that designers of these types of infrastructure systems cannot predict every possible failure scenario nor create perfect contingency plans for operators. Challenges exist for transport system operators in the conceptualisation and implementation of response and subsequent recovery planning for significant events. Disturbances can vary from reduced traffic flow, causing congestion throughout local road networks and possible loss of income to businesses and industry, to a major incident causing loss of life or complete loss of an asset. Many organisations and institutions, despite increasing recognition of the effects of crisis events, are not adequately prepared to manage crises [2]. It is argued that operators of land transport infrastructure are in a similar category of readiness, given the recent instances of failures in road tunnels. These unexpected infrastructure failures, and their ultimately identified causes, suggest there is significant room for improvement. Risk profiles for road transport systems are therefore often complex, due to human behaviours, the inter-mix of technical and organisational components, and the managerial coverage needed for both the socio-technical components and the physical infrastructure.
In this sense, the span of managerial oversight may require new approaches to asset management that combine the notions of risk and continuity management. This paper examines challenges in the planning of response and recovery practices of owner/operators of transport systems (above and below ground) in Australia covering:
• Ageing or established infrastructure; and
• New-build infrastructure.
With reference to relevant international contexts, this paper suggests options for enhancing the planning and practice of crisis response in these transport networks and, as a result, supporting the resilience of critical infrastructure.