292 results for Generalised Inverse
Abstract:
A generalised bidding model is developed to calculate a bidder's expected profit and the auctioneer's expected revenue/payment for both a General Independent Value and an Independent Private Value (IPV) kmth price sealed-bid auction (where the mth bidder wins at the kth bid payment) using a linear (affine) mark-up function. The Common Value (CV) assumption, and high-bid and low-bid symmetric and asymmetric First Price Auctions and Second Price Auctions, are included as special cases. The optimal n-bidder symmetric analytical results are then provided for the uniform IPV and CV models in equilibrium. Final comments concern the implications, the assumptions involved and prospects for further research.
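As a concrete illustration of the symmetric uniform IPV results, the classical textbook first-price equilibrium with n bidders and valuations drawn independently from U[0,1] is:

```latex
b(v) = \frac{n-1}{n}\, v, \qquad v \sim U[0,1],
\qquad
\mathbb{E}[R] = \frac{n-1}{n+1} .
```

Here the expected revenue coincides with that of the second-price auction, a standard instance of revenue equivalence; the paper's kmth-price results generalise this special case.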
Abstract:
Office building retrofit is a sector in the spotlight in Australia because the mature office building market is characterised by a large proportion of ageing properties. The increasing number of office building retrofit projects strengthens the need for waste management. Retrofit projects possess unique characteristics in comparison to traditional demolition and new builds, such as partial operation of buildings, constrained site spaces and limited access to as-built information. Waste management activities in retrofit projects can therefore be influenced by issues that differ from those in traditional construction and demolition projects. However, previous research on building retrofit projects has not provided an understanding of the critical issues affecting waste management. This research identifies the critical factors influencing the management of waste in office building retrofit projects through a literature study and a questionnaire survey of industry practitioners. Statistical analysis of a range of potential waste issues reveals the critical factors, as agreed upon by survey respondents across their different professional responsibilities and types of work. The factors are grouped into five dimensions: industry culture; organisational support and incentives; existing building information; design; and project delivery process. The discussion of these dimensions indicates that the waste management factors in office building retrofit projects are intensified compared to those in general demolition and construction, because retrofit projects involve existing buildings that are partially operating, with constrained work space and limited building information. Recommendations for improving waste management in office building retrofit projects are generalised, including waste planning, auditing and assessment in the planning and design stage; collaboration and coordination of the various stakeholders and specialists; optimised building surveying and BIM technologies for waste analysis; and new design strategies for waste prevention.
Abstract:
This paper offers an uncertainty quantification (UQ) study applied to the performance analysis of the ERCOFTAC conical diffuser. A deterministic CFD solver is coupled with a non-statistical generalised Polynomial Chaos (gPC) representation based on a pseudo-spectral projection method. Such an approach has the advantage of not requiring any modification of the CFD code for the propagation of random disturbances in the aerodynamic field. The stochastic results highlight the importance of the inlet velocity uncertainties for the pressure recovery, both alone and when coupled with a second uncertain variable. From a theoretical point of view, we investigate the possibility of building our gPC representation on arbitrary grids, thus increasing the flexibility of the stochastic framework.
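To make the pseudo-spectral projection concrete, here is a minimal 1D sketch assuming a single uniform input and a Legendre basis; `model` stands in for one run of the unmodified deterministic solver.

```python
# Minimal 1D sketch of non-intrusive pseudo-spectral gPC projection,
# assuming a single uniform random input on [-1, 1] and a Legendre basis.
# `model` is a placeholder for one run of the (unmodified) deterministic
# CFD solver, treated as a black box.
import numpy as np
from numpy.polynomial import legendre

def model(xi):
    # Stand-in for the deterministic solver evaluated at input value xi.
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

P = 4                                        # gPC expansion order
nodes, weights = legendre.leggauss(P + 1)    # Gauss-Legendre quadrature rule

coeffs = np.zeros(P + 1)
for k in range(P + 1):
    phi_k = legendre.Legendre.basis(k)(nodes)
    norm_k = 2.0 / (2 * k + 1)               # integral of P_k^2 over [-1, 1]
    # Projection <model, P_k> / <P_k, P_k>; the uniform density 1/2
    # cancels between numerator and denominator.
    coeffs[k] = np.sum(weights * model(nodes) * phi_k) / norm_k

mean = coeffs[0]
variance = np.sum(coeffs[1:] ** 2 / (2 * np.arange(1, P + 1) + 1))
print(mean, variance)
```

The mean and variance of the output then follow directly from the coefficients, with no intrusion into the solver itself.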
Abstract:
The characterisation of facial expression through landmark-based analysis methods such as FACEM (Pilowsky & Katsikitis, 1994) has a variety of uses in psychiatric and psychological research. In these systems, important structural relationships are extracted from images of facial expressions by the analysis of a pre-defined set of feature points. These relationship measures may then be used, for instance, to assess the degree of variability and similarity between different facial expressions of emotion. FaceXpress is a multimedia software suite that provides a generalised workbench for landmark-based facial emotion analysis and stimulus manipulation. It is a flexible tool that is designed to be specialised at runtime by the user. While FaceXpress has been used to implement the FACEM process, it can also be configured to support any other similar, arbitrary system for quantifying human facial emotion. FaceXpress also implements an integrated set of image processing tools and specialised tools for facial expression stimulus production, including facial morphing routines and the generation of expression-representative line drawings from photographs.
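To illustrate the kind of landmark-derived measure such systems compute, here is a minimal sketch; the landmark names and the inter-ocular normalisation are illustrative assumptions, not FACEM's actual feature definitions.

```python
# Minimal sketch of a landmark-based expression measure of the kind a
# FaceXpress-style workbench computes. Landmark names and the normalising
# distance are illustrative assumptions, not FACEM's actual definitions.
import numpy as np

landmarks = {                      # (x, y) pixel coordinates
    "mouth_left":  np.array([120.0, 210.0]),
    "mouth_right": np.array([180.0, 212.0]),
    "eye_left":    np.array([115.0, 120.0]),
    "eye_right":   np.array([185.0, 118.0]),
}

def dist(a, b):
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

# Normalise mouth width by inter-ocular distance so the measure is
# comparable across images of different scale.
mouth_width = dist("mouth_left", "mouth_right") / dist("eye_left", "eye_right")
print(mouth_width)
```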
Abstract:
Background: This study attempted to develop health risk-based metrics for defining a heatwave in Brisbane, Australia. Methods: A Poisson generalised additive model was used to assess the impact of heatwaves on mortality and emergency hospital admissions (EHAs) in Brisbane. Results: In general, the higher the intensity and the longer the duration of a heatwave, the greater the health impacts. There was no apparent difference in EHA risk during different periods of a warm season. However, there was a greater risk of mortality in the second half of a warm season than in the first half. While the elderly (>75 years) were particularly vulnerable to both the EHA and mortality effects of a heatwave, the risk of EHAs also significantly increased for two other age groups (0-64 years and 65-74 years) during severe heatwaves. Different patterns between cardiorespiratory mortality and EHAs were observed. Based on these findings, we propose the use of a tiered heat warning system based on the health risks of heatwaves. Conclusions: Health risk-based metrics are a useful tool for the development of local heatwave definitions. This tool may have significant implications for the assessment of heatwave-related health consequences and for the development of heatwave response plans and implementation strategies.
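A minimal sketch of the modelling step, fitting a Poisson GAM with the pygam package to synthetic data; the variable names, heatwave definition and data are illustrative assumptions, not the study's actual specification.

```python
# Minimal sketch of a Poisson GAM relating daily mortality counts to a
# heatwave indicator while adjusting smoothly for temperature and season,
# using the pygam package. Variable names, the heatwave definition and
# the simulated data are illustrative assumptions only.
import numpy as np
from pygam import PoissonGAM, s, f

rng = np.random.default_rng(0)
n = 730                                     # two warm seasons of daily data
tmax = rng.normal(30, 4, n)                 # daily maximum temperature (deg C)
doy = np.tile(np.arange(365), 2)[:n]        # day of year (seasonality)
heatwave = (tmax > 35).astype(int)          # crude heatwave indicator
deaths = rng.poisson(np.exp(1.5 + 0.02 * (tmax - 30) + 0.3 * heatwave))

X = np.column_stack([tmax, doy, heatwave])
gam = PoissonGAM(s(0) + s(1) + f(2)).fit(X, deaths)
gam.summary()   # the factor term for `heatwave` carries the effect estimate
```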
Abstract:
Long-term measurements of particle number size distribution (PNSD) produce a very large number of observations, and their analysis requires an efficient approach in order to produce results in the least possible time and with maximum accuracy. Clustering techniques are a family of sophisticated methods which have recently been employed to analyse PNSD data; however, very little information is available comparing the performance of different clustering techniques on PNSD data. This study aims to apply several clustering techniques (K-means, PAM, CLARA and SOM) to PNSD data measured at 25 sites across Brisbane, Australia, in order to identify and apply the optimum technique. A new method, based on the Generalised Additive Model (GAM) with a basis of penalised B-splines, was proposed to parameterise the PNSD data, and the temporal weight of each cluster was also estimated using the GAM. In addition, each cluster was associated with its likely source based on the results of this parameterisation, together with the characteristics of each cluster. The performance of the four clustering techniques was compared using the Dunn index and Silhouette width validation values, and the K-means technique was found to perform best, with five clusters being the optimum. The diurnal occurrence of each cluster was used together with other air quality parameters, temporal trends and the physical properties of each cluster in order to attribute each cluster to its source and origin. The five clusters were attributed to three major sources and origins: regional background particles, photochemically induced nucleated particles and vehicle-generated particles. Overall, clustering was found to be an effective technique for attributing each particle size spectrum to its source, and the GAM was suitable for parameterising the PNSD data. These two techniques can help researchers immensely in analysing PNSD data for characterisation and source apportionment purposes.
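A minimal sketch of the comparison step in Python: scikit-learn provides K-means and the Silhouette width, while the Dunn index and the PAM, CLARA and SOM techniques would require additional libraries, so only K-means is shown. The data here are random placeholders.

```python
# Minimal sketch of the clustering-comparison step, assuming each row of
# `pnsd` holds one normalised particle number size distribution.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
pnsd = rng.random((500, 64))     # placeholder: 500 spectra x 64 size bins

for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pnsd)
    print(k, round(silhouette_score(pnsd, labels), 3))
# The k with the highest silhouette width is retained; for the Brisbane
# data the analysis above found k = 5 with K-means to be optimal.
```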
Abstract:
This paper presents a trajectory-tracking control strategy for a class of mechanical systems in Hamiltonian form. The class is characterised by a symplectic interconnection arising from the use of generalised coordinates and full actuation. The tracking error dynamics are modelled as a port-Hamiltonian system (PHS). The control action is designed to take the error dynamics into a desired closed-loop PHS characterised by a constant mass matrix and a potential energy with a minimum at the origin. A transformation of the momentum and a feedback control are exploited to obtain a constant generalised mass matrix in closed loop. The stability of the closed-loop system is shown using the closed-loop Hamiltonian as a Lyapunov function. The paper also considers the addition of integral action to design a robust controller that ensures tracking in spite of disturbances. As a case study, the proposed control design methodology is applied to a fully actuated robotic manipulator.
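For concreteness, a generic target closed-loop PHS of the type described is sketched below, with illustrative notation (error state e = (e_q, e_p), constant desired mass matrix M_d, injected damping D); the paper's exact construction may differ.

```latex
H_d(e_q, e_p) = \tfrac{1}{2}\, e_p^\top M_d^{-1} e_p + V_d(e_q),
\qquad V_d \ \text{with a strict minimum at}\ e_q = 0,
\\[4pt]
\frac{d}{dt}\begin{pmatrix} e_q \\ e_p \end{pmatrix}
= \begin{pmatrix} 0 & I \\ -I & -D \end{pmatrix}
\nabla H_d(e_q, e_p),
\qquad
\dot{H}_d = -\,(\nabla_{e_p} H_d)^\top D\, (\nabla_{e_p} H_d) \le 0
\ \ \text{for}\ D = D^\top \succ 0 .
```

The non-positivity of the Hamiltonian's time derivative is what allows H_d to serve directly as the Lyapunov function mentioned above.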
Abstract:
Most standard algorithms for prediction with expert advice depend on a parameter called the learning rate. This learning rate needs to be large enough to fit the data well, but small enough to prevent overfitting. For the exponential weights algorithm, a sequence of prior work has established theoretical guarantees for higher and higher data-dependent tunings of the learning rate, which allow for increasingly aggressive learning. But in practice such theoretical tunings often still perform worse (as measured by their regret) than ad hoc tuning with an even higher learning rate. To close the gap between theory and practice, we introduce an approach to learn the learning rate. Up to a factor that is at most (poly)logarithmic in the number of experts and the inverse of the learning rate, our method performs as well as if we knew the empirically best learning rate from a large range that includes both conservative small values and values that are much higher than those for which formal guarantees were previously available. Our method employs a grid of learning rates, yet runs in linear time regardless of the size of the grid.
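A minimal sketch of the grid idea, assuming losses in [0, 1]: exponential weights is run once per learning rate on a geometric grid, and a meta exponential-weights layer mixes the grid points. The paper's actual algorithm is more refined (it keeps the running time linear and carries the stated guarantees); this sketch only conveys the flavour.

```python
# Exponential weights over a grid of learning rates, mixed by a meta
# exponential-weights layer. Illustrative only; not the paper's algorithm.
import numpy as np

def exp_weights_grid(losses, etas, meta_eta=1.0):
    """losses: (T, K) array of expert losses; etas: grid of learning rates."""
    T, K = losses.shape
    w = np.ones((len(etas), K)) / K          # expert weights, one row per eta
    meta_w = np.ones(len(etas)) / len(etas)  # weights over the grid
    total_loss = 0.0
    for t in range(T):
        preds = w @ losses[t]                # mixture loss of each grid point
        total_loss += meta_w @ preds         # meta mixture over grid points
        w *= np.exp(-etas[:, None] * losses[t])  # per-eta expert update
        w /= w.sum(axis=1, keepdims=True)
        meta_w *= np.exp(-meta_eta * preds)      # meta update
        meta_w /= meta_w.sum()
    return total_loss

losses = np.random.default_rng(2).random((1000, 10))
etas = np.array([2.0 ** i for i in range(-5, 6)])  # geometric grid of rates
print(exp_weights_grid(losses, etas))
```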
Abstract:
The role of different chemical compounds, particularly organics, in new particle formation (NPF) and the subsequent growth of the particles is not fully understood. Therefore, this study was conducted to investigate the chemistry of aerosol particles during NPF events in an urban subtropical environment. Aerosol chemical composition was measured, along with particle number size distribution (PNSD) and several other air quality parameters, at five sites across an urban subtropical environment. An Aerodyne compact Time-of-Flight Aerosol Mass Spectrometer (c-TOF-AMS) and a TSI Scanning Mobility Particle Sizer (SMPS) measured aerosol chemical composition and PNSD, respectively. Five NPF events, with growth rates in the range 3.3-4.6 nm, were detected at two sites. The NPF events occurred on relatively warm days with lower humidity and higher solar radiation. Temporal percent fractions of nitrate, sulphate, ammonium and organics were modelled using the Generalised Additive Model (GAM), with a basis of penalised splines. Percent fractions of organics increased after the NPF events, while the mass fractions of ammonium and sulphate decreased. This points to the important role of organics in the growth of newly formed particles. Three organic markers, the factors f43, f44 and f57, were calculated, and the f44 vs f43 trends were compared between nucleation and non-nucleation days. f44 vs f43 followed a different pattern on nucleation days compared to non-nucleation days, whereby f43 decreased for vehicle-emission-generated particles, while both f44 and f43 decreased for NPF-generated particles. It was found for the first time that vehicle-generated and newly formed particles cluster in different locations on the f44 vs f43 plot, and this finding can be used as a tool for source apportionment of measured particles.
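For concreteness, the marker fractions are simple ratios of organic signal at specific m/z values to the total organic signal, as in the sketch below; the spectrum values are placeholders.

```python
# Minimal sketch of the organic marker fractions discussed above, assuming
# `org` maps m/z to the organic signal from the c-TOF-AMS (placeholder
# values). f44 (m/z 44, largely CO2+) tracks aged, oxygenated organics;
# f43 (m/z 43) tracks fresher organics; f57 (m/z 57) is a hydrocarbon
# fragment commonly used as a vehicle-exhaust marker.
org = {41: 0.8, 43: 1.6, 44: 2.4, 55: 0.7, 57: 0.5}   # placeholder spectrum

total = sum(org.values())
f43, f44, f57 = (org.get(mz, 0.0) / total for mz in (43, 44, 57))
print(f43, f44, f57)
```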
Abstract:
The finite element method adaptively divides a continuous domain with complex geometry into discrete, simple subdomains using approximate element functions, and converts continuous element loads into nodal loads by means of the traditional lumping and consistent load methods. This standardises a plethora of element loads into a typical numerical procedure, but the element load effect is restricted to the nodal solution. In turn, accurate continuous element solutions including element load effects are available only at the element nodes, and are further limited to either the displacement or the force field, depending on which type of approximate function is used. On the other hand, analytical stability functions can give accurate continuous element solutions under element loads; unfortunately, their expressions are so diverse and distinct for different element loads that they deter a standard numerical routine for practical applications. To this end, this paper presents a displacement-based finite element formulation (the generalised element load method) that accommodates a plethora of element load effects in a uniform fashion, which cannot be achieved by the stability functions. It can also generate continuous first- and second-order elastic displacement and force solutions along an element without considerable loss of accuracy relative to the analytical approach, which can be achieved by neither the lumping nor the consistent load method. Hence, the salient and unique features of the generalised element load method are its robustness, versatility and accuracy in producing continuous element solutions under a great diversity of transverse element loads.
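For reference, the two conventional load-conversion methods the abstract contrasts can be sketched as follows for an Euler-Bernoulli beam element under a uniformly distributed load; the generalised element load method itself goes further, recovering continuous solutions along the element, and is not reproduced here.

```python
# Classical load conversions for an Euler-Bernoulli beam element of
# length L under a uniformly distributed load w
# (DOFs ordered as: v1, theta1, v2, theta2).
import numpy as np

def consistent_load(w, L):
    # Work-equivalent nodal loads from the cubic Hermite shape functions.
    return np.array([w * L / 2,  w * L**2 / 12,
                     w * L / 2, -w * L**2 / 12])

def lumped_load(w, L):
    # Tributary lumping: half the load to each node, no fixed-end moments.
    return np.array([w * L / 2, 0.0, w * L / 2, 0.0])

print(consistent_load(5.0, 2.0))   # [5., 1.6667, 5., -1.6667]
print(lumped_load(5.0, 2.0))       # [5., 0., 5., 0.]
```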
Abstract:
In transport networks, Origin-Destination matrices (ODM) are classically estimated from road traffic counts, whereas recent technologies also grant access to sample car trajectories. One example is the deployment in cities of Bluetooth scanners that measure the trajectories of Bluetooth-equipped cars. Exploiting such sample trajectory information, the classical ODM estimation problem is here extended into a link-dependent ODM (LODM) one. This much larger estimation problem is formulated in variational form as an inverse problem. We develop a convex optimization resolution algorithm that incorporates network constraints. We study the results of the proposed algorithm on simulated network traffic.
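The following is a much-simplified sketch of this variational formulation: OD flows x ≥ 0 are estimated from link counts c given an assignment matrix A, with a prior acting as regularisation. The paper's LODM problem adds the trajectory-sample terms and network constraints; A, c and the prior here are toy placeholders.

```python
# Regularised non-negative least squares as a toy ODM inverse problem:
# A[i, j] = fraction of OD flow j that uses link i.
import numpy as np
from scipy.optimize import nnls

A = np.array([[1.0, 0.0, 0.5],     # 4 links x 3 OD pairs
              [0.0, 1.0, 0.5],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
c = np.array([30.0, 45.0, 60.0, 25.0])    # observed link counts
x_prior = np.array([20.0, 40.0, 20.0])    # prior OD estimate
lam = 0.1                                  # regularisation weight

# Stack the data-fit and prior terms into one non-negative least squares.
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(3)])
c_aug = np.concatenate([c, np.sqrt(lam) * x_prior])
x, residual = nnls(A_aug, c_aug)
print(x)
```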
Abstract:
The desire to solve problems caused by socket prostheses in transfemoral amputees, and the success of osseointegration in dental applications, led to the introduction of osseointegration in orthopedic surgery. Since its first introduction in Gothenburg, Sweden in 1990, the osseointegrated (OI) orthopedic fixation has demonstrated several benefits [1]. The surgery consists of two surgical procedures followed by a lengthy rehabilitation program. The rehabilitation program after an OI implant includes a specific training period with a short training prosthesis. Since mechanical loading is considered to be one of the key factors influencing bone mass and the osseointegration of bone-anchored implants, the rehabilitation program also needs to include some form of load-bearing exercises (LBE). To date, there are two frequently used, commercially available human implants, and there is evidence in the literature that load-bearing exercises are performed by patients with both types of OI implant. We refer to two articles: the first, written by Aschoff et al., was published in 2010 in the Journal of Bone and Joint Surgery [2]; the second, presented by Hagberg et al. in 2009, gives a very thorough description of the rehabilitation program of TFA fitted with an OPRA implant. The progression of the load, however, is determined individually according to the quality of the residual skeleton and the pain level and body weight of the participant [1]. Patients use a classical bathroom weighing scale to control the load on the implant during the course of their rehabilitation. The bathroom scale is an affordable and easy-to-use device, but it has some important shortcomings: it provides instantaneous feedback to the patient only on the magnitude of the vertical component of the applied force, while the forces and moments applied along and around the three axes of the implant remain unknown. Although there are different ways to assess the load on the implant, for instance through inverse dynamics in a motion analysis laboratory [3-6], this assessment is challenging. A recent proof-of-concept study by Frossard et al. (2009) showed that the shortcomings of the weighing scale can be overcome by a portable kinetic system based on a commercial transducer [7].
Abstract:
Study design: Comparative analysis. Background: Calculations of lower limb kinetics are limited by floor-mounted force-plates. Objectives: To compare hip joint moments, power and mechanical work on the prosthetic limb of a transfemoral amputee, calculated by inverse dynamics using either the ground reactions (force-plates) or the knee reactions (transducer). Methods: Kinematics, ground reactions and knee reactions were collected using a motion analysis system, two force-plates and a multi-axial transducer mounted below the socket, respectively. Results: The inverse dynamics using ground reactions under-estimated the peaks of hip energy generation and absorption occurring at 63% and 76% of the gait cycle (GC) by 28% and 54%, respectively. This method over-estimated a phase of negative work at the hip (from 37%GC to 56%GC) by 24%. It under-estimated the phases of positive (from 57%GC to 72%GC) and negative (from 73%GC to 98%GC) work at the hip by 11% and 58%, respectively. Conclusions: A transducer mounted within the prosthesis has the capacity to provide more realistic kinetics of the prosthetic limb, because it enables assessment of multiple consecutive steps and a wide range of activities without the issues of foot placement on force-plates. Clinical relevance: The hip is the only joint that an amputee controls directly to set the prosthesis in motion. Hip joint kinetics are associated with joint degeneration, low back pain, risk of falls, etc. Therefore, realistic assessment of hip kinetics over multiple gait cycles and a wide range of activities is essential.
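To make the two estimation routes concrete, here is a minimal sketch of one 2D Newton-Euler inverse-dynamics step, propagating a reaction measured at the distal end of a segment (e.g. the knee-level transducer) up to the proximal (hip) joint; all segment properties and kinematics are placeholder values.

```python
# One 2D Newton-Euler inverse-dynamics step for a single segment.
import numpy as np

m, I = 2.5, 0.05                  # segment mass (kg), moment of inertia (kg m^2)
g = np.array([0.0, -9.81])
a_com = np.array([0.3, -0.1])     # COM linear acceleration (m/s^2)
alpha = 1.2                       # angular acceleration (rad/s^2)
r_prox = np.array([0.02, 0.18])   # COM -> proximal (hip) joint position (m)
r_dist = np.array([-0.02, -0.22]) # COM -> distal (transducer) position (m)

F_dist = np.array([15.0, -350.0]) # measured distal reaction force (N)
M_dist = -12.0                    # measured distal reaction moment (N m)

def cross2(r, f):                 # z-component of a 2D cross product
    return r[0] * f[1] - r[1] * f[0]

# Newton: sum of forces = m a  ->  F_prox = m a - m g - F_dist
F_prox = m * a_com - m * g - F_dist
# Euler about the COM: sum of moments = I alpha
M_prox = I * alpha - M_dist - cross2(r_dist, F_dist) - cross2(r_prox, F_prox)
print(F_prox, M_prox)
```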
Abstract:
Understanding the load applied on the residuum through the prosthesis of individuals with transfemoral amputation (TFA) is essential to address a number of concerns that can strongly reduce their quality of life (e.g., residuum skin lesions, prosthesis fitting, alignment). This inner prosthesis loading can be estimated in a typical gait laboratory using inverse dynamics equations. Alternatively, technological advances proposed over the last decade have enabled direct measurement of this kinetic information in a broad variety of situations that are potentially more relevant to clinical settings. The purposes of this presentation are (A) to review the literature on recent developments in the measurement and analysis of the inner prosthesis loading of TFA, and (B) to extract information that could contribute to better evidence-based practice.
Abstract:
In this work we present an autonomous mobile manipulator that is used to collect sample containers in an unknown environment. The manipulator is part of a team of heterogeneous mobile robots tasked with searching for and identifying sample containers in an unknown environment. A map of the environment, along with possible positions of sample containers, is shared between the robots in the team using a cloud-based communication interface. To grasp a container with its manipulator arm, the robot has to place itself in a position suitable for the manipulation task. This optimal base placement pose is selected by querying a precomputed inverse reachability database.
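A minimal sketch of that query, under illustrative assumptions about the database layout (the robot's actual implementation is not described here):

```python
# An inverse reachability database, computed offline, maps a discretised
# grasp pose (here position only, orientation omitted) to candidate base
# poses scored by reachability quality. Data layout, grid resolution and
# scores are illustrative assumptions.

# grid index of grasp position (0.1 m resolution) -> [(base pose, score)]
inv_reach_db = {
    (6, 0, 4): [((-0.2, 0.3, 0.0), 0.81), ((-0.4, 0.0, 0.5), 0.77)],
    (5, 2, 3): [((-0.3, 0.1, 0.2), 0.68)],
}

def discretise(position, step=0.1):
    return tuple(int(round(c / step)) for c in position)

def best_base_pose(grasp_position, collision_free):
    """Return the highest-scoring collision-free base pose, or None."""
    candidates = inv_reach_db.get(discretise(grasp_position), [])
    valid = [(pose, score) for pose, score in candidates if collision_free(pose)]
    return max(valid, key=lambda ps: ps[1])[0] if valid else None

print(best_base_pose((0.6, 0.0, 0.4), collision_free=lambda pose: True))
```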