934 results for relaxation to fixed points


Relevance: 30.00%

Abstract:

This paper presents a method for calculating the in-bucket payload volume on a dragline, for the purpose of estimating the material's bulk density in real time. Knowledge of the bulk density can provide instant feedback to mine planning and scheduling to improve blasting, and in turn provide a more uniform bulk density across the excavation site. Furthermore, costs and emissions in dragline operation, maintenance and downstream material processing can be reduced. The main challenge is to determine an accurate position and orientation of the bucket under the constraint of real-time performance. The proposed solution uses a range, bearing and tilt sensor to locate and scan the bucket between the lift and dump stages of the dragline cycle. Various scanning strategies are investigated for their benefits in this real-time application. The bucket is segmented from the scene using cluster analysis, while the pose of the bucket is calculated using the iterative closest point (ICP) algorithm. Payload points are segmented from the bucket by a fixed-distance neighbour clustering method to preserve boundary points and exclude low-density clusters introduced by overhead chains and the spreader bar. A height grid is then used to represent the payload, from which the volume can be calculated by summing over the grid cells. We show volume calculated on a scaled system with an accuracy greater than 95 per cent.
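The final volume step reduces to a sum over height-grid cells; a minimal sketch, with illustrative heights and cell size rather than the paper's data, is:

```python
def grid_volume(heights, cell_area):
    """Payload volume: sum each grid cell's height times its footprint area."""
    return sum(h * cell_area for row in heights for h in row)

# Illustrative 2 x 2 height grid (metres) on 0.5 m x 0.5 m cells
heights = [[0.4, 0.6],
           [0.8, 0.2]]
print(grid_volume(heights, cell_area=0.25))
```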

Relevance: 30.00%

Abstract:

The success rate of carrier phase ambiguity resolution (AR) is the probability that the ambiguities are successfully fixed to their correct integer values. In existing works, an exact success rate formula for the integer bootstrapping estimator has been used as a sharp lower bound for the integer least squares (ILS) success rate. Rigorous computation of the success rate for the more general ILS solutions has been considered difficult, because of the complexity of the ILS ambiguity pull-in region and the computational load of integrating the multivariate probability density function. The contributions of this work are twofold. First, the pull-in region, mathematically expressed as the vertices of a polyhedron, is represented by a multi-dimensional grid, at which the cumulative probability can be integrated with the multivariate normal cumulative distribution function (mvncdf) available in Matlab. The bivariate case is studied, where the pull-in region is usually defined as a hexagon and the probability is easily obtained using mvncdf at all the grid points within the convex polygon. Second, the paper compares the computed integer rounding and integer bootstrapping success rates, and the lower and upper bounds of the ILS success rates, to the actual ILS AR success rates obtained from a 24 h GPS data set for a 21 km baseline. The results demonstrate that the upper bound of the ILS AR probability given in the existing literature agrees well with the actual ILS success rate, while the success rate computed with the integer bootstrapping method is also a quite sharp approximation to the actual ILS success rate. The results also show that variations or uncertainty in the unit-weight variance estimates from epoch to epoch significantly affect the success rates computed by the different methods, and thus deserve more attention in order to obtain useful success probability predictions.
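For context, the exact bootstrapping success rate used as the sharp lower bound has the well-known product form P = Π(2Φ(1/(2σᵢ)) − 1) over the conditional standard deviations σᵢ of the (decorrelated) ambiguities. A minimal Python sketch, with illustrative σ values rather than the paper's GPS data, is:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bootstrap_success_rate(cond_stds):
    """Exact integer-bootstrapping success rate: the product of
    2*Phi(1/(2*sigma_i)) - 1 over the conditional standard deviations."""
    p = 1.0
    for s in cond_stds:
        p *= 2.0 * norm_cdf(1.0 / (2.0 * s)) - 1.0
    return p

print(bootstrap_success_rate([0.10, 0.15]))
```

Smaller conditional standard deviations drive the product toward 1, matching the intuition that more precise float ambiguities are fixed more reliably.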

Relevance: 30.00%

Abstract:

Heart damage caused by acute myocardial infarction (AMI) is a leading cause of death and disability in Australia. Novel therapies are still required for the treatment of this condition due to the poor reparative ability of the heart. As such, cellular therapies that assist in the recovery of heart muscle are of great current interest. Culture-expanded mesenchymal stem cells (MSC) represent a stem and progenitor cell population that has been shown to promote tissue recovery in pre-clinical studies of AMI. For MSC-based therapies in the clinic, an intravenous route of administration would ideally be used due to the low cost, ease of delivery and relative safety. The study of MSC migration is therefore clinically relevant for a minimally invasive cell therapy to promote regeneration of damaged tissue. C57BL/6, UBI-GFP-BL/6 and CD44-/-/GFP+/+ mice were utilised to investigate mMSC migration. To assist in murine models of MSC migration, a novel method was used for the isolation of murine MSC (mMSC). These mMSC were then expanded in culture; putative mMSC were positive for Sca-1, CD90.2 and CD44, and negative for CD45 and CD11b. Furthermore, mMSC from C57BL/6 and UBI-GFP-BL/6 mice were shown to differentiate into cells of the mesodermal lineage. Cells from CD44-/-/GFP+/+ mice were positive for Sca-1 and CD90.2, and negative for CD44, CD45 and CD11b; however, these cells were unable to differentiate into adipocytes and chondrocytes or to express the lineage-specific genes PLIN and ACAN. Analysis of mMSC chemokine receptor (CR) expression showed that although mMSC do express chemokine receptors (including those specific for chemokines released after AMI), expression was low or undetectable at the mRNA level. However, protein expression could be detected, which was predominantly cytoplasmic. It was further shown that in both healthy (unperturbed) and inflamed tissues, mMSC had very little specific migration and engraftment after intravenous injection.
To determine if poor mMSC migration was due to the inability of mMSC to respond to chemotactic stimuli, chemokine expression in bone marrow, skin injury and hearts (healthy and after AMI) was analysed at various time points by quantitative real-time PCR (qRT-PCR). Many chemokines were up-regulated after skin biopsy and AMI, but the highest acute levels were found for CXCL12 and CCL7. Due to their high expression in infarcted hearts, the chemokines CXCL12 and CCL7 were tested for their effect on mMSC migration. Despite CR expression at both protein and mRNA levels, migration in response to CXCL12 and CCL7 was low in mMSC cultured on Nunclon plastic. A novel tissue culture plastic technology (UpCell™) was then used that allowed gentle non-enzymatic dissociation of mMSC, thus preserving surface expression of the CRs. Despite this, the in vitro data indicated that CXCL12 failed to induce significant migration of mMSC, while CCL7 induced significant but low-level migration. We speculated this may be because of low levels of surface expression of chemokine receptors. In a strategy to increase cell surface expression of mMSC chemokine receptors and enhance their in vitro and in vivo migration capacity, mMSC were pre-treated with pro-inflammatory cytokines. Increased levels of both mRNA and surface protein expression of CRs were found by pre-treating mMSC with pro-inflammatory cytokines including TNF-α, IFN-γ, IL-1α and IL-6. Furthermore, the chemotactic response of mMSC to CXCL12 and CCL7 was significantly higher with these pre-treated cells. Finally, the effectiveness of this type of cell manipulation was demonstrated in vivo, where mMSC pre-treated with TNF-α and IFN-γ showed significantly increased migration in skin injury and AMI models. Therefore, this thesis has demonstrated, using in vitro and in vivo models, the potential for prior manipulation of MSC as a possible means of increasing the utility of intravenous delivery for MSC-based cellular therapies.

Relevance: 30.00%

Abstract:

A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, the lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with the use of machine learning algorithms which use examples of fault-prone and not fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources: the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data - Naive Bayes and the Support Vector Machine - and predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features.
A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into 2D rank sum space. SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
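The core Rank Sum idea (rank the classes per feature by their bin density at the sample's value, then classify by the smallest rank total) can be sketched as follows; the density values and class names are hypothetical, not taken from the NASA or Eclipse data:

```python
def rank_sum_classify(feature_densities):
    """For each feature, rank the classes by bin density (rank 1 = densest),
    then predict the class with the smallest total rank across features."""
    totals = {}
    for dens in feature_densities:
        ranked = sorted(dens, key=dens.get, reverse=True)
        for rank, cls in enumerate(ranked, start=1):
            totals[cls] = totals.get(cls, 0) + rank
    return min(totals, key=totals.get)

# Two features; the sample falls in denser "faulty" bins on both
print(rank_sum_classify([{"faulty": 0.7, "clean": 0.3},
                         {"faulty": 0.6, "clean": 0.4}]))
```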

Relevance: 30.00%

Abstract:

This paper discusses the control and protection of a microgrid that is connected to the utility through back-to-back converters. The back-to-back converter connection facilitates bidirectional power flow between the utility and the microgrid. These converters can operate in two different modes: one in which a fixed amount of power is drawn from the utility, and the other in which the microgrid power shortfall is supplied by the utility. In the case of a fault on the utility or microgrid side, the protection system should act not only to clear the fault but also to block the back-to-back converters so that the dc bus voltage does not fall during the fault. Furthermore, an internal converter mechanism prevents a converter from supplying high current during a fault, and this complicates the operation of a protection system. To overcome this, an admittance-based relay scheme is proposed, which has an inverse-time characteristic based on the measured admittance of the line. The proposed protection and control schemes are able to ensure reliable operation of the microgrid.
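An inverse-time characteristic driven by measured admittance might look like the following sketch; the functional form and the constants are illustrative assumptions, not the paper's exact relay equation:

```python
def trip_time(y_measured, y_pickup, tms=1.0, alpha=1.0):
    """Inverse-time relay sketch: the further the measured line admittance
    exceeds the pickup value, the faster the relay trips."""
    ratio = y_measured / y_pickup
    if ratio <= 1.0:
        return float("inf")  # below pickup: the relay never trips
    return tms / (ratio ** alpha - 1.0)

print(trip_time(2.0, 1.0))  # twice the pickup admittance
print(trip_time(4.0, 1.0))  # a heavier fault trips faster
```

This shape gives the desired behaviour under converter current limiting: the decision is driven by apparent admittance rather than fault current magnitude.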

Relevance: 30.00%

Abstract:

The use of stable isotope ratios δ18O and δ2H is well established in the assessment of groundwater systems and their hydrology. The conventional approach is based on x/y plots and relation to various MWLs (meteoric water lines), and plots of either ratio against parameters such as Cl or EC. An extension of interpretation is the use of 2D maps and contour plots, and 2D hydrogeological vertical sections. An enhancement of presentation and interpretation is the production of "isoscapes", usually as 2.5D surface projections. We have applied groundwater isotopic data to a 3D visualisation, using the alluvial aquifer system of the Lockyer Valley. The 3D framework is produced in GVS (Groundwater Visualisation System). This format enables enhanced presentation by displaying the spatial relationships and allowing interpolation between "data points", i.e. borehole screened zones where groundwater enters. The relative variations in the δ18O and δ2H values are similar in these ambient-temperature systems. However, δ2H better reflects hydrological processes, whereas δ18O also reflects aquifer/groundwater exchange reactions. The 3D model has the advantage that it displays borehole relations to spatial features, enabling isotopic ratios and their values to be associated with, for example, bedrock groundwater mixing, interaction between aquifers, relation to stream recharge, and near-surface and return irrigation water evaporation. Some specific features are also shown, such as zones of leakage of deeper groundwater (in this case with a GAB signature). Variations in the source of recharging water at a catchment scale can be displayed. Interpolation between bores is not always possible, depending on their number and spacing, and on the elongate configuration of the alluvium. In these cases, the visualisation uses discs around the screens that can be manually expanded to test extent or intersections. Separate displays are used for each of δ18O and δ2H, with colour coding for isotope values.

Relevance: 30.00%

Abstract:

Columns are one of the key load-bearing elements that are highly susceptible to vehicle impacts. The resulting severe damage to columns may lead to failures of the supporting structure that are catastrophic in nature. However, the columns in existing structures are seldom designed for impact, due to inadequacies of design guidelines. The impact behaviour of columns designed for gravity loads and actions other than impact is, therefore, of interest. A comprehensive investigation is conducted on reinforced concrete columns, with a particular focus on investigating the vulnerability of exposed columns and on implementing mitigation techniques under low- to medium-velocity car and truck impacts. The investigation is based on non-linear explicit computer simulations of impacted columns, followed by a comprehensive validation process. The impact is simulated using force pulses generated from full-scale vehicle impact tests. A material model capable of simulating triaxial loading conditions is used in the analyses. Circular columns adequate in capacity for five- to twenty-storey buildings, designed according to Australian standards, are considered in the investigation. The crucial parameters associated with routine column designs and the different load combinations applied at the serviceability stage on typical columns are considered in detail. Axially loaded columns are examined at the initial stage, and the investigation is extended to analyse the impact behaviour under single-axis bending and biaxial bending. The impact capacity reduction under varying axial loads is also investigated. The effects of the various load combinations are quantified, and the residual capacity of the impacted columns based on the status of the damage, together with mitigation techniques, is also presented.
In addition, the contribution of each individual parameter to the failure load is scrutinised, and analytical equations are developed to identify the critical impulses in terms of the geometrical and material properties of the impacted column. In particular, an innovative technique was developed and introduced to improve the accuracy of the equations where other techniques fail due to the shape of the error distribution. Above all, the equations can be used to quantify the critical impulse for three consecutive points (load combinations) located on the interaction diagram for one particular column. Consequently, linear interpolation can be used to quantify the critical impulse for loading points located in between on the interaction diagram. Given a known force and impulse pair for an average impact duration, this method can be extended to assess the vulnerability of columns for a general vehicle population, based on an analytical method that quantifies the critical peak forces under different impact durations. Therefore the contribution of this research is not limited to producing simplified yet rational design guidelines and equations; it also provides a comprehensive solution for quantifying the impact capacity, while delivering new insight to the scientific community for dealing with impacts.
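The linear interpolation step between two interaction-diagram points with known critical impulses is straightforward; the load and impulse values below are purely illustrative:

```python
def interp_impulse(axial_load, known):
    """Linearly interpolate the critical impulse for a loading point lying
    between two interaction-diagram points with known critical impulses.
    known: two (axial_load, critical_impulse) pairs."""
    (p0, i0), (p1, i1) = known
    t = (axial_load - p0) / (p1 - p0)
    return i0 + t * (i1 - i0)

# Halfway between two hypothetical interaction-diagram points
print(interp_impulse(1.5, [(1.0, 10.0), (2.0, 20.0)]))
```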

Relevance: 30.00%

Abstract:

Despite the conventional wisdom that proactive security is superior to reactive security, we show that reactive security can be competitive with proactive security as long as the reactive defender learns from past attacks instead of myopically overreacting to the last attack. Our game-theoretic model follows common practice in the security literature by making worst-case assumptions about the attacker: we grant the attacker complete knowledge of the defender’s strategy and do not require the attacker to act rationally. In this model, we bound the competitive ratio between a reactive defense algorithm (which is inspired by online learning theory) and the best fixed proactive defense. Additionally, we show that, unlike proactive defenses, this reactive strategy is robust to a lack of information about the attacker’s incentives and knowledge.

Relevance: 30.00%

Abstract:

The effects of periodic thermal forcing on the flow field and heat transfer through an attic space are examined numerically in this paper. We consider the case with a fixed aspect ratio of 0.5 and a fixed Grashof number of 1.33×10⁶. The numerical results reveal that, during the daytime, the flow is stratified, whereas at night-time the flow becomes unstable. A number of regular plumes and vortices are observed in the contours of isotherms and stream functions, respectively. Moreover, the flow appears to be symmetric during the daytime and becomes asymmetric at night-time. It is also found that the flow is weaker during the daytime than at night-time in the present case, and the calculated heat transfer rate at night-time is approximately three times greater than that during the daytime.

Relevance: 30.00%

Abstract:

Estimates of the half-life to convergence of prices across a panel of cities are subject to bias from three potential sources: inappropriate cross-sectional aggregation of heterogeneous coefficients, presence of lagged dependent variables in a model with individual fixed effects, and time aggregation of commodity prices. This paper finds no evidence of heterogeneity bias in annual CPI data for 17 U.S. cities from 1918 to 2006, but correcting for the “Nickell bias” and time aggregation bias produces a half-life of 7.5 years, shorter than estimates from previous studies.
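The half-life quoted here follows from the usual AR(1) mapping, half-life = ln(0.5)/ln(ρ), where ρ is the annual persistence of the price gap; the ρ value below is back-solved for illustration, not a number reported in the paper:

```python
import math

def half_life(rho):
    """Half-life of convergence for an AR(1) process with persistence rho."""
    return math.log(0.5) / math.log(rho)

# A persistence of roughly 0.912 per year corresponds to a 7.5-year half-life
print(round(half_life(0.912), 1))
```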

Relevance: 30.00%

Abstract:

In recent years, the development of Unmanned Aerial Vehicles (UAVs) has become a significant growing segment of the global aviation industry. These vehicles are developed with the intention of operating in regions where the presence of onboard human pilots is either too risky or unnecessary. Their popularity with both the military and civilian sectors has seen the use of UAVs in a diverse range of applications, from reconnaissance and surveillance tasks for the military to civilian uses such as aid relief and monitoring tasks. Efficient energy utilisation on a UAV is essential to its functioning, often to achieve the operational goals of range, endurance and other specific mission requirements. Due to the limitations of the space available and the mass budget on the UAV, there is often a delicate balance between the onboard energy available (i.e. fuel) and achieving the operational goals. This thesis presents an investigation of methods for increasing the energy efficiency of UAVs. One method is via the development of a Mission Waypoint Optimisation (MWO) procedure for a small fixed-wing UAV, focusing on improving the onboard fuel economy. MWO deals with a pre-specified set of waypoints by modifying the given waypoints within certain limits to achieve its optimisation objectives of minimising/maximising specific parameters. A simulation model of a UAV was developed in the MATLAB Simulink environment, utilising the AeroSim Blockset and the in-built Aerosonde UAV block and its parameters. This simulation model was separately integrated with a multi-objective Evolutionary Algorithm (MOEA) optimiser and a Sequential Quadratic Programming (SQP) solver to perform single-objective and multi-objective optimisation of a set of real-world waypoints in order to minimise the onboard fuel consumption. The results of both procedures show potential for reducing fuel consumption on a UAV in a flight mission.
Additionally, a parallel Hybrid-Electric Propulsion System (HEPS) on a small fixed-wing UAV incorporating an Ideal Operating Line (IOL) control strategy was developed. An IOL analysis of an Aerosonde engine was performed, and the most efficient operating points of this engine (i.e. those providing the greatest torque output at the least fuel consumption) were determined. Simulation models of the components in a HEPS were designed and constructed in the MATLAB Simulink environment. It was demonstrated through simulation that a UAV with the current HEPS configuration was capable of achieving a fuel saving of 6.5% compared to the ICE-only configuration. These components form the basis for the development of a complete simulation model of a Hybrid-Electric UAV (HEUAV).
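The IOL selection described above amounts to picking, among candidate engine operating points, the one giving the most torque per unit fuel; a toy stand-in with invented (torque, fuel-rate) pairs, not Aerosonde engine data:

```python
def iol_operating_point(points):
    """Pick the candidate (torque, fuel_rate) operating point with the
    greatest torque per unit of fuel consumed."""
    return max(points, key=lambda p: p[0] / p[1])

# Hypothetical candidates: (torque in N m, fuel rate in g/s)
print(iol_operating_point([(10.0, 2.0), (12.0, 3.0), (9.0, 1.0)]))
```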

Relevance: 30.00%

Abstract:

This study aimed to clarify the relationship between the mechanical environment at the fracture site and endogenous fibroblast growth factor-2 (FGF-2). We compared two types of fracture healing with different callus formations and cellular events, using MouseFix™ plate fixation systems for murine fracture models. Left femoral fractures were induced in 72 ten-week-old mice and then fixed with a flexible (Group F) or rigid (Group R) MouseFix™ plate. Mice were sacrificed on days 3, 5, 7, 10, 14 and 21. Callus volumes were measured by 3D micro-CT, and tissues were histologically stained with hematoxylin & eosin or safranin-O. Sections from days 3, 5 and 7 were immunostained for FGF-2 and Proliferating Cell Nuclear Antigen (PCNA). The callus in Group F was significantly larger than that in Group R. The rigid plate allowed bone union without a marked external callus or chondrogenesis. The flexible plate formed a large external callus as a result of endochondral ossification. Fibroblastic cells in the granulation tissue on days 5 and 7 in Group F showed marked FGF-2 expression compared with Group R. Fibroblastic cells showed ongoing proliferation in granulation tissue in Group F, as indicated by PCNA expression, which explained the relative increase in granulation tissue in Group F. There were major differences in early-phase endogenous FGF-2 expression between these two fracture healing processes, due to the different mechanical environments.

Relevance: 30.00%

Abstract:

This study of the photocatalytic oxidation of phenol over titanium dioxide films presents a method for the evaluation of true reaction kinetics. A flat plate reactor was designed for the specific purpose of investigating the influence of various reaction parameters, specifically photocatalytic film thickness, solution flow rate (1–8 l min−1), phenol concentration (20, 40 and 80 ppm) and irradiation intensity (70.6, 57.9, 37.1 and 20.4 W m−2), in order to further understand their impact on the reaction kinetics. Special attention was given to mass transfer phenomena and the influence of film thickness. The kinetics of phenol degradation were investigated at different irradiation levels and initial pollutant concentrations. Photocatalytic degradation experiments were performed to evaluate the influence of mass transfer on the reaction and, in addition, the benzoic acid method was applied to evaluate the mass transfer coefficient. For this study the reactor was modelled as a batch-recycle reactor. A system of equations that accounts for irradiation, mass transfer and reaction rate was developed to describe the photocatalytic process, to fit the experimental data and to obtain kinetic parameters. The rate of phenol photocatalytic oxidation was described by a Langmuir–Hinshelwood type law that included competitive adsorption and degradation of phenol and its by-products. The by-products were modelled through their additive effect on the solution's total organic carbon.
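A single-pollutant Langmuir–Hinshelwood form with competitive adsorption of by-products can be sketched as below; the rate and adsorption constants are placeholders, not the fitted parameters of the study:

```python
def lh_rate(k, K, C, byproducts=()):
    """Langmuir-Hinshelwood rate with competitive adsorption:
    r = k*K*C / (1 + K*C + sum of K_j*C_j over by-products)."""
    denom = 1.0 + K * C + sum(Kj * Cj for Kj, Cj in byproducts)
    return k * K * C / denom

print(lh_rate(1.0, 1.0, 1.0))                            # phenol alone
print(lh_rate(1.0, 1.0, 1.0, byproducts=((1.0, 1.0),)))  # with one by-product
```

As the by-product terms grow, the denominator grows and the phenol degradation rate falls, which is the competitive-adsorption effect the model captures.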

Relevance: 30.00%

Abstract:

In a study aimed at better understanding how staff and students adapt to new blended studio learning environments (BSLEs), a group of 165 second-year architecture students at a large school of architecture in Australia were separated into two different design studio learning environments. 70% of the students were allocated to a traditional studio learning environment (TSLE) and 30% to a new, high-technology-embedded prototype digital learning laboratory. The digital learning laboratory was purpose-designed for the case-study users, adapted Student-Centred Active Learning Environment for Undergraduate Programs (SCALE-UP) principles, and was built as part of a larger university research project. The architecture students attended the same lectures, followed the same studio curriculum and completed the same pieces of assessment; the only major differences were the teaching staff and the physical environment within which the studios were conducted. At the end of the semester, the staff and students were asked to complete a questionnaire about their experiences and preferences within the two respective learning environments. Following this, participants were invited to take part in focus groups, where a synergistic approach was effected. Using a dual-method qualitative approach, the questionnaire and survey data were coded and extrapolated using both thematic analysis and grounded theory methodology. The results from these two different approaches were compared, contrasted and finally merged to reveal six distinct emerging themes, which were instrumental in offering resistance to, or influencing adaptation to, the new BSLE. This paper reports on the study, discusses the major contributors to negative resistance and proposes points for consideration when transitioning from a TSLE to a BSLE.

Relevance: 30.00%

Abstract:

This paper discusses practical issues related to the use of the division model for lens distortion in multi-view geometry computation. A data normalisation strategy is presented, which has been absent from previous discussions on the topic. The convergence properties of the Rectangular Quadric Eigenvalue Problem solution for computing division model distortion are examined. It is shown that the existing method can require more than 1000 iterations when dealing with severe distortion. A method is presented for accelerating convergence to less than 10 iterations for any amount of distortion. The new method is shown to produce equivalent or better results than the existing method with up to two orders of magnitude reduction in iterations. Through detailed simulation it is found that the number of data points used to compute geometry and lens distortion has a strong influence on convergence speed and solution accuracy. It is recommended that more than the minimal number of data points be used when computing geometry using a robust estimator such as RANSAC. Adding two to four extra samples improves the convergence rate and accuracy sufficiently to compensate for the increased number of samples required by the RANSAC process.
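For reference, the one-parameter division model maps a distorted point (centred on the distortion centre) to its undistorted position as p_u = p_d / (1 + λ·r_d²); a minimal sketch with an illustrative λ:

```python
def undistort(x, y, lam):
    """Division model: undistorted point = distorted point / (1 + lam * r^2),
    where r is the distorted point's radius from the distortion centre."""
    r2 = x * x + y * y
    s = 1.0 + lam * r2
    return x / s, y / s

print(undistort(1.0, 0.0, 0.1))
```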