875 results for vulnerability curves
Abstract:
The main focus of this paper is the motion planning problem for a deeply submerged rigid body. The equations of motion are formulated within the framework of differential geometry and incorporate external dissipative and restoring forces. We consider a kinematic reduction of the affine connection control system for the rigid body submerged in an ideal fluid, and present an extension of this reduction to the forced affine connection control system for the rigid body submerged in a viscous fluid. The motion planning strategy is based on kinematic motions: the integral curves of rank-one kinematic reductions. This method is of particular interest for autonomous underwater vehicles that cannot directly control all six degrees of freedom (such as torpedo-shaped AUVs) or in the case of actuator failure (i.e., an under-actuated scenario). A practical example is included to illustrate our technique.
Decoupled trajectory planning for a submerged rigid body subject to dissipative and potential forces
Abstract:
This paper studies the practical but challenging problem of motion planning for a deeply submerged rigid body. Here, we formulate the dynamic equations of motion of a submerged rigid body within the framework of differential geometric mechanics and include external dissipative and potential forces. The mechanical system is represented as a forced affine-connection control system on the configuration space SE(3). Solutions to the motion planning problem are computed by concatenating and reparameterizing the integral curves of decoupling vector fields. We provide an extension of this inverse kinematic method to compensate for external potential forces caused by buoyancy and gravity. We present a mission scenario and implement the theoretically computed control strategy on a test-bed autonomous underwater vehicle. This scenario emphasizes the use of this motion planning technique in the under-actuated situation, in which the vehicle loses direct control over one or more degrees of freedom. We include experimental results to illustrate our technique and validate our method.
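For context, these two abstracts build on the standard geometric-mechanics formulation of such systems; a common way to write it (the exact force terms used in the papers are not reproduced here) is as a forced affine connection control system

$$\nabla_{\gamma'(t)}\gamma'(t) = Y\big(\gamma(t),\gamma'(t)\big) + \sum_{i=1}^{m} u^{i}(t)\, Y_{i}\big(\gamma(t)\big),$$

where $\gamma$ is a curve in the configuration space $SE(3)$, $\nabla$ is an affine connection (typically the Levi-Civita connection of the kinetic-energy metric, including added mass), $Y$ collects the external dissipative and restoring (potential) forces, and $Y_1,\dots,Y_m$ are the input vector fields. In the unforced case ($Y=0$), a vector field $V$ is decoupling when both $V$ and $\nabla_V V$ take values in the input distribution $\mathcal{Y} = \operatorname{span}\{Y_1,\dots,Y_m\}$, so that every arbitrarily reparameterized integral curve of $V$ can be tracked by the dynamic system; the work above extends this construction to account for the forcing term $Y$.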
Abstract:
With the growing importance of sustainability assessment in the construction industry, many green building rating schemes have been adopted in the Australian building sector. However, adoption in the infrastructure sector has lagged considerably, and this prolonged delay makes it difficult to map project objectives onto sustainability outcomes. Responding to the challenge of sustainable development in infrastructure, it is critical to create a set of decision indicators for sustainability in infrastructure, to be used in conjunction with the emerging infrastructure sustainability assessment framework of the Australian Green Infrastructure Council. The literature confirms a weak link between sustainability and infrastructure; this theoretical gap makes it essential to validate the interrelationship and interdependency between sustainability, decision making and infrastructure, and this validation is vital for the development of the decision indicators. Given serious socio-environmental vulnerability, the traditional economic emphasis in infrastructure development needs to shift towards decisions for sustainability that enhance positive social and environmental outcomes. Moreover, the research findings suggest that sustainability acts as a powerful socio-political and socio-environmental driver in deciding infrastructure needs and development. The newly developed sustainability decision indicators create the impetus for change towards sustainability in infrastructure by integrating societal and environmental concerns into a holistic financial consideration; in doing so, this development seeks to transform principles into actions for infrastructure sustainability. Lastly, the thesis concludes with knowledge contributions in five significant areas and future research opportunities. The consolidated research outcomes suggest that the development of decision indicators has demonstrated sustainability as a pivotal driver for decision making in infrastructure.
Abstract:
The transformation of urban spaces that occurs once darkness falls is simultaneously exhilarating and menacing, and over the past 20 months we have investigated the potential for mobile technology to help users manage their personal safety concerns in the city at night. Our findings subverted commonly held notions of vulnerability, with the threat of violence felt equally by men and women. But while women felt protected because of their mobile technology, men dismissed it as digital Man Mace. We addressed this macho design challenge by studying remote engineers in outback Australia to inspire our personal safety design prototype MATE (Mobile Artifact for Taming Environments).
Abstract:
Damage to genetic material represents a persistent and ubiquitous threat to genomic stability. Once DNA damage is detected, a multifaceted signaling network is activated that halts the cell cycle, initiates repair, and in some instances induces apoptotic cell death. In this article, we review the DNA damage surveillance networks that maintain the stability of our genome, and discuss the efforts underway to identify chemotherapeutic compounds targeting the core components of the DNA double-strand break (DSB) response pathway. The majority of tumor cells have defects in maintaining genomic stability owing to the loss of an appropriate response to DNA damage. New anticancer agents are exploiting this vulnerability of cancer cells to enhance therapeutic indexes, with limited normal tissue toxicity. Recently, inhibitors of the checkpoint kinases Chk1 and Chk2 have been shown to sensitize tumor cells to DNA damaging agents. In addition, the treatment of BRCA1- or BRCA2-deficient tumor cells with poly(ADP-ribose) polymerase (PARP) inhibitors also leads to specific tumor killing. Due to the numerous roles of p53 in genomic stability and its defects in many human cancers, therapeutic agents that restore p53 activity in tumors are the subject of multiple clinical trials. In this article we highlight the proteins mentioned above and catalog several additional players in the DNA damage response pathway, including ATM, DNA-PK, and the MRN complex, which might be amenable to pharmacological intervention and lead to new approaches to sensitize cancer cells to radio- and chemotherapy. The challenge is how to identify those patients most receptive to these treatments.
Abstract:
Background: The enthesis of the plantar fascia is thought to play an important role in stress dissipation. However, the potential link between the entheseal thickening characteristic of enthesopathy and the stress-dissipating properties of the intervening plantar fat pad has not been investigated. Purpose: This study was conducted to identify whether plantar fat pad mechanics explain variance in the thickness of the fascial enthesis in individuals with and without plantar enthesopathy. Study Design: Case-control study; Level of evidence, 3. Methods: The study population consisted of 9 patients with unilateral plantar enthesopathy and 9 asymptomatic, individually matched controls. The thickness of the enthesis of the symptomatic, asymptomatic, and matched control limbs was acquired using high-resolution ultrasound. The compressive strain of the plantar fat pad during walking was estimated from dynamic lateral radiographs acquired with a multifunction fluoroscopy unit. Peak compressive stress was simultaneously acquired via a pressure platform. Principal viscoelastic parameters were estimated from subsequent stress-strain curves. Results: The symptomatic fascial enthesis (6.7 ± 2.0 mm) was significantly thicker than the asymptomatic enthesis (4.2 ± 0.4 mm), which in turn was thicker than the enthesis (3.3 ± 0.4 mm) of control limbs (P < .05). There was no significant difference in the mean thickness, peak stress, peak strain, or secant modulus of the plantar fat pad between limbs. However, the energy dissipated by the fat pad during loading and unloading was significantly lower in the symptomatic limb (0.55 ± 0.17) when compared with asymptomatic (0.69 ± 0.13) and control (0.70 ± 0.09) limbs (P < .05). The sonographic thickness of the enthesis was correlated with the energy dissipation ratio of the plantar fat pad (r = .72, P < .05), but only in the symptomatic limb. Conclusion: The energy-dissipating properties of the plantar fat pad are associated with the sonographic appearance of the enthesis in symptomatic limbs, providing a previously unidentified link between the mechanical behavior of the plantar fat pad and enthesopathy.
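As a rough illustration of the kind of quantity reported above, the energy dissipated over a loading-unloading cycle can be expressed as a dimensionless ratio of the hysteresis loss to the loading energy; the sketch below assumes that convention (the study's exact definition and data pipeline are not given in the abstract).

```python
import numpy as np

def _area(y, x):
    # Trapezoidal rule; assumes x (strain) is sampled in increasing order.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def energy_dissipation_ratio(strain_load, stress_load, strain_unload, stress_unload):
    """Energy dissipation ratio of a loading/unloading stress-strain loop.

    Assumed convention: hysteresis loss divided by the energy absorbed during
    loading, EDR = (W_load - W_unload) / W_load.
    """
    w_load = _area(np.asarray(stress_load), np.asarray(strain_load))      # area under loading branch
    w_unload = _area(np.asarray(stress_unload), np.asarray(strain_unload))  # area under unloading branch
    return (w_load - w_unload) / w_load
```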
Abstract:
Skid resistance is a condition parameter characterising the contribution that a road makes to the friction between a road surface and a vehicle tyre. Studies of traffic crash histories around the world have consistently found that a disproportionate number of crashes occur where the road surface has a low level of surface friction and/or surface texture, particularly when the road surface is wet. Various research results published over many years have tried to quantify the influence of skid resistance on accident occurrence and to characterise a correlation between skid resistance and accident frequency. Most of the research studies used simple statistical correlation methods in analysing skid resistance and crash data.

Preliminary findings of a systematic and extensive literature search conclude that there is rarely a single causation factor in a crash. Findings from research projects do affirm various levels of correlation between skid resistance and accident occurrence. Studies indicate that the level of skid resistance at critical places such as intersections, curves, roundabouts, ramps and approaches to pedestrian crossings needs to be well maintained.

Management of risk is an integral aspect of the Queensland Department of Main Roads (QDMR) strategy for managing its infrastructure assets. The risk-based approach has been used in many areas of infrastructure engineering; however, very limited information has been reported on using a risk-based approach to mitigate crash rates related to the road surface. Low skid resistance and surface texture may increase the risk of traffic crashes.

The objectives of this paper are to explore current issues of skid resistance in relation to crashes, to provide a framework for a probability-based approach to be adopted by QDMR in assessing the relationship between crashes and pavement properties, and to explain why the probability-based approach is a suitable tool for QDMR in reducing accident rates due to low skid resistance.
Abstract:
In an environment where economic, political and technological change is the rule, a fundamental business strategy should be the defence of traditional markets and thoughtful entry into new markets, with an aim to increase market penetration and stimulate profit. The success of such a strategy will depend on the ability of firms to do more and better for customers than their competitors. In other words, a firm's primary competitive advantage will come from the changes it implements to please its customers. In the construction industry, the complexity of technical knowledge and construction processes has traditionally encouraged clients to play a largely passive role in the management of their projects. However, today's clients not only want to know about the internal efficiency of their projects but also need to know how they and their contractors compare and compete against their competitors. Given the vulnerability of construction activities in the face of regional financial crises, constructors need to be proactive in the search to improve their internal firm and project processes to ensure profitability and market responsiveness. In this context, reengineering is a radical redesign that emphasises customer satisfaction rather than cost reduction. This paper discusses the crucial role of the client-project interface and how project networks could facilitate and improve information dissemination and sharing, collaborative efforts, decision-making and an improved project climate. An intra-project network model is presented, and project managers' roles and competencies in forming and coordinating project workgroups are discussed.
Abstract:
Partially Grouted Reinforced Masonry (PGRM) shear walls perform well in regions where cyclonic wind pressure dominates the design. Their out-of-plane flexural performance is better understood than their in-plane shear behaviour; in particular, it is not clear whether PGRM shear walls act as unreinforced masonry (URM) walls embedded with discrete reinforced grouted cores or as integral systems of reinforced masonry (RM) with wider spacing of reinforcement. With a view to understanding the in-plane response of PGRM shear walls, ten full-scale, single-leaf clay block walls were constructed and tested under monotonic and cyclic in-plane loading. Based on the displacement ductility and stiffness degradation factors derived from the complete lateral load-lateral displacement curves, it has been shown that where the spacing of the vertical reinforcement is less than 2000 mm, the walls behave as an integral RM system; for spacings greater than 2000 mm, the walls behave similarly to URM, with no significant benefit from the reinforced cores.
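For illustration, the two response indicators named above can be read off a lateral load-lateral displacement envelope roughly as follows; the yield displacement and the 80%-of-peak definition of ultimate displacement used here are common conventions assumed for the sketch, not necessarily the study's exact definitions.

```python
import numpy as np

def ductility_and_degradation(disp, load, yield_disp):
    """Displacement ductility and a simple secant-stiffness degradation factor
    from a monotonic lateral load-displacement envelope (illustrative only)."""
    disp, load = np.asarray(disp), np.asarray(load)
    peak = load.max()
    # ultimate displacement taken as the last point still carrying >= 80% of peak load
    ult = disp[np.where(load >= 0.8 * peak)[0][-1]]
    ductility = ult / yield_disp
    k_initial = load[1] / disp[1]             # initial secant stiffness
    k_peak = peak / disp[np.argmax(load)]     # secant stiffness at peak load
    degradation = k_peak / k_initial
    return ductility, degradation
```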
Abstract:
Research in structural dynamics has received considerable attention due to problems associated with emerging slender structures, the increased vulnerability of structures to random loads, and aging infrastructure. This paper briefly describes some such research carried out on: i) the dynamics of a composite floor structure, ii) the dynamics of a cable-supported footbridge, iii) seismic mitigation of a frame-shear wall structure using passive dampers, and iv) the development of a damage assessment model for use in structural health modelling.
Abstract:
Twelve beam-to-column connections between cold-formed steel sections consisting of three beam depths and four connection types were tested in isolation to investigate their behavior based on strength, stiffness and ductility. Resulting moment-rotation curves indicate that the tested connections are efficient moment connections where moment capacities ranged from about 65% to 100% of the connected beam capacity. With a moment capacity of greater than 80% of connected beam member capacity, some of the connections can be regarded as full strength connections. Connections also possessed sufficient ductility with rotations of 20 mRad at failure, although some connections were too ductile with rotations in excess of 30 mRad. Generally, most of the connections possess the strength and ductility to be considered as partial strength connections. The ultimate failures of almost all of the connections were due to local buckling of the compression web and flange elements of the beam closest to the connection.
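A minimal sketch of the classification logic implied by the figures quoted above (the 80% capacity and 20 mRad thresholds are taken from the abstract; everything else is illustrative):

```python
def classify_connection(moment_capacity, beam_capacity, rotation_at_failure_mrad):
    """Classify a tested connection using the thresholds quoted in the abstract."""
    ratio = moment_capacity / beam_capacity
    strength = "full strength" if ratio > 0.8 else "partial strength"
    ductile = rotation_at_failure_mrad >= 20.0
    return strength, ductile
```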
Abstract:
Background: Waist circumference has been identified as a valuable predictor of cardiovascular risk in children. The development of waist circumference percentiles and cut-offs for various ethnic groups is necessary because of differences in body composition. The purpose of this study was to develop waist circumference percentiles for Chinese children and to explore optimal waist circumference cut-off values for predicting cardiovascular risk factor clustering in this population.

Methods: Height, weight, and waist circumference were measured in 5529 children (2830 boys and 2699 girls) aged 6-12 years randomly selected from southern and northern China. Blood pressure, fasting triglycerides, low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, and glucose were obtained in a subsample (n = 1845). Smoothed percentile curves were produced using the LMS method. Receiver-operating characteristic analysis was used to derive the optimal age- and gender-specific waist circumference thresholds for predicting the clustering of cardiovascular risk factors.

Results: Gender-specific waist circumference percentiles were constructed. The waist circumference thresholds were at the 90th and 84th percentiles for Chinese boys and girls respectively, with sensitivity and specificity ranging from 67% to 83%. The odds ratios of a clustering of cardiovascular risk factors among boys and girls with values above the cut-off points were 10.349 (95% confidence interval 4.466 to 23.979) and 8.084 (95% confidence interval 3.147 to 20.767), respectively, compared with their counterparts.

Conclusions: Percentile curves for waist circumference of Chinese children are provided. The cut-off points for waist circumference to predict cardiovascular risk factor clustering are at the 90th and 84th percentiles for Chinese boys and girls, respectively.
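As a sketch of how such age- and gender-specific thresholds can be derived from ROC analysis (the abstract does not state the exact selection criterion; Youden's J is assumed here for illustration):

```python
import numpy as np
from sklearn.metrics import roc_curve

def waist_cutoff(waist_cm, risk_clustering):
    """Waist-circumference cut-off for predicting risk-factor clustering, chosen by
    maximising Youden's J = sensitivity + specificity - 1 (an assumed criterion;
    apply separately within each age/gender stratum)."""
    fpr, tpr, thresholds = roc_curve(risk_clustering, waist_cm)
    best = np.argmax(tpr - fpr)
    return thresholds[best], tpr[best], 1.0 - fpr[best]  # cut-off, sensitivity, specificity
```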
Abstract:
This paper presents a material model to simulate load-induced cracking in Reinforced Concrete (RC) elements in the ABAQUS finite element package. Two numerical material models are used and combined to simulate the complete stress-strain behaviour of concrete under compression and tension, including damage properties. Both numerical techniques used in the present material model are capable of developing the stress-strain curves, including strain-softening regimes, using only the ultimate compressive strength of concrete, which is easily and practically obtainable for many existing RC structures or those to be built. Therefore, the method proposed in this paper is valuable in assessing existing RC structures in the absence of more detailed test results. The numerical models are slightly modified from their original versions to be compatible with the damaged plasticity model used in ABAQUS. The model is validated using different experimental results for RC beam elements presented in the literature. The results indicate good agreement with the load vs. displacement curves and observed crack patterns.
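The abstract does not name the particular numerical models used; purely as an illustration of generating a complete compressive stress-strain curve (ascending branch plus strain softening) from the ultimate compressive strength alone, a generic Popovics/Carreira-Chu type relation could look like the following (the modulus estimate and the beta expression are assumptions, not the paper's calibration):

```python
import numpy as np

def compressive_curve(fc_mpa, eps_peak=0.002, n_pts=50):
    """Illustrative compressive stress-strain curve generated from fc only,
    using sigma = fc * beta*x / (beta - 1 + x**beta) with x = eps/eps_peak."""
    Ec = 4700.0 * np.sqrt(fc_mpa)                     # elastic modulus estimate (assumed)
    beta = 1.0 / (1.0 - fc_mpa / (eps_peak * Ec))     # ties beta to initial stiffness (assumed)
    eps = np.linspace(1e-6, 5.0 * eps_peak, n_pts)
    x = eps / eps_peak
    sigma = fc_mpa * beta * x / (beta - 1.0 + x**beta)
    return eps, sigma  # could then be converted to an inelastic-strain/damage table for ABAQUS CDP
```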
Abstract:
A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with the use of machine learning algorithms which use examples of fault-prone and non-fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources, the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data - Naive Bayes and the Support Vector Machine - and predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features. A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into a 2D rank sum space. SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
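The Rank Sum classifier is only sketched at a high level in the abstract; one plausible reading of that description (the bin count, ranking rule and decision threshold below are all assumptions for illustration, not the thesis's exact procedure) is:

```python
import numpy as np

def fit_rank_sum(X, y, n_bins=10):
    """Per feature: histogram each class into common bins, then rank the bins
    by how strongly their densities favour the fault-prone class (y == 1)."""
    models = []
    for j in range(X.shape[1]):
        edges = np.histogram_bin_edges(X[:, j], bins=n_bins)
        d_clean, _ = np.histogram(X[y == 0, j], bins=edges, density=True)
        d_fault, _ = np.histogram(X[y == 1, j], bins=edges, density=True)
        ranks = (d_fault - d_clean).argsort().argsort()  # 0 = most "clean"-like bin
        models.append((edges, ranks))
    return models

def predict_rank_sum(models, X):
    """Classify by summing, over features, the rank of the bin each value falls into."""
    n_bins = len(models[0][1])
    sums = np.zeros(len(X))
    for j, (edges, ranks) in enumerate(models):
        idx = np.clip(np.searchsorted(edges, X[:, j]) - 1, 0, n_bins - 1)
        sums += ranks[idx]
    threshold = len(models) * (n_bins - 1) / 2.0  # midpoint of the possible rank-sum range
    return (sums > threshold).astype(int)
```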
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no "gold standard" test is currently available to assess the tear film integrity. Therefore, improving techniques for the assessment of tear film quality is of clinical significance and the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to light scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film. However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing the tear film dynamics. A set of novel routines has been purposely developed to quantify the changes of the reflected pattern and to extract a time series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, in which a metric of the TFSQ is calculated. Initially, two metrics based on Gabor filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern's local orientation as a metric of TFSQ. These metrics have helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, the HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while the LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT).
The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV. The DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique, identified during the former clinical study, was a lack of sensitivity to quantify the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis was transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into a quasi-straight-line image from which a block statistics value was extracted. This metric has shown better sensitivity under low pattern disturbance and has improved the performance of the ROC curves. Additionally, a theoretical study, based on ray-tracing techniques and topographical models of the tear film, was undertaken to fully comprehend the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations have helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase has helped to provide some insight into the dynamics during this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series have been reported in this thesis. Over the years, different functions have been used to model the time series as well as to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of the tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could be a useful clinical tool to assess tear film surface quality in the future.
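A minimal sketch of the polar-transform/block-processing metric described above (the sampling grid, block size, and choice of statistic are assumptions for illustration; the thesis's exact routine is not reproduced in the abstract):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def polar_block_metric(img, center, r_min, r_max, n_r=128, n_theta=360, block=(16, 30)):
    """Resample the Placido ring region onto (radius, angle) axes, so concentric rings
    become quasi-straight lines, then summarise local regularity by block statistics."""
    r = np.linspace(r_min, r_max, n_r)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    rows = center[0] + rr * np.sin(tt)
    cols = center[1] + rr * np.cos(tt)
    polar = map_coordinates(img.astype(float), [rows, cols], order=1)  # bilinear resampling
    br, bt = block
    blocks = [polar[i:i + br, j:j + bt].std()   # intensity spread per block (assumed statistic)
              for i in range(0, n_r - br + 1, br)
              for j in range(0, n_theta - bt + 1, bt)]
    return float(np.mean(blocks))
```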