216 results for Aldosterone Excess


Relevance: 10.00%

Abstract:

Background: An estimated 285 million people worldwide have diabetes and its prevalence is predicted to increase to 439 million by 2030. For the year 2010, it is estimated that 3.96 million excess deaths in the age group 20-79 years are attributable to diabetes around the world. Self-management is recognised as an integral part of diabetes care. This paper describes the protocol of a randomised controlled trial of an automated interactive telephone system aiming to improve the uptake and maintenance of essential diabetes self-management behaviours.

Methods/Design: A total of 340 individuals with type 2 diabetes will be randomised, either to the routine care arm, or to the intervention arm in which participants receive the Telephone-Linked Care (TLC) Diabetes program in addition to their routine care. The intervention requires the participants to telephone the TLC Diabetes phone system weekly for 6 months. They receive the study handbook and a glucose meter linked to a data uploading device. The TLC system consists of a computer with software designed to provide monitoring, tailored feedback and education on key aspects of diabetes self-management, based on answers voiced or entered during the current or previous conversations. Data collection is conducted at baseline (Time 1), 6-month follow-up (Time 2), and 12-month follow-up (Time 3). The primary outcomes are glycaemic control (HbA1c) and quality of life (Short Form-36 Health Survey version 2). Secondary outcomes include anthropometric measures, blood pressure, blood lipid profile, psychosocial measures as well as measures of diet, physical activity, blood glucose monitoring, foot care and medication taking. Information on utilisation of healthcare services including hospital admissions, medication use and costs is collected. An economic evaluation is also planned.

Discussion: Outcomes will provide evidence concerning the efficacy of a telephone-linked care intervention for self-management of diabetes. Furthermore, the study will provide insight into the potential for more widespread uptake of automated telehealth interventions, globally.

Relevance: 10.00%

Abstract:

Under certain circumstances, an industrial hopper which operates under the "funnel-flow" regime can be converted to the "mass-flow" regime with the addition of a flow-corrective insert. This paper is concerned with calculating granular flow patterns near the outlet of hoppers that incorporate a particular type of insert, the cone-in-cone insert. The flow is considered to be quasi-static, and governed by the Coulomb-Mohr yield condition together with the non-dilatant double-shearing theory. In two dimensions, the hoppers are wedge-shaped, and as such the formulation for the wedge-in-wedge hopper also includes the case of asymmetrical hoppers. A perturbation approach, valid for high angles of internal friction, is used for both two-dimensional and axially symmetric flows, with analytic results possible for both leading order and correction terms. This perturbation scheme is compared with numerical solutions to the governing equations, and is shown to work very well for angles of internal friction in excess of 45 degrees.
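The Coulomb-Mohr yield condition referred to above has a standard textbook statement; the form below is given only for orientation, and its sign conventions and notation may differ from those used in the paper:

```latex
% Mohr-Coulomb yield: the shear stress on the yield plane is limited by the
% cohesion c and the normal stress sigma_n through the internal friction
% angle phi,  |tau| = c + sigma_n * tan(phi).
% For a cohesionless granular material (c = 0), in terms of principal
% stresses (compression taken positive), yield occurs when
\[
  \frac{\sigma_1 - \sigma_3}{\sigma_1 + \sigma_3} \;=\; \sin\phi .
\]
% The high-friction regime exploited by the perturbation approach above
% (phi in excess of 45 degrees) corresponds to sin(phi) approaching 1.
```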

Relevance: 10.00%

Abstract:

Previous research has suggested that perceptual-motor difficulties may account for obese children's lower motor competence; however, specific evidence is currently lacking. Therefore, this study examined the effect of altered visual conditions on spatiotemporal and kinematic gait parameters in obese versus normal-weight children. Thirty-two obese and normal-weight children (11.2 ± 1.5 years) walked barefoot on an instrumented walkway at constant self-selected speed during LIGHT and DARK conditions. Three-dimensional motion analysis was performed to calculate spatiotemporal parameters, as well as sagittal trunk segment and lower extremity joint angles at heel-strike and toe-off. Self-selected speed did not significantly differ between groups. In the DARK condition, all participants walked at a significantly slower speed, with decreased stride length and increased stride width. Without normal vision, obese children had a more pronounced increase in relative double support time compared to the normal-weight group, resulting in a significantly greater percentage of the gait cycle spent in stance. Walking in the DARK, both groups showed greater forward tilt of the trunk and restricted hip movement. All participants had increased knee flexion at heel-strike, as well as decreased knee extension and ankle plantarflexion at toe-off in the DARK condition. The removal of normal vision affected obese children's temporal gait pattern to a larger extent than that of normal-weight peers. Results suggest an increased dependency on vision in obese children to control locomotion. In addition to the mechanical problem of moving excess mass, a different coupling between perception and action appears to be governing obese children's motor coordination and control.

Relevance: 10.00%

Abstract:

Property management requires an understanding of infrastructure management, service life planning and quality management. Today, people are beginning to realize that effective property management of high-rise residential property can sustain property value and maintain high returns on investment. The continuous growth of high-rise residential properties indicates a need for an effective property management system to support sustainable high-rise residential property development. Intensive as existing studies are, they do not investigate the relationship between property management systems and the trends of Malaysian high-rise residential property development. By examining the trends and current scenario of Malaysian high-rise residential property development, this paper aims to gain an understanding of the impact of effective property management in this area. Findings from this scoping paper will assist in providing a greater understanding of, and possible solutions for, current Malaysian property management systems serving the expanding high-rise residential unit market. With high-rise units currently in excess of 1.3 million and increasing, the need for more cost-effective management systems is highly important to the Malaysian property industry.

Relevance: 10.00%

Abstract:

Twelve beam-to-column connections between cold-formed steel sections consisting of three beam depths and four connection types were tested in isolation to investigate their behavior based on strength, stiffness and ductility. Resulting moment-rotation curves indicate that the tested connections are efficient moment connections where moment capacities ranged from about 65% to 100% of the connected beam capacity. With a moment capacity of greater than 80% of connected beam member capacity, some of the connections can be regarded as full strength connections. Connections also possessed sufficient ductility with rotations of 20 mRad at failure although some connections were too ductile with rotations in excess of 30 mRad. Generally, most of the connections possess the strength and ductility to be considered as partial strength connections. The ultimate failures of almost all of the connections were due to local buckling of the compression web and flange elements of the beam closest to the connection.

Relevance: 10.00%

Abstract:

Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance. Capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance, through measures of capacity and delay. However, these models are limited to only a few circumstances.

The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration. The models needed to be able to be calibrated using data acquired at these locations, and their output validated with data acquired at the same sites, so that the outputs are truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than empiricism, which is the case for the macroscopic models currently used. The models also needed to be adaptable to variable operating conditions, so that they may be applied, where possible, to other similar systems and facilities. It was not possible in this single study to produce a stand-alone model applicable to all facilities and locations; however, the scene has been set for the application of the models to a much broader range of operating conditions. Opportunities for further development of the models were identified, and procedures provided for the calibration and validation of the models to a wide range of conditions.

The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled, and some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations: merging and lane changing.

Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this activity. Kerb lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams, with on-ramp and total upstream flow required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps downstream of signalised intersections and those downstream of unsignalised intersections. Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway.

Limited priority capacity and other boundary relationships were established by Troutbeck (1995). The minimum average minor stream delay and corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows, and pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delay, which reaches infinity at capacity. Minor stream delays were shown to be less when unsignalised intersections, rather than signalised intersections, are located upstream of on-ramps, and less still when ramp metering is installed. Smaller delays correspond to improved merge area performance.

A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration is required of traffic inputs, critical gap and minimum follow-on time, for both merging and lane changing. A general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and provide further insight into the nature of operations.
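For orientation, the sketch below shows standard forms of Cowan's M3 headway model and the associated absolute-priority gap-acceptance capacity formula that the quantities above refer to; the limited-priority corrections developed in the study are not reproduced, and all parameter values are illustrative.

```python
import math

def m3_headway_survival(t, q, alpha, delta):
    """P(headway > t) under Cowan's M3 model: a proportion alpha of
    headways are 'free' and shifted-exponentially distributed above the
    minimum headway delta; the remainder are bunched at delta."""
    if t < delta:
        return 1.0
    lam = alpha * q / (1.0 - delta * q)  # decay rate of the free headways
    return alpha * math.exp(-lam * (t - delta))

def merge_capacity(q_major, alpha, delta, t_c, t_f):
    """Minor-stream (on-ramp) capacity in veh/s under absolute priority,
    using the standard M3-based formula with critical gap t_c and
    follow-on time t_f (both in seconds)."""
    lam = alpha * q_major / (1.0 - delta * q_major)
    return (q_major * alpha * math.exp(-lam * (t_c - delta))
            / (1.0 - math.exp(-lam * t_f)))

# Illustrative numbers only: kerb-lane flow 1400 veh/h, 75% free vehicles,
# 1 s minimum headway, 2 s critical gap, 1.1 s follow-on time.
q = 1400 / 3600.0
print(m3_headway_survival(1.0, q, alpha=0.75, delta=1.0))       # P(h > 1 s)
print(3600.0 * merge_capacity(q, 0.75, 1.0, t_c=2.0, t_f=1.1))  # veh/h
```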

Relevance: 10.00%

Abstract:

Genetic variation is the resource animal breeders exploit in stock improvement programs. Both the process of selection and the husbandry practices employed in aquaculture will erode genetic variation levels over time, hence this critical resource can be lost and future genetic gains in breeding programs may be compromised. The amount of genetic variation in five lines of Sydney Rock Oyster (SRO) that had been selected for QX (Queensland unknown) disease resistance was examined and compared with that in a wild reference population using seven specific SRO microsatellite loci. The five selected lines had significantly lower levels of genetic diversity than did the wild reference population, with allelic diversity declining by approximately 80%, but impacts on heterozygosity per locus were less severe. Significant deficiencies in heterozygotes were detected at six of the seven loci in both the mass selected lines and the wild reference population. Against this trend, however, a significant excess of heterozygotes was recorded at three loci (Sgo9, Sgo14 and Sgo21) in three QX disease resistant lines (#2, #5 and #13). All populations were significantly genetically differentiated from each other based on pairwise FST values. A neighbour-joining tree based on DA genetic distances showed a clear separation between all cultured and wild populations. Results of this study show clearly that the stock improvement program for SRO has significantly eroded natural levels of genetic variation in the cultured lines. This could compromise long-term genetic gains and affect the sustainability of the SRO breeding program.
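As background to the heterozygote deficiencies and excesses reported above, a minimal sketch of the usual per-locus statistics (observed versus expected heterozygosity and the inbreeding coefficient FIS); the genotypes below are invented for illustration and are not data from the study.

```python
from collections import Counter

def heterozygosity_stats(genotypes):
    """genotypes: list of (allele_a, allele_b) tuples for one locus.
    Returns observed heterozygosity Ho, expected heterozygosity He
    (gene diversity, 1 - sum p_i^2) and Fis = 1 - Ho/He."""
    n = len(genotypes)
    ho = sum(a != b for a, b in genotypes) / n
    allele_counts = Counter(a for g in genotypes for a in g)
    total = 2 * n
    he = 1.0 - sum((c / total) ** 2 for c in allele_counts.values())
    fis = 1.0 - ho / he if he > 0 else 0.0
    return ho, he, fis

# Hypothetical genotypes at one microsatellite locus (allele sizes in bp).
genos = [(152, 156), (152, 152), (156, 160), (152, 156),
         (160, 160), (152, 160), (156, 156), (152, 152)]
ho, he, fis = heterozygosity_stats(genos)
print(f"Ho={ho:.2f}  He={he:.2f}  Fis={fis:.2f}")  # Fis > 0: heterozygote deficit
```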

Relevance: 10.00%

Abstract:

This paper develops a general theory of validation gating for non-linear non-Gaussian models. Validation gates are used in target tracking to cull very unlikely measurement-to-track associations, before remaining association ambiguities are handled by a more comprehensive (and expensive) data association scheme. The essential property of a gate is to accept a high percentage of correct associations, thus maximising track accuracy, but provide a sufficiently tight bound to minimise the number of ambiguous associations. For linear Gaussian systems, the ellipsoidal validation gate is standard, and possesses the statistical property whereby a given threshold will accept a certain percentage of true associations. This property does not hold for non-linear non-Gaussian models. As a system departs from linear-Gaussian, the ellipsoid gate tends to reject a higher than expected proportion of correct associations and permit an excess of false ones. In this paper, the concept of the ellipsoidal gate is extended to permit correct statistics for the non-linear non-Gaussian case. The new gate is demonstrated by a bearing-only tracking example.
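As a point of reference for the linear-Gaussian case described above, a minimal sketch of the standard ellipsoidal validation gate; the non-linear, non-Gaussian extension proposed in the paper is not reproduced here, and the threshold and dimensions are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def ellipsoidal_gate(z, z_pred, S, prob_gate=0.99):
    """Standard validation gate for a linear-Gaussian filter.
    Accept measurement z if the normalised innovation squared (NIS)
    falls inside the chi-square ellipsoid expected to contain a
    fraction prob_gate of true associations."""
    nu = z - z_pred                          # innovation
    nis = float(nu @ np.linalg.solve(S, nu))
    gamma = chi2.ppf(prob_gate, df=z.size)   # gate threshold
    return nis <= gamma, nis, gamma

# Example with a 2-D position measurement (values illustrative).
z      = np.array([10.3, -4.9])
z_pred = np.array([10.0, -5.0])
S      = np.array([[0.25, 0.00],
                   [0.00, 0.25]])            # innovation covariance
accept, nis, gamma = ellipsoidal_gate(z, z_pred, S)
print(accept, round(nis, 2), round(gamma, 2))
```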

Relevance: 10.00%

Abstract:

Rare earth element geochemistry in carbonate rocks is utilized increasingly for studying both modern oceans and palaeoceanography, with additional applications for investigating water–rock interactions in groundwater and carbonate diagenesis. However, the study of rare earth element geochemistry in ancient rocks requires the preservation of their distribution patterns through subsequent diagenesis. The subjects of this study, Pleistocene scleractinian coral skeletons from Windley Key, Florida, have undergone partial to complete neomorphism from aragonite to calcite in a meteoric setting; they allow direct comparison of rare earth element distributions in original coral skeleton and in neomorphic calcite. Neomorphism occurred in a vadose setting along a thin film, with degradation of organic matter playing an initial role in controlling the morphology of the diagenetic front. As expected, minor element concentrations vary significantly between skeletal aragonite and neomorphic calcite, with Sr, Ba and U decreasing in concentration and Mn increasing in concentration in the calcite, suggesting that neomorphism took place in an open system. However, rare earth elements were largely retained during neomorphism, with precipitating cements taking up excess rare earth elements released from dissolved carbonates from higher in the karst system. Preserved rare earth element patterns in the stabilized calcite closely reflect the original rare earth element patterns of the corals and associated reef carbonates. However, minor increases in light rare earth element depletion and negative Ce anomalies may reflect shallow oxidized groundwater processes, whereas decreasing light rare earth element depletion may reflect mixing of rare earth elements from associated microbialites or contamination from insoluble residues. Regardless of these minor disturbances, the results indicate that rare earth elements, unlike many minor elements, behave very conservatively during meteoric diagenesis. As the meteoric transformation of aragonite to calcite is a near worst case scenario for survival of original marine trace element distributions, this study suggests that original rare earth element patterns may commonly be preserved in ancient limestones, thus providing support for the use of ancient marine limestones as proxies for marine rare earth element geochemistry.
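For readers following the Ce-anomaly discussion above, the anomaly is conventionally quantified from shale-normalised (subscript N) concentrations; one common formulation, shown here only as background (the paper's exact normalisation is not stated in the abstract), is:

```latex
% Cerium anomaly estimated by interpolating the expected Ce abundance
% from its neighbours La and Pr (all shale-normalised):
\[
  \mathrm{Ce}/\mathrm{Ce}^{*}
    \;=\; \frac{\mathrm{Ce_N}}{\sqrt{\mathrm{La_N}\,\mathrm{Pr_N}}},
  \qquad
  \mathrm{Ce}/\mathrm{Ce}^{*} < 1 \;\Rightarrow\; \text{negative Ce anomaly}.
\]
```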

Relevance: 10.00%

Abstract:

A forced landing is an unscheduled event in flight requiring an emergency landing, and is most commonly attributed to engine failure, failure of avionics or adverse weather. Since the ability to conduct a successful forced landing is the primary indicator for safety in the aviation industry, automating this capability for unmanned aerial vehicles (UAVs) will help facilitate their integration into, and subsequent routine operations over, civilian airspace. Currently, there is no commercial system available to perform this task; however, a team at the Australian Research Centre for Aerospace Automation (ARCAA) is working towards developing such an automated forced landing system. This system, codenamed Flight Guardian, will operate onboard the aircraft and use machine vision for site identification, artificial intelligence for data assessment and evaluation, and path planning, guidance and control techniques to actualize the landing. This thesis focuses on research specific to the third category, and presents the design, testing and evaluation of a Trajectory Generation and Guidance System (TGGS) that navigates the aircraft to land at a chosen site, following an engine failure.

Firstly, two algorithms are developed that adapt manned aircraft forced landing techniques to suit the UAV planning problem. Algorithm 1 allows the UAV to select a route (from a library) based on a fixed glide range and the ambient wind conditions, while Algorithm 2 uses a series of adjustable waypoints to cater for changing winds. A comparison of both algorithms in over 200 simulated forced landings found that using Algorithm 2, twice as many landings were within the designated area, with an average lateral miss distance of 200 m at the aimpoint. These results present a baseline for further refinements to the planning algorithms.

A significant contribution is seen in the design of the 3-D Dubins Curves planning algorithm, which extends the elementary concepts underlying 2-D Dubins paths to account for powerless flight in three dimensions. This has also resulted in the development of new methods in testing for path traversability, in losing excess altitude, and in the actual path formation to ensure aircraft stability. Simulations using this algorithm have demonstrated lateral and vertical miss distances of under 20 m at the approach point, in wind speeds of up to 9 m/s. This is greater than a tenfold improvement on Algorithm 2 and emulates the performance of manned, powered aircraft.

The lateral guidance algorithm originally developed by Park, Deyst, and How (2007) is enhanced to include wind information in the guidance logic. A simple assumption is also made that reduces the complexity of the algorithm in following a circular path, yet without sacrificing performance. Finally, a specific method of supplying the correct turning direction is also used. Simulations have shown that this new algorithm, named the Enhanced Nonlinear Guidance (ENG) algorithm, performs much better in changing winds, with cross-track errors at the approach point within 2 m, compared to over 10 m using Park's algorithm.

A fourth contribution is made in designing the Flight Path Following Guidance (FPFG) algorithm, which uses path angle calculations and the MacCready theory to determine the optimal speed to fly in winds. This algorithm also uses proportional-integral-derivative (PID) gain schedules to finely tune the tracking accuracies, and has demonstrated in simulation vertical miss distances of under 2 m in changing winds.

A fifth contribution is made in designing the Modified Proportional Navigation (MPN) algorithm, which uses principles from proportional navigation and the ENG algorithm, as well as methods of its own, to calculate the required pitch to fly. This algorithm is robust to wind changes, and is easily adaptable to any aircraft type. Tracking accuracies obtained with this algorithm are also comparable to those obtained using the FPFG algorithm. For all three preceding guidance algorithms, a novel method utilising the geometric and time relationship between aircraft and path is also employed to ensure that the aircraft is still able to track the desired path to completion in strong winds, while remaining stabilised.

Finally, a derived contribution is made in modifying the 3-D Dubins Curves algorithm to suit helicopter flight dynamics. This modification allows a helicopter to autonomously track both stationary and moving targets in flight, and is highly advantageous for applications such as traffic surveillance, police pursuit, security or payload delivery. Each of these achievements serves to enhance the on-board autonomy and safety of a UAV, which in turn will help facilitate the integration of UAVs into civilian airspace for a wider appreciation of the good that they can provide. The automated UAV forced landing planning and guidance strategies presented in this thesis will allow the progression of this technology from the design and developmental stages, through to a prototype system that can demonstrate its effectiveness to the UAV research and operations community.
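As context for the lateral guidance enhancements above, a minimal sketch of the baseline command of the Park, Deyst, and How (2007) algorithm on which the ENG algorithm builds; the wind-aware modifications and turning-direction logic from the thesis are not reproduced, and the numbers are illustrative.

```python
import numpy as np

def park_lateral_accel(pos, vel, ref_point):
    """Baseline nonlinear lateral guidance (Park, Deyst & How, 2007):
    pick a reference point a distance L1 ahead on the desired path and
    command lateral acceleration a = 2 V^2 sin(eta) / L1, where eta is
    the angle from the velocity vector to the line of sight to the
    reference point."""
    to_ref = ref_point - pos
    L1 = np.linalg.norm(to_ref)
    V = np.linalg.norm(vel)
    cross_z = vel[0] * to_ref[1] - vel[1] * to_ref[0]   # signed area term
    eta = np.arctan2(cross_z, np.dot(vel, to_ref))       # signed angle
    return 2.0 * V ** 2 * np.sin(eta) / L1

# Aircraft at the origin flying east at 20 m/s; reference point ahead-left.
a_cmd = park_lateral_accel(np.array([0.0, 0.0]),
                           np.array([20.0, 0.0]),
                           np.array([80.0, 30.0]))
print(f"commanded lateral acceleration = {a_cmd:.2f} m/s^2")
```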

Relevance: 10.00%

Abstract:

Many of the classification algorithms developed in the machine learning literature, including the support vector machine and boosting, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0–1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. To study these issues, we provide a general quantitative relationship between the risk as assessed using the 0–1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial upper bounds on excess risk under the weakest possible condition on the loss function—that it satisfies a pointwise form of Fisher consistency for classification. The relationship is based on a simple variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise, and show that in this case, strictly convex loss functions lead to faster rates of convergence of the risk than would be implied by standard uniform convergence arguments. Finally, we present applications of our results to the estimation of convergence rates in function classes that are scaled convex hulls of a finite-dimensional base class, with a variety of commonly used loss functions.
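The quantitative relationship described above can be stated compactly; the form below follows the standard presentation of this result, with notation chosen here for brevity and possibly differing from the paper's.

```latex
% Excess 0-1 risk is controlled by excess surrogate risk through a
% transform psi obtained pointwise from the surrogate loss phi:
\[
  \psi\bigl(R(f) - R^{*}\bigr) \;\le\; R_{\phi}(f) - R_{\phi}^{*}
  \qquad \text{for every measurable } f,
\]
% where R is the 0-1 risk, R_phi the phi-risk, and the starred terms are
% the corresponding minimal (Bayes) risks. The bound is nontrivial exactly
% when phi is classification-calibrated, i.e. satisfies the pointwise
% Fisher-consistency condition mentioned in the abstract.
```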

Relevance: 10.00%

Abstract:

Early models of bankruptcy prediction employed financial ratios drawn from pre-bankruptcy financial statements and performed well both in-sample and out-of-sample. Since then there has been an ongoing effort in the literature to develop models with even greater predictive performance. A significant innovation in the literature was the introduction into bankruptcy prediction models of capital market data such as excess stock returns and stock return volatility, along with the application of the Black–Scholes–Merton option-pricing model. In this note, we test five key bankruptcy models from the literature using an up-to-date data set and find that they each contain unique information regarding the probability of bankruptcy but that their performance varies over time. We build a new model comprising key variables from each of the five models and add a new variable that proxies for the degree of diversification within the firm. The degree of diversification is shown to be negatively associated with the risk of bankruptcy. This more general model outperforms the existing models in a variety of in-sample and out-of-sample tests.
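For readers unfamiliar with how the Black–Scholes–Merton model enters bankruptcy prediction, the usual route is a Merton-style distance to default; the textbook form below is background only and is not the paper's exact specification.

```latex
% Distance to default over horizon T for asset value V, debt face value D,
% asset drift mu and asset volatility sigma:
\[
  DD \;=\; \frac{\ln(V/D) + \left(\mu - \sigma^{2}/2\right)T}{\sigma\sqrt{T}},
  \qquad
  \Pr(\text{default}) \;=\; \Phi(-DD),
\]
% where Phi is the standard normal CDF; V, mu and sigma are typically
% backed out from observed equity value, excess stock returns and equity
% volatility via the option-pricing relations.
```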

Relevance: 10.00%

Abstract:

People living with lymphohematopoietic neoplasms (LHNs) are known to have increased risks of second cancer; however, the incidence of second cancers after LHNs has not been studied extensively in Australia. The Australian Cancer Database was used to analyze site-specific risk of second primary cancer after LHNs in 127,707 patients diagnosed between 1983 and 2005. Standardized incidence ratios (SIRs) were calculated using population rates. Overall, patients with an LHN had nearly twice the risk of developing a second cancer compared to the Australian population. Among 40,321 patients with non-Hodgkin's lymphoma (NHL), there was over a fourfold significant increase in melanoma, Kaposi sarcoma, cancer of the lip, connective tissue and peripheral nerves, eye, thyroid, Hodgkin's disease (HD) and myeloid leukemia. Among 6,396 patients with HD, there was over a fourfold significant increase in melanoma, Kaposi sarcoma, cancer of the lip, oral cavity and pharynx, female breast, uterine cervix, testis, thyroid, NHL and myeloid leukemia. Among the 33,025 patients with lymphoid and myeloid leukemia, significant excesses were seen for cancers of the lip, eye, connective tissue and peripheral nerves, NHL and HD. Among the 13,856 patients with plasma cell tumors, there was over a fourfold significant increase for melanoma, cancer of the connective tissue and peripheral nerves and myeloid leukemia. Our findings provide evidence of an increased risk of cancer, particularly ultraviolet radiation- and immunosuppression-related cancers, after an LHN in Australia. Copyright © 2010 UICC.
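As a reminder of how the standardized incidence ratios quoted above are constructed (the worked numbers below are invented for illustration, not results from the study):

```latex
% Standardized incidence ratio: observed second cancers O over the number
% E expected if the cohort's person-years PY_i had experienced the
% age-, sex- and period-specific population rates lambda_i:
\[
  \mathrm{SIR} \;=\; \frac{O}{E},
  \qquad
  E \;=\; \sum_{i} \mathrm{PY}_{i}\,\lambda_{i}.
\]
% Illustrative arithmetic: O = 40 observed melanomas against E = 9.5
% expected gives SIR = 40 / 9.5 = 4.2, i.e. "over a fourfold increase".
```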

Relevance: 10.00%

Abstract:

We have grown defect-rich ZnO nanowires on a large scale by the vapour phase reaction method without using any metal catalyst or vacuum system. The defects, including zinc vacancies, oxygen interstitials and oxygen antisites, are related to the excess of oxygen in the ZnO nanowires and are controllable. Nanowires with a high excess of oxygen exhibit brown-colour photoluminescence, due to a dominant emission band composed of violet, blue and green emissions. Those with more balanced Zn and O show a dominant green emission, giving rise to a green colour under UV light illumination. For as-grown nanowires, the violet luminescence that appears just after the band-edge UV emission peak can be enhanced by O2-annealing treatment. However, the green emission shows different trends under O2 annealing, associated with the excess of oxygen in the nanowires.

Relevance: 10.00%

Abstract:

This work examines the effect of landmark placement on the efficiency and accuracy of risk-bounded searches over probabilistic costmaps for mobile robot path planning. In previous work, risk-bounded searches were shown to offer efficiency increases in excess of 70% over normal heuristic search methods. The technique relies on precomputing distance estimates to landmarks, which are then used to produce probability distributions over exact heuristics for use in heuristic searches such as A* and D*. The location and number of these landmarks therefore greatly influence the efficiency of the search and the quality of the risk bounds. Here, four new methods of selecting landmarks for risk-based search are evaluated. Results demonstrate that landmark selection needs to take into account the centrality of the landmark, and that diminishing returns are obtained from using large numbers of landmarks.
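As context for the landmark-selection question above, a minimal sketch of the deterministic landmark (ALT-style) lower bound that landmark-based heuristics build on; the probabilistic, risk-bounded construction over costmaps described in the paper is not reproduced, and the graph below is purely illustrative.

```python
import heapq

def dijkstra(adj, source):
    """Exact shortest-path distances from source over a weighted digraph
    given as {node: [(neighbour, cost), ...]}."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def landmark_heuristic(landmark_dists, node, goal):
    """Admissible A* heuristic from precomputed from-landmark distances:
    the triangle inequality d(L, goal) <= d(L, node) + d(node, goal)
    gives d(L, goal) - d(L, node) as a lower bound on d(node, goal);
    take the best (largest) bound over all landmarks."""
    best = 0.0
    for d in landmark_dists:                 # one distance dict per landmark
        if node in d and goal in d:
            best = max(best, d[goal] - d[node])
    return best

# Tiny illustrative graph with landmarks placed at two "extremes", A and E.
adj = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
       "C": [("D", 1)], "D": [("E", 3)], "E": []}
landmarks = [dijkstra(adj, "A"), dijkstra(adj, "E")]
print(landmark_heuristic(landmarks, "B", "D"))   # lower bound on d(B, D)
```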