993 results for commercial sensitivity
Abstract:
Life Cycle Cost Analysis (LCCA) provides a synopsis of the initial and consequential costs of building-related decisions. These cost figures can be used to justify higher investments, for example in the quality or flexibility of building solutions, through long-term cost reduction. The emerging discipline of asset management is a promising approach to this problem because it can do things that techniques such as balanced scorecards and total quality management cannot. Decisions must be made about operating and maintaining infrastructure assets. An injudicious assumption in life cycle costing is that the longer something lasts, the less it costs over time. A life cycle cost analysis is used as an economic evaluation tool and can be combined with various other analyses. LCCA quantifies costs commonly overlooked by property and asset managers and designers, such as replacement and maintenance costs. The purpose of this research is to examine the Life Cycle Cost Analysis of building floor materials. By applying life cycle cost analysis, the true cost of each material is computed over a projected 60-year building service life with a 5.4% inflation rate, in order to classify and appreciate the differences among the materials. The analysis results showed the high impact of floor material selection on potential service life cycle cost.
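As a point of reference, the present-value treatment implied by this abstract can be sketched as follows. The abstract does not state its exact formulas; the discount rate and the material figures in the sketch are hypothetical placeholders, not data from the study.

# Minimal life cycle cost (LCC) sketch, assuming a standard present-value model.
# The discount rate and material figures below are hypothetical, not study data.

SERVICE_LIFE_YEARS = 60     # building service life stated in the abstract
INFLATION_RATE = 0.054      # 5.4% annual cost escalation, as stated in the abstract
DISCOUNT_RATE = 0.07        # assumed discount rate (not given in the abstract)

def life_cycle_cost(initial, annual_maintenance, replacement, replacement_interval):
    """Present value of initial, maintenance and replacement costs over the service life."""
    total = initial
    for year in range(1, SERVICE_LIFE_YEARS + 1):
        # escalate future costs with inflation, then discount back to present value
        factor = (1 + INFLATION_RATE) ** year / (1 + DISCOUNT_RATE) ** year
        total += annual_maintenance * factor
        if year % replacement_interval == 0 and year < SERVICE_LIFE_YEARS:
            total += replacement * factor
    return total

# Hypothetical floor materials (cost per square metre), for illustration only.
materials = {
    "vinyl":    life_cycle_cost(initial=30, annual_maintenance=4, replacement=30, replacement_interval=15),
    "ceramic":  life_cycle_cost(initial=60, annual_maintenance=2, replacement=60, replacement_interval=30),
    "terrazzo": life_cycle_cost(initial=90, annual_maintenance=1, replacement=90, replacement_interval=60),
}
for name, cost in sorted(materials.items(), key=lambda kv: kv[1]):
    print(f"{name:9s} LCC per m^2: {cost:8.2f}")

A material with the lowest purchase price need not have the lowest present-value cost once maintenance and replacement cycles are included, which is the point the abstract makes about selecting floor materials.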
Abstract:
Sensitivity of dot-immunobinding ELISA on nitrocellulose membrane (Dot-ELISA) was compared with double-antibody sandwich ELISA (DAS-ELISA) on polystyrene plates for the detection of four viruses: bean yellow mosaic virus (BYMV), broad bean stain virus (BBSV), WMV-2 and BYDV. Dot-ELISA was 2 and 10 times more sensitive than DAS-ELISA for the detection of BBSV and WMV-2, respectively, whereas DAS-ELISA was more sensitive than Dot-ELISA for the detection of BYMV. Both techniques were equally sensitive for the detection of BYDV. Using a one-day instead of the two-day procedure, the four viruses were still detectable and the relative sensitivity of both techniques remained the same.
Abstract:
Purpose. The objective of this study was to explore the discriminative capacity of non-contact corneal esthesiometry (NCCE) when compared with the neuropathy disability score (NDS), a validated, standard method of diagnosing clinically significant diabetic neuropathy. Methods. Eighty-one participants with type 2 diabetes, no history of ocular disease, trauma, or surgery and no history of systemic disease that may affect the cornea were enrolled. Participants were ineligible if there was a history of neuropathy due to a non-diabetic cause or a current diabetic foot ulcer or infection. Corneal sensitivity threshold was measured on the eye on the side of the dominant hand, at a distance of 10 mm from the center of the cornea, using a stimulus duration of 0.9 s. The NDS was measured, producing a score ranging from 0 to 10. To determine the optimal cutoff point of corneal sensitivity that identified the presence of neuropathy (diagnosed by NDS), the Youden index and “closest-to-(0,1)” criteria were used. Results. The receiver-operator characteristic curve for NCCE for the presence of neuropathy (NDS ≥3) had an area under the curve of 0.73 (p = 0.001) and, for the presence of moderate neuropathy (NDS ≥6), an area of 0.71 (p = 0.003). Using the Youden index, for an NDS ≥3, the sensitivity of NCCE was 70% and specificity was 75%, and a corneal sensitivity threshold of 0.66 mbar or higher indicated the presence of neuropathy. When NDS ≥6 (indicating risk of foot ulceration) was applied, the sensitivity was 52% with a specificity of 85%. Conclusions. NCCE is a sensitive test for the diagnosis of minimal and more advanced diabetic neuropathy and may serve as a useful surrogate marker for diabetic and perhaps other neuropathies.
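For readers unfamiliar with the two cutoff-selection criteria named in this abstract, a minimal sketch follows. The Youden index is sensitivity + specificity - 1, and the "closest-to-(0,1)" criterion minimises the distance from the ROC point to the ideal corner; the candidate cutoffs and rates in the sketch are hypothetical, not the study's data.

# Sketch of the two cutoff-selection criteria: the Youden index and the
# "closest-to-(0,1)" distance on the ROC curve. Values are illustrative only.
import math

# (cutoff in mbar, sensitivity, specificity) - hypothetical candidates
candidates = [
    (0.40, 0.95, 0.40),
    (0.55, 0.85, 0.60),
    (0.66, 0.70, 0.75),
    (0.80, 0.52, 0.85),
]

def youden(sens, spec):
    # Youden index J = sensitivity + specificity - 1
    return sens + spec - 1.0

def dist_to_corner(sens, spec):
    # distance from the ROC point (1 - specificity, sensitivity) to the corner (0, 1)
    return math.hypot(1.0 - spec, 1.0 - sens)

best_youden = max(candidates, key=lambda c: youden(c[1], c[2]))
best_corner = min(candidates, key=lambda c: dist_to_corner(c[1], c[2]))
print(f"Youden-optimal cutoff:   {best_youden[0]:.2f} mbar")
print(f"Closest-to-(0,1) cutoff: {best_corner[0]:.2f} mbar")

The two criteria often, but not always, select the same threshold; the Youden index weights sensitivity and specificity equally, while the distance criterion penalises large departures in either one more heavily.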
Abstract:
The paper provides an assessment of the performance of commercial Real Time Kinematic (RTK) systems over longer than recommended inter-station distances. The experiments were set up to test and analyse solutions from the i-MAX, MAX and VRS systems operated with three triangle-shaped network cells, each having an average inter-station distance of 69 km, 118 km and 166 km. The performance characteristics appraised included initialisation success rate, initialisation time, RTK position accuracy and availability, ambiguity resolution risk and RTK integrity risk, in order to provide a wider perspective of the performance of the systems under test.

The results showed that the performance of all network RTK solutions assessed was affected by the increase in inter-station distance to similar degrees. The MAX solution achieved the highest initialisation success rate of 96.6% on average, albeit with a longer initialisation time. The two VRS approaches achieved a lower initialisation success rate of 80% over the large triangle. In terms of RTK positioning accuracy after successful initialisation, the results indicated a good agreement between the actual error growth in both horizontal and vertical components and the accuracy specified by the manufacturers in RMS and part per million (ppm) values.

Additionally, the VRS approaches performed better than the MAX and i-MAX when tested under the standard triangle network with a mean inter-station distance of 69 km. However, as the inter-station distance increases, the network RTK software may fail to generate VRS corrections and instead operate in the nearest single-base RTK (or RAW) mode. The position uncertainty occasionally exceeded 2 metres, showing that the RTK rover software was using an incorrectly fixed ambiguity solution to estimate the rover position rather than automatically dropping back to an ambiguity float solution. Results identified that the risk of incorrectly resolving ambiguities reached 18%, 20%, 13% and 25% for i-MAX, MAX, Leica VRS and Trimble VRS respectively when operating over the large triangle network. Additionally, the Coordinate Quality indicator values given by the Leica GX1230 GG rover receiver tended to be over-optimistic and did not function well in identifying incorrectly fixed integer ambiguity solutions. In summary, this independent assessment has identified some problems and failures that can occur in all of the systems tested, especially when they are pushed beyond the recommended limits. While such failures are expected, they offer useful insights into where users should be wary and how manufacturers might improve their products. The results also demonstrate that integrity monitoring of RTK solutions is necessary for precision applications and deserves serious attention from researchers and system providers.
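The "RMS and part per million (ppm)" accuracy specification this abstract compares against is conventionally a constant term plus a term that grows with baseline length. The sketch below illustrates that convention; the spec values (8 mm + 1 ppm horizontal, 15 mm + 1 ppm vertical) are typical manufacturer figures used only for illustration, not those of the systems tested.

# Sketch of the "RMS + ppm" RTK accuracy specification:
# expected error = constant part + (ppm value) * baseline length.
# 1 ppm corresponds to 1 mm of error per km of baseline.

def expected_error_mm(baseline_km, constant_mm, ppm):
    return constant_mm + ppm * baseline_km

for baseline_km in (69, 118, 166):  # the three test-network inter-station distances
    horiz = expected_error_mm(baseline_km, constant_mm=8, ppm=1.0)
    vert = expected_error_mm(baseline_km, constant_mm=15, ppm=1.0)
    print(f"{baseline_km:3d} km baseline: expected horizontal ~{horiz:.0f} mm, vertical ~{vert:.0f} mm")

In network RTK the effective baseline to the correction source is usually shorter than the inter-station distance itself, but the same linear error-growth model applies when comparing observed accuracy against the quoted specification.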
Abstract:
In their quest for resources to support children’s early literacy learning and development, parents encounter and traverse different spaces in which discourses and artifacts are produced and circulated. This paper uses conceptual tools from the field of geosemiotics to examine some commercial spaces designed for parents and children which foreground preschool learning and development. Drawing on data generated in a wider study, I discuss some of the ways in which the material and virtual commercial spaces of a transnational shopping mall company and an educational toy company operate as sites of encounter between discourses and artifacts about children’s early learning and parents of preschoolers. I consider how companies connect with and ‘situate’ people as parents and customers, and then offer pathways designed for parents to follow as they attempt to meet their very young children’s learning and development needs. I argue that these pathways are both material and ideological, and that they increasingly tend to lead parents to the online commercial spaces of the world wide web. I show how companies are using the online environment and hybrid offline and online spaces and flows to reinforce an image of themselves as authoritative brokers of childhood resources for parents, an image that is highly valuable in a policy climate which foregrounds lifelong learning and school readiness.
Abstract:
The Full Federal Court has once again been called upon to explore the limits of s51AA of the Trade Practices Act 1974 (Cth) in the context of a retail tenancy between commercially experienced parties. The decision is Australian Competition and Consumer Commission v Samton Holdings Pty Ltd [2002] FCA 62.
Abstract:
Acoustic emission (AE) is the phenomenon in which high-frequency stress waves are generated by the rapid release of energy within a material from sources such as crack initiation or growth. The AE technique involves recording these stress waves by means of sensors placed on the surface and subsequently analysing the recorded signals to gather information such as the nature and location of the source. AE is one of several non-destructive testing (NDT) techniques currently used for structural health monitoring (SHM) of civil, mechanical and aerospace structures. Its advantages include the ability to provide continuous in-situ monitoring and high sensitivity to crack activity. Despite these advantages, several challenges still exist in the successful application of AE monitoring. Accurate localization of AE sources, discrimination between genuine AE sources and spurious noise sources, and damage quantification for severity assessment are some of the important issues in AE testing and will be discussed in this paper. Various data analysis and processing approaches will be applied to address these issues.
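As an illustration of the source-localization problem mentioned in this abstract, a minimal one-dimensional arrival-time-difference sketch follows; this is a textbook approach with assumed values, not necessarily the processing applied in the paper.

# One-dimensional (linear) AE source localization from the difference in
# arrival times at two sensors. Sensor spacing, wave speed and timing are
# illustrative assumptions.

def locate_source_1d(sensor_spacing_m, dt_s, wave_speed_mps):
    """Source position measured from sensor 1, with sensors at 0 and sensor_spacing_m.
    dt_s is the arrival time at sensor 1 minus the arrival time at sensor 2:
    t1 - t2 = (2x - L) / v, so x = (L + v * dt) / 2."""
    return (sensor_spacing_m + wave_speed_mps * dt_s) / 2.0

# Example: sensors 1.0 m apart, assumed wave speed ~5000 m/s in steel,
# wave arrives at sensor 2 sixty microseconds before sensor 1.
x = locate_source_1d(sensor_spacing_m=1.0, dt_s=60e-6, wave_speed_mps=5000.0)
print(f"Estimated source position: {x:.2f} m from sensor 1")

Two- and three-dimensional localization generalises the same idea to several sensors, and discrimination from spurious noise typically relies on additional signal features rather than arrival times alone.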
Abstract:
This was the question that confronted Wilson J in Jarema Pty Ltd v Michihiko Kato [2004] QSC 451. Facts The plaintiff was the buyer of a commercial property at Bundall. The property comprised a six-storey office building and a basement car park with 54 car parking spaces. The property was sold for $5 million, with the contract being the standard REIQ/QLS form for Commercial Land and Buildings (2nd ed GST reprint). The contract provided for a “due diligence” period. During this period, the buyer’s solicitors discovered that there was no direct access from a public road to the car park entrance. Access to the car park was over a lot of which the Gold Coast City Council was the registered owner under a nomination of trustees, the Council holding the property on trust for car parking and town planning purposes. Due to the absence of a registered easement over the Council’s land, the buyer’s solicitors sought a reduction in the purchase price. The seller would not agree to this. Finally, the sale was completed with the buyer reserving its rights to seek compensation.
Abstract:
There are many issues associated with good faith that will ultimately confront the Australian High Court, and a number of these have been well canvassed. However, one significant issue has attracted relatively little comment. To date, a number of Australian courts (lower in the judicial hierarchy) have been prepared to hold directly, tacitly accept, or assume (without making a final determination) that good faith is implied (as a matter of law) in the performance and enforcement of a very broad class of contract, namely commercial contracts per se. This broad approach is demonstrated in decisions from the Federal Court, the New South Wales Court of Appeal, and the Supreme Courts of Victoria and Western Australia, and has crept into pleadings in commercial matters in Queensland.
Abstract:
The twists and turns in the ongoing development of the implied common law good faith obligation in the commercial contractual arena continue to prove fertile academic ground. Despite a lack of guidance from the High Court, the lower courts have been besieged by claims based, in part, on the implied obligation. Although lower court authority lacks consistency and the ‘decisions in which lower courts have recognised the legitimacy of implication of a term of good faith vary in their suggested rationales’, the implied obligation may provide some comfort to a party to ‘at least some commercial contracts’ faced with a contractual counterpart exhibiting symptoms of bad faith.
Abstract:
In this paper we study both the level of Value-at-Risk (VaR) disclosure and the accuracy of the disclosed VaR figures for a sample of US and international commercial banks. To measure the level of VaR disclosures, we develop a VaR Disclosure Index that captures many different facets of market risk disclosure. Using panel data over the period 1996–2005, we find an overall upward trend in the quantity of information released to the public. We also find that Historical Simulation is by far the most popular VaR method. We assess the accuracy of VaR figures by studying the number of VaR exceedances and whether actual daily VaRs contain information about the volatility of subsequent trading revenues. Unlike the level of VaR disclosure, the quality of VaR disclosure shows no sign of improvement over time. We find that VaR computed using Historical Simulation contains very little information about future volatility.
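For context, Historical Simulation VaR and exceedance counting, the two quantities this abstract examines, can be sketched as follows. The data are simulated for illustration, and the window length and confidence level are common choices assumed here, not the paper's sample or methodology.

# Historical Simulation VaR = empirical quantile of historical P&L;
# an exceedance occurs when the realized loss is larger than the disclosed VaR.
import numpy as np

rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.0, scale=1.0, size=750)  # hypothetical daily trading revenues

def historical_var(pnl_window, confidence=0.99):
    """1-day Historical Simulation VaR: loss at the (1 - confidence) quantile, as a positive number."""
    return -np.quantile(pnl_window, 1.0 - confidence)

window = 250  # roughly one trading year of history, a common choice
exceedances = 0
for t in range(window, len(pnl)):
    var_t = historical_var(pnl[t - window:t])
    if pnl[t] < -var_t:  # realized loss exceeds the previously disclosed VaR
        exceedances += 1

print(f"99% HS VaR exceedances over {len(pnl) - window} days: {exceedances}")

Because Historical Simulation re-uses a fixed window of past outcomes, the disclosed VaR reacts slowly to changes in market conditions, which is consistent with the paper's finding that it conveys little information about future volatility.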