153 results for Amazon metric
in Queensland University of Technology - ePrints Archive
Abstract:
Dynamic load sharing can be defined as a measure of the ability of a heavy vehicle multi-axle group to equalise load across its wheels under typical travel conditions; i.e. in the dynamic sense, at typical travel speeds and operating conditions of that vehicle. Various attempts have been made to quantify the ability of heavy vehicles to equalise the load across their wheels during travel. One of these was the concept of the load sharing coefficient (LSC). Other metrics, such as the dynamic load coefficient (DLC), have been used to compare one heavy vehicle suspension with another for potential road damage. This paper compares these metrics, determines a relationship between DLC and LSC, and presents a sensitivity analysis of this relationship. The shortcomings of these presently available metrics are discussed and a new metric is proposed: the dynamic load equalisation (DLE) measure.
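Neither formula is spelled out in the abstract, but the conventional definitions are simple to compute: the DLC of a wheel is the standard deviation of its dynamic force divided by its mean force, and an LSC close to 1.0 for every wheel of an axle group indicates good load sharing. A minimal sketch under those assumed definitions (the simulated forces are purely illustrative):

```python
import numpy as np

def dynamic_load_coefficient(forces: np.ndarray) -> float:
    """DLC of one wheel: standard deviation of the dynamic wheel force
    divided by its mean (i.e. the coefficient of variation)."""
    return float(np.std(forces) / np.mean(forces))

def load_sharing_coefficients(group_forces: np.ndarray) -> np.ndarray:
    """LSC per wheel of an axle group: mean force carried by each wheel
    divided by the nominal equal share of the group's total mean force.
    `group_forces` has shape (n_wheels, n_samples)."""
    mean_per_wheel = group_forces.mean(axis=1)   # time-averaged force per wheel
    nominal_share = group_forces.mean()          # equal share under perfect sharing
    return mean_per_wheel / nominal_share

# Example: simulated dynamic forces for a four-wheel tandem group (kN).
rng = np.random.default_rng(0)
forces = 25.0 + 2.0 * rng.standard_normal((4, 1000))
print([round(dynamic_load_coefficient(f), 3) for f in forces])
print(load_sharing_coefficients(forces).round(3))  # close to 1.0 => good load sharing
```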
Abstract:
The effect of conversion from forest to pasture on soil carbon stocks has been intensively discussed, but few studies focus on how this land-use change affects carbon (C) distribution across soil fractions in the Amazon basin. We investigated this in the top 20 cm of soil along a chronosequence of sites from native forest to three successively older pastures. We performed a physicochemical fractionation of bulk soil samples to better understand the mechanisms by which soil C is stabilized and to evaluate the contribution of each C fraction to total soil C. Additionally, we used a two-pool model to estimate the mean residence time (MRT) of the slow and active pool C in each fraction. Soil C increased with conversion from forest to pasture in the particulate organic matter (> 250 µm), microaggregate (53-250 µm), and d-clay (< 2 µm) fractions. The microaggregate fraction contained the highest soil C content after the conversion from forest to pasture. The C content of the d-silt fraction decreased with time since conversion to pasture. Forest-derived C remained in all fractions, with the highest concentration in the finest fractions and the largest proportion of forest-derived soil C associated with clay minerals. Results from this work indicate that microaggregate formation is sensitive to changes in management and might serve as an indicator of management-induced soil carbon changes, and that soil C changes in the fractions depend on soil texture.
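The abstract does not give the form of the two-pool model, but a common choice is a pair of first-order pools, C(t) = C_active·exp(-k_active·t) + C_slow·exp(-k_slow·t), with MRT = 1/k for each pool. A hedged sketch of fitting such a model to hypothetical incubation data (not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_pool(t, c_active, k_active, c_slow, k_slow):
    """Two-pool first-order decay: active and slow carbon pools,
    each decomposing exponentially with its own rate constant (1/yr)."""
    return c_active * np.exp(-k_active * t) + c_slow * np.exp(-k_slow * t)

# Hypothetical data: time (years) vs. remaining C in a fraction (g C / kg soil).
t_obs = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])
c_obs = np.array([20.0, 17.5, 15.8, 13.4, 10.9, 9.1])

params, _ = curve_fit(two_pool, t_obs, c_obs,
                      p0=[5.0, 2.0, 15.0, 0.05],  # initial guesses for the two pools
                      bounds=(0, np.inf))
c_a, k_a, c_s, k_s = params
print(f"Active pool MRT ~ {1.0 / k_a:.2f} yr, slow pool MRT ~ {1.0 / k_s:.1f} yr")
```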
Abstract:
This paper presents a general, global approach to the problem of robot exploration, utilizing a topological data structure to guide an underlying Simultaneous Localization and Mapping (SLAM) process. A Gap Navigation Tree (GNT) is used to motivate global target selection, and occluded regions of the environment (called “gaps”) are tracked probabilistically. The process of map construction and the motion of the vehicle alter both the shape and location of these regions. The use of online mapping is shown to reduce the difficulties in implementing the GNT.
Abstract:
Effective enterprise information security policy management requires review and assessment activities to ensure that information security policies are aligned with business goals and objectives. Because security policy management involves both the policy development process and the security policy as its output, security policy assessment requires goal-based metrics for these two elements. However, current security management assessment methods provide only checklist-style assessments that are predefined by industry best practice and do not allow specific goal-based metrics to be developed. Drawing on theories from the literature, this paper proposes the Enterprise Information Security Policy Assessment approach, which expands on the Goal-Question-Metric (GQM) approach. The proposed assessment approach is then applied in a case scenario to illustrate a practical application. It is shown that the proposed framework addresses the requirement for developing assessment metrics and allows process-based and product-based assessment to be undertaken concurrently. Recommendations for further research include empirical studies to validate the propositions and practical application of the proposed assessment approach in case studies, providing opportunities to introduce further enhancements to the approach.
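The GQM idea is to derive measurement top-down: a goal is refined into questions, and each question into metrics that answer it. The sketch below only illustrates that structure for a security-policy assessment; the goal, questions and metrics shown are hypothetical stand-ins, not those developed in the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Metric:
    name: str
    description: str

@dataclass
class Question:
    text: str
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str      # e.g. "evaluate"
    issue: str        # e.g. "alignment"
    obj: str          # the object of measurement
    viewpoint: str    # whose perspective the goal is stated from
    questions: List[Question] = field(default_factory=list)

# Hypothetical GQM tree for a policy assessment (names are illustrative only).
goal = Goal(
    purpose="evaluate", issue="alignment",
    obj="information security policy", viewpoint="security manager",
    questions=[
        Question("Are policy objectives traceable to business goals?",
                 [Metric("traceability_ratio",
                         "policy clauses linked to a business goal / total clauses")]),
        Question("How current is the policy?",
                 [Metric("review_age_days", "days since last approved policy review")]),
    ],
)
for q in goal.questions:
    print(q.text, "->", [m.name for m in q.metrics])
```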
Abstract:
A theoretical basis is required for comparing key features and critical elements in wild fisheries and aquaculture supply chains under a changing climate. Here we develop a new quantitative metric that is analogous to indices used to analyse food-webs and identify key species. The Supply Chain Index (SCI) identifies critical elements as those elements with large throughput rates as well as greater connectivity. The sum of the scores for a supply chain provides a single metric that roughly captures both the resilience and connectedness of a supply chain. Standardised scores can facilitate cross-comparisons both under current conditions and under a changing climate. Identification of key elements along the supply chain may assist in informing adaptation strategies to reduce anticipated future risks posed by climate change. The SCI also provides information on the relative stability of different supply chains, based on whether there is a fairly even spread in the individual scores of the top few key elements or a more critical dependence on a few key individual supply chain elements. We use as a case study the Australian southern rock lobster Jasus edwardsii fishery, which is challenged by a number of climate change drivers such as impacts on recruitment and growth due to changes in large-scale and local oceanographic features. The SCI identifies airports, processors and Chinese consumers as the key elements in the lobster supply chain that merit attention to enhance stability and potentially enable growth. We also apply the index to a further four real-world Australian commercial fishery supply chains and two aquaculture industry supply chains to highlight the utility of a systematic method for describing supply chains. Overall, our simple methodological approach to empirically-based supply chain research provides an objective method for comparing the resilience of supply chains and highlighting components that may be critical.
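The abstract describes the SCI only qualitatively: an element scores highly when it moves a large share of product and is highly connected, and the chain score is the sum of element scores. The sketch below encodes one plausible reading of that idea on a hypothetical lobster chain; the scoring rule, element names and figures are assumptions, not the published index.

```python
from collections import defaultdict

# Hypothetical supply chain as a directed graph: (from, to, annual throughput in tonnes).
links = [
    ("fishers", "processors", 1200.0),
    ("processors", "airports", 1100.0),
    ("airports", "chinese_consumers", 1000.0),
    ("processors", "domestic_retail", 100.0),
]

throughput = defaultdict(float)   # product moving through each element
degree = defaultdict(int)         # number of links touching each element
for src, dst, t in links:
    throughput[src] += t
    throughput[dst] += t
    degree[src] += 1
    degree[dst] += 1

# One plausible reading of the SCI idea: score = throughput x connectivity,
# and the chain score is the sum over all elements.
scores = {e: throughput[e] * degree[e] for e in throughput}
chain_score = sum(scores.values())
for element, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{element:20s} {s / chain_score:.2%} of chain score")
```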
Abstract:
This paper presents a prototype tracking system for tracking people in enclosed indoor environments where there is a high rate of occlusions. The system uses a stereo camera for acquisition and is capable of disambiguating occlusions using a combination of depth map analysis, a two-step ellipse-fitting people detection process, motion models with Kalman filters, and a novel fit metric based on computationally simple object statistics. Testing shows that our fit metric outperforms commonly used position-based metrics and histogram-based metrics, resulting in more accurate tracking of people.
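The paper's fit metric is described only as being based on computationally simple object statistics, so the sketch below is a generic stand-in rather than the published metric: a weighted, scale-free difference between a track's predicted statistics and a candidate detection's statistics, with the smallest score taken as the best match.

```python
def fit_metric(track_stats: dict, detection_stats: dict, weights: dict = None) -> float:
    """Illustrative 'simple object statistics' fit score: a weighted sum of
    normalised differences between a track's predicted statistics and a
    candidate detection's statistics. Lower is a better fit. The statistics
    and weights used in the paper are not specified here."""
    weights = weights or {"height": 1.0, "area": 1.0, "mean_depth": 1.0}
    score = 0.0
    for key, w in weights.items():
        a, b = track_stats[key], detection_stats[key]
        score += w * abs(a - b) / max(abs(a), abs(b), 1e-6)  # scale-free difference
    return score

track = {"height": 1.75, "area": 5200.0, "mean_depth": 3.1}   # state predicted by a Kalman filter
candidates = [
    {"height": 1.73, "area": 5100.0, "mean_depth": 3.2},      # same person, partly occluded
    {"height": 1.60, "area": 3900.0, "mean_depth": 2.4},      # a different person
]
best = min(candidates, key=lambda c: fit_metric(track, c))
print(best)
```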
Abstract:
Search engines have forever changed the way people access and discover knowledge, allowing information about almost any subject to be quickly and easily retrieved within seconds. As increasingly more material becomes available electronically, the influence of search engines on our lives will continue to grow. This presents the problem of how to find what information is contained in each search engine, what bias a search engine may have, and how to select the best search engine for a particular information need. This research introduces a new method, search engine content analysis, to solve this problem. Search engine content analysis is a new development of the traditional information retrieval field of collection selection, which deals with general information repositories. Current research in collection selection relies on full access to the collection or on estimates of collection size, and collection descriptions are often represented as term occurrence statistics. An automatic ontology learning method is developed for search engine content analysis, which trains an ontology with world knowledge of hundreds of different subjects in a multilevel taxonomy. This ontology is then mined to find important classification rules, and these rules are used to perform an extensive analysis of the content of the largest general-purpose Internet search engines in use today. Instead of representing collections as a set of terms, as commonly occurs in collection selection, they are represented as a set of subjects, leading to a more robust representation of information and reducing the impact of synonymy. The ontology-based method was compared with ReDDE (Relevant Document Distribution Estimation method for resource selection) using the standard R-value metric, with encouraging results. ReDDE is the current state-of-the-art collection selection method and relies on collection size estimation. The method was also used to analyse the content of the most popular search engines in use today, including Google and Yahoo. In addition, several specialist search engines, such as PubMed and the U.S. Department of Agriculture search engine, were analysed. In conclusion, this research shows that the ontology-based method mitigates the need for collection size estimation.
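The R-value used in the comparison with ReDDE is a standard collection-selection measure: the merit (for example, the number of relevant documents) accumulated by the top-k collections of the estimated ranking, divided by the merit accumulated by the top-k collections of the optimal, merit-sorted ranking. A small sketch with hypothetical engines and merit counts:

```python
def r_value(estimated_ranking, merit, k):
    """R-value at cut-off k: merit gathered by the top-k collections of the
    estimated ranking relative to the merit gathered by the optimal ranking."""
    optimal_ranking = sorted(merit, key=merit.get, reverse=True)
    got = sum(merit[c] for c in estimated_ranking[:k])
    best = sum(merit[c] for c in optimal_ranking[:k])
    return got / best if best else 0.0

# Hypothetical example: four engines and the number of relevant documents each holds.
merit = {"engine_a": 40, "engine_b": 25, "engine_c": 5, "engine_d": 0}
estimated = ["engine_b", "engine_a", "engine_d", "engine_c"]  # ranking from a selection method
print(r_value(estimated, merit, k=2))  # 65 / 65 = 1.0 at k=2
```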
Abstract:
Construction is an information-intensive industry in which the accuracy and timeliness of information are paramount. The main communication issue in construction is to provide a method of exchanging data between the site operation, the site office and the head office. The information needs under consideration are time-critical and assist in maintaining or improving efficiency at the jobsite; without appropriate computing support, problem solving becomes more difficult. Many researchers have focused on the use of mobile computing devices in the construction industry, and they believe that mobile computers have the potential to solve some of the construction problems that reduce overall productivity. However, to date very little observation has been conducted of the deployment of mobile computers for construction workers on-site. Providing field workers with accurate, reliable and timely information at the location where it is needed supports effectiveness and efficiency at the job site. Bringing a new technology into the construction industry requires not only a better understanding of the application but also proper preparation for allocating resources such as people and investment. With this in mind, an accurate analysis is needed to provide a clear picture of the overall costs and benefits of the new technology. A cost benefit analysis is a method of evaluating the relative merits of a proposed investment project in order to achieve an efficient allocation of resources. It is a way of identifying, portraying and assessing the factors which need to be considered in making rational economic choices. In principle, a cost benefit analysis is a rigorous, quantitative and data-intensive procedure, which requires identification of all potential effects, categorisation of these effects as costs and benefits, quantitative estimation of the extent of each cost and benefit associated with an action, translation of these into a common metric such as dollars, discounting of future costs and benefits into the terms of a given year, and summation of all costs and benefits to see which is greater. Even though many cost benefit analysis methodologies are available for general assessment, there is no specific methodology that can be applied to analysing the costs and benefits of mobile computing devices on the construction site. Hence, the methodology proposed in this document is predominantly adapted from Baker et al. (2000), Department of Finance (1995), and Office of Investment Management (2005). The methodology is divided into four main stages, detailed in ten steps. The methodology is provided for the CRC CI 2002-057-C Project: Enabling Team Collaboration with Pervasive and Mobile Computing and can be seen in detail in Section 3.
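The discounting step described above is the core arithmetic of any such analysis: each year's costs and benefits are converted to present value and summed, giving a net present value and a benefit-cost ratio. A minimal sketch with hypothetical figures, not the project's own estimates:

```python
def discounted(values, rate):
    """Discount a stream of yearly values (year 0 first) to present value."""
    return sum(v / (1.0 + rate) ** year for year, v in enumerate(values))

# Hypothetical figures for deploying mobile computing devices on a construction site
# (AUD, 5-year horizon, 7% discount rate).
costs    = [120_000, 30_000, 30_000, 30_000, 30_000]  # devices, support, training
benefits = [0, 60_000, 70_000, 70_000, 70_000]        # time saved, fewer rework errors

rate = 0.07
pv_costs, pv_benefits = discounted(costs, rate), discounted(benefits, rate)
print(f"NPV = {pv_benefits - pv_costs:,.0f} AUD, "
      f"benefit-cost ratio = {pv_benefits / pv_costs:.2f}")
```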
Abstract:
Using self authorship as a theoretical framework, this chapter examines the relationship between personal epistemology and beliefs about children’s learning for students studying to be child care workers in Australia. Scenario-based interviews were used to investigate how students’ views of knowledge, identity and relationships with others were related to beliefs about how children learn. Implications for vocational education are discussed.
Abstract:
Purpose: To assess the repeatability and validity of lens densitometry derived from the Pentacam Scheimpflug imaging system. Setting: Eye Clinic, Queensland University of Technology, Brisbane, Australia. Methods: This prospective cross-sectional study evaluated 1 eye of subjects with or without cataract. Scheimpflug measurements and slitlamp and retroillumination photographs were taken through a dilated pupil. Lenses were graded with the Lens Opacities Classification System III. Intraobserver and interobserver reliability of 3 observers, each performing 3 repeated Scheimpflug lens densitometry measurements, was assessed. Three lens densitometry metrics were evaluated: linear, for which a line is drawn through the visual axis and a mean lens densitometry value given; peak, the point at which lens densitometry is greatest on the densitogram; and 3-dimensional (3D), in which a fixed, circular 3.0 mm area of the lens is selected and a mean lens densitometry value given. Bland and Altman analysis of repeatability for multiple measures was applied; results were reported as the repeatability coefficient and relative repeatability (RR). Results: Twenty eyes were evaluated. Repeatability was high. Overall, interobserver repeatability was marginally lower than intraobserver repeatability. The peak was the least reliable metric (RR 37.31%) and 3D the most reliable (RR 5.88%). Intraobserver and interobserver lens densitometry values in the cataract group were slightly less repeatable than in the noncataract group. Conclusion: The intraobserver and interobserver repeatability of Scheimpflug lens densitometry was high in eyes with cataract and eyes without cataract, which supports the use of automated lens density scoring with the Scheimpflug system evaluated in this study.
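Bland and Altman's repeatability statistics for repeated measures are straightforward to compute: the within-subject standard deviation is pooled across subjects, the repeatability coefficient is 1.96 x sqrt(2) times that value, and relative repeatability expresses it as a percentage of the grand mean. A sketch with hypothetical densitometry readings (the paper's exact normalisation may differ):

```python
import numpy as np

def repeatability(measurements: np.ndarray):
    """Repeatability of repeated measures, Bland & Altman style.
    `measurements` has shape (n_subjects, n_repeats)."""
    within_var = measurements.var(axis=1, ddof=1).mean()  # pooled within-subject variance
    sw = np.sqrt(within_var)
    rc = 1.96 * np.sqrt(2.0) * sw                         # repeatability coefficient
    rr = 100.0 * rc / measurements.mean()                 # relative repeatability (%)
    return rc, rr

# Hypothetical readings: 5 eyes x 3 repeated Scheimpflug densitometry measurements.
data = np.array([[10.1, 10.3, 10.2],
                 [12.5, 12.4, 12.7],
                 [ 9.8,  9.9, 10.0],
                 [15.2, 15.0, 15.3],
                 [11.0, 11.2, 11.1]])
rc, rr = repeatability(data)
print(f"repeatability coefficient = {rc:.2f}, relative repeatability = {rr:.1f}%")
```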
Abstract:
The social tags in Web 2.0 are becoming another important information source for profiling users' interests and preferences to make personalized recommendations. However, the uncontrolled vocabulary causes many problems for profiling users accurately, such as ambiguity, synonymy, misspelling and low information sharing. To address these problems, this paper proposes using popular tags to represent the actual topics of tags, the content of items, and the topic interests of users. A novel user profiling approach is proposed that first identifies popular tags, then represents users' original tags using the popular tags, and finally generates users' topic interests based on the popular tags. A collaborative filtering based recommender system has been developed that builds the user profile using the proposed approach. The user profile generated using the proposed approach represents user interests more accurately, and information sharing among users in the profile is also increased. Consequently, the neighborhood of a user, which plays a crucial role in collaborative filtering based recommenders, can be determined much more accurately. Experimental results based on real-world data obtained from Amazon.com show that the proposed approach outperforms other approaches.
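A minimal sketch of the profiling idea: raw tags are mapped to popular tags, each user becomes a count vector over the popular-tag vocabulary, and a user's neighborhood is found by profile similarity. The tag mapping and users below are invented for illustration; in the paper the popular tags are identified from the data rather than hard-coded.

```python
import numpy as np
from collections import Counter

# Hypothetical mapping from raw, noisy tags to popular tags.
popular_of = {"sci-fi": "science fiction", "scifi": "science fiction",
              "sf": "science fiction", "cooking": "cooking", "cookery": "cooking"}

def profile(raw_tags, vocab):
    """Represent a user as a count vector over the popular-tag vocabulary."""
    counts = Counter(popular_of.get(t, t) for t in raw_tags)
    return np.array([counts.get(v, 0) for v in vocab], dtype=float)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

vocab = ["science fiction", "cooking"]
users = {
    "u1": profile(["sci-fi", "scifi", "cooking"], vocab),
    "u2": profile(["sf", "sf"], vocab),
    "u3": profile(["cookery", "cooking"], vocab),
}
# Neighborhood of u1: other users ranked by similarity of their popular-tag profiles.
print(sorted((cosine(users["u1"], v), k) for k, v in users.items() if k != "u1"))
```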