928 results for "uniform strong consistency"


Relevance: 20.00%

Abstract:

This paper explores the effect of using regional data for livestock attributes on the estimation of greenhouse gas (GHG) emissions for the northern beef industry in Australia, compared with using state/territory-wide values as currently used in Australia's national GHG inventory report. Regional GHG emissions associated with beef production are reported for 21 defined agricultural statistical regions within state/territory jurisdictions. A management scenario for reduced emissions that could qualify as an Emissions Reduction Fund (ERF) project was used to illustrate the effect of regional-level model parameters on estimated abatement levels. Using regional parameters instead of state-level parameters for liveweight (LW), LW gain and the proportion of cows lactating, together with an expanded number of livestock classes, gives a 5.2% reduction in estimated emissions (range +12% to -34% across regions). Estimated GHG emissions intensity (emissions per kilogram of LW sold) varied across the regions by up to 2.5-fold, ranging from 10.5 kg CO2-e/kg LW sold for the Darling Downs, Queensland, to 25.8 kg CO2-e/kg LW sold for the Pindan and North Kimberley, Western Australia. This range was driven by differences in production efficiency, reproduction rate, growth rate and survival. This suggests that some regions in northern Australia are likely to have substantial opportunities for GHG abatement and higher livestock income. However, this must be coupled with the availability of management activities that can be implemented to improve production efficiency; wet-season phosphorus (P) supplementation is one such practice. An ERF case study comparison showed that P supplementation of a typical-sized herd produced an estimated reduction of 622 t CO2-e/year, or 7%, compared with a non-P-supplemented herd. However, the different model parameters used by the National Inventory Report and the ERF project mean that there was an anomaly between the herd emissions for project cattle excised from the national accounts (13,479 t CO2-e/year) and the baseline herd emissions estimated for the ERF project (8,896 t CO2-e/year) before P supplementation was implemented. Regionalising livestock model parameters in both ERF projects and the national accounts would make it easier to accurately reflect emissions savings from this type of emissions reduction project in Australia's national GHG accounts.
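The headline figures above follow from two simple ratios: emissions intensity is total herd emissions divided by liveweight sold, and abatement is the emissions reduction divided by the baseline. A minimal sketch using the ERF case-study numbers quoted in the abstract (function and variable names are illustrative, not from the paper):

```python
def emissions_intensity(total_emissions_kg_co2e, liveweight_sold_kg):
    """Emissions intensity in kg CO2-e per kg of liveweight sold."""
    return total_emissions_kg_co2e / liveweight_sold_kg

def abatement_fraction(baseline_t_co2e, project_t_co2e):
    """Fractional emissions reduction of a project relative to its baseline."""
    return (baseline_t_co2e - project_t_co2e) / baseline_t_co2e

# ERF case study figures from the abstract: a 622 t CO2-e/year reduction against
# an 8,896 t CO2-e/year baseline herd corresponds to the ~7% quoted.
baseline = 8_896.0           # t CO2-e/year, non-P-supplemented herd
project = baseline - 622.0   # t CO2-e/year, P-supplemented herd
print(f"abatement: {abatement_fraction(baseline, project):.1%}")  # ~7.0%

# Hypothetical regional totals, purely to show the intensity calculation:
print(emissions_intensity(total_emissions_kg_co2e=1.2e9, liveweight_sold_kg=8.0e7))  # 15.0
```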

Relevance: 20.00%

Abstract:

Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems. Memory-intensive applications use the system's distributed memory banks to allocate data, and the automatic memory manager collects garbage left in these memory banks. The garbage collector may need to access remote memory banks, which entails access latency overhead and potential bandwidth saturation of the interconnect between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case-study implementation using the Hotspot Java Virtual Machine. It empirically studies data locality for a Stop-The-World garbage collector when tracing connected objects in NUMA heaps. First, it identifies the locality that exists naturally in 'rooted sub-graphs': connected objects comprising a root object and its reachable set. Second, this dissertation leverages the locality of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism. A garbage collector thread processes a local root and its reachable set, which is likely to contain a large number of objects on the same NUMA node. Third, a garbage collector thread steals references from sibling threads running on the same NUMA node to improve data locality. This research evaluates the new NUMA-aware garbage collector using seven benchmarks from the established real-world DaCapo benchmark suite, together with the widely used SPECjbb benchmark, a Neo4J graph database Java benchmark, and an artificial benchmark. On a multi-hop NUMA architecture the NUMA-aware garbage collector shows an average performance improvement of 15%, and this gain is shown to result from improved NUMA memory access in a ccNUMA system. Fourth, the existing Hotspot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines: the policy relies on outdated assumptions and generates a constant thread count, yet the production Hotspot JVM still uses it. This research shows that the optimal number of garbage collection threads is application-specific, and that configuring that optimal number yields better collection throughput than the default policy. Fifth, this dissertation designs and implements a runtime technique that uses heuristics from dynamic collection behavior to calculate an optimal number of garbage collector threads for each collection cycle. The results show an average improvement of 21% in garbage collection performance for the DaCapo benchmarks.
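The NUMA-aware mechanism described above boils down to two scheduling rules: give each garbage collector thread roots whose reachable sets mostly live on its own NUMA node, and let an idle thread steal work from siblings on the same node before going off-node. The sketch below illustrates that policy under a deliberately simplified model in which every root and thread carries a node id; the names and structure are illustrative, not Hotspot code:

```python
from collections import defaultdict, deque

def assign_roots(roots, threads):
    """Rule 1: hand each root to a GC thread running on the root's NUMA node."""
    queues = {t["id"]: deque() for t in threads}
    by_node = defaultdict(list)
    for t in threads:
        by_node[t["node"]].append(t["id"])
    rr = defaultdict(int)  # round-robin index per node
    for root in roots:
        candidates = by_node.get(root["node"]) or [t["id"] for t in threads]
        tid = candidates[rr[root["node"]] % len(candidates)]
        rr[root["node"]] += 1
        queues[tid].append(root)
    return queues

def steal_from(thread, threads, queues):
    """Rule 2: an idle thread steals from same-node siblings first, then remotely."""
    same_node = [t for t in threads if t["node"] == thread["node"] and t["id"] != thread["id"]]
    remote = [t for t in threads if t["node"] != thread["node"]]
    for victim in same_node + remote:
        if queues[victim["id"]]:
            return queues[victim["id"]].pop()
    return None

threads = [{"id": 0, "node": 0}, {"id": 1, "node": 0}, {"id": 2, "node": 1}]
roots = [{"id": i, "node": i % 2} for i in range(6)]
queues = assign_roots(roots, threads)
print(steal_from(threads[1], threads, queues))  # prefers a victim on node 0
```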

Relevance: 20.00%

Abstract:

This dissertation research points out major challenges with current Knowledge Organization (KO) systems, such as subject gateways or web directories: (1) current systems use traditional knowledge organization schemes based on controlled vocabulary, which is not well suited to web resources, and (2) information is organized by professionals rather than by users, so it does not reflect users' intuitively and instantaneously expressed current needs. In order to explore users' needs, I examined social tags, which are user-generated uncontrolled vocabulary. As investment in professionally developed subject gateways and web directories diminishes (support for both BUBL and Intute, examined in this study, is being discontinued), understanding the characteristics of social tagging becomes even more critical. Several researchers have discussed social tagging behavior and its usefulness for classification or retrieval; however, further research is needed to qualitatively and quantitatively investigate social tagging in order to verify its quality and benefit. This research examined the indexing consistency of social tagging in comparison to professional indexing to assess the quality and efficacy of tagging. The data analysis was divided into three phases: analysis of indexing consistency, analysis of tagging effectiveness, and analysis of tag attributes. Most indexing consistency studies have been conducted with a small number of professional indexers, have tended to exclude users, and have focused mainly on physical library collections. This dissertation research bridged these gaps by (1) extending the scope of resources to various web documents indexed by users and (2) employing the Information Retrieval (IR) Vector Space Model (VSM)-based indexing consistency method, which is suitable for dealing with a large number of indexers. In the second phase, an analysis of tagging effectiveness using tagging exhaustivity and tag specificity was conducted to ameliorate the drawbacks of a consistency analysis based only on quantitative measures of vocabulary matching. Finally, to investigate tagging patterns and behaviors, a content analysis of tag attributes was conducted based on the FRBR model. The findings revealed greater consistency across all subjects among taggers than among the two groups of professionals. Examination of the exhaustivity and specificity of social tags provided insights into particular characteristics of tagging behavior and its variation across subjects. To further investigate the quality of tags, a Latent Semantic Analysis (LSA) was conducted to determine to what extent tags are conceptually related to professionals' keywords; tags of higher specificity tended to have higher semantic relatedness to professionals' keywords, which leads to the conclusion that a term's power as a differentiator is related to its semantic relatedness to documents. The findings on tag attributes identified important bibliographic attributes of tags beyond describing the subjects or topics of a document, and showed that tags have essential attributes matching those defined in FRBR. Furthermore, in terms of specific subject areas, the findings identified that taggers exhibited different tagging behaviors, with distinctive features and tendencies, on web documents characterizing heterogeneous digital media resources. These results lead to the conclusion that there should be an increased awareness of diverse user needs by subject in order to improve metadata in practical applications. This dissertation research is a first necessary step towards utilizing social tagging in digital information organization by verifying its quality and efficacy. It combined quantitative (statistical) and qualitative (content analysis using FRBR) approaches to the vocabulary analysis of tags, providing a more complete examination of tag quality. Through the detailed analysis of tag properties undertaken in this dissertation, we have a clearer understanding of the extent to which social tagging can be used to replace (and in some cases improve upon) professional indexing.
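One common way to operationalize the VSM-based indexing consistency measure mentioned above is cosine similarity: each indexer (or the aggregated tagging crowd) is represented as a term-frequency vector over the vocabulary assigned to a document, and consistency is the cosine of the angle between two such vectors. A minimal sketch under that reading (the dissertation's exact weighting scheme may differ, and the example terms are made up):

```python
import math
from collections import Counter

def cosine_consistency(terms_a, terms_b):
    """Indexing consistency of two term assignments as cosine similarity
    between their term-frequency vectors (1.0 = identical, 0.0 = disjoint)."""
    va, vb = Counter(terms_a), Counter(terms_b)
    vocab = set(va) | set(vb)
    dot = sum(va[t] * vb[t] for t in vocab)
    norm_a = math.sqrt(sum(v * v for v in va.values()))
    norm_b = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

tagger_terms = ["metadata", "tagging", "folksonomy", "web"]
professional_terms = ["metadata", "indexing", "folksonomy"]
print(round(cosine_consistency(tagger_terms, professional_terms), 3))  # 0.577
```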

Relevance: 20.00%

Abstract:

Every Argo data file submitted by a DAC for distribution on the GDAC has its format and data consistency checked by the Argo FileChecker. Two types of checks are applied:

1. Format checks, which ensure that the file formats match the Argo standards precisely.

2. Data consistency checks, which are performed on a file after it passes the format checks. These checks do not duplicate any of the quality control checks performed elsewhere; they can be thought of as "sanity checks" to ensure that the data are consistent with each other. They enforce data standards and ensure that certain data values are reasonable and/or consistent with other information in the files. Examples of the "data standard" checks are the "mandatory parameters" defined for meta-data files and the technical parameter names in technical data files.

Files with format or consistency errors are rejected by the GDAC and are not distributed. Less serious problems generate warnings, and the file is still distributed on the GDAC.

Reference tables and data standards: many of the consistency checks involve comparing the data to the published reference tables and data standards. These tables are documented in the User's Manual. (The FileChecker implements "text versions" of these tables.)
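As a rough illustration of what a "data standard" consistency check looks like, the sketch below verifies that a metadata record contains a set of mandatory parameters and reports anything missing as an error rather than a warning. The parameter names and record structure are hypothetical placeholders rather than the actual Argo reference tables:

```python
# Hypothetical mandatory parameters; the real list lives in the Argo
# reference tables documented in the User's Manual.
MANDATORY_META_PARAMS = {"PLATFORM_NUMBER", "PROJECT_NAME", "LAUNCH_DATE"}

def check_mandatory_params(record: dict) -> list[str]:
    """Return error messages for missing or empty mandatory parameters."""
    errors = []
    for param in sorted(MANDATORY_META_PARAMS):
        value = record.get(param)
        if value is None or str(value).strip() == "":
            errors.append(f"mandatory parameter {param} is missing or empty")
    return errors

record = {"PLATFORM_NUMBER": "6901234", "PROJECT_NAME": ""}
for err in check_mandatory_params(record):
    print("ERROR:", err)   # a file with errors would be rejected by the GDAC
```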

Relevance: 20.00%

Abstract:

Although the primary objective in designing a structure is to support the external loads, achieving an optimal layout that reduces all costs associated with the structure is of increasing interest. The problem of finding the optimal layout for bridge-like structures subjected to a uniform load is considered. The problem is formulated following a theory on the economy of frame structures, using the stress volume as the objective function and including the selection of appropriate values for the statically indeterminate reactions. It is solved in a function space of finite dimension, instead of using a general variational approach, yielding near-optimal solutions. The results obtained with this practical strategy are very close to the best layouts known to date, with differences of less than 2% in stress volume, but with a simpler layout that can be recognized in some real bridges. The strategy could guide the preliminary design of bridges subject to a wide class of costs.
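The stress-volume objective can be illustrated on a toy problem far simpler than the paper's bridge layouts: a symmetric two-bar truss carrying a vertical load P down to two supports offset a horizontal distance a on either side. The load path (force times length summed over the members, proportional to stress volume for a single allowable stress) is P(a^2 + d^2)/d for truss depth d, and a simple search recovers the classical optimum d = a, i.e. bars at 45 degrees. This is only a sketch of the objective, not the paper's formulation:

```python
def load_path(P, a, d):
    """Load path (sum of |force| x length) of a symmetric two-bar truss:
    a vertical load P carried to supports offset +/-a horizontally, depth d."""
    return P * (a**2 + d**2) / d

P, a = 1.0, 1.0
depths = [0.1 * k for k in range(1, 51)]            # candidate depths 0.1..5.0
best = min(depths, key=lambda d: load_path(P, a, d))
print(round(best, 2), round(load_path(P, a, best), 3))  # 1.0, 2.0 -> 45 degree bars
```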

Relevance: 20.00%

Abstract:

Effective decision making uses various databases, including both micro- and macro-level datasets. In many cases, ensuring consistency between the two levels is a major challenge. Different types of problems can occur, and several methods can be used to solve them. The paper concentrates on the input alignment of household income for microsimulation, which refers to adjusting the elements of a micro-data survey (EU-SILC) using macro data from administrative sources. We use a combined micro-macro model called ECONS-TAX for this alignment. We also produced model projections up to 2015, which is important because the official EU-SILC micro database will only become available in Hungary in the summer of 2017. The paper presents our estimates of the dynamics of income elements and the changes in income inequality. Results show that the aligned data yield a different level of income inequality but do not affect the direction of year-to-year change. However, when we analyzed policy changes, the use of aligned data produced larger differences in both income levels and their dynamics.
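In its simplest form, input alignment scales each income element of the survey so that its weighted total matches the corresponding macro aggregate from administrative sources. The ECONS-TAX model is more elaborate than this, so the sketch below is only a minimal illustration of the idea, with made-up numbers and variable names:

```python
def align_income(micro_values, weights, macro_total):
    """Proportionally scale a micro-level income element so that its
    weighted total matches the macro (administrative) aggregate."""
    weighted_total = sum(v * w for v, w in zip(micro_values, weights))
    factor = macro_total / weighted_total
    return [v * factor for v in micro_values], factor

# Hypothetical example: survey wages under-report the administrative total.
wages = [1_200.0, 2_500.0, 900.0]      # survey (EU-SILC-like) observations
weights = [1_000.0, 1_500.0, 2_000.0]  # survey grossing-up weights
macro_wages = 8_000_000.0              # administrative aggregate
aligned, factor = align_income(wages, weights, macro_wages)
print(round(factor, 3), [round(v, 1) for v in aligned])
```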

Relevance: 20.00%

Abstract:

This thesis is concerned with change point analysis for time series, i.e. with the detection of structural breaks in time-ordered, random data. This long-standing research field has regained popularity over the last few years and, like statistical analysis in general, is still undergoing a transformation towards high-dimensional problems. We focus on the fundamental »change in the mean« problem and provide extensions of the classical non-parametric Darling-Erdős-type cumulative sum (CUSUM) testing and estimation theory within high-dimensional Hilbert space settings. In the first part we contribute to (long-run) principal-component-based testing methods for Hilbert space valued time series under a rather broad (abrupt, epidemic, gradual, multiple) change setting and under dependence. For the dependence structure we consider either traditional m-dependence assumptions or the more recently developed m-approximability conditions, which cover, e.g., MA, AR and ARCH models. We derive Gumbel and Brownian bridge type approximations of the distribution of the test statistic under the null hypothesis of no change, and consistency conditions under the alternative. A new formulation of the test statistic using projections onto subspaces allows us to simplify the standard proof techniques and to weaken common assumptions on the covariance structure. Furthermore, we propose to adjust the principal components by an implicit estimation of a (possible) change direction. This approach adds flexibility to projection-based methods, weakens typical technical conditions and provides better consistency properties under the alternative. In the second part we contribute to estimation methods for common changes in the means of panels of Hilbert space valued time series. We analyze weighted CUSUM estimates within a recently proposed »high-dimensional low sample size (HDLSS)« framework, where the sample size is fixed but the number of panels increases. We derive sharp conditions on the »pointwise asymptotic accuracy« or »uniform asymptotic accuracy« of those estimates in terms of the weighting function. In particular, we prove that a covariance-based correction of Darling-Erdős-type CUSUM estimates is required to guarantee uniform asymptotic accuracy under moderate dependence conditions within panels, and that these conditions are fulfilled, e.g., by any MA(1) time series. As a counterexample we show that for AR(1) time series close to the non-stationary case the dependence is too strong and uniform asymptotic accuracy cannot be ensured. Finally, we conduct simulations to demonstrate that our results are practically applicable and that our methodological suggestions are advantageous.
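For orientation, the scalar special case of the change-in-the-mean CUSUM statistic is T_n = max_k |S_k - (k/n) S_n| / (sigma_hat * sqrt(n)), with the maximizing index serving as the change point estimate. The thesis works with Hilbert space valued series, projections and Darling-Erdős-type weightings; the sketch below is only the basic unweighted univariate version on simulated data:

```python
import numpy as np

def cusum_change_in_mean(x):
    """Unweighted CUSUM statistic and change point estimate for a scalar series.

    Returns (T_n, k_hat) with T_n = max_k |S_k - (k/n) S_n| / (sigma_hat * sqrt(n)).
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    s = np.cumsum(x)
    k = np.arange(1, n + 1)
    dev = np.abs(s - k / n * s[-1])
    sigma_hat = x.std(ddof=1)   # sample standard deviation (independence assumed;
                                # the thesis uses long-run variance estimators)
    k_hat = int(np.argmax(dev)) + 1
    return dev[k_hat - 1] / (sigma_hat * np.sqrt(n)), k_hat

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.0, 1.0, 100)])
print(cusum_change_in_mean(x))  # large statistic, change point near k = 100
```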

Relevance: 20.00%

Abstract:

Each Circuit Public Defender Selection Panel is responsible for nominating a person to serve as the Circuit Public Defender for its circuit. This document lists guidelines to follow in that selection.

Relevance: 20.00%

Abstract:

The 2007 Indigent Defense Act provides that each county must elect its representative(s), from among the active licensed attorneys who reside within the county, to serve on its Circuit Public Defender Selection Panel. The procedures included in this document have been adopted by the Commission on Indigent Defense for the Election of the Circuit Public Defender Selection Panels and the Nomination of Circuit Public Defenders.

Relevance: 20.00%

Abstract:

Currently, the Division of Appeals and Hearings of the South Carolina Department of Health and Human Services has no specific presence on the agency's website to provide information or to allow for the electronic submission of appeals. This project's focus was developing an online presence for the Division on SCDHHS' website. The page will make the Division's procedures publicly available to beneficiaries, providers, and agency program staff who attend hearings. Additionally, parties will have a secure online portal through which they can file appeals and upload supporting documentation, reducing the need to send appeals via first-class mail. The online appeal portal will further the agency's goal of reducing paper.

Relevance: 20.00%

Abstract:

Collecting ground truth data is an important step to be accomplished before performing a supervised classification. However, its quality depends on human, financial and time resources. It is therefore important to apply a validation process to assess the reliability of the acquired data. In this study, agricultural information was collected in the Brazilian Amazonian State of Mato Grosso in order to map crop expansion based on MODIS EVI temporal profiles. The field work was carried out through interviews for the years 2005-2006 and 2006-2007. This work presents a methodology to validate the quality of the training data and determine the optimal sample to be used according to the classifier employed. The technique is based on the detection of outlier pixels for each class and is carried out by computing the Mahalanobis distance for each pixel: the higher the distance, the further the pixel is from the class centre. Preliminary observations based on the coefficient of variation confirm the efficiency of the technique for detecting outliers. Various subsamples are then defined by applying different thresholds to exclude outlier pixels from the classification process. The classification results demonstrate the robustness of the Maximum Likelihood and Spectral Angle Mapper classifiers, which were insensitive to outlier exclusion. By contrast, the decision tree classifier showed better results when 7.5% of the pixels were removed from the training data. The technique managed to detect outliers for all classes. In this study, few outliers were present in the training data, so classification quality was not strongly affected by them.
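A minimal sketch of the outlier-screening step described above: compute, for one class, the squared Mahalanobis distance of every training pixel to the class centre using the class covariance, and drop the pixels with the largest distances. The fixed trimming fraction of 7.5% mirrors the threshold that helped the decision tree classifier in the study; the data and function names are illustrative:

```python
import numpy as np

def mahalanobis_distances(pixels):
    """Squared Mahalanobis distance of each pixel to its class centre."""
    pixels = np.asarray(pixels, dtype=float)
    diff = pixels - pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

def trim_outliers(pixels, fraction=0.075):
    """Drop the `fraction` of pixels farthest from the class centre
    (0.075 mirrors the 7.5% exclusion that helped the decision tree)."""
    d2 = mahalanobis_distances(pixels)
    cutoff = np.quantile(d2, 1.0 - fraction)
    return np.asarray(pixels)[d2 <= cutoff]

rng = np.random.default_rng(1)
cls = rng.normal(0.4, 0.05, size=(200, 6))   # one crop class, 6-date EVI profile
cls[0] = 0.9                                 # an obviously mislabelled pixel
print(len(trim_outliers(cls)))               # ~185 pixels kept, outlier removed
```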