965 results for replicazione ottimistica, eventual consistency
Abstract:
Most authors assume that inconsistency is the natural behaviour of the decision-maker. This paper investigates the main sources of inconsistency and analyses methods for reducing or eliminating it. Decision support systems can contain interactive modules for that purpose. A system with consistency control has three stages. First, consistency is checked: a consistency measure is needed. Secondly, approval or rejection is decided: a threshold value of the inconsistency measure is needed. Finally, if inconsistency is ‘high’, corrections are made: an inconsistency-reducing method is needed. This paper reviews the difficulties at each stage. An entirely different approach is to design the decision support system to force the decision-maker to give consistent values at each step of answering pairwise comparison questions. An interactive questioning procedure resulting in consistent (sub)matrices is demonstrated.
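The first stage calls for a consistency measure with a threshold. The abstract does not name a specific one, but a common choice for pairwise comparison matrices is Saaty's consistency ratio; the sketch below uses the standard 0.1 acceptance threshold and random-index table, which are conventions of that method rather than details taken from the paper:

```python
import numpy as np

def consistency_ratio(A):
    """Saaty consistency ratio CR = CI / RI for a pairwise comparison matrix A."""
    n = A.shape[0]
    # Principal eigenvalue of the positive reciprocal matrix
    lam_max = max(np.linalg.eigvals(A).real)
    ci = (lam_max - n) / (n - 1)  # consistency index
    # Saaty's random indices for n = 1..10
    ri = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49][n - 1]
    return ci / ri if ri > 0 else 0.0

# A perfectly consistent 3x3 matrix: a_ij = w_i / w_j
w = np.array([1.0, 2.0, 4.0])
A = np.outer(w, 1.0 / w)
print(consistency_ratio(A))  # ~0.0, well below the usual 0.1 threshold
```

A matrix whose ratio exceeds the threshold would be sent back for correction, matching the third stage described above.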
Abstract:
The paper provides an axiomatic analysis of scoring procedures based on paired comparisons. Several methods have been proposed for these generalized tournaments, and some of them have also been characterized by a set of properties. The choice of an appropriate method is supported by a discussion of their theoretical properties. In the paper we focus on the connections between self-consistency and self-consistent monotonicity, two axioms based on comparisons of the objects’ performance. The contradiction between self-consistency and independence of irrelevant matches is revealed, as well as some possible weakenings and extensions of these properties. Their satisfiability is examined for three scoring procedures (the score, generalised row sum and least squares methods), each of which can be calculated as the solution of a system of linear equations. Our results contribute to the problem of finding a proper paired-comparison-based scoring method.
Abstract:
This study examined the effect of schemas on consistency and accuracy of memory across interviews, providing theoretical hypotheses explaining why inconsistencies may occur. The design manipulated schema-typicality of items (schema-typical and atypical), question format (free recall, cued recall and recognition) and retention interval (immediate/2 weeks and 2 weeks/4 weeks). Consistency, accuracy and experiential quality of memory were measured. All independent variables affected accuracy and experiential quality of memory, while question format was the only variable affecting consistency. These results challenge the commonly held notion in the legal arena that consistency is a proxy for accuracy. The study also demonstrates that other variables, such as item typicality and retention interval, have different effects on consistency and accuracy in memory.
Abstract:
Acknowledgments: The authors wish to thank the crews, fishermen and scientists who conducted the various surveys from which data were obtained, and Mark Belchier and Simeon Hill for their contributions. This work was supported by the Government of South Georgia and the South Sandwich Islands. Additional logistical support was provided by the South Atlantic Environmental Research Institute, with thanks to Paul Brickle. Thanks to Stephen Smith of Fisheries and Oceans Canada (DFO) for help in constructing bootstrap confidence limits. Paul Fernandes receives funding from the MASTS pooling initiative (The Marine Alliance for Science and Technology for Scotland), and their support is gratefully acknowledged. MASTS is funded by the Scottish Funding Council (grant reference HR09011) and contributing institutions. We also wish to thank two anonymous referees for their helpful suggestions on earlier versions of this manuscript.
Abstract:
“Eventual Benefits: Kristevan Readings of Female Subjectivity in Henry James’s Late Novels” examines the construction of female subjectivity in the novels of Henry James’s major phase, notably What Maisie Knew, The Awkward Age, The Portrait of a Lady, The Wings of the Dove and The Golden Bowl. James’s female characters often find themselves in social or familial circumstances that discourage psychic autonomy, and these subordinations are especially harmful to the author’s young characters. As for the expatriate American women of these novels, they experience social and pecuniary objectification by Europeans; in response, they deploy counter-tactics to reverse their diminishment and assert their individuality. My investigation of the protocols that underwrite these women’s emancipation proceeds within the framework of the theories advanced by Julia Kristeva. Drawing on the Kristevan concepts of abjection and melancholy, intertextuality, maternity and pregnancy, forgiveness and foreignness, this thesis explores the disparate strategies of resistance of James’s women and arrives at a conception of female subjectivity as a continually deferred process.
Abstract:
In recent years, we have observed a significant number of new robotic systems in science, industry, and everyday life. To reduce the complexity of these systems, industry builds robots designated for the execution of a specific task, such as vacuum cleaning, autonomous driving, observation, or transportation. As a result, such robotic systems need to combine their capabilities to accomplish complex tasks that exceed the abilities of individual robots. However, to achieve emergent cooperative behavior, multi-robot systems require a decision process that copes with the communication challenges of the application domain. This work investigates a distributed multi-robot decision process that addresses unreliable and transient communication. The process is composed of five steps, which we embedded into the ALICA multi-agent coordination language, guided by the PROViDE negotiation middleware. The first step encompasses the specification of the decision problem, which is an integral part of the ALICA implementation. In our decision process, we describe multi-robot problems by continuous nonlinear constraint satisfaction problems. The second step addresses the calculation of solution proposals for this problem specification. Here, we propose an efficient solution algorithm that integrates incomplete local search and interval propagation techniques into a satisfiability solver, forming a satisfiability modulo theories (SMT) solver. In the third step, the PROViDE middleware replicates the solution proposals among the robots. This replication process is parameterized with a distribution method, which determines the consistency properties of the proposals. The fourth step addresses conflict resolution: an acceptance method ensures that each robot supports one of the replicated proposals.
As we integrated the conflict resolution into the replication process, a sound selection of the distribution and acceptance methods leads to an eventual convergence of the robot proposals. In order to avoid the execution of conflicting proposals, the last step comprises a decision method, which selects a proposal for implementation in case the conflict resolution fails. The evaluation of our work shows that the usage of incomplete solution techniques of the constraint satisfaction solver outperforms the runtime of other state-of-the-art approaches for many typical robotic problems. We further show by experimental setups and practical application in the RoboCup environment that our decision process is suitable for making quick decisions in the presence of packet loss and delay. Moreover, PROViDE requires less memory and bandwidth compared to other state-of-the-art middleware approaches.
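PROViDE's actual distribution and acceptance methods are not detailed in the abstract. As an illustrative sketch of why a sound acceptance rule yields eventual convergence, consider a toy rule that always keeps the proposal ranked highest under a total order; the `Proposal` fields and utility values below are invented for the example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposal:
    utility: float   # hypothetical score assigned by the proposing robot
    robot_id: int    # tie-breaker so the ordering is total and deterministic

def accept(current, incoming):
    """Keep the proposal with the higher (utility, robot_id).
    Because this is a total order, every robot that eventually sees the
    same set of proposals converges to the same one, regardless of
    message ordering, duplication, or transient loss followed by retry."""
    if current is None:
        return incoming
    return max(current, incoming, key=lambda p: (p.utility, p.robot_id))

# Two robots receive the same proposals in different orders; both converge.
proposals = [Proposal(0.7, 1), Proposal(0.9, 2), Proposal(0.9, 0)]
state_a = state_b = None
for p in proposals:
    state_a = accept(state_a, p)
for p in reversed(proposals):
    state_b = accept(state_b, p)
assert state_a == state_b == Proposal(0.9, 2)
```

The real acceptance methods must additionally cope with proposals that never arrive, which is why the decision process above keeps a fallback decision method for when conflict resolution fails.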
Abstract:
Mode of access: Internet.
Abstract:
This paper explores the effect of using regional data for livestock attributes on estimation of greenhouse gas (GHG) emissions for the northern beef industry in Australia, compared with using state/territory-wide values, as currently used in Australia’s national GHG inventory report. Regional GHG emissions associated with beef production are reported for 21 defined agricultural statistical regions within state/territory jurisdictions. A management scenario for reduced emissions that could qualify as an Emissions Reduction Fund (ERF) project was used to illustrate the effect of regional level model parameters on estimated abatement levels. Using regional parameters, instead of state level parameters, for liveweight (LW), LW gain and proportion of cows lactating and an expanded number of livestock classes, gives a 5.2% reduction in estimated emissions (range +12% to –34% across regions). Estimated GHG emissions intensity (emissions per kilogram of LW sold) varied across the regions by up to 2.5-fold, ranging from 10.5 kg CO2-e kg–1 LW sold for Darling Downs, Queensland, through to 25.8 kg CO2-e kg–1 LW sold for the Pindan and North Kimberley, Western Australia. This range was driven by differences in production efficiency, reproduction rate, growth rate and survival. This suggests that some regions in northern Australia are likely to have substantial opportunities for GHG abatement and higher livestock income. However, this must be coupled with the availability of management activities that can be implemented to improve production efficiency; wet season phosphorus (P) supplementation being one such practice. An ERF case study comparison showed that P supplementation of a typical-sized herd produced an estimated reduction of 622 t CO2-e year–1, or 7%, compared with a non-P supplemented herd. 
However, the different model parameters used by the National Inventory Report and the ERF project mean that there was an anomaly between the herd emissions for project cattle excised from the national accounts (13 479 t CO2-e year–1) and the baseline herd emissions estimated for the ERF project (8 896 t CO2-e year–1) before P supplementation was implemented. Regionalising livestock model parameters in both ERF projects and the national accounts offers the attraction of more easily and accurately reflecting emissions savings from this type of emissions reduction project in Australia’s national GHG accounts.
Abstract:
This dissertation research points out major challenging problems with current Knowledge Organization (KO) systems, such as subject gateways or web directories: (1) the current systems use traditional knowledge organization systems based on controlled vocabulary which is not very well suited to web resources, and (2) information is organized by professionals not by users, which means it does not reflect intuitively and instantaneously expressed users’ current needs. In order to explore users’ needs, I examined social tags which are user-generated uncontrolled vocabulary. As investment in professionally-developed subject gateways and web directories diminishes (support for both BUBL and Intute, examined in this study, is being discontinued), understanding characteristics of social tagging becomes even more critical. Several researchers have discussed social tagging behavior and its usefulness for classification or retrieval; however, further research is needed to qualitatively and quantitatively investigate social tagging in order to verify its quality and benefit. This research particularly examined the indexing consistency of social tagging in comparison to professional indexing to examine the quality and efficacy of tagging. The data analysis was divided into three phases: analysis of indexing consistency, analysis of tagging effectiveness, and analysis of tag attributes. Most indexing consistency studies have been conducted with a small number of professional indexers, and they tended to exclude users. Furthermore, the studies mainly have focused on physical library collections. This dissertation research bridged these gaps by (1) extending the scope of resources to various web documents indexed by users and (2) employing the Information Retrieval (IR) Vector Space Model (VSM) - based indexing consistency method since it is suitable for dealing with a large number of indexers. 
As a second phase, an analysis of tagging effectiveness in terms of tagging exhaustivity and tag specificity was conducted to ameliorate the drawbacks of consistency analysis based only on quantitative measures of vocabulary matching. Finally, to investigate tagging patterns and behaviors, a content analysis of tag attributes was conducted based on the FRBR model. The findings revealed greater consistency across all subjects among taggers than between two groups of professionals. Examination of the exhaustivity and specificity of social tags provided insights into particular characteristics of tagging behavior and its variation across subjects. To further investigate the quality of tags, a Latent Semantic Analysis (LSA) was conducted to determine to what extent tags are conceptually related to professionals’ keywords; tags of higher specificity tended to have higher semantic relatedness to professionals’ keywords. This leads to the conclusion that a term’s power as a differentiator is related to its semantic relatedness to documents. The findings on tag attributes identified important bibliographic attributes of tags beyond describing the subjects or topics of a document. The findings also showed that tags have essential attributes matching those defined in FRBR. Furthermore, in terms of specific subject areas, the findings identified that taggers exhibited different tagging behaviors, with distinctive features and tendencies for web documents characterizing heterogeneous digital media resources.
These results have led to the conclusion that there should be an increased awareness of diverse user needs by subject in order to improve metadata in practical applications. This dissertation research is the first necessary step to utilize social tagging in digital information organization by verifying the quality and efficacy of social tagging. This dissertation research combined both quantitative (statistics) and qualitative (content analysis using FRBR) approaches to vocabulary analysis of tags which provided a more complete examination of the quality of tags. Through the detailed analysis of tag properties undertaken in this dissertation, we have a clearer understanding of the extent to which social tagging can be used to replace (and in some cases to improve upon) professional indexing.
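The VSM-based consistency measure is only named in the abstract. A minimal sketch of the underlying idea, scoring two indexers' agreement as the cosine of their term-frequency vectors, might look like this (the function name, terms, and values are illustrative):

```python
import math
from collections import Counter

def cosine_consistency(terms_a, terms_b):
    """Indexing consistency of two indexers as the cosine of their
    term-frequency vectors: 1.0 for identical term sets, 0.0 for
    no overlap. Scales to many indexers via pairwise averaging."""
    va, vb = Counter(terms_a), Counter(terms_b)
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

tagger = ["python", "tutorial", "programming"]
professional = ["python", "programming", "education"]
print(round(cosine_consistency(tagger, professional), 3))  # 0.667
```

Unlike classic pairwise percentage-overlap measures, a vector-space formulation handles large numbers of indexers, which is the reason the dissertation gives for adopting it.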
Abstract:
Every Argo data file submitted by a DAC for distribution on the GDAC has its format and data consistency checked by the Argo FileChecker. Two types of checks are applied: 1. Format checks. Ensures the file formats match the Argo standards precisely. 2. Data consistency checks. Additional data consistency checks are performed on a file after it passes the format checks. These checks do not duplicate any of the quality control checks performed elsewhere. These checks can be thought of as “sanity checks” to ensure that the data are consistent with each other. The data consistency checks enforce data standards and ensure that certain data values are reasonable and/or consistent with other information in the files. Examples of the “data standard” checks are the “mandatory parameters” defined for meta-data files and the technical parameter names in technical data files. Files with format or consistency errors are rejected by the GDAC and are not distributed. Less serious problems will generate warnings and the file will still be distributed on the GDAC. Reference Tables and Data Standards: Many of the consistency checks involve comparing the data to the published reference tables and data standards. These tables are documented in the User’s Manual. (The FileChecker implements “text versions” of these tables.)
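As a rough illustration of the error-versus-warning distinction described above (files with errors are rejected, files with only warnings are still distributed), here is a hypothetical sketch. The parameter names follow Argo conventions, but the specific rules are invented; the real rules live in the User's Manual reference tables:

```python
# Hypothetical sanity checks in the spirit of the Argo FileChecker.
MANDATORY_META_PARAMS = {"PLATFORM_NUMBER", "PROJECT_NAME", "LAUNCH_DATE"}  # assumption

def check_meta(record):
    """Return (errors, warnings) for a metadata record (a plain dict here).
    Errors block distribution on the GDAC; warnings do not."""
    errors, warnings = [], []
    for p in MANDATORY_META_PARAMS:
        if p not in record:
            errors.append(f"missing mandatory parameter {p}")
    lat = record.get("LAUNCH_LATITUDE")
    if lat is not None and not -90 <= lat <= 90:
        errors.append("LAUNCH_LATITUDE out of range")
    if not record.get("PI_NAME"):
        warnings.append("PI_NAME empty")  # warning only: file still distributed
    return errors, warnings

errors, warnings = check_meta({"PLATFORM_NUMBER": "6901234",
                               "PROJECT_NAME": "Argo TEST",
                               "LAUNCH_DATE": "20200101",
                               "LAUNCH_LATITUDE": 47.2})
assert not errors and warnings == ["PI_NAME empty"]
```

The value of this two-tier design is operational: hard failures keep malformed files out of the archive, while soft warnings let usable data flow to users without waiting for a perfect resubmission.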
Abstract:
Effective decision making uses various databases, including both micro- and macro-level datasets. In many cases it is a big challenge to ensure the consistency of the two levels. Different types of problems can occur, and several methods can be used to solve them. The paper concentrates on the input alignment of household income for microsimulation, which refers to improving the elements of a micro data survey (EU-SILC) by using macro data from administrative sources. We use a combined micro-macro model called ECONS-TAX for this improvement. We also produced model projections until 2015, which is important because the official EU-SILC micro database will only be available in Hungary in the summer of 2017. The paper presents our estimations of the dynamics of income elements and the changes in income inequalities. Results show that the aligned data provide a different level of income inequality, but do not affect the direction of change from year to year. However, when we analyzed policy change, the use of aligned data caused larger differences both in income levels and in their dynamics.
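The abstract does not describe how ECONS-TAX performs the alignment. One simple alignment method, proportionally scaling each income element so its weighted survey total matches the administrative macro total, can be sketched as follows; all figures, names, and weights are illustrative:

```python
def align_element(values, weights, macro_total):
    """Scale one income element so its weighted survey total matches the
    administrative macro total (simple proportional alignment; the actual
    ECONS-TAX procedure is not described in this abstract)."""
    survey_total = sum(w * v for w, v in zip(weights, values))
    factor = macro_total / survey_total
    return [v * factor for v in values], factor

# Two illustrative income elements for three weighted households:
weights = [1.0, 1.0, 2.0]
wages, f_w = align_element([100.0, 250.0, 400.0], weights, 1725.0)     # survey total 1150
pensions, f_p = align_element([50.0, 0.0, 100.0], weights, 200.0)      # survey total 250
assert f_w == 1.5 and abs(f_p - 0.8) < 1e-12
```

Because each income element gets its own factor, the income mix shifts across households, so overall inequality can change even though every element is only rescaled; this is consistent with the finding that alignment changes the measured level of inequality.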
Abstract:
Currently, the Division of Appeals and Hearings of the South Carolina Department of Health and Human Services has no specific presence on the agency's website to provide information or to allow for the electronic submission of appeals. This project's focus was developing an online presence for the Division on SCDHHS' website. The page will make the Division's procedures publicly available to beneficiaries, providers, and agency program staff who attend hearings. Additionally, parties will have a secure online portal through which they can file appeals and upload supporting documentation, reducing the need to send appeals via first-class mail. The online appeal portal will further the agency's goal of reducing paper.
Abstract:
Collecting ground truth data is an important step to be accomplished before performing a supervised classification. However, its quality depends on human, financial and time resources. It is therefore important to apply a validation process to assess the reliability of the acquired data. In this study, agricultural information was collected in the Brazilian Amazonian state of Mato Grosso in order to map crop expansion based on MODIS EVI temporal profiles. The field work was carried out through interviews for the years 2005-2006 and 2006-2007. This work presents a methodology to validate the quality of the training data and to determine the optimal sample to be used according to the classifier employed. The technique is based on the detection of outlier pixels for each class and is carried out by computing the Mahalanobis distance for each pixel. The higher the distance, the further the pixel is from the class centre. Preliminary observations based on the coefficient of variation confirm the efficiency of the technique in detecting outliers. Various subsamples are then defined by applying different thresholds to exclude outlier pixels from the classification process. The classification results prove the robustness of the Maximum Likelihood and Spectral Angle Mapper classifiers: both were insensitive to outlier exclusion. By contrast, the decision tree classifier showed better results when deleting 7.5% of the pixels in the training data. The technique managed to detect outliers for all classes. In this study, few outliers were present in the training data, so the classification quality was not deeply affected by them.
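The outlier-screening step can be sketched in Python. The simulated feature vectors stand in for per-pixel EVI values and the function name is illustrative, while the 92.5th-percentile cut-off mirrors the 7.5% exclusion figure reported above:

```python
import numpy as np

def mahalanobis_distances(pixels):
    """Mahalanobis distance of each pixel (one row per pixel) from its
    class centre; larger distances flag likely outliers in the training data."""
    mu = pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    diff = pixels - mu
    # Per-row quadratic form: sqrt((x - mu)^T C^-1 (x - mu))
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

rng = np.random.default_rng(0)
cls = rng.normal(0.0, 1.0, size=(200, 4))  # simulated EVI features for one class
cls[0] = 8.0                               # inject one obvious outlier pixel
d = mahalanobis_distances(cls)
threshold = np.percentile(d, 92.5)         # e.g. exclude the top 7.5% of pixels
assert d[0] == d.max()                     # the injected outlier is farthest
```

Sweeping the percentile threshold, as the study does, then yields the subsamples on which each classifier's sensitivity to outliers can be compared.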