986 results for Model consistency
Abstract:
This raster layer represents surface elevation and bathymetry data for the Boston Region, Massachusetts. It was created by merging portions of MassGIS Digital Elevation Model 1:5,000 (2005) data with NOAA Estuarine Bathymetric Digital Elevation Models (30 m) (1998). The DEM data were derived from the digital terrain models produced as part of the MassGIS 1:5,000 Black and White Digital Orthophoto imagery project. Cell size is 5 meters by 5 meters. Each cell holds a floating-point value, in meters, representing its elevation above or below sea level.
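For illustration, a minimal sketch of reading elevation values from such a layer, assuming it has been exported as a GeoTIFF and is read with the rasterio package; the file name and sample coordinates are hypothetical, not part of the dataset description.

```python
# Minimal sketch: sampling a 5 m elevation/bathymetry raster.
# Assumes a GeoTIFF export; file name and coordinates are hypothetical.
import rasterio

with rasterio.open("boston_region_dem_5m.tif") as src:
    elevation = src.read(1)        # first band: elevation in metres (+/- sea level)
    print("cell size:", src.res)   # expected (5.0, 5.0)
    x, y = 330000.0, 900000.0      # hypothetical map coordinates in the layer's CRS
    row, col = src.index(x, y)
    print("elevation (m):", elevation[row, col])
```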
Abstract:
The present research examined the interplay between two perspectives recently applied in the attitude area: the social identity approach to attitude-behaviour relations (Terry & Hogg, 1996) and the MODE model (Fazio, 1990a). Two experimental studies were conducted to examine the role of group norms, group identification, attitude accessibility, and mode of behavioural decision-making in the attitude-behaviour relationship. Study 1 (N = 211) investigated the effects of norms and identification on attitude-behaviour consistency as a function of attitude accessibility and mood. Study 2 (N = 354) replicated and extended the first experiment by using time pressure to manipulate mode of behavioural decision-making. As expected, the effects of norm congruency varied as a function of identification and mode of behavioural decision-making. Under conditions assumed to promote deliberative processing (neutral mood/low time pressure), high identifiers behaved in a manner consistent with the norm. No effects emerged under positive mood and high time pressure conditions. In Study 2, there was evidence that exposure to an attitude-incongruent norm resulted in attitude change only under low accessibility conditions. The results of these studies highlight the powerful role of group norms in directing individual behaviour and suggest limited support for the MODE model in this context. Copyright (C) 2003 John Wiley & Sons, Ltd.
Abstract:
Twelve dairy heifers were used to examine the clinical response to an alimentary oligofructose overload. Six animals were divided into three subgroups, and each subgroup was given a bolus dose of 13, 17, or 21 g/kg of oligofructose orally. The control group (n = 6) was sham-treated with tap water. Signs of lameness, cardiovascular function, and gastrointestinal function were monitored every 6 h during the development of rumen acidosis. The heifers were euthanized 48 and 72 h after administration of oligofructose. All animals given oligofructose developed depression, anorexia, and diarrhea 9 to 39 h after receiving oligofructose. By 33 to 45 h after treatment, the feces returned to normal consistency and the heifers began eating again. Animals given oligofructose developed transient fever, severe metabolic acidosis, and moderate dehydration, which were alleviated by supportive therapy. Four of 6 animals given oligofructose displayed clinical signs of laminitis starting 39 to 45 h after receiving oligofructose and lasting until euthanasia. The lameness was obvious, but could easily be overlooked by the untrained eye, because the heifers continued to stand and walk, and did not interrupt their eating behavior. No positive pain reactions or lameness were seen in control animals. Based on these results, we conclude that an alimentary oligofructose overload is able to induce signs of acute laminitis in cattle. This model offers a new method that can be used in further investigation of the pathogenesis and pathophysiology of bovine laminitis.
Abstract:
Background: The epidemiology of a disease describes the numbers of people becoming incident, being prevalent, recovering, surviving, and dying from the disease or from other causes. As a matter of accounting principle, the inflow, stock, and outflows must be compatible, and if we could observe every person involved completely, the epidemiologic estimates describing the disease would be consistent. Lack of consistency is an indicator of possible measurement error. Methods: We examined the consistency of estimates of incidence, prevalence, and excess mortality of dementia from the Rotterdam Study. We used the incidence and excess mortality estimates to calculate a predicted prevalence with a mathematical disease model, and compared the predicted to the observed prevalence. Results: Predicted prevalence is lower than observed in most age groups, and the difference between them is significant for some age groups. Conclusions: The observed discrepancy could be due to overestimates of prevalence or excess mortality, or an underestimate of incidence, or a combination of all three. We conclude from an analysis of possible causes that it is not possible to say which contributes most to the discrepancy. Estimating dementia incidence in an aging cohort presents a dilemma: with a short follow-up, borderline incident cases are easily missed, and with a longer follow-up, measurement problems increase due to the associated aging of the cohort. Checking for consistency is a useful strategy to signal possible measurement error, but some sources of error may be impossible to avoid.
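The consistency check rests on standard illness-death bookkeeping: given incidence and (excess) mortality, prevalence can be projected forward and compared with what was observed. A minimal sketch of that accounting logic follows; the rates are illustrative placeholders, not the Rotterdam Study estimates.

```python
# Minimal sketch of the accounting logic behind the consistency check:
# project prevalence forward from incidence and mortality, then compare
# with observed prevalence. All rates below are illustrative.

def project_prevalence(p0, incidence, mortality_healthy, mortality_diseased, years):
    """One-year difference equations for a simple illness-death model.

    p0: initial prevalence (fraction diseased)
    incidence: annual incidence rate among the non-diseased
    mortality_*: annual death rates; their difference is the excess mortality
    """
    healthy, diseased = 1.0 - p0, p0
    for _ in range(years):
        new_cases = incidence * healthy
        healthy += -new_cases - mortality_healthy * healthy
        diseased += new_cases - mortality_diseased * diseased
        total = healthy + diseased          # renormalise to survivors
        healthy, diseased = healthy / total, diseased / total
    return diseased

predicted = project_prevalence(p0=0.02, incidence=0.015,
                               mortality_healthy=0.05, mortality_diseased=0.12,
                               years=5)
observed = 0.10  # illustrative observed prevalence
print(f"predicted {predicted:.3f} vs observed {observed:.3f}")
```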
Abstract:
We discuss how integrity consistency constraints between different UML models can be precisely defined at the language level. To do so, we introduce a formal object-oriented metamodeling approach in which consistency constraints between UML models are defined as invariants over the UML model elements that define those models at the language level. The constraints are formally specified using Object-Z. We demonstrate how integrity consistency constraints for UML models can be precisely defined at the language level; once completed, the formal description of the consistency constraints serves as a precise reference for checking the consistency of UML models as well as for tool development.
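As a rough illustration of the kind of language-level invariant involved (not the authors' Object-Z formalisation), the sketch below checks one well-known inter-model constraint: every message sent in a sequence diagram must name an operation declared by the receiver's class.

```python
# Rough illustration (not the paper's Object-Z formalisation) of a
# language-level consistency invariant between two UML models: each message
# received by an object in a sequence diagram must correspond to an
# operation of that object's class in the class diagram.

class_diagram = {            # class name -> declared operations
    "Account": {"deposit", "withdraw", "balance"},
    "Customer": {"authenticate"},
}

sequence_messages = [        # (receiver class, message name)
    ("Account", "deposit"),
    ("Account", "close"),    # violates the invariant: 'close' is undeclared
]

violations = [(cls, msg) for cls, msg in sequence_messages
              if msg not in class_diagram.get(cls, set())]
print("consistent" if not violations else f"inconsistent: {violations}")
```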
Abstract:
Safety enforcement practitioners within Europe, and marketers, designers or manufacturers of consumer products, need to determine compliance with the legal test of "reasonable safety" for consumer goods and to reduce the "risks" of injury to a minimum. To enable free movement of products, a method for safety appraisal is required that non-experts can use as an "expert" system of hazard analysis when safety testing consumer goods, and that can be applied consistently throughout Europe. Safety testing approaches and the concepts of risk assessment and hazard analysis are reviewed in developing a model for appraising consumer product safety which seeks to integrate the human factors contribution of risk assessment, hazard perception, and information processing. The model develops a system of hazard identification, hazard analysis and risk assessment which can be applied to a wide range of consumer products through a series of systematic checklists and matrices, and applies alternative numerical and graphical methods for calculating a final product safety risk assessment score. It is then applied, in its pilot form, by selected "volunteer" Trading Standards Departments to a sample of consumer products. A series of questionnaires is used to select participating Trading Standards Departments, to explore the contribution of potential subjective influences, and to establish views regarding the usability and reliability of the model and any preferences for the risk assessment scoring system used. The outcome of the two-stage hazard analysis and risk assessment process is examined to determine the consistency of the hazard analysis results and of the final decisions regarding the safety of the sample products, and to determine any correlation between the decisions made using the model and alternative risk assessment scoring methods. The research also identifies a number of opportunities for future work and indicates a number of areas where further work has already begun.
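Numerical risk scores of this kind are commonly built from a severity-by-likelihood matrix summed over identified hazards; the hedged sketch below shows that general pattern, with scales and thresholds that are illustrative and not those of the pilot model's scoring system.

```python
# Hedged sketch of a severity-by-likelihood risk score of the general kind a
# numerical scoring method could produce; scales and thresholds are
# illustrative, not those of the pilot model.

SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "severe": 4}
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4}

def risk_score(hazards):
    """hazards: list of (severity label, likelihood label), one per identified hazard."""
    return sum(SEVERITY[s] * LIKELIHOOD[l] for s, l in hazards)

kettle_hazards = [("serious", "possible"), ("minor", "likely")]  # hypothetical product
score = risk_score(kettle_hazards)
print("risk score:", score, "-> review required" if score >= 12 else "-> acceptable")
```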
Abstract:
We suggest a variant of the nonlinear σ model for the description of disordered superconductors. The main distinction from existing models lies in the fact that the saddle point equation is solved nonperturbatively in the superconducting pairing field. It allows one to use the model both in the vicinity of the metal-superconductor transition and well below its critical temperature, with full account of the self-consistency conditions. We show that the model reproduces a set of known results in different limiting cases, and apply it to a self-consistent description of the proximity effect at the superconductor-metal interface.
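For orientation, the self-consistency condition referred to is of the standard BCS type, relating the pairing field to the anomalous Green function summed over Matsubara frequencies; the schematic textbook form is shown below, not the paper's specific saddle-point equation.

```latex
% Schematic BCS-type self-consistency (gap) equation on Matsubara frequencies;
% a generic textbook form, not the paper's saddle-point equation.
\Delta(\mathbf{r}) = \lambda \, \pi T \sum_{\omega_n} F(\mathbf{r}, \omega_n),
\qquad \omega_n = \pi T (2n + 1),
\qquad F \to \frac{\Delta}{\sqrt{\omega_n^{2} + |\Delta|^{2}}} \ \text{(bulk BCS limit)}
```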
Abstract:
Data fluctuation in multiple measurements of Laser Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, for example by averaging the spectral signals or applying spectrum standardization over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation according to the statistical distribution of the measured spectral data. Through the improved segmented weighting function, the information in spectral data following the normal distribution is retained in the regression model while the information in outliers is restrained or removed. Copper elemental concentration analysis experiments on 16 certified standard brass samples were carried out. The average relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness compared with quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better overall performance in model robustness and convergence speed compared with the four known weighting functions.
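The robustness mechanism can be sketched as follows: after an initial fit, each sample's weight is set by a piecewise (segmented) function of its standardised residual, so points in the bulk of the distribution keep full weight while outliers are down-weighted or effectively removed before the model is refit. The breakpoints below are the commonly used ones from the weighted LS-SVM literature, not necessarily the paper's improved values.

```python
import numpy as np

def segmented_weights(residuals, c1=2.5, c2=3.0):
    """Piecewise weighting of standardised residuals for a robust refit.

    Points well inside the normal range keep weight 1, moderate outliers are
    down-weighted linearly, and gross outliers get a near-zero weight.
    Breakpoints c1, c2 follow the usual weighted LS-SVM convention; they are
    not necessarily the improved values used in the paper.
    """
    s = 1.483 * np.median(np.abs(residuals - np.median(residuals)))  # robust scale (MAD)
    z = np.abs(residuals) / s
    w = np.ones_like(z)
    moderate = (z > c1) & (z <= c2)
    w[moderate] = (c2 - z[moderate]) / (c2 - c1)
    w[z > c2] = 1e-4
    return w

# Usage: fit an LS-SVM (or any regressor), compute training residuals,
# derive per-sample weights, then refit with those weights.
residuals = np.array([0.1, -0.2, 0.05, 3.5, -0.15])
print(segmented_weights(residuals))
```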
Abstract:
This study investigated the utility of the Story Model for decision making at the jury level by examining the influence of evidence order and deliberation style on story consistency and guilt. Participants were shown a videotaped trial stimulus and then provided case perceptions, including a guilt judgment and a narrative about what occurred during the incident. Participants then deliberated for approximately thirty minutes using either an evidence-driven or verdict-driven deliberation style before again providing case perceptions, including a guilt determination, a narrative about what happened during the incident, and an evidence recognition test. Multi-level regression analyses revealed that evidence order, deliberation style and sample interacted to influence both story consistency measures and guilt. Among students, participants in the verdict-driven deliberation condition formed more consistent pro-prosecution stories when the prosecution presented its case in story order, while participants in the evidence-driven deliberation condition formed more consistent pro-prosecution stories when the defense's case was presented in story order. Findings were the opposite among community members, with participants in the verdict-driven deliberation condition forming more consistent pro-prosecution stories when the defense's case was presented in story order, and participants in the evidence-driven deliberation condition forming more consistent pro-prosecution stories when the prosecution's case was presented in story order. Additionally, several story consistency measures influenced guilt decisions. Thus, there is some support for the hypothesis that story consistency mediates the influence of evidence order and deliberation style on guilt decisions.
Abstract:
The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components and consists of an atmospheric model, an ocean model and a land-ice model. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation, which address shortcomings of global models on regular grids and of limited area models nested in a forcing data set with respect to parallel scalability, numerical accuracy and physical consistency. This concept allows one to include the feedback of regional land use information on weather and climate at local and global scales in a consistent way, which is impossible to achieve with traditional limited area modelling approaches. Here, we present an in-depth evaluation of MPAS with regard to technical aspects of performing model runs and scalability for three medium-size meshes on four different high-performance computing (HPC) sites with different architectures and compilers. We uncover model limitations and identify new aspects of model optimisation that are introduced by the use of unstructured Voronoi meshes. We further demonstrate the model performance of MPAS in terms of its capability to reproduce the dynamics of the West African monsoon (WAM) and its associated precipitation in a pilot study. Constrained by available computational resources, we compare 11-month runs for two meshes with observations and a reference simulation from the Weather Research and Forecasting (WRF) model. We show that MPAS can reproduce the atmospheric dynamics on global and local scales in this experiment, but identify a precipitation excess for the West African region. Finally, we conduct extreme scaling tests on a global 3 km mesh with more than 65 million horizontal grid cells on up to half a million cores. We discuss necessary modifications of the model code to improve its parallel performance in general and specific to the HPC environment. We confirm good scaling (70% parallel efficiency or better) of the MPAS model and provide numbers on the computational requirements for experiments with the 3 km mesh. In doing so, we show that global, convection-resolving atmospheric simulations with MPAS are within reach of current and next generations of high-end computing facilities.
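Parallel efficiency here is the usual strong-scaling measure: speedup relative to a reference core count, divided by the increase in core count. A small sketch of that calculation follows; the timings are illustrative placeholders, not MPAS benchmark numbers.

```python
# Strong-scaling parallel efficiency relative to a reference run.
# Timings are illustrative placeholders, not MPAS benchmark results.

def parallel_efficiency(t_ref, cores_ref, t, cores):
    speedup = t_ref / t
    return speedup / (cores / cores_ref)

runs = [(16384, 3600.0), (65536, 1000.0), (524288, 160.0)]  # (cores, wall-clock seconds)
cores_ref, t_ref = runs[0]
for cores, t in runs[1:]:
    eff = parallel_efficiency(t_ref, cores_ref, t, cores)
    print(f"{cores} cores: efficiency {eff:.0%}")
```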
Abstract:
The architectural transcription factor HMGA2 is abundantly expressed during embryonic development. In several malignant neoplasias including prostate cancer, high re-expression of HMGA2 is correlated with malignancy and poor prognosis. The let-7 miRNA family is described to regulate HMGA2 negatively. The balance of let-7 and HMGA2 is discussed to play a major role in tumour aetiology. To further analyse the role of HMGA2 in prostate cancer, a stable and highly reproducible in vitro model system is a precondition. Herein we established a canine CT1258-EGFP-HMGA2 prostate cancer cell line stably overexpressing HMGA2 linked to EGFP, and in addition the reference cell line CT1258-EGFP expressing solely EGFP to exclude EGFP-induced effects. Both recombinant cell lines were characterised by fluorescence microscopy, flow cytometry and immunocytochemistry. The proliferative effect of ectopically overexpressed HMGA2 was determined via BrdU assays. Comparative karyotyping of the derived and the initial CT1258 cell lines was performed to analyse chromosome consistency. The impact of the ectopic HMGA2 expression on its regulator let-7a was analysed by quantitative real-time PCR. Fluorescence microscopy and immunocytochemistry detected successful expression of the EGFP-HMGA2 fusion protein accumulating exclusively in the nucleus. Gene expression analyses confirmed HMGA2 overexpression in CT1258-EGFP-HMGA2 in comparison to CT1258-EGFP and native cells. Significantly higher let-7a expression levels were found in CT1258-EGFP-HMGA2 and CT1258-EGFP. The BrdU assays detected an increased proliferation of CT1258-EGFP-HMGA2 cells compared to CT1258-EGFP and native CT1258. The cytogenetic analyses of CT1258-EGFP and CT1258-EGFP-HMGA2 resulted in a hyperdiploid karyotype comparable to that described for native CT1258 cells. To further investigate the impact of recombinant overexpressed HMGA2 on CT1258 cells, other selected targets described to underlie HMGA2 regulation were screened in addition. The new fluorescent CT1258-EGFP-HMGA2 cell line is a stable tool enabling in vitro and in vivo analyses of HMGA2-mediated effects on cells and of the development and pathogenesis of prostate cancer.
Abstract:
This paper explores the effect of using regional data for livestock attributes on estimation of greenhouse gas (GHG) emissions for the northern beef industry in Australia, compared with using state/territory-wide values as currently used in Australia's national GHG inventory report. Regional GHG emissions associated with beef production are reported for 21 defined agricultural statistical regions within state/territory jurisdictions. A management scenario for reduced emissions that could qualify as an Emissions Reduction Fund (ERF) project was used to illustrate the effect of regional-level model parameters on estimated abatement levels. Using regional parameters, instead of state-level parameters, for liveweight (LW), LW gain and proportion of cows lactating, together with an expanded number of livestock classes, gives a 5.2% reduction in estimated emissions (range +12% to –34% across regions). Estimated GHG emissions intensity (emissions per kilogram of LW sold) varied across the regions by up to 2.5-fold, ranging from 10.5 kg CO2-e per kg LW sold for the Darling Downs, Queensland, through to 25.8 kg CO2-e per kg LW sold for the Pindan and North Kimberley, Western Australia. This range was driven by differences in production efficiency, reproduction rate, growth rate and survival. This suggests that some regions in northern Australia are likely to have substantial opportunities for GHG abatement and higher livestock income. However, this must be coupled with the availability of management activities that can be implemented to improve production efficiency, wet season phosphorus (P) supplementation being one such practice. An ERF case study comparison showed that P supplementation of a typical-sized herd produced an estimated reduction of 622 t CO2-e per year, or 7%, compared with a non-P-supplemented herd. However, the different model parameters used by the National Inventory Report and the ERF project mean that there was an anomaly between the herd emissions for project cattle excised from the national accounts (13 479 t CO2-e per year) and the baseline herd emissions estimated for the ERF project (8 896 t CO2-e per year) before P supplementation was implemented. Regionalising livestock model parameters in both ERF projects and the national accounts offers the attraction of being able to more easily and accurately reflect emissions savings from this type of emissions reduction project in Australia's national GHG accounts.
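Emissions intensity as used here is total herd emissions divided by liveweight sold; a minimal sketch of that calculation for a single region follows, with illustrative numbers rather than the reported regional values.

```python
# Minimal sketch of the emissions-intensity calculation: total herd emissions
# divided by liveweight sold. Numbers are illustrative, not the paper's
# regional estimates.

def emissions_intensity(total_emissions_t_co2e, liveweight_sold_kg):
    """Returns kg CO2-e per kg liveweight sold."""
    return (total_emissions_t_co2e * 1000.0) / liveweight_sold_kg

herd_emissions_t = 13_000      # t CO2-e per year (illustrative)
lw_sold_kg = 1_000_000         # kg liveweight sold per year (illustrative)
print(f"{emissions_intensity(herd_emissions_t, lw_sold_kg):.1f} kg CO2-e per kg LW sold")
```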
Abstract:
This dissertation research points out major challenging problems with current Knowledge Organization (KO) systems, such as subject gateways or web directories: (1) the current systems use traditional knowledge organization schemes based on controlled vocabulary, which is not very well suited to web resources, and (2) information is organized by professionals rather than by users, which means it does not reflect users' current needs as they are intuitively and instantaneously expressed. In order to explore users' needs, I examined social tags, which are user-generated uncontrolled vocabulary. As investment in professionally developed subject gateways and web directories diminishes (support for both BUBL and Intute, examined in this study, is being discontinued), understanding the characteristics of social tagging becomes even more critical. Several researchers have discussed social tagging behavior and its usefulness for classification or retrieval; however, further research is needed to qualitatively and quantitatively investigate social tagging in order to verify its quality and benefit. This research particularly examined the indexing consistency of social tagging in comparison to professional indexing in order to assess the quality and efficacy of tagging. The data analysis was divided into three phases: analysis of indexing consistency, analysis of tagging effectiveness, and analysis of tag attributes. Most indexing consistency studies have been conducted with a small number of professional indexers, and they tended to exclude users. Furthermore, the studies have mainly focused on physical library collections. This dissertation research bridged these gaps by (1) extending the scope of resources to various web documents indexed by users and (2) employing the Information Retrieval (IR) Vector Space Model (VSM)-based indexing consistency method, since it is suitable for dealing with a large number of indexers. As a second phase, an analysis of tagging effectiveness in terms of tagging exhaustivity and tag specificity was conducted to ameliorate the drawbacks of a consistency analysis based only on quantitative measures of vocabulary matching. Finally, to investigate tagging patterns and behaviors, a content analysis of tag attributes was conducted based on the FRBR model. The findings revealed that there was greater consistency over all subjects among taggers than for the two groups of professionals. Examination of the exhaustivity and specificity of social tags provided insights into particular characteristics of tagging behavior and its variation across subjects. To further investigate the quality of tags, a Latent Semantic Analysis (LSA) was conducted to determine to what extent tags are conceptually related to professionals' keywords, and it was found that tags of higher specificity tended to have a higher semantic relatedness to professionals' keywords. This leads to the conclusion that a term's power as a differentiator is related to its semantic relatedness to documents. The findings on tag attributes identified important bibliographic attributes of tags beyond describing the subjects or topics of a document. The findings also showed that tags have essential attributes matching those defined in FRBR.
Furthermore, in terms of specific subject areas, the findings identified that taggers exhibited different tagging behaviors, with distinctive features and tendencies for web documents characterizing heterogeneous digital media resources. These results have led to the conclusion that there should be an increased awareness of diverse user needs by subject in order to improve metadata in practical applications. This dissertation research is the first necessary step toward utilizing social tagging in digital information organization by verifying the quality and efficacy of social tagging. It combined quantitative (statistical) and qualitative (content analysis using FRBR) approaches to the vocabulary analysis of tags, which provided a more complete examination of the quality of tags. Through the detailed analysis of tag properties undertaken in this dissertation, we have a clearer understanding of the extent to which social tagging can be used to replace (and in some cases to improve upon) professional indexing.
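A VSM-based consistency measure of the kind described represents each indexer's (or tagger's) term assignments for a document as a vector and compares indexers by cosine similarity, which scales naturally to many indexers. A minimal sketch follows; the terms are hypothetical and the measure is the generic cosine form rather than the dissertation's exact formulation.

```python
# Minimal sketch of a vector-space (cosine) indexing-consistency measure
# between a set of social tags and a set of professional index terms for the
# same document. Terms are hypothetical.
from collections import Counter
from math import sqrt

def cosine_consistency(terms_a, terms_b):
    a, b = Counter(terms_a), Counter(terms_b)
    vocab = set(a) | set(b)
    dot = sum(a[t] * b[t] for t in vocab)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

tags = ["python", "programming", "tutorial", "web"]
professional_terms = ["computer programming", "python", "tutorial"]
print(f"consistency: {cosine_consistency(tags, professional_terms):.2f}")
```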
Abstract:
We provide a nonparametric 'revealed preference' characterization of rational household behavior in terms of the collective consumption model, while accounting for general (possibly non-convex) individual preferences. We establish a Collective Axiom of Revealed Preference (CARP), which provides a necessary and sufficient condition for data consistency with collective rationality. Our main result takes the form of a 'collective' version of the Afriat Theorem for rational behavior in terms of the unitary model. This theorem has some interesting implications. With only a finite set of observations, the nature of consumption externalities (positive or negative) in the intra-household allocation process is non-testable. The same non-testability conclusion holds for privateness (with or without externalities) or publicness of consumption. By contrast, concavity of individual utility functions (representing convex preferences) turns out to be testable. In addition, monotonicity is testable for the model that assumes all household consumption is public.
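For reference, the unitary Afriat Theorem that the 'collective' result generalises states that a finite data set of prices and aggregate bundles is consistent with maximisation of a single utility function if and only if the Afriat inequalities below admit a solution; this is the schematic unitary condition, not the paper's collective characterization.

```latex
% Unitary Afriat inequalities (schematic reference point; the paper's
% contribution is the collective generalisation, not reproduced here).
% Data: prices p^t and aggregate household bundles q^t, t = 1, ..., T.
\exists\, U^t \in \mathbb{R},\ \lambda^t > 0 \ \text{such that}\quad
U^s \le U^t + \lambda^t \, p^t \cdot (q^s - q^t) \qquad \forall\, s, t \in \{1,\dots,T\}.
```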
Abstract:
Effective decision making uses various databases, including both micro- and macro-level datasets. In many cases it is a big challenge to ensure the consistency of the two levels. Different types of problems can occur and several methods can be used to solve them. The paper concentrates on the input alignment of households' income for microsimulation, which refers to improving the elements of a micro data survey (EU-SILC) by using macro data from administrative sources. We use a combined micro-macro model called ECONS-TAX for this improvement. We also produced model projections until 2015, which is important because the official EU-SILC micro database will only be available in Hungary in the summer of 2017. The paper presents our estimates of the dynamics of income elements and the changes in income inequalities. Results show that the aligned data provide a different level of income inequality, but do not affect the direction of change from year to year. However, when we analyzed policy change, the use of aligned data caused larger differences in both income levels and their dynamics.
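Input alignment of this kind typically scales (or reweights) a survey's income elements so that their weighted totals match the administrative macro aggregates. A minimal sketch of proportional scaling follows; the variable names and totals are hypothetical and do not describe ECONS-TAX internals.

```python
# Minimal sketch of proportional input alignment: scale a survey income
# element so its weighted total matches an administrative macro aggregate.
# Variable names and totals are hypothetical, not ECONS-TAX internals.
import pandas as pd

survey = pd.DataFrame({
    "weight":      [1200.0, 800.0, 1500.0],    # survey grossing-up weights
    "wage_income": [14000.0, 22000.0, 9000.0], # annual wage income per household
})

macro_total_wages = 60_000_000.0                # administrative total (hypothetical)

micro_total = (survey["weight"] * survey["wage_income"]).sum()
alignment_factor = macro_total_wages / micro_total
survey["wage_income_aligned"] = survey["wage_income"] * alignment_factor

print(f"alignment factor: {alignment_factor:.3f}")
print(survey[["wage_income", "wage_income_aligned"]])
```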