982 results for Artificial Information Models


Relevance:

30.00%

Publisher:

Abstract:

Background: Genetic variation for environmental sensitivity indicates that animals differ genetically in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature), and called macro-environmental, or unknown, and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate the bias and precision of the resulting estimates of genetic parameters, and to develop and evaluate the use of an Akaike information criterion based on h-likelihood to select the best-fitting model.

Methods: We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and in the environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for the residual variance to estimate genetic variance for micro-environmental sensitivity, using a double hierarchical generalized linear model (DHGLM) in ASReml. An Akaike information criterion was constructed as a model selection criterion using the approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate the bias and precision of the estimated genetic parameters.

Results: Designs with 100 sires, each with at least 100 offspring, are required for the standard deviations of the estimated variances to be lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for the genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically no bias was observed for estimates of any of the parameters. Using the Akaike information criterion, the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities exists.

Conclusion: The algorithm and model selection criterion presented here can contribute to a better understanding of the genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires, each with 100 offspring.
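The data-generating model assumed in the simulations is compact enough to sketch. Below is a minimal, illustrative Python simulation of such a half-sib design (the study itself fitted the model with ASReml); the (co)variance values, the environmental covariate, and all names are assumptions for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

n_sires, n_off = 100, 100          # the design the study found to be adequate
# Assumed sire (co)variance matrix for intercept a, reaction-norm slope b
# (macro-environmental sensitivity) and log residual variance v
# (micro-environmental sensitivity); the values are illustrative only.
G = np.array([[0.20, 0.05, 0.02],
              [0.05, 0.10, 0.01],
              [0.02, 0.01, 0.05]])
sire = rng.multivariate_normal(np.zeros(3), G, size=n_sires)

x = rng.normal(size=(n_sires, n_off))   # observed (macro) environmental covariate
mu, mu_v = 10.0, 0.0                    # overall mean and baseline log variance

a, b, v = sire[:, [0]], sire[:, [1]], sire[:, [2]]
resid_sd = np.exp(0.5 * (mu_v + v))     # heterogeneous residual SD per sire family
y = mu + a + b * x + rng.normal(size=x.shape) * resid_sd

# Model selection as described in the abstract: an AIC built from the
# approximated h-likelihood, AIC = -2 * h_loglik + 2 * p,
# with p the number of fitted parameters.
```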

Relevance:

30.00%

Publisher:

Abstract:

Location models are used for planning the location of multiple service centers in order to serve a geographically distributed population. A cornerstone of such models is the measure of distance between the service center and a set of demand points, viz. the locations of the population (customers, pupils, patients and so on). Theoretical as well as empirical evidence supports the current practice of using the Euclidean distance in metropolitan areas. In this paper, we argue and provide empirical evidence that such a measure is misleading once the location models are applied to rural areas with heterogeneous transport networks. This paper stems from the problem of finding an optimal allocation of a pre-specified number of hospitals in a large Swedish region with a low population density. We conclude that the Euclidean distance and network distances based on a homogeneous network (equal travel costs in the whole network) give approximately the same optima. However, network distances calculated from a heterogeneous network (different travel costs in different parts of the network) give widely different optima as the number of hospitals increases. In terms of accessibility, we find that the recent closure of hospitals and the suboptimal location of the remaining ones have increased the average travel distance for the population by 75%. Finally, aggregating the population misplaces the hospitals by 10 km on average.
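The core comparison, Euclidean versus network distance on a heterogeneous network, can be sketched on a toy problem. The graph, travel costs and brute-force search below are illustrative assumptions, not the Swedish data or the paper's optimization method.

```python
import itertools
import math
import networkx as nx

# Toy road network: nodes have coordinates; edge weights are travel costs,
# which differ between fast and slow links (a heterogeneous network).
coords = {0: (0, 0), 1: (10, 0), 2: (10, 8), 3: (0, 8), 4: (5, 4)}
G = nx.Graph()
G.add_nodes_from(coords)
edges = [(0, 1, 1.0), (1, 2, 1.0),      # fast links: cost 1.0 per km
         (2, 3, 3.0), (3, 0, 3.0),      # slow links: cost 3.0 per km
         (0, 4, 3.0), (2, 4, 3.0)]
for u, v, cost_per_km in edges:
    G.add_edge(u, v, weight=cost_per_km * math.dist(coords[u], coords[v]))

def total_cost(centers, dist):
    """Sum over demand points of the distance to the nearest chosen center."""
    return sum(min(dist(n, c) for c in centers) for n in G.nodes)

euclid = lambda a, b: math.dist(coords[a], coords[b])
network = lambda a, b: nx.shortest_path_length(G, a, b, weight="weight")

for name, dist in [("Euclidean", euclid), ("heterogeneous network", network)]:
    best = min(itertools.combinations(G.nodes, 2),
               key=lambda c: total_cost(c, dist))
    print(f"{name} optimum for 2 centers: {best}")
```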

Relevance:

30.00%

Publisher:

Abstract:

We present a new version of the hglm package for fitting hierarchical generalized linear models (HGLM) with spatially correlated random effects. A CAR family for conditional autoregressive random effects was implemented. Eigendecomposition of the matrix describing the spatial structure (e.g. the neighborhood matrix) was used to transform the CAR random effects into independent, but heteroscedastic, Gaussian random effects. A linear predictor is fitted for the random-effect variance to estimate the parameters in the CAR model. This gives a computationally efficient algorithm for moderately sized problems (e.g. n < 5000).
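The eigendecomposition trick is easy to verify numerically. The sketch below assumes the common CAR parameterization Var(u) = tau * (I - rho * D)^(-1); the neighborhood matrix and parameter values are illustrative, and this is not the hglm package's own code.

```python
import numpy as np

# Assumed CAR parameterization: u ~ N(0, tau * (I - rho * D)^(-1)),
# with D a symmetric neighborhood matrix. Values below are illustrative.
D = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
tau, rho = 1.0, 0.15

lam, V = np.linalg.eigh(D)        # D = V diag(lam) V'
# In the transformed basis z = V' u the components are independent with
# Var(z_i) = tau / (1 - rho * lam_i): a heteroscedastic Gaussian effect.
# Hence log Var(z_i) = log(tau) - log(1 - rho * lam_i) can be handled with
# a linear predictor for the random-effect variance.
var_z = tau / (1 - rho * lam)

# Check: reconstruct Cov(u) = V diag(var_z) V' and compare to the CAR form.
cov_u = V @ np.diag(var_z) @ V.T
assert np.allclose(cov_u, tau * np.linalg.inv(np.eye(4) - rho * D))
```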

Relevance:

30.00%

Publisher:

Abstract:

Accurate speed prediction is a crucial step in the development of a dynamic vehicle-activated sign (VAS). A previous study showed that the optimal trigger speed of such signs needs to be pre-determined according to the nature of the site and the traffic conditions. The objective of this paper is to find an accurate predictive model, based on historical traffic speed data, from which to derive the optimal trigger speed for such signs. Adaptive neuro-fuzzy inference system (ANFIS), classification and regression tree (CART) and random forest (RF) models were developed to predict one-step-ahead speed during all times of the day. The developed models were evaluated and compared with the results obtained from an artificial neural network (ANN), multiple linear regression (MLR) and naïve prediction, using traffic speed data collected at four sites located in Sweden. The data were aggregated into two periods, a short-term period (5 min) and a long-term period (1 hour). The results of this study showed that RF is a promising method for predicting mean speed in the two proposed periods. In terms of performance and computational complexity, it is concluded that a simple set of input features to the predictive model gave a marked improvement in the response time of the model whilst still delivering a low prediction error.
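As an illustration of one-step-ahead prediction with RF, the sketch below uses lagged 5-min mean speeds as input features and compares against the naïve predictor; the synthetic data, lag count and forest size are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
t = np.arange(2000)
# Synthetic 5-min mean speeds with a daily cycle (288 intervals per day).
speed = 70 + 10 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 3, t.size)

n_lags = 6                                   # last 30 minutes of 5-min means
X = np.column_stack([speed[i:i - n_lags] for i in range(n_lags)])
y = speed[n_lags:]

split = int(0.8 * len(y))                    # keep time order: no shuffling
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[:split], y[:split])
pred = rf.predict(X[split:])

naive = X[split:, -1]                        # naïve: next speed = current speed
print("RF MAE:   ", mean_absolute_error(y[split:], pred))
print("naïve MAE:", mean_absolute_error(y[split:], naive))
```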

Relevance:

30.00%

Publisher:

Abstract:

Data mining can be used in the healthcare industry to "mine" clinical data and discover hidden information for intelligent and effective decision making. The discovery of hidden patterns and relationships often goes untapped, yet advanced data mining techniques can remedy this. This thesis mainly deals with the Intelligent Prediction of Chronic Renal Disease (IPCRD). The data cover blood tests, urine tests and external symptoms used to predict chronic renal disease. Data from the database are initially imported into Weka (3.6), and the chi-square method is used for feature selection. After normalizing the data, three classifiers were applied and the efficiency of the output was evaluated: Decision Tree, Naïve Bayes and the K-Nearest Neighbour algorithm. The results show that each technique has its unique strength in realizing the objectives of the defined mining goals. The efficiency of Decision Tree and KNN was almost the same, but Naïve Bayes showed a comparative edge over the others. Further, sensitivity and specificity tests are used as statistical measures to examine the performance of the binary classification: sensitivity (also called recall in some fields) measures the proportion of actual positives that are correctly identified, while specificity measures the proportion of negatives that are correctly identified. The CRISP-DM methodology is applied to build the mining models. It consists of six major phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment.
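The sensitivity and specificity definitions quoted above translate directly into code. A minimal sketch with assumed labels:

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# 1 = chronic renal disease present, 0 = absent (labels are illustrative).
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity: {sens:.2f}, specificity: {spec:.2f}")  # 0.75, 0.75
```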

Relevance:

30.00%

Publisher:

Abstract:

Artificial neural networks (ANN) are increasingly used to solve many problems related to pattern recognition and object classification. In this paper, we report on a study using artificial neural networks to classify two kinds of animal fibers: merino and mohair. We have developed two different models: one extracts nine scale parameters by image processing, while the other uses an unsupervised artificial neural network to extract features automatically, the features being determined by the complexity of the scale structure and the required accuracy of the model. Although the first model can achieve higher accuracy, it requires more effort for image processing and more prior knowledge, since the accuracy of the ANN largely depends on the parameters selected. The second model is more robust than the first, since only raw images are used. Because only ordinary optical images taken with a microscope are employed, the approach can be used for many textile applications without expensive equipment such as scanning electron microscopy.
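A minimal sketch of the first model's pipeline (a supervised ANN fed nine extracted scale parameters) is given below; the data are synthetic stand-ins, since the paper's real inputs come from image processing of optical micrographs.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for nine scale parameters per fiber image; in the paper
# these come from image processing of optical micrographs.
rng = np.random.default_rng(0)
n = 400
X_merino = rng.normal(loc=0.0, scale=1.0, size=(n, 9))
X_mohair = rng.normal(loc=0.8, scale=1.0, size=(n, 9))   # shifted scale features
X = np.vstack([X_merino, X_mohair])
y = np.array([0] * n + [1] * n)                          # 0 = merino, 1 = mohair

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

# The paper's second model replaces the nine hand-crafted parameters with
# features learned by an unsupervised network directly from raw images,
# trading some accuracy for robustness and less image-processing effort.
```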


Relevance:

30.00%

Publisher:

Abstract:

The output of the sheet metal forming process is subject to much variation. This paper develops a method to measure shape variation in channel forming and relate this back to the corresponding process parameter levels of the manufacturing set-up to create an inverse model. The shape variation in the channels is measured using a modified form of the point distribution model (also known as the active shape model). This means that channels can be represented by a weighting vector of minimal linear dimension that contains all the shape variation information from the average formed channel.

The inverse models were created using classifiers that related the weighting vectors to the process parameter levels for the blank holder force (BHF), die radii (DR) and tool gap (TG). Several classifiers were tested: linear, quadratic Gaussian and artificial neural networks. The quadratic Gaussian classifiers were the most accurate and the most consistent type of classifier across all the parameters.
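A point distribution model is essentially PCA on aligned shape coordinates, and a quadratic Gaussian classifier corresponds to quadratic discriminant analysis; the sketch below strings those two standard pieces together on synthetic channel profiles. The profile shapes, the BHF levels and the number of modes are all assumptions for illustration, not the paper's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Synthetic stand-in data: each row is a flattened, aligned channel profile
# (concatenated point coordinates); labels are the BHF level used to form it.
rng = np.random.default_rng(0)
n_per_level, n_points = 60, 50
profiles, bhf_level = [], []
for level, springback in [(0, 0.30), (1, 0.15), (2, 0.05)]:   # assumed effect
    base = np.sin(np.linspace(0, np.pi, n_points)) * (1 + springback)
    profiles.append(base + rng.normal(0, 0.02, (n_per_level, n_points)))
    bhf_level += [level] * n_per_level
X, y = np.vstack(profiles), np.array(bhf_level)

# Point distribution model: PCA scores act as the minimal "weighting vector"
# describing each channel's deviation from the average formed channel.
pdm = PCA(n_components=5).fit(X)
W = pdm.transform(X)

# Inverse model: classify the weighting vector back to the BHF level.
qda = QuadraticDiscriminantAnalysis().fit(W, y)
print("training accuracy:", qda.score(W, y))
```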

Relevance:

30.00%

Publisher:

Abstract:

China's path to the development of a modern securities market has not been a smooth one. This article argues that efforts to impose Western securities market models on China have been fraught with difficulty. This is especially clear from the adoption of information disclosure principles and practices. While the integrity of disclosure practices is a fundamental element in maintaining investors' confidence in securities markets, disclosure practices need to be attuned to China's systemic features, especially in regard to its legal structure and rules. Market failures, such as the collapse of Enron in the United States, have led to a realisation that US disclosure models have their own difficulties and that these should not be uncritically used. This article reviews recent Chinese law and practice (using the Yinguangxia false disclosure scandal as an example) in this area and calls for the adoption of a more critical approach towards the use of Western models with particular regard to China's own distinctive pathways of reform.

Relevance:

30.00%

Publisher:

Abstract:

For a given fiber spun to pre-determined yarn specifications, the spinning performance of the yarn usually varies from mill to mill. For this reason, it is necessary to develop an empirical model that can encompass all known processing variables that exist in different spinning mills, and then generalize this information to accurately predict yarn quality for an individual mill. This paper reports a method for predicting worsted spinning performance with an artificial neural network (ANN) trained with backpropagation. The applicability of artificial neural networks for predicting spinning performance is first evaluated against a well-established prediction and benchmarking tool (Sirolan Yarnspec™). The ANN is then trained with commercial mill data to assess the feasibility of the method as a mill-specific performance prediction tool. Incorporating mill-specific data results in an improved fit to the commercial mill data set, suggesting that the proposed method can accurately predict the spinning performance of a specific mill.
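One way to realize the mill-specific step, sketched below under assumed data and features, is to pre-train a network on general data and then continue training incrementally on the mill's own records; this is an illustration of the idea, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def make_data(n, mill_offset=0.0):
    # Five assumed fiber/processing variables (e.g. diameter, length, CV);
    # the linear relationship and noise are purely illustrative.
    X = rng.normal(size=(n, 5))
    y = X @ np.array([1.5, -0.8, 0.4, 0.2, -0.3]) + mill_offset
    return X, y + rng.normal(0, 0.1, n)

X_gen, y_gen = make_data(2000)                    # industry-wide relationship
X_mill, y_mill = make_data(150, mill_offset=2.0)  # one mill's systematic shift

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
ann.fit(X_gen, y_gen)                             # pre-train on general data
print("fit to mill data before tuning:", ann.score(X_mill, y_mill))

for _ in range(200):                              # incremental mill-specific tuning
    ann.partial_fit(X_mill, y_mill)
print("fit to mill data after tuning: ", ann.score(X_mill, y_mill))
```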

Relevance:

30.00%

Publisher:

Abstract:

The Mount Buffalo National Park is the oldest national park in Victoria, Australia. There has been a rapid increase in the number of visitors to the park during the last decade and park management has been a concern, especially in the light of declining budgetary allocations and potential damage due to the increased visitor numbers. Policy options to increase park revenue remain unclear because of a lack of information on demand parameters and user costs. This study estimates the economic value of the park using the travel cost method (TCM) and the contingent valuation method (CVM). The TCM gives higher consumer surplus (CS) than the CVM. The CS shows that the economic value of the park is high and that there are opportunities to introduce innovative fee schemes to enhance its revenue. Present entry fee systems do not capture the economic value of the park.
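For readers unfamiliar with the TCM side, a standard count-data travel cost model (not necessarily this paper's exact specification) estimates the demand curve by regressing trips on travel cost, with consumer surplus per trip equal to -1/b for cost coefficient b. A minimal sketch on synthetic data:

```python
import numpy as np
import statsmodels.api as sm

# Count-data TCM: trips ~ Poisson with log E[trips] = b0 + b_tc * travel_cost,
# for which consumer surplus per trip is -1 / b_tc. Data are synthetic.
rng = np.random.default_rng(0)
n = 500
travel_cost = rng.uniform(5, 100, n)             # dollars per visit (assumed)
mu = np.exp(1.5 - 0.02 * travel_cost)            # true b_tc = -0.02
trips = rng.poisson(mu)

X = sm.add_constant(travel_cost)
fit = sm.GLM(trips, X, family=sm.families.Poisson()).fit()
b_tc = fit.params[1]
print(f"estimated cost coefficient: {b_tc:.4f}")
print(f"consumer surplus per trip:  ${-1 / b_tc:.2f}")  # about $50 at b_tc = -0.02
```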

Relevance:

30.00%

Publisher:

Abstract:

With information warfare (IW) becoming a reality, there is a need for a new security methodology to deal with the new and unique attack threats and vulnerabilities associated with the new information technology security paradigm. With the shift from computer security to information warfare, logical transformation models (LTMs) were examined as a solution for quantifying information system requirements. The paper introduces the concepts involved in fourth-generation models and their application to IW. The basic advantages and disadvantages are also presented and discussed.

Relevance:

30.00%

Publisher:

Abstract:

Discovering a precise causal structure that accurately reflects the given data is one of the most essential tasks in the area of data mining and machine learning. One of the successful causal discovery approaches is the information-theoretic approach using the Minimum Message Length principle [19]. This paper presents an improved version of the MML discovery algorithm, together with further experimental results. We introduce a new encoding scheme for measuring the cost of describing the causal structure. Stirling's approximation is also applied to further simplify the computation, so the algorithm works more efficiently. The experimental results for the current version of the discovery system show that: (1) the current version is capable of discovering what was discovered by the previous system; (2) the current system is capable of discovering more complicated causal models with a large number of variables; and (3) the new version works more efficiently than the previous version in terms of time complexity.
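To convey the flavor of a structure-encoding cost, the sketch below computes one simple MML-style description length for a DAG (an ordering of the variables plus one presence bit per candidate edge), using Stirling's approximation for log2(n!). This encoding is illustrative only, not the paper's new scheme.

```python
import math

def log2_factorial_stirling(n):
    """Stirling's approximation to log2(n!), avoiding computation of n! itself."""
    if n < 2:
        return 0.0
    return (n * math.log2(n) - n / math.log(2)
            + 0.5 * math.log2(2 * math.pi * n))

def dag_structure_cost(n_vars):
    """Illustrative MML-style structure cost in bits (not the paper's scheme):
    encode one total ordering of the variables, then one presence bit for each
    of the n(n-1)/2 candidate edges consistent with that ordering."""
    ordering_bits = log2_factorial_stirling(n_vars)
    edge_bits = n_vars * (n_vars - 1) / 2
    return ordering_bits + edge_bits

print(f"structure cost for 5 variables:  {dag_structure_cost(5):.1f} bits")
print(f"structure cost for 20 variables: {dag_structure_cost(20):.1f} bits")
```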

Relevance:

30.00%

Publisher:

Abstract:

This paper argues that the nature of IS research that deals with indigenous culture must be informed as much by context as by culture models, which have been the focus of such research in the past. This is considered important because it better reflects, for the researcher, the meaning of the data collected. To appreciate the importance of context, this paper also argues that research subjects from designated individualist societies will inform the researcher in different ways from subjects located in collectivist societies. To illustrate the practical implications of this argument, the paper reports three separate case studies in IS research in which the researchers reflect on the impact that a collectivist view has had on the research findings. The paper suggests (1) that shared ethnicity and appearance are significant in gaining the trust of subjects in a collectivist society, that is, the researcher is treated as part of the in-group because they belong to the same culture or ethnic group; (2) that who introduces the researcher to the subject is significant, in that trust is best established when a member of the group/collective plays an important role in the research process itself; and (3) that the ability to (a) communicate in the natural language, (b) understand the implicit body language and (c) read the cultural codes is important in gaining significant and more meaningful research outcomes, since these skills give access to the implicit meanings embedded in members of the collectivist society.

Relevance:

30.00%

Publisher:

Abstract:

The demolition of building structures produces enormous amounts of waste material. In most current demolition projects, a great number of demolished materials are sent directly to landfill after their primary usage because of the difficulty of finding their next usage immediately. At the same time, because of the limited supply of second-hand materials, new and high-quality materials are used in construction projects whose design standards could be met using secondary or used materials. However, this is an inefficient way to reduce waste, because of the flow nature of current waste-exchange systems and of the demolition procedure. The recent concept of deconstruction rather than destruction for demolishing a constructed facility has failed to achieve widespread understanding or acceptance due to various practical limitations. In this paper, for the purpose of envisaging deconstruction implementations in practice and promoting cascading usages of construction materials, the concept of electronic demolition (e-demolition) is put forward for the first time. E-demolition is a virtual demolition approach by which the demolition information, progress and outputs are worked through before the physical demolition. Furthermore, the authors set up the essential models for implementing the electronic demolition of buildings from the viewpoints of demolition progress, business, and information. Each model is demonstrated in accordance with conventional demolition practice and with respect to the ideal deconstruction implementation. Following the electronic demolition of a real project, the physical demolition can be expected to proceed with a minimum of construction waste emission.