922 results for Traditional enrichment method


Relevance:

20.00%

Publisher:

Abstract:

In this paper, an enriched radial point interpolation method (e-RPIM) is developed for the determination of crack-tip fields. In e-RPIM, the conventional RBF interpolation is augmented with suitable trigonometric basis functions that reflect the properties of the stresses in crack-tip fields. The performance of the enriched RBF meshfree shape functions is first investigated by fitting different surfaces. The surface-fitting results prove that, compared with the conventional RBF shape function, the enriched RBF shape function has: (1) similar accuracy in fitting a polynomial surface; (2) much better accuracy in fitting a trigonometric surface; and (3) similar interpolation stability, with no increase in the condition number of the RBF interpolation matrix. The enriched RBF shape function therefore not only retains all the advantages of the conventional RBF shape function, but can also accurately reflect the properties of the stresses in crack-tip fields. The system of equations for the crack analysis is then derived from the enriched RBF meshfree shape function and the meshfree weak form. Several linear fracture mechanics problems are simulated using the newly developed e-RPIM. The results demonstrate that the present e-RPIM is accurate and stable, and has good potential to develop into a practical simulation tool for fracture mechanics problems.
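The enrichment idea in the abstract — augmenting an RBF interpolant with extra basis functions so that it reproduces a target class of surfaces much more accurately — can be sketched in a few lines. This is a hypothetical 1D illustration (multiquadric kernel, sin/cos enrichment via the standard constrained system), not the authors' e-RPIM implementation:

```python
import numpy as np

def rbf_fit(x, f, basis=None, c=1.0):
    """Fit an RBF interpolant, optionally augmented (enriched) with extra
    basis functions, using the constrained system
    [[Phi, P], [P.T, 0]] [a; b] = [f; 0]."""
    r = np.abs(x[:, None] - x[None, :])
    Phi = np.sqrt(r**2 + c**2)          # multiquadric RBF kernel
    if basis is None:
        A, rhs = Phi, f
    else:
        P = np.column_stack([g(x) for g in basis])
        m = P.shape[1]
        A = np.block([[Phi, P], [P.T, np.zeros((m, m))]])
        rhs = np.concatenate([f, np.zeros(m)])
    coef = np.linalg.solve(A, rhs)
    def interp(xq):
        rq = np.abs(xq[:, None] - x[None, :])
        val = np.sqrt(rq**2 + c**2) @ coef[:len(x)]
        if basis is not None:
            val += np.column_stack([g(xq) for g in basis]) @ coef[len(x):]
        return val
    return interp

# Fit a "trigonometric surface" analogue from sparse samples.
x = np.linspace(0, 2 * np.pi, 9)
f = np.sin(3 * x)
xq = np.linspace(0, 2 * np.pi, 101)
plain = rbf_fit(x, f)
enriched = rbf_fit(x, f, basis=[np.ones_like,
                                lambda t: np.sin(3 * t),
                                lambda t: np.cos(3 * t)])
err_plain = np.max(np.abs(plain(xq) - np.sin(3 * xq)))
err_enr = np.max(np.abs(enriched(xq) - np.sin(3 * xq)))
```

Because the target lies in the span of the enrichment functions, the enriched fit reproduces it to machine precision, while the plain RBF fit undersamples the oscillation — mirroring finding (2) in the abstract.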

Relevance:

20.00%

Publisher:

Abstract:

In the study of traffic safety, expected crash frequencies across sites are generally estimated via the negative binomial model, assuming time-invariant safety. Since the time-invariant safety assumption may be invalid, Hauer (1997) proposed a modified empirical Bayes (EB) method. Despite the modification, no attempt has been made to examine the generalisable form of the marginal distribution resulting from the modified EB framework. Because the hyper-parameters needed to apply the modified EB method are not readily available, an assessment is lacking of how accurately the modified EB method estimates safety in the presence of time-variant safety and regression-to-the-mean (RTM) effects. This study derives the closed-form marginal distribution and reveals that the marginal distribution in the modified EB method is equivalent to the negative multinomial (NM) distribution, which is essentially the same as the likelihood function used in the random effects Poisson model. As a result, this study shows that the gamma posterior distribution from the multivariate Poisson-gamma mixture can be estimated using the NM model or the random effects Poisson model. This study also shows that the estimation errors from the modified EB method are systematically smaller than those from the comparison group method, because it simultaneously accounts for RTM and time-variant safety effects. Hence, the modified EB method via the NM model is a generalisable method for estimating safety in the presence of time-variant safety and RTM effects.
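The EB shrinkage that underlies Hauer's framework can be illustrated with the standard (unmodified) gamma-Poisson calculation; the modification and its NM marginal are what the paper itself derives. The numbers below are hypothetical:

```python
# Empirical Bayes estimate of expected crash frequency at a site.
# A negative binomial safety performance function gives a prior with
# mean mu (the SPF prediction) and dispersion phi, i.e. a gamma prior
# with shape phi and rate phi/mu.  Observing x crashes in one period
# gives the posterior mean
#   E[lambda | x] = w*mu + (1 - w)*x,   with w = phi / (phi + mu).
def eb_estimate(mu, phi, x):
    w = phi / (phi + mu)
    return w * mu + (1.0 - w) * x

# Illustrative numbers: SPF predicts 2.0 crashes/yr, dispersion 1.5,
# site recorded 6 crashes -> the estimate shrinks 6 toward 2,
# correcting for regression to the mean.
est = eb_estimate(mu=2.0, phi=1.5, x=6)
```

The weight `w` controls how strongly the noisy site count is pulled toward the model prediction; the RTM correction comes entirely from this shrinkage.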

Relevance:

20.00%

Publisher:

Abstract:

Objective: Alcohol-related implicit (preconscious) cognitive processes are established and unique predictors of alcohol use, but most research in this area has focused on alcohol-related implicit cognition and anxiety. This study extends this work into the area of depressed mood by testing a cognitive model that combines traditional explicit (conscious and considered) beliefs, implicit alcohol-related memory associations (AMAs), and self-reported drinking behavior. Method: Using a sample of 106 university students, depressed mood was manipulated using a musical mood-induction procedure immediately prior to completion of implicit and then explicit alcohol-related cognition measures. A bootstrapped two-group (weak/strong expectancies of negative affect and tension reduction) structural equation model was used to examine how mood changes and alcohol-related memory associations varied across groups. Results: Expectancies of negative affect moderated the association of depressed mood and AMAs, but there was no such association for tension-reduction expectancy. Conclusion: Subtle mood changes may unconsciously trigger alcohol-related memories in vulnerable individuals. The results have implications for addressing subtle fluctuations in depressed mood among young adults at risk of alcohol problems.

Relevance:

20.00%

Publisher:

Abstract:

One of the research focuses in the integer least squares problem is the decorrelation technique, which reduces the number of integer parameter search candidates and improves the efficiency of the integer parameter search method. It remains a challenging issue in determining carrier-phase ambiguities and plays a critical role in the future of high-precision GNSS positioning. Currently, three main decorrelation techniques are employed: integer Gaussian decorrelation, the Lenstra-Lenstra-Lovász (LLL) algorithm and the inverse integer Cholesky decorrelation (IICD) method. Although the performance of these three state-of-the-art methods has been proven and demonstrated, there is still potential for further improvement. To measure the performance of decorrelation techniques, the condition number is usually used as the criterion. Additionally, the number of grid points in the search space can be used directly as a performance measure, since it denotes the size of the search space. However, a smaller initial volume of the search ellipsoid does not always mean a smaller number of candidates. This research proposes a modified inverse integer Cholesky decorrelation (MIICD) method which improves the decorrelation performance over the other three techniques. The decorrelation performance of these methods was evaluated based on the condition number of the decorrelation matrix, the number of search candidates and the initial volume of the search space. Additionally, the success rate of the decorrelated ambiguities was calculated for all the methods to investigate the performance of ambiguity validation. The performance of the different decorrelation methods was tested and compared using both simulated and real data. The simulation scenarios employ an isotropic probabilistic model using a predetermined eigenvalue and without any geometry or weighting-system constraints. The MIICD method outperformed the other three methods, with conditioning improvements over the LAMBDA method of 78.33% and 81.67% without and with the eigenvalue constraint, respectively. The real-data scenarios cover both the single-constellation and dual-constellation cases. Experimental results demonstrate that, compared with LAMBDA, the MIICD method significantly improves the efficiency of reducing the condition number, by 78.65% and 97.78% in the single-constellation and dual-constellation cases, respectively. It also improves the number of search candidate points by 98.92% and 100% in the single- and dual-constellation cases.
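For readers unfamiliar with decorrelation, a minimal 2x2 integer Gaussian decorrelation step (one of the three baseline techniques named above, not the proposed MIICD) shows how a unimodular transform lowers the condition number of an ambiguity covariance matrix. The matrix values are hypothetical:

```python
import numpy as np

def gauss_decorrelate_2d(Q):
    """One integer Gaussian decorrelation step on a 2x2 ambiguity
    covariance matrix: apply a unimodular transform Z (integer entries,
    det = +/-1, so integer ambiguities stay integers) that reduces the
    off-diagonal correlation."""
    mu = int(round(Q[0, 1] / Q[0, 0]))
    Z = np.array([[1, 0], [-mu, 1]], dtype=float)
    return Z, Z @ Q @ Z.T

Q = np.array([[4.25, 3.90],
              [3.90, 4.25]])          # strongly correlated ambiguities
Z, Qz = gauss_decorrelate_2d(Q)
cond_before = np.linalg.cond(Q)
cond_after = np.linalg.cond(Qz)
```

Because `Z` is unimodular, the transformed search space contains the same integer candidates, but the better-conditioned `Qz` yields a far smaller effective search region — the criterion the abstract uses to compare methods.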

Relevance:

20.00%

Publisher:

Abstract:

A recent advance in biosecurity surveillance design aims to benefit island conservation through early and improved detection of incursions by non-indigenous species. The novel aspects of the design are that it achieves a specified power of detection in a cost-managed system, while acknowledging heterogeneity of risk in the study area and stratifying the area to target surveillance deployment. The design also utilises a variety of surveillance system components, such as formal scientific surveys, trapping methods, and incidental sightings by non-biologist observers. These advances in design were applied to black rats (Rattus rattus), representing the group of invasive rats including R. norvegicus and R. exulans, which are potential threats to Barrow Island, Australia, a high-value conservation nature reserve where a proposed liquefied natural gas development is a potential source of incursions. Rats are important to consider as they are prevalent invaders worldwide, difficult to detect early when present in low numbers, and able to spread and establish relatively quickly after arrival. The 'exemplar' design for the black rat is then applied in a manner that enables the detection of a range of non-indigenous rat species that could potentially be introduced. Many of the design decisions were based on expert opinion, as gaps exist in the empirical data. The surveillance system was able to take into account factors such as collateral effects on native species, the availability of limited resources on an offshore island, financial costs, demands on expertise and other logistical constraints. We demonstrate the flexibility and robustness of the surveillance system and discuss how it could be updated as empirical data are collected to supplement expert opinion and provide a basis for adaptive management. Overall, the surveillance system promotes an efficient use of resources while providing defined power to detect early rat incursions, translating to reduced environmental, resourcing and financial costs.
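The "specified power of detection" can be illustrated with the standard surveillance-sensitivity calculation: the probability that at least one component of the system detects an incursion. The component coverages and sensitivities below are hypothetical, not Barrow Island values:

```python
# System sensitivity of a multi-component surveillance system:
# probability that at least one component detects an incursion,
# assuming independent components, each with coverage p (chance the
# incursion is exposed to the component) and per-exposure detection
# sensitivity se.
def system_sensitivity(components):
    miss = 1.0
    for p, se in components:
        miss *= 1.0 - p * se        # probability this component misses
    return 1.0 - miss

# Hypothetical mix: trap grid, formal survey, incidental sightings.
components = [(0.8, 0.6), (0.5, 0.9), (0.9, 0.2)]
power = system_sensitivity(components)
```

Designers can invert this calculation — adding or re-deploying components per risk stratum until `power` reaches the specified target — which is the cost-managed trade-off the abstract describes.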

Relevance:

20.00%

Publisher:

Abstract:

A review of the literature on irrigation-induced agricultural development (IIAD) reveals that: (1) the magnitude, sensitivity and distribution of the social welfare effects of IIAD are not fully analysed; (2) the impacts of excessive pesticide use on farmers' health are not adequately explained; (3) no analysis estimates the relationship between farm-level efficiency and the overuse of agro-chemical inputs under imperfect markets; and (4) the method of incorporating groundwater extraction costs is misleading. This PhD thesis investigates these issues using primary data, along with secondary data, from Sri Lanka. The overall findings of the thesis can be summarised as follows. First, the thesis demonstrates that Sri Lanka has gained a positive welfare change as a result of introducing new irrigation technology. The change in consumer surplus is Rs. 48,236 million, while the change in producer surplus is Rs. 14,274 million between 1970 and 2006. The results also show that the long-run benefits and costs of IIAD depend critically on the magnitude of the expansion of the irrigated area, as well as the competition faced by traditional farmers (agricultural crowding-out effects). The traditional sector's ability to compete with the modern sector depends on productivity improvements, reductions in production costs and future structural changes (spillover effects). Second, the thesis findings on agricultural pesticide use show that, on average, a farmer incurs a cost of approximately Rs. 590 to 800 per month during a typical cultivation period due to exposure to pesticides. The average loss in earnings per farmer during a typical cultivation season is Rs. 475 per month for the 'hospitalised' sample and approximately Rs. 345 per month for the 'general' farmers group. However, the average willingness to pay (WTP) to avoid exposure to pesticides is approximately Rs. 950 and Rs. 620 for the 'hospitalised' and 'general' farmer samples, respectively. The estimated percentage contributions to WTP from health costs, lost earnings, mitigating expenditure and disutility are 29, 50, 5 and 16 per cent, respectively, for hospitalised farmers, and 32, 55, 8 and 5 per cent, respectively, for 'general' farmers. It is also shown that, given market imperfections for most agricultural inputs, farmers overuse pesticides in the expectation of higher future returns. This has increased inefficiency in farming practices, which is not understood by the farmers. Third, it is found that various groundwater depletion studies in the economics literature have provided misleading optimal water extraction levels, owing to a failure to incorporate all production costs in the relevant models. Only by incorporating quality changes alongside quantity deterioration is it possible to derive socially optimal levels. Empirical results clearly show that the benefit per hectare per month of avoiding both the cost of deepening agro-wells by five feet from the existing average and the cost of maintaining the water salinity level at 1.8 mmhos/cm is approximately Rs. 4,350 for farmers in the Anuradhapura district and Rs. 5,600 for farmers in the Matale district.

Relevance:

20.00%

Publisher:

Abstract:

Background: Traditional causal modeling of health interventions tends to be linear in nature and lacks multidisciplinarity. Consequently, strategies for exercise prescription in health maintenance are typically group based and focused on the role of a common optimal health status template toward which all individuals should aspire. Materials and methods: In this paper, we discuss inherent weaknesses of traditional methods and introduce an approach to exercise training based on neurobiological system variability. The significance of neurobiological system variability in differential learning and training is highlighted. Results: Our theoretical analysis revealed differential training as a method by which neurobiological system variability could be harnessed to facilitate the health benefits of exercise training. This approach emphasizes the importance of individualized programs in rehabilitation and exercise, rather than group-based strategies for exercise prescription. Conclusion: Research is needed on the potential benefits of differential training as an approach to physical rehabilitation and exercise prescription that could counteract the psychological and physical effects of disease and illness in subelite populations. For example, enhancing the complexity and variability of movement patterns in exercise prescription programs might alleviate the effects of depression in nonathletic populations and the physical effects of repetitive strain injuries experienced by athletes in elite and developing sport programs.

Relevance:

20.00%

Publisher:

Abstract:

Evaluation, selection and, finally, decision making are important issues that engineers face over the course of projects. Engineers apply mathematical and non-mathematical methods to make accurate and correct decisions whenever needed. As widespread as these methods are, the effect of the chosen method on the outputs achieved and the decisions made remains open to question. This is even more contentious where the evaluation is made among non-quantitative alternatives. In civil engineering and construction management problems, the criteria include both quantitative and qualitative ones, such as aesthetics, construction duration, building and operation costs, and environmental considerations. As a result, decision making frequently takes place among non-quantitative alternatives. It should be noted that traditional comparison methods, built on clear-cut and inflexible mathematics, have long been criticized. This paper gives a brief review of traditional methods for evaluating alternatives and offers a new decision-making method using fuzzy calculations. The main focus of this research is engineering issues that have a flexible nature and vague borders. The suggested method makes the evaluation analysable for decision makers and is also capable of handling multi-criteria and multi-referee problems. To ease the calculations, a program named DeMA is introduced.
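A minimal sketch of the kind of fuzzy evaluation the paper describes — qualitative criteria rated as triangular fuzzy numbers, then aggregated into a crisp weighted score. Centroid defuzzification and all data are assumptions for illustration; this is not the DeMA program itself:

```python
# Fuzzy multi-criteria scoring with triangular fuzzy numbers (l, m, u).
# Each alternative gets a fuzzy rating per criterion; a crisp weighted
# score is obtained by centroid defuzzification (l + m + u) / 3.
def defuzzify(tfn):
    l, m, u = tfn
    return (l + m + u) / 3.0

def score(ratings, weights):
    return sum(w * defuzzify(r) for r, w in zip(ratings, weights))

weights = [0.5, 0.3, 0.2]                 # cost, duration, aesthetics
alternatives = {
    "design_A": [(6, 7, 8), (5, 6, 7), (8, 9, 10)],
    "design_B": [(7, 8, 9), (4, 5, 6), (5, 6, 7)],
}
ranked = sorted(alternatives,
                key=lambda a: score(alternatives[a], weights),
                reverse=True)
```

The fuzzy ratings let referees express the "vague borders" the paper mentions (e.g. "roughly 7, anywhere from 6 to 8") while still producing a defensible ranking.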

Relevance:

20.00%

Publisher:

Abstract:

Traditional approaches to the use of machine learning algorithms do not provide a method for learning multiple tasks in one shot on an embodied robot. It is proposed that grounding actions within the sensory space leads to the development of action-state relationships which can be re-used despite a change in task. A novel approach called an Experience Network is developed and assessed on a real-world robot required to perform three separate tasks. After grounded representations were developed in the initial task, only minimal further learning was required to perform the second and third tasks.

Relevance:

20.00%

Publisher:

Abstract:

In the past 20 years, mesoporous materials have attracted great attention due to their large surface area, ordered mesoporous structure, tunable pore size and volume, and well-defined surface properties. They have many potential applications, such as catalysis, adsorption/separation and biomedicine [1]. Recently, studies of the applications of mesoporous materials have been extended into the field of biomaterials science. A new class of bioactive glass, referred to as mesoporous bioactive glass (MBG), was first developed in 2004. This material has a highly ordered mesopore channel structure with a pore size ranging from 5-20 nm [1]. Compared to non-mesoporous bioactive glass (BG), MBG possesses a more optimal surface area and pore volume and improved in vitro apatite mineralization in simulated body fluids [1,2]. Vallet-Regí et al. have systematically investigated the in vitro apatite formation of different types of mesoporous materials, and demonstrated that an apatite-like layer can form on the surfaces of Mobil Composition of Matter (MCM)-48, hexagonal mesoporous silica (SBA-15), phosphorus-doped MCM-41, bioglass-containing MCM-41 and ordered mesoporous MBG, allowing their use in biomedical engineering for tissue regeneration [2-4]. Chang et al. have found that MBG particles can be used in a bioactive drug-delivery system [5,6]. Our study has shown that MBG powders, when incorporated into a poly(lactide-co-glycolide) (PLGA) film, significantly enhance the apatite-mineralization ability and cell response of PLGA films compared to BG [7]. These studies suggest that MBG is a very promising bioactive material for bone regeneration.

It is known that for bone defect repair, tissue engineering offers an alternative approach by creating three-dimensional (3D) porous scaffolds, which have advantages over powders or granules because they provide an interconnected macroporous network that allows cell migration, nutrient delivery, bone ingrowth and, eventually, vascularization [8]. For this reason, we seek to apply MBG to bone tissue engineering by developing MBG scaffolds. However, one of the main disadvantages of MBG scaffolds is their low mechanical strength and high brittleness; another is their very quick degradation, which leads to an unstable surface for bone cell growth and limits their applications. Silk fibroin, a member of a family of native biomaterials, has been widely studied for bone and cartilage repair applications in the form of pure silk or silk-composite scaffolds [9-14]. Compared to traditional synthetic polymer materials, such as PLGA and poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV), the chief advantage of silk fibroin is its water-soluble nature, which eliminates the need for the organic solvents, often highly cytotoxic, used in scaffold preparation [15]. Other advantages of silk scaffolds are their excellent mechanical properties, controllable biodegradability and cytocompatibility [15-17]. However, for the purposes of bone tissue engineering, the osteoconductivity of pure silk scaffolds is suboptimal. It is expected that combining MBG with silk to produce MBG/silk composite scaffolds would greatly improve their physiochemical and osteogenic properties for bone tissue engineering applications. Therefore, in this chapter, we introduce the research development of MBG/silk scaffolds for bone tissue engineering.

Relevance:

20.00%

Publisher:

Abstract:

New-generation biomaterials for bone regeneration should be highly bioactive, resorbable and mechanically strong. Mesoporous bioactive glass (MBG), as a novel bioactive material, has been used in the study of bone regeneration due to its excellent bioactivity, degradation and drug-delivery ability; however, constructing a 3D MBG scaffold (like other bioactive inorganic scaffolds) for bone regeneration remains a significant challenge due to its inherent brittleness and low strength. In this brief communication, we report a new facile method to prepare hierarchical and multifunctional MBG scaffolds with controllable pore architecture, excellent mechanical strength and mineralization ability for bone regeneration, using a modified 3D-printing technique with polyvinyl alcohol (PVA) as a binder. The method provides a new way to address issues common to inorganic scaffold materials, such as uncontrollable pore architecture, low strength, high brittleness and the requirement for a second sintering at high temperature. The obtained 3D-printed MBG scaffolds possess a high mechanical strength, about 200 times that of MBG scaffolds produced by the traditional polyurethane-foam template method. They have highly controllable pore architecture, excellent apatite-mineralization ability and sustained drug-delivery properties. Our study indicates that 3D-printed MBG scaffolds may be an excellent candidate for bone regeneration.

Relevance:

20.00%

Publisher:

Abstract:

Technology-mediated collaboration processes have been extensively studied for over a decade. Most applications of collaboration concepts reported in the literature focus on enhancing the efficiency and effectiveness of decision-making processes in objective and well-structured workflows. However, relatively few previous studies have investigated the application of collaboration schemes to problems of a subjective and unstructured nature. In this paper, we explore a new intelligent collaboration scheme for fashion design which, by nature, relies heavily on human judgment and creativity. Techniques such as multicriteria decision making, fuzzy logic, and artificial neural network (ANN) models are employed. Industrial data sets are used for the analysis. Our experimental results suggest that the proposed scheme exhibits significant improvement over the traditional method in terms of time-cost effectiveness, and a company interview with design professionals has confirmed its effectiveness and significance.

Relevance:

20.00%

Publisher:

Abstract:

Association rule mining has contributed to many advances in the area of knowledge discovery. However, the quality of the discovered association rules is a big concern and has drawn more and more attention recently. One problem with the quality of the discovered association rules is the huge size of the extracted rule set. Often for a dataset, a huge number of rules can be extracted, but many of them can be redundant to other rules and thus useless in practice. Mining non-redundant rules is a promising approach to solve this problem. In this paper, we first propose a definition for redundancy, then propose a concise representation, called a Reliable basis, for representing non-redundant association rules. The Reliable basis contains a set of non-redundant rules which are derived using frequent closed itemsets and their generators instead of using frequent itemsets that are usually used by traditional association rule mining approaches. An important contribution of this paper is that we propose to use the certainty factor as the criterion to measure the strength of the discovered association rules. Using this criterion, we can ensure the elimination of as many redundant rules as possible without reducing the inference capacity of the remaining extracted non-redundant rules. We prove that the redundancy elimination, based on the proposed Reliable basis, does not reduce the strength of belief in the extracted rules. We also prove that all association rules, their supports and confidences, can be retrieved from the Reliable basis without accessing the dataset. Therefore the Reliable basis is a lossless representation of association rules. Experimental results show that the proposed Reliable basis can significantly reduce the number of extracted rules. We also conduct experiments on the application of association rules to the area of product recommendation. 
The experimental results show that the non-redundant association rules extracted using the proposed method retain the same inference capacity as the entire rule set. This result indicates that using only the non-redundant rules is sufficient to solve real problems, without needing the entire rule set.
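As a sketch, one common formulation of the certainty factor measure adopted for the Reliable basis (the paper's exact definition may differ in detail):

```python
# Certainty factor of an association rule A -> B: how much observing A
# changes belief in B relative to B's baseline support.
#   CF = (conf - supp(B)) / (1 - supp(B))  if conf > supp(B)  (confirming)
#   CF = (conf - supp(B)) / supp(B)        if conf < supp(B)  (disconfirming)
#   CF = 0                                 otherwise
def certainty_factor(conf_ab, supp_b):
    if conf_ab > supp_b:
        return (conf_ab - supp_b) / (1.0 - supp_b)
    if conf_ab < supp_b:
        return (conf_ab - supp_b) / supp_b
    return 0.0

# A rule with confidence 0.9 for an item B that already appears in 60%
# of transactions only confirms B with strength 0.75, not 0.9.
cf = certainty_factor(0.9, 0.6)
```

Unlike raw confidence, the certainty factor discounts rules whose consequent is frequent anyway, which is why it can serve as the strength criterion when pruning redundant rules.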

Relevance:

20.00%

Publisher:

Abstract:

Recommender systems are one of the recent inventions for dealing with ever-growing information overload. Collaborative filtering seems to be the most popular technique in recommender systems. With sufficient background information on item ratings, its performance is promising. But research shows that it performs very poorly in a cold-start situation, where previous rating data is sparse. As an alternative, trust can be used for neighbour formation to generate automated recommendations. User-assigned explicit trust ratings, such as how much users trust each other, are used for this purpose. However, reliable explicit trust data is not always available. In this paper we propose a new method of developing trust networks based on users' interest similarity in the absence of explicit trust data. To identify interest similarity, we use users' personalized tagging information. This trust network can be used to find neighbours for making automated recommendations. Our experimental results show that the proposed trust-based method outperforms the traditional collaborative filtering approach, which uses user rating data. Its performance improves even further when we utilize trust propagation techniques to broaden the neighbourhood.
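The core step — linking users whose tag profiles are similar, in place of explicit trust ratings — can be sketched with cosine similarity over tag counts. The users, tags and threshold below are hypothetical illustrations, not the paper's data:

```python
# Build an implicit trust network from tag-based interest similarity:
# represent each user by a bag of tags and link users whose cosine
# similarity exceeds a threshold.
from collections import Counter
from math import sqrt

def cosine(u, v):
    common = set(u) & set(v)
    dot = sum(u[t] * v[t] for t in common)
    nu = sqrt(sum(c * c for c in u.values()))
    nv = sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

tags = {
    "alice": Counter({"python": 3, "ml": 2, "music": 1}),
    "bob":   Counter({"python": 2, "ml": 3}),
    "carol": Counter({"cooking": 4, "music": 1}),
}
threshold = 0.5
trust_edges = {(a, b) for a in tags for b in tags
               if a < b and cosine(tags[a], tags[b]) > threshold}
```

The resulting edges serve as the neighbourhood for recommendation; trust propagation would then extend each user's neighbourhood along chains of such edges.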

Relevance:

20.00%

Publisher:

Abstract:

A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operation and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators, and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely and lossless; however, because of random transmission delays and packet losses, the performance of a control system may be badly degraded, and the control system may even be rendered unstable. The main challenge of NCS design is to maintain and improve the stable control performance of an NCS. To achieve this, both communication and control methodologies have to be designed. In recent decades, Ethernet and 802.11 networks have been introduced into control networks and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these communication networks suitable for control systems in industrial environments. From the perspective of networking, communication protocols need to be designed to satisfy NCS communication requirements such as real-time communication and high-precision clock consistency. From the perspective of control, methods to compensate for network-induced delays and packet losses are important for NCS design. To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. First, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Second, a high-precision relative clock synchronisation protocol is designed and implemented. Third, transmission delays in 802.11 networks for soft real-time control applications are modelled using a Markov chain in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern; this model accurately captures the trade-off between real-time performance and throughput. Furthermore, a cross-layer optimisation scheme, featuring application-layer flow-rate adaptation, is designed to achieve a trade-off between real-time and throughput performance in a typical NCS scenario with a wireless local area network. Fourth, as a co-design approach for both network and controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmission from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
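A scalar sketch of the predictive-compensation idea — roll a plant model forward over the known sensor delay and feed back the predicted state. The parameters are illustrative (a fixed delay, perfect model, no packet loss); this is not the thesis's actual co-design scheme:

```python
# Model-based compensation for a fixed sensor delay of d steps: the
# controller receives x[k-d], rolls the plant model forward through
# the d buffered control inputs to predict x[k], then applies state
# feedback to the prediction.  Scalar plant x+ = a*x + b*u with a > 1
# (open-loop unstable); a - b*K = 0.5 gives a stable closed loop.
a, b, K, d = 1.1, 1.0, 0.6, 2
x, u_buf = 1.0, [0.0] * d          # plant state, last d applied inputs
history = [x]
for k in range(40):
    x_meas = history[k - d] if k >= d else history[0]   # delayed sample
    x_pred = x_meas
    for u_past in u_buf:           # roll model over the delay window
        x_pred = a * x_pred + b * u_past
    u = -K * x_pred                # feedback on the predicted state
    x = a * x + b * u              # plant update (sensor delay only)
    u_buf.pop(0)
    u_buf.append(u)
    history.append(x)
```

With a perfect model the prediction equals the true state once the input buffer fills, so the delayed loop behaves like the undelayed one; in practice model error and random delays degrade this, which is what the thesis's compensation method and simulation environment are built to study.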