20 results for CHARGE DECOMPOSITION ANALYSIS

in Deakin Research Online - Australia


Relevance: 100.00%

Abstract:

This study investigates the determinants of the fertility rate in Taiwan over the period 1966–2001. Consistent with theory, the key explanatory variables in Taiwan's fertility model are real income, the infant mortality rate, female education and the female labor force participation rate. The test for cointegration is based on the recently developed bounds testing procedure, while the long-run and short-run elasticities are based on the autoregressive distributed lag model. Among our key results, female education and the female labor force participation rate are found to be the key determinants of fertility in Taiwan in the long run. The variance decomposition analysis indicates that in the long run approximately 45 per cent of the variation in fertility is explained by the combined impact of female labor force participation, mortality and income, implying that socioeconomic development played an important role in the fertility transition in Taiwan. This result is consistent with the traditional structural hypothesis.
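As a rough illustration of the variance decomposition step (a minimal sketch only: it uses a forecast error variance decomposition from a VAR fitted in statsmodels, not the authors' ARDL-based estimates, and synthetic stand-in data rather than the Taiwanese series):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic stand-in for the annual series; the column names are assumptions.
rng = np.random.default_rng(0)
cols = ["fertility", "real_income", "infant_mortality", "female_education", "female_lfp"]
df = pd.DataFrame(rng.normal(size=(60, 5)).cumsum(axis=0), columns=cols)

model = VAR(df.diff().dropna())   # difference to work with roughly stationary series
res = model.fit(maxlags=3, ic="aic")

# Forecast error variance decomposition: the share of the variation in each series
# attributed to each shock at increasing horizons.
fevd = res.fevd(20)
print(fevd.summary())
```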

Relevance: 90.00%

Abstract:

In this paper, we analyse per capita income levels of China's three main regions (the western, eastern and central regions) using common cycle and common trend tests. Our main contribution is that we impose the common cycle and common trend restrictions in decomposing shocks into permanent and transitory components. We find that: (i) there is evidence of two cointegrating relationships and one common cycle; and (ii) the variance decomposition analysis of shocks provides evidence that, over short horizons, permanent shocks play a large role in explaining variations in regional per capita incomes.
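A hedged sketch of the common trend (cointegration) testing described above, using the Johansen trace test in statsmodels on synthetic series built around a single shared trend (so that two cointegrating relationships should be detected, as in the paper); the data, deterministic term and lag order are illustrative assumptions:

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Synthetic stand-in for the three regional per capita income series (in logs),
# generated with one common stochastic trend.
rng = np.random.default_rng(0)
trend = rng.normal(size=200).cumsum()
regions = np.column_stack([
    trend + rng.normal(scale=0.5, size=200),   # western
    trend + rng.normal(scale=0.5, size=200),   # eastern
    trend + rng.normal(scale=0.5, size=200),   # central
])

# Johansen trace test: with n series sharing k common stochastic trends,
# the cointegrating rank is n - k.
result = coint_johansen(regions, det_order=0, k_ar_diff=1)
for r, (stat, cvs) in enumerate(zip(result.lr1, result.cvt)):
    print(f"H0: rank <= {r}: trace = {stat:.2f}, 5% critical value = {cvs[1]:.2f}")
```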

Relevance: 90.00%

Abstract:

The thermal decomposition of PVA and PVA composites during the melting-crystallization process is still unclear owing to indistinct changes in chemical composition. Using graphene as a model, the decomposition properties of PVA and PVA/graphene composites were systematically analyzed under multiple melting-crystallization cycles, and a series of isothermal decomposition experiments around the melting-crystallization temperature was carried out to simulate the corresponding decomposition kinetics. Based on the multiple melting-crystallization cycles, the weight loss of PVA and PVA/graphene composites was successfully quantified. Further morphology investigation and chemical structure analysis indicated that the decomposition was non-uniformly distributed, which preserved the possibility of crystallization for the PVA and PVA/graphene composites after multiple heating-cooling cycles. In addition, isothermal decomposition analysis based on the reduced time plot approach and a model-free iso-conversional method indicated that the Avrami-Erofeev model matched the decomposition process of the neat PVA and the PG-0.3 composite well, while the Avrami-Erofeev and first-order models could precisely forecast the decomposition of the PG-0.9 composite. Both the multiple-cycle melting-crystallization and isothermal decomposition analyses demonstrated that graphene served as a decomposition accelerator throughout the thermal decomposition process, and that the decomposition of neat PVA and PVA/graphene composites was highly related to the band area ratios of the C-H and O-H vibrations in the Fourier transform infrared (FTIR) spectra.
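A minimal sketch of fitting an Avrami-Erofeev (JMAK-type) conversion curve to isothermal decomposition data, assuming the common integral form alpha(t) = 1 - exp(-(k t)^n); the conversion data below are synthetic, not the PVA measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def avrami_erofeev(t, k, n):
    """JMAK/Avrami-Erofeev conversion: alpha(t) = 1 - exp(-(k*t)**n)."""
    return 1.0 - np.exp(-(k * t) ** n)

# Synthetic isothermal conversion data (time in minutes), not the measured PVA curves.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 120.0, 60)
alpha = 1.0 - np.exp(-(0.03 * t) ** 2.1) + rng.normal(scale=0.01, size=t.size)

# Constrain k and n to positive ranges so the model stays well defined during fitting.
(k_fit, n_fit), _ = curve_fit(avrami_erofeev, t, alpha, p0=(0.01, 1.5),
                              bounds=([0.0, 0.1], [1.0, 5.0]))
print(f"fitted rate constant k = {k_fit:.4f} 1/min, Avrami exponent n = {n_fit:.2f}")
```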

Relevance: 80.00%

Abstract:

The year 1968 saw a major shift from univariate to multivariate methodological approaches to ratio-based modelling of corporate collapse. This was facilitated by the introduction of a new statistical tool called Multiple Discriminant Analysis (MDA). However, it did not take long before other statistical tools were developed. The primary objective in developing these tools was to derive models that would do at least as good a job as MDA while relying on fewer assumptions. With the introduction of new statistical tools, researchers became preoccupied with testing them in signalling collapse. Among the ratio-based approaches were Logit analysis, Neural Network analysis, Probit analysis, ID3, the Recursive Partitioning Algorithm, Rough Sets analysis, Decomposition analysis, the Going Concern Advisor, the Koundinya and Purl judgmental approach, Tabu Search and Mixed Logit analysis. Regardless of which methodological approach was chosen, most were compared to MDA. This paper reviews these various approaches, with emphasis on how they fared against MDA in signalling corporate collapse.
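As a rough illustration of the kind of head-to-head comparison the paper reviews (a sketch only: scikit-learn's linear discriminant analysis stands in for MDA, logistic regression for Logit analysis, and the financial-ratio data are synthetic):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for firm-level financial ratios with a collapse/non-collapse label.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           weights=[0.8, 0.2], random_state=0)

models = {
    "MDA (linear discriminant analysis)": LinearDiscriminantAnalysis(),
    "Logit analysis": LogisticRegression(max_iter=1000),
}
for name, clf in models.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.3f}")
```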

Relevance: 80.00%

Abstract:

In this paper we analyze per capita incomes of the G7 countries using the common cycles test developed by Vahid and Engle (Journal of Applied Econometrics, 8:341–360, 1993) and extended by Hecq et al. (Oxford Bulletin of Economics and Statistics, 62:511–532, 2000; Econometric Reviews, 21:273–307, 2002), and the common trend test developed by Johansen (Journal of Economic Dynamics and Control, 12:231–254, 1988). Our main contribution is that we impose the common cycle and common trend restrictions in decomposing the innovations into permanent and transitory components. Our main finding is that permanent shocks explain the bulk of the variations in incomes for the G7 countries over short time horizons, which is in sharp contrast to much of the recent literature. We attribute this to the greater forecasting accuracy achieved from the variance decomposition analysis, which we later confirm by performing a post-sample forecasting exercise.

Relevance: 40.00%

Abstract:

Purpose - The purpose of this paper is to analyse the interdependencies of the house price growth rates in Australian capital cities.
Design/methodology/approach - A vector autoregression model and variance decomposition are introduced to estimate and interpret the interdependences among the growth rates of regional house prices in Australia.
Findings - The results suggest the eight capital cities can be divided into three groups: Sydney and Melbourne; Canberra, Adelaide and Brisbane; and Hobart, Perth and Darwin.
Originality/value - Based on the structural vector autoregression model, this research develops an innovative approach to analysing the interdependence of regional house prices, built on a variance decomposition method.

Relevance: 30.00%

Abstract:

The paper utilises the Juhn, Murphy and Pierce (1991) decomposition to shed light on the pattern of slow male-female wage convergence in Australia over the 1980s. The analysis allows one to distinguish between the role of the wage structure and gender-specific effects. The central question addressed is whether rising wage inequality counteracted the forces of increased female investment in labour market skills, i.e. education and experience. The conclusion is that, in contrast to the US and the UK, Australian women do not appear to have been swimming against a tide of adverse wage structure changes.
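For reference, one common arrangement of the four-term Juhn-Murphy-Pierce decomposition of the change in the male-female wage gap between years s and t (the notation and reference-period choices here are assumptions, not taken from the paper): with wages modelled as $Y_{it} = X_{it}'\beta_t + \sigma_t\theta_{it}$ and $\Delta$ denoting the male-female difference in means, the gap is $D_t = \Delta X_t'\beta_t + \sigma_t\Delta\theta_t$ and its change decomposes as

```latex
D_t - D_s =
\underbrace{(\Delta X_t - \Delta X_s)'\beta_s}_{\text{observed skills}}
+ \underbrace{\Delta X_t'(\beta_t - \beta_s)}_{\text{observed prices (wage structure)}}
+ \underbrace{(\Delta\theta_t - \Delta\theta_s)\,\sigma_s}_{\text{residual ranking (gap effect)}}
+ \underbrace{\Delta\theta_t\,(\sigma_t - \sigma_s)}_{\text{residual inequality (unobserved prices)}}
```

In this arrangement the second and fourth terms capture wage structure effects, while the first and third capture gender-specific effects, which is the distinction the paper exploits.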

Relevance: 30.00%

Abstract:

Data mining refers to extracting or "mining" knowledge from large amounts of data. It is also described as "knowledge presentation", where visualization and knowledge representation techniques are used to present the mined knowledge to the user. Efficient algorithms to mine frequent patterns are crucial to many tasks in data mining. Since the Apriori algorithm was proposed in 1994, several methods have been proposed to improve its performance. However, most still adopt its candidate set generation-and-test approach. In addition, many methods do not generate all frequent patterns, making them inadequate for deriving association rules. The Pattern Decomposition (PD) algorithm significantly reduces the size of the dataset on each pass, making it more efficient to mine all frequent patterns in a large dataset. The algorithm avoids the costly process of candidate set generation and saves a large amount of counting time by evaluating support on the reduced datasets. In this paper, some existing frequent pattern generation algorithms are explored and compared. The results show that the PD algorithm outperforms an improved version of Apriori, named Direct Count of candidates & Prune transactions (DCP), by an order of magnitude, and is faster than an improved FP-tree method named Predictive Item Pruning (PIP). Further, PD is also more scalable than both DCP and PIP.
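For contrast with PD, a minimal, self-contained sketch of the candidate generation-and-test strategy that Apriori-style algorithms rely on (the transactions and minimum support below are made up; this is not the PD, DCP or PIP implementation):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Toy Apriori: generate candidate itemsets level by level and test their support."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items}   # level-1 candidates
    frequent, k = {}, 1
    while current:
        # Test step: count each candidate's support against the full dataset.
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Generation step: join frequent k-itemsets into (k + 1)-item candidates.
        current = {a | b for a, b in combinations(level, 2) if len(a | b) == k + 1}
        k += 1
    return frequent

data = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c", "e"}, {"a", "b", "c", "e"}]
for itemset, support in apriori(data, min_support=2).items():
    print(sorted(itemset), support)
```

PD avoids the generation step sketched above and instead shrinks the transaction set after each pass, as the abstract describes.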

Relevance: 30.00%

Abstract:

A new objective fabric pilling grading method based on wavelet texture analysis was developed. The method forms a complex texture feature vector from the wavelet detail coefficients at all decomposition levels and in the horizontal, vertical and diagonal orientations, permitting a much richer and more complete representation of the pilling texture in the image to be used as a basis for classification. Standard multi-factor classification techniques, principal components analysis and discriminant analysis, were then used to classify the pilling samples into five pilling degrees. A preliminary investigation of the method was performed using standard pilling image sets of knitted, woven and non-woven fabrics. The results showed that the method could successfully evaluate the pilling intensity of knitted, woven and non-woven fabrics, provided a suitable wavelet and the associated analysis scale are selected.
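A hedged sketch of this style of pipeline (wavelet detail-coefficient energies at every level and orientation, followed by PCA and linear discriminant analysis); the wavelet choice, feature definition and the random stand-in images and labels are illustrative assumptions, not the paper's standard pilling image sets:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def pilling_features(image, wavelet="db4", level=3):
    """Log-energy of the detail coefficients at every level and orientation."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = []
    for cH, cV, cD in coeffs[1:]:               # skip the approximation band
        for band in (cH, cV, cD):
            feats.append(np.log(np.mean(band ** 2) + 1e-12))
    return np.array(feats)

# Random stand-ins for fabric images labelled with pilling degrees 1-5.
rng = np.random.default_rng(0)
images = rng.normal(size=(100, 128, 128))
labels = rng.integers(1, 6, size=100)

X = np.array([pilling_features(img) for img in images])
clf = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
clf.fit(X, labels)
print("training accuracy on the synthetic set:", clf.score(X, labels))
```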

Relevance: 30.00%

Abstract:

A multiresolution technique based on a multiwavelet scale-space representation for stereo correspondence estimation is presented. The technique uses the well-known coarse-to-fine strategy, in which stereo correspondences are calculated at the coarsest resolution level and then refined up to the finest level. Vector coefficients of the multiwavelet transform modulus are used as correspondence features, where the modulus maxima define shift-invariant high-level features (multiscale edges) and the phase points along the normal of the feature surface. The technique addresses the estimation of optimal corresponding points and the corresponding 2D disparity maps. Illumination variation between the perspective views of the same scene is controlled using scale normalization at each decomposition level, dividing the detail-space coefficients by the approximation-space coefficients. The problems of ambiguity (explicitly) and occlusion (implicitly) are addressed using a geometric-topological refinement procedure. Geometric refinement is based on a symbolic tagging procedure introduced to keep only the most consistent matches in consideration. Symbolic tagging is performed based on the probability of occurrence and multiple thresholds. The whole procedure is constrained by the uniqueness and continuity of the corresponding stereo features. The comparative performance of the proposed algorithm against eight well-known algorithms from the literature is reported to validate its claims of promising performance.

Relevance: 30.00%

Abstract:

Recently, much attention has been given to mass spectrometry (MS) based disease classification, diagnosis and protein-based biomarker identification. Similar to microarray-based investigations, proteomic data generated by such high-throughput experiments often have a high feature-to-sample ratio. Moreover, biological information and patterns are compounded by data noise, redundancy and outliers. Thus, the development of algorithms and procedures for the analysis and interpretation of such data is of paramount importance. In this paper, we propose a hybrid system for analyzing such high dimensional data. The proposed method uses a k-means clustering based feature extraction and selection procedure to bridge filter and wrapper selection methods. The potentially informative mass/charge (m/z) markers selected by the filters are subjected to the k-means clustering algorithm for correlation and redundancy reduction, and a multi-objective Genetic Algorithm selector is then employed to identify discriminative m/z markers from the clusters. Experimental results obtained using the proposed method indicate that it is suitable for m/z biomarker selection and MS-based sample classification.
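A hedged sketch of the filter-to-wrapper bridge described above, on synthetic spectra; for brevity a greedy forward selection stands in for the paper's multi-objective Genetic Algorithm, and the filter score, cluster count and classifier are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 500))            # 60 spectra x 500 m/z features (synthetic)
y = rng.integers(0, 2, size=60)           # binary class labels (synthetic)

# 1) Filter step: keep the m/z features with the highest univariate F-scores.
F, _ = f_classif(X, y)
filtered = np.argsort(F)[-100:]

# 2) k-means step: cluster the filtered features by their profiles across samples
#    and keep the feature closest to each centroid, to reduce redundancy.
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X[:, filtered].T)
reps = []
for c in range(km.n_clusters):
    members = filtered[km.labels_ == c]
    if members.size:
        d = np.linalg.norm(X[:, members].T - km.cluster_centers_[c], axis=1)
        reps.append(members[np.argmin(d)])

# 3) Wrapper step (greedy stand-in for the paper's multi-objective GA):
#    add representatives one at a time while cross-validated accuracy improves.
selected, best = [], 0.0
clf = LogisticRegression(max_iter=1000)
for f in reps:
    score = cross_val_score(clf, X[:, selected + [f]], y, cv=5).mean()
    if score > best:
        selected.append(f)
        best = score
print("selected m/z feature indices:", selected, "cv accuracy:", round(best, 3))
```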

Relevance: 30.00%

Abstract:

The assessment of the direct and indirect requirements for energy is known as embodied energy analysis. For buildings, the direct energy includes that used primarily on site, while the indirect energy includes primarily the energy required for the manufacture of building materials. This thesis is concerned with the completeness and reliability of embodied energy analysis methods. Previous methods tend to address either one of these issues, but not both at the same time. Industry-based methods are incomplete. National statistical methods, while comprehensive, are a ‘black box’ and are subject to errors. A new hybrid embodied energy analysis method is derived to optimise the benefits of previous methods while minimising their flaws. In industry-based studies, known as ‘process analyses’, the energy embodied in a product is traced laboriously upstream by examining the inputs to each preceding process towards raw materials. Process analyses can be significantly incomplete, due to increasing complexity. The other major embodied energy analysis method, ‘input-output analysis’, comprises the use of national statistics. While the input-output framework is comprehensive, many inherent assumptions make the results unreliable. Hybrid analysis methods involve the combination of the two major embodied energy analysis methods discussed above, either based on process analysis or input-output analysis. The intention in both hybrid analysis methods is to reduce errors associated with the two major methods on which they are based. However, the problems inherent to each of the original methods tend to remain, to some degree, in the associated hybrid versions. Process-based hybrid analyses tend to be incomplete, due to the exclusions associated with the process analysis framework. However, input-output-based hybrid analyses tend to be unreliable because the substitution of process analysis data into the input-output framework causes unwanted indirect effects. A key deficiency in previous input-output-based hybrid analysis methods is that the input-output model is a ‘black box’, since important flows of goods and services with respect to the embodied energy of a sector cannot be readily identified. A new input-output-based hybrid analysis method was therefore developed, requiring the decomposition of the input-output model into mutually exclusive components (ie, ‘direct energy paths’). A direct energy path represents a discrete energy requirement, possibly occurring one or more transactions upstream from the process under consideration. For example, the energy required directly to manufacture the steel used in the construction of a building would represent a direct energy path of one non-energy transaction in length. A direct energy path comprises a ‘product quantity’ (for example, the total tonnes of cement used) and a ‘direct energy intensity’ (for example, the energy required directly for cement manufacture, per tonne). The input-output model was decomposed into direct energy paths for the ‘residential building construction’ sector. It was shown that 592 direct energy paths were required to describe 90% of the overall total energy intensity for ‘residential building construction’. By extracting direct energy paths using yet smaller threshold values, they were shown to be mutually exclusive. Consequently, the modification of direct energy paths using process analysis data does not cause unwanted indirect effects. 
A non-standard individual residential building was then selected to demonstrate the benefits of the new input-output-based hybrid analysis method in cases where the products of a sector may not be similar. Particular direct energy paths were modified with case-specific process analysis data. Product quantities and direct energy intensities were derived and used to modify some of the direct energy paths. The intention of this demonstration was to determine whether 90% of the total embodied energy calculated for the building could comprise the process analysis data normally collected for the building. However, it was found that only 51% of the total comprised normally collected process analysis data. The integration of process analysis data with 90% of the direct energy paths by value was unsuccessful because:
• typically only one of the direct energy path components was modified using process analysis data (ie, either the product quantity or the direct energy intensity);
• of the complexity of the paths derived for ‘residential building construction’; and
• of the lack of reliable and consistent process analysis data from industry, for both product quantities and direct energy intensities.
While the input-output model used was the best available for Australia, many errors were likely to be carried through to the direct energy paths for ‘residential building construction’. Consequently, both the value and relative importance of the direct energy paths for ‘residential building construction’ were generally found to be a poor model for the demonstration building. This was expected. Nevertheless, in the absence of better data from industry, the input-output data is likely to remain the most appropriate for completing the framework of embodied energy analyses of many types of products, even in non-standard cases. ‘Residential building construction’ was one of the 22 most complex Australian economic sectors (ie, comprising those requiring between 592 and 3215 direct energy paths to describe 90% of their total energy intensities). Consequently, for the other 87 non-energy sectors of the Australian economy, the input-output-based hybrid analysis method is likely to produce more reliable results than those calculated for the demonstration building using the direct energy paths for ‘residential building construction’. For more complex sectors than ‘residential building construction’, the new input-output-based hybrid analysis method derived here allows available process analysis data to be integrated with the input-output data in a comprehensive framework. The proportion of the result comprising the more reliable process analysis data can be calculated and used as a measure of the reliability of the result for that product or part of the product being analysed (for example, a building material or component). To ensure that future applications of the new input-output-based hybrid analysis method produce reliable results, new sources of process analysis data are required, including for such processes as services (for example, ‘banking’) and processes involving the transformation of basic materials into complex products (for example, steel and copper into an electric motor).
However, even considering the limitations of the demonstration described above, the new input-output-based hybrid analysis method developed achieved the aim of the thesis: to develop a new embodied energy analysis method that allows reliable process analysis data to be integrated into the comprehensive, yet unreliable, input-output framework.

Plain language summary: Embodied energy analysis comprises the assessment of the direct and indirect energy requirements associated with a process. For example, the construction of a building requires the manufacture of steel structural members, and thus indirectly requires the energy used directly and indirectly in their manufacture. Embodied energy is an important measure of ecological sustainability because energy is used in virtually every human activity and many of these activities are interrelated. This thesis is concerned with the relationship between the completeness of embodied energy analysis methods and their reliability. Previous industry-based methods, while reliable, are incomplete, and previous national statistical methods, while comprehensive, are a ‘black box’ subject to errors. A new method is derived, involving the decomposition of the comprehensive national statistical model into components that can be modified discretely using the more reliable industry data, and is demonstrated for an individual building. The demonstration failed to integrate enough industry data into the national statistical model, due to the unexpected complexity of the national statistical data and the lack of available industry data regarding energy and non-energy product requirements. These findings highlight the flaws in previous methods. Reliable process analysis and input-output data are required, particularly for those processes that could not be examined in the demonstration of the new embodied energy analysis method. This includes the energy requirements of services sectors, such as banking, and processes involving the transformation of basic materials into complex products, such as refrigerators. The application of the new method to less complex products, such as individual building materials or components, is likely to be more successful than the residential building demonstration.
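To make the idea of direct energy paths concrete, a toy sketch (with made-up three-sector coefficients, not the Australian input-output data): the total energy intensity is epsilon = e(I - A)^-1 = e + eA + eA^2 + ..., and each product of a direct energy intensity with a chain of technical coefficients is one mutually exclusive direct energy path.

```python
import numpy as np
from itertools import product

# Toy three-sector economy; all numbers are made up for illustration.
sectors = ["construction", "steel", "cement"]
A = np.array([            # technical coefficients: inputs per unit of output
    [0.00, 0.00, 0.00],
    [0.10, 0.05, 0.02],
    [0.08, 0.01, 0.03],
])
e = np.array([0.5, 20.0, 15.0])   # direct energy intensities of each sector

target = 0  # energy embodied per unit of 'construction' output
total = (e @ np.linalg.inv(np.eye(3) - A))[target]
print(f"total energy intensity of {sectors[target]}: {total:.3f}")

# Enumerate direct energy paths up to two upstream transactions:
# path value = direct energy intensity of the upstream sector x chain of coefficients.
paths = [((target,), e[target])]                        # zero-transaction path
for j in range(3):
    paths.append(((j, target), e[j] * A[j, target]))    # one transaction upstream
for j, k in product(range(3), repeat=2):
    paths.append(((k, j, target), e[k] * A[k, j] * A[j, target]))

for chain, value in sorted(paths, key=lambda p: -p[1])[:5]:
    print(" -> ".join(sectors[i] for i in chain), f"{value:.3f}",
          f"({100 * value / total:.1f}% of total)")
```

In this toy framing, each term roughly corresponds to a path whose components (a quantity-like chain of coefficients and a direct energy intensity) can in principle be replaced separately with process analysis data, as the thesis describes.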

Relevance: 30.00%

Abstract:

To evaluate calcium chloride coagulation technology, two kinds of raw natural rubber samples were produced, using calcium chloride and acetic acid respectively. The plasticity retention index (PRI), thermal degradation process, thermal degradation kinetics and differential thermal analysis of the two samples were studied. Furthermore, the thermal degradation activation energy, pre-exponential factor and rate constant were calculated. The results show that natural rubber produced with calcium chloride possesses good mechanical properties but poorer thermo-stability compared with natural rubber produced with acetic acid.
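The abstract does not state how the activation energy, pre-exponential factor and rate constants were obtained; a minimal sketch of the common Arrhenius route (fitting ln k against 1/T, with made-up rate constants rather than the measured values) is:

```python
import numpy as np

R = 8.314  # J/(mol K), universal gas constant

# Made-up degradation rate constants at several temperatures (not the measured values).
T = np.array([573.0, 593.0, 613.0, 633.0])      # K
k = np.array([1.2e-4, 3.1e-4, 7.5e-4, 1.7e-3])  # 1/s

# Arrhenius relation: ln k = ln A - Ea / (R * T); fit a straight line in (1/T, ln k).
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R          # activation energy, J/mol
A = np.exp(intercept)    # pre-exponential factor, 1/s

print(f"Ea = {Ea / 1000:.1f} kJ/mol, A = {A:.2e} 1/s")
```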

Relevance: 30.00%

Abstract:

In this paper, we introduce a single-walled boron nitride nanotube (SWBNNT)-based cantilever biosensor and investigate its bending deformation. The BNNT-based cantilever is modelled by assuming that the surface of the cantilever beam is coated with antibody molecules. We consider two main mechanisms for the mechanical deformation of the BNNT beam: the first is the differential surface stress produced by the binding of biomolecules onto its surface, and the second is the charge released from the biomolecular interaction. In addition, other parameters, including the length of the beam, the variation of the beam's location and the chiralities of the BNNT, have been taken into consideration in designing the cantilever biosensor. The computed results are in good agreement with the well-known electrostatic equations that govern the deformation of the cantilever.
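The abstract does not give the deflection relation used; as a point of reference, a minimal sketch of the Stoney-type expression commonly used for surface-stress-driven cantilever sensors, delta = 3*sigma_s*(1 - nu)*L^2 / (E*t^2), with parameter values that are assumptions rather than the paper's BNNT model:

```python
# Stoney-type estimate of cantilever tip deflection from a differential surface
# stress. All numbers below are illustrative assumptions, not the paper's values.
def tip_deflection(surface_stress, length, thickness, youngs_modulus, poisson_ratio):
    """Tip deflection: delta = 3*sigma_s*(1 - nu)*L**2 / (E*t**2), SI units."""
    return 3.0 * surface_stress * (1.0 - poisson_ratio) * length ** 2 / (
        youngs_modulus * thickness ** 2
    )

delta = tip_deflection(
    surface_stress=5e-3,    # N/m, assumed differential surface stress from binding
    length=2e-6,            # m, assumed beam length
    thickness=2e-9,         # m, assumed effective wall thickness
    youngs_modulus=0.9e12,  # Pa, order-of-magnitude value often quoted for BNNTs
    poisson_ratio=0.2,      # assumed
)
print(f"estimated tip deflection: {delta * 1e9:.2f} nm")
```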