879 results for "multi-variate analysis"
Abstract:
This work explores the multi-element capabilities of inductively coupled plasma mass spectrometry with collision/reaction cell technology (CCT-ICP-MS) for the simultaneous determination of both spectrally interfered and non-interfered nuclides in wine samples using a single set of experimental conditions. The influence of the cell gas type (i.e. He, He+H2 and He+NH3), cell gas flow rate and sample pre-treatment (i.e. water dilution or acid digestion) on the background-equivalent concentration (BEC) of several nuclides covering the mass range from 7 to 238 u has been studied. Results obtained in this work show that operating the collision/reaction cell at a compromise cell gas flow rate (i.e. 4 mL min^-1) improves BEC values for interfered nuclides without a significant effect on the BECs for non-interfered nuclides, with the exception of the light elements Li and Be. Among the different cell gas mixtures tested, the use of He or He+H2 is preferred over He+NH3 because NH3 generates new spectral interferences. No significant influence of the sample pre-treatment methodology (i.e. dilution or digestion) on the multi-element capabilities of CCT-ICP-MS in the context of simultaneous analysis of interfered and non-interfered nuclides was observed. Nonetheless, sample dilution should be kept to a minimum to ensure that light nuclides (e.g. Li and Be) can be quantified in wine. Finally, a direct 5-fold aqueous dilution is recommended for the simultaneous trace and ultra-trace determination of spectrally interfered and non-interfered elements in wine by means of CCT-ICP-MS. The use of the CCT is mandatory for interference-free ultra-trace determination of Ti and Cr. Only Be could not be determined when using the CCT, due to a deteriorated limit of detection compared with conventional ICP-MS.
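For reference, the background-equivalent concentration used as the figure of merit in this abstract is conventionally defined from the blank signal and the analyte sensitivity; a minimal formulation (symbols are assumptions, not taken from the paper) is:

```latex
% Conventional definition of the background-equivalent concentration (BEC);
% I_blank is the blank (background) count rate and S the sensitivity
% (count rate per unit concentration) measured for the nuclide of interest.
\[
  \mathrm{BEC} = \frac{I_{\text{blank}}}{S},
  \qquad
  S = \frac{I_{\text{standard}} - I_{\text{blank}}}{c_{\text{standard}}}
\]
```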
Abstract:
The aim of this research work was primarily to examine the relevance of patient parameters, ward structures, procedures and practices to the potential hazards of wound cross-infection and nasal colonisation with multiple resistant strains of Staphylococcus aureus, which it is thought might provide a useful indication of a patient's general susceptibility to wound infection. Information from a large cross-sectional survey involving 12,000 patients from some 41 hospitals and 375 wards was collected over a five-year period from 1967 to 1972, and its validity was checked before any subsequent analysis was carried out. Many environmental factors and procedures that had previously been thought (but never conclusively proved) to influence wound infection or nasal colonisation rates were assessed and subsequently dismissed as not significant, provided that the standard of the current range of practices and procedures is maintained and not allowed to deteriorate. Retrospective analysis revealed that the probability of wound infection was influenced by the patient's age, duration of pre-operative hospitalisation, sex, type of wound, presence and type of drain, number of patients in the ward, and other special risk factors, whilst nasal colonisation was found to be influenced by the patient's age, total duration of hospitalisation, sex, antibiotics, proportion of occupied beds in the ward, average distance between bed centres, and special risk factors. A multi-variate regression analysis technique was used to develop statistical models consisting of the patient and environmental factors found to have a significant influence on the risks of wound infection and nasal colonisation. A relationship between wound infection and nasal colonisation was then established, and this led to the development of a more advanced model for predicting wound infections that takes advantage of the additional knowledge of the patient's state of nasal colonisation prior to operation.
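As an illustration of the kind of multi-variate model described here, the sketch below fits a logistic regression of a binary infection outcome on a handful of patient and ward covariates; the variable names, data and coefficients are hypothetical placeholders, not the study's actual factors or results.

```python
# Hypothetical sketch of a multi-variate regression for a binary infection outcome.
# Covariates and data are illustrative placeholders only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(55, 15, n),        # age (years)
    rng.integers(0, 30, n),       # pre-operative stay (days)
    rng.integers(0, 2, n),        # sex (0/1)
    rng.integers(0, 2, n),        # drain present (0/1)
])
# simulate an outcome loosely related to the covariates
logit_p = -4 + 0.03 * X[:, 0] + 0.05 * X[:, 1] + 0.3 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(model.summary())
```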
Abstract:
Compression ignition (CI) engine design is subject to many constraints, which presents a multi-criteria optimisation problem that the engine researcher must solve. In particular, the modern CI engine must not only be efficient, but must also deliver low gaseous, particulate and life-cycle greenhouse gas emissions so that its impact on urban air quality, human health and global warming is minimised. Consequently, this study undertakes a multi-criteria analysis which seeks to identify alternative fuels, injection technologies and combustion strategies that could potentially satisfy these CI engine design constraints. Three datasets are analysed with the Preference Ranking Organization Method for Enrichment Evaluations and Geometrical Analysis for Interactive Aid (PROMETHEE-GAIA) algorithm to explore how (1) an ethanol fumigation system, (2) alternative fuels (20 % biodiesel and synthetic diesel) and alternative injection technologies (mechanical direct injection and common rail injection), and (3) various biodiesel fuels made from three feedstocks (i.e. soy, tallow and canola) tested at several blend percentages (20-100 %) affect the resulting emissions and efficiency profile of the various test engines. The results show that moderate ethanol substitutions (~20 % by energy) at moderate load, high-percentage soy blends (60-100 %), and alternative fuels (biodiesel and synthetic diesel) provide an efficiency and emissions profile that yields the most “preferred” solutions to this multi-criteria engine design problem. Further research is, however, required to reduce Reactive Oxygen Species (ROS) emissions with alternative fuels, and to deliver technologies that do not significantly reduce the median diameter of particle emissions.
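A minimal sketch of the PROMETHEE II outranking logic referenced above is given below, using the simple "usual" preference function and made-up alternatives and weights; it is not the study's dataset nor the full PROMETHEE-GAIA implementation.

```python
# Minimal PROMETHEE II sketch with a "usual" preference function.
# Alternatives, criteria and weights are illustrative placeholders.
import numpy as np

# rows = alternatives, columns = criteria (already oriented so larger is better)
scores = np.array([
    [0.42, 0.10, 0.75],   # e.g. diesel baseline
    [0.45, 0.20, 0.60],   # e.g. ethanol fumigation
    [0.40, 0.35, 0.55],   # e.g. biodiesel blend
])
weights = np.array([0.5, 0.3, 0.2])

n = scores.shape[0]
phi_plus = np.zeros(n)
phi_minus = np.zeros(n)
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        # usual criterion: preference is 1 where a strictly beats b, else 0
        pref_ab = (scores[a] > scores[b]).astype(float) @ weights
        pref_ba = (scores[b] > scores[a]).astype(float) @ weights
        phi_plus[a] += pref_ab / (n - 1)     # positive (leaving) flow
        phi_minus[a] += pref_ba / (n - 1)    # negative (entering) flow

net_flow = phi_plus - phi_minus              # PROMETHEE II net outranking flow
print(np.argsort(net_flow)[::-1])            # ranking of alternatives, best first
```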
Abstract:
Capability development is at the heart of creating competitive advantage. This thesis conceptualises Strategic Capability Development as a renewal of an organisation's existing capability in line with the requirements of the market. It followed and compared four product innovation projects within Iran Khodro Company (IKCO), an exemplar of capability development within the Iranian auto industry. The findings show that the maturation of strategic capability at the organisational level occurred through a sequence of product innovation projects and by dynamically shaping the learning and knowledge integration processes in accordance with the emergence of the new structure within the industry. Accordingly, Strategic Capability Development is conceptualised in an interpretive model. Such findings are useful for the development of an explanatory model and a practical capability development framework for managing learning and knowledge across different product innovation projects.
Abstract:
Introduction: Research on the ability of self-report assessment tools to predict crash outcomes has produced mixed results. As a result, researchers are now beginning to explore whether examining culpability for crash involvement can improve this predictive efficacy. This study reports on the application of the Manchester Driver Behaviour Questionnaire (DBQ) to predict crash involvement among a sample of general Queensland motorists and, in particular, whether including a crash culpability variable improves predictive outcomes. Surveys were completed by 249 general motorists online or via a pen-and-paper format. Results: Consistent with previous research, a factor analysis revealed a three-factor solution for the DBQ accounting for 40.5% of the overall variance. However, multivariate analysis using the DBQ revealed little ability of the tool to predict crash involvement. Rather, exposure to the road was found to be predictive of crashes. An analysis of culpability revealed that 88 participants reported being “at fault” for their most recent crash. Corresponding between-groups and multivariate analyses that included the culpability variable did not result in an improvement in identifying those involved in crashes. Conclusions: While preliminary, the results suggest that including crash culpability may not necessarily improve predictive outcomes in self-report methodologies, although it is noted that the current small sample size may also have had a deleterious effect on this endeavour. This paper also outlines the need for future research (which also includes official crash and offence outcomes) to better understand the actual contribution of self-report assessment tools, and culpability variables, to understanding and improving road safety.
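To make the factor-extraction step concrete, the sketch below pulls a three-factor solution from simulated questionnaire responses with scikit-learn; the item data are randomly generated placeholders, not actual DBQ responses.

```python
# Illustrative three-factor extraction from simulated questionnaire items.
# Data are random placeholders, not DBQ responses.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
latent = rng.normal(size=(249, 3))                 # three underlying factors
loadings = rng.uniform(0.3, 0.9, size=(3, 20))     # 20 questionnaire items
items = latent @ loadings + rng.normal(scale=0.5, size=(249, 20))

fa = FactorAnalysis(n_components=3, rotation="varimax")
factor_scores = fa.fit_transform(items)            # per-respondent factor scores
print(fa.components_.shape)                        # (3, 20) rotated loading matrix
```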
Abstract:
This paper presents a performance-based optimisation approach for conducting trade-off analysis between safety (roads) and condition (bridges and roads). Safety was based on potential for improvement (PFI). Road condition was based on surface distresses, and bridge condition was based on apparent age per subcomponent. The analysis uses a non-monetised optimisation that expanded upon classical Pareto optimality by observing performance across time. It was found that the achievement of good results was conditioned by the availability of early-age treatments and impacted by a frontier effect preventing the optimisation algorithm from realising the long-term benefits of deploying actions when approaching the end of the analysis period. A disaggregated bridge condition index proved capable of improving levels of service in bridge subcomponents.
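The non-monetised trade-off between safety and condition rests on Pareto dominance; a small sketch of a dominance filter over candidate programmes (with made-up objective values) is shown below.

```python
# Minimal Pareto-front filter: keep programmes that no other programme dominates.
# Objective values are illustrative; larger is better for both objectives here.
import numpy as np

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Return a boolean mask of non-dominated rows (maximisation)."""
    n = points.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(points[j] >= points[i]) and np.any(points[j] > points[i]):
                keep[i] = False
                break
    return keep

# columns: safety improvement (PFI reduction), network condition score
candidates = np.array([[0.2, 0.9], [0.5, 0.7], [0.4, 0.4], [0.6, 0.6]])
print(candidates[pareto_front(candidates)])   # the dominated [0.4, 0.4] is dropped
```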
Abstract:
Several genetic variants are thought to influence white matter (WM) integrity, measured with diffusion tensor imaging (DTI). Voxel-based methods can test genetic associations, but heavy multiple-comparisons corrections are required to adjust for searching the whole brain and for all genetic variants analyzed. Thus, genetic associations are hard to detect even in large studies. Using a recently developed multi-SNP analysis, we examined the joint predictive power of a group of 18 cholesterol-related single nucleotide polymorphisms (SNPs) on WM integrity, measured by fractional anisotropy. To boost power, we limited the analysis to brain voxels that showed significant associations with total serum cholesterol levels. From this space, we identified two genes with effects that replicated in individual voxel-wise analyses of the whole brain. Multivariate analyses of genetic variants over a reduced anatomical search space may help to identify the SNPs with the strongest effects on the brain from a broad panel of genes.
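A toy version of the joint multi-SNP test described here is sketched below: at a single voxel, fractional anisotropy is regressed on a panel of SNP dosages simultaneously rather than one SNP at a time. All data are simulated placeholders, not study data.

```python
# Toy joint multi-SNP association at one voxel: regress FA on all SNP
# dosages together and test the panel jointly. Simulated placeholder data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_subjects, n_snps = 300, 18
snps = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)   # 0/1/2 dosages
fa = 0.45 + 0.01 * snps[:, 0] - 0.008 * snps[:, 5] + rng.normal(0, 0.03, n_subjects)

joint = sm.OLS(fa, sm.add_constant(snps)).fit()
print(joint.f_pvalue)    # omnibus test of the whole SNP panel at this voxel
```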
Abstract:
Ecosystem-based management requires the integration of various types of assessment indicators, and understanding stakeholders' information preferences is important in selecting the indicators that best support management and policy. In democratic, participatory management institutions, the preferences of both decision-makers and the general public may matter. This paper presents a multi-criteria analysis aimed at quantifying the relative importance to these groups of the economic, ecological and socio-economic indicators usually considered when managing ecosystem services in a coastal development context. The Analytic Hierarchy Process (AHP) is applied within two nationwide surveys in Australia, and the preferences of both the general public and decision-makers for these indicators are elicited and compared. Results show that, on average across both groups, the priority in assessing a generic coastal development project is the ecological assessment of its impacts on marine biodiversity. Ecological assessment indicators are preferred overall to both economic and socio-economic indicators regardless of the nature of the impacts studied. These results are observed for a significantly larger proportion of decision-maker respondents than general public respondents, which questions the extent to which the general public's preferences are well reflected in decision-making processes.
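For readers unfamiliar with the AHP step, the sketch below derives priority weights from a pairwise comparison matrix via its principal eigenvector; the 3x3 matrix comparing ecological, economic and socio-economic indicator groups is illustrative only, not a survey result.

```python
# AHP priority weights from a Saaty-style pairwise comparison matrix
# (principal eigenvector method). Comparison values are illustrative.
import numpy as np

# ecological vs economic vs socio-economic (reciprocal matrix)
A = np.array([
    [1.0, 3.0, 4.0],
    [1/3, 1.0, 2.0],
    [1/4, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()                       # normalised priority weights

consistency_index = (eigvals.real[principal] - 3) / (3 - 1)
print(weights, consistency_index)
```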
Abstract:
Different seismic hazard components pertaining to Bangalore city, namely soil overburden thickness, effective shear-wave velocity, factor of safety against liquefaction potential, peak ground acceleration at the seismic bedrock, site response in terms of amplification factor, and predominant frequency, have been individually evaluated. The overburden thickness distribution, predominantly in the range of 5-10 m across the city, has been estimated through a sub-surface model built from geotechnical bore-log data. The effective shear-wave velocity distribution, established through a Multichannel Analysis of Surface Waves (MASW) survey and subsequent data interpretation through dispersion analysis, exhibits site class D (180-360 m/s), site class C (360-760 m/s), and site class B (760-1500 m/s) in compliance with the National Earthquake Hazards Reduction Program (NEHRP) nomenclature. The peak ground acceleration has been estimated through a deterministic approach, based on a maximum credible earthquake of M_W = 5.1 assumed to nucleate from the closest active seismic source (the Mandya-Channapatna-Bangalore Lineament). The 1-D site response factor, computed at each borehole through geotechnical analysis across the study region, ranges from an amplification of about one to as high as four. Correspondingly, the predominant frequency estimated from the Fourier spectrum is found to lie mostly in the range of 3.5-5.0 Hz. The soil liquefaction hazard has been assessed in terms of the factor of safety against liquefaction potential using standard penetration test data and the underlying soil properties, which indicates that 90% of the study region is non-liquefiable. The spatial distributions of the different hazard entities are placed on a GIS platform and subsequently integrated through the analytic hierarchy process. The resulting deterministic hazard map shows high hazard coverage in the western areas. The microzonation thus achieved is envisaged as a first-cut assessment of site-specific hazard, laying out a framework for higher-order seismic microzonation as well as a useful decision-support tool in overall land-use planning and hazard management. (C) 2010 Elsevier Ltd. All rights reserved.
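As a concrete illustration of the NEHRP classes mentioned here, the snippet below bins an effective shear-wave velocity into site classes B-D using the velocity ranges quoted in the abstract; it is a deliberate simplification of the full Vs30 procedure.

```python
# Assign a NEHRP site class from an effective shear-wave velocity (m/s),
# using the class boundaries quoted in the abstract (simplified Vs30 logic).
def nehrp_site_class(vs: float) -> str:
    if 760 <= vs <= 1500:
        return "B"
    if 360 <= vs < 760:
        return "C"
    if 180 <= vs < 360:
        return "D"
    return "outside B-D range"

for vs in (250, 450, 900):
    print(vs, nehrp_site_class(vs))
```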
Abstract:
In the field of vibration-based damage detection of concrete structures, efficient damage models are needed to better understand changes in the vibration properties of cracked structures. These models should quantitatively replicate the damage mechanisms in concrete and be easily usable as damage detection tools. In this paper, the flexural cracking behaviour of plain concrete prisms subject to monotonic and cyclic loading regimes under displacement control is tested experimentally and modelled numerically. Four-point bending tests on simply supported un-notched prisms are conducted, where the cracking process is monitored using a digital image correlation system. A numerical model, with a single crack at midspan, is presented in which the cracked zone is modelled using the fictitious crack approach and the parts outside that zone are treated in a linear-elastic manner. The model considers crack initiation, growth and closure by adopting cyclic constitutive laws. A multi-variate Newton-Raphson iterative solver is used to solve the non-linear equations to ensure equilibrium and compatibility at the interface of the cracked zone. The numerical results agree well with the experiments for both loading scenarios. The model shows good predictions of the degradation of stiffness with increasing load. It also approximates the crack mouth opening displacement well when compared with the experimental data from the digital image correlation system. The model is found to be computationally efficient, running a full cyclic-loading analysis in less than 2 min, and it can therefore be used within the damage detection process. © 2013 Elsevier Ltd.
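The multi-variate Newton-Raphson step used to enforce equilibrium and compatibility can be sketched generically as below; the residual function is a stand-in placeholder, not the paper's crack-interface equations.

```python
# Generic multi-variate Newton-Raphson solver with a finite-difference Jacobian.
# The residual function is a placeholder, not the paper's interface equations.
import numpy as np

def newton_raphson(residual, x0, tol=1e-10, max_iter=50, eps=1e-8):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x
        # approximate the Jacobian column by column with forward differences
        J = np.empty((len(r), len(x)))
        for j in range(len(x)):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual(x + dx) - r) / eps
        x = x - np.linalg.solve(J, r)          # Newton update
    return x

# placeholder nonlinear system with a root at (1, 2)
f = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
print(newton_raphson(f, [1.5, 1.5]))
```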
Abstract:
At present, the vast majority of Computer-Aided Engineering (CAE) analysis calculations for microelectronic and microsystems technologies are undertaken using software tools that focus on single aspects of the physics taking place. For example, the design engineer may use one code to predict the airflow and thermal behaviour of an electronic package, another code to predict the stress in solder joints, and yet another code to predict electromagnetic radiation throughout the system. The reason for this focus of mesh-based codes on separate parts of the governing physics lies essentially in the numerical technologies used to solve the partial differential equations, combined with the subsequent heritage structure of the software codes. Using different software tools, each of which requires model building and meshing, leads to a large investment in time, and hence cost, to undertake each of the simulations. During the last ten years there have been significant developments in the modelling community around multi-physics analysis. These developments are being followed by many of the code vendors, who are now providing multi-physics capabilities in their software tools. This paper illustrates the current capabilities of multi-physics technology and highlights some of the future challenges.
Abstract:
In this paper, taking advantage of the inclusion of a special module on material deprivation in EU-SILC 2009, we provide a comparative analysis of patterns of deprivation. Our analysis identifies six relatively distinct dimensions of deprivation with generally satisfactory overall levels of reliability and mean levels of reliability across countries. Multi-level analysis based on 28 European countries reveals systematic variation in the importance of within- and between-country variation for a range of deprivation dimensions. The basic deprivation dimension is the sole dimension to display a graduated pattern of variation across countries. It also shows the highest correlations with national and household income, the remaining deprivation dimensions and economic stress. It comes closest to capturing an underlying dimension of generalized deprivation that can provide the basis for a comparative European analysis of exclusion from customary standards of living. A multilevel analysis revealed that a range of household characteristics and household reference person socio-economic factors were related to basic deprivation, and controlling for contextual differences in such factors allowed us to account for substantial proportions of both within- and between-country variance. The addition of macro-economic factors relating to average levels of disposable income and income inequality contributed relatively little further explanatory power. Further analysis revealed a set of significant interactions between micro socio-economic attributes and country-level gross national disposable income per capita: the impact of socio-economic differentiation was significantly greater where average income levels were lower, or, in other words, the impact of the latter was greater for more disadvantaged socio-economic groups. Our analysis supports the suggestion that an emphasis on the primary role of income inequality to the neglect of differences in absolute levels of income may be misleading in important respects. (C) 2012 International Sociological Association Research Committee 28 on Social Stratification and Mobility. Published by Elsevier Ltd. All rights reserved.
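The multilevel (random-intercept) structure described here can be written compactly; a hedged sketch using statsmodels with simulated households nested in countries is shown below, with placeholder variable names and data.

```python
# Random-intercept multilevel model: households (level 1) nested in countries
# (level 2). Data and variable names are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
countries = np.repeat(np.arange(28), 200)            # 28 countries x 200 households
country_effect = rng.normal(0, 0.5, 28)[countries]   # between-country variation
income = rng.normal(10, 1, len(countries))           # log household income
deprivation = 2.0 - 0.3 * income + country_effect + rng.normal(0, 0.8, len(countries))

df = pd.DataFrame({"deprivation": deprivation, "income": income, "country": countries})
model = smf.mixedlm("deprivation ~ income", df, groups=df["country"]).fit()
print(model.summary())   # fixed effect of income plus country-level variance component
```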
Abstract:
Here we present the first high-resolution multi-proxy analysis of a rich fen in the central-eastern European lowlands. The fen is located in the young glacial landscape of the Stążki river valley. We investigated the fen's development pathways, asking three main questions: (i) what was the pattern and timing of the peatland's vegetation succession; (ii) how did land use and climate affect the succession in the fen ecosystem; and (iii) to what degree does the reconstructed hydrology for this site correlate with those of other sites in the region in terms of past climate change? Several stages of fen history were determined, beginning with the lake-to-fen transition ca. AD 700. Brown mosses dominated the sampling site from this period to the present. No human impact was found to have occurred until ca. AD 1700, when the first forest cutting began. Around AD 1890 a more significant disturbance took place: this date marks the clear-cutting of forests and a dramatic opening of the landscape. Deforestation changed the hydrology and chemistry of the mire, which was revealed by a shift in the local plant and testate amoebae communities. We also compared a potential climatic signal recorded in the peat profile before AD 1700 with other sites from the region. © 2013 John Wiley & Sons, Ltd.
Abstract:
This paper outlines the importance of robust interface management for facilitating finite element analysis workflows. Topological equivalences between analysis model representations are identified and maintained in an editable and accessible manner. The model and its interfaces are automatically represented using an analysis-specific cellular decomposition of the design space. Rework of boundary conditions following changes to the design geometry or the analysis idealization can be minimized by tracking interface dependencies. Utilizing this information with the Simulation Intent specified by an analyst, automated decisions can be made to process the interface information required to rebuild analysis models. Through this work automated boundary condition application is realized within multi-component, multi-resolution and multi-fidelity analysis workflows.
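As a schematic of the interface-dependency tracking idea, the sketch below keeps a mapping from analysis interfaces to the boundary conditions that reference them, so that only conditions attached to changed interfaces need rework after a geometry or idealisation change; the classes and names are hypothetical illustrations, not the paper's implementation.

```python
# Schematic dependency tracking between model interfaces and boundary conditions.
# Classes and identifiers are hypothetical, not the paper's system.
from collections import defaultdict
from typing import Set

class InterfaceRegistry:
    def __init__(self) -> None:
        self._bcs_by_interface = defaultdict(set)

    def attach(self, interface_id: str, boundary_condition: str) -> None:
        """Record that a boundary condition depends on an interface."""
        self._bcs_by_interface[interface_id].add(boundary_condition)

    def invalidated_by(self, changed_interfaces: Set[str]) -> Set[str]:
        """Boundary conditions needing rework after the given interfaces change."""
        return set().union(*(self._bcs_by_interface[i] for i in changed_interfaces))

reg = InterfaceRegistry()
reg.attach("face_12", "pressure_load")
reg.attach("face_12", "contact_pair_3")
reg.attach("edge_7", "fixed_support")
print(reg.invalidated_by({"face_12"}))   # only these BCs need re-application
```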