940 results for correlated fading


Abstract:

The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by the increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and abstraction. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population; subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). 
Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to 9.8 to 10.16, with time-segmented summary data (dataset F) MR being 9.8 and raw time-series summary data (dataset A) being 9.92. However, for all time-series-only datasets, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-alone datasets, but models derived from these subsets are of one leaf only. MR values are consistent with class distribution in the subset folds evaluated in the n-fold cross-validation method. For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09. 
The models based on counts of outliers and counts of data points outside normal range (Dataset RF_E) and derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (Dataset RF_G) perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR (10.1 and 10.28), while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy increase achieved by addition of risk factor variables to time-series variable based models is significant. The addition of time-series derived variables to models based on risk factor variables alone is associated with a trend to improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared to use of risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used as model input. 
In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values outside the accepted normal range, is associated with some improvement in model performance.
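
The evaluation protocol described above (majority-class under-sampling, then misclassification rate, Kappa and AUC) can be sketched as follows; the dataset, features and the logistic regression model here are synthetic stand-ins for illustration, not the thesis data or its WEKA pipeline:

```python
# Sketch: under-sample the majority class, then score a classifier
# with misclassification rate (MR), Cohen's kappa and AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Imbalanced synthetic data: ~10% positive class.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

# Under-sample the majority class to match the minority class size.
maj, mino = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
keep = np.concatenate([rng.choice(maj, size=len(mino), replace=False), mino])
Xb, yb = X[keep], y[keep]

X_tr, X_te, y_tr, y_te = train_test_split(Xb, yb, stratify=yb, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

mr = 100 * np.mean(pred != y_te)                 # misclassification rate (%)
kappa = cohen_kappa_score(y_te, pred)            # agreement beyond chance
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"MR={mr:.1f}%  kappa={kappa:.2f}  AUC={auc:.2f}")
```

Because the under-sampled classes are balanced, MR is no longer dominated by the majority class, which is exactly why the thesis pairs it with kappa and AUC.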

Abstract:

This thesis aimed to investigate the way in which distance runners modulate their speed in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials respectively. A non-differential GPS receiver provided speed data by Doppler Shift and change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were found to be closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001, Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ± 0.1 m.s-1). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001, Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49m, while 86.5% of static points were within 1.5m of the actual geodetic point (mean error: 1.08 ± 0.34m, range 0.69-2.10m). 
Non-differential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost and ease of use, the use of a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners’ individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections. Group level speed was highly predicted using a modified gradient factor (r2 = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners’ speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds. 
Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492m) was completed without pacing. Goals for the Intervention trial were based on findings from study two using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption with only one runner showing a change of more than 10%. Group level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy which was gauged by a low Root Mean Square error across subsections and gradients. 
Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall time. This suggests that for some runners the strategy of varying speeds systematically to account for gradients and transitions may benefit race performances on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was found to be best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners’ speeds only on uphills, speed on the level was systematically influenced by preceding gradients, while there was a much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption. Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners’ range of uphill-downhill speeds and was able to improve performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will however require further investigation to further improve the effectiveness of the suggested strategy.
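
The "change in GPS position over time" speed estimate used in study one can be illustrated with a short sketch: speed from successive latitude/longitude fixes via the haversine great-circle distance. The fixes and coordinates below are invented for illustration, not data from the trials:

```python
# Sketch: estimate running speed from consecutive GPS fixes
# (delta position / delta time) using the haversine formula.
import math

def haversine_m(lat1, lon1, lat2, lon2, r=6371000.0):
    """Great-circle distance in metres between two lat/lon fixes (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speeds_from_fixes(fixes):
    """fixes: list of (t_seconds, lat, lon); returns per-interval speeds in m/s."""
    out = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        out.append(haversine_m(la0, lo0, la1, lo1) / (t1 - t0))
    return out

# Hypothetical runner moving roughly north at ~4.5 m/s, one fix per second
# (1 degree of latitude is ~111,320 m).
fixes = [(t, -27.4770 + t * 4.5 / 111_320, 153.0280) for t in range(5)]
print(speeds_from_fixes(fixes))  # each interval ~4.5 m/s
```

Doppler-shift speed, the more accurate method reported above, comes directly from the receiver rather than from differencing positions, which is why it is less sensitive to position noise.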

Abstract:

Hydrogel polymers are used for the manufacture of soft (or disposable) contact lenses worldwide today, but have a tendency to dehydrate on the eye. In vitro methods that can probe the potential for a given hydrogel polymer to dehydrate in vivo are much sought after. Nuclear magnetic resonance (NMR) has been shown to be effective in characterising water mobility and binding in similar systems (Barbieri, Quaglia et al., 1998, Larsen, Huff et al., 1990, Peschier, Bouwstra et al., 1993), predominantly through measurement of the spin-lattice relaxation time (T1), the spin-spin relaxation time (T2) and the water diffusion coefficient (D). The aim of this work was to use NMR to quantify the molecular behaviour of water in a series of commercially available contact lens hydrogels, and relate these measurements to the binding and mobility of the water, and ultimately the potential for the hydrogel to dehydrate. As a preliminary study, in vitro evaporation rates were measured for a set of commercial contact lens hydrogels. Following this, comprehensive measurement of the temperature and water content dependencies of T1, T2 and D was performed for a series of commercial hydrogels that spanned the spectrum of equilibrium water content (EWC) and common compositions of contact lenses that are manufactured today. To quantify material differences, the data were then modelled based on theory that had been used for similar systems in the literature (Walker, Balmer et al., 1989, Hills, Takacs et al., 1989). The differences were related to differences in water binding and mobility. The evaporative results suggested that the EWC of the material was important in determining a material's potential to dehydrate in this way. Similarly, the NMR water self-diffusion coefficient was also found to be largely (if not wholly) determined by the EWC. 
A specific binding model confirmed that the EWC was the dominant factor in determining the diffusive behaviour, but also suggested that subtle differences existed between the materials used, based on their EWC. However, an alternative modified free volume model suggested that only the current water content of the material was important in determining the diffusive behaviour, and not the equilibrium water content. It was shown that T2 relaxation was dominated by chemical exchange between water and exchangeable polymer protons for materials that contained exchangeable polymer protons. The data were analysed using a proton exchange model, and the results were again reasonably correlated with EWC. Specifically, it was found that the average water mobility increased with increasing EWC, approaching that of free water. The T1 relaxation was also shown to be reasonably well described by the same model. The main conclusion that can be drawn from this work is that the hydrogel EWC is an important parameter, which largely determines the behaviour of water in the gel. Higher EWC results in a hydrogel with water that behaves more like bulk water on average, or is less strongly 'bound' on average, compared with a lower EWC material. Based on the set of materials used, significant differences due to composition (for materials of the same or similar water content) could not be found. Similar studies could be used in the future to highlight hydrogels that deviate significantly from this 'average' behaviour, and may therefore have the least/greatest potential to dehydrate on the eye.
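
The free-volume style of analysis mentioned above can be sketched as a least-squares fit of a Yasuda-type model, D = D0·exp(-B·(1/W - 1)), to diffusion coefficients measured at different water contents W. The water contents, parameter values and noise level below are hypothetical, not measurements from this work:

```python
# Sketch: fit a free-volume-type diffusion model to synthetic D(W) data.
import numpy as np
from scipy.optimize import curve_fit

def free_volume(W, D0, B):
    """Yasuda-type free-volume model: D0 is the bulk-like limit at W -> 1."""
    return D0 * np.exp(-B * (1.0 / W - 1.0))

W = np.array([0.38, 0.46, 0.55, 0.62, 0.70])      # water mass fraction (made up)
D_true = free_volume(W, 2.3e-9, 0.9)               # m^2/s, hypothetical params
D_obs = D_true * (1 + 0.02 * np.random.default_rng(1).standard_normal(W.size))

(D0_fit, B_fit), _ = curve_fit(free_volume, W, D_obs, p0=[2e-9, 1.0])
print(f"D0 = {D0_fit:.2e} m^2/s, B = {B_fit:.2f}")
```

A fit like this depends only on the current water content W, which is the distinction the modified free-volume model draws against the EWC-based binding model.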

Abstract:

Monotony has been identified as a contributing factor to road crashes. Drivers’ ability to react to unpredictable events deteriorates when they are exposed to highly predictable and uneventful driving tasks, such as driving on Australian rural roads, many of which are monotonous by nature. Highway design in particular attempts to reduce the driver’s task to a merely lane-keeping one. Such a task provides little stimulation and is monotonous, thus affecting the driver’s attention, which is no longer directed towards the road. Inattention contributes to crashes, especially for professional drivers. Monotony has been studied mainly from the endogenous perspective (for instance through sleep deprivation) without taking into account the influence of the task itself (repetitiveness) or the surrounding environment. The aim and novelty of this thesis lie in developing a methodology (mathematical framework) able to predict driver lapses of vigilance under monotonous environments in real time, using endogenous and exogenous data collected from the driver, the vehicle and the environment. Existing approaches have tended to neglect the specificity of task monotony, leaving the question of the existence of a “monotonous state” unanswered. Furthermore, the issue of detecting vigilance decrement before it occurs (prediction) has not been investigated in the literature, let alone in real time. A multidisciplinary approach is necessary to explain how vigilance evolves in monotonous conditions. Such an approach needs to draw on psychology, physiology, road safety, computer science and mathematics. The systemic approach proposed in this study is unique in its predictive dimension and allows us to define, in real time, the impacts of monotony on the driver’s ability to drive. The methodology is based on mathematical models relating data available in vehicles to the vigilance state of the driver during a monotonous driving task in various environments. 
The model integrates different data measuring the driver’s endogenous and exogenous factors (related to the driver, the vehicle and the surrounding environment). Electroencephalography (EEG) is used to measure driver vigilance since it has been shown to be the most reliable real-time methodology for assessing vigilance level. A variety of mathematical models could provide a framework for prediction; to find the most accurate, a collection of mathematical models was trained in this thesis and the most reliable identified. The methodology developed in this research is first applied to a theoretically sound measure of sustained attention, the Sustained Attention to Response Task (SART), as adapted by Michael (2010), Michael and Meuter (2006, 2007). This experiment induced impairments due to monotony during a vigilance task. Analyses performed in this thesis confirm and extend findings from Michael (2010) that monotony leads to an important vigilance impairment independent of fatigue. This thesis is also the first to show that monotony changes the dynamics of vigilance evolution and tends to create a “monotonous state” characterised by reduced vigilance. Personality traits such as being a low sensation seeker can mitigate this vigilance decrement. It is also evident that lapses in vigilance can be predicted accurately with Bayesian modelling and Neural Networks. This framework was then applied to the driving task by designing a simulated monotonous driving task. The design of such a task requires multidisciplinary knowledge and involved psychologist Rebecca Michael. Monotony was varied through both road design and road environment variables. This experiment demonstrated that road monotony can lead to driving impairment. In particular, monotonous road scenery was shown to have more impact than monotonous road design. 
Next, this study identified a variety of surrogate measures that are correlated with vigilance levels obtained from the EEG. Such vigilance states can be predicted from these surrogate measures. This means that vigilance decrement can be detected in a car without the use of an EEG device. Amongst the different mathematical models tested in this thesis, only Neural Networks predicted the vigilance levels accurately. The results of both experiments provide valuable information about the methodology for predicting vigilance decrement. The issue is complex and requires modelling that can adapt to high inter-individual differences. Only Neural Networks proved accurate in both studies, suggesting that these models are the most likely to be accurate when used on real roads or for further research on vigilance modelling. This research provides a better understanding of the driving task under monotonous conditions. Results demonstrate that mathematical modelling can be used to determine the driver’s vigilance state while driving, using surrogate measures identified during this study. This research has opened up avenues for future research and could result in the development of an in-vehicle device predicting driver vigilance decrement. Such a device could contribute to a reduction in crashes and therefore improve road safety.
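
The surrogate-measure prediction step can be sketched with a small feed-forward neural network classifying vigilance state (alert vs lapse). The six features stand in for hypothetical driver/vehicle surrogate measures (e.g. steering variability, lane deviation), and the data are simulated, not the study's recordings:

```python
# Sketch: neural-network classification of vigilance state from
# synthetic surrogate measures.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Six synthetic features standing in for surrogate measures.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)   # held-out classification accuracy
print(f"held-out accuracy: {acc:.2f}")
```

An in-vehicle device of the kind envisaged above would run such a trained model on streaming surrogate measures, with no EEG required at deployment time.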

Abstract:

The typical daily decision-making process of individuals regarding use of the transport system involves mainly three types of decisions: mode choice, departure time choice and route choice. This paper focuses on the mode and departure time choice processes and studies different model specifications for a combined mode and departure time choice model. The paper compares different sets of explanatory variables as well as different model structures to capture the correlation among alternatives and taste variations among the commuters. The main hypothesis tested in this paper is that departure time alternatives are also correlated through the amount of delay. Correlation among different alternatives is confirmed by analyzing different nesting structures as well as error component formulations. Random coefficient logit models confirm the presence of random taste heterogeneity across commuters. Mixed nested logit models are estimated to jointly account for the random taste heterogeneity and the correlation among different alternatives. Results indicate that accounting for the random taste heterogeneity as well as inter-alternative correlation improves the model performance.
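
The nesting idea can be illustrated with nested logit choice probabilities, in which alternatives sharing a nest (here, the same departure-time period) are correlated. The utilities, alternative names and nesting parameter below are hypothetical, not estimates from the paper:

```python
# Sketch: nested logit probabilities for combined mode/departure-time
# alternatives, with peak and off-peak nests.
import math

def nested_logit(utilities, nests, mu=0.5):
    """utilities: {alt: V}; nests: {nest: [alts]}; mu in (0, 1] is the
    nesting (dissimilarity) parameter; mu = 1 collapses to plain logit."""
    # Inclusive value (logsum) of each nest.
    iv = {n: math.log(sum(math.exp(utilities[a] / mu) for a in alts))
          for n, alts in nests.items()}
    denom = sum(math.exp(mu * v) for v in iv.values())
    probs = {}
    for n, alts in nests.items():
        p_nest = math.exp(mu * iv[n]) / denom          # marginal nest prob.
        for a in alts:
            p_in = math.exp(utilities[a] / mu) / math.exp(iv[n])
            probs[a] = p_nest * p_in                   # joint probability
    return probs

V = {"car_peak": -1.2, "car_offpeak": -0.8,
     "pt_peak": -1.5, "pt_offpeak": -1.1}
nests = {"peak": ["car_peak", "pt_peak"],
         "offpeak": ["car_offpeak", "pt_offpeak"]}
p = nested_logit(V, nests)
print(p)
```

Mixed nested logit, as estimated in the paper, would additionally draw the utility coefficients from a random distribution across commuters to capture taste heterogeneity.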

Abstract:

Persistent use of safety restraints prevents deaths and reduces the severity and number of injuries resulting from motor vehicle crashes. However, safety-restraint use rates in the United States have been below those of other nations with safety-restraint enforcement laws. With a better understanding of the relationship between safety-restraint law enforcement and safety-restraint use, programs can be implemented to decrease the number of deaths and injuries resulting from motor vehicle crashes. Does safety-restraint use increase as enforcement increases? Do motorists increase their safety-restraint use in response to the general presence of law enforcement or to targeted law enforcement efforts? Does a relationship between enforcement and restraint use exist at the countywide level? A logistic regression model was estimated by using county-level safety-restraint use data and traffic citation statistics collected in 13 counties within the state of Florida in 1997. The model results suggest that safety-restraint use is positively correlated with enforcement intensity, is negatively correlated with safety-restraint enforcement coverage (in lane-miles of enforcement coverage), and is greater in urban than rural areas. The quantification of these relationships may assist Florida and other law enforcement agencies in raising safety-restraint use rates by allocating limited funds more efficiently, either by allocating additional time for enforcement activities of the existing force or by increasing enforcement staff. In addition, the research supports the commonsense notion that enforcement activities do result in a behavioral response.
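
The model form described above can be sketched as a logistic regression of observed restraint use on enforcement intensity, enforcement coverage and an urban indicator. The data are simulated, with true coefficient signs chosen only to mimic the reported directions (positive for intensity, negative for coverage, positive for urban):

```python
# Sketch: logistic regression of restraint use on enforcement variables,
# using simulated data with assumed effect directions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
intensity = rng.uniform(0, 1, n)   # enforcement intensity (scaled, made up)
coverage = rng.uniform(0, 1, n)    # lane-miles of coverage (scaled, made up)
urban = rng.integers(0, 2, n)      # 1 = urban county observation

# Simulate use with assumed signs: +intensity, -coverage, +urban.
logit = 0.3 + 1.5 * intensity - 0.8 * coverage + 0.6 * urban
use = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([intensity, coverage, urban])
model = LogisticRegression().fit(X, use)
print(model.coef_)   # expected sign pattern: +, -, +
```

In the actual study the unit of analysis is the county, so a grouped (binomial) formulation of the same model would be the closer analogue.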

Abstract:

Potential impacts of plantation forestry practices on soil organic carbon and Fe available to microorganisms were investigated in a subtropical coastal catchment. The impacts of harvesting or replanting were largely limited to the soil top layer (0–10 cm depth). The thirty-year-old Pinus plantation showed low soil moisture content (Wc) and relatively high levels of soil total organic carbon (TOC). Harvesting and replanting increased soil Wc but reduced TOC levels. Mean dissolved organic carbon (DOC) and microbial biomass carbon (MBC) increased in harvested or replanted soils, but such changes were not statistically significant (P > 0.05). Total dithionite-citrate and aqua regia-extractable Fe did not respond to forestry practices, but acid ammonium oxalate and pyrophosphate-extractable, bioavailable Fe decreased markedly after harvesting or replanting. Numbers of heterotrophic bacteria were significantly correlated with DOC levels (P < 0.05), whereas Fe-reducing bacteria and S-bacteria detected using laboratory cultivation techniques did not show strong correlation with either soil DOC or Fe content.

Abstract:

The current paradigm in soil organic matter (SOM) dynamics is that the proportion of biologically resistant SOM will increase when total SOM decreases. Recently, several studies have focused on identifying functional pools of resistant SOM consistent with expected behaviours. Our objective was to combine physical and chemical approaches to isolate and quantify biologically resistant SOM by applying acid hydrolysis treatments to physically isolated silt- and clay-sized soil fractions. Microaggregate-derived and easily dispersed silt- and clay-sized fractions were isolated from surface soil samples collected from six long-term agricultural experiment sites across North America. These fractions were hydrolysed to quantify the non-hydrolysable fraction, which was hypothesized to represent a functional pool of resistant SOM. Organic C and total N concentrations in the four isolated fractions decreased in the order: native > no-till > conventional-till at all sites. Concentrations of non-hydrolysable C (NHC) and N (NHN) were strongly correlated with initial concentrations, and C hydrolysability was found to be invariant with management treatment. Organic C was less hydrolysable than N, and overall, resistance to acid hydrolysis was greater in the silt-sized fractions compared with the clay-sized fractions. The acid hydrolysis results are inconsistent with the expected behaviour of increasing recalcitrance with decreasing SOM content: while %NHN was greater in cultivated soils compared with their native analogues, %NHC did not increase with decreasing total organic C concentrations. The analyses revealed an interaction between biochemical and physical protection mechanisms that acts to preserve SOM in fine mineral fractions, but the inconsistency of the pool size with expected behaviour remains to be fully explained.

Abstract:

The literature was reviewed and analyzed to determine the feasibility of using a combination of acid hydrolysis and CO2-C release during long-term incubation to determine soil organic carbon (SOC) pool sizes and mean residence times (MRTs). Analysis of 1100 data points showed the SOC remaining after hydrolysis with 6 M HCl ranged from 30 to 80% of the total SOC depending on soil type, depth, texture, and management. Nonhydrolyzable carbon (NHC) in conventional-till soils represented 48% of SOC; no-till averaged 56%, forest 55%, and grassland 56%. Carbon dating showed an average 1200 yr greater MRT for the NHC fraction than for total SOC. Long-term incubation, involving measurement of CO2 evolution and curve fitting, measured active and slow pools. Active-pool C comprised 2 to 8% of the SOC with MRTs of days to months; the slow pool comprised 45 to 65% of the SOC and had MRTs of 10 to 80 yr. Comparison of field C-14 and C-13 data with hydrolysis-incubation data showed a high correlation between independent techniques across soil types and experiments. There were large differences in MRTs depending on the length of the experiment. Insertion of hydrolysis-incubation-derived estimates of active (C-a), slow (C-s), and resistant (C-r) pools into the DAYCENT model provided estimates of daily field CO2 evolution rates. These were well correlated with field CO2 measurements. Although not without some interpretation problems, acid hydrolysis-laboratory incubation is useful for determining SOC pools and fluxes, especially when used in combination with associated measurements.
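
The active/slow pool partition described above can be sketched as a two-pool first-order model fitted to cumulative CO2-C evolved during incubation, C(t) = Ca·(1 - exp(-ka·t)) + Cs·(1 - exp(-ks·t)). The pool sizes, rate constants and noise level below are illustrative values, not data from the reviewed studies:

```python
# Sketch: curve-fit a two-pool first-order model to synthetic
# cumulative CO2-C incubation data, recovering pool sizes and MRTs.
import numpy as np
from scipy.optimize import curve_fit

def two_pool(t, Ca, ka, Cs, ks):
    """Cumulative CO2-C from an active pool (Ca, ka) and slow pool (Cs, ks)."""
    return Ca * (1 - np.exp(-ka * t)) + Cs * (1 - np.exp(-ks * t))

t = np.arange(0, 365, 7.0)                        # days of incubation
obs = two_pool(t, 40.0, 0.08, 600.0, 0.0004)      # mg CO2-C / kg soil (made up)
obs = obs * (1 + 0.01 * np.random.default_rng(2).standard_normal(t.size))

p0 = [30, 0.05, 500, 0.001]                        # rough initial guesses
(Ca, ka, Cs, ks), _ = curve_fit(two_pool, t, obs, p0=p0, maxfev=20000)
print(f"active: {Ca:.0f} (MRT {1/ka:.0f} d)  slow: {Cs:.0f} (MRT {1/ks/365:.1f} yr)")
```

MRT here is simply the inverse of each first-order rate constant; the acid-hydrolysis NHC fraction is then used as an independent estimate of the resistant pool that a one-year incubation cannot resolve.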

Abstract:

Carbon pools and fluxes were quantified along an environmental gradient in northern Arizona. Data are presented on vegetation, litter, and soil C pools and soil CO2 fluxes from ecosystems ranging from shrub-steppe through woodlands to coniferous forest and the ecotones in between. Carbon pool sizes and fluxes in these semiarid ecosystems vary with temperature and precipitation and are strongly influenced by canopy cover. Ecosystem respiration is approximately 50% greater in the more mesic forest environment than in the dry shrub-steppe environment. Soil respiration rates within a site vary seasonally with temperature but appear to be constrained by low soil moisture during dry summer months, when approximately 75% of total annual soil respiration occurs. The total annual amount of CO2 respired across all sites is positively correlated with annual precipitation and negatively correlated with temperature. Results suggest that changes in the amount and periodicity of precipitation will have a greater effect on C pools and fluxes than will changes in temperature in the semiarid Southwestern United States.

Abstract:

Purpose: To investigate the influence of soft contact lenses on regional variations in corneal thickness and shape while taking account of natural diurnal variations in these corneal parameters. Methods: Twelve young, healthy subjects wore 4 different types of soft contact lenses on 4 different days. The lenses were of two different materials (silicone hydrogel, hydrogel), designs (spherical, toric) and powers (–3.00, –7.00 D). Corneal thickness and topography measurements were taken before and after 8 hours of lens wear and on two days without lens wear, using the Pentacam HR system. Results: The hydrogel toric contact lens caused the greatest level of corneal thickening in the central (20.3 ± 10.0 microns) as well as the peripheral cornea (24.1 ± 9.1 microns) (p < 0.001), with an obvious regional swelling of the cornea beneath the stabilizing zones. The anterior corneal surface generally showed slight flattening. All contact lenses resulted in central posterior corneal steepening, and this was weakly correlated with central corneal swelling (p = 0.03) and peripheral corneal swelling (p = 0.01). Conclusions: There was an obvious regional corneal swelling apparent after wear of the hydrogel soft toric lenses, due to the location of the thicker stabilization zones of these lenses. However, with the exception of the hydrogel toric lens, the magnitude of corneal swelling induced by the contact lenses over the 8 hours of wear was less than the natural diurnal thinning of the cornea over this same period.

Relevance:

10.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to examine the role of three strategies – organisational, business and information systems – in the post-implementation of technological innovations. The findings reported in the paper are that improvements in operational performance can only be achieved by aligning technological innovation effectiveness with operational effectiveness. Design/methodology/approach – A combination of qualitative and quantitative methods was used in a two-stage methodological approach. Unstructured and semi-structured interviews, based on the findings of the literature, were used to identify key factors used in the survey instrument design. Confirmatory factor analysis (CFA) was used to examine structural relationships between the set of observed variables and the set of continuous latent variables. Findings – Initial findings suggest that organisations looking for improvements in operational performance through the adoption of technological innovations need to align these innovations with the operational strategies of the firm. The impacts of operational effectiveness and technological innovation effectiveness are related directly and significantly to improved operational performance. Perceived increase in operational effectiveness is positively and significantly correlated with improved operational performance. The findings suggest that technological innovation effectiveness is also positively correlated with improved operational performance. However, the study found no direct influence of the strategies – organisational, business and information systems (IS) – on improvement of operational performance. Improved operational performance is the result of interactions between the implementation of strategies and the related outcomes of both technological innovation and operational effectiveness.
Practical implications – Some organisations are using technological innovations such as enterprise information systems to innovate through improvements in operational performance. However, they often focus strategically only on the effectiveness of technological innovation or on operational effectiveness. Such a focus will be detrimental to the enterprise in the long term. This research demonstrated that it is not possible to achieve maximum returns through technological innovations alone; dimensions of operational effectiveness need to be aligned with technological innovations to improve operational performance. Originality/value – No single technological innovation implementation can deliver a sustained competitive advantage; rather, an advantage is obtained through the capacity of an organisation to exploit technological innovations’ functionality on a continuous basis. To achieve sustainable results, technology strategy must be aligned with organisational and operational strategies. This research proposes the key performance objectives and dimensions that organisations should focus on to achieve a strategic alignment. Research limitations/implications – The principal limitation of this study is that the findings are based on a small sample. There is a need to explore the influence of scale prior to generalizing the results of this study.

Relevance:

10.00%

Publisher:

Abstract:

Recently published studies not only demonstrated that laser printers are often significant sources of ultrafine particles, but they also shed light on particle formation mechanisms. While the role of fuser roller temperature as a factor affecting particle formation rate has been postulated, its impact has never been quantified. To address this gap in knowledge, this study measured emissions from 30 laser printers in a chamber using a standardized printing sequence, as well as monitoring fuser roller temperature. Based on a simplified mass balance equation, the average emission rates of particle number, PM2.5 and O3 were calculated. The results showed that: almost all printers were found to be high particle number emitters (i.e. > 1.01 × 10^10 particles/min); colour printing generated more PM2.5 than monochrome printing; and all printers generated significant amounts of O3. Particle number emissions varied significantly during printing and followed the cycle of fuser roller temperature variation, which points to temperature being the strongest factor controlling emissions. For two sub-groups of printers using the same technology (heating lamps), systematic positive correlations, in the form of a power law, were found between average particle number emission rate and average roller temperature. Other factors, such as fuser material and structure, are also thought to play a role, since no such correlation was found for the remaining two sub-groups of printers using heating lamps, or for the printers using heating strips. In addition, O3 and total PM2.5 were not found to be statistically correlated with fuser temperature.
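A simplified chamber mass balance of the kind the abstract names can be sketched as below. This is a generic well-mixed-chamber formulation, not the study's exact equation, and every number (chamber volume, air-exchange rate, concentrations) is an invented illustration.

```python
# Minimal sketch of a well-mixed chamber mass balance for estimating an
# average particle number emission rate. Assumes first-order loss by air
# exchange only:  dC/dt = ER/V - k*C.  This is an assumed generic form,
# not necessarily the exact equation used in the study.
def emission_rate(c_start, c_end, dt_min, volume_m3, ach_per_h):
    """Average emission rate (particles/min) from the rise in chamber
    concentration (particles/cm^3) over dt_min minutes."""
    k = ach_per_h / 60.0                # air-exchange loss rate, 1/min
    dC_dt = (c_end - c_start) / dt_min  # particles cm^-3 min^-1
    c_mean = 0.5 * (c_start + c_end)    # mean concentration over the run
    # factor 1e6 converts particles/cm^3 to particles/m^3
    return volume_m3 * 1e6 * (dC_dt + k * c_mean)


# Hypothetical print job: 1 m^3 chamber, 0.5 air changes per hour,
# concentration rising from 1e3 to 5e4 particles/cm^3 over a 10 min print.
er = emission_rate(1e3, 5e4, 10.0, 1.0, 0.5)
```

With these invented inputs the estimate lands in the 10^9 particles/min range; the abstract's "high emitter" threshold of > 1.01 × 10^10 particles/min would correspond to a faster concentration rise or a larger chamber.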

Relevance:

10.00%

Publisher:

Abstract:

Background, Aim and Scope The impact of air pollution on school children’s health is currently one of the key foci of international and national agencies. Of particular concern are ultrafine particles, which are emitted in large quantities, contain large concentrations of toxins and are deposited deeply in the respiratory tract. Materials and methods In this study, an intensive sampling campaign of indoor and outdoor airborne particulate matter was carried out in a primary school in February 2006 to investigate indoor and outdoor particle number (PN) and mass (PM2.5) concentrations, and particle size distribution, and to evaluate the influence of outdoor air pollution on the indoor air. Results For outdoor PN and PM2.5, early morning and late afternoon peaks were observed on weekdays, which are consistent with traffic rush hours, indicating the predominant effect of vehicular emissions. However, the temporal variations of outdoor PM2.5 and PN concentrations occasionally showed extremely high peaks, mainly due to human activities such as cigarette smoking and the operation of a mower near the sampling site. The indoor PM2.5 level was mainly affected by the outdoor PM2.5 (r = 0.68, p<0.01), whereas the indoor PN concentration had some association with outdoor PN values (r = 0.66, p<0.01) even though the indoor PN concentration was occasionally influenced by indoor sources, such as cooking, cleaning and floor polishing activities. Correlation analysis indicated that the outdoor PM2.5 was inversely correlated with the indoor to outdoor PM2.5 ratio (I/O ratio) (r = -0.49, p<0.01), while the indoor PN had a weak correlation with the I/O ratio for PN (r = 0.34, p<0.01). Discussion and Conclusions The results showed that occupancy did not cause any major changes to the modal structure of particle number and size distribution, even though the I/O ratio was different for different size classes. 
The I/O curves had a maximum value for particles with diameters of 100–400 nm under both occupied and unoccupied scenarios, whereas no significant difference in I/O ratio for PM2.5 was observed between occupied and unoccupied conditions. Inspection of the size-resolved I/O ratios in the preschool centre and the classroom suggested that the I/O ratio in the preschool centre was the highest for accumulation mode particles at 600 nm after school hours, whereas the average I/O ratios of both nucleation mode and accumulation mode particles in the classroom were much lower than those of Aitken mode particles. Recommendations and Perspectives The findings obtained in this study are useful for epidemiological studies to estimate the total personal exposure of children, and to develop appropriate control strategies for minimizing the adverse health effects on school children.
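The size-resolved I/O ratio analysis described above amounts to dividing indoor by outdoor concentration per size bin and locating the bin where the ratio peaks. The sketch below uses invented diameters and concentrations, chosen only so that the peak falls in the 100–400 nm range the abstract reports.

```python
# Illustrative size-resolved indoor/outdoor (I/O) ratio computation.
# Diameters and concentrations are invented, NOT measured values from
# the study; they are chosen so the I/O peak lands at 100-400 nm.
# Each entry: (diameter nm, indoor conc, outdoor conc), both in cm^-3.
bins = [
    (20, 500.0, 2000.0),    # nucleation mode
    (60, 900.0, 2500.0),    # Aitken mode
    (200, 1800.0, 2400.0),  # accumulation mode, within 100-400 nm
    (600, 700.0, 1400.0),   # larger accumulation mode
]

# I/O ratio per size bin
io = {d: c_in / c_out for d, c_in, c_out in bins}

# Size bin where outdoor particles penetrate indoors most efficiently
peak_diameter = max(io, key=io.get)
```

An I/O ratio below 1 in every bin, as here, is the usual picture when the dominant particle sources are outdoors and the building envelope filters part of each size class.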

Relevance:

10.00%

Publisher:

Abstract:

Measurements in the exhaust plume of a petrol-driven motor car showed that molecular cluster ions of both signs were present in approximately equal amounts. The emission rate increased sharply with engine speed while the charge symmetry remained unchanged. Measurements at the kerbside of nine motorways and five city roads showed that the mean total cluster ion concentration near city roads (603 cm^-3) was about one-half of that near motorways (1211 cm^-3) and about twice as high as that in the urban background (269 cm^-3). Both positive and negative ion concentrations near a motorway showed a significant linear increase with traffic density (R^2 = 0.3 at p < 0.05) and correlated well with each other in real time (R^2 = 0.87 at p < 0.01). Heavy-duty diesel vehicles comprised the main source of ions near busy roads. Measurements were conducted as a function of downwind distance from two motorways carrying around 120-150 vehicles per minute. Total traffic-related cluster ion concentrations decreased rapidly with distance, falling by one-half from the closest approach of 2 m to 5 m from the kerb. Measured concentrations decreased to background at about 15 m from the kerb when the wind speed was 1.3 m s^-1, this distance being greater at higher wind speeds. The number and net charge concentrations of aerosol particles were also measured. Unlike particles, which were carried downwind to distances of a few hundred metres, cluster ions emitted by motor vehicles were not present at more than a few tens of metres from the road.
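The downwind decay pattern reported above can be sketched with a simple model: the traffic-related excess over the urban background decays exponentially with distance from the kerb. This is an assumed functional form, not a fit performed in the study; the decay length is merely chosen to reproduce the reported halving between 2 m and 5 m.

```python
# Hedged sketch: exponential decay of the traffic-related excess ion
# concentration toward the urban background. The exponential form and
# the decay length are illustrative assumptions, not the study's fit.
import math

BACKGROUND = 269.0  # mean urban background cluster ion conc., cm^-3
C_KERB = 1211.0     # mean concentration at the 2 m kerbside point, cm^-3

# Halving of the excess over (5 - 2) = 3 m gives decay length L = 3 / ln 2
L = 3.0 / math.log(2.0)


def ion_concentration(distance_m):
    """Modelled concentration (cm^-3): excess over background decays
    exponentially from its value at the 2 m closest approach."""
    excess = (C_KERB - BACKGROUND) * math.exp(-(distance_m - 2.0) / L)
    return BACKGROUND + excess


c2 = ion_concentration(2.0)    # kerbside value
c5 = ion_concentration(5.0)    # excess halved relative to 2 m
c15 = ion_concentration(15.0)  # close to background, per the abstract
```

Under this assumed model the excess at 15 m has fallen to roughly 5% of its kerbside value, consistent with the observation that concentrations reach background at about 15 m in light wind.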