734 results for While programak
Abstract:
This work covers the application of high-speed machine vision for closed-loop position control, or visual servoing, of a robot manipulator. It provides comprehensive coverage of all aspects of the visual servoing problem: robotics, vision, control, technology and implementation issues. While much of the discussion is quite general, the experimental work described is based on the use of a high-speed binary vision system with a monocular "eye-in-hand" camera.
Abstract:
Most studies on the characterisation of deposits on heat exchangers have been based on bulk analysis, neglecting the fine structural features and the compositional profiles of layered deposits. Attempts have been made to fully characterise a fouled stainless steel tube obtained from a quintuple Roberts evaporator of a sugar factory using X-ray diffraction and scanning electron microscopy techniques. The deposit contains three layers at the bottom of the tube and two layers on the other sections, and is composed of hydroxyapatite, calcium oxalate dihydrate and an amorphous material. The proportions of these phases varied along the tube height. Energy-dispersive spectroscopy and XRD analysis on the surfaces of the outermost and innermost layers showed that hydroxyapatite was the major phase attached to the tube wall, while calcium oxalate dihydrate (with pits and voids) was the major phase on the juice side. Elemental mapping of the cross-sections of the deposit revealed the presence of a mineral, Si-Mg-Al-Fe-O, which is probably a silicate mineral. Reasons for the defects in the oxalate crystal surfaces, the differences in the crystal size distribution from the bottom to the top of the tube, and the composite fouling process have been postulated.
Abstract:
This paper takes Kent and Taylor’s (2002) call to develop a dialogic theory of public relations and suggests that a necessary first step is the modelling of the process of dialogic communication in public relations. In order to achieve this, extant literature from a range of fields is reviewed, seeking to develop a definition of dialogic communication that is meaningful to the practice of contemporary public relations. A simple transmission model of communication is used as a starting point. This is synthesised with concepts relating specifically to dialogue, taken here in its broadest sense rather than defined as any one particular outcome. The definition that emerges from this review leads to the conclusion that dialogic communication in public relations involves the interaction of three roles – those of sender, receiver, and responder. These three roles are shown to be adopted at different times by both participants involved in dialogic communication. It is further suggested that variations occur in how these roles are conducted: the sender and receiver roles can be approached in a passive or an active way, while the responder role can be classified as being either resistant or responsive to the information received in dialogic communication. The final modelling of the definition derived provides a framework which can be tested in the field to determine whether variations in the conduct of the roles in dialogic communication actually exist, and if so, whether they can be linked to the different types of outcome from dialogic communication identified previously in the literature.
Abstract:
A number of series of poly(acrylic acid)s (PAA) of differing end-groups and molecular mass were used to study the inhibition of calcium oxalate crystallization. The effects of the end-group on crystal speciation and morphology were significant and dramatic, with hexyl-isobutyrate end-groups giving preferential formation of calcium oxalate dihydrate (COD) rather than the more stable calcium oxalate monohydrate (COM), while both more hydrophobic and less hydrophobic end-groups led predominantly to formation of the least thermodynamically stable form of calcium oxalate, calcium oxalate trihydrate. Conversely, molecular mass had little impact on calcium oxalate speciation or crystal morphology. It is probable that the observed effects are related to the rate of desorption of the PAA moiety from the crystal(lite) surfaces, and the results point to a major role for the end-group, as well as molecular mass, in controlling desorption rate.
Abstract:
ROLE OF THE LOW-AFFINITY β1-ADRENERGIC RECEPTOR IN NORMAL AND DISEASED HEARTS Background: The β1-adrenergic receptor (AR) has at least two binding sites, β1HAR and β1LAR (the high- and low-affinity sites of the β1AR, respectively), both of which cause cardiostimulation. Some β-blockers, for example (-)-pindolol and (-)-CGP 12177, can activate β1LAR at higher concentrations than those required to block β1HAR. While β1HAR can be blocked by all clinically used β-blockers, β1LAR is relatively resistant to blockade. Thus, chronic β1LAR activation may occur in the setting of β-blocker therapy, thereby mediating persistent βAR signaling, and it is important to determine the potential significance of β1LAR in vivo, particularly in disease settings. Method and result: C57Bl/6 male mice were used. Chronic (4-week) β1LAR activation was achieved by treatment with (-)-CGP12177 via osmotic minipump. Cardiac function was assessed by echocardiography and catheterization. (-)-CGP12177 treatment in healthy mice increased heart rate and left ventricular (LV) contractility without detectable LV remodelling or hypertrophy. In mice subjected to an 8-week period of aorta banding, (-)-CGP12177 treatment given during weeks 4-8 led to a positive inotropic effect. (-)-CGP12177 treatment exacerbated LV remodelling, indicated by a worsening of LV hypertrophy by ??% (estimated by weight, wall thickness and cardiomyocyte size) and interstitial/perivascular fibrosis (by histology). Importantly, (-)-CGP12177 treatment of aorta-banded mice exacerbated cardiac expression of hypertrophic, fibrogenic and inflammatory genes (all p<0.05 vs. non-treated control with aorta banding). Conclusion: β1LAR activation provides functional support to the heart in both normal and diseased (pressure overload) settings. Sustained β1LAR activation in the diseased heart exacerbates LV remodelling and may therefore promote disease progression from compensatory hypertrophy to heart failure.
Abstract:
This book disseminates current information pertaining to the modulatory effects of foods and other food substances on behavior and neurological pathways and, importantly, vice versa. This ranges from the neuroendocrine control of eating to the effects of life-threatening disease on eating behavior. The importance of this contribution to the scientific literature lies in the fact that food and eating are an essential component of cultural heritage, but the effects of perturbations in the food/cognitive axis can be profound. The complex interrelationship between neuropsychological processing, diet, and behavioral outcome is explored within the context of the most contemporary psychobiological research in the area. This comprehensive psychobiology- and pathology-themed text examines the broad spectrum of diet, behavioral, and neuropsychological interactions from normative function to occurrences of severe and enduring psychopathological processes.
Abstract:
The paper proposes a solution for testing a physical distributed generation (DG) system along with a computer-simulated network. The computer-simulated network is referred to as the virtual grid in this paper. Integrating a DG with the virtual grid provides a broad scope for testing the power-supplying capability and dynamic performance of a DG. It is shown that a DG can supply a part of the load power while keeping the Point of Common Coupling (PCC) voltage magnitude constant. To represent the actual load, a universal load with power-regenerative capability is designed with the help of a voltage source converter (VSC) that mimics the load characteristic. The overall performance of the proposed scheme is verified using computer simulation studies.
Abstract:
Hand-held mobile phone use while driving is illegal throughout Australia, yet many drivers persist with this behaviour. This study aims to understand the internal, driver-related and external, situational-related factors influencing drivers’ willingness to use a hand-held mobile phone while driving. Sampling 160 university students, this study utilised the Theory of Planned Behaviour (TPB) to examine a range of belief-based constructs. Additionally, drivers’ personality traits of neuroticism and extroversion were measured with the Neuroticism Extroversion Openness-Five Factor Inventory (NEO-FFI). In relation to the external, situational-related factors, four different driving-related scenarios, intended to evoke differing levels of drivers’ reported stress, were devised for the study; these manipulated drivers’ time urgency (low versus high) and passenger presence (alone versus with friends). In these scenarios, drivers’ willingness to use a mobile phone in general was measured. Hierarchical regression analyses across the four different driving scenarios found that, overall, the TPB components significantly accounted for drivers’ willingness to use a mobile phone above and beyond the demographic variables. Subjective norms, however, were only a significant predictor of drivers’ willingness in situations where the drivers were driving alone. Generally, neuroticism and extroversion did not significantly predict drivers’ willingness above and beyond the TPB and demographic variables. Overall, the findings broaden our understanding of the internal and external factors influencing drivers’ willingness to use a hand-held mobile phone while driving despite the illegality of this behaviour. The findings may have important practical implications in terms of better informing road safety campaigns targeting drivers’ mobile phone use which, in turn, may contribute to a reduction in the extent to which mobile phone use contributes to road crashes.
Abstract:
Purpose – In recent years, knowledge-based urban development (KBUD) has been introduced as a new strategic development approach for the regeneration of industrial cities. It aims to create a knowledge city consisting of planning strategies, IT networks and infrastructures, achieved through supporting the continuous creation, sharing, evaluation, renewal and updating of knowledge. Improving urban amenities and ecosystem services by creating a sustainable urban environment is one of the fundamental components of KBUD. In this context, environmental assessment plays an important role in steering the urban environment and economic development in a sustainable direction. The purpose of this paper is to present the role of assessment tools in the environmental decision-making process of knowledge cities. Design/methodology/approach – The paper proposes a new assessment tool providing a template for a decision support system that will enable the evaluation of possible environmental impacts in an existing and future urban context. The paper presents the methodology of the proposed model, named ‘ASSURE’, which consists of four main phases. Originality/value – The proposed model provides useful guidance for evaluating urban development and its environmental impacts so as to achieve sustainable knowledge-based urban futures. Practical implications – The proposed model offers an innovative approach to securing the resilience and function of urban natural systems against environmental change while maintaining the economic development of cities.
Abstract:
The paper documents the development of an ethical framework for my current PhD project. I am a practice-led researcher with a background in creative writing. My project involves conducting a number of oral history interviews with individuals living in Brisbane, Queensland, Australia. I use the interviews to inform a novel set in Brisbane. In doing so, I hope to provide a lens into a cultural and historical space by creating a rich, textured and vivid narrative while still retaining some of the essential aspects of the oral history. While developing a methodology for fictionalising these oral histories, I have encountered a diverse range of ethical issues. In particular, I have had to confront my role as a writer and researcher working with other people’s stories. In order to grapple with the complex ethics of such an engagement, I examine the devices and strategies employed by other creative practitioners working in similar fields. I focus chiefly on Miguel Barnet’s Biography of a Runaway Slave (published in English in 1968) and Dave Eggers’ What Is the What: The Autobiography of Valentino Achak Deng, a Novel (2005) in order to understand the complex processes of mediation involved in the artful shaping of oral histories. The paper explores how I have confronted and resolved ethical considerations in my theoretical and creative work.
Abstract:
The contribution of risky behaviour to the increased crash and fatality rates of young novice drivers is recognised in the road safety literature around the world. Exploring such risky driver behaviour has led to the development of tools like the Driver Behaviour Questionnaire (DBQ) to examine driving violations, errors, and lapses [1]. Whilst the DBQ has been utilised in young novice driver research, some items within this tool seem specifically designed for the older, more experienced driver, whilst others appear to assess both behaviour and related motives. The current study was prompted by the need for a risky behaviour measurement tool that can be utilised with young drivers holding a provisional driving licence. Sixty-three items exploring young driver risky behaviour, developed from the road safety literature, were incorporated into an online survey. These items assessed driver, passenger, journey, car and crash-related issues. A sample of 476 drivers aged 17-25 years (M = 19, SD = 1.59 years) with a provisional driving licence and matched for age, gender, and education was drawn from a state-wide sample of 761 young drivers who completed the survey. Factor analysis based upon principal components extraction, followed by an oblique rotation, was used to investigate the underlying dimensions of young novice driver risky behaviour. A five-factor solution comprising 44 items was identified, accounting for 55% of the variance in young driver risky behaviour. Factor 1 accounted for 32.5% of the variance and appeared to measure driving violations that were transient in nature - risky behaviours following risky decisions made during the journey (e.g., speeding). Factor 2 accounted for 10.0% of variance and appeared to measure driving violations that were fixed in nature, the risky decisions being made before the journey (e.g., drink driving).
Factor 3 accounted for 5.4% of variance and appeared to measure misjudgement (e.g., misjudging the speed of an oncoming vehicle). Factor 4 accounted for 4.3% of variance and appeared to measure risky driving exposure (e.g., driving at night with friends as passengers). Factor 5 accounted for 2.8% of variance and appeared to measure driver emotions or mood (e.g., anger). Given that the aim of the study was to create a research tool, the factors informed the development of five subscales and one composite scale. The composite scale had a very high internal consistency (Cronbach’s alpha) of .947. Self-reported data relating to police-detected driving offences, crash involvement, and intentions to break road rules within the next year were also collected. While the composite scale was only weakly correlated with self-reported crashes (r = .16, p < .001), it was moderately correlated with offences (r = .26, p < .001), and highly correlated with intentions to break the road rules (r = .57, p < .001). Further application of the developed scale is needed to confirm the factor structure within other samples of young drivers, both in Australia and in other countries. In addition, future research could explore the applicability of the scale for investigating the behaviour of other types of drivers.
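The internal-consistency figure reported above (Cronbach's alpha of .947 for the composite scale) is computed from item-level and total-score variances; a minimal Python sketch follows, with hypothetical item scores standing in for the survey data:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale, given one list of scores per item.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

# Hypothetical responses: 4 items answered by 6 respondents on a 1-5 scale.
items = [
    [5, 4, 3, 2, 4, 5],
    [5, 3, 3, 2, 5, 5],
    [4, 4, 2, 2, 4, 5],
    [5, 4, 3, 1, 4, 4],
]
alpha = cronbach_alpha(items)  # high for these internally consistent items
```

Values above roughly .9, as reported for the composite scale, indicate very high internal consistency (perfectly identical items would give exactly 1.0).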
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and data abstraction. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads).
Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy but markedly increase complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) having MR 9.8 and raw time-series summary data (dataset A) having MR 9.92. However, for all datasets based on time-series variables only, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets are of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method. For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09.
The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy increase achieved by adding risk factor variables to time-series-based models is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared with use of risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used as model input.
In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
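As the abstract notes, misclassification rate is skewed by class distribution, which is why the Kappa statistic is used alongside it: Kappa corrects observed accuracy for the agreement expected by chance. A minimal sketch (the confusion-matrix counts are hypothetical, not from the study):

```python
def kappa(tp, fn, fp, tn):
    """Cohen's kappa from a 2x2 confusion matrix."""
    n = tp + fn + fp + tn
    po = (tp + tn) / n  # observed agreement (accuracy)
    # expected agreement under chance, from the marginal distributions
    pe = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical model on an unbalanced dataset: 90% majority class.
# Predicting the majority class everywhere scores 90% accuracy, yet kappa is 0,
# exposing exactly the class-distribution effect the abstract describes.
print(kappa(tp=90, fn=0, fp=10, tn=0))  # prints 0.0
```

This is why a high-accuracy model on unbalanced anaesthesia data can still be uninformative, and why under-sampling the majority class changes the picture.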
Abstract:
Composite web services comprise several component web services. When a composite web service is executed centrally, a single web service engine is responsible for coordinating the execution of the components, which may create a bottleneck and degrade the overall throughput of the composite service when there are a large number of service requests. Potentially this problem can be handled by decentralizing execution of the composite web service, but this raises the issue of how to partition a composite service into groups of component services such that each group can be orchestrated by its own execution engine while ensuring acceptable overall throughput of the composite service. Here we present a novel penalty-based genetic algorithm to solve the composite web service partitioning problem. Empirical results show that our new algorithm outperforms existing heuristic-based solutions.
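For illustration, a penalty-based genetic algorithm of the general kind described can encode each partition as an assignment of components to execution groups, score it by inter-group communication, and add a large penalty for constraint violations. The sketch below is a simplification under assumed inputs (the communication-cost matrix and maximum group size are hypothetical; the paper's actual objective and operators may differ):

```python
import random

def partition_ga(n_comp, n_groups, comm, max_size,
                 pop=40, gens=100, pmut=0.1, seed=1):
    """Minimise inter-group communication; penalise oversized groups.
    comm[i][j] = messages exchanged between components i and j."""
    rng = random.Random(seed)

    def fitness(assign):
        cut = sum(comm[i][j]
                  for i in range(n_comp) for j in range(i + 1, n_comp)
                  if assign[i] != assign[j])
        sizes = [assign.count(g) for g in range(n_groups)]
        penalty = sum(max(0, s - max_size) for s in sizes)
        return cut + 1000 * penalty  # penalty-based objective

    popn = [[rng.randrange(n_groups) for _ in range(n_comp)]
            for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness)
        elite = popn[:pop // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cx = rng.randrange(1, n_comp)  # one-point crossover
            child = a[:cx] + b[cx:]
            for k in range(n_comp):        # per-gene mutation
                if rng.random() < pmut:
                    child[k] = rng.randrange(n_groups)
            children.append(child)
        popn = elite + children
    return min(popn, key=fitness)

# 6 components with heavy traffic inside {0,1,2} and inside {3,4,5}:
comm = [[0, 9, 9, 1, 0, 0],
        [9, 0, 9, 0, 1, 0],
        [9, 9, 0, 0, 0, 1],
        [1, 0, 0, 0, 9, 9],
        [0, 1, 0, 9, 0, 9],
        [0, 0, 1, 9, 9, 0]]
best = partition_ga(6, 2, comm, max_size=3)
```

On this toy instance the algorithm should recover the natural split into {0,1,2} and {3,4,5}, since any other partition cuts many more heavy edges or incurs the size penalty.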
Abstract:
While close-talking microphones give the best signal quality and produce the highest accuracy from current Automatic Speech Recognition (ASR) systems, speech signals enhanced by a microphone array have been shown to be an effective alternative in noisy environments. The use of microphone arrays, in contrast to close-talking microphones, alleviates the user's feeling of discomfort and distraction. For this reason, microphone arrays are popular and have been used in a wide range of applications such as teleconferencing, hearing aids, speaker tracking, and as the front-end to speech recognition systems. With advances in sensor and sensor network technology, there is considerable potential for applications that employ ad-hoc networks of microphone-equipped devices collaboratively as a virtual microphone array. By allowing such devices to be distributed throughout the users’ environment, the microphone positions are no longer constrained to traditional fixed geometrical arrangements. This flexibility in the means of data acquisition allows different audio scenes to be captured to give a complete picture of the working environment. In such ad-hoc deployments of microphone sensors, however, the lack of information about the location of devices and active speakers poses technical challenges for array signal processing algorithms, which must be addressed to allow deployment in real-world applications. While not an ad-hoc sensor network, conditions approaching this have in effect been imposed in recent National Institute of Standards and Technology (NIST) ASR evaluations on distant microphone recordings of meetings. The NIST evaluation data come from multiple sites, each with different and often loosely specified distant microphone configurations. This research investigates how microphone array methods can be applied to ad-hoc microphone arrays.
A particular focus is on devising methods that are robust to unknown microphone placements in order to improve the overall speech quality and recognition performance provided by the beamforming algorithms. In ad-hoc situations, microphone positions and likely source locations are not known, and beamforming must be achieved blindly. There are two general approaches that can be employed to blindly estimate the steering vector for beamforming. The first is direct estimation without regard to the microphone and source locations. The alternative is to first determine the unknown microphone positions through array calibration methods and then to use the traditional geometrical formulation for the steering vector. Following these two major approaches investigated in this thesis, a novel clustered approach is proposed, which involves clustering the microphones and selecting clusters based on their proximity to the speaker. Novel experiments are conducted to demonstrate that the proposed method of automatically selecting clusters of microphones (i.e., a subarray) closely located both to each other and to the desired speech source may in fact provide more robust speech enhancement and recognition than the full array could.
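As a toy illustration of the first approach (blind estimation without known geometry), the relative delay between a microphone pair can be recovered by maximising their cross-correlation over candidate lags; these per-pair delays are what a steering vector encodes. The signals below are synthetic and the lag search is deliberately simple:

```python
import math

def best_lag(ref, sig, max_lag):
    """Return the lag (in samples) that maximises the cross-correlation
    between a reference channel and a second microphone channel."""
    def corr(lag):
        return sum(ref[n] * sig[n + lag]
                   for n in range(len(ref))
                   if 0 <= n + lag < len(sig))
    return max(range(-max_lag, max_lag + 1), key=corr)

# Hypothetical two-channel capture: channel 2 hears the source 5 samples late.
src = [math.sin(0.3 * n) for n in range(200)]
ch1 = src
ch2 = [0.0] * 5 + src[:-5]          # same signal, delayed by 5 samples
lag = best_lag(ch1, ch2, max_lag=10)  # estimated inter-channel delay
```

In a real ad-hoc array this pairwise estimation is far harder: reverberation, noise, and unsynchronised devices all corrupt the correlation peak, which is part of what motivates the clustered subarray selection described above.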
Abstract:
Uninhabited aerial vehicles (UAVs) are a cutting-edge technology at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered to be one of the most important issues to be addressed, given its safety-critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm that is capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm.
The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts for target heading angle estimation. In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit the temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy related concepts. The filter design process is posed as a mini-max optimisation problem based on a joint RER cost criterion. 
We prove that this joint RER cost criterion bounds the conditional mean estimate (CME) performance of our MHMM filter, which in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple-model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operating conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios and capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400 m to 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning ahead of impact that approaches the 12.5-second response time recommended for human pilots. Furthermore, readily available graphics processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations.
Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue, currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple HMM filtering approach and a novel RER-based multiple filter design process. The utility of our multiple HMM filtering approach and RER concepts, however, extends beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
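At the core of the HMM filtering used in the track-before-detect stage is a forward (predict-correct) recursion over a grid of candidate target states. The following is a minimal 1-D sketch with made-up transition and measurement models; the real algorithm operates on 2-D image frames after morphological pre-processing:

```python
def hmm_forward(prior, trans, likelihoods):
    """One Bayes update per frame: predict with the transition model,
    correct with the per-cell measurement likelihood, renormalise."""
    belief = prior[:]
    n = len(belief)
    for like in likelihoods:
        # predict: target moves to a neighbouring cell or stays put
        pred = [sum(belief[j] * trans[j][i] for j in range(n))
                for i in range(n)]
        # correct: weight by how target-like each cell's pixel response is
        post = [p * l for p, l in zip(pred, like)]
        z = sum(post)
        belief = [p / z for p in post]
    return belief

n = 5
# random-walk transition: stay with prob 0.6, step left/right with 0.2 each
# (steps off the grid reflect back, so every row sums to 1)
trans = [[0.0] * n for _ in range(n)]
for j in range(n):
    trans[j][j] += 0.6
    trans[j][max(j - 1, 0)] += 0.2
    trans[j][min(j + 1, n - 1)] += 0.2
prior = [1.0 / n] * n
# hypothetical filtered frame responses: a dim target persists near cell 3
frames = [[0.1, 0.1, 0.2, 0.9, 0.2],
          [0.1, 0.1, 0.3, 0.8, 0.3],
          [0.1, 0.2, 0.2, 0.9, 0.2]]
belief = hmm_forward(prior, trans, frames)  # concentrates on cell 3
```

The track-before-detect benefit shows up here in miniature: no single frame response is decisive, but accumulating them through the dynamics concentrates the belief on the true cell, which is how dim, noise-obscured targets become detectable.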