300 results for Machine of 360°


Relevance:

30.00%

Publisher:

Abstract:

There is consistent evidence showing that driver behaviour contributes to crashes and near-miss incidents at railway level crossings (RLXs). The development of emerging Vehicle-to-Vehicle and Vehicle-to-Infrastructure technologies is a highly promising approach to improving RLX safety. To date, research has not comprehensively evaluated the potential effects of such technologies on driving behaviour at RLXs. This paper presents an ongoing research programme assessing the impacts of such new technologies on human factors and drivers' situational awareness at RLXs. Additionally, requirements for the design of such promising technologies and ways to display safety information to drivers were systematically reviewed. Finally, a methodology that comprehensively assesses the effects of in-vehicle and road-based interventions warning the driver of incoming trains at RLXs is discussed, with a focus on both benefits and potential negative behavioural adaptations. The methodology is designed for implementation in a driving simulator and covers compliance, control of the vehicle, distraction, mental workload and drivers' acceptance. This study has the potential to provide a broad understanding of the effects of deploying new in-vehicle and road-based technologies at RLXs, and hence to inform policy makers when planning safety improvements for RLXs.

Relevance:

30.00%

Publisher:

Abstract:

Improving energy efficiency has become increasingly important in data centers in recent years, as a way to curb their rapidly growing electricity consumption. The power dissipation of the physical servers is the root cause of the power usage of other systems, such as cooling systems. Many efforts have been made to make data centers more energy efficient. One of them is to minimize the total power consumption of the servers in a data center through virtual machine consolidation, which is implemented by virtual machine placement. The placement problem is often modeled as a bin packing problem. Due to the NP-hard nature of the problem, heuristic solutions such as the First Fit and Best Fit algorithms have often been used and generally give good results, but their performance leaves room for further improvement. In this paper we propose a Simulated Annealing (SA) based algorithm, which aims to improve further on any feasible placement. This is the first published attempt to use SA to solve the VM placement problem with the objective of minimizing power consumption. Experimental results show that the SA algorithm can generate better results, saving up to 25% more energy than First Fit Decreasing within an acceptable time frame.
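
As a minimal sketch of how such an SA-based placement search might be organised (the linear server power model, the single-VM neighbourhood move and the geometric cooling schedule are illustrative assumptions, not details taken from the paper):

```python
import math
import random

def host_power(utilisation):
    """Assumed linear power model: idle power plus a load-proportional part (watts)."""
    return 100.0 + 150.0 * utilisation

def total_power(assignment, demands, capacity):
    """Total power of a VM -> host assignment; unused hosts are assumed switched off."""
    loads = {}
    for vm, host in assignment.items():
        loads[host] = loads.get(host, 0.0) + demands[vm]
    if any(load > capacity for load in loads.values()):
        return float("inf")  # infeasible placement
    return sum(host_power(load / capacity) for load in loads.values())

def anneal(assignment, demands, capacity, hosts, t0=100.0, alpha=0.98, steps=10_000):
    """Improve a feasible placement (e.g. one produced by First Fit) by simulated annealing."""
    current, best, t = dict(assignment), dict(assignment), t0
    for _ in range(steps):
        candidate = dict(current)
        candidate[random.choice(list(demands))] = random.choice(hosts)  # move one VM
        delta = total_power(candidate, demands, capacity) - total_power(current, demands, capacity)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if total_power(current, demands, capacity) < total_power(best, demands, capacity):
                best = dict(current)
        t *= alpha  # geometric cooling
    return best
```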

Relevance:

30.00%

Publisher:

Abstract:

Server consolidation using virtualization technology has become an important means of improving the energy efficiency of data centers, and virtual machine placement is the key step in server consolidation. In the past few years, many approaches to virtual machine placement have been proposed. However, existing approaches consider only the energy consumed by the physical machines in a data center and ignore the energy consumed by the data center's communication network. The energy consumption of the communication network is not trivial, and should therefore also be considered in virtual machine placement in order to make the data center more energy efficient. In this paper, we propose a genetic algorithm for a new virtual machine placement problem that considers the energy consumption of both the servers and the communication network in the data center. Experimental results show that the genetic algorithm performs well on test problems of different kinds and scales up well as the problem size increases.
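
The following is a minimal sketch of a genetic algorithm whose fitness charges both server power and inter-host traffic; the power and network cost models and the particular GA operators are illustrative assumptions rather than the paper's formulation:

```python
import random

def fitness(chromosome, demands, capacity, traffic, link_cost):
    """Assumed cost: linear server power plus a penalty for traffic between communicating
    VMs placed on different hosts. chromosome[i] is the host assigned to VM i."""
    loads = {}
    for vm, host in enumerate(chromosome):
        loads[host] = loads.get(host, 0.0) + demands[vm]
    if any(load > capacity for load in loads.values()):
        return float("inf")
    server_energy = sum(100.0 + 150.0 * (load / capacity) for load in loads.values())
    network_energy = sum(link_cost * t for (a, b), t in traffic.items()
                         if chromosome[a] != chromosome[b])
    return server_energy + network_energy

def evolve(num_vms, hosts, demands, capacity, traffic, link_cost,
           pop_size=50, generations=200, mutation_rate=0.05):
    """Plain generational GA: elitism, parent pool from the better half,
    one-point crossover, random-reset mutation."""
    pop = [[random.choice(hosts) for _ in range(num_vms)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda c: fitness(c, demands, capacity, traffic, link_cost))
        next_pop = ranked[:2]                      # keep the two best unchanged
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(ranked[:pop_size // 2], 2)
            cut = random.randrange(1, num_vms)
            child = p1[:cut] + p2[cut:]
            child = [random.choice(hosts) if random.random() < mutation_rate else g for g in child]
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=lambda c: fitness(c, demands, capacity, traffic, link_cost))
```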

Relevance:

30.00%

Publisher:

Abstract:

The construction industry is an industry of major strategic importance, and its level of productivity has a significant effect on national economic growth. This study examines productivity indicators for the industry: labour productivity, capital productivity, labour competitiveness, capital intensity and added value content. The data are obtained from the published census/biannual surveys of the construction industry between 1999 and 2011 from the Department of Statistics of Malaysia. The results indicate an improvement in labour productivity but a decline in added value content. The civil engineering and special trades subsectors are more productive than the residential and non-residential subsectors in terms of labour productivity, because machine-for-labour substitution is a more important process in those subsectors. The capital-intensive character of civil engineering and special trade works enables these subsectors to achieve higher added value per labour cost, but not higher capital productivity. Added value per labour cost is lower in larger organizations despite their higher capital productivity; however, capital intensity is lower and unit labour cost higher in the larger organizations.
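
The abstract does not reproduce the formulas behind these indicators; they are conventionally defined along the following lines (the exact denominators used in the Malaysian surveys, for instance headcount versus labour cost, are not stated in the abstract):

\[
\text{labour productivity} = \frac{\text{added value}}{\text{labour input}}, \qquad
\text{capital productivity} = \frac{\text{added value}}{\text{fixed capital}},
\]
\[
\text{capital intensity} = \frac{\text{fixed capital}}{\text{labour input}}, \qquad
\text{added value content} = \frac{\text{added value}}{\text{gross output}}.
\]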

Relevance:

30.00%

Publisher:

Abstract:

Higher ambient temperatures will increase heat stress on workers, with impacts on their individual health and productivity. In particular, research has indicated that higher ambient temperatures can increase the prevalence of urolithiasis. This thesis examines the relationship between ambient heat exposure and urolithiasis among outdoor workers in a shipbuilding company in Guangzhou, China, and makes recommendations for minimising the possible impacts of high ambient temperatures on urolithiasis. A retrospective 1:4 matched case-control study was performed to investigate the association between ambient heat exposure and urolithiasis. Ambient heat exposure was characterised by total exposure time, type of work, department and length of service. The data were obtained from the affiliated hospital of the shipbuilding company under study for the period 2003 to 2010. A conditional logistic regression model was used to estimate the association between heat exposure and urolithiasis. This study found that the odds ratio (OR) of urolithiasis for total exposure time was 1.5 (95% confidence interval (CI): 1.2–1.8). Eight types of work in the shipbuilding company were investigated, including welder, assembler, production security and quality inspector, planing machine operator, spray painter, gas-cutting worker and indoor employee. Five out of eight types of work had significantly higher risks for urolithiasis, and four of the five mainly consisted of outdoor work, with ORs of 4.4 (95% CI: 1.7–11.4) for spray painter, 3.8 (95% CI: 1.9–7.2) for welder, 2.7 (95% CI: 1.4–5.0) for production security and quality inspector, and 2.2 (95% CI: 1.1–4.3) for assembler, compared to the reference group (indoor employee). Workers with abnormal blood pressure (hypertension) were more likely to have urolithiasis, with an OR of 1.6 (95% CI: 1.0–2.5) compared to those without hypertension. This study contributes to the understanding of the association between ambient heat exposure and urolithiasis among outdoor workers in China. In the context of global climate change, this is particularly important because rising temperatures are expected to increase the prevalence of urolithiasis among outdoor workers, putting greater pressure on productivity, occupational health management and health care systems. The results of this study have clear implications for public health policy and planning, as they indicate that more attention is required to protect outdoor workers from heat-related urolithiasis.
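
For reference, the quoted odds ratios and 95% confidence intervals are the standard transformations of the fitted conditional logistic regression coefficient and its standard error:

\[
\widehat{OR} = e^{\hat{\beta}}, \qquad
95\%\ \mathrm{CI} = \left( e^{\hat{\beta} - 1.96\,\widehat{SE}(\hat{\beta})},\; e^{\hat{\beta} + 1.96\,\widehat{SE}(\hat{\beta})} \right).
\]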

Relevance:

30.00%

Publisher:

Abstract:

A simple and effective down-sampling algorithm, the Peak-Hold-Down-Sample (PHDS) algorithm, is developed in this paper to enable rapid and efficient data transfer in remote condition monitoring applications. The algorithm is particularly useful for high-frequency Condition Monitoring (CM) techniques and for low-speed machine applications, since the combination of a high sampling frequency and a low rotating speed generally leads to large, unwieldy data sets. The effectiveness of the algorithm was evaluated and tested on four sets of data in the study. One set was extracted from the condition monitoring signal of a practical industrial application. Another was acquired from a low-speed machine test rig in the laboratory. The other two sets were computer-simulated bearing defect signals containing either a single bearing defect or multiple defects. The results show that the PHDS algorithm can substantially reduce the size of the data while preserving the critical bearing defect information for all the data sets used in this work, even at a large down-sample ratio (e.g., 500 times). In contrast, a conventional down-sampling technique from signal processing eliminates useful and critical information, such as bearing defect frequencies, when the same down-sample ratio is employed; it also introduces noise and artificial frequency components, limiting its usefulness for machine condition monitoring applications.
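
The abstract does not spell out the PHDS rule in detail; a natural reading of "peak hold" is to keep the peak of each block of samples rather than every N-th sample. A sketch under that assumption:

```python
import numpy as np

def peak_hold_downsample(signal, ratio):
    """Down-sample by keeping the sample of largest magnitude in each block of `ratio` samples.

    This is one plausible reading of the Peak-Hold-Down-Sample idea; the exact rule used in
    the paper may differ in detail.
    """
    signal = np.asarray(signal, dtype=float)
    n_blocks = len(signal) // ratio
    blocks = signal[:n_blocks * ratio].reshape(n_blocks, ratio)
    idx = np.argmax(np.abs(blocks), axis=1)        # position of the peak in each block
    return blocks[np.arange(n_blocks), idx]        # signed peak value, one per block

# Example: a 500x reduction of a 100 kHz record keeps the impulsive peaks that
# carry bearing-defect information, unlike plain decimation.
if __name__ == "__main__":
    t = np.arange(0, 1.0, 1.0 / 100_000)
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * (np.random.rand(t.size) < 0.001)
    print(len(x), "->", len(peak_hold_downsample(x, 500)))
```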

Relevance:

30.00%

Publisher:

Abstract:

Human papillomaviruses (HPV) are responsible for the most common human sexually transmitted viral infections, and high-risk types are responsible for causing cervical and other cancers. The minor capsid protein L2 of HPV plays important roles in virus entry into cells, localisation of viral components to the nucleus, DNA binding, and capsid formation and stability. It also elicits antibodies that are more cross-reactive between HPV types than those raised against the major capsid protein L1, making it an attractive potential target for new-generation, more broadly protective subunit vaccines against HPV infections. However, its low abundance in natural capsids (12-72 molecules per 360 copies of L1) limits its immunogenicity. This review explores the biological roles of the protein and prospects for its use in new vaccines. © 2009 Springer-Verlag.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND/OBJECTIVES: This paper reports on the evaluation of the implementation of the Smart Choices healthy food and drink supply strategy for Queensland schools (Smart Choices) across the whole school environment in state government primary and secondary schools in Queensland, Australia. SUBJECTS/METHODS: Three concurrent surveys, using a different method for each stakeholder group, targeted all 1275 school Principals, all 1258 Parent and Citizens' Associations (P&Cs) and a random sample of 526 tuckshop convenors throughout Queensland. Nine hundred and seventy-three Principals, 598 P&Cs and 513 tuckshop convenors participated, giving response rates of 78%, 48% and 98%, respectively. RESULTS: Nearly all Principals (97%), P&Cs (99%) and tuckshop convenors (97%) reported that their school tuckshop had implemented Smart Choices. The majority of Principals and P&Cs reported implementation, respectively, in: school breakfast programs (98 and 92%); vending machine stock (94 and 83%); vending machine advertising (85 and 84%); school events (87 and 88%); school sporting events (81 and 80%); sponsorship and advertising (93 and 84%); fundraising events (80 and 84%); and sporting clubs (73 and 75%). Implementation in curriculum activities, classroom rewards and class parties was reported by 97%, 86% and 75% of Principals, respectively. Respondents also reported very high levels of understanding of Smart Choices and engagement of the school community. CONCLUSIONS: The results demonstrate that food supply interventions to promote nutrition across all domains of the school environment can be implemented successfully.

Relevance:

30.00%

Publisher:

Abstract:

Exponential growth of genomic data over the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated and moved toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we explored the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcriptional regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features. The combination of location scores and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation concerned the relationship between transcription factors grouped by their regulatory role and the corresponding promoter strength. Our study of E.coli σ70 promoters found support at the 0.1 significance level for our hypothesis that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. Although the observations were specific to σ70, they nevertheless strongly encourage additional investigations when more experimentally confirmed data are available. Some of these features also proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. Of chief interest was the relationship observed between promoter strength and TFs with respect to their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters would have more transcription factors that enhance gene expression, whilst strong promoters would have more repressor binding sites. The t-tests assessed for E.coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggests support for our (alternative) hypothesis, albeit this trend may only be present for promoters where the corresponding TFBSs are either all repressors or all activators. Nevertheless, such suggestive results strongly encourage additional investigations when more experimentally confirmed data become available.
Much of the remainder of the thesis concerns a machine learning study of binding site prediction using the SVM and kernel methods, principally the spectrum kernel. Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as to the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in moderately conserved transcription factor binding sites, as represented by our E.coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in the false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving the inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance, in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation, due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures that we have labelled 'regulatory trees', inspired by the phylogenetic tree concept. Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system: the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied; this core set potentially identifies basic regulatory processes essential for survival.
Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y.pestis and P.aeruginosa respectively, but were not present in either E.coli or B.subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study. We identified a set of promising feature attributes, demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
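
As a purely illustrative sketch of the spectrum-kernel SVM idea referred to above (the toy sequences, the choice of k and the use of scikit-learn are assumptions for the example, not details taken from the thesis), a linear SVM over k-mer count vectors is equivalent to an SVM with the spectrum kernel:

```python
from collections import Counter
from itertools import product

import numpy as np
from sklearn.svm import SVC

def spectrum_features(seq, k=3, alphabet="ACGT"):
    """Map a DNA sequence to its k-mer spectrum (counts of every length-k substring);
    a linear kernel on these vectors is exactly the spectrum kernel."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return np.array([counts[km] for km in kmers], dtype=float)

# Toy data only: in the thesis the examples would be candidate binding sites
# (e.g. CRP site predictions) labelled by experimental evidence, and additional
# position and dissimilarity features would be appended to the spectrum vector.
positives = ["TGTGATCTAGATCACA", "TGTGACGTAGGTCACT"]
negatives = ["AAAAAAAAAAAAAAAA", "CGCGCGCGCGCGCGCG"]
X = np.array([spectrum_features(s) for s in positives + negatives])
y = np.array([1, 1, 0, 0])

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(np.array([spectrum_features("TGTGATTTAGATCACA")])))
```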

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an analytical model to study the effect of stiffening ribs on vibration transmission between two rectangular plates coupled at a right angle. Interesting wave attenuation patterns were observed when the stiffening rib was placed on either the source plate or the receiving plate. The results can be used to improve the understanding of vibration transmission and to inform the vibration control of more complex structures such as transformer tanks and machine covers.

Relevance:

30.00%

Publisher:

Abstract:

National flag carriers are struggling for survival, not only for classical reasons such as rising fuel costs, taxes and natural disasters, but largely because of their inability to adapt quickly to their competitive environment, namely the emergence of budget and Persian Gulf airlines. In this research, we investigate how airlines can transform their business models, via technological and strategic capabilities, into those of profitable and sustainable passenger experience companies. To formulate recommendations, we analyse customer sentiment expressed on social media to understand what people are saying about the airlines.
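
The abstract does not describe the sentiment analysis method used, so the following is only a generic, lexicon-based illustration of scoring social media posts about an airline; the word lists and posts are invented for the example:

```python
# Minimal lexicon-based sentiment scoring; the study's actual pipeline is not
# described in the abstract, so treat this purely as an illustration.
POSITIVE = {"great", "comfortable", "friendly", "on-time", "love"}
NEGATIVE = {"delayed", "lost", "rude", "cramped", "cancelled"}

def sentiment(post: str) -> int:
    """Positive score = more positive than negative words, and vice versa."""
    words = post.lower().replace(",", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "Flight was delayed again and the crew was rude",
    "Love the new cabins, friendly staff and on-time arrival",
]
for p in posts:
    print(sentiment(p), p)
```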

Relevance:

30.00%

Publisher:

Abstract:

Background: Continued aging of the population is expected to be accompanied by substantial increases in the number of people with dementia and in the number of health care staff required to care for them. Adequate knowledge about dementia among health care staff is important to the quality of care delivered to this vulnerable population. The purpose of this study was to assess knowledge about dementia across a range of health care staff in a regional health service district. Methods: Knowledge levels were investigated via the validated 30-item Alzheimer's Disease Knowledge Scale (ADKS). All health service district staff with e-mail access were invited to participate in an online survey. Knowledge levels were compared across demographic categories and professional groups, and by whether the respondent had any professional or personal experience caring for someone with dementia. The effect of dementia-specific training or education on knowledge level was also evaluated. Results: A diverse staff group (N = 360) in terms of age, professional group (nursing, medicine, allied health, support staff) and work setting, from a regional health service in Queensland, Australia, responded. Overall knowledge about Alzheimer's disease was generally moderate, with significant differences observed by professional group and by whether the respondent had any professional or personal experience caring for someone with dementia. Knowledge was lower for some of the specific content domains of the ADKS, especially those that were more medically oriented, such as 'risk factors' and 'course of the disease'. Knowledge was higher for those who had received dementia-specific training, such as attendance at a series of relevant workshops. Conclusions: Specific deficits in dementia knowledge were identified among Australian health care staff, and the results suggest that dementia-specific training might improve knowledge. As one piece of an overall plan to improve health care delivery to people with dementia, this research supports the role of introducing systematic dementia-specific education or training.

Relevance:

30.00%

Publisher:

Abstract:

The ability to accurately predict the remaining useful life of machine components is critical for continuous machine operation, and can also improve productivity and enhance system safety. In condition-based maintenance (CBM), maintenance is performed on the basis of information collected through condition monitoring and an assessment of machine health. Effective diagnostics and prognostics are important aspects of CBM, allowing maintenance engineers to schedule repairs and to acquire replacement components before the components actually fail. All machine components are subject to degradation processes in real environments, and they have certain failure characteristics that can be related to the operating conditions. This paper describes a technique for accurate assessment of the remnant life of machines, based on health state probability estimation and on historical knowledge embedded in closed-loop diagnostics and prognostics systems. The technique uses a Support Vector Machine (SVM) classifier as a tool for estimating the health state probability of machine degradation, which can affect the accuracy of prediction. To validate the feasibility of the proposed model, real-life historical data from the bearings of High Pressure Liquefied Natural Gas (HP-LNG) pumps were analysed and used to obtain the optimal prediction of remaining useful life. The results obtained were very encouraging and show that the proposed prognostic system based on health state probability estimation has the potential to be used as a tool for remnant life prediction in industrial machinery.
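
As a rough illustration of the health-state-probability idea (the two condition-monitoring features, the three discrete health states and the per-state remaining-life values below are invented for the example, not taken from the paper), an SVM with probability outputs can be combined with an assumed state-to-life mapping:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic training data: two condition-monitoring features per example
# (e.g. vibration RMS and kurtosis), labelled with a discrete health state.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(loc=m, scale=0.3, size=(30, 2)) for m in (1.0, 2.0, 3.0)])
y_train = np.repeat([0, 1, 2], 30)              # 0 = healthy, 1 = degraded, 2 = near failure

clf = SVC(probability=True).fit(X_train, y_train)

state_life = np.array([1000.0, 400.0, 50.0])    # assumed expected remaining hours per state
probs = clf.predict_proba(np.array([[2.1, 2.2]]))[0]
print("state probabilities:", probs)
print("expected remaining life (h):", probs @ state_life)
```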

Relevance:

30.00%

Publisher:

Abstract:

This chapter is a tutorial that teaches you how to design extended finite state machine (EFSM) test models for a system that you want to test. EFSM models are more powerful and expressive than simple finite state machine (FSM) models, and are one of the most commonly used styles of models for model-based testing, especially for embedded systems. There are many languages and notations in use for writing EFSM models, but in this tutorial we write our EFSM models in the familiar Java programming language. To generate tests from these EFSM models we use ModelJUnit, which is an open-source tool that supports several stochastic test generation algorithms, and we also show how to write your own model-based testing tool. We show how EFSM models can be used for unit testing and system testing of embedded systems, and for offline testing as well as online testing.
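
The chapter itself writes its models in Java and generates tests with ModelJUnit; as a language-neutral sketch of what an EFSM model plus a simple random-walk test generator looks like, here is a Python illustration built around an invented vending-machine system (the states, guards and actions are hypothetical, not taken from the tutorial):

```python
import random

class VendingMachineModel:
    """Toy EFSM model: the extended state is the current credit, and each action
    has a guard that decides whether it may fire in the current state."""

    def __init__(self):
        self.credit = 0

    def reset(self):
        self.credit = 0

    def insert_coin(self):
        if self.credit < 3:        # guard: machine accepts at most three coins
            self.credit += 1
            return True
        return False

    def vend(self):
        if self.credit >= 2:       # guard: vending costs two coins
            self.credit -= 2
            return True
        return False

def random_walk(model, steps=20, seed=1):
    """Minimal stochastic test generation: repeatedly try a randomly chosen action and
    record the ones whose guards held (ModelJUnit offers this kind of walk, and more,
    for Java models)."""
    random.seed(seed)
    model.reset()
    trace = []
    for _ in range(steps):
        action = random.choice([model.insert_coin, model.vend])
        if action():
            trace.append((action.__name__, model.credit))
    return trace

print(random_walk(VendingMachineModel()))
```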

Relevance:

30.00%

Publisher:

Abstract:

For interactive systems, recognition, reproduction, and generalization of observed motion data are crucial for successful interaction. In this paper, we present a novel method for the analysis of motion data that we refer to as K-OMM-trees. K-OMM-trees combine Ordered Means Models (OMMs), a model-based machine learning approach for time series, with a hierarchical analysis technique for very large data sets, the K-tree algorithm. The proposed K-OMM-trees enable unsupervised prototype extraction from motion time series data together with a hierarchical data representation. After introducing the algorithmic details, we apply the proposed method to a gesture data set that includes substantial inter-class variations. Results from our studies show that K-OMM-trees are able to substantially increase recognition performance and to learn an inherent data hierarchy with meaningful gesture abstractions.
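
The abstract gives no algorithmic detail beyond combining OMMs with K-trees, so the sketch below only illustrates the general shape of hierarchical prototype extraction from motion time series: gestures are resampled to a fixed length and recursively clustered, with each node's mean kept as a prototype. Plain k-means stands in here for both the Ordered Means Models and the K-tree construction, so this is an analogy to the approach, not the authors' method:

```python
import numpy as np

def resample(series, length=32):
    """Resample a variable-length 1-D gesture trajectory to a fixed length."""
    series = np.asarray(series, dtype=float)
    return np.interp(np.linspace(0, 1, length), np.linspace(0, 1, len(series)), series)

def kmeans(data, k, iters=20, seed=0):
    """Very small k-means used only to split a node's data into k groups."""
    rng = np.random.default_rng(seed)
    centres = data[rng.choice(len(data), size=k, replace=False)]
    labels = np.zeros(len(data), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((data[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = data[labels == j].mean(axis=0)
    return labels

def prototype_tree(data, k=2, depth=3):
    """Recursively split the data k ways, keeping each node's mean trajectory as its
    prototype; deeper nodes give increasingly specific gesture abstractions."""
    node = {"prototype": data.mean(axis=0), "size": len(data)}
    if depth > 0 and len(data) > k:
        labels = kmeans(data, k)
        node["children"] = [prototype_tree(data[labels == j], k, depth - 1)
                            for j in range(k) if np.any(labels == j)]
    return node

# Toy gesture set: 1-D trajectories of varying length and shape.
gestures = [np.sin(np.linspace(0, 2 * np.pi, n)) for n in (40, 55, 60)] + \
           [np.cos(np.linspace(0, 2 * np.pi, n)) for n in (80, 100, 120)]
tree = prototype_tree(np.array([resample(g) for g in gestures]))
print(tree["size"], len(tree.get("children", [])))
```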