501 results for personal data


Relevance: 20.00%

Abstract:

The story of the fall of the Berlin Wall was an aspect of the “imagination gap” that we had to wrestle with as journalists covering the collapse of the Eastern Bloc in Europe. It was scarcely possible to believe what you found yourself reporting, and that work became a two-track process. On one hand, a mass social movement was dictating the pace and direction of events; on the other, the institutional business of politics as usual, providing a framework for all the change that was happening, had to be managed and reported on. In later analyses we could see that the crisis in the Soviet Union led to the crisis over the Berlin Wall; from the fall of the Wall came Germany’s reunification, and with that, the formation of the European Union as it is today. The government of the Federal Republic of Germany convinced its neighbours that a reunited Germany, within an expanded EU, would be a very acceptable “European Germany” rather than the leader of a “German Europe”. It committed itself financially by supporting the new Euro currency. The former communist states of Eastern Europe demanded to join and expand the EU in order to remove themselves from the Soviet Union’s orbit, enjoy human rights, and share in Western prosperity. So today, following on from the events of 1989, the European Union is an amalgam of 27 member countries, with close to 500 million citizens, accounting for 30% of world Gross National Product.

Relevance: 20.00%

Abstract:

Hazard and reliability prediction of an engineering asset is one of the significant fields of research in Engineering Asset Health Management (EAHM). In real-life situations, where an engineering asset operates under dynamic operational and environmental conditions, its lifetime can be influenced and/or indicated by different factors that are termed covariates. The Explicit Hazard Model (EHM) is a covariate-based hazard model and a new approach to hazard prediction which explicitly incorporates both internal and external covariates into one model. EHM is therefore an appropriate model for analysing lifetime data in the presence of both internal and external covariates in the reliability field. This paper presents applications of the methodology introduced and illustrated in the theory part of this study. Here, the semi-parametric EHM is applied to a case study to predict the hazard and reliability of resistance elements on a Resistance Corrosion Sensor Board (RCSB).
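For readers unfamiliar with covariate-based hazard models, the sketch below shows a generic proportional-hazards style form: a baseline hazard scaled by a function of a covariate vector, with reliability obtained by integrating the hazard. It is only an illustrative stand-in, assuming a Weibull baseline and made-up parameters; it is not the EHM formulation itself.

```python
# Generic illustration of a covariate-based hazard model (a proportional-hazards
# style form, NOT the Explicit Hazard Model itself). Baseline and parameters are
# made-up illustrative values.
import numpy as np

def hazard(t, z, beta, shape=1.5, scale=1000.0):
    """Weibull baseline hazard scaled by exp(beta . z), where z is the covariate vector."""
    h0 = (shape / scale) * (t / scale) ** (shape - 1.0)
    return h0 * np.exp(np.dot(beta, z))

def reliability(t, z, beta, n=2000):
    """R(t) = exp(-integral_0^t h(u|z) du), via a simple Riemann sum."""
    u = np.linspace(0.0, t, n)
    du = u[1] - u[0]
    return np.exp(-np.sum(hazard(u, z, beta)) * du)

z = np.array([0.8, 1.2])      # e.g. one internal and one external covariate (illustrative)
beta = np.array([0.5, 0.3])   # assumed covariate coefficients
print(reliability(500.0, z, beta))
```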

Relevance: 20.00%

Abstract:

The need for large scale environmental monitoring to manage environmental change is well established. Ecologists have long used acoustics as a means of monitoring the environment in their field work, so the value of an acoustic environmental observatory is evident. However, the volume of data generated by such an observatory would quickly overwhelm even the most fervent scientist using traditional methods. In this paper we present our steps towards realising a complete acoustic environmental observatory, i.e. a cohesive set of hardware sensors, management utilities, and analytical tools required for large scale environmental monitoring. Concrete examples of these elements, which are in active use by ecological scientists, are also presented.

Relevance: 20.00%

Abstract:

This paper presents a model to estimate travel time using cumulative plots. Three different cases are considered: i) case-Det, using only detector data; ii) case-DetSig, using detector data and signal controller data; and iii) case-DetSigSFR, using detector data, signal controller data and the saturation flow rate. The performance of the model for different detection intervals is evaluated. It is observed that the detection interval is not critical if signal timings are available: comparable accuracy can be obtained from a larger detection interval with signal timings or from a shorter detection interval without them. The performance for case-DetSig and case-DetSigSFR is consistent, with accuracy generally above 95%, whereas case-Det is highly sensitive to the signal phases within the detection interval and its performance is uncertain if the detection interval is an integral multiple of the signal cycle.
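A minimal sketch of the underlying cumulative-plot idea (not the paper's case-Det/DetSig/DetSigSFR models): under first-in-first-out conditions, the travel time of the n-th vehicle is the horizontal distance between the upstream and downstream cumulative count curves. The detector counts and detection interval below are made-up illustrations.

```python
# Minimal cumulative-plot travel time sketch (illustrative counts, not the paper's model).
import numpy as np

interval = 30.0                                   # detection interval [s]
up_counts = np.array([5, 8, 6, 7, 9, 4])          # vehicles counted at the upstream detector
down_counts = np.array([0, 4, 7, 7, 8, 8])        # vehicles counted at the downstream detector

t = np.arange(1, len(up_counts) + 1) * interval   # end time of each detection interval
cum_up = np.cumsum(up_counts)                     # cumulative arrivals
cum_down = np.cumsum(down_counts)                 # cumulative departures

def travel_time(n):
    """Horizontal distance between the two cumulative curves at vehicle count n (FIFO assumed)."""
    t_up = np.interp(n, cum_up, t)                # time the n-th vehicle passes upstream
    t_down = np.interp(n, cum_down, t)            # time it passes downstream
    return t_down - t_up

print(travel_time(10))                            # e.g. estimated travel time of the 10th vehicle
```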

Relevance: 20.00%

Abstract:

This paper reports preliminary results from a study modeling the interplay between multitasking, cognitive coordination, and cognitive shifts during Web search. Study participants conducted three Web searches on personal information problems. Data collection techniques included pre- and post-search questionnaires, think-aloud protocols, Web search logs, observation, and post-search interviews. Key findings include: (1) users' Web searches included multitasking, cognitive shifting and cognitive coordination processes; (2) cognitive coordination is the hinge linking multitasking and cognitive shifting that enables Web search construction; (3) cognitive shift levels determine the process of cognitive coordination; and (4) cognitive coordination is an interplay of task, mechanism and strategy levels that underpins multitasking and task switching. An initial model depicts the interplay between multitasking, cognitive coordination, and cognitive shifts during Web search. Implications of the findings and further research are also discussed.

Relevance: 20.00%

Abstract:

Light Detection and Ranging (LIDAR) has great potential to assist vegetation management in power line corridors by providing more accurate geometric information about the power line assets and the vegetation along the corridors. However, the development of algorithms for the automatic processing of LIDAR point cloud data, in particular for feature extraction and classification of raw point cloud data, is still in its infancy. In this paper, we take advantage of LIDAR intensity and classify ground and non-ground points by statistically analyzing the skewness and kurtosis of the intensity data. The Hough transform is then employed to detect power lines from the filtered object points. The experimental results show the effectiveness of our methods and indicate that better results were obtained by using LIDAR intensity data than elevation data.
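The intensity-based filtering idea can be sketched as follows. This is a rough illustration, not the authors' algorithm: synthetic intensity data are trimmed iteratively until the skewness and kurtosis of the remaining distribution fall below assumed thresholds, and the removed high-intensity returns are treated as non-ground (object) points.

```python
# Rough sketch of skewness/kurtosis based intensity filtering (synthetic data and
# thresholds are illustrative assumptions, not the authors' algorithm).
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(1)
ground = rng.normal(40, 5, 5000)           # synthetic "ground" intensity returns
objects = rng.normal(90, 10, 500)          # synthetic "object" returns (e.g. wires, poles)
intensity = np.concatenate([ground, objects])

mask = np.ones(intensity.size, dtype=bool) # True = still treated as ground
while skew(intensity[mask]) > 0.5 or kurtosis(intensity[mask]) > 3.0:
    # peel off the brightest remaining return and re-test the distribution
    mask[np.argmax(np.where(mask, intensity, -np.inf))] = False

ground_pts = intensity[mask]               # candidate ground points
object_pts = intensity[~mask]              # candidate non-ground points, e.g. input to a
                                           # Hough transform for power-line detection
print(ground_pts.size, object_pts.size)
```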

Relevance: 20.00%

Abstract:

Fatigue and overwork are problems experienced by numerous employees across many industry sectors. Focusing on work-life balance offers a way to frame the ‘problem’ of long work hours and to address issues of working-time duration. Flexible work options that re-organise working time arrangements are key to developing an organisational response for delivering work-life balance, and usually involve changing the internal structure of work time. This study examines the effect of compressed long weekly working hours, and the consequent ‘long break’, on work-life balance. Using Spillover theory and Border theory, this research considers organisational and personal determinants of overwork and fatigue. It concludes that compressed long work hours with a long break provide better work-life balance, and that a long break allows employees to regain ‘personal time’ and recover from fatigue.

Relevance: 20.00%

Abstract:

We propose an efficient, low-complexity scheme for estimating and compensating clipping noise in OFDMA systems. Conventional clipping noise estimation schemes, which need all demodulated data symbols, may become infeasible in OFDMA systems, where a specific user may know only its own modulation scheme. The proposed scheme first uses the equalized output to identify a limited number of candidate clips, and then exploits the information on known subcarriers to reconstruct the clipped signal. Simulation results show that the proposed scheme can significantly improve system performance.
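The toy example below only illustrates the clipping-noise signal model that such schemes operate on, not the proposed estimation scheme itself: clipping the OFDM time-domain envelope creates a noise term that spreads over all subcarriers, while the number of clipped samples (the candidate clips) stays small. The subcarrier count and clipping level are illustrative assumptions.

```python
# Toy illustration of the clipping-noise signal model (not the proposed estimation scheme).
import numpy as np

N = 256                                           # number of subcarriers (illustrative)
rng = np.random.default_rng(0)
qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
X = rng.choice(qpsk, size=N)                      # frequency-domain data symbols
x = np.fft.ifft(X) * np.sqrt(N)                   # time-domain OFDM signal (approx. unit power)

A = 1.8                                           # clipping amplitude (illustrative)
over = np.abs(x) > A
x_clipped = np.where(over, A * x / np.abs(x), x)  # soft envelope limiter
d = x_clipped - x                                 # time-domain clipping noise

D = np.fft.fft(d) / np.sqrt(N)                    # clipping noise seen on every subcarrier
print("clipped samples (candidate clips):", np.count_nonzero(over))
print("mean per-subcarrier clipping-noise power:", np.mean(np.abs(D) ** 2))
```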

Relevance: 20.00%

Abstract:

The explosive growth of the World-Wide-Web and the emergence of ecommerce are the two major factors that have led to the development of recommender systems (Resnick and Varian, 1997). The main task of recommender systems is to learn from users and recommend items (e.g. information, products or books) that match the users’ personal preferences. Recommender systems have been an active research area for more than a decade, and many different techniques and systems with distinct strengths have been developed to generate better quality recommendations. One of the main factors that affects recommendation quality is the amount of information resources available to the recommender. The main feature of recommender systems is their ability to make personalised recommendations for different individuals; however, many ecommerce sites find it difficult to obtain sufficient knowledge about their users, so the recommendations they provide are often poor and not personalised. This information insufficiency problem is commonly referred to as the cold-start problem.

Most existing research on recommender systems focuses on developing techniques to better utilise the available information resources to achieve better recommendation quality. However, while the amount of available data and information remains insufficient, these techniques can provide only limited improvements to overall recommendation quality. In this thesis, a novel and intuitive approach towards improving recommendation quality and alleviating the cold-start problem is attempted: enriching the information resources. It can easily be observed that, when there is a sufficient information and knowledge base to support recommendation making, even the simplest recommender systems can outperform sophisticated ones with limited information resources. Two strategies are suggested in this thesis to achieve the proposed information enrichment for recommenders:

• The first strategy enriches information resources by considering other information or data facets. Specifically, a taxonomy-based recommender, the Hybrid Taxonomy Recommender (HTR), is presented. HTR exploits the relationship between users’ taxonomic preferences and item preferences, derived from the combination of widely available product taxonomic information and existing user rating data, and then utilises this taxonomic-preference-to-item-preference relation to generate high quality recommendations (see the generic sketch after this abstract).

• The second strategy enriches information resources by obtaining them from other parties. A distributed recommender framework, the Ecommerce-oriented Distributed Recommender System (EDRS), is proposed, which allows multiple recommenders from different parties (i.e. organisations or ecommerce sites) to share recommendations and information resources with each other in order to improve their recommendation quality.

Based on the results of the experiments conducted in this thesis, the proposed systems and techniques achieve substantial improvements both in making quality recommendations and in alleviating the cold-start problem.
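As a generic illustration of the first strategy (taxonomy-based information enrichment), the sketch below builds a user's taxonomic profile from the categories of items they rated and scores unseen items by those category preferences. The catalogue, categories and weighting rule are made-up; this is not the HTR algorithm itself.

```python
# Generic taxonomy-based scoring sketch (made-up data; not the HTR algorithm).
from collections import Counter

item_categories = {                     # item -> taxonomy categories (illustrative)
    "book_a": ["fiction", "mystery"],
    "book_b": ["fiction", "sci-fi"],
    "book_c": ["non-fiction", "history"],
    "book_d": ["fiction", "mystery"],
}
user_ratings = {"book_a": 5, "book_b": 4, "book_c": 2}   # one user's known ratings

# Taxonomic profile: weight each category by the ratings of the items carrying it.
profile = Counter()
for item, rating in user_ratings.items():
    for cat in item_categories[item]:
        profile[cat] += rating

def score(item):
    """Score an unseen item by the user's accumulated preference for its categories."""
    return sum(profile[c] for c in item_categories[item])

unseen = [i for i in item_categories if i not in user_ratings]
print(sorted(unseen, key=score, reverse=True))           # e.g. ['book_d']
```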

Relevance: 20.00%

Abstract:

A few studies have examined interactive effects between air pollution and temperature on health outcomes. This study examines whether temperature modified the effect of ozone on cardiovascular mortality in 95 large US cities. A nonparametric and a parametric regression model were separately used to explore the interactive effects of temperature and ozone on cardiovascular mortality from May to October, 1987-2000. A Bayesian meta-analysis was used to pool the estimates. Both models show that temperature enhanced the ozone effect on mortality in the northern region, but not obviously in the southern region. A 10-ppb increment in ozone was associated with 0.41% (95% posterior interval (PI): -0.19%, 0.93%), 0.27% (95% PI: -0.44%, 0.87%) and 1.68% (95% PI: 0.07%, 3.26%) increases in daily cardiovascular mortality at low, moderate and high levels of temperature, respectively. We conclude that temperature modified the effects of ozone, particularly in the northern region.
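As a worked example of how such figures are typically derived, a coefficient from a log-linear (Poisson) mortality model can be converted into a percent change per 10-ppb increment. The coefficient below is an illustrative value chosen to reproduce a 0.41% increase; it is not the paper's fitted estimate.

```python
# Generic conversion from a log-linear regression coefficient to a percent change
# per 10-ppb ozone increment (illustrative coefficient, not the paper's estimate).
import math

beta = 0.00041                      # assumed log-rate change per 1 ppb of ozone
increment = 10.0                    # ppb
pct_change = 100.0 * (math.exp(increment * beta) - 1.0)
print(f"{pct_change:.2f}% increase per {increment:.0f}-ppb ozone increment")
# -> about 0.41%, the order of magnitude reported above for low temperatures
```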

Relevance: 20.00%

Abstract:

Total deposition of petrol, diesel and environmental tobacco smoke (ETS) aerosols in the human respiratory tract under nasal breathing conditions was computed for 14 nonsmoking volunteers, considering the specific anatomical and respiratory parameters of each volunteer and the specific size distribution of each inhalation experiment. Theoretical predictions were 34.6% for petrol, 24.0% for diesel, and 18.5% for ETS particles. The predicted deposition values were consistently smaller than the measured data (41.4% for petrol, 29.6% for diesel, and 36.2% for ETS particles). This apparent discrepancy between the experimental data on total deposition and the modeling results may be reconciled by accounting for the non-spherical shape of the test aerosols through diameter-dependent dynamic shape factors, which capture the differences between mobility-equivalent and volume-equivalent or thermodynamic diameters. While the application of dynamic shape factors can explain the observed differences for petrol and diesel particles, additional mechanisms may be required for ETS particle deposition, such as size reduction upon inspiration through evaporation of volatile compounds and/or condensation-induced restructuring, and, possibly, electrical charge effects.

Relevance: 20.00%

Abstract:

Traffic congestion is an increasing problem with high financial, social and personal costs. These costs include psychological and physiological stress, aggression and fatigue caused by lengthy delays, and an increased likelihood of road crashes. Reliable and accurate traffic information is essential for the development of traffic control and management strategies. Traffic information is mostly gathered from in-road vehicle detectors such as induction loops, and the Traffic Message Channel (TMC) service is a popular service that wirelessly sends traffic information to drivers. Traffic probes have been used in many cities to increase the accuracy of traffic information. We propose a simulation to estimate the number of probe vehicles required to increase the accuracy of traffic information in Brisbane. A meso-level traffic simulator has been developed to identify the optimal number of probe vehicles required to achieve an acceptable level of traffic reporting accuracy. Our approach to determining the optimal number of probe vehicles needed to meet quality-of-service requirements is to run simulations with varying numbers of traffic probes. The simulated traffic represents Brisbane’s typical morning traffic, and the road maps used in the simulation are Brisbane’s TMC maps, complete with speed limits and traffic lights. Experimental results show that the optimal number of probe vehicles required to provide a useful supplement to TMC (induction loop) data lies between 0.5% and 2.5% of vehicles on the road. With fewer than 0.25% probes, little additional information is provided, while above 5% adding more probes has only a negligible effect on accuracy. Our findings are consistent with ongoing research on traffic probes and show the effectiveness of using probe vehicles to supplement induction loops for accurate and timely traffic information.
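The experimental design, sweeping probe penetration and measuring reporting error, can be sketched as follows. This is a deliberately crude placeholder (a single link with normally distributed speeds), not the meso-level Brisbane simulator, but it shows why accuracy gains flatten out as probe numbers grow.

```python
# Schematic of the probe-penetration sweep only (placeholder traffic model and
# error metric; not the Brisbane meso-level simulator).
import numpy as np

rng = np.random.default_rng(42)
n_vehicles = 20000
true_speeds = rng.normal(35, 8, n_vehicles)        # km/h on one link (illustrative)

for penetration in [0.0025, 0.005, 0.01, 0.025, 0.05]:
    n_probes = int(n_vehicles * penetration)
    errors = []
    for _ in range(200):                           # repeated random samples of probe vehicles
        probes = rng.choice(true_speeds, size=n_probes, replace=False)
        errors.append(abs(probes.mean() - true_speeds.mean()))
    print(f"{penetration:>6.2%} probes -> mean abs speed error {np.mean(errors):.2f} km/h")

# Error shrinks quickly at first and then flattens: adding ever more probes
# eventually buys little extra accuracy, which mirrors the qualitative finding above.
```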

Relevance: 20.00%

Abstract:

It is a major challenge to clearly identify the boundary between positive and negative streams. Several attempts have used negative feedback to address this challenge; however, there are two issues in using negative relevance feedback to improve the effectiveness of information filtering. The first is how to select constructive negative samples in order to reduce the space of negative documents. The second is how to decide which noisy extracted features should be updated based on the selected negative samples. This paper proposes a pattern-mining based approach to select some offenders from the negative documents, where an offender can be used to reduce the side effects of noisy features. It also classifies extracted features (i.e., terms) into three categories: positive specific terms, general terms, and negative specific terms. In this way, multiple revising strategies can be used to update the extracted features. An iterative learning algorithm is also proposed to implement this approach on RCV1, and extensive experiments show that the proposed approach achieves encouraging performance.
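A simplified sketch of the three-way term classification is shown below: terms are grouped as positive specific, general, or negative specific according to whether they occur only in positive documents, in both, or only in the selected negative documents. The documents and split rule are made-up and do not reproduce the paper's pattern-mining method.

```python
# Simplified sketch of the positive-specific / general / negative-specific split
# (made-up documents; not the paper's pattern-mining method).
positive_docs = [{"quantum", "computing", "qubit", "research"},
                 {"quantum", "algorithm", "research"}]
negative_docs = [{"quantum", "mechanics", "course", "homework"},
                 {"computing", "course", "tuition"}]

def doc_freq(term, docs):
    return sum(term in d for d in docs)

terms = set().union(*positive_docs, *negative_docs)
positive_specific, general, negative_specific = set(), set(), set()
for t in terms:
    in_pos = doc_freq(t, positive_docs) > 0
    in_neg = doc_freq(t, negative_docs) > 0
    if in_pos and in_neg:
        general.add(t)                   # appears on both sides of the boundary
    elif in_pos:
        positive_specific.add(t)         # candidates for boosting the user profile
    else:
        negative_specific.add(t)         # candidates for discounting (offender terms)

print(positive_specific, general, negative_specific, sep="\n")
```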

Relevance: 20.00%

Abstract:

The wide range of contributing factors and circumstances surrounding crashes on road curves suggests that no single intervention can prevent these crashes. This paper presents a novel methodology, based on data mining techniques, to identify contributing factors and the relationships between them, and to identify the contributing factors that influence the risk of a crash. Incident records from a large insurance company, described using free text, were analysed with rough set theory. Rough set theory was used to discover dependencies among the data and to reason with the vague, uncertain and imprecise information that characterised the insurance dataset. The results show that male drivers between 50 and 59 years old, driving during evening peak hours and involved in a collision, had the lowest crash risk, while drivers between 25 and 29 years old, driving from around midnight to 6 am in a new car, had the highest risk. The analysis of the most significant contributing factors on curves suggests that drivers with 25 to 42 years of driving experience who are driving a new vehicle have the highest crash cost risk, characterised by the vehicle running off the road and hitting a tree. This research complements existing statistically based approaches to analysing road crashes. Our data mining approach is supported by proven theory and will allow road safety practitioners to understand the dependencies between contributing factors and crash types, with a view to designing tailored countermeasures.
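To illustrate what "discovering dependencies" means in rough set terms, the sketch below computes a dependency degree: records are partitioned into equivalence classes by the condition attributes, and the dependency is the fraction of records lying in classes that map consistently to one decision value. The records are made-up examples, not the insurance data.

```python
# Minimal rough-set dependency-degree sketch (illustrative records, not the insurance data).
from collections import defaultdict

records = [  # (age_band, time_of_day, vehicle_age) -> crash_severity, made-up rows
    (("50-59", "evening_peak", "old"), "low"),
    (("50-59", "evening_peak", "old"), "low"),
    (("25-29", "night", "new"), "high"),
    (("25-29", "night", "new"), "high"),
    (("25-29", "night", "old"), "low"),
    (("25-29", "night", "old"), "high"),   # inconsistent equivalence class
]

classes = defaultdict(list)
for condition, decision in records:
    classes[condition].append(decision)    # equivalence classes under the condition attributes

consistent = sum(len(ds) for ds in classes.values() if len(set(ds)) == 1)
dependency = consistent / len(records)     # fraction of records in the positive region
print(f"decision depends on condition attributes to degree {dependency:.2f}")  # 0.67
```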