11 results for Data-communication

in University of Queensland eSpace - Australia


Relevance: 30.00%

Abstract:

This study explored the impact of downsizing on levels of uncertainty, coworker and management trust, and communicative effectiveness in a health care organization that was downsizing, over a 2-year period, from 660 to 350 staff members. Self-report data were obtained from employees who were staying (survivors), from employees who were being laid off (victims), and from employees with and without managerial responsibilities. Results indicated that downsizing had a similar impact on the amount of trust that survivors and victims had for management. However, victims reported lower levels of trust toward their colleagues than survivors did. Contrary to expectations, survivors and victims reported similar perceptions of job and organizational uncertainty and similar levels of information received about changes. Employees with no management responsibilities and middle managers both reported lower scores than senior managers did on all aspects of information received. Implications for practice and for the management of the communication process are discussed.

Relevance: 30.00%

Abstract:

As humans expand into space, communities will form. These have already begun to form in small ways, such as long-duration missions on the International Space Station and the space shuttle, and small-scale tourist excursions into space. Social, behavioural and communications data emerging from such existing communities in space suggest that the physically bounded, work-oriented and traditionally male-dominated nature of these extremely remote groups presents specific problems for the resident astronauts, for groups of them viewed as ‘communities’, and for their associated groups who remain on Earth, including mission controllers, management and astronauts’ families. Notionally feminine group attributes such as adaptive competence, social adaptation skills and social sensitivity will be crucial to the viability of space communities, and in the absence of gender equity, ‘staying in touch’ by means of ‘news from home’ becomes more important than ever. A template of news and media forms and technologies is suggested to service those needs and enhance the social viability of future terraforming activities.

Relevance: 30.00%

Abstract:

Networked information and communication technologies are rapidly advancing the capacities of governments to target and separately manage specific sub-populations, groups and individuals. Targeting uses data profiling to calculate the differential probabilities of outcomes associated with various personal characteristics. This knowledge is used to classify and sort people for differentiated levels of treatment. Targeting is often used to direct government resources efficiently and effectively to the most disadvantaged. Although it has many benefits, targeting raises several policy and ethical issues. This paper discusses these issues and the policy responses governments may take to maximise the benefits of targeting while ameliorating its negative aspects.
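As a minimal sketch of the profiling mechanism this abstract describes, a score computed from personal characteristics can be used to classify and sort people for differentiated levels of treatment. All field names, weights, and thresholds below are hypothetical illustrations, not anything taken from the paper.

```python
# Minimal sketch of profiling-based targeting (hypothetical fields and weights).
from dataclasses import dataclass

@dataclass
class Person:
    income: float      # annual income
    unemployed: bool
    dependents: int

def disadvantage_score(p: Person) -> float:
    """Toy need score: higher means more disadvantaged."""
    score = 1.0 if p.unemployed else 0.0
    score += 0.1 * p.dependents
    score += max(0.0, (30_000 - p.income) / 30_000)  # low income raises the score
    return score

def target(population: list[Person], capacity: int) -> list[Person]:
    """Sort by score and allocate a limited resource to the top `capacity` people."""
    return sorted(population, key=disadvantage_score, reverse=True)[:capacity]
```

In practice such scores would come from statistical models fitted to outcome data; the point here is only the classify-and-sort step that raises the policy issues the paper discusses.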

Relevance: 30.00%

Abstract:

This is the third article in a series entitled Astronauts as Audiences. In this article, we investigate the roles that situation awareness (SA), communications, and reality TV (including media communications) might play in the lives of astronauts in remote space communities. We examined primary data about astronauts’ living and working environments and applicable theories of SA, communications, and reality TV. We then surmised that the collective application of these roles might be a means of enhancing the lives of astronauts in remote space communities.

Relevance: 30.00%

Abstract:

Objective: An estimation of cut-off points for the diagnosis of diabetes mellitus (DM) based on individual risk factors. Methods: A subset of the 1991 Oman National Diabetes Survey is used, including all patients with a 2-h post glucose load >= 200 mg/dl (278 subjects) and a control group of 286 subjects. All subjects previously diagnosed as diabetic and all subjects with missing data values were excluded. The data set was analyzed with the SPSS Clementine data mining system. Decision tree learners (C5 and CART) and a method for mining association rules (the GRI algorithm) were used. Fasting plasma glucose (FPG), age, sex, family history of diabetes and body mass index (BMI) are the input risk factors (independent variables), while diabetes onset (the 2-h post glucose load >= 200 mg/dl) is the output (dependent variable). All three techniques were tested by cross-validation (89.8%). Results: Rules produced for diabetes diagnosis are: A- GRI algorithm: (1) FPG >= 108.9 mg/dl; (2) FPG >= 107.1 mg/dl and age > 39.5 years. B- CART decision tree: FPG >= 110.7 mg/dl. C- C5 decision tree learner: (1) FPG >= 95.5 mg/dl and age < 54 years; (2) FPG >= 106 mg/dl and BMI < 25.2 kg/m2; (3) FPG >= 106 mg/dl and FPG <= 133 mg/dl. The three techniques produced rules which cover a significant number of cases (82%), with confidence between 74 and 100%. Conclusion: Our approach supports the suggestion that the present cut-off value of fasting plasma glucose (126 mg/dl) for the diagnosis of diabetes mellitus needs revision, and that individual risk factors such as age and BMI should be considered in defining the new cut-off value.
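As a hedged sketch of the rule-extraction step this abstract describes: the study used the SPSS Clementine system, for which scikit-learn's CART implementation is substituted below, and the data are synthetic stand-ins, so the extracted thresholds are illustrative only.

```python
# Sketch: deriving FPG/age/BMI cut-off rules with a CART-style decision tree,
# analogous to the abstract's analysis (scikit-learn stands in for Clementine).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(110, 25, n),   # FPG (mg/dl)
    rng.uniform(20, 70, n),   # age (years)
    rng.normal(26, 4, n),     # BMI (kg/m2)
])
# Synthetic label standing in for "2-h post glucose load >= 200 mg/dl".
y = (X[:, 0] > 115).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# Each root-to-leaf path is a rule like "FPG >= 106 and BMI < 25.2".
print(export_text(tree, feature_names=["FPG", "age", "BMI"]))
```

Each root-to-leaf path of the printed tree has the same shape as the rules reported above: a conjunction of thresholds on FPG, age, and BMI.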

Relevance: 30.00%

Abstract:

In recent years, many real-time applications have needed to handle data streams. We consider distributed environments in which remote data sources continuously collect data from the real world or from other data sources and push the data to a central stream processor. In such environments, significant communication is induced by the transmission of rapid, high-volume and time-varying data streams, and computing overhead is also incurred at the central processor. In this paper, we develop a novel filter approach, called the DTFilter approach, for evaluating windowed distinct queries in such a distributed system. The DTFilter approach is based on a search algorithm over a data structure of two height-balanced trees, and it avoids transmitting duplicate items in data streams, thus saving substantial network resources. In addition, a theoretical analysis of the time spent performing the search and of the amount of memory needed is provided. Extensive experiments also show that the DTFilter approach achieves high performance.
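The paper's two-tree DTFilter structure is not reproduced here; the following is a simplified stand-in (a hash table plus an arrival queue instead of height-balanced trees) that illustrates the underlying idea: within the current window, only the first occurrence of each item is worth transmitting.

```python
# Simplified stand-in for the duplicate-suppression idea behind DTFilter:
# only the first occurrence of an item in the current window is forwarded.
from collections import deque

class WindowedDistinctFilter:
    def __init__(self, window_size: int):
        self.size = window_size
        self.window = deque()  # items in arrival order
        self.counts = {}       # item -> occurrences inside the window

    def push(self, item) -> bool:
        """Insert an item; return True if it should be transmitted upstream."""
        if len(self.window) == self.size:      # expire the oldest item
            old = self.window.popleft()
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]
        self.window.append(item)
        first_seen = item not in self.counts   # distinct within the window?
        self.counts[item] = self.counts.get(item, 0) + 1
        return first_seen
```

At a remote data source, `transmit = f.push(item)` would gate what is sent to the central processor, which is where the network savings come from.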

Relevance: 30.00%

Abstract:

Although managers consider accurate, timely, and relevant information critical to the quality of their decisions, evidence of large variations in data quality abounds. Over a period of twelve months, the action research project reported herein investigated and tracked data quality initiatives undertaken by the participating organisation. The investigation focused on two types of errors: transaction input errors and processing errors. Whenever the action research initiative identified non-trivial errors, the participating organisation introduced actions to correct the errors and to prevent similar errors in the future. Data quality metrics were taken quarterly to measure improvements resulting from the activities undertaken during the project. The results indicated that, for a mission-critical database to ensure and maintain data quality, commitment to continuous data quality improvement is necessary. Communication among all stakeholders is also required to ensure a common understanding of data quality improvement goals. The project found that further substantial improvements in data quality sometimes require structural changes within the organisation and to its information systems. The major goal of the study is to increase the level of data quality awareness within all organisations and to motivate them to examine the importance of achieving and maintaining high-quality data.
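As a minimal sketch of the kind of quarterly metric such a study might track (the record fields and validation checks below are hypothetical), a data quality measure can be computed as the share of records failing validation:

```python
# Sketch: a simple data-quality metric of the kind measured quarterly:
# the share of records exhibiting input or processing errors.
def error_rate(records: list[dict], validators) -> float:
    """records: list of dicts; validators: predicates that return True
    when a record passes a data-quality check."""
    flawed = sum(1 for r in records if not all(v(r) for v in validators))
    return flawed / len(records) if records else 0.0

# Example checks (hypothetical fields): non-empty customer id, positive amount.
validators = [
    lambda r: bool(r.get("customer_id")),
    lambda r: isinstance(r.get("amount"), (int, float)) and r["amount"] > 0,
]
```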

Relevance: 30.00%

Abstract:

Large amounts of information can be overwhelming and costly to process, especially when transmitting data over a network. A typical modern Geographical Information System (GIS) brings all types of data together based on the geographic component of the data and provides simple point-and-click query capabilities as well as complex analysis tools. Querying a Geographical Information System, however, can be prohibitively expensive due to the large amounts of data that may need to be processed. Since the use of GIS technology has grown dramatically in the past few years, there is now a greater need than ever to provide users with the fastest and least expensive query capabilities, especially since an estimated 80% of data stored in corporate databases has a geographical component. However, not every application requires the same high-quality data for its processing. In this paper we address the issues of reducing the cost and response time of GIS queries by pre-aggregating data, compromising on data accuracy and precision. We present computational issues in the generation of multi-level resolutions of spatial data and show that the problem of finding the best approximation for a given region and a real-valued function on this region, under a predictable error, is in general NP-complete.
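As a minimal sketch of the multi-level resolution idea (a fixed uniform-grid average pooling; the paper's actual problem of choosing a best approximation under an error bound is, as noted, NP-complete in general, so this fixed scheme is only illustrative):

```python
# Sketch: pre-aggregating a raster of spatial values into coarser levels
# by averaging 2x2 blocks, trading precision for cheaper queries/transfer.
import numpy as np

def coarsen(grid: np.ndarray, factor: int = 2) -> np.ndarray:
    """Average factor x factor blocks; grid dimensions must be divisible."""
    h, w = grid.shape
    return grid.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

levels = [np.random.rand(256, 256)]   # finest resolution (synthetic values)
while levels[-1].shape[0] > 1:        # build a pyramid of coarser resolutions
    levels.append(coarsen(levels[-1]))
# A query can then be answered from the coarsest level whose error is acceptable.
```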