94 results for Systems and Integrative Physiology
Abstract:
Work environments have previously been studied to identify the strategies, structures and processes which increase the likelihood of creativity, innovation and collaboration for productive workplaces. A number of perspectives have emerged which identify social and cognitive factors known to contribute to or to restrict innovation and collaboration. Recently, more attention has been given to designing physical environments to encourage processes relevant to innovation such as creativity (McCoy & Evans, 2002), knowledge sharing (Hemlin, Allwood & Martin, 2008) and collaboration (Bozeman & Corley, 2004). Some attention has been given specifically to research and development environments (Boutellier et al., 2008), but little integration of this research has occurred. In the context of the construction of new purpose-built premises which will bring together under one roof separate public sector agencies engaged in research and development in agriculture, natural resource systems and the environment, this paper examines the extant literature and develops initial propositions for research relevant to the transition, collaboration and performance of research and development in new organizational environments where traditional boundaries have been redrawn.
Abstract:
Process Control Systems (PCSs) or Supervisory Control and Data Acquisition (SCADA) systems have recently been added to the already wide collection of wireless sensor network applications. The PCS/SCADA environment is somewhat more amenable to the use of heavy cryptographic mechanisms such as public key cryptography than other sensor application environments. The sensor nodes in the environment, however, are still open to devastating attacks such as node capture, which makes designing a secure key management scheme challenging. In this paper, a key management scheme is proposed to defeat node capture attacks by offering both forward and backward secrecy. Our scheme overcomes the pitfalls from which Nilsson et al.'s scheme suffers, and is no more expensive than their scheme.
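A minimal sketch of the general idea behind forward-secure key evolution in such settings, assuming a simple one-way hash chain rather than the specific construction proposed in this paper or in Nilsson et al.'s scheme: the node and gateway both derive the next epoch's key with a one-way function, so a key extracted from a captured node cannot be rolled back to expose keys (and traffic) from earlier epochs. Protecting future epochs as well typically requires fresh randomness from the key server, which is omitted here.

```python
import hashlib

def evolve_key(key: bytes) -> bytes:
    """Derive the next session key with a one-way hash, so a captured
    current key cannot be inverted to recover earlier keys."""
    return hashlib.sha256(b"key-update" + key).digest()

# Example: node and gateway both evolve the shared key once per epoch.
k0 = bytes.fromhex("00" * 32)   # initial shared key (illustrative value only)
k1 = evolve_key(k0)
k2 = evolve_key(k1)
print(k1.hex() != k2.hex())     # the key changes every epoch
```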
Abstract:
This paper is the second in a pair that Lesh, English, and Fennewald will be presenting at ICME TSG 19 on Problem Solving in Mathematics Education. The first paper describes three shortcomings of past research on mathematical problem solving. The first shortcoming can be seen in the fact that knowledge has not accumulated – in fact, it has atrophied significantly during the past decade. Unsuccessful theories continue to be recycled and embellished. One reason for this is that researchers generally have failed to develop the research tools needed to reliably observe, document, and assess the development of the concepts and abilities that they claim to be important. The second shortcoming is that existing theories and research have failed to make it clear how concept development (or the development of basic skills) is related to the development of problem solving abilities – especially when attention shifts beyond the word problems found in school to the kinds of problems found outside of school, where the requisite skills and even the questions to be asked might not be known in advance. The third shortcoming has to do with inherent weaknesses in observational studies and teaching experiments – and the assumption that a single grand theory should be able to describe all of the conceptual systems, instructional systems, and assessment systems that are strongly molded and shaped by the same theoretical perspectives being used to develop them. Therefore, this paper describes theoretical perspectives and methodological tools that are proving effective in combating the preceding kinds of shortcomings. We refer to our theoretical framework as models & modeling perspectives (MMP) on problem solving (Lesh & Doerr, 2003), learning, and teaching. One of the main methodologies of MMP is called multi-tier design studies (MTD).
Abstract:
This chapter elucidates key ideas behind neurocomputational and ecological dynamics perspectives on understanding the organisation of action in complex neurobiological systems. The need to study the close link between neurobiological systems and their environments (particularly their sensory and movement subsystems and the surrounding energy sources) is advocated. It is proposed that degeneracy in complex neurobiological systems provides the basis for functional variability in the organisation of action. In such systems, processes of cognition and action facilitate the specific interactions of each performer with particular task and environmental constraints.
Abstract:
Over the past two decades, the quality assurance of higher education institutions has captured growing interest, as evidenced by the increasing number of national and transnational bodies engaged in this area. Yet as the first decade of the 21st century draws to a close, higher education systems, and thus the regimes designed to ensure their quality, are faced with significant complexity. Issues of accountability, authority and responsibility are paramount when responding to industry bodies, to globalisation and the transnational provision of higher education, and to the use of market mechanisms. In this paper we raise some of the challenges for quality assurance in higher education presented by this growing complexity through the question 'quality assurance in higher education: for whom and of what?', highlighting our concern that the centrality of accountability needs to expand to include authority and responsibility as part of quality assurance regimes for higher education.
Abstract:
3D Virtual Environments (VEs) are real; they exist as digital worlds with the advantage of having none of the constraints of the real world. As such they are the perfect training ground for design students, who can create, build and experiment with design solutions without the constraints of real world projects. This paper reports on an educational setting used to explore a model for using VEs such as Second Life (SL), developed by Linden Labs in California, as a collaborative environment for design education. A postgraduate landscape architecture learning environment within a collaborative design unit was developed to integrate this model, where the primary focus was the application of three-dimensional tools within design, not as a presentation tool but rather as a design tool. The focus of the unit and its aims and objectives are outlined before describing the use of SL in the unit. Attention is focused on the collaboration and learning experience before discussing the outcomes, student feedback, future projects using this model and the potential for further research. The outcome of this study aims to contribute to current research on teaching and learning design in interactive VEs. We present a case study of our first application of this model.
Abstract:
Crash risk is the statistical probability of a crash. Its assessment can be performed through ex post statistical analysis or in real time with on-vehicle systems. These systems can be cooperative. Cooperative Vehicle-Infrastructure Systems (CVIS) are a developing research avenue in the automotive industry worldwide. This paper provides a survey of existing CVIS and of methods for assessing crash risk with them. It describes the advantages of cooperative systems over non-cooperative systems. A sample of cooperative crash risk assessment systems is analysed to extract vulnerabilities according to three criteria: market penetration, over-reliance on GPS and broadcasting issues. It shows that cooperative risk assessment systems are still in their infancy and require further development to provide their full benefits to road users.
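As an illustration only (not one of the surveyed CVIS methods), real-time crash risk can be framed as a probability produced by a statistical model over vehicle state that cooperative vehicles share over the air; the feature names and weights below are invented placeholders, not calibrated values.

```python
import math

def crash_risk(speed_ms: float, gap_m: float, w=(-3.0, 0.08, -0.15)) -> float:
    """Illustrative logistic model: estimated risk rises with speed and
    falls with the gap to the lead vehicle. Weights are placeholders."""
    z = w[0] + w[1] * speed_ms + w[2] * gap_m
    return 1.0 / (1.0 + math.exp(-z))

print(round(crash_risk(30.0, 10.0), 3))   # high speed, short gap: higher risk
print(round(crash_risk(15.0, 40.0), 3))   # lower speed, long gap: lower risk
```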
Abstract:
The explosive growth of the World Wide Web and the emergence of ecommerce are the two major factors that have led to the development of recommender systems (Resnick and Varian, 1997). The main task of recommender systems is to learn from users and recommend items (e.g. information, products or books) that match the users’ personal preferences. Recommender systems have been an active research area for more than a decade. Many different techniques and systems with distinct strengths have been developed to generate better quality recommendations. One of the main factors that affect recommendation quality is the amount of information resources available to the recommender. The main feature of recommender systems is their ability to make personalised recommendations for different individuals. However, many ecommerce sites find it difficult to obtain sufficient knowledge about their users; hence, the recommendations they provide are often poor and not personalised. This information insufficiency problem is commonly referred to as the cold-start problem. Most existing research on recommender systems focuses on developing techniques to better utilise the available information resources to achieve better recommendation quality. However, while the amount of available data and information remains insufficient, these techniques can only provide limited improvements to overall recommendation quality. In this thesis, a novel and intuitive approach towards improving recommendation quality and alleviating the cold-start problem is attempted: enriching the information resources. It can easily be observed that when there is a sufficient information and knowledge base to support recommendation making, even the simplest recommender systems can outperform sophisticated ones with limited information resources. Two possible strategies are suggested in this thesis to achieve the proposed information enrichment for recommenders:
• The first strategy suggests that information resources can be enriched by considering other information or data facets. Specifically, a taxonomy-based recommender, the Hybrid Taxonomy Recommender (HTR), is presented in this thesis. HTR exploits the relationship between users’ taxonomic preferences and item preferences, derived from combining widely available product taxonomy information with existing user rating data, and then utilises this taxonomic-preference-to-item-preference relation to generate high quality recommendations.
• The second strategy suggests that information resources can be enriched simply by obtaining information resources from other parties. In this thesis, a distributed recommender framework, the Ecommerce-oriented Distributed Recommender System (EDRS), is proposed. EDRS allows multiple recommenders from different parties (i.e. organisations or ecommerce sites) to share recommendations and information resources with each other in order to improve their recommendation quality.
Based on the results obtained from the experiments conducted in this thesis, the proposed systems and techniques achieve substantial improvements both in making quality recommendations and in alleviating the cold-start problem.
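The following sketch illustrates the general intuition behind a taxonomy-driven recommender such as HTR, not the thesis's actual algorithm: a user's item ratings are aggregated into category-level preferences over an assumed product taxonomy, and unrated items are then scored via the categories they belong to, which helps when item-level rating data are sparse.

```python
from collections import defaultdict

# Illustrative data: item -> taxonomy categories, plus one user's ratings (1-5).
item_categories = {
    "book_a": {"fiction", "fantasy"},
    "book_b": {"fiction", "mystery"},
    "book_c": {"nonfiction", "history"},
    "book_d": {"fantasy", "history"},
}
ratings = {"book_a": 5, "book_b": 4, "book_c": 2}

# Step 1: aggregate item ratings into category-level preferences.
cat_scores = defaultdict(list)
for item, score in ratings.items():
    for cat in item_categories[item]:
        cat_scores[cat].append(score)
cat_pref = {c: sum(v) / len(v) for c, v in cat_scores.items()}

# Step 2: score unrated items by the mean preference of their categories.
def score(item: str) -> float:
    known = [cat_pref[c] for c in item_categories[item] if c in cat_pref]
    return sum(known) / len(known) if known else 0.0

unseen = [i for i in item_categories if i not in ratings]
print(sorted(unseen, key=score, reverse=True))   # e.g. ['book_d']
```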
Abstract:
Organizations invest heavily in Customer Relationship Management (CRM) and Supply Chain Management (SCM) systems, and their related infrastructure, presumably expecting positive benefits to the organization. Assessing the benefits of such applications is an important aspect of managing such systems. Considering the salient differences between CRM and SCM applications and other intra-organizational applications, existing Information Systems benefits measurement models and frameworks are ill-suited to gauge the benefits of inter-organizational systems. This paper reports the preliminary findings of a measurement model developed to assess the benefits of CRM and SCM applications. The preliminary model, which reflects the characteristics of Analytic Theory, is derived from a review of 55 academic studies and 44 papers from practice. Six hundred and six identified benefits were then synthesized into 74 non-overlapping benefits, arranged under six dimensions.
Abstract:
The book within which this chapter appears is published as a research reference book (not a coursework textbook) on Management Information Systems (MIS) for seniors or graduate students in Chinese universities. It is hoped that this chapter, along with the others, will be helpful to MIS scholars and PhD/Masters research students in China who seek understanding of several central Information Systems (IS) research topics and related issues. The subject of this chapter – ‘Evaluating Information Systems’ – is broad, and cannot be addressed in its entirety in any depth within a single book chapter. The chapter proceeds from the truism that organizations have limited resources and those resources need to be invested in a way that provides greatest benefit to the organization. IT expenditure represents a substantial portion of any organization’s investment budget, and IT-related innovations have broad organizational impacts. Evaluation of the impact of this major investment is essential to justify the expenditure both pre- and post-investment. Evaluation is also important to prioritize possible improvements. The chapter (and most of the literature reviewed herein) admittedly assumes a black-box view of IS/IT, emphasizing measures of its consequences (e.g. for organizational performance or the economy) or perceptions of its quality from a user perspective. This reflects the MIS emphasis – a ‘management’ emphasis rather than a software engineering emphasis, where a software engineering emphasis might be on the technical characteristics and technical performance. Though a black-box approach limits the diagnostic specificity of findings from a technical perspective, it offers many benefits. In addition to superior management information, these benefits may include economy of measurement and comparability of findings (e.g. see Part 4 on Benchmarking IS). The chapter does not purport to be a comprehensive treatment of the relevant literature. It does, however, reflect many of the more influential works, and a representative range of important writings in the area. The author has been somewhat opportunistic in Part 2, employing a single journal – the Journal of Strategic Information Systems – to derive a classification of literature in the broader domain. Nonetheless, the arguments for this approach are believed to be sound, and the value from this exercise real. The chapter drills down from the general to the specific. It commences with a high-level overview of the general topic area. This is achieved in two parts: Part 1 addresses existing research in the more comprehensive IS research outlets (e.g. MISQ, JAIS, ISR, JMIS, ICIS), and Part 2 addresses existing research in a key specialist outlet (i.e. the Journal of Strategic Information Systems). Subsequently, in Part 3, the chapter narrows to focus on the sub-topic ‘Information Systems Success Measurement’, then drills deeper to become even more focused in Part 4 on ‘Benchmarking Information Systems’. In other words, the chapter drills down from Parts 1 & 2 (the Value of IS), to Part 3 (Measuring Information Systems Success), to Part 4 (Benchmarking IS). While the commencing Parts (1 & 2) are by definition broadly relevant to the chapter topic, the subsequent, more focused Parts (3 and 4) admittedly reflect the author’s more specific interests. Thus, the three chapter foci – value of IS, measuring IS success, and benchmarking IS – are not mutually exclusive; rather, each subsequent focus is in most respects a sub-set of the former.
Parts 1 & 2, ‘the Value of IS’, take a broad view, with much emphasis on the business value of IS, or the relationship between information technology and organizational performance. Part 3, ‘Information Systems Success Measurement’, focuses more specifically on measures and constructs employed in empirical research into the drivers of IS success (ISS). DeLone and McLean (1992) inventoried and rationalized disparate prior measures of ISS into 6 constructs – System Quality, Information Quality, Individual Impact, Organizational Impact, Satisfaction and Use (later suggesting a 7th construct – Service Quality (DeLone and McLean 2003)). These 6 constructs have been used extensively, individually or in some combination, as the dependent variable in research seeking to better understand the important antecedents or drivers of IS success. Part 3 reviews this body of work. Part 4, ‘Benchmarking Information Systems’, drills deeper again, focusing more specifically on a measure of the IS that can be used as a ‘benchmark’. This section consolidates and extends the work of the author and his colleagues to derive a robust, validated IS-Impact measurement model for benchmarking contemporary Information Systems (IS). Though IS-Impact, like ISS, has potential value in empirical, causal research, its design and validation have emphasized its role and value as a comparator: a measure that is simple, robust and generalizable, and which yields results that are as far as possible comparable across time, across stakeholders, and across differing systems and system contexts.
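For illustration, a measurement model of this kind ultimately reduces survey responses to construct-level scores that can be compared across systems or over time. The sketch below uses invented Likert-scale responses and a naive mean-of-means aggregation; it is not the validated IS-Impact instrument or its scoring procedure.

```python
import statistics

# Illustrative 1-7 Likert responses grouped by (a subset of) the
# DeLone & McLean constructs; values are invented for the example.
responses = {
    "System Quality":        [6, 5, 6],
    "Information Quality":   [5, 5, 4],
    "Individual Impact":     [6, 6, 5],
    "Organizational Impact": [4, 5, 5],
}

# A simple benchmark-style summary: construct means and their overall mean.
construct_scores = {c: statistics.mean(items) for c, items in responses.items()}
overall = statistics.mean(construct_scores.values())
print(construct_scores, round(overall, 2))
```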
Abstract:
Patients with severe back deformities can greatly benefit from customized medical seating. Customized medical seating is made by taking measurements of each individual patient and making the seat to these measurements. The current measuring systems employed by the industry are limited to use in clinics, which are generally located only in major population centres. Patients living in remote areas are severely affected by this, as the clinics can be far away and inaccessible for them. Providing a customized medical seating service therefore requires a new measurement system which is portable, so that the system can be transported to patients in remote areas. The requirements for a new measurement system were analysed to suit the needs of Equipment Technology Services (ETS) of the Cerebral Palsy League of Queensland. A design for the new measurement system was conceptualised by reviewing systems and technologies in various scientific disciplines, and the design was finalised by optimizing each individual component. The final approach was validated by measuring difficult models and repeating the process to check for process variances. This system has now been adopted for clinical evaluation by ETS. Suggestions have been made for further improvements in this new measurement approach.
Abstract:
We describe the design and evaluation of a platform for networks of cameras in low-bandwidth, low-power wireless sensor networks (WSNs). In our work to date we have investigated two different DSP hardware/software platforms for undertaking the tasks of compression and object detection and tracking. We compare the relative merits of each of the hardware and software platforms in terms of both performance and energy consumption. Finally, we discuss what we believe are the ongoing research questions for image processing in WSNs.
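A minimal sketch of the kind of on-node processing such camera platforms perform, assuming simple frame differencing rather than the paper's actual DSP compression and tracking pipelines: a frame is flagged for transmission only when enough pixels change, trading detection fidelity for bandwidth and energy.

```python
import numpy as np

def detect_motion(prev: np.ndarray, curr: np.ndarray, thresh: int = 25) -> bool:
    """Flag a frame for transmission only if enough pixels changed,
    saving radio bandwidth and energy on the sensor node."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    changed = np.count_nonzero(diff > thresh)
    return changed > 0.01 * diff.size     # more than 1% of pixels changed

# Example with synthetic 8-bit grayscale frames.
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:80, 60:100] = 200                 # a bright "object" appears
print(detect_motion(prev, curr))          # True
```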
Abstract:
Globality generates increasingly diffuse networks of human and non-human innovators, carriers and icons of exotic, polyethnic cosmopolitan difference; and this diffusion is increasingly hard to ignore or police (Latour 1993). In fact, such global networks of material-symbolic exchange can frequently have the unintended consequence of promoting status systems and cultural relationships founded on uncosmopolitan values such as cultural appropriation and status-based social exclusion. Moreover, this material-symbolic engagement with cosmopolitan difference could also be rather mundane, engaged in routinely without any great reflexive consciousness or capacity to destabilise current relations of cultural power, or interpreted unproblematically as just one component of a person’s social environment. Indeed, Beck’s (2006) argument is that cosmopolitanism, in an age of global risk, is being forced upon us unwillingly, so there should be no surprise if it is a bitter pill for some to swallow. Within these emergent cosmopolitan networks, which we call ‘cosmoscapes’, there is no certainty about the development of ethical or behavioural stances consistent with claims foundational to the current literature on cosmopolitanism. Reviewing historical and contemporary studies of globality and its dynamic generative capacity, this paper considers such literatures in the context of studies of cultural consumption and social status. When one positions these diverse bodies of literature against one another, it becomes clear that the possibility of widespread cosmopolitan cultural formations is largely unpromising.
Abstract:
Effective management of groundwater requires stakeholders to have a realistic conceptual understanding of groundwater systems and hydrological processes. However, groundwater data can be complex, confusing and often difficult for people to comprehend. A powerful way to communicate understanding of groundwater processes, complex subsurface geology and their relationships is through the use of visualisation techniques to create 3D conceptual groundwater models. In addition, the ability to animate, interrogate and interact with 3D models can encourage a higher level of understanding than static images alone. While there are increasing numbers of software tools available for developing and visualising groundwater conceptual models, these packages are often very expensive and, due to their complexity, are not readily accessible to most people. The Groundwater Visualisation System (GVS) is a software framework that can be used to develop groundwater visualisation tools aimed specifically at non-technical computer users and those who are not groundwater domain experts. A primary aim of GVS is to provide management support for agencies and to enhance community understanding.
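GVS itself is a purpose-built framework, but the kind of 3D conceptual model it communicates can be sketched with generic tools. The example below plots an invented ground surface and water table as two 3D surfaces using matplotlib, purely to illustrate the idea of a layered, interactive conceptual groundwater model; it does not represent GVS or any real dataset.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic ground-surface and water-table elevations over a 1 km grid.
x, y = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
ground = 60 + 10 * np.sin(x / 300) + 5 * np.cos(y / 200)
water_table = ground - 8 - 3 * np.sin(y / 250)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(x, y, ground, alpha=0.4)        # land surface
ax.plot_surface(x, y, water_table, alpha=0.6)   # water table beneath it
ax.set_xlabel("Easting (m)")
ax.set_ylabel("Northing (m)")
ax.set_zlabel("Elevation (m)")
plt.show()
```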