136 results for 100602 Input Output and Data Devices
Abstract:
Although there are many potential new insights to be gained through advancing research on the clients of male sex workers, significant social, ethical and methodological challenges to accessing this population exist. This research case study explores our attempts to recruit a population that does not typically form a cohesive or coherent 'community' and often avoids self-identifying to mitigate the stigma attached to buying sex. We used an arm's-length recruitment campaign that focussed on directing potential participants to our study website, which could in turn lead them to participate in an anonymous telephone interview. Barriers to reaching male sex-work clients, however, demanded the evolution of our recruitment strategy. New technologies are part of the solution to accessing a hard-to-reach population, but they only work if researchers engage responsively. We also show how we conducted an in-depth interview with a client and discuss the value of using secondary data.
Abstract:
Purpose: Skin temperature assessment has historically been undertaken with conductive devices affixed to the skin. With the development of technology, infrared devices are increasingly utilised in the measurement of skin temperature. Therefore, our purpose was to evaluate the agreement between four skin temperature devices at rest, during exercise in the heat, and during recovery. Methods: Mean skin temperature (T̅sk) was assessed in thirty healthy males during 30 min rest (24.0 ± 1.2°C, 56 ± 8%), 30 min cycling in the heat (38.0 ± 0.5°C, 41 ± 2%), and 45 min recovery (24.0 ± 1.3°C, 56 ± 9%). T̅sk was assessed at four sites using two conductive devices (thermistors, iButtons) and two infrared devices (infrared thermometer, infrared camera). Results: Bland–Altman plots demonstrated mean bias ± limits of agreement between the thermistors and iButtons as follows (rest, exercise, recovery): -0.01 ± 0.04, 0.26 ± 0.85, -0.37 ± 0.98°C; thermistors and infrared thermometer: 0.34 ± 0.44, -0.44 ± 1.23, -1.04 ± 1.75°C; thermistors and infrared camera (rest, recovery): 0.83 ± 0.77, 1.88 ± 1.87°C. Pairwise comparisons of T̅sk found significant differences (p < 0.05) between thermistors and both infrared devices during resting conditions, and significant differences between the thermistors and all other devices tested during exercise in the heat and recovery. Conclusions: These results indicate poor agreement between conductive and infrared devices at rest, during exercise in the heat, and during subsequent recovery. Infrared devices may not be suitable for monitoring T̅sk in the presence of, or following, metabolically and environmentally induced heat stress.
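The agreement statistics reported above follow the standard Bland–Altman approach: the mean of the paired differences gives the bias, and bias ± 1.96 standard deviations gives the 95% limits of agreement. A minimal Python sketch of that computation is shown below; the temperature values are invented for illustration and are not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Return mean bias and 95% limits of agreement between two
    paired measurement series (e.g., thermistor vs. infrared Tsk)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b                   # paired differences
    bias = diff.mean()             # mean bias
    half = 1.96 * diff.std(ddof=1) # half-width of the 95% limits of agreement
    return bias, bias - half, bias + half

# Illustrative values only -- not the study's data.
thermistor = [33.1, 33.4, 33.0, 33.6, 33.2]
infrared   = [32.8, 33.0, 32.5, 33.1, 32.7]
bias, lower, upper = bland_altman(thermistor, infrared)
print(f"bias {bias:.2f} °C, 95% LoA [{lower:.2f}, {upper:.2f}] °C")
```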
Abstract:
Developing and maintaining a successful institutional repository for research publications requires a considerable investment by the institution. Most of the money is spent on developing the skill-sets of existing staff or hiring new staff with the necessary skills. The return on this investment can be magnified by using this valuable infrastructure to curate collections of other materials such as learning objects, student work, conference proceedings and institutional or local community heritage materials. When Queensland University of Technology (QUT) implemented its repository for research publications (QUT ePrints) over 11 years ago, it was one of the first institutional repositories to be established in Australia. Currently, the repository holds over 29,000 open access research publications, and the cumulative total number of full-text downloads for these documents now exceeds 16 million. The full-text deposit rate for recently published peer-reviewed papers (currently over 74%) shows how well the repository has been embraced by QUT researchers. The success of QUT ePrints has resulted in requests to accommodate a plethora of materials which are 'out of scope' for this repository. QUT Library saw this as an opportunity to use its repository infrastructure (software, technical know-how and policies) to develop and implement a metadata repository for its research datasets (QUT Research Data Finder), a repository for research-related software (QUT Software Finder) and to curate a number of digital collections of institutional and local community heritage materials (QUT Digital Collections). This poster describes the repositories and digital collections curated by QUT Library and outlines the value delivered to the institution, and the wider community, by these initiatives.
Abstract:
This article examines a series of controversies within the life sciences over data sharing. Part 1 focuses upon the agricultural biotechnology firm Syngenta publishing data on the rice genome in the journal Science, and considers proposals to reform scientific publishing and funding to encourage data sharing. Part 2 examines the relationship between intellectual property rights and scientific publishing, in particular copyright protection of databases, and evaluates the declaration of the Human Genome Organisation that genomic databases should be global public goods. Part 3 looks at varying opinions on the information function of patent law, and then considers the proposals of Patrinos and Drell to provide incentives for private corporations to release data into the public domain.
Abstract:
In this paper, the time dispersion parameters obtained from a set of channel measurements conducted in various environments typical of multiuser Infostation application scenarios are presented. The measurement procedure takes into account practical scenarios typical of the positions and movements of users in the particular Infostation network. To quantify how much data users can download over a given time and at a given mobile speed, a data transfer analysis for multiband orthogonal frequency division multiplexing (MB-OFDM) is presented. As expected, the rough estimate of simultaneous data transfer in a multiuser Infostation scenario indicates that the percentage of download depends on the data size, the number and speed of the users, and the elapsed time.
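As a back-of-the-envelope illustration of the download analysis described above, the sketch below estimates what fraction of a file a user can retrieve while passing through an Infostation's coverage zone. The coverage radius, file size and the 480 Mbit/s MB-OFDM peak rate are illustrative assumptions, not the paper's measured figures.

```python
# Rough estimate of how much of a file a user downloads while traversing
# an Infostation's coverage zone. All parameter values are illustrative.
def fraction_downloaded(file_size_mb, rate_mbps, coverage_m, speed_kmh):
    dwell_s = coverage_m / (speed_kmh / 3.6)   # time spent inside coverage
    downloaded_mb = (rate_mbps / 8) * dwell_s  # Mbit/s -> MB/s
    return min(1.0, downloaded_mb / file_size_mb)

for speed in (5, 30, 60):  # walking, urban driving, highway (km/h)
    frac = fraction_downloaded(file_size_mb=700, rate_mbps=480,
                               coverage_m=50, speed_kmh=speed)
    print(f"{speed:>3} km/h: {frac:.0%} of a 700 MB file")
```

The dependence the abstract notes falls straight out of this relation: faster users have shorter dwell times, and larger files or more simultaneous users (sharing the rate) reduce the percentage downloaded per pass.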
Abstract:
Developing innovative library services requires a real-world understanding of faculty members' desired curricular goals. This study aimed to develop a comprehensive and deeper understanding of Purdue's nutrition science and political science faculties' expectations for student learning related to information and data information literacies. Course syllabi were examined using grounded theory techniques, which allowed us to identify how faculty were addressing information and data information literacies in their courses and to understand the interconnectedness of these literacies with other departmental intentions for student learning, such as developing a professional identity or learning to conduct original research. The holistic understanding developed through this research provides the necessary information for designing and suggesting information literacy and data information literacy services to departmental faculty in ways supportive of curricular learning outcomes.
Abstract:
Sensor networks for environmental monitoring present enormous benefits to the community and society as a whole. Currently there is a need for low-cost, compact, solar-powered sensors suitable for deployment in rural areas. The purpose of this research is to develop a ground-based wireless sensor network together with data collection by unmanned aerial vehicles (UAVs). The ground-based sensor system is capable of measuring environmental data such as temperature or air quality using cost-effective, low-power sensors. Each sensor is configured so that its data is stored on an ATMega16 microcontroller, which can communicate with a UAV flying overhead using UAV communication protocols. The data is then either sent to the ground in real time or stored on the UAV using a microcontroller until it lands or is close enough to enable transmission of the data to the ground station.
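A minimal sketch of the store-and-forward idea described above: readings accumulate in the sensor node's buffer and are handed off only when a UAV comes within radio range. The class names, range and temperature values are invented for illustration; the real system runs on the ATMega16 with a UAV-specific link protocol.

```python
import random

COMM_RANGE_M = 100  # assumed radio range; illustrative only

class GroundSensor:
    def __init__(self):
        self.buffer = []  # readings held until a UAV is reachable

    def sample(self):
        # e.g., a temperature reading in °C
        self.buffer.append(round(random.uniform(15.0, 35.0), 1))

    def try_upload(self, uav_distance_m):
        if uav_distance_m <= COMM_RANGE_M and self.buffer:
            sent, self.buffer = self.buffer, []
            return sent  # delivered to the UAV (relayed live or stored on board)
        return []

sensor = GroundSensor()
for distance in (900, 600, 300, 80):  # UAV approaching on each tick
    sensor.sample()
    delivered = sensor.try_upload(distance)
    if delivered:
        print(f"UAV at {distance} m received {len(delivered)} readings: {delivered}")
```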
Abstract:
This paper describes Electronic Blocks, a new robot construction element designed to allow children as young as age three to build and program robotic structures. The Electronic Blocks encapsulate input, output and logic concepts in tangible elements that young children can use to create a wide variety of physical agents. The children are able to determine the behavior of these agents by the choice of blocks and the manner in which they are connected. The Electronic Blocks allow children without any knowledge of mechanical design or computer programming to create and control physically embodied robots. They facilitate the development of technological capability by enabling children to design, construct, explore and evaluate dynamic robotics systems. A study of four- and five-year-old children using the Electronic Blocks has demonstrated that the interface is well suited to young children. The complexity of the implementation is hidden from the children, leaving them free to autonomously explore the functionality of the blocks. As a consequence, children can move their focus beyond the technology to the construction process itself, working on goals related to the creation of robotic behaviors and interactions. As a resource for robot building, the blocks have proved to be effective in encouraging children to create robot structures, allowing children to design and program robot behaviors.
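To illustrate the input/output/logic composition the blocks embody, here is a conceptual Python sketch in which a sensor block feeds a logic block that drives an action block. The block names and behaviors are invented for illustration; they model the stacking idea, not the actual Electronic Blocks hardware.

```python
# Conceptual model of stacking tangible blocks: a sensor (input) block feeds
# a logic block, which drives an action (output) block.
class SensorBlock:
    def __init__(self, read_fn):
        self.read = read_fn          # returns True/False, e.g. "light detected"

class NotBlock:                      # a simple logic block
    def __init__(self, below):
        self.below = below
    def read(self):
        return not self.below.read()

class MotionBlock:                   # an action/output block
    def __init__(self, below):
        self.below = below
    def step(self):
        return "move" if self.below.read() else "stop"

# Stack: light sensor -> NOT -> motion, i.e. "move when it is dark".
light = SensorBlock(lambda: False)   # pretend the room is dark
robot = MotionBlock(NotBlock(light))
print(robot.step())                  # -> "move"
```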
Abstract:
Increasing global competitiveness has forced manufacturing organizations to produce high-quality products more quickly and at a competitive cost. To reach these goals, they need good-quality components from suppliers at optimum price and lead time. This has forced companies to adopt improvement practices such as lean manufacturing, Just in Time (JIT) and effective supply chain management. Applying new improvement techniques and tools causes higher establishment costs and more Information Delay (ID). On the other hand, these new techniques may reduce the risk of stock-outs and improve supply chain flexibility, giving better overall performance. However, industry practitioners are unable to measure the overall effects of these improvement techniques without a standard evaluation model. An effective overall supply chain performance evaluation model is therefore essential for suppliers as well as manufacturers to assess their companies under different supply chain strategies. However, literature on lean supply chain performance evaluation is comparatively limited. Moreover, most existing models assume random values for performance variables. The purpose of this paper is to propose an effective supply chain performance evaluation model using triangular linguistic fuzzy numbers and to recommend optimum ranges for performance variables for lean implementation. The model first considers all the supply chain performance criteria (input, output and flexibility), converts the values to triangular linguistic fuzzy numbers and evaluates overall supply chain performance under different situations. Results show that, with the proposed performance measurement model, the improvement area for each variable can be accurately identified.
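The core mechanism is converting linguistic ratings into triangular fuzzy numbers and aggregating them. Below is a minimal Python sketch of that arithmetic with centroid defuzzification; the linguistic scale and the ratings are invented for illustration and are not the paper's calibrated values.

```python
from dataclasses import dataclass

@dataclass
class TFN:
    """Triangular fuzzy number (l, m, u): lower, modal and upper values."""
    l: float
    m: float
    u: float

    def __add__(self, other):
        return TFN(self.l + other.l, self.m + other.m, self.u + other.u)

    def scale(self, k):
        return TFN(k * self.l, k * self.m, k * self.u)

    def defuzzify(self):
        # Centroid of a triangular membership function.
        return (self.l + self.m + self.u) / 3

# Hypothetical linguistic scale for performance ratings.
SCALE = {"poor": TFN(0.0, 0.0, 0.25), "fair": TFN(0.25, 0.5, 0.75),
         "good": TFN(0.5, 0.75, 1.0), "excellent": TFN(0.75, 1.0, 1.0)}

# Aggregate equally weighted ratings of input, output and flexibility criteria.
ratings = [SCALE["good"], SCALE["fair"], SCALE["excellent"]]
total = ratings[0]
for r in ratings[1:]:
    total = total + r
overall = total.scale(1 / len(ratings))
print(f"overall performance score: {overall.defuzzify():.2f}")
```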
Abstract:
A distributed fuzzy system is a real-time fuzzy system in which the input, output and computation may be located on different networked computing nodes. The ability of a distributed software application, such as a distributed fuzzy system, to adapt to changes in the computing network at runtime can provide real-time performance improvement and fault tolerance. This paper introduces an Adaptable Mobile Component Framework (AMCF) that provides a distributed dataflow-based platform with a fine-grained level of runtime reconfigurability. The execution location of small fragments (possibly as little as a few machine-code instructions) of an AMCF application can be moved between different computing nodes at runtime. A case study is included that demonstrates the applicability of the AMCF to a distributed fuzzy system scenario involving multiple physical agents (such as autonomous robots). Using the AMCF, fuzzy systems can now be developed such that they are distributed automatically across multiple computing nodes and adapt to runtime changes in the networked computing environment. This provides the opportunity to improve the performance of fuzzy systems deployed in scenarios where the computing environment is resource-constrained and volatile, such as multiple autonomous robots, smart environments and sensor networks.
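The following Python sketch illustrates the general idea of runtime relocation in a dataflow framework: a registry tracks which node hosts each fragment, and a migrate operation moves a fragment between nodes without changing the caller. This is a conceptual illustration only; it does not reproduce the AMCF's actual API, granularity or machine-code-level mobility.

```python
class Node:
    """A computing node that can host and execute dataflow fragments."""
    def __init__(self, name):
        self.name = name
        self.fragments = {}

    def host(self, frag_id, fn):
        self.fragments[frag_id] = fn

    def run(self, frag_id, *args):
        return self.fragments[frag_id](*args)

class Dataflow:
    """Maps each fragment of the application to the node currently hosting it."""
    def __init__(self):
        self.location = {}

    def place(self, frag_id, fn, node):
        node.host(frag_id, fn)
        self.location[frag_id] = (node, fn)

    def migrate(self, frag_id, new_node):
        node, fn = self.location[frag_id]
        del node.fragments[frag_id]      # remove from the old node
        new_node.host(frag_id, fn)       # install on the new node
        self.location[frag_id] = (new_node, fn)

    def run(self, frag_id, *args):
        node, _ = self.location[frag_id]
        return node.run(frag_id, *args)

robot_a, robot_b = Node("robot-a"), Node("robot-b")
flow = Dataflow()
# A trivial "fuzzify" fragment; real fragments could be far smaller units.
flow.place("fuzzify", lambda x: max(0.0, min(1.0, x / 100.0)), robot_a)
print(flow.run("fuzzify", 42))    # executes on robot-a
flow.migrate("fuzzify", robot_b)  # relocate at runtime, e.g. if robot-a is overloaded
print(flow.run("fuzzify", 42))    # now executes on robot-b
```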
Abstract:
Over the past several years, there has been resurgent interest in regional planning in North America, Europe and Australasia. Spurred by issues such as metropolitan growth, transportation infrastructure, environmental management and economic development, many states and metropolitan regions are undertaking new planning initiatives. These regional efforts have also raised significant questions about governance structures, accountability and measures of effectiveness. In this paper, the authors conducted an international review of ten case studies from the United States, Canada, England, Belgium, New Zealand and Australia to explore several critical questions. Using a qualitative data template, the research team reviewed plans, documents, web sites and published literature to address three questions. First, what are the governance arrangements for delivering regional planning? Second, what are the mechanisms linking regional plans with state plans (when relevant) and local plans? Third, what means and mechanisms do these regional plans use to evaluate and measure effectiveness? The case study analysis revealed several common themes. First, there is an increasing focus on governance at the regional level, driven by a range of trends, including regional spatial development initiatives in Europe, regional transportation issues in the US, and the growth of metropolitan regions generally. However, there is considerable variation in how regional governance arrangements are playing out. Similarly, the processes being used at the regional level to guide planning range from broad-ranging (thick) processes to narrow and limited (thin) approaches. Finally, evaluation and monitoring of regional planning efforts involve compiling data on inputs, processes, outputs and outcomes. Although increased attention is being paid to indicators and monitoring, most of it falls into outcome evaluations such as Agenda 21 or sustainability reporting. Based on our review, we suggest there is a need for increased attention to input, process and output indicators, and for clearer linkages of these indicators in monitoring and evaluation frameworks. The focus on outcome indicators, such as sustainability indicators, creates feedback systems that are too long-term and remote for effective monitoring and feedback. Although we found some examples where these kinds of monitoring frameworks are linked into a system of governance, there is a need for clearer conceptual development in both theory and practice.
Abstract:
Experience plays an important role in building management. "How often will this asset need repair?" or "How much time is this repair going to take?" are the types of questions that project and facility managers face daily in planning activities. Failure or success in developing good schedules, budgets and other project management tasks depends on the project manager's ability to obtain reliable information to answer these types of questions. Young practitioners tend to rely on information that is based on regional averages and provided by publishing companies, in contrast to experienced project managers, who tend to rely heavily on personal experience. Another aspect of building management is that many practitioners are seeking to improve available scheduling algorithms, estimating spreadsheets and other project management tools. Such "micro-scale" levels of research are important in providing the required tools for the project manager's tasks. However, even with such tools, low-quality input information will produce inaccurate schedules and budgets as output. Thus, it is also important to take a broader approach to research at a more "macro-scale". Recent trends show that the Architectural, Engineering and Construction (AEC) industry is experiencing explosive growth in its capability to generate and collect data. There is a great deal of valuable knowledge that can be obtained from the appropriate use of this data, and therefore the need has arisen to analyse this increasing amount of available data. Data mining can be applied as a powerful tool to extract relevant and useful information from this sea of data. Knowledge Discovery in Databases (KDD) and Data Mining (DM) are tools that allow identification of valid, useful, and previously unknown patterns, so that large amounts of project data may be analysed. These technologies combine techniques from machine learning, artificial intelligence, pattern recognition, statistics, databases and visualization to automatically extract concepts, interrelationships and patterns of interest from large databases. The project involves the development of a prototype tool to support facility managers, building owners and designers. This final report presents the AIMM™ prototype system and documents which data mining techniques can be applied and how, the results of their application, and the benefits gained from the system. The AIMM™ system is capable of searching for useful patterns of knowledge and correlations within existing building maintenance data to support decision making about future maintenance operations. The application of the AIMM™ prototype system to building models and their maintenance data (supplied by industry partners) utilises various data mining algorithms, and the maintenance data is analysed using interactive visual tools. Applying the AIMM™ prototype system to help improve maintenance management and the building life cycle involves: (i) data preparation and cleaning, (ii) integrating meaningful domain attributes, (iii) performing extensive data mining experiments using visual analysis (stacked histograms), classification and clustering techniques, and association rule mining algorithms such as "Apriori", and (iv) filtering and refining the data mining results, including assessing their potential implications for improving maintenance management.
Maintenance data for a variety of asset types were selected for demonstration, with the aim of discovering meaningful patterns to assist facility managers in strategic planning and providing a knowledge base to help shape future requirements and design briefing. Using the prototype system developed here, positive and interesting results regarding patterns and structures in the data have been obtained.
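The association-rule step in (iii) rests on the Apriori principle: an itemset can only be frequent if all of its subsets are frequent, so candidates are built only from items already known to be frequent. The Python sketch below applies that pruning idea to a handful of invented maintenance work orders; the record attributes and support threshold are illustrative, not drawn from the AIMM™ system or the industry partners' data.

```python
from itertools import combinations
from collections import Counter

# Hypothetical maintenance work orders: each is a set of attribute tags.
records = [
    {"asset:chiller", "fault:leak", "season:summer"},
    {"asset:chiller", "fault:leak", "season:summer"},
    {"asset:pump", "fault:bearing", "season:winter"},
    {"asset:chiller", "fault:compressor", "season:summer"},
]

def apriori_pairs(transactions, min_support=0.5):
    """Minimal Apriori-style search for frequent item pairs."""
    n = len(transactions)
    # Pass 1: find frequent single items.
    item_counts = Counter(i for t in transactions for i in t)
    frequent = {i for i, c in item_counts.items() if c / n >= min_support}
    # Pass 2: count candidate pairs built only from frequent items (pruning).
    pair_counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(t & frequent), 2):
            pair_counts[pair] += 1
    return {p: c / n for p, c in pair_counts.items() if c / n >= min_support}

for pair, support in apriori_pairs(records).items():
    print(pair, f"support={support:.2f}")
```

On these toy records the search surfaces, for example, that chiller work orders co-occur with summer leaks; at scale, patterns of this kind are what would inform strategic maintenance planning.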
Abstract:
Experience plays an important role in building management. "How often will this asset need repair?" or "How much time is this repair going to take?" are the types of questions that project and facility managers face daily in planning activities. Failure or success in developing good schedules, budgets and other project management tasks depends on the project manager's ability to obtain reliable information to answer these types of questions. Young practitioners tend to rely on information that is based on regional averages and provided by publishing companies, in contrast to experienced project managers, who tend to rely heavily on personal experience. Another aspect of building management is that many practitioners are seeking to improve available scheduling algorithms, estimating spreadsheets and other project management tools. Such "micro-scale" levels of research are important in providing the required tools for the project manager's tasks. However, even with such tools, low-quality input information will produce inaccurate schedules and budgets as output. Thus, it is also important to take a broader approach to research at a more "macro-scale". Recent trends show that the Architectural, Engineering and Construction (AEC) industry is experiencing explosive growth in its capability to generate and collect data. There is a great deal of valuable knowledge that can be obtained from the appropriate use of this data, and therefore the need has arisen to analyse this increasing amount of available data. Data mining can be applied as a powerful tool to extract relevant and useful information from this sea of data. Knowledge Discovery in Databases (KDD) and Data Mining (DM) are tools that allow identification of valid, useful, and previously unknown patterns, so that large amounts of project data may be analysed. These technologies combine techniques from machine learning, artificial intelligence, pattern recognition, statistics, databases and visualization to automatically extract concepts, interrelationships and patterns of interest from large databases. The project involves the development of a prototype tool to support facility managers, building owners and designers. This industry-focused report presents the AIMM™ prototype system and documents which data mining techniques can be applied and how, the results of their application, and the benefits gained from the system. The AIMM™ system is capable of searching for useful patterns of knowledge and correlations within existing building maintenance data to support decision making about future maintenance operations. The application of the AIMM™ prototype system to building models and their maintenance data (supplied by industry partners) utilises various data mining algorithms, and the maintenance data is analysed using interactive visual tools. Applying the AIMM™ prototype system to help improve maintenance management and the building life cycle involves: (i) data preparation and cleaning, (ii) integrating meaningful domain attributes, (iii) performing extensive data mining experiments using visual analysis (stacked histograms), classification and clustering techniques, and association rule mining algorithms such as "Apriori", and (iv) filtering and refining the data mining results, including assessing their potential implications for improving maintenance management.
Maintenance data for a variety of asset types were selected for demonstration, with the aim of discovering meaningful patterns to assist facility managers in strategic planning and providing a knowledge base to help shape future requirements and design briefing. Using the prototype system developed here, positive and interesting results regarding patterns and structures in the data have been obtained.
Abstract:
Security-critical communications devices must be evaluated to the highest possible standards before they can be deployed. This process includes tracing potential information flow through the device's electronic circuitry, for each of the device's operating modes. Increasingly, however, security functionality is being entrusted to embedded software running on microprocessors within such devices, so new strategies are needed for integrating information flow analyses of embedded program code with hardware analyses. Here we show how standard compiler principles can augment high-integrity security evaluations to allow seamless tracing of information flow through both the hardware and software of embedded systems. This is done by unifying input/output statements in embedded program execution paths with the hardware pins they access, and by associating significant software states with corresponding operating modes of the surrounding electronic circuitry.
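As a toy illustration of unifying I/O statements with the hardware pins they access, the Python sketch below represents a fragment of embedded code as abstract statements and propagates which input pins can influence which output pins. The statement format, pin names and program are invented; a real evaluation would work on actual compiled code and the surrounding circuitry.

```python
# Each statement is (op, dest, src): 'in' reads a hardware pin into a variable,
# 'mov' copies a variable, 'out' writes a variable to a hardware pin.
program = [
    ("in",  "key",    "PIN_A0"),   # read secret material from an input pin
    ("mov", "buffer", "key"),
    ("out", "PIN_B3", "buffer"),   # write to an output pin
]

def trace_flows(stmts):
    """Propagate the set of input pins that can influence each variable,
    and report which input pins reach which output pins."""
    taint = {}   # variable -> set of input pins influencing it
    flows = []
    for op, dest, src in stmts:
        if op == "in":
            taint[dest] = {src}
        elif op == "mov":
            taint[dest] = set(taint.get(src, set()))
        elif op == "out":
            for pin in taint.get(src, set()):
                flows.append((pin, dest))
    return flows

for source_pin, sink_pin in trace_flows(program):
    print(f"information flows from {source_pin} to {sink_pin}")
```

Associating software states with hardware operating modes would, in the same spirit, amount to running this trace separately for each mode's admissible execution paths.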