452 results for Robots -- Computer programming
Abstract:
United States copyright law -- two streams of computer copyright cases form basis for 'look and feel' litigation, literary work stream and audiovisual work stream -- literary work stream focuses on structure -- audiovisual work stream addresses appearance -- case studies
Abstract:
Purpose: The purpose of this study was to evaluate the validity of the CSA activity monitor as a measure of children's physical activity using energy expenditure (EE) as a criterion measure. Methods: Thirty subjects aged 10 to 14 performed three 5-min treadmill bouts at 3, 4, and 6 mph, respectively. While on the treadmill, subjects wore CSA (WAM 7164) activity monitors on the right and left hips. VO2 was monitored continuously by an automated system. EE was determined by multiplying the average VO2 by the caloric equivalent of the mean respiratory exchange ratio. Results: Repeated measures ANOVA indicated that both CSA monitors were sensitive to changes in treadmill speed. Mean activity counts from each CSA unit were not significantly different and the intraclass reliability coefficient for the two CSA units across all speeds was 0.87. Activity counts from both CSA units were strongly correlated with EE (r = 0.86 and 0.87, P < 0.001). An EE prediction equation was developed from 20 randomly selected subjects and cross-validated on the remaining 10. The equation predicted mean EE within 0.01 kcal·min⁻¹. The correlation between actual and predicted values was 0.93 (P < 0.01) and the SEE was 0.93 kcal·min⁻¹. Conclusion: These data indicate that the CSA monitor is a valid and reliable tool for quantifying treadmill walking and running in children.
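As a rough illustration of the calibration and cross-validation procedure described in this abstract (fit an equation on 20 subjects, validate on the remaining 10, report a correlation and an error statistic), the sketch below fits a linear EE-versus-counts equation on synthetic data. The counts, coefficients, and noise levels are assumptions for illustration only, not the study's data or published equation.

```python
# Hedged sketch: calibrate a linear EE-prediction equation on 20 synthetic
# subjects and cross-validate on the remaining 10. All values are made up.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-subject mean activity counts and measured EE (kcal/min).
counts = rng.uniform(1000, 9000, size=30)
ee_measured = 0.0007 * counts + 1.5 + rng.normal(0, 0.5, size=30)  # synthetic

# Calibration on 20 randomly selected subjects, validation on the other 10.
idx = rng.permutation(30)
calib, valid = idx[:20], idx[20:]
slope, intercept = np.polyfit(counts[calib], ee_measured[calib], deg=1)

# Cross-validation: predicted vs. measured EE on the hold-out subjects.
ee_pred = slope * counts[valid] + intercept
r = np.corrcoef(ee_pred, ee_measured[valid])[0, 1]
rmse = np.sqrt(np.mean((ee_pred - ee_measured[valid]) ** 2))
print(f"EE = {slope:.4f} * counts + {intercept:.2f}; r = {r:.2f}, "
      f"validation RMSE = {rmse:.2f} kcal/min")
```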
Abstract:
Molecular biology is a scientific discipline whose character has changed fundamentally over the past decade, coming to rely on large-scale datasets, both public and locally generated, and on their computational analysis and annotation. Undergraduate education of biologists must increasingly couple this domain context with a data-driven computational scientific method. Yet modern programming and scripting languages and rich computational environments such as R and MATLAB present significant barriers to those with limited exposure to computer science, and may require substantial tutorial assistance over an extended period if progress is to be made. In this paper we report our experience of undergraduate bioinformatics education using the familiar, ubiquitous spreadsheet environment of Microsoft Excel. We describe a configurable extension called QUT.Bio.Excel, a custom ribbon supporting a rich set of data sources, external tools, and interactive processing within the spreadsheet, and a range of problems that demonstrate its utility and success in addressing the needs of students over their studies.
Abstract:
This research proposes the development of interfaces to support collaborative, community-driven inquiry into data, which we refer to as Participatory Data Analytics. Since the investigation is led by local communities, it is not possible to anticipate which data will be relevant and what questions are going to be asked. Therefore, users have to be able to construct and tailor visualisations to their own needs. The poster presents early work towards defining a suitable compositional model, which will allow users to mix, match, and manipulate data sets to obtain visual representations with little-to-no programming knowledge. Following a user-centred design process, we are subsequently planning to identify appropriate interaction techniques and metaphors for generating such visual specifications on wall-sized, multi-touch displays.
Abstract:
In the electricity market environment, load-serving entities (LSEs) inevitably face risks in purchasing electricity because many uncertainties are involved. To maximize profits and minimize risks, LSEs need an optimal strategy for allocating the purchased electricity amount among different electricity markets such as the spot market, bilateral contract market, and options market. Because risks originate from uncertainties, an approach is presented that addresses the risk evaluation problem through the combined use of the lower partial moment and information entropy (LPME). The lower partial moment measures the amount and probability of the loss, whereas the information entropy represents the uncertainty of the loss. Electricity purchasing is a repeated procedure; therefore, the model presented represents a dynamic strategy. Under the chance-constrained programming framework, the developed optimization model minimizes the risk of the electricity purchasing portfolio across the different markets while ensuring that the actual profit of the LSE concerned is not less than the specified target at a required confidence level. The particle swarm optimization (PSO) algorithm is then employed to solve the optimization model. Finally, an example is used to illustrate the basic features of the developed model and method.
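The combination of ingredients named in this abstract (a lower partial moment, an entropy term, a chance constraint on profit, and PSO as the solver) can be sketched in a few dozen lines. Everything below (the three markets' price models, the weighting between the two risk terms, the profit target and confidence level, and the penalty-based constraint handling) is an illustrative assumption, not the paper's formulation or data.

```python
# Hedged sketch of a chance-constrained purchasing problem in the spirit of the
# abstract: allocate demand across three hypothetical markets, score risk with a
# lower partial moment plus an entropy term, check the profit chance constraint
# by Monte Carlo, and search the allocation with a minimal PSO.
import numpy as np

rng = np.random.default_rng(1)
N = 5000                        # Monte Carlo scenarios
retail_price = 60.0             # assumed selling price ($/MWh)
demand = 100.0                  # total energy to purchase (MWh)
# Scenario purchase prices for [spot, bilateral contract, options] markets.
prices = np.column_stack([
    rng.normal(50, 15, N),      # volatile spot price
    np.full(N, 52.0),           # fixed bilateral contract price
    rng.normal(51, 6, N),       # moderately uncertain options cost
])
profit_target, confidence = 500.0, 0.9

def profit(weights):
    cost = prices @ (weights * demand)
    return retail_price * demand - cost

def risk(weights, target=profit_target, order=2, bins=30):
    p = profit(weights)
    loss = np.maximum(target - p, 0.0)
    lpm = np.mean(loss ** order)                # lower partial moment of the loss
    hist, _ = np.histogram(p, bins=bins)
    q = hist / hist.sum()
    q = q[q > 0]
    entropy = -np.sum(q * np.log(q))            # uncertainty of the outcome
    return lpm + 10.0 * entropy                 # illustrative weighting of the terms

def feasible(weights):
    # Chance constraint: profit >= target with the required confidence.
    return np.mean(profit(weights) >= profit_target) >= confidence

def project(x):
    # Keep the allocation non-negative and summing to one.
    x = np.clip(x, 0.0, None)
    s = x.sum()
    return x / s if s > 0 else np.full_like(x, 1.0 / len(x))

def penalized(w):
    return risk(w) + (0.0 if feasible(w) else 1e6)

# Minimal PSO over the 3-dimensional allocation simplex.
n_particles, n_iter, dim = 30, 120, 3
pos = np.apply_along_axis(project, 1, rng.random((n_particles, dim)))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([penalized(w) for w in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.apply_along_axis(project, 1, pos + vel)
    vals = np.array([penalized(w) for w in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("allocation [spot, bilateral, options]:", np.round(gbest, 3))
print("risk:", round(float(risk(gbest)), 2), "constraint satisfied:", feasible(gbest))
```

With these made-up numbers the search tends toward the deterministic bilateral contract, since it carries no downside loss and no outcome uncertainty; the point is only to show how the risk measure, the chance constraint, and the PSO loop fit together.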
Abstract:
The term "human error" can simply be defined as an error made by a human. Human error is an explanation for malfunctions and the unintended consequences of operating a system, and many factors can cause a person to commit such errors. The aim of this paper is to investigate human error as one of the factors contributing to computer-related abuses. The paper begins by relating computer abuses to human errors, and then discusses mechanisms for mitigating these errors from social and technical perspectives. We present the 25 techniques of computer crime prevention as a heuristic device to assist this mitigation. A final section discusses ways of improving the adoption of security, followed by the conclusion.
Abstract:
The ability to build high-fidelity 3D representations of the environment from sensor data is critical for autonomous robots. Multi-sensor data fusion allows for more complete and accurate representations. Furthermore, using distinct sensing modalities (i.e. sensors using a different physical process and/or operating at different electromagnetic frequencies) usually leads to more reliable perception, especially in challenging environments, as modalities may complement each other. However, they may react differently to certain materials or environmental conditions, leading to catastrophic fusion. In this paper, we propose a new method to reliably fuse data from multiple sensing modalities, including in situations where they detect different targets. We first compute distinct continuous surface representations for each sensing modality, with uncertainty, using Gaussian Process Implicit Surfaces (GPIS). Second, we perform a local consistency test between these representations, to separate consistent data (i.e. data corresponding to the detection of the same target by the sensors) from inconsistent data. The consistent data can then be fused together, using another GPIS process, and the rest of the data can be combined as appropriate. The approach is first validated using synthetic data. We then demonstrate its benefit using a mobile robot, equipped with a laser scanner and a radar, which operates in an outdoor environment in the presence of large clouds of airborne dust and smoke.
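As a rough illustration of the consistency-then-fuse idea (not the paper's GPIS formulation, which operates on implicit surfaces in 2-D/3-D with real laser and radar data), the 1-D sketch below fits one Gaussian process per modality, rejects data that the other modality contradicts beyond their combined uncertainty, and fuses the remainder with a third GP. All data, noise levels, and thresholds are synthetic assumptions.

```python
# Hedged 1-D stand-in for multi-modal GP fusion with a local consistency test.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
surface = lambda x: np.sin(x).ravel()            # synthetic "true" surface

x_a = np.sort(rng.uniform(0, 10, 40))[:, None]   # modality A samples (e.g. laser)
x_b = np.sort(rng.uniform(0, 10, 40))[:, None]   # modality B samples (e.g. radar)
y_a = surface(x_a) + rng.normal(0, 0.05, len(x_a))
y_b = surface(x_b) + rng.normal(0, 0.10, len(x_b))
# Synthetic inconsistency: modality A returns a spurious nearer target in [4, 6]
# (e.g. a laser hitting a dust cloud that the radar penetrates).
y_a[(x_a.ravel() > 4) & (x_a.ravel() < 6)] += 1.5

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp_a = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x_a, y_a)
gp_b = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x_b, y_b)

def keep_mask(x, y, own_gp, other_gp, k=3.0):
    """Local consistency test: keep points where the other modality's
    prediction agrees with this modality's data within k combined sigmas."""
    mu_o, sd_o = other_gp.predict(x, return_std=True)
    _, sd_s = own_gp.predict(x, return_std=True)
    return np.abs(y - mu_o) < k * np.sqrt(sd_o**2 + sd_s**2)

keep_a = keep_mask(x_a, y_a, gp_a, gp_b)
keep_b = keep_mask(x_b, y_b, gp_b, gp_a)

# Fuse only the mutually consistent data with a further GP.
x_fused = np.vstack([x_a[keep_a], x_b[keep_b]])
y_fused = np.concatenate([y_a[keep_a], y_b[keep_b]])
gp_fused = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x_fused, y_fused)
print(f"kept {keep_a.sum()}/{len(y_a)} A points and {keep_b.sum()}/{len(y_b)} B points")
```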
Abstract:
Outdoor robots such as planetary rovers must be able to navigate safely and reliably in order to successfully perform missions in remote or hostile environments. Mobility prediction is critical to achieving this goal due to the inherent control uncertainty faced by robots traversing natural terrain. We propose a novel algorithm for stochastic mobility prediction based on multi-output Gaussian process regression. Our algorithm considers the correlation between heading and distance uncertainty and provides a predictive model that can easily be exploited by motion planning algorithms. We evaluate our method experimentally and report results from over 30 trials in a Mars-analogue environment that demonstrate the effectiveness of our method and illustrate the importance of mobility prediction in navigating challenging terrain.
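To illustrate how a planner might consume a stochastic mobility prediction with correlated heading and distance uncertainty, the sketch below samples outcomes from a joint Gaussian and turns the spread into a simple risk proxy. The predictive means, variances, and correlation are placeholder assumptions, not outputs of the paper's multi-output Gaussian process model.

```python
# Hedged sketch: sampling a correlated distance/heading prediction for planning.
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical prediction for one commanded action on rough terrain:
mean = np.array([0.45, np.deg2rad(3.0)])       # [distance travelled (m), heading change (rad)]
std = np.array([0.05, np.deg2rad(4.0)])
rho = 0.6                                       # assumed correlation between the two outputs
cov = np.array([[std[0]**2, rho * std[0] * std[1]],
                [rho * std[0] * std[1], std[1]**2]])

# Sample predicted outcomes and roll them into candidate poses.
samples = rng.multivariate_normal(mean, cov, size=1000)
theta = 0.0 + samples[:, 1]
x = samples[:, 0] * np.cos(theta)
y = samples[:, 0] * np.sin(theta)

# A motion planner could treat the spread of outcomes as a traversal-risk proxy,
# e.g. the probability of straying more than 0.15 m off the intended path.
off_path = np.abs(y) > 0.15
print(f"pose spread: sigma_x={x.std():.3f} m, sigma_y={y.std():.3f} m")
print(f"probability of >0.15 m lateral deviation: {off_path.mean():.1%}")
```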
Abstract:
Computer modelling has been used extensively in some processes in the sugar industry to achieve significant gains. This paper reviews the investigations carried out over approximately the last twenty-five years, including the successes but also the areas where problems and delays have been encountered. In that time the capabilities of both hardware and software have increased dramatically. For some processes, such as cane cleaning, cane billet preparation, and sugar drying, the application of computer modelling to improved equipment design and operation has been quite limited. A particular problem has been the large number of particles and particle interactions in these applications, which, if modelled individually, is computationally very intensive. Despite the problems, some attempts have already been made and knowledge has been gained on tackling these issues. Even if the detailed modelling is wanting, a model can provide useful insights into the processes. One option for attacking these more computationally intensive problems is the use of commercial software packages, which are usually very robust and allow the addition of user-supplied subroutines to adapt the software to particular problems; however, suppliers of such software usually charge a fee per CPU licence, which is often problematic for large problems that require many CPUs. Another option is open source software developed with the capability to access large parallel resources; such software has the added advantage of access to the full internal coding. This paper identifies and discusses in detail the software options with the potential to achieve improvements in the sugar industry.
Abstract:
Real-world environments such as houses and offices change over time, meaning that a mobile robot's map will become out of date. In previous work we introduced a method to update the reference views in a topological map so that a mobile robot could continue to localize itself in a changing environment using omni-directional vision. In this work we extend this long-term updating mechanism to incorporate a spherical metric representation of the observed visual features for each node in the topological map. Using multi-view geometry we are then able to estimate the heading of the robot, in order to enable navigation between the nodes of the map, and to simultaneously adapt the spherical view representation in response to environmental changes. The results demonstrate the persistent performance of the proposed system in a long-term experiment.
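A minimal sketch of the heading-from-spherical-views idea: given feature bearings matched between a stored spherical reference view and the current omnidirectional view, recover a relative rotation with a least-squares (Kabsch/SVD) fit and read off the yaw as the heading estimate. This is a toy, pure-rotation stand-in for the multi-view geometry used in the paper; the bearing vectors and noise below are synthetic assumptions.

```python
# Hedged sketch: heading estimation from matched unit bearings on two spheres.
import numpy as np

rng = np.random.default_rng(4)

def yaw_matrix(psi):
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Synthetic unit bearing vectors of map features in the reference node's frame.
ref = rng.normal(size=(60, 3))
ref /= np.linalg.norm(ref, axis=1, keepdims=True)

true_heading = np.deg2rad(25.0)                  # assumed ground truth
cur = ref @ yaw_matrix(true_heading).T           # same bearings in the current frame
cur += rng.normal(0, 0.01, cur.shape)            # bearing noise
cur /= np.linalg.norm(cur, axis=1, keepdims=True)

# Kabsch: rotation R minimising ||cur - ref @ R.T|| over the matched bearings.
H = ref.T @ cur
U, _, Vt = np.linalg.svd(H)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
R = Vt.T @ D @ U.T

heading = np.arctan2(R[1, 0], R[0, 0])           # yaw component of R
print(f"estimated heading: {np.rad2deg(heading):.2f} deg "
      f"(true {np.rad2deg(true_heading):.2f} deg)")
```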