996 results for STEP-NC format
Abstract:
This research paper presents a five-step algorithm to generate tool paths for machining Free form / Irregular Contoured Surface(s) (FICS) by adopting the STEP-NC (AP-238) format. In the first step, a parametrized CAD model with FICS is created or imported in the UG-NX6.0 CAD package. The second step recognizes the features and calculates a Closeness Index (CI) by comparing them with B-Spline / Bezier surfaces. The third step utilizes the CI and extracts the necessary data to formulate the blending functions for the identified features. In the fourth step, Z-level 5-axis tool paths are generated by adopting flat and ball end mill cutters. Finally, in the fifth step, the tool paths are integrated with the STEP-NC format and validated. All these steps are discussed and explained through a validated industrial component.
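To illustrate the kind of comparison behind a closeness index (the paper's exact formulation is not given in the abstract), the sketch below evaluates a Bezier curve with de Casteljau's algorithm and scores feature points by their maximum deviation from the sampled curve; the function names and the scoring rule are illustrative assumptions, and curves stand in for the surfaces used in the paper.

```python
# Illustrative sketch only (not the paper's algorithm): evaluate a Bezier
# curve via de Casteljau's algorithm, then score a toy "closeness index" as
# the maximum deviation of sampled feature points from the curve.

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t (0 <= t <= 1)."""
    pts = list(control_points)
    while len(pts) > 1:
        # repeated linear interpolation between consecutive control points
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p0, p1))
               for p0, p1 in zip(pts, pts[1:])]
    return pts[0]

def closeness_index(feature_points, control_points, samples=101):
    """Toy CI: max distance from each feature point to the nearest of
    `samples` points sampled uniformly along the Bezier curve."""
    curve = [de_casteljau(control_points, i / (samples - 1))
             for i in range(samples)]
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return max(min(dist(f, c) for c in curve) for f in feature_points)

# Collinear, equally spaced control points give a straight-line Bezier curve,
# so collinear feature points match it exactly.
ctrl = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
feats = [(0.0, 0.0), (1.5, 1.5), (3.0, 3.0)]
print(closeness_index(feats, ctrl))  # → 0.0 for an exact match
```

A CI near zero would indicate the recognized feature closely matches the candidate B-Spline/Bezier shape; larger values flag features needing a different fit.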
Abstract:
This research paper presents work on feature recognition, tool path data generation and integration with STEP-NC (AP-238 format) for features having Free form / Irregular Contoured Surface(s) (FICS). Initially, the FICS features are modelled / imported in the UG CAD package and a closeness index is generated. This is done by comparing the FICS features with basic B-Spline / Bezier curves / surfaces. Then blending functions are calculated by adopting the convolution theorem. Based on the blending functions, contour offset tool paths are generated and simulated for a 5-axis milling environment. Finally, the tool path (CL) data is integrated with the STEP-NC (AP-238) format. The tool path algorithm and STEP-NC data are tested with various industrial parts through an automated UFUNC plugin.
Abstract:
Aim: The koala is a widely distributed Australian marsupial with regional populations that are in rapid decline, are stable or have increased in size. This study examined whether it is possible to use expert elicitation to estimate the abundance and trends of populations of this species. Diverse opinions exist about estimates of abundance and, consequently, the status of populations. Location: Eastern and south-eastern Australia. Methods: Using a structured, four-step question format, a panel of 15 experts estimated population sizes of koalas and changes in those sizes for bioregions within four states. They provided their lowest plausible estimate, highest plausible estimate, best estimate and their degree of confidence that the true values were contained within these upper and lower estimates. We derived estimates of the mean population size of koalas and associated uncertainties for each bioregion and state. Results: On the basis of estimates of mean population sizes for each bioregion and state, we estimated the total number of koalas in Australia to be 329,000 (range 144,000-605,000), with an estimated average decline of 24% over the past three generations and the next three generations. The estimated percentage loss in Queensland, New South Wales, Victoria and South Australia was 53%, 26%, 14% and 3%, respectively. Main conclusions: It was not necessary to achieve high levels of certainty or consensus among experts before making informed estimates. A quantitative, scientific method for deriving estimates of koala populations and trends was possible in the absence of empirical data on abundances.
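As a hedged illustration of how four-point elicitation data of this kind can be pooled (the study's exact aggregation method is not given in the abstract), the sketch below linearly rescales each expert's interval to a common confidence level and averages across experts; all numbers and the rescaling rule are hypothetical.

```python
# Illustrative sketch, not the study's method: pool (lowest, highest, best,
# confidence) elicitation responses by rescaling each expert's interval to a
# common target confidence level, then averaging. All values are made up.

def standardize_interval(low, high, best, confidence, target=0.8):
    """Rescale an expert's (low, high) interval, stated at `confidence`
    (0-1), to the `target` confidence level by linear extrapolation
    around the best estimate."""
    scale = target / confidence
    return best - (best - low) * scale, best + (high - best) * scale

experts = [  # (lowest, highest, best, confidence) -- hypothetical responses
    (200_000, 500_000, 320_000, 0.9),
    (100_000, 700_000, 350_000, 0.8),
    (150_000, 550_000, 300_000, 0.7),
]

intervals = [standardize_interval(lo, hi, best, c)
             for lo, hi, best, c in experts]
mean_best = sum(best for _, _, best, _ in experts) / len(experts)
mean_low = sum(lo for lo, _ in intervals) / len(intervals)
mean_high = sum(hi for _, hi in intervals) / len(intervals)
print(f"pooled estimate: {mean_best:,.0f} "
      f"(range {mean_low:,.0f}-{mean_high:,.0f})")
```

Rescaling penalises overconfident experts (narrow intervals stated with low confidence get widened), which is why pooled ranges such as the study's 144,000-605,000 can be much wider than any single best estimate suggests.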
Abstract:
In Step was a wearable artwork consisting of a pair of embroidered foot bandages and an actuator 'cushion' embedded with 15 electromechanical actuator pistons. The bandage was embedded with woven, soft and flexible fabric sensors, interconnected with metallic connecting threads, fasteners and a wireless interface (in a final form). When wrapped around a foot and lower leg, the sensors sat on the ball of the toes and the heel. This 'wearable interface' was then connected wirelessly to a soft sculptural form, which employed actuators to tap gently in response to the qualities of the walk detected by the soft sensors. In this way, the 'tread qualities' of the walker could be felt by someone else holding this device against their stomach, thereby allowing pairs of participants to 'feel' the tactile qualities of the other's walk. The work was presented both as a working object and via a short video-recorded performance. In Step generated innovative new approaches to interface- and sensor-embedded clothing/footwear whilst also creating an evocative vehicle to comment upon contemporary Post Colonial theories of weight and groundedness, particularly the psycho-geographical 'separation' from the landscape that inspired Paul Carter's "environmentally grounded poetics". The work's final form also suggested critical new directions for responsive clothing and footwear for the emerging genre of smart textiles.
Abstract:
Lead germanate-graphene nanosheets (PbGeO3-GNS) composites have been prepared by an efficient one-step, in-situ hydrothermal method and were used as anode materials for Li-ion batteries (LIBs). The PbGeO3 nanowires, around 100–200 nm in diameter, are highly encapsulated in a graphene matrix. The lithiation and de-lithiation reaction mechanisms of the PbGeO3 anode during the charge-discharge processes have been investigated by X-ray diffraction and electrochemical characterization. Compared with pure PbGeO3 anode, dramatic improvements in the electrochemical performance of the composite anodes have been obtained. In the voltage window of 0.01–1.50 V, the composite anode with 20 wt.% GNS delivers a discharge capacity of 607 mAh g−1 at 100 mA g−1 after 50 cycles. Even at a high current density of 1600 mA g−1, a capacity of 406 mAh g−1 can be achieved. Therefore, the PbGeO3-GNS composite can be considered as a potential anode material for lithium ion batteries.
Abstract:
Hydrogenated nanocrystalline silicon (nc-Si:H) layers with step-by-step increasing boron doping were deposited on an n-type crystalline silicon substrate using a Plasma Enhanced Chemical Vapor Deposition (PECVD) system. After evaporating ohmic contact electrodes on the substrate side and on the nc-Si:H film side, a structure of electrode/(p)nc-Si:H/(n)c-Si/electrode was obtained. Electrical measurements, such as I-V curves, C-V curves and DLTS, confirmed that this structure is a variable capacitance diode. (C) 2003 Elsevier Science Ltd. All rights reserved.
Abstract:
The purpose of this project is the creation of a graphical "programming" interface for a sensor network tasking language called STEP. The graphical interface allows the user to specify a program execution graphically, from an extensible palette of functionalities, and save the result as a properly formatted STEP file. Moreover, the software is able to load a file in STEP format and convert it into the corresponding graphical representation. During both phases a type-checker runs in the background to ensure that both the graphical representation and the STEP file are syntactically correct. This project was motivated by the Sensorium project at Boston University. In this technical report we present the basic features of the software and the process followed during its design and implementation. Finally, we describe the approach used to test and validate our software.
Abstract:
In this thesis a novel transmission format, named Coherent Wavelength Division Multiplexing (CoWDM), for use in high information spectral density optical communication networks is proposed and studied. In chapter 1, a historical view of fibre optic communication systems, as well as an overview of state-of-the-art technology, is presented to provide an introduction to the subject area. We see that, in general, the aim of modern optical communication system designers is to provide high bandwidth services while reducing the overall cost per transmitted bit of information. In the remainder of the thesis a range of investigations, both of a theoretical and an experimental nature, are carried out using the CoWDM transmission format. These investigations are designed to consider features of CoWDM such as its dispersion tolerance, compatibility with forward error correction and suitability for use in currently installed long haul networks, amongst others. A high bit rate optical test bed constructed at the Tyndall National Institute facilitated most of the experimental work outlined in this thesis, and a collaboration with France Telecom enabled long haul transmission experiments using the CoWDM format to be carried out. Research was also carried out on ancillary topics such as optical comb generation, forward error correction and phase stabilisation techniques. The aim of these investigations is to verify the suitability of CoWDM as a cost-effective solution for use in both current and future high bit rate optical communication networks.
Abstract:
Step bunching develops in the epitaxy of SrRuO3 on vicinal SrTiO3(001) substrates. We have investigated the formation mechanisms and we show here that step bunching forms by lateral coalescence of wedgelike three-dimensional islands that are nucleated at substrate steps. After coalescence, wedgelike islands become wider and straighter with growth, forming a self-organized network of parallel step bunches with altitudes exceeding 30 unit cells, separated by atomically flat terraces. The formation mechanism of step bunching in SrRuO3, from nucleated islands, radically differs from one-dimensional models used to describe bunching in semiconducting materials. These results illustrate that growth phenomena of complex oxides can be dramatically different to those in semiconducting or metallic systems.
Abstract:
Following the publication of the new Atles comarcal de Catalunya by the Institut Cartogràfic de Catalunya in digital format, we examine the structure of its contents and the dynamics of its use. We also detail the methodology used to produce it. Finally, we offer some reflections on the innovations this new product brings to the textual and cartographic areas, especially in relation to the current socioeconomic context and the new technologies employed.
Abstract:
Objectives: Methods for converting inactive video gaming to active video gaming have gained popularity in recent years. This study compared the physiological cost of a new peripheral device that used steps to power video gaming in an interactive manner against sedentary video gaming and self-paced ambulatory activity of university students (aged 19-29 years).
Methods: Nineteen adults (9 male, 10 female) performed six 10-minute activities, namely self-paced leisurely walking, self-paced brisk walking, self-paced jogging, two forms of sedentary video gaming, and step-powered video gaming. Activities were performed in a random order. Physiological cost during the activities was measured using Actiheart.
Results: Energy expenditure during step-powered video gaming (388.8 kcal·h⁻¹) was comparable to the energy expended during brisk walking (373.8 kcal·h⁻¹), and elicited a higher energy cost than sedentary video gaming (124.1 kcal·h⁻¹) but a lower energy cost than jogging (694.5 kcal·h⁻¹).
Conclusion: Overall, step-powered video gaming could be used as an entertaining and appealing tool to increase physical activity, though it should not be used as a complete substitute for traditional exercise, such as jogging.
Abstract:
The dataset described in this document has been put together for the purposes of numerical ice sheet modelling of the Antarctic Ice Sheet (AIS), containing data on the ice sheet configuration (e.g. ice surface and ice thickness) and boundary conditions, such as the surface air temperature and accumulation. It is now possible to download a community ice sheet model (e.g. Glimmer-CISM, Rutt et al., 2009, doi:10.1029/2008JF001015), but without adequate data it is difficult to utilise such models. More specifically, ice sheet models that are initialised and run forward from the present-day ice sheet configuration need input data to represent that configuration as closely as possible (unlike those spun up from ice-free conditions, which only require the bed/bathymetry). Whilst the BEDMAP dataset (Lythe et al., 2001) was a step forward when it was made, there are a number of inconsistencies within the dataset (see Section 3), and since its release, more data have become available. The dataset described here incorporates some major new datasets (e.g. AGASEA/BBAS ice thickness; Nitsche et al. (2006) bathymetry, doi:10.1029/2007GC001694), but by no means incorporates all the new data available. This considerable task is left for a 'BEDMAP2' (an updated version of BEDMAP); however, the processing carried out in this document illustrates the requirements of a dataset for the purpose of high resolution ice sheet modelling, and bridges the gap until a BEDMAP2 is published. It is envisaged, however, that updated versions of the dataset will be made available periodically when new regional datasets become available and can be readily incorporated.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
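As a minimal, generic illustration of (i) (not a method proposed in this work), the outputs of several annotation tools operating at the same level can be combined by majority vote to correct individual tools' errors; the tagger names and outputs below are hypothetical.

```python
# Illustrative sketch of point (i): reduce per-tool tagging errors by taking
# a majority vote over several tools' outputs for the same token sequence.
# The taggers and their outputs are hypothetical.
from collections import Counter

def combine_by_vote(annotations):
    """annotations: list of tag sequences, one per tool, aligned by token.
    Returns the majority tag per token (ties resolved in favour of the
    earliest-listed tool)."""
    combined = []
    for token_tags in zip(*annotations):
        counts = Counter(token_tags)
        top = max(counts.values())
        # tie-break: keep the first tool's tag among the most frequent
        combined.append(next(t for t in token_tags if counts[t] == top))
    return combined

tagger_a = ["DET", "NOUN", "VERB", "NOUN"]
tagger_b = ["DET", "NOUN", "NOUN", "NOUN"]   # one error on token 3
tagger_c = ["DET", "ADJ",  "VERB", "NOUN"]   # one error on token 2
print(combine_by_vote([tagger_a, tagger_b, tagger_c]))
# → ['DET', 'NOUN', 'VERB', 'NOUN']
```

Note that this only works if all the tools annotate at a common level with a shared tag vocabulary, which is exactly the interoperation requirement that limitation (3) above makes hard to satisfy.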
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based