856 results for 091400 RESOURCES ENGINEERING AND EXTRACTIVE METALLURGY
Abstract:
In the modern era of powder metallurgy, the first use of a sintering process was in making filaments for incandescent electric lamps. In the short time from Edison's day to the present, the science of working with metal powders has advanced by leaps and bounds.
Abstract:
Oil and gas have been found in the Triassic strata of Wyoming. Although the Triassic has not yet proven to be a large producing horizon, it is very probable that additional oil will be found in Triassic strata in the future, and it remains one of the targets at which oil-well drillers aim their tools.
Abstract:
The factors that influence the choice of a method for treatment of an ore comprise the technical and economic limitations and advantages, derived in detail and balanced according to the exigencies of the particular situation.
Abstract:
Thermal analysis was used to construct cooling and heating curves from which the phase diagram was determined. The data for the entire set of cooling curves were obtained with mercury thermometers.
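The abstract does not reproduce the underlying data, but the idea of locating a phase-boundary point from a thermal arrest in a cooling curve can be sketched as follows; the temperatures, times and threshold below are invented purely for illustration.

```python
# Minimal sketch: detecting a thermal arrest in a synthetic cooling curve,
# the kind of feature used to place points on a phase diagram.
import numpy as np

# Synthetic cooling curve: steady cooling, a plateau at the freezing point,
# then further cooling (all values are hypothetical).
time = np.arange(0, 30, 0.5)                      # minutes
temp = np.piecewise(
    time,
    [time < 10, (time >= 10) & (time < 18), time >= 18],
    [lambda t: 700 - 15 * t,                      # cooling of the liquid
     lambda t: 550 + 0 * t,                       # thermal arrest (freezing)
     lambda t: 550 - 12 * (t - 18)],              # cooling of the solid
)

# A thermal arrest shows up as a near-zero cooling rate (dT/dt ~ 0).
rate = np.gradient(temp, time)
arrest = np.abs(rate) < 1.0                       # threshold in deg/min
arrest_temp = temp[arrest].mean()
print(f"Estimated arrest (transformation) temperature: {arrest_temp:.0f} deg")
```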
Abstract:
Three cycles of erosion have modified the Boulder batholith. The earliest cycle produced a peneplanation that has been largely obliterated by a partially completed intermediate cycle and by the recent cycle now in progress.
Abstract:
This study examines the relationship among psychological resources (generalized resistance resources), care demands (demands for care, competing demands, perception of burden) and cognitive stress in a selected population of primary family caregivers. The study utilizes Antonovsky's Salutogenic Model of Health, specifically the concept of generalized resistance resources (GRRs), to analyze the relative effect of these resources on mediating cognitive stress, controlling for other care demands. The study is based on a sample of 784 eligible caregivers who (1) were relatives, (2) had the main responsibility for care, defined as a primary caregiver, and (3) provided a scaled stress score for the amount of overall care given to the care recipient (family member). The sample was drawn from the 1982 National Long-Term Care Survey (NLTCS) of individuals who assisted a given NLTCS sample person with ADL limitations. The study tests the following hypotheses: (a) There will be a negative relationship between generalized resistance resources (GRRs) and cognitive stress controlling for care demands (demands for care, competing demands, and perceptions of burden); (b) of the specific GRRs (material, cognitive, social, cultural-environmental) the social domain will represent the most significant factor predicting a decrease in cognitive stress; and (c) the social domain will be more significant for the female than the male primary family caregiver in decreasing cognitive stress. The study found that GRRs had a statistically significant mediating effect on cognitive stress, but the GRRs were a less significant predictor of stress than perception of burden and demands for care. Thus, although the analysis supported the underlying hypothesis, the specific hypothesis regarding GRRs' greater significance in buffering cognitive stress was not supported. Second, the results did not demonstrate the statistical significance or differences among the GRR domains. The hypothesis that the social GRR domain was most significant in mediating stress of family caregivers was not supported. Finally, the results confirmed that there are differences in the importance of social support help in mediating stress based on gender. It was found that gender and social support help were related to cognitive stress and gender had a statistically significant interaction effect with social support help. Implications for clinical practice, public health policy, and research are discussed.
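As a purely hypothetical illustration of the kind of analysis described (the NLTCS variables and the study's actual model are not reproduced here), a regression of stress on care demands with a gender-by-social-support interaction might be set up as follows; all variable names and data are invented.

```python
# Hypothetical sketch, not the NLTCS data or the study's exact model:
# OLS regression of cognitive stress on care demands, social support and a
# gender x social-support interaction term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 784  # sample size reported in the abstract
df = pd.DataFrame({
    "burden": rng.normal(size=n),
    "care_demands": rng.normal(size=n),
    "social_support": rng.normal(size=n),
    "gender": rng.choice(["female", "male"], size=n),
})
# Synthetic outcome with a gender-by-support interaction built in.
df["stress"] = (
    0.6 * df.burden + 0.4 * df.care_demands
    - 0.2 * df.social_support
    - 0.3 * df.social_support * (df.gender == "female")
    + rng.normal(scale=0.5, size=n)
)

model = smf.ols("stress ~ burden + care_demands + social_support * gender",
                data=df).fit()
print(model.summary())
```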
Abstract:
Comparing published NAVD 88 Helmert orthometric heights of First-Order bench marks against GPS-determined orthometric heights showed that GEOID03 and GEOID09 perform at their reported accuracy in Connecticut. GPS-determined orthometric heights were determined by subtracting geoid undulations from ellipsoid heights obtained from a network least-squares adjustment of GPS occupations in 2007 and 2008. A total of 73 markers were occupied in these stability classes: 25 class A, 11 class B, 12 class C, 2 class D bench marks, and 23 temporary marks with transferred elevations. Adjusted ellipsoid heights were compared against OPUS as a check. We found that: the GPS-determined orthometric heights of stability class A markers and the transfers are statistically lower than their published values but just barely; stability class B, C and D markers are also statistically lower in a manner consistent with subsidence or settling; GEOID09 does not exhibit a statistically significant residual trend across Connecticut; and GEOID09 outperformed GEOID03. A "correction surface" is not recommended in spite of the geoid models being statistically different from the NAVD 88 heights because the uncertainties involved dominate the discrepancies. Instead, it is recommended that the vertical control network be re-observed.
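A minimal numeric sketch of the comparison described above, with invented values: the GPS-determined orthometric height is the ellipsoid height minus the geoid undulation (H = h - N), and the residual is taken against the published NAVD 88 Helmert orthometric height.

```python
# All numbers below are hypothetical, for illustration only.
h_ellipsoid = 45.321      # adjusted ellipsoid height from the GPS network (m)
N_geoid = -28.774         # geoid undulation interpolated from a geoid model (m)
H_published = 74.110      # published NAVD 88 orthometric height (m)

H_gps = h_ellipsoid - N_geoid          # GPS-determined orthometric height, H = h - N
residual = H_gps - H_published         # negative => GPS height below published value
print(f"H_gps = {H_gps:.3f} m, residual = {residual:.3f} m")
```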
Abstract:
Quality assessment is one of the activities performed as part of systematic literature reviews. It is commonly accepted that a good quality experiment is bias free. Bias is considered to be related to internal validity (e.g., how adequately the experiment is planned, executed and analysed). Quality assessment is usually conducted using checklists and quality scales. It has not yet been proven, however, that quality is related to experimental bias. Aim: Identify whether there is a relationship between internal validity and bias in software engineering experiments. Method: We built a quality scale to determine the quality of the studies, which we applied to 28 experiments included in two systematic literature reviews. We proposed an objective indicator of experimental bias, which we applied to the same 28 experiments. Finally, we analysed the correlations between the quality scores and the proposed measure of bias. Results: We failed to find a relationship between the global quality score (resulting from the quality scale) and bias; however, we did identify interesting correlations between bias and some particular aspects of internal validity measured by the instrument. Conclusions: There is an empirically provable relationship between internal validity and bias. It is feasible to apply quality assessment in systematic literature reviews, subject to limits on the internal validity aspects for consideration.
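A minimal sketch of the final analysis step, assuming a rank correlation and using invented scores for the 28 experiments (the paper's actual quality scale and bias indicator are not reproduced here):

```python
# Illustrative only: correlating a per-study quality score with an objective
# bias indicator across 28 experiments.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
quality_scores = rng.integers(5, 20, size=28)             # hypothetical scale totals
bias_indicator = rng.normal(loc=0.3, scale=0.1, size=28)  # hypothetical bias measure

rho, p_value = spearmanr(quality_scores, bias_indicator)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```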
Abstract:
In this work, a comparison is made between the competence codes in the CDIO curriculum, those defined for the Tuning Project and those of the International Project Management Association (IPMA). The goal is to define the most appropriate competence codes for engineering education in Latin America. The CDIO code is obtained from engineering practice and responds to the Accreditation Board for Engineering and Technology (ABET) accreditation standards. The Tuning competences are those defined for Latin America, and the IPMA's are international competences for project management. It is the first time that the competences defined in ABET accreditation standards in the engineering field have been compared with the international competences of IPMA's model. The results give evidence that, in the first place, there is a need to apply holistic models in the definition of an engineering curriculum and, second, that these models are pertinent to the definition of engineering programs in Latin America.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
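As a toy illustration of what such a POS-tagging module does (this is not a tool from the present work, just a lexicon-lookup sketch with invented tags), each token of a text is annotated with its grammatical category:

```python
# Toy sketch: the simplest possible POS-tagging "module", a lexicon lookup
# with a default tag, to make the notion of a per-token annotation concrete.
LEXICON = {"the": "DET", "tools": "NOUN", "are": "VERB", "important": "ADJ",
           "assets": "NOUN", "linguistic": "ADJ", "annotation": "NOUN"}

def pos_tag(tokens, default="NOUN"):
    """Annotate each token with a part-of-speech tag from a small lexicon."""
    return [(tok, LEXICON.get(tok.lower(), default)) for tok in tokens]

print(pos_tag("Linguistic annotation tools are important assets".split()))
```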
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
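This propagation of low-level errors can be made concrete with a small, entirely hypothetical sketch: a sense tagger that trusts the output of a deliberately faulty POS tagger can never recover the correct reading.

```python
# Hypothetical functions, not part of any system described in this work.
SENSES = {
    ("bank", "NOUN"): "financial_institution_or_river_edge",
    ("bank", "VERB"): "to_tilt_an_aircraft",
}

def pos_tag(token):
    # Faulty low-level tool: always answers VERB (deliberately wrong here).
    return "VERB"

def sense_tag(token):
    # Higher-level tool trusts the POS tag it is given.
    pos = pos_tag(token)
    return SENSES.get((token, pos), "unknown_sense")

# The noun reading is never reachable because the POS error is inherited.
print(sense_tag("bank"))   # -> 'to_tilt_an_aircraft'
```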
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
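As a purely illustrative sketch of what such a hybrid, ontology-based annotation might look like (the namespaces and term names below are invented and are not OntoTag's actual vocabulary), a token can be linked to terms of a linguistic-category ontology as RDF triples, here with the rdflib library:

```python
# Illustrative only: a token annotated at the morphosyntactic and semantic
# levels, with each annotation expressed as a subject-predicate-object triple.
from rdflib import Graph, Literal, Namespace

LING = Namespace("http://example.org/linguistic-ontology#")   # hypothetical
DOC = Namespace("http://example.org/document#")                # hypothetical

g = Graph()
g.bind("ling", LING)
g.bind("doc", DOC)

g.add((DOC.token_17, LING.hasSurfaceForm, Literal("banks")))
g.add((DOC.token_17, LING.hasPartOfSpeech, LING.Noun))
g.add((DOC.token_17, LING.hasSense, LING.FinancialInstitution))

print(g.serialize(format="turtle"))
```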
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
This work indicates the importance of the Final Year Project (FYP) in the strengthening of competences of engineering students. The study also shows which personal competences of students are reinforced most during the FYP process, including the preparation, elaboration, presentation and defence stages. In order to gather information on this subject, a survey was conducted at two different Spanish technical universities, one public and one private, and a comparative analysis was performed of the questionnaires collected. The competence model considered is that used by the Accreditation Board for Engineering and Technology (ABET), since the official degree of the public university has been accredited under this model. The results indicate which personal and professional competences of students are reinforced well by undertaking the FYP. Any significant differences in response by university are explained in the study. For validation purposes, the results were contrasted with the instructor's perspective using the triangulation methodology. Finally, the conclusions drawn will permit the design of new study plans to cope more effectively with the challenges of the FYP in the new Bologna framework.
Abstract:
A new set of manufacturing technologies has emerged in the past decades to address market requirements in a customized way and to provide support for research tasks that require prototypes. These new techniques and technologies are usually referred to as rapid prototyping and manufacturing technologies, and they allow prototypes to be produced in a wide range of materials with remarkable precision in a couple of hours. Although they have been rapidly incorporated into product development methodologies, they are still under development, and their applications in bioengineering are continuously evolving. Rapid prototyping and manufacturing technologies can be of assistance in every stage of the development process of novel biodevices, addressing the various problems that can arise in the devices' interactions with biological systems and the need to test design decisions carefully. This review focuses on the main fields of application for rapid prototyping in biomedical engineering and health sciences, as well as on the most remarkable challenges and research trends.
Abstract:
There is growing concern about quality control at present, which has led to the search for methods capable of effectively addressing reliability analysis as part of statistics. Managers, researchers and engineers must understand that 'statistical thinking' is not just a set of statistical tools. They should start approaching 'statistical thinking' from a 'system' perspective, that is, developing systems that combine specific statistical tools and other methodologies for an activity. The aim of this article is to encourage them (engineers, researchers and managers) to develop a new way of thinking.
Abstract:
The new reactor concepts proposed in the Generation IV International Forum (GIF) are conceived to improve the use of natural resources, reduce the amount of high-level radioactive waste and excel in reliability and safe operation. Among these novel designs, sodium fast reactors (SFRs) stand out due to their technological feasibility, as demonstrated in several countries during the last decades. As part of the contribution of EURATOM to GIF, the CP-ESFR is a collaborative project with the objective, among others, of performing extensive analysis of safety issues involving renewed SFR demonstrator designs. The verification, by code-to-code comparison, of computational tools able to simulate plant behaviour under postulated accidental conditions was identified as a key point to ensure reactor safety. In this line, several organizations employed coupled neutronic and thermal-hydraulic system codes able to simulate complex and specific phenomena involving multi-physics studies adapted to this particular fast reactor technology. The "Introduction" of this paper discusses the framework of this study; the second section describes the envisaged plant design and the commonly agreed modelling guidelines. The third section presents a comparative analysis of the calculations performed by each organisation applying their models and codes to a commonly agreed transient, with the objective of harmonizing the models as well as validating the implementation of all relevant physical phenomena in the different system codes.