924 results for Research Subject Categories::TECHNOLOGY::Civil engineering and architecture::Other civil engineering and architecture
Abstract:
There is a broad research tradition telling us that teachers develop knowledge of their own, suited to teaching. That tradition has not taken into account the difference between being a woman teacher and being a man teacher as a source of meaning, as something significant. That is what the research generated in the field of the pedagogy of sexual difference, by academics and schoolteachers alike, does: research that names and gives value to female experience in school and in education, capable of creating theoretical knowledge that is neither abstract nor detached from experience, and that can and should be the fundamental point of reference both for one's own practice and for the training of future teachers. The presence of women in school is a source of richness. Recognising and naming their knowledge is necessary and urgent, because that knowledge is real and because it attends to dimensions that are fundamental to the development of each child's life and to social life: knowledge that prioritises the living (rather than the abstract), relationship (not competitiveness and confrontation), love (over indifference), primary politics (the word before the norm), and the open-ended relationship (instead of the instrumental relationship).
Abstract:
An application for promoting healthy lifestyles and preventing cancer https://play.google.com/store/apps/details?id=com.ipatimup.happy
Abstract:
This report provides an overview of the results of a collaborative research project titled "A model for research supervision of international students in engineering and information technology disciplines". This project aimed to identify factors influencing the success of culturally and linguistically diverse (CALD) higher degree research (HDR) students in the fields of Engineering and Information Technology at three Australian Universities: Queensland University of Technology, The University of Western Australia and Curtin University.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is by now a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Its best-known applications are probably the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
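As a toy illustration of the kind of POS-tagging module such applications embed, consider a minimal lexicon-lookup tagger. The lexicon, tag names and fallback policy below are invented for the example; they are not taken from any of the tools mentioned in the text:

```python
# A minimal, illustrative POS-tagging module: look each token up in a
# small hand-written lexicon and fall back to a default tag for
# unknown words. Real taggers are statistical and far more accurate.
LEXICON = {
    "the": "DET",
    "cat": "NOUN",
    "sat": "VERB",
    "on": "ADP",
    "mat": "NOUN",
}

def pos_tag(tokens, fallback="NOUN"):
    """Return (token, tag) pairs; unknown tokens get the fallback tag."""
    return [(tok, LEXICON.get(tok.lower(), fallback)) for tok in tokens]

print(pos_tag(["The", "cat", "sat", "on", "the", "mat"]))
```

Even this toy version shows why POS tagging sits at the bottom of most pipelines: every higher-level module needs the grammatical category of each unit before it can do anything else.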
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
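The compounding of errors across a tool pipeline can be sketched numerically. Assuming, purely for illustration, that the stages fail independently, the fraction of units that survive the whole pipeline without error is the product of the per-stage accuracies:

```python
# Illustrative error-propagation model for a pipeline of annotation
# tools, under the simplifying assumption that stage errors are
# independent of one another.
def pipeline_accuracy(stage_accuracies):
    """Probability that a unit is annotated correctly by every stage."""
    acc = 1.0
    for a in stage_accuracies:
        acc *= a
    return acc

# A 90%-accurate POS tagger feeding a 90%-accurate sense tagger
# already leaves only about 81% of units correct end to end.
print(pipeline_accuracy([0.90, 0.90]))  # roughly 0.81
```

This is why the text insists that low-level errors be minimised in advance: each additional stage multiplies them.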
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should help solve also the problems and limitations of linguistic annotation tools aforementioned.
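The schema-unification problem in (ii) can be sketched in a few lines: two tools with ad hoc tagsets become comparable once both are mapped onto a shared set of categories. The tagsets and mappings below are invented for illustration, only loosely reminiscent of Penn-Treebank-style and EAGLES-style tags, and are not any actual standard:

```python
# Two illustrative ad hoc tagsets, each mapped onto a common tagset.
# Once both tools' outputs are expressed in the shared categories,
# their annotations can be compared and combined.
PENN_LIKE_TO_COMMON = {"NN": "NOUN", "VB": "VERB", "DT": "DET"}
EAGLES_LIKE_TO_COMMON = {"Nc": "NOUN", "Vm": "VERB", "Da": "DET"}

def unify(tag, mapping):
    """Map a tool-specific tag onto the common tagset."""
    return mapping.get(tag, "UNKNOWN")

# Different surface tags, same underlying category:
print(unify("NN", PENN_LIKE_TO_COMMON) == unify("Nc", EAGLES_LIKE_TO_COMMON))
```

In the model proposed here, the role of this hand-written mapping is played by ontologies, which additionally formalise what the shared categories mean.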
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based <Subject, Predicate, Object> triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web.
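Requirement (3), structuring annotations as ontology-based <Subject, Predicate, Object> triples, can be sketched as follows. The namespace and property names are invented for illustration and are not OntoTag's actual vocabulary:

```python
# Illustrative conversion of one token annotation into RDF-style
# <Subject, Predicate, Object> triples. The namespace and the
# hasForm/hasCategory properties are hypothetical placeholders.
ONTO = "http://example.org/ling#"

def annotation_to_triples(token_id, word, category):
    """Express a (token, word form, category) annotation as triples."""
    return [
        (token_id, ONTO + "hasForm", word),
        (token_id, ONTO + "hasCategory", ONTO + category),
    ]

for s, p, o in annotation_to_triples("tok1", "cat", "Noun"):
    print(s, p, o)
```

Because every tag is an ontological term and every statement a triple, such annotations can be serialised directly in RDF(S) or OWL, which is what makes the model Semantic Web-ready.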
Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. As for the present work, this requirement involved reusing the linguistic annotation tools purchased by the OEG research group (http://www.oeg-upm.net), but solving beforehand (or, at least, minimising) some of their limitations. Therefore, this model had to minimise these limitations by means of the integration of several linguistic annotation tools into a common architecture. Since this integration required the interoperation of tools and their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that the formalisation of the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag).
Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance. This entailed the unification (i.e., standardisation) of their tags (both their representation and their meaning), and of their format or syntax. Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary, and structured by means of ontology-based <Subject, Predicate, Object> triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and the integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work.
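Once same-level annotations share a unified tagset, they can be combined so that the tools correct one another's errors. The text does not commit to a specific combination method; per-token majority voting is one simple possibility, sketched here:

```python
# Hedged sketch of combining same-level annotations: after tagset
# unification, take the majority tag per token across all tools.
from collections import Counter

def combine(annotations):
    """annotations: list of tag sequences, one per tool, same length."""
    combined = []
    for tags in zip(*annotations):
        # Most common tag for this token position wins.
        combined.append(Counter(tags).most_common(1)[0][0])
    return combined

# Two tools agree on "NOUN" for the second token, outvoting the third.
print(combine([["DET", "NOUN"], ["DET", "NOUN"], ["DET", "VERB"]]))
```

Voting only makes sense after unification: without a shared tagset, agreement between tools cannot even be detected.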
All these goals, aims and objectives could be re-stated more clearly as follows:
Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating to linguistic annotation.
Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that helps respect the
Abstract:
Shipping list no.: 88-100-P.
Abstract:
Automation technology can provide construction firms with a number of competitive advantages. Technology strategy guides a firm's approach to all technology, including automation. Engineering management educators, researchers, and construction industry professionals need an improved understanding of how technology affects results, and of how to better target investments to improve competitive performance. A more formal approach to the concept of technology strategy can benefit construction managers in their efforts to remain competitive in increasingly hostile markets. This paper recommends consideration of five specific dimensions of technology strategy within the overall parameters of market conditions, firm capabilities and goals, and stage of technology evolution. Examples of the application of this framework in the formulation of technology strategy are provided for CAD applications, co-ordinated positioning technology, and advanced falsework and formwork mechanisation to support construction field operations. Results from this continuing line of research can assist managers in making complex and difficult decisions about reengineering construction processes and using new construction technology, and can benefit future researchers by providing new tools for analysis. By managing technology to best suit the existing capabilities of their firm, and by addressing market forces, engineering managers can better face the increasingly competitive environment in which they operate.
Abstract:
Pragmatic construction professionals, accustomed to intense price competition and focused on the bottom line, have difficulty justifying investments in advanced technology. Researchers and industry professionals need improved tools to analyze how technology affects the performance of the firm. This paper reports the results of research to begin answering the question, “does technology matter?” The researchers developed a set of five dimensions for technology strategy, collected information regarding these dimensions along with four measures of competitive performance in five bridge construction firms, and analyzed the information to identify relationships between technology strategy and competitive performance. Three technology strategy dimensions—competitive positioning, depth of technology strategy, and organizational fit—showed particularly strong correlations with the competitive performance indicators of absolute growth in contract awards and contract award value per technical employee. These findings indicate that technology does matter. The research also provides ways to analyze options for approaching technology and ways to relate technology to competitive performance for use by managers. It also provides a valuable set of research measures for technology strategy.
Abstract:
The indecision surrounding the definition of Technology extends to the classroom, as not knowing what a subject "is" affects how it is taught. Similarly, its relative newness (and consequent lack of habitus in school settings) means that it is still struggling to find its own place in the curriculum as well as to resolve its relationship with more established subject domains, particularly Science and Mathematics. The guidance from syllabus documents points to open-ended, student-directed projects, whereas extant studies indicate a more common experience of teacher-directed activities and an emphasis on product over process. There are issues too for researchers in documenting classroom observations and in analysing teacher practice in new learning environments. This paper presents a framework for defining and mapping classroom practice and for attempting to describe the social practice in the Technology classroom. The framework is a bricolage which draws on contemporary research. More formally, the development of the framework is consonant with the aim of design-based research to develop a flexible, adaptive and generalisable theory to better understand a teaching domain where promise is not seen to match current reality. The framework may also inform emergent approaches to STEM (Science, Technology, Engineering and Mathematics) in education.
Abstract:
This is the project report of a leadership project undertaken jointly by the Queensland University of Technology, the University of Technology Sydney, and Monash University. Specific project objectives were: (i) to build leadership capacity in teaching and learning, and to improve teaching quality, in the ICT and Engineering disciplines at three leading Australian universities; and (ii) to facilitate the transference of research leadership to teaching and learning (T&L) leadership, and to disseminate the transference model developed through the project within the Engineering and ICT domains to other disciplines and universities.
Abstract:
This study questions how the categories of security, education and literacy were brought together as related elements of a whole-of-government strategy in the production of civil society. Drawing on an analysis of key political texts, the study argues that the categories of education and literacy have been used in diverse ways in the production of national, social, economic and geopolitical security interests. As dialogue about security has intensified, rationalisations about the national interest have engaged notions of security, leading to the legitimation of a diverse set of policy instruments strategically used to contain the rise of complex social forces and protect homogeneous cultural values.
Abstract:
The State Key Laboratory of Computer Science (SKLCS) is committed to basic research in computer science and software engineering. The research topics of the laboratory include: concurrency theory, theory and algorithms for real-time systems, formal specifications based on context-free grammars, semantics of programming languages, model checking, automated reasoning, logic programming, software testing, software process improvement, middleware technology, parallel algorithms and parallel software, computer graphics and human-computer interaction. This paper describes these topics in some detail and summarizes some results obtained in recent years.