Abstract:
Web services are software components designed to support interoperable machine-to-machine interaction over a network through the exchange of SOAP messages. Since the underlying technology is independent of any specific programming language, Web services can be effectively used to interconnect business processes across different organizations. However, a standard way of representing such interconnections has not yet emerged and remains the subject of ongoing debate.
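As a concrete illustration of the SOAP message exchange such services rely on, a minimal SOAP 1.1 request envelope might look like the following (the service namespace, operation and element names are hypothetical, not taken from the paper):

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header/>
  <soap:Body>
    <!-- Hypothetical operation on a hypothetical service -->
    <GetQuote xmlns="http://example.org/stock">
      <Symbol>ACME</Symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>
```

Because the envelope is plain XML, any language with an XML stack can produce or consume it, which is what makes the cross-organization interconnection language-independent.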
Abstract:
In this paper, we presented an automatic system for precise urban road model reconstruction based on aerial images with high spatial resolution. The proposed approach consists of two steps: i) road surface detection and ii) road pavement marking extraction. In the first step, support vector machine (SVM) was utilized to classify the images into two categories: road and non-road. In the second step, road lane markings are further extracted on the generated road surface based on 2D Gabor filters. The experiments using several pan-sharpened aerial images of Brisbane, Queensland have validated the proposed method.
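The 2D Gabor filters used for marking extraction can be sketched as follows. This is a generic construction of the real part of a Gabor kernel in NumPy, with illustrative parameter values rather than the paper's actual settings:

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, psi=0.0):
    """Real part of a 2D Gabor filter: a Gaussian envelope
    modulating a cosine wave oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates to the filter orientation
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + y_t ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_t / lam + psi)
    return envelope * carrier
```

Convolving the road-surface image with a bank of such kernels at several orientations responds strongly to elongated bright stripes such as lane markings.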
Abstract:
As more and more information becomes available on the Web, finding quality, reliable information is becoming harder. To help solve this problem, Web search models need to incorporate users' cognitive styles. This paper reports preliminary results from a user study exploring the relationships between Web users' searching behavior and their cognitive styles. The data were collected using a questionnaire, Web search logs and a think-aloud strategy. The preliminary findings reveal a number of cognitive factors, such as information searching processes, results evaluation and cognitive style, that influence users' Web searching behavior. Among these factors, the cognitive style of the user was observed to have the greatest impact. Based on the key findings, a conceptual model of Web searching and cognitive styles is presented.
Abstract:
Expanding human chondrocytes in vitro while maintaining their ability to form cartilage remains a key challenge in cartilage tissue engineering. One promising approach to address this is to use microcarriers as substrates for chondrocyte expansion. While microcarriers have shown beneficial effects for the expansion of animal and ectopic human chondrocytes, their utility has not been determined for freshly isolated adult human articular chondrocytes. Thus, we investigated the proliferation and subsequent chondrogenic differentiation of these clinically relevant cells on porous gelatin microcarriers and compared them to cells expanded in traditional monolayers. Chondrocytes attached to microcarriers within 2 days and remained viable over 4 weeks of culture in spinner flasks. Cells on microcarriers exhibited a spread morphology and initially proliferated faster than cells in monolayer culture; however, with prolonged expansion they were less proliferative. Cells expanded for 1 month and enzymatically released from microcarriers formed cartilaginous tissue in micromass pellet cultures, which was similar to tissue formed by monolayer-expanded cells. Cells left attached to microcarriers did not exhibit chondrogenic capacity. Culture conditions, such as microcarrier material, oxygen tension, and mechanical stimulation, require further investigation to facilitate the efficient expansion of clinically relevant human articular chondrocytes that maintain chondrogenic potential for cartilage regeneration applications.
Abstract:
The use of appropriate features to characterize an output class or object is critical for all classification problems. This paper evaluates the capability of several spectral and texture features for object-based vegetation classification at the species level using airborne high-resolution multispectral imagery. Image-objects, as the basic classification units, were generated through image segmentation. Statistical moments extracted from the original spectral bands and vegetation index images are used as feature descriptors for image-objects (i.e. tree crowns). Several state-of-the-art texture descriptors, such as the Gray-Level Co-occurrence Matrix (GLCM), Local Binary Patterns (LBP) and its extensions, are also extracted for comparison purposes. A Support Vector Machine (SVM) is employed for classification in the object-feature space. The experimental results showed that incorporating spectral vegetation indices can improve classification accuracy, yielding better results than the original spectral bands alone, and that using moments of the Ratio Vegetation Index obtained the highest average classification accuracy in our experiments. The experiments also indicate that the spectral moment features outperform, or are at least comparable with, the state-of-the-art texture descriptors in terms of classification accuracy.
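As a rough illustration of one of the texture descriptors compared here, a minimal GLCM computation with a single Haralick contrast feature might look like this (the pixel offset, number of gray levels and choice of feature are illustrative, not the paper's configuration):

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one pixel offset,
    normalized so its entries sum to 1."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = image.shape
    for i in range(h - dy):
        for j in range(w - dx):
            # Count the pair (pixel value, value at the offset)
            m[image[i, j], image[i + dy, j + dx]] += 1
    return m / m.sum()

def glcm_contrast(m):
    """Haralick contrast: sum over p(i, j) * (i - j)^2."""
    levels = m.shape[0]
    i, j = np.mgrid[0:levels, 0:levels]
    return float((m * (i - j) ** 2).sum())
```

In practice the GLCM is computed per image-object over several offsets and directions, and a handful of such statistics (contrast, energy, homogeneity, ...) form the texture feature vector.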
Abstract:
A good object representation, or object descriptor, is one of the key issues in object-based image analysis. To effectively fuse color and texture into a unified descriptor at the object level, this paper presents a novel method for feature fusion. A color histogram and uniform local binary patterns are extracted from arbitrarily shaped image-objects, and kernel principal component analysis (kernel PCA) is employed to find nonlinear relationships among the extracted color and texture features. A maximum likelihood approach is used to estimate the intrinsic dimensionality, which is then used as a criterion for automatic selection of an optimal feature set from the fused features. The proposed method is evaluated using SVM as the benchmark classifier and is applied to object-based vegetation species classification using high spatial resolution aerial imagery. Experimental results demonstrate that substantial improvement can be achieved by using the proposed feature fusion method.
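The kernel PCA step at the heart of the fusion can be sketched in NumPy as below. The RBF kernel, the gamma value and projecting the training points themselves are generic kernel PCA choices, not necessarily those of the paper:

```python
import numpy as np

def rbf_kernel_pca(X, n_components, gamma=1.0):
    """Kernel PCA with an RBF kernel: build and center the kernel
    matrix, then project onto its leading eigenvectors."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    one = np.ones((n, n)) / n
    # Center the kernel matrix in feature space
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)          # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    # Projections of the training points onto the components
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))
```

In the paper's setting, X would be the concatenated color-histogram and LBP features of each image-object, and n_components would be set to the intrinsic dimensionality estimated by the maximum likelihood approach.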
Abstract:
Information overload and mismatch are two fundamental problems affecting the effectiveness of information filtering systems. Even though both term-based and pattern-based approaches have been proposed to address these problems, neither approach alone provides a satisfactory solution. This paper presents a novel two-stage information filtering model which combines the merits of term-based and pattern-based approaches to effectively filter large volumes of information. In particular, the first filtering stage is supported by a novel rough analysis model which efficiently removes a large number of irrelevant documents, thereby addressing the overload problem. The second filtering stage is empowered by a semantically rich pattern taxonomy mining model which effectively fetches incoming documents according to the specific information needs of a user, thereby addressing the mismatch problem. The experimental results based on the RCV1 corpus show that the proposed two-stage filtering model significantly outperforms both the term-based and pattern-based information filtering models.
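The two-stage idea, a cheap screen followed by a richer pattern score, can be illustrated with a toy filter. The term-overlap screen and weighted set-inclusion patterns below are simplifications standing in for the rough analysis and pattern taxonomy models, not the actual algorithms:

```python
def two_stage_filter(docs, topic_terms, patterns, threshold=1):
    """Illustrative two-stage filter.
    Stage 1 discards documents sharing too few terms with the topic;
    stage 2 ranks survivors by weighted pattern matches."""
    # Stage 1: cheap term-overlap screen to cut the volume
    survivors = [d for d in docs
                 if len(set(d.split()) & topic_terms) >= threshold]

    # Stage 2: richer pattern score on the reduced set; a pattern
    # matches when all of its terms appear in the document
    def score(d):
        words = set(d.split())
        return sum(w for pat, w in patterns if pat <= words)

    return sorted(survivors, key=score, reverse=True)
```

The point of the split is efficiency: the expensive pattern matching only runs on the small set that passes the first stage.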
Abstract:
Much debate in media and communication studies is based on an exaggerated opposition between the digital sublime and the digital abject: overly enthusiastic optimism versus determined pessimism over the potential of new technologies. This inhibits the discipline's claims to provide rigorous insight into industry and social change which is, after all, continuous. Instead of having to decide one way or the other, we need to ask how we study the process of change. This article examines the impact of online distribution in the film industry, particularly addressing the question of rates of change. Are there genuinely new players disrupting the established oligopoly, and if so with what effect? Is there evidence of disruption to, and innovation in, business models? Has cultural change been forced on the incumbents? Outside mainstream Hollywood, where are the new opportunities and the new players? What is the situation in Australia?
Abstract:
With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: What is the basic unit of semantic representation? How can the semantic unit be linked to high-level image knowledge? How can contextual information be stored and utilized for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next-generation Web, aims at making the content of any type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industry are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, is still a worthwhile investigation. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as below. 1) Salient object extraction.
A salient object serves as the basic unit in image semantic extraction as it captures the common visual properties of the objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework. 2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts. 3) Image object annotation. In this phase, each object is labelled with a mid-level concept from the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts. 4) Scene semantic annotation. The scene semantic extraction phase determines the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies. Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene type more frequently. The scene configuration is represented in a probabilistic graph model, and probabilistic inference is employed to calculate the scene type given an annotated image.
To evaluate the proposed methods, a series of experiments was conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
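The probabilistic scene inference in phase 4 can be sketched with a naive-Bayes-style model. The independence assumption, the priors and the likelihood numbers below are illustrative stand-ins for the thesis's probabilistic graph model:

```python
import math

def infer_scene(objects, prior, likelihood):
    """Naive-Bayes style scene inference:
    P(scene | objects) proportional to P(scene) * prod P(object | scene).
    Works in log space; unseen objects get a small smoothing probability."""
    log_scores = {}
    for scene, p in prior.items():
        log_p = math.log(p)
        for obj in objects:
            log_p += math.log(likelihood[scene].get(obj, 1e-6))
        log_scores[scene] = log_p
    # Normalize back to a probability distribution over scene types
    m = max(log_scores.values())
    exp = {s: math.exp(v - m) for s, v in log_scores.items()}
    z = sum(exp.values())
    return {s: v / z for s, v in exp.items()}
```

A scene configuration learned from annotated images would supply the likelihoods (how often each object co-exists with each scene type), and inference then picks the scene type with the highest posterior given an annotated image.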
Abstract:
In recent years, several scientific Workflow Management Systems (WfMSs) have been developed with the aim of automating large-scale scientific experiments. Many offerings have been developed, but none of them has yet been promoted as an accepted standard. In this paper we propose a pattern-based evaluation of three of the most widely used scientific WfMSs: Kepler, Taverna and Triana. The aim is to compare them with traditional business WfMSs, emphasizing the strengths and deficiencies of both kinds of systems. Moreover, a set of new patterns is defined from the analysis of the three considered systems.
Abstract:
The NIR spectra of reichenbachite, scholzite and parascholzite have been studied at 298 K. The spectra of the minerals are different, in line with variations in composition and crystal structure. Cation substitution effects are significant in their electronic spectra, and three distinctly different electronic transition bands are observed in the near-infrared spectra at high wavenumbers in the 12000-7600 cm-1 spectral region. The electronic spectrum of reichenbachite is characterised by Cu(II) transition bands at 9755 and 7520 cm-1. A broad spectral feature is observed for the ferrous ion in the 12000-9000 cm-1 region in both scholzite and parascholzite. Some similarities in the vibrational spectra of the three phosphate minerals are observed, particularly in the OH stretching region. The observation of a strong band at 5090 cm-1 indicates strong hydrogen bonding in the structure of the dimorphs scholzite and parascholzite. The three phosphates exhibit overlapping bands in the 4800-4000 cm-1 region resulting from combinations of vibrational modes of (PO4)3- units.
Abstract:
Purpose: In the global knowledge economy, investment in knowledge-intensive industries and information and communication technology (ICT) infrastructures is seen as a significant factor in improving the overall socio-economic fabric of cities. Consequently, knowledge-based urban development (KBUD) has become a new paradigm in urban planning and development for increasing the welfare and competitiveness of cities and regions. The paper discusses the critical connections between KBUD strategies and knowledge-intensive industries and ICT infrastructures. In particular, it investigates the application of the KBUD concept by discussing one of South East Asia's large-scale manifestations of KBUD: Malaysia's Multimedia Super Corridor.

Design/methodology/approach: The paper provides a review of the KBUD concept and develops a knowledge-based urban development assessment framework to provide a clearer understanding of the development and evolution of KBUD manifestations. Subsequently the paper investigates the implementation of the KBUD concept within the Malaysian context, and particularly the Multimedia Super Corridor (MSC).

Originality/value: The paper, with its KBUD assessment framework, scrutinises Malaysia's experience, providing an overview of the MSC project and a discussion of the case findings. The development and evolution of the MSC are viewed with regard to KBUD policy implementation, infrastructural implications, and the agencies involved in the development and management of the MSC.

Practical implications: The emergence of the knowledge economy, together with the issues of globalisation and rapid urbanisation, has created an urgent need for urban planners to explore new ways of strategising planning and development that encompass the needs and requirements of the knowledge economy and society. In light of the literature and the MSC case findings, the paper provides generic recommendations on the orchestration of knowledge-based urban development for other cities and regions seeking to transform to the knowledge economy.
Abstract:
In recent years, ocean scientists have started to employ many new forms of technology as integral pieces in oceanographic data collection for the study and prediction of complex and dynamic ocean phenomena. One area of technological advancement in ocean sampling is the use of Autonomous Underwater Vehicles (AUVs) as mobile sensor platforms. Currently, most AUV deployments execute a lawnmower-type pattern or repeated transects for surveys and sampling missions. An advantage of these missions is that the regularity of the trajectory design generally makes it easier to extract the exact path of the vehicle via post-processing. However, if the deployment region for the pattern is poorly selected, the AUV can entirely miss collecting data during an event of specific interest. Here, we consider an innovative technology toolchain to assist in determining the deployment location and executed paths for AUVs to maximize scientific information gain about dynamically evolving ocean phenomena. In particular, we provide an assessment of computed paths based on ocean model predictions designed to put AUVs in the right place at the right time to gather data related to the understanding of algal and phytoplankton blooms.
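The "right place at the right time" idea can be caricatured with a toy greedy planner over a grid of predicted bloom intensity. This is not the toolchain's planner, just a minimal sketch under made-up scoring assumptions:

```python
def greedy_survey_path(field, start, steps):
    """Toy waypoint planner over a predicted-value grid: from the
    current cell, repeatedly move to the 4-neighbor with the highest
    predicted value (e.g. modeled bloom intensity)."""
    rows, cols = len(field), len(field[0])
    path = [start]
    r, c = start
    for _ in range(steps):
        neighbors = [(r + dr, c + dc)
                     for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= r + dr < rows and 0 <= c + dc < cols]
        # Greedily chase the highest-valued neighboring cell
        r, c = max(neighbors, key=lambda rc: field[rc[0]][rc[1]])
        path.append((r, c))
    return path
```

Contrast this with a fixed lawnmower pattern: the greedy path adapts to the ocean model's prediction, but can still miss the event entirely if that prediction is wrong, which is why assessing computed paths against model skill matters.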
Abstract:
Increasingly, large amounts of public and private money are being invested in education and as a result, schools are becoming more accountable to stakeholders for this financial input. In terms of the curriculum, governments worldwide are frequently tying school funding to students' and schools' academic performances, which are monitored through high-stakes testing programs. To accommodate the resultant pressures from these testing initiatives, many principals are re-focussing their school's curriculum on the testing requirements. Such a re-focussing, which was examined critically in this thesis, constituted an externally facilitated rapid approach to curriculum change. In line with previously enacted change theories and recommendations from these, curriculum change in schools has tended to be a fairly slow, considered, collaborative process that is facilitated internally by a deputy-principal (curriculum). However, theoretically based research has shown that such a process has often proved to be difficult and very rarely successful. The present study reports and theorises the experiences of an externally facilitated process that emerged from a practitioner model of change. This case study of the development of the controlled rapid approach to curriculum change began by establishing the reasons three principals initiated curriculum change and why they then engaged an outsider to facilitate the process. It also examined this particular change process from the perspectives of the research participants. The investigation led to the revision of the practitioner model as used in the three schools and challenged the current thinking about the process of school curriculum change. The thesis aims to offer principals and the wider education community an alternative model for consideration when undertaking curriculum change.
Finally, the thesis warns that, in the longer term, the application of the study's revised model (the Controlled Rapid Approach to Curriculum Change [CRACC] Model) may have less than desirable educational consequences.
Abstract:
Efficient and effective urban management systems for Ubiquitous Eco Cities require intelligent and integrated management mechanisms. This integration involves bringing together economic, socio-cultural and urban development with a well-orchestrated, transparent and open decision-making system and the necessary infrastructure and technologies. In Ubiquitous Eco Cities, telecommunication technologies play an important role in monitoring and managing activities via wired and wireless networks. In particular, technology convergence creates new ways in which information and telecommunication technologies are used and forms the backbone of urban management. The 21st century is an era of converged information, in which people are able to access a variety of services, including internet and location-based services, through multi-functional devices; this convergence provides new opportunities in the management of Ubiquitous Eco Cities. This chapter discusses developments in telecommunication infrastructure and trends in convergence technologies and their implications for the management of Ubiquitous Eco Cities.