927 results for Leukocyte extract
Abstract:
Experience plays an important role in building management. “How often will this asset need repair?” or “How much time is this repair going to take?” are the types of questions that project and facility managers face daily in planning activities. Failure or success in developing good schedules, budgets and other project management tasks depends on the project manager's ability to obtain reliable information to answer these types of questions. Young practitioners tend to rely on information that is based on regional averages and provided by publishing companies, whereas experienced project managers tend to rely heavily on personal experience. Another aspect of building management is that many practitioners are seeking to improve available scheduling algorithms, estimating spreadsheets and other project management tools. Such “micro-scale” research is important in providing the tools required for the project manager's tasks. However, even with such tools, low-quality input information will produce inaccurate schedules and budgets as output. Thus, it is also important to take a broader, more “macro-scale” approach to research. Recent trends show that the Architectural, Engineering and Construction (AEC) industry is experiencing explosive growth in its capabilities to generate and collect data. There is a great deal of valuable knowledge that can be obtained from the appropriate use of this data, and the need has therefore arisen to analyse this increasing amount of available data. Data mining can be applied as a powerful tool to extract relevant and useful information from this sea of data. Knowledge Discovery in Databases (KDD) and Data Mining (DM) are tools that allow the identification of valid, useful, and previously unknown patterns, so that large amounts of project data may be analysed. These technologies combine techniques from machine learning, artificial intelligence, pattern recognition, statistics, databases, and visualization to automatically extract concepts, interrelationships, and patterns of interest from large databases. The project involves the development of a prototype tool to support facility managers, building owners and designers. This industry-focused report presents the AIMM™ prototype system and documents which data mining techniques can be applied, how they are applied, the results of their application and the benefits gained from the system. The AIMM™ system is capable of searching for useful patterns of knowledge and correlations within existing building maintenance data to support decision making about future maintenance operations. The AIMM™ prototype system is applied to building models and their maintenance data (supplied by industry partners) using various data mining algorithms, and the maintenance data is analysed using interactive visual tools. The application of the AIMM™ prototype system to help improve maintenance management and the building life cycle includes: (i) data preparation and cleaning, (ii) integrating meaningful domain attributes, (iii) performing extensive data mining experiments using visual analysis (stacked histograms), classification and clustering techniques, and association rule mining algorithms such as “Apriori”, and (iv) filtering and refining the data mining results, including the potential implications of these results for improving maintenance management.
Maintenance data for a variety of asset types were selected for demonstration, with the aim of discovering meaningful patterns to assist facility managers in strategic planning and to provide a knowledge base to help shape future requirements and design briefing. With the prototype system developed here, positive and interesting results regarding patterns and structures in the data have been obtained.
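As a rough illustration of the kind of association mining this abstract describes, the sketch below runs a minimal Apriori-style frequent-itemset pass over hypothetical maintenance work orders. The record attributes, support threshold and the naive candidate generation are all illustrative assumptions, not the AIMM™ system's actual implementation.

```python
from itertools import combinations
from collections import defaultdict

def apriori(transactions, min_support=0.3):
    """Return frequent itemsets (as frozensets) mapped to their support."""
    n = len(transactions)
    counts = defaultdict(int)
    for t in transactions:                       # count 1-itemsets
        for item in t:
            counts[frozenset([item])] += 1
    frequent = {s: c / n for s, c in counts.items() if c / n >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        # Naive candidate generation: all k-combinations of surviving items
        # (the classic Apriori join/prune step would be more efficient).
        items = set().union(*frequent)
        candidates = [frozenset(c) for c in combinations(sorted(items), k)]
        counts = defaultdict(int)
        for t in transactions:
            tset = set(t)
            for cand in candidates:
                if cand <= tset:
                    counts[cand] += 1
        frequent = {s: c / n for s, c in counts.items() if c / n >= min_support}
        result.update(frequent)
        k += 1
    return result

# Hypothetical maintenance records: each transaction lists attributes of one work order.
records = [
    {"asset:AHU", "fault:bearing", "season:summer"},
    {"asset:AHU", "fault:bearing", "season:winter"},
    {"asset:pump", "fault:seal", "season:summer"},
    {"asset:AHU", "fault:bearing", "season:summer"},
]
for itemset, support in sorted(apriori(records, 0.5).items(), key=lambda kv: -kv[1]):
    print(sorted(itemset), round(support, 2))
```

Itemsets that survive the support threshold (e.g. asset:AHU together with fault:bearing) are the sort of co-occurrence patterns a facility manager could inspect when planning future maintenance.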
Abstract:
Image annotation is a significant step towards semantic-based image retrieval. Ontologies are a popular approach to semantic representation and have been intensively studied for multimedia analysis. However, relations among concepts are seldom used to extract higher-level semantics. Moreover, ontology inference is often crisp. This paper aims to enable sophisticated semantic querying of images, and thus contributes 1) an ontology framework containing both visual and contextual knowledge, and 2) a probabilistic inference approach to reason about high-level concepts based on different sources of information. Experiments on a natural scene database drawn from the LabelMe database show encouraging results.
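For a flavour of probabilistic inference over ontology concepts, here is a minimal noisy-OR sketch that combines visual and contextual detector outputs into a higher-level concept score. The concept names, link weights and the noisy-OR model itself are illustrative assumptions rather than the paper's actual framework.

```python
def noisy_or(evidence_probs, link_weights):
    """Noisy-OR combination: probability that a high-level concept holds
    given probabilistic evidence from its related lower-level concepts."""
    p_not = 1.0
    for p, w in zip(evidence_probs, link_weights):
        p_not *= 1.0 - w * p
    return 1.0 - p_not

# Hypothetical ontology fragment: "beach" is linked to the visual
# concepts "sand" and "sea" and the contextual concept "outdoor".
evidence = {"sand": 0.8, "sea": 0.6, "outdoor": 0.9}   # detector outputs (assumed)
weights  = {"sand": 0.7, "sea": 0.7, "outdoor": 0.3}   # link strengths (assumed)

p_beach = noisy_or([evidence[c] for c in evidence], [weights[c] for c in evidence])
print(f"P(beach | evidence) approx {p_beach:.2f}")
```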
Abstract:
To date, mesenchymal stem cells (MSCs) from various tissues have been reported, but the yield and differentiation potential of different tissue-derived MSCs is still not clear. This study was undertaken in an attempt to investigate the multilineage stem cell potential of bone and cartilage explant cultures in comparison with bone marrow derived mesenchymal stem cells (BMSCs). The results showed that the surface antigen expression of tissue-derived cells was consistent with that of mesenchymal stem cells, lacking the haematopoietic and common leukocyte markers (CD34, CD45) while expressing markers related to adhesion (CD29, CD166) and stem cells (CD90, CD105). The tissue-derived cells were able to differentiate into osteoblast, chondrocyte and adipocyte lineage pathways when stimulated under the appropriate differentiating conditions. However, compared with BMSCs, tissue-derived cells showed less capacity for multilineage differentiation when the level of differentiation was assessed in monolayer culture by analysing the expression of tissue-specific genes by reverse transcription polymerase chain reaction (RT-PCR) and histology. In high-density pellet cultures, tissue-derived cells were able to differentiate into chondrocytes, expressing chondrocyte markers such as proteoglycans, type II collagen and aggrecan. Taken together, these results indicate that cells derived from tissue explant cultures retained a certain degree of the differentiation properties of MSCs in vitro.
Abstract:
Monitoring unused or dark IP addresses offers opportunities to extract useful information about both on-going and new attack patterns. In recent years, different techniques have been used to analyse such traffic, including sequential analysis, where a change in traffic behaviour, for example a change in mean, is used as an indication of malicious activity. Change points themselves say little about the detected change; further data processing is necessary to extract useful information and to identify the exact cause of the detected change, a task made difficult by the size and nature of the observed traffic. In this paper, we address the problem of analysing a large volume of such traffic by correlating change points identified in different traffic parameters. The significance of the proposed technique is two-fold. First, it automatically extracts information related to change points by correlating change points detected across multiple traffic parameters. Second, it validates a detected change point through the simultaneous presence of another change point in a different parameter. Using a real network trace collected from unused IP addresses, we demonstrate that the proposed technique enables us not only to validate the change points but also to extract useful information about their causes.
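A minimal sketch of the idea, assuming simple CUSUM detectors and a fixed correlation window (neither of which is claimed to match the paper's actual method): change points found independently in two darknet traffic parameters are cross-validated by requiring a second change within a few time bins.

```python
import numpy as np

def cusum_change_points(x, threshold=5.0, drift=0.5):
    """Two-sided CUSUM detector: returns indices where the cumulative
    deviation from the running mean exceeds `threshold`."""
    mean = x[0]
    s_pos = s_neg = 0.0
    changes = []
    for i, v in enumerate(x[1:], start=1):
        s_pos = max(0.0, s_pos + v - mean - drift)
        s_neg = max(0.0, s_neg - v + mean - drift)
        if s_pos > threshold or s_neg > threshold:
            changes.append(i)
            s_pos = s_neg = 0.0
            mean = v                         # restart after a detection
        else:
            mean += (v - mean) / (i + 1)     # running mean update
    return changes

def correlate(changes_a, changes_b, window=3):
    """A change point in parameter A is 'validated' if parameter B also
    changes within +/- `window` time bins."""
    return [c for c in changes_a if any(abs(c - d) <= window for d in changes_b)]

# Hypothetical per-minute darknet counters: packet counts and distinct source counts.
rng = np.random.default_rng(0)
packets = np.concatenate([rng.poisson(20, 50), rng.poisson(60, 50)]).astype(float)
sources = np.concatenate([rng.poisson(5, 52), rng.poisson(25, 48)]).astype(float)

cp_packets = cusum_change_points(packets, threshold=30)
cp_sources = cusum_change_points(sources, threshold=15)
print("validated change points:", correlate(cp_packets, cp_sources))
```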
Abstract:
The materials presented here are intended to: a) accompany the document Supervisor Resource and b) provide technology supervisors with materials that may be readily shared with students. These resources are not designed to be distributed to students without contextualization; rather, they are intended for use in workshops or in discussions between supervisors and students. As authors, we anticipate that supervisors or workshop facilitators are most likely to extract individual resources of interest for particular occasions. The materials have been developed from conversations with supervisors from the technology disciplines.
Abstract:
Crash risk is the statistical probability of a crash. Its assessment can be performed through ex post statistical analysis or in real time with on-vehicle systems. These systems can be cooperative. Cooperative Vehicle-Infrastructure Systems (CVIS) are a developing research avenue in the automotive industry worldwide. This paper provides a survey of existing CVIS systems and of methods to assess crash risk with them. It describes the advantages of cooperative systems over non-cooperative systems. A sample of cooperative crash risk assessment systems is analysed to extract vulnerabilities according to three criteria: market penetration, over-reliance on GPS and broadcasting issues. It shows that cooperative risk assessment systems are still in their infancy and require further development to provide their full benefits to road users.
Abstract:
Objective: To summarise the extent to which narrative text fields in administrative health data are used to gather information about the event resulting in presentation to a health care provider for treatment of an injury, and to highlight best practice approaches to conducting narrative text interrogation for injury surveillance purposes.
Design: Systematic review.
Data sources: Electronic databases searched included CINAHL, Google Scholar, Medline, Proquest, PubMed and PubMed Central. Snowballing strategies were employed by searching the bibliographies of retrieved references to identify relevant associated articles.
Selection criteria: Papers were selected if the study used a health-related database and if the study objectives were to a) use text fields to identify injury cases or to extract additional information on injury circumstances not available from coded data, b) use text fields to assess the accuracy of coded data fields for injury-related cases, or c) describe methods/approaches for extracting injury information from text fields.
Methods: The papers identified through the search were independently screened by two authors for inclusion, resulting in 41 papers selected for review. Due to heterogeneity between studies, meta-analysis was not performed.
Results: The majority of papers reviewed focused on describing injury epidemiology trends using coded data and text fields to supplement coded data (28 papers), with these studies demonstrating the value of text data for providing more specific information beyond what had been coded, enabling case selection or providing circumstantial information. Caveats were expressed in terms of the consistency and completeness of recording of text information, resulting in underestimates when using these data. Four coding validation papers were reviewed, with these studies showing the utility of text data for validating and checking the accuracy of coded data. Seven studies (9 papers) described methods for interrogating injury text fields for the systematic extraction of information, with a combination of manual and semi-automated methods used to refine and develop algorithms for the extraction and classification of coded data from text. Quality assurance approaches to assessing the robustness of the methods for extracting text data were only discussed in 8 of the epidemiology papers and 1 of the coding validation papers. All of the text interrogation methodology papers described systematic approaches to ensuring the quality of the approach.
Conclusions: Manual review and coding approaches, text search methods, and statistical tools have been utilised to extract data from narrative text and translate it into useable, detailed injury event information. These techniques can be, and have been, applied to administrative datasets to identify specific injury types and add value to previously coded injury datasets. Only a few studies thoroughly described the methods used for text mining, and fewer than half of the studies reviewed used or described quality assurance methods for ensuring the robustness of the approach. New techniques utilising semi-automated computerised approaches and Bayesian/clustering statistical methods offer the potential to further develop and standardise the analysis of narrative text for injury surveillance.
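As a small illustration of the text search methods the review covers, the sketch below flags injury cases in hypothetical narrative fields using a keyword/regex pattern; both the narratives and the pattern are invented for illustration only.

```python
import re

# Hypothetical narrative text fields from an emergency-department dataset.
narratives = [
    "PT FELL FROM LADDER WHILE CLEANING GUTTERS, LANDED ON CONCRETE",
    "DOG BITE TO LEFT HAND WHILE SEPARATING FIGHTING DOGS",
    "BURN TO FOREARM FROM HOT OIL WHILE COOKING",
]

# Keyword pattern for one injury mechanism of interest (falls involving ladders).
pattern = re.compile(r"\b(fell|fall)\b.*\bladder\b|\bladder\b.*\b(fell|fall)\b", re.I)

for i, text in enumerate(narratives):
    if pattern.search(text):
        print(f"record {i}: possible ladder fall -> {text}")
```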
Abstract:
In this paper, we discuss our participation in the INEX 2008 Link-the-Wiki track. We utilized a sliding-window-based algorithm to extract frequent terms and phrases. Using the extracted phrases and terms as descriptive vectors, the anchors and relevant links (both incoming and outgoing) are recognized efficiently.
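A minimal sketch of a sliding-window frequent term/phrase extractor in the spirit described; the window size, tokenisation and the toy passage are assumptions, not the track submission's actual parameters.

```python
from collections import Counter

def frequent_phrases(text, max_window=3, min_count=2):
    """Slide a window of 1..max_window tokens over the text and count
    every phrase that appears; keep those occurring >= min_count times."""
    tokens = text.lower().split()
    counts = Counter()
    for n in range(1, max_window + 1):
        for i in range(len(tokens) - n + 1):
            counts[" ".join(tokens[i:i + n])] += 1
    return {p: c for p, c in counts.items() if c >= min_count}

# Toy passage standing in for the body of a Wikipedia article.
doc = ("the wiki track evaluates link discovery and "
       "link discovery finds anchors and targets in the wiki")
print(frequent_phrases(doc, max_window=2))
```

Phrases that recur (e.g. "link discovery") would then serve as the descriptive vectors against which candidate anchors and link targets are matched.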
Abstract:
The automatic extraction of road features from remotely sensed images has been a topic of great interest within the photogrammetric and remote sensing communities for over three decades. Although various techniques have been reported in the literature, it remains challenging to efficiently extract road details given increasing image resolution and the requirement for accurate and up-to-date road data. In this paper, we focus on the automatic detection of road lane markings, which are crucial for many applications, including lane-level navigation and lane departure warning. The approach consists of four steps: i) data preprocessing, ii) image segmentation and road surface detection, iii) road lane marking extraction based on the generated road surface, and iv) testing and system evaluation. The proposed approach utilises the unsupervised ISODATA image segmentation algorithm, which segments the image into vegetation regions and road surface based only on the Cb component of the YCbCr color space. A shadow detection method based on the YCbCr color space is also employed to detect and recover shadows cast on the road surface by vehicles and trees. Finally, the lane marking features are detected from the road surface using histogram clustering. Experiments applying the proposed method to an aerial imagery dataset of Gympie, Queensland demonstrate the efficiency of the approach.
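A rough sketch of steps ii) and iii), assuming OpenCV and substituting Otsu thresholding for the unsupervised ISODATA segmentation used in the paper; the thresholds, structuring element and file names are illustrative only.

```python
import cv2
import numpy as np

def road_surface_and_markings(bgr):
    """Rough sketch: separate road surface from vegetation using the Cb
    chrominance channel, recover shadowed road pixels via low luminance,
    then pick out bright lane markings on the recovered surface."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)   # OpenCV channel order: Y, Cr, Cb
    y, cr, cb = cv2.split(ycrcb)

    # Otsu threshold on Cb: vegetation and road tend to fall on opposite sides.
    _, road_mask = cv2.threshold(cb, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Shadow candidates: dark pixels (low Y) adjacent to the detected road surface.
    _, shadow = cv2.threshold(y, 60, 255, cv2.THRESH_BINARY_INV)
    near_road = cv2.dilate(road_mask, np.ones((15, 15), np.uint8))
    road_mask = cv2.bitwise_or(road_mask, cv2.bitwise_and(shadow, near_road))

    # Lane markings: the brightest pixels inside the road surface.
    road_y = cv2.bitwise_and(y, y, mask=road_mask)
    _, markings = cv2.threshold(road_y, 200, 255, cv2.THRESH_BINARY)
    return road_mask, markings

if __name__ == "__main__":
    img = cv2.imread("aerial_tile.png")          # hypothetical aerial image tile
    road, lanes = road_surface_and_markings(img)
    cv2.imwrite("road_mask.png", road)
    cv2.imwrite("lane_markings.png", lanes)
```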
Abstract:
Over the last decade, the rapid growth and adoption of the World Wide Web has further exacerbated user needs for efficient mechanisms for information and knowledge location, selection, and retrieval. Gathering useful and meaningful information from the Web has become challenging for users. The capture of user information needs is key to delivering users' desired information, and user profiles can help to capture information needs. However, effectively acquiring user profiles is difficult. It is argued that if user background knowledge can be specified by ontologies, more accurate user profiles can be acquired and thus information needs can be captured effectively. Web users implicitly possess concept models that are obtained from their experience and education, and use these concept models in information gathering. Prior to this work, much research has attempted to use ontologies to specify user background knowledge and user concept models. However, these works have a drawback in that they cannot move beyond the subsumption of super- and sub-class structure to emphasising specific semantic relations in a single computational model. This has also been a challenge for years in the knowledge engineering community. Thus, using ontologies to represent user concept models and to acquire user profiles remains an unsolved problem in personalised Web information gathering and knowledge engineering. In this thesis, an ontology learning and mining model is proposed to acquire user profiles for personalised Web information gathering. The proposed computational model emphasises the specific is-a and part-of semantic relations in one computational model. World knowledge and users' Local Instance Repositories are used to discover and specify user background knowledge. From a world knowledge base, personalised ontologies are constructed by adopting automatic or semi-automatic techniques to extract user interest concepts, focusing on user information needs. A multidimensional ontology mining method, Specificity and Exhaustivity, is also introduced in this thesis for analysing the user background knowledge discovered and specified in user personalised ontologies. The ontology learning and mining model is evaluated by comparison with human-based and state-of-the-art computational models in experiments using a large, standard data set. The experimental results are promising. The proposed ontology learning and mining model helps to develop a better understanding of user profile acquisition, thus supporting better design of personalised Web information gathering systems. The contributions are increasingly significant, given both the rapid explosion of Web information in recent years and today's accessibility to the Internet and the full-text world.
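A toy sketch of a personalised ontology carrying explicit is-a and part-of relations, with naive specificity and exhaustivity scores. The relation set, scoring rules and the Local Instance Repository stand-in are my own simplifications, not the thesis's definitions.

```python
# Tiny personalised-ontology fragment: concept -> list of (relation, parent concept).
ontology = {
    "economy": [],
    "trade":   [("is-a", "economy")],
    "export":  [("part-of", "trade")],
    "import":  [("part-of", "trade")],
}

def depth(concept):
    """Depth along is-a/part-of links; deeper concepts are treated as more specific."""
    parents = [parent for _, parent in ontology[concept]]
    return 0 if not parents else 1 + max(depth(p) for p in parents)

def specificity(concept):
    # Toy normalisation into (0, 1); not the thesis's actual measure.
    return depth(concept) / (1 + depth(concept))

def exhaustivity(concepts, user_interest):
    """Fraction of the user's interest concepts covered by the given concepts."""
    return len(set(concepts) & set(user_interest)) / len(user_interest)

# Stand-in for interest concepts mined from a Local Instance Repository.
user_interest = {"trade", "export"}
print({c: round(specificity(c), 2) for c in ontology})
print("exhaustivity:", exhaustivity(["trade", "export", "import"], user_interest))
```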
Abstract:
Road feature extraction from remotely sensed imagery has been a topic of great interest within the photogrammetry and remote sensing communities for over three decades. The majority of the early work focused only on linear feature detection approaches, with restrictive assumptions on image resolution and road appearance. The wide availability of high-resolution digital aerial images makes it possible to extract sub-road features, e.g. road pavement markings. In this paper, we focus on the automatic extraction of road lane markings, which are required by various lane-based vehicle applications, such as autonomous vehicle navigation and lane departure warning. The proposed approach consists of three phases: i) road centerline extraction from a low-resolution image, ii) road surface detection in the original image, and iii) pavement marking extraction on the generated road surface. The proposed method was tested on an aerial imagery dataset of the Bruce Highway, Queensland, and the results demonstrate the efficiency of our approach.
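A minimal sketch of phase i), road centerline extraction from a downsampled image, assuming scikit-image and an Otsu threshold in place of whatever detector the paper actually uses; the scale factor and size filter are illustrative.

```python
from skimage import io, color, transform
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize, remove_small_objects

def road_centerline(path, scale=0.25):
    """Downsample the aerial image, threshold the (assumed brighter) road
    surface, and thin it to a one-pixel-wide centerline."""
    img = color.rgb2gray(io.imread(path))
    low = transform.rescale(img, scale, anti_aliasing=True)
    road = low > threshold_otsu(low)                  # assume road is brighter than surrounds
    road = remove_small_objects(road, min_size=200)   # drop small bright patches
    return skeletonize(road)

# centerline = road_centerline("highway_tile.png")   # hypothetical image tile
```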
Abstract:
The wavelet packet transform decomposes a signal into a set of bases for time–frequency analysis. This decomposition creates an opportunity for implementing distributed data mining, where features are extracted from different wavelet packet bases and serve as feature vectors for applications. This paper presents a novel approach for integrated machine fault diagnosis based on localised wavelet packet bases of vibration signals. The best basis is first determined according to its classification capability. Data mining is then applied to extract features, and local decisions are drawn using Bayesian inference. A final conclusion is reached using a weighted average method in data fusion. A case study on rolling element bearing diagnosis shows that this approach can greatly improve the accuracy of diagnosis.
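A hedged sketch of the feature-extraction and fusion steps using PyWavelets: each terminal wavelet packet node yields an energy feature, per-node "local" scores stand in for the Bayesian local decisions, and a weighted average fuses them. The wavelet, level, weights and toy signal are assumptions, not the paper's settings.

```python
import numpy as np
import pywt

def node_energies(signal, wavelet="db4", level=3):
    """Decompose a vibration signal with the wavelet packet transform and
    return the energy in each terminal node (one feature per sub-band)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    return np.array([np.sum(np.square(n.data)) for n in nodes])

def fuse(local_scores, weights):
    """Weighted-average fusion of per-node (local) fault scores into one
    diagnosis score; in the paper the weights would reflect each basis's
    classification capability (equal weights assumed here)."""
    w = np.asarray(weights, dtype=float)
    return float(np.dot(local_scores, w) / w.sum())

# Hypothetical bearing vibration snippet: a sine tone plus noise.
t = np.linspace(0, 1, 2048, endpoint=False)
signal = np.sin(2 * np.pi * 120 * t) + 0.3 * np.random.randn(t.size)

features = node_energies(signal)
local_scores = features / features.max()   # stand-in for per-basis classifier outputs
print("fused fault score:", fuse(local_scores, np.ones_like(local_scores)))
```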
Abstract:
Osteophytes form through the process of chondroid metamorphosis of fibrous tissue followed by endochondral ossification. Osteophytes have been found to consist of three different mesenchymal tissue regions: endochondral bone formation within cartilage residues, intra-membranous bone formation within fibrous tissue, and bone formation within bone marrow spaces. All these features provide evidence of mesenchymal stem cell (MSC) involvement in osteophyte formation; nevertheless, it remains to be characterised. MSC from numerous mesenchymal tissues have been isolated, but bone marrow remains the “ideal” source due to the ease of ex vivo expansion and multilineage potential. However, the bone marrow stroma has a relatively low number of MSC, which necessitates long-term culture and extensive population doublings in order to obtain a sufficient number of cells for therapeutic applications. MSC in vitro have limited proliferative capacity, and extensive passaging compromises differentiation potential. To overcome this barrier, tissue-derived MSC are of strong interest for extensive study and characterisation, with a focus on their potential application in therapeutic tissue regeneration. To date, no MSC-type cell has been isolated from osteophyte tissue, despite this tissue exhibiting all the hallmark features of a regenerative tissue. Therefore, this study aimed to isolate and characterise cells from osteophyte tissues in relation to their phenotype, differentiation potential, immuno-modulatory properties, proliferation, cellular ageing, longevity and chondrogenesis in an in vitro defect model, in comparison with patient-matched bone marrow stromal cells (bMSC). Osteophyte-derived cells were isolated from osteophyte tissue samples collected during knee replacement surgery. These cells were characterised by the expression of cell surface antigens, differentiation potential into mesenchymal lineages, growth kinetics and modulation of allo-immune responses. Multipotential stem cells were identified from all osteophyte samples, namely osteophyte-derived mesenchymal stem cells (oMSC). Extensively expanded cell cultures (passage 4 and 9 respectively) were used to confirm cytogenetic stability and to study signs of cellular ageing, telomere length and telomerase activity. Cultured cells at passage 4 were used to determine an 84 pathway-focused stem cell-related gene expression profile. Micromass pellets were cultured in chondrogenic differentiation media for 21 days for phenotypic and chondrogenesis-related gene expression analysis. Secondly, cell pellets differentiated overnight were placed into articular cartilage defects and cultured for a further 21 days in control medium and chondrogenic medium to study chondrogenesis and cell behaviour. The surface antigen expression of oMSC was consistent with that of mesenchymal stem cells, lacking the haematopoietic and common leukocyte markers (CD34, CD45) while expressing those related to adhesion (CD29, CD166, CD44) and stem cells (CD90, CD105, CD73). The proliferation capacity of oMSC in culture was superior to that of bMSC, and they readily differentiated into tissues of the mesenchymal lineages. oMSC also demonstrated the ability to suppress allogeneic T-cell proliferation, which was associated with the expression of the tryptophan-degrading enzyme indoleamine 2,3-dioxygenase (IDO). Cellular ageing was more prominent in late passage bMSC than in oMSC.
oMSC had longer telomere lengths in late passages compared with bMSC, although there was no significant difference in telomere lengths between the two cell types in early passages. Telomerase activity was detectable only in early passage oMSC and not in bMSC. In osteophyte tissues, telomerase-positive cells were found to be located perivascularly and were Stro-1 positive. Eighty-four pathway-focused genes were investigated and only five genes (APC, CCND2, GJB2, NCAM and BMP2) were differentially expressed between bMSC and oMSC. Chondrogenically induced micromass pellets of oMSC showed higher staining intensity for proteoglycans, aggrecan and collagen II. Differential expression of chondrogenesis-related genes showed up-regulation of aggrecan and Sox9 in oMSC and of collagen II in bMSC. The in vitro defect models with oMSC in control medium showed rounded and aggregated cells staining positively for proteoglycans and the presence of some extracellular matrix. In contrast, defects with bMSC showed fragmentation and loss of cells, with fibroblast-like cell morphology staining positively for proteoglycans. For defects maintained in chondrogenic medium, rounded, aggregated and proteoglycan-positive cells were found in both oMSC and bMSC cultures. Extracellular matrix and cellular integration into newly formed matrix were evident only in oMSC defects. Regarding chondrocyte hypertrophy, strong expression of type X collagen was noticeable in the pellet cultures and transplanted bMSC. In summary, this study demonstrated that osteophyte-derived cells had properties similar to mesenchymal stem cells in antigen phenotype, differentiation potential and suppression of the allo-immune response. Furthermore, when compared with bMSC, oMSC maintained a higher proliferative capacity due to a retained level of telomerase activity in vitro, which may account for the relatively longer telomeres delaying growth arrest by replicative senescence compared with bMSC. oMSC behaviour in defects supported chondrogenesis, which implies that cells derived from regenerative tissue can be an alternative source of stem cells with potential clinical application for therapeutic stem cell-based tissue regeneration.
Abstract:
Many surveillance applications (object tracking, abandoned object detection) rely on detecting changes in a scene. Foreground segmentation is an effective way to extract the foreground from the scene, but these techniques cannot discriminate between objects that have temporarily stopped and those that are moving. We propose a series of modifications to an existing foreground segmentation system (Butler 2003) so that the foreground is further segmented into two or more layers. This yields an active layer of objects currently in motion and a passive layer of objects that have temporarily ceased motion, which can itself be decomposed into multiple static layers. We also propose a variable threshold to cope with variable illumination, a feedback mechanism that allows an external process (i.e. a surveillance system) to alter the motion detector's state, and a lighting compensation process and shadow detector to reduce errors caused by lighting inconsistencies. The technique is demonstrated using outdoor surveillance footage, and is shown to be able to effectively deal with real-world lighting conditions and overlapping objects.
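As a rough illustration of the active/passive layering idea, the sketch below uses OpenCV's stock MOG2 background subtractor rather than the modified Butler-style segmenter from the paper, and a simple frame difference as the motion cue; the file name and thresholds are assumptions.

```python
import cv2

cap = cv2.VideoCapture("surveillance.avi")              # hypothetical footage
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

prev_gray = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    fg = subtractor.apply(frame)                         # 255 = foreground, 127 = shadow
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]   # drop detected shadows

    if prev_gray is not None:
        moving = cv2.absdiff(gray, prev_gray)
        moving = cv2.threshold(moving, 25, 255, cv2.THRESH_BINARY)[1]
        active = cv2.bitwise_and(fg, moving)                     # objects currently in motion
        passive = cv2.bitwise_and(fg, cv2.bitwise_not(moving))   # stopped, but still off-background
        cv2.imshow("active layer", active)
        cv2.imshow("passive layer", passive)
        if cv2.waitKey(1) == 27:
            break
    prev_gray = gray

cap.release()
cv2.destroyAllWindows()
```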
Abstract:
Object tracking systems require accurate segmentation of the objects from the background for effective tracking. Motion segmentation or optical flow can be used to segment incoming images. Whilst optical flow allows multiple moving targets to be separated based on their individual velocities, optical flow techniques are prone to errors caused by changing lighting and occlusions, both common in a surveillance environment. Motion segmentation techniques are more robust to fluctuating lighting and occlusions, but do not provide information on the direction of motion. In this paper we propose a combined motion segmentation/optical flow algorithm for use in object tracking. The proposed algorithm uses the motion segmentation results to inform the optical flow calculations, ensuring that optical flow is only calculated in regions of motion and improving its performance around the edges of moving objects. Optical flow is calculated at pixel resolution, and tracking of flow vectors is employed to improve performance and detect discontinuities, which can indicate the location of overlaps between objects. The algorithm is evaluated by attempting to extract a moving target within the flow images, given expected horizontal and vertical movement (i.e. the algorithm's intended use for object tracking). Results show that the proposed algorithm outperforms other widely used optical flow techniques for this surveillance application.
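A hedged sketch of combining a motion-segmentation mask with dense optical flow, assuming OpenCV's Farneback flow. As a simplification, flow is computed over the whole frame and then masked rather than being restricted to motion regions during the calculation, and the direction-matching rule is illustrative only.

```python
import cv2
import numpy as np

def masked_flow(prev_gray, gray, motion_mask):
    """Dense Farneback optical flow kept only inside the motion-segmentation
    mask; returns flow magnitude and angle for segmented moving pixels."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow[motion_mask < 127] = 0               # discard flow outside segmented motion
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return mag, ang

def extract_target(mag, ang, expected_angle, tol=0.5, min_mag=1.0):
    """Keep pixels whose flow roughly matches an expected direction
    (e.g. the tracked object's predicted horizontal/vertical movement)."""
    return (mag > min_mag) & (np.cos(ang - expected_angle) > 1 - tol)
```

For example, `extract_target(mag, ang, expected_angle=0.0)` would keep pixels moving roughly rightwards, mimicking the expected-horizontal-movement test described in the evaluation.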