346 results for Experimental science


Relevance:

20.00%

Publisher:

Abstract:

Over the last decade, the rapid growth and adoption of the World Wide Web has further exacerbated user needs for efficient mechanisms for information and knowledge location, selection, and retrieval. How to gather useful and meaningful information from the Web becomes challenging to users. The capture of user information needs is key to delivering users' desired information, and user profiles can help to capture information needs. However, effectively acquiring user profiles is difficult. It is argued that if user background knowledge can be specified by ontologies, more accurate user profiles can be acquired and thus information needs can be captured effectively. Web users implicitly possess concept models that are obtained from their experience and education, and use the concept models in information gathering. Prior to this work, much research has attempted to use ontologies to specify user background knowledge and user concept models. However, these works have a drawback in that they cannot move beyond the subsumption of super- and sub-class structure to emphasising the specific semantic relations in a single computational model. This has also been a challenge for years in the knowledge engineering community. Thus, using ontologies to represent user concept models and to acquire user profiles remains an unsolved problem in personalised Web information gathering and knowledge engineering. In this thesis, an ontology learning and mining model is proposed to acquire user profiles for personalised Web information gathering. The proposed computational model emphasises the specific is-a and part-of semantic relations in one computational model. The world knowledge and users' Local Instance Repositories are used to attempt to discover and specify user background knowledge. From a world knowledge base, personalised ontologies are constructed by adopting automatic or semi-automatic techniques to extract user interest concepts, focusing on user information needs.
A multidimensional ontology mining method, Specificity and Exhaustivity, is also introduced in this thesis for analysing the user background knowledge discovered and specified in user personalised ontologies. The ontology learning and mining model is evaluated by comparing it with human-based and state-of-the-art computational models in experiments, using a large, standard data set. The experimental results are promising. The proposed ontology learning and mining model helps to develop a better understanding of user profile acquisition, thus supporting better design of personalised Web information gathering systems. The contributions are increasingly significant, given both the rapid explosion of Web information in recent years and today's accessibility to the Internet and the full-text world.
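The specificity and exhaustivity idea can be illustrated with a minimal sketch (all concepts, relations, and scoring formulas below are invented for illustration, not the thesis's actual method): deeper concepts in an is-a/part-of taxonomy score higher on specificity, while concepts whose subtree covers more of the user's observed interest terms score higher on exhaustivity.

```python
# Hypothetical sketch: scoring user-interest concepts in a tiny taxonomy
# that mixes is-a and part-of relations. Specificity rewards deeper (more
# focused) concepts; exhaustivity rewards concepts whose subtree covers
# more of the user's interest terms.

TAXONOMY = {
    # child: (parent, relation)
    "science": (None, None),
    "physics": ("science", "is-a"),
    "optics":  ("physics", "is-a"),
    "laser":   ("optics",  "part-of"),
}

def depth(concept):
    d = 0
    while TAXONOMY[concept][0] is not None:
        concept = TAXONOMY[concept][0]
        d += 1
    return d

def subtree(concept):
    """All concepts at or below `concept` in the taxonomy."""
    out = {concept}
    changed = True
    while changed:
        changed = False
        for child, (parent, _) in TAXONOMY.items():
            if parent in out and child not in out:
                out.add(child)
                changed = True
    return out

def specificity(concept):
    # deeper concepts are more specific; normalise by the maximum depth
    max_d = max(depth(c) for c in TAXONOMY)
    return depth(concept) / max_d

def exhaustivity(concept, interest_terms):
    # fraction of the user's interest terms covered by the subtree
    covered = subtree(concept) & set(interest_terms)
    return len(covered) / len(interest_terms)

interests = ["physics", "optics", "laser"]
for c in ["science", "physics", "optics"]:
    print(c, round(specificity(c), 2), round(exhaustivity(c, interests), 2))
```

The printout shows the trade-off the thesis's multidimensional analysis is concerned with: moving deeper in the taxonomy raises specificity while exhaustivity can only fall.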

Relevance:

20.00%

Publisher:

Abstract:

Motor vehicles are a major source of gaseous and particulate matter pollution in urban areas, particularly of ultrafine sized particles (diameters < 0.1 µm). Exposure to particulate matter has been found to be associated with serious health effects, including respiratory and cardiovascular disease, and mortality. Particle emissions generated by motor vehicles span a very broad size range (from around 0.003-10 µm) and are measured as different subsets of particle mass concentrations or particle number count. However, there exist scientific challenges in analysing and interpreting the large data sets on motor vehicle emission factors, and there is no settled understanding of how the different particle metrics might be applied as a basis for air quality regulation. To date, a comprehensive inventory covering the broad size range of particles emitted by motor vehicles, and which includes particle number, does not exist anywhere in the world. This thesis covers research related to four important and interrelated aspects pertaining to particulate matter generated by motor vehicle fleets: the derivation of suitable particle emission factors for use in transport modelling and health impact assessments; quantification of motor vehicle particle emission inventories; investigation of the modality within particle size distributions as a potential basis for developing air quality regulation; and review and synthesis of current knowledge on ultrafine particles as it relates to motor vehicles. These aspects are then applied to the quantification, control and management of motor vehicle particle emissions.
In order to quantify emissions in terms of a comprehensive inventory covering the full size range of particles emitted by motor vehicle fleets, it was necessary to derive a suitable set of particle emission factors for different vehicle and road type combinations for particle number, particle volume, PM1, PM2.5 and PM10 (mass concentrations of particles with aerodynamic diameters < 1 µm, < 2.5 µm and < 10 µm respectively). The very large data set of emission factors analysed in this study was sourced from measurement studies conducted in developed countries, and hence the derived set of emission factors is suitable for preparing inventories in other urban regions of the developed world. These emission factors are particularly useful for regions that lack the measurement data needed to derive emission factors, or where experimental data are available but are of insufficient scope. The comprehensive particle emissions inventory presented in this thesis is the first published inventory of tailpipe particle emissions prepared for a motor vehicle fleet, and it quantifies particle emissions covering the full size range of particles emitted by vehicles, based on measurement data. The inventory quantified particle emissions measured in terms of particle number and different particle mass size fractions. It was developed for the urban South-East Queensland fleet in Australia, and included testing the particle emission implications of future scenarios for different passenger and freight travel demand. The thesis also presents evidence of the usefulness of examining modality within particle size distributions as a basis for developing air quality regulations, and finds evidence to support the relevance of introducing a new PM1 mass ambient air quality standard for the majority of environments worldwide.
The study found that a combination of PM1 and PM10 standards is likely to be a more discerning and suitable set of ambient air quality standards for controlling particles emitted from combustion and mechanically-generated sources, such as motor vehicles, than the current mass standards of PM2.5 and PM10. The study also reviewed and synthesized existing knowledge on ultrafine particles, with a specific focus on those originating from motor vehicles. It found that motor vehicles are significant contributors to both air pollution and ultrafine particles in urban areas, and that a standardized measurement procedure is not currently available for ultrafine particles. The review found that discrepancies exist between the outcomes of different instrumentation used to measure ultrafine particles; that few data are available on ultrafine particle chemistry and composition, long-term monitoring, and the characterization of their spatial and temporal distribution in urban areas; and that no inventories for particle number are available for motor vehicle fleets. This knowledge is critical for epidemiological studies and exposure-response assessment. Conclusions from this review included the recommendation that ultrafine particles in populated urban areas be considered a likely target for future air quality regulation based on particle number, due to their potential impacts on the environment. The research in this PhD thesis successfully integrated the elements needed to quantify and manage motor vehicle fleet emissions, and its novelty relates to combining expertise from two distinctly separate disciplines: aerosol science and transport modelling. The new knowledge and concepts developed in this PhD research provide never before available data and methods which can be used to develop comprehensive, size-resolved inventories of motor vehicle particle emissions, and air quality regulations to control particle emissions to protect the health and well-being of current and future generations.
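The inventory calculation described above reduces to multiplying an emission factor for each vehicle/road combination by the corresponding vehicle-kilometres travelled and summing over the fleet. A minimal sketch (all emission factors and travel figures below are invented illustrations, not the thesis's measured values):

```python
# Hypothetical sketch of a size-resolved inventory calculation: total yearly
# emissions are the sum over vehicle/road combinations of an emission factor
# (per vehicle-kilometre) times vehicle-kilometres travelled (VKT).

# emission factors per vehicle-km: particle number [#/km] and PM2.5 [g/km]
EF = {
    ("passenger", "urban"):   {"number": 2.0e14, "pm2.5": 0.010},
    ("passenger", "freeway"): {"number": 5.0e14, "pm2.5": 0.015},
    ("freight",   "urban"):   {"number": 1.0e15, "pm2.5": 0.200},
}

# yearly vehicle-kilometres travelled for each combination
VKT = {
    ("passenger", "urban"):   5.0e9,
    ("passenger", "freeway"): 2.0e9,
    ("freight",   "urban"):   0.5e9,
}

def inventory(metric):
    """Sum EF x VKT over all fleet/road combinations for one metric."""
    return sum(EF[key][metric] * VKT[key] for key in EF)

print(f"particle number: {inventory('number'):.2e} particles/year")
print(f"PM2.5:           {inventory('pm2.5') / 1e6:.1f} tonnes/year")
```

Future travel-demand scenarios, as tested in the thesis, would then amount to re-running the same sum with modified VKT figures.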

Relevance:

20.00%

Publisher:

Abstract:

Corneal topography estimation based on the Placido disk principle relies on good quality of the precorneal tear film and a sufficiently wide eyelid (palpebral) aperture to avoid reflections from eyelashes. In practice, however, these conditions are not always fulfilled, resulting in missing regions, smaller corneal coverage, and subsequently poorer estimates of corneal topography. Our aim was to enhance the standard operating range of a Placido disk videokeratoscope to obtain reliable corneal topography estimates in patients with poor tear film quality, such as encountered in those diagnosed with dry eye, and with narrower palpebral apertures as in the case of Asian subjects. This was achieved by incorporating in the instrument's own topography estimation algorithm an image processing technique that comprises a polar-domain adaptive filter and a morphological closing operator. The experimental results from measurements of test surfaces and real corneas showed that the incorporation of the proposed technique leads to better estimates of corneal topography and, in many cases, to a significant increase in the estimated coverage area, making such an enhanced videokeratoscope a better tool for clinicians.
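The morphological closing step can be illustrated with a toy 1-D version (this is a generic grey-scale closing, not the instrument's actual polar-domain algorithm): dilation followed by erosion fills dark gaps narrower than the structuring element, analogous to bridging eyelash shadows along a ring-intensity profile in the angular direction.

```python
# Illustrative 1-D grey-scale morphological closing: dilation (window max)
# followed by erosion (window min). Gaps narrower than the structuring
# element are filled; wider gaps survive.

def dilate(signal, k):
    """Grey-scale dilation: max over a window of half-width k."""
    n = len(signal)
    return [max(signal[max(0, i - k):min(n, i + k + 1)]) for i in range(n)]

def erode(signal, k):
    """Grey-scale erosion: min over a window of half-width k."""
    n = len(signal)
    return [min(signal[max(0, i - k):min(n, i + k + 1)]) for i in range(n)]

def closing(signal, k):
    """Closing fills dark gaps narrower than the structuring element."""
    return erode(dilate(signal, k), k)

# a bright ring profile with a short dark dropout (e.g. an eyelash shadow)
profile = [1.0] * 10 + [0.0] * 2 + [1.0] * 10
print(closing(profile, 2))
```

Note the asymmetry that makes closing useful here: a 2-sample dropout disappears under a half-width-2 window, while a genuinely missing (wide) region is left alone rather than invented.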

Relevance:

20.00%

Publisher:

Abstract:

Background: The aim of this work is to develop a more complete qualitative and quantitative understanding of the in vivo histology of the human bulbar conjunctiva. Methods: Laser scanning confocal microscopy (LSCM) was used to observe and measure morphological characteristics of the bulbar conjunctiva of 11 healthy human volunteer subjects. Results: The superficial epithelial layer of the bulbar conjunctiva is seen as a mass of small cell nuclei. Cell borders are sometimes visible. The light grey borders of basal epithelial cells are clearly visible, but nuclei cannot be seen. The conjunctival stroma comprises a dense meshwork of white fibres, through which traverse blood vessels containing cellular elements. Orifices at the epithelial surface may represent goblet cells that have opened and expelled their contents. Goblet cells are also observed in the deeper epithelial layers, as well as conjunctival microcysts and mature forms of Langerhans cells. The bulbar conjunctiva has a mean thickness of 32.9 ± 1.1 µm, and superficial and basal epithelial cell densities of 2212 ± 782 and 2368 ± 741 cells/mm2, respectively. Overall goblet and mature Langerhans cell densities are 111 ± 58 and 23 ± 25 cells/mm2, respectively. Conclusions: LSCM is a powerful technique for studying the human bulbar conjunctiva in vivo and quantifying key aspects of cell morphology. The observations presented here may serve as a useful marker against which changes in conjunctival morphology due to disease, surgery, drug therapy or contact lens wear can be assessed.

Relevance:

20.00%

Publisher:

Abstract:

A major focus of research in nanotechnology is the development of novel, high throughput techniques for fabrication of arbitrarily shaped surface nanostructures at sub-100 nm to atomic scale. A related pursuit is the development of simple and efficient means for parallel manipulation and redistribution of adsorbed atoms, molecules and nanoparticles on surfaces – adparticle manipulation. These techniques will be used for the manufacture of nanoscale surface-supported functional devices in nanotechnologies such as quantum computing, molecular electronics and lab-on-a-chip, as well as for modifying surfaces to obtain novel optical, electronic, chemical, or mechanical properties. A favourable approach to the formation of surface nanostructures is self-assembly. In self-assembly, nanostructures are grown by aggregation of individual adparticles that diffuse by thermally activated processes on the surface. The passive nature of this process means it is generally not suited to the formation of arbitrarily shaped structures. The self-assembly of nanostructures at arbitrary positions has been demonstrated, though this has typically required a pre-patterning treatment of the surface using sophisticated techniques such as electron beam lithography. On the other hand, a parallel adparticle manipulation technique would be suited to directing the self-assembly process to occur at arbitrary positions, without the need for pre-patterning the surface. There is at present a lack of techniques for parallel manipulation and redistribution of adparticles to arbitrary positions on the surface. This is an issue that needs to be addressed, since such techniques can play an important role in nanotechnology. In this thesis, we propose such a technique – thermal tweezers. In thermal tweezers, adparticles are redistributed by localised heating of the surface, which locally enhances surface diffusion of adparticles so that they rapidly diffuse away from the heated regions.
Using this technique, the redistribution of adparticles to form a desired pattern is achieved by heating the surface at specific regions. In this project, we have focussed on the holographic implementation of this approach, where the surface is heated by holographic patterns of interfering pulsed laser beams. This implementation is suitable for the formation of arbitrarily shaped structures; the only condition is that the shape can be produced by holographic means. In the simplest case, the laser pulses are linearly polarised and intersect to form an interference pattern that is a modulation of intensity along a single direction. Strong optical absorption at the intensity maxima of the interference pattern results in an approximately sinusoidal variation of the surface temperature along one direction. The main aim of this research project is to investigate the feasibility of the holographic implementation of thermal tweezers as an adparticle manipulation technique. Firstly, we investigate theoretically the surface diffusion of adparticles in the presence of sinusoidal modulation of the surface temperature. Very strong redistribution of adparticles is predicted when there is strong interaction between the adparticle and the surface, and the amplitude of the temperature modulation is ~100 K. We have proposed a thin metallic film deposited on a glass substrate, heated by interfering laser beams at optical wavelengths, as a means of generating a very large amplitude of surface temperature modulation. Indeed, we predict theoretically, by numerical solution of the thermal conduction equation, that the amplitude of the temperature modulation on the metallic film can be much greater than 100 K when heated by nanosecond pulses with an energy of ~1 mJ. The formation of surface nanostructures of less than 100 nm in width is predicted at optical wavelengths in this implementation of thermal tweezers.
Furthermore, we propose a simple extension to this technique in which a spatial phase shift of the temperature modulation effectively doubles or triples the resolution. At the same time, increased resolution is predicted by reducing the wavelength of the laser pulses. In addition, we present two distinctly different, computationally efficient numerical approaches for theoretical investigation of the surface diffusion of interacting adparticles – the Monte Carlo Interaction Method (MCIM) and the random potential well method (RPWM). Using each of these approaches we have investigated thermal tweezers for the redistribution of both strongly and weakly interacting adparticles. We have predicted that strong interactions between adparticles can increase the effectiveness of thermal tweezers, by demonstrating practically complete adparticle redistribution into the low temperature regions of the surface. This is promising from the point of view of thermal tweezers applied to the directed self-assembly of nanostructures. Finally, we present a new and more efficient numerical approach to the theoretical investigation of thermal tweezers for non-interacting adparticles. In this approach, the local diffusion coefficient is determined from the solution of the Fokker-Planck equation. The diffusion equation is then solved numerically using the finite volume method (FVM) to directly obtain the probability density of adparticle position. We compare the predictions of this approach to those of the Ermak algorithm solution of the Langevin equation, and relatively good agreement is shown at intermediate and high friction. In the low friction regime, we predict and investigate the phenomenon of 'optimal' friction and describe its occurrence as due to very long jumps of adparticles as they diffuse from the hot regions of the surface. Future research directions, both theoretical and experimental, are also discussed.
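The Monte Carlo side of this work can be caricatured in a few lines (a generic lattice random walk with Arrhenius hop rates, not the thesis's MCIM or RPWM code, and all parameter values are illustrative): adparticles hop quickly out of hot regions and slowly out of cold ones, so over many sweeps they accumulate around the temperature minima, which is the essence of thermal tweezers.

```python
# Toy Monte Carlo sketch of thermal tweezers: non-interacting adparticles
# on a periodic 1-D lattice with a sinusoidal temperature profile. Hop
# probability follows an Arrhenius law, so hot sites empty into cold ones.
import math
import random

random.seed(1)

N = 100                        # lattice sites (periodic boundary)
E_A = 0.5                      # diffusion barrier, eV (illustrative)
K_B = 8.617e-5                 # Boltzmann constant, eV/K
T_MEAN, T_MOD = 300.0, 100.0   # mean temperature and modulation amplitude, K

def temperature(site):
    # hot around site 0, cold around site N/2
    return T_MEAN + T_MOD * math.cos(2 * math.pi * site / N)

# Arrhenius hop rates, normalised so the hottest site hops every sweep
max_rate = math.exp(-E_A / (K_B * (T_MEAN + T_MOD)))
hop_prob = [math.exp(-E_A / (K_B * temperature(s))) / max_rate
            for s in range(N)]

particles = [random.randrange(N) for _ in range(1000)]
for _ in range(2000):          # Monte Carlo sweeps
    for i, site in enumerate(particles):
        if random.random() < hop_prob[site]:
            particles[i] = (site + random.choice((-1, 1))) % N

# occupation of the hot half (around site 0) vs the cold half
hot = sum(1 for s in particles if s < N // 4 or s >= 3 * N // 4)
cold = len(particles) - hot
print(f"hot half: {hot}  cold half: {cold}")
```

With a 100 K modulation the hop-rate contrast between hot and cold sites is many orders of magnitude, so the cold half ends up holding the large majority of the particles, consistent with the "practically complete redistribution" described above.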

Relevance:

20.00%

Publisher:

Abstract:

With the phenomenal growth of electronic data and information, there are many demands for the development of efficient and effective systems (tools) to perform data mining tasks on multidimensional databases. Association rules describe associations between items in the same transaction (intra) or in different transactions (inter). Association mining attempts to find interesting or useful association rules in databases: this is the crucial issue for the application of data mining in the real world. Association mining can be used in many application areas, such as the discovery of associations between customers' locations and shopping behaviours in market basket analysis. Association mining includes two phases. The first phase, called pattern mining, is the discovery of frequent patterns. The second phase, called rule generation, is the discovery of interesting and useful association rules among the discovered patterns. The first phase, however, often takes a long time to find all frequent patterns, which also include much noise. The second phase is also a time-consuming activity that can generate many redundant rules. To improve the quality of association mining in databases, this thesis provides an alternative technique, granule-based association mining, for knowledge discovery in databases, where a granule refers to a predicate that describes common features of a group of transactions. The new technique first transfers transaction databases into basic decision tables, then uses multi-tier structures to integrate pattern mining and rule generation in one phase for both intra- and inter-transaction association rule mining. To evaluate the proposed technique, this research defines the concept of meaningless rules by considering the correlations between data dimensions for intra-transaction association rule mining. It also uses precision to evaluate the effectiveness of inter-transaction association rules.
The experimental results show that the proposed technique is promising.
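The granule idea can be sketched as follows (attribute names and transactions below are invented for illustration, not the thesis's data): transactions that share the same values on the chosen condition attributes collapse into a single granule, one row of a basic decision table, carrying a support count.

```python
# Hypothetical sketch of transferring a transaction database into a basic
# decision table: each granule is the set of condition-attribute values
# shared by a group of transactions, paired with a decision and its support.
from collections import Counter

transactions = [
    {"location": "cbd",    "time": "evening", "bought": "coffee"},
    {"location": "cbd",    "time": "evening", "bought": "coffee"},
    {"location": "cbd",    "time": "morning", "bought": "coffee"},
    {"location": "suburb", "time": "evening", "bought": "tea"},
]

def to_decision_table(transactions, condition_attrs):
    """Collapse transactions into granules keyed by the condition attributes."""
    granules = Counter()
    for t in transactions:
        key = tuple((a, t[a]) for a in condition_attrs)
        granules[(key, t["bought"])] += 1
    return granules

table = to_decision_table(transactions, ["location", "time"])
for (granule, decision), support in sorted(table.items()):
    print(dict(granule), "->", decision, f"(support {support})")
```

Because the table is already grouped by shared predicates, support counting comes for free, which hints at how pattern mining and rule generation can be integrated into one phase.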

Relevance:

20.00%

Publisher:

Abstract:

Osteoporosis is a disease characterized by low bone mass and micro-architectural deterioration of bone tissue, with a consequent increase in bone fragility and susceptibility to fracture. Osteoporosis affects over 200 million people worldwide, with an estimated 1.5 million fractures annually in the United States alone, and with attendant costs exceeding $10 billion per annum. Osteoporosis reduces bone density through a series of structural changes to the honeycomb-like trabecular bone structure (micro-structure). The reduced bone density, coupled with the microstructural changes, results in significant loss of bone strength and increased fracture risk. Vertebral compression fractures are the most common type of osteoporotic fracture and are associated with pain, increased thoracic curvature, reduced mobility, and difficulty with self-care. Surgical interventions, such as kyphoplasty or vertebroplasty, are used to treat osteoporotic vertebral fractures by restoring vertebral stability and alleviating pain. These minimally invasive procedures involve injecting bone cement into the fractured vertebrae. The techniques are still relatively new and, while initial results are promising, with the procedures relieving pain in 70-95% of cases, medium-term investigations are now indicating an increased risk of adjacent level fracture following the procedure. With the aging population, understanding and treatment of osteoporosis is an increasingly important public health issue in developed Western countries. The aim of this study was to investigate the biomechanics of spinal osteoporosis and osteoporotic vertebral compression fractures by developing multi-scale computational, Finite Element (FE) models of both healthy and osteoporotic vertebral bodies. The multi-scale approach included the overall vertebral body anatomy, as well as a detailed representation of the internal trabecular microstructure.
This novel, multi-scale approach overcame limitations of previous investigations by allowing simultaneous investigation of the mechanics of the trabecular micro-structure as well as overall vertebral body mechanics. The models were used to simulate the progression of osteoporosis, the effect of different loading conditions on vertebral strength and stiffness, and the effects of vertebroplasty on vertebral and trabecular mechanics. The model development process began with an individual trabecular strut model using 3D beam elements, which was used as the building block for lattice-type, structural trabecular bone models, which were in turn incorporated into the vertebral body models. At each stage of model development, model predictions were compared to analytical solutions and in-vitro data from the existing literature. This incremental process provided confidence in the predictions of each model before incorporation into the overall vertebral body model. The trabecular bone model, vertebral body model and vertebroplasty models were validated against in-vitro data from a series of compression tests performed using human cadaveric vertebral bodies. Firstly, trabecular bone samples were acquired and morphological parameters for each sample were measured using high resolution micro-computed tomography (micro-CT). Apparent mechanical properties for each sample were then determined using uni-axial compression tests. Bone tissue properties were inversely determined using voxel-based FE models based on the micro-CT data. Specimen-specific trabecular bone models were developed, and the predicted apparent stiffness and strength were compared to the experimentally measured apparent stiffness and strength of the corresponding specimen. Following the trabecular specimen tests, a series of 12 whole cadaveric vertebrae were divided into treated and non-treated groups, and vertebroplasty was performed on the specimens of the treated group.
The vertebrae in both groups underwent clinical CT scanning and destructive uniaxial compression testing. Specimen-specific FE vertebral body models were developed and the predicted mechanical responses compared to the experimentally measured responses. The validation process demonstrated that the multi-scale FE models comprising a lattice network of beam elements were able to accurately capture the failure mechanics of trabecular bone, and that a trabecular core represented with beam elements, enclosed in a layer of shell elements representing the cortical shell, was able to adequately represent the failure mechanics of intact vertebral bodies with varying degrees of osteoporosis. Following model development and validation, the models were used to investigate the effects of progressive osteoporosis on vertebral body mechanics and trabecular bone mechanics. These simulations showed that overall failure of the osteoporotic vertebral body is initiated by failure of the trabecular core, and that the failure mechanism of the trabeculae varies with the progression of osteoporosis: from tissue yield in healthy trabecular bone to failure by instability (buckling) in osteoporotic bone with its thinner trabecular struts. The mechanical response of the vertebral body under load is highly dependent on the ability of the endplates to deform so as to transmit the load to the underlying trabecular bone, and the ability of the endplate to evenly transfer the load through the core diminishes with osteoporosis. Investigation into the effect of different loading conditions on the vertebral body found that, because the trabecular bone structural changes which occur in osteoporosis result in a structure that is highly aligned with the loading direction, the vertebral body is consequently less able to withstand non-uniform loading states such as occur in forward flexion.
Changes in vertebral body loading due to disc degeneration were simulated, but proved to have little effect on osteoporotic vertebra mechanics. Conversely, differences in vertebral body loading between simulated in-vivo conditions (uniform endplate pressure) and in-vitro conditions (where the vertebral endplates are rigidly cemented) had a dramatic effect on the predicted vertebral mechanics. This investigation suggested that in-vitro loading using bone cement potting of both endplates has major limitations in its ability to represent vertebral body mechanics in-vivo. Lastly, an FE investigation into the biomechanical effect of vertebroplasty was performed. The results of this investigation demonstrated that the effect of vertebroplasty on overall vertebra mechanics is strongly governed by the cement distribution achieved within the trabecular core. In agreement with a recent study, the models predicted that vertebroplasty cement distributions which do not form one continuous mass contacting both endplates have little effect on vertebral body stiffness or strength. In summary, this work presents the development of a novel, multi-scale Finite Element model of the osteoporotic vertebral body, which provides a powerful new tool for investigating the mechanics of osteoporotic vertebral compression fractures at both the trabecular bone micro-structural level and the vertebral body level.
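The yield-to-buckling transition described above can be illustrated with a back-of-envelope Euler column calculation (material properties and strut dimensions below are illustrative round numbers, not the thesis's FE results): thinning a strut lowers its buckling load as the fourth power of diameter but its yield load only as the square, so sufficiently thin struts fail by buckling first.

```python
# Back-of-envelope sketch: an idealised circular trabecular strut fails by
# tissue yield when the yield load is below the Euler buckling load, and by
# buckling once thinning drives the buckling load below the yield load.
import math

E = 10e9         # tissue elastic modulus, Pa (illustrative)
SIGMA_Y = 100e6  # tissue yield stress, Pa (illustrative)
L = 1e-3         # strut length, m
K = 1.0          # effective-length factor (pinned-pinned column)

def failure_mode(diameter):
    area = math.pi * diameter**2 / 4
    inertia = math.pi * diameter**4 / 64        # circular cross-section
    p_buckle = math.pi**2 * E * inertia / (K * L)**2
    p_yield = SIGMA_Y * area
    return "yield" if p_yield < p_buckle else "buckling"

for d_um in (200, 100, 50):                     # healthy -> osteoporotic
    print(f"{d_um} um strut: fails by {failure_mode(d_um * 1e-6)}")
```

With these numbers the crossover sits near a 130 µm diameter, so the thick "healthy" strut yields while the thinned "osteoporotic" struts buckle, mirroring the mechanism shift the simulations predicted.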

Relevance:

20.00%

Publisher:

Abstract:

The increasing diversity of the Internet has created a vast number of multilingual resources on the Web. A huge number of these documents are written in languages other than English. Consequently, the demand for searching in non-English languages is growing exponentially, and it is desirable that a search engine can search for information over collections of documents in other languages. This research investigates techniques for developing high-quality Chinese information retrieval systems. A distinctive feature of Chinese text is that a Chinese document is a sequence of Chinese characters with no space or boundary between Chinese words. This feature makes Chinese information retrieval more difficult, since a retrieved document which contains the query term as a sequence of Chinese characters may not really be relevant to the query: the query term, as a sequence of Chinese characters, may not be a valid Chinese word in those documents. On the other hand, a document that is actually relevant may not be retrieved because it does not contain the query sequence but contains other relevant words. In this research, we propose two approaches to deal with these problems. In the first approach, we propose a hybrid Chinese information retrieval model that incorporates word-based techniques into the traditional character-based techniques. The aim of this approach is to investigate the influence of Chinese segmentation on the performance of Chinese information retrieval. Two ranking methods are proposed to rank retrieved documents based on relevancy to the query, calculated by combining character-based ranking and word-based ranking. Our experimental results show that Chinese segmentation can improve the performance of Chinese information retrieval, but the improvement is not significant if it incorporates only Chinese segmentation with the traditional character-based approach.
In the second approach, we propose a novel query expansion method which applies text mining techniques to find the most relevant words with which to extend the query. Most existing query expansion methods simply select the most frequent indexing terms from the retrieved documents to expand the query. In our approach, by contrast, we utilize text mining techniques to find patterns in the retrieved documents that highly correlate with the query term, and then use the relevant words in those patterns to expand the original query. This research project develops and implements a Chinese information retrieval system for evaluating the proposed approaches. There are two stages in the experiments. The first stage investigates whether high accuracy segmentation can improve Chinese information retrieval. In the second stage, the text mining based query expansion approach is implemented, and a further experiment compares its performance with that of the standard Rocchio approach. The NTCIR5 Chinese collections are used in the experiments. The experimental results show that by incorporating the text mining based query expansion with the hybrid model, significant improvement is achieved in both precision and recall.
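The contrast between frequency-based and pattern-based expansion can be sketched roughly as follows (a simple co-occurrence heuristic standing in for the thesis's text mining method; the documents are invented English stand-ins): instead of taking the globally most frequent terms, candidate terms are ranked by how often they co-occur with the query term in the retrieved documents.

```python
# Toy sketch of co-occurrence-driven query expansion: candidate expansion
# terms are counted only in retrieved documents that contain the query
# term, so terms tied to a different sense of the query do not leak in.
from collections import Counter

retrieved_docs = [
    ["bank", "river", "water", "fishing"],
    ["bank", "river", "water", "flood"],
    ["bank", "loan", "interest"],   # different sense: no "river" here
]

def expand(query_term, docs, top_k=2):
    """Rank candidate expansion terms by co-occurrence with the query term."""
    cooc = Counter()
    for doc in docs:
        if query_term in doc:
            for term in doc:
                if term != query_term:
                    cooc[term] += 1
    return [t for t, _ in cooc.most_common(top_k)]

print(expand("river", retrieved_docs))
```

Note that "loan" and "interest" never become candidates for the query "river", even though a purely frequency-based selection over all retrieved documents would consider them.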

Relevance:

20.00%

Publisher:

Abstract:

In the field of the semantic grid, QoS-based Web service composition is an important problem. In a semantic- and service-rich environment like the semantic grid, context constraints on Web services are very common, so composition must consider not only the QoS properties of the Web services, but also the inter-service dependencies and conflicts that these context constraints create. In this paper, we present a repair genetic algorithm, namely the minimal-conflict hill-climbing repair genetic algorithm, to address the Web service composition optimization problem in the presence of domain constraints and inter-service dependencies and conflicts. Experimental results demonstrate the scalability and effectiveness of the genetic algorithm.
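A minimal-conflict repair step can be sketched as follows (tasks, services, and conflict pairs are invented; this is the generic minimal-conflicts heuristic, not the paper's full genetic algorithm): each task in a candidate composition is reassigned, one at a time, to the candidate service that minimises the number of remaining conflicts.

```python
# Hypothetical sketch of minimal-conflict hill-climbing repair: a
# composition picks one candidate service per task; CONFLICTS lists
# service pairs that cannot co-occur in the same composition.

CANDIDATES = {"pay": ["payA", "payB"], "ship": ["shipA", "shipB"]}
CONFLICTS = {("payA", "shipA"), ("payB", "shipB")}

def n_conflicts(plan):
    chosen = set(plan.values())
    return sum(1 for a, b in CONFLICTS if a in chosen and b in chosen)

def repair(plan):
    """Minimal-conflict hill climbing: fix one task at a time."""
    for task in plan:
        best = min(CANDIDATES[task],
                   key=lambda s: n_conflicts({**plan, task: s}))
        plan[task] = best
    return plan

plan = {"pay": "payA", "ship": "shipA"}   # violates the (payA, shipA) pair
repaired = repair(plan)
print(repaired, "conflicts:", n_conflicts(repaired))
```

In a repair genetic algorithm, a step like this would be applied to each infeasible offspring after crossover and mutation, so that fitness (e.g. aggregated QoS) is only ever evaluated on conflict-free compositions.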

Relevance:

20.00%

Publisher:

Abstract:

Here, we demonstrate that efficient nano-optical couplers can be developed using closely spaced gap plasmon waveguides in the form of two parallel nano-sized rectangular slots in a thin metal film or membrane. Using rigorous numerical finite-difference and finite element algorithms, we investigate the physical mechanisms of coupling between two neighboring gap plasmon waveguides and determine typical coupling lengths for different structural parameters of the coupler. Special attention is focused on the analysis of the effect of such major coupler parameters as the thickness of the metal film/membrane, the slot width, and the separation between the plasmonic waveguides. A detailed physical interpretation of the obtained unusual dependencies of the coupling length on slot width and film thickness is presented, based upon energy considerations. The obtained results will be important for the optimization and experimental development of plasmonic sub-wavelength compact directional couplers and other nano-optical devices for integrated nanophotonics.
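The notion of coupling length can be connected to a textbook supermode estimate (this simple formula, with invented effective-index values, stands in for the paper's rigorous numerical results): full power transfer between two coupled guides occurs over L_c = lambda / (2 (n_even - n_odd)), where n_even and n_odd are the effective indices of the symmetric and antisymmetric supermodes, so stronger coupling (larger index splitting) means a shorter coupler.

```python
# Textbook directional-coupler estimate: coupling length from the beat
# between the even and odd supermodes of the two-slot structure.

def coupling_length(wavelength_nm, n_even, n_odd):
    """Coupling length in nm from supermode effective indices."""
    return wavelength_nm / (2 * (n_even - n_odd))

# closer slots -> stronger coupling -> larger index splitting -> shorter L_c
for n_even, n_odd, label in [(1.60, 1.55, "narrow separation"),
                             (1.58, 1.57, "wide separation")]:
    lc = coupling_length(800, n_even, n_odd)
    print(f"{label}: L_c = {lc:.0f} nm")
```

The rigorous simulations in the paper effectively compute how this index splitting, and hence L_c, depends on film thickness, slot width, and slot separation.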