937 results for area-based matching
Abstract:
The core aim of machine learning is to make a computer program learn from experience. Learning from data is usually defined as the task of learning regularities or patterns in data in order to extract useful information, or to learn the underlying concept. An important sub-field of machine learning is multi-view learning, where the task is to learn from multiple data sets or views describing the same underlying concept. Typical examples of such scenarios are studying a biological concept using several biological measurements, such as gene expression, protein expression and metabolic profiles, or classifying web pages based on their content and the contents of their hyperlinks. In this thesis, novel problem formulations and methods for multi-view learning are presented. The contributions include a linear data fusion approach for exploratory data analysis, a new measure to evaluate different kinds of representations for textual data, and an extension of multi-view learning to novel scenarios where the correspondence of samples between the different views or data sets is not known in advance. In order to infer the one-to-one correspondence of samples between two views, a novel concept of multi-view matching is proposed. The matching algorithm is completely data-driven and is demonstrated in several applications, such as matching metabolites between humans and mice and matching sentences between documents in two languages.
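The one-to-one multi-view matching described above can be illustrated with a minimal, purely data-driven sketch: assuming corresponding samples lie close together in a shared space, the correspondence minimising total cross-view distance is selected. This is an illustrative toy (brute force over permutations), not the thesis's actual algorithm; all data below are hypothetical.

```python
# Toy multi-view matching: infer a one-to-one correspondence between the
# samples of two views by minimising the total cross-view distance.
from itertools import permutations

def match_views(view_a, view_b):
    """Return the permutation p minimising sum(dist(a_i, b_p[i]))."""
    def dist(x, y):
        return sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    n = len(view_a)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(dist(view_a[i], view_b[perm[i]]) for i in range(n))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return list(best_perm)

# view_b is a shuffled, slightly noisy copy of view_a (order: a[1], a[2], a[0]).
a = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5)]
b = [(1.1, 0.9), (2.0, 0.6), (0.1, -0.1)]
print(match_views(a, b))  # [2, 0, 1]: a[0]->b[2], a[1]->b[0], a[2]->b[1]
```

Brute force is exponential in the number of samples; a practical system would use an assignment algorithm such as the Hungarian method on the same cost matrix.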
Abstract:
A performance-based liquefaction potential analysis was carried out in the present study to estimate the liquefaction return period for Bangalore, India, through a probabilistic approach. In this approach, the entire range of peak ground acceleration (PGA) values and earthquake magnitudes was used in the evaluation of the liquefaction return period. The seismic hazard analysis for the study area was carried out using a probabilistic approach to evaluate the peak horizontal acceleration at bedrock level. Based on the results of multichannel analysis of surface waves, the study area was found to belong to site class D. The PGA values for the study area were evaluated for site class D by considering local site effects. The soil resistance of the study area was characterized using standard penetration test (SPT) values obtained from 450 boreholes. These SPT data, together with the PGA values obtained from the probabilistic seismic hazard analysis, were used to evaluate the liquefaction return period for the study area. Contour plots showing the spatial variation of the factor of safety against liquefaction, and of the corrected SPT values required to prevent liquefaction for a return period of 475 years at depths of 3 and 6 m, are presented in this paper. The entire process of liquefaction potential evaluation, from the collection of earthquake data and identification of seismic sources to the evaluation of seismic hazard and the assessment of the liquefaction return period, was carried out within the probabilistic framework.
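For a sense of the deterministic core underlying such an analysis, the sketch below computes a cyclic stress ratio and a factor of safety in the simplified Seed-Idriss style; the study's full probabilistic treatment (integrating over PGA and magnitude) is not reproduced, and all input values are hypothetical.

```python
# Simplified Seed-Idriss style factor of safety against liquefaction.
def cyclic_stress_ratio(pga_g, sigma_v, sigma_v_eff, depth_m):
    # rd: stress reduction coefficient, linear approximation valid
    # for shallow depths (< ~9 m).
    rd = 1.0 - 0.00765 * depth_m
    return 0.65 * pga_g * (sigma_v / sigma_v_eff) * rd

def factor_of_safety(crr, csr):
    # CRR (cyclic resistance ratio) would come from corrected SPT values;
    # FS < 1 indicates liquefaction is expected.
    return crr / csr

# Hypothetical values for a point at 3 m depth: PGA 0.15 g, total and
# effective vertical stresses in kPa, CRR from an SPT correlation.
csr = cyclic_stress_ratio(pga_g=0.15, sigma_v=55.0, sigma_v_eff=40.0, depth_m=3.0)
fs = factor_of_safety(crr=0.18, csr=csr)
print(round(csr, 3), round(fs, 2))  # 0.131 1.37
```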
Abstract:
Multimedia mining primarily involves information analysis and retrieval based on implicit knowledge. The ever-increasing digital image databases on the Internet have created a need for multimedia mining on these databases for effective and efficient retrieval of images. The contents of an image can be expressed in different features such as Shape, Texture and Intensity-distribution (STI). Content-Based Image Retrieval (CBIR) is the efficient retrieval of relevant images from large databases based on features extracted from the images. Most existing systems concentrate either on a single representation of all features or on a linear combination of these features. This paper proposes a CBIR system named STIRF (Shape, Texture, Intensity-distribution with Relevance Feedback) that uses a neural network for a nonlinear combination of the heterogeneous STI features. Furthermore, the system is self-adaptable to different applications and users through relevance feedback. Prior to the retrieval of relevant images, each feature is first clustered independently of the others in its own space, which helps in matching similar images. Testing the system on a database of images with varied contents and intensive backgrounds gave good results, with the most relevant images being retrieved for an image query. The system showed better and more robust performance than existing CBIR systems.
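The per-feature similarity and nonlinear combination can be sketched as follows. The real STIRF system trains a neural network tuned by relevance feedback; here a fixed logistic squashing stands in for it, and all feature vectors, names and weights are made up for illustration.

```python
import math

def feature_similarity(q, f):
    # Per-feature similarity in (0, 1], derived from Euclidean distance.
    return 1.0 / (1.0 + math.dist(q, f))

def combined_score(query_feats, image_feats, weights):
    # Nonlinear (logistic) combination of per-feature similarities,
    # standing in for the trained neural network.
    s = sum(w * feature_similarity(q, f)
            for w, q, f in zip(weights, query_feats, image_feats))
    return 1.0 / (1.0 + math.exp(-s))

def rank(query_feats, database, weights):
    return sorted(database,
                  key=lambda name: combined_score(query_feats, database[name], weights),
                  reverse=True)

# Each image is described by three independent features (shape, texture,
# intensity distribution), each living in its own space.
db = {
    "sunset": ([1.0, 0.1], [0.5], [0.25]),
    "forest": ([0.0, 1.0], [0.9], [0.80]),
}
query = ([1.0, 0.0], [0.5], [0.20])
print(rank(query, db, weights=[1.0, 1.0, 1.0]))  # 'sunset' ranked first
```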
Abstract:
Template matching is concerned with measuring the similarity between patterns of two objects. This paper proposes a memory-based reasoning approach for pattern recognition of binary images with a large template set. Memory-based reasoning intrinsically requires a large database, and some binary image recognition problems inherently need large template sets; the recognition of Chinese characters, for example, needs thousands of templates. The proposed algorithm runs on the Connection Machine, the most massively parallel machine to date, and uses a multiresolution method to search for the matching template. The approach uses the pyramid data structure for the multiresolution representation of the templates and the input image pattern. For a given binary image it scans the template pyramid searching for a match. A binary image of N × N pixels can be matched by our algorithm in O(log N) time, independent of the number of templates. The implementation of the proposed scheme is described in detail.
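The coarse-to-fine search through a template pyramid can be sketched sequentially (the paper's implementation is massively parallel on the Connection Machine; this sketch only illustrates the data structure). The templates, sizes and pruning rule are toy assumptions.

```python
# Toy pyramid template matching for binary images.
def downsample(img):
    # OR-pooling: each 2x2 block of a binary image becomes one coarse pixel.
    n = len(img)
    return [[1 if (img[2*r][2*c] or img[2*r][2*c+1] or
                   img[2*r+1][2*c] or img[2*r+1][2*c+1]) else 0
             for c in range(n // 2)] for r in range(n // 2)]

def pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr  # index 0 = full resolution, last = coarsest

def hamming(a, b):
    return sum(x != y for row_a, row_b in zip(a, b) for x, y in zip(row_a, row_b))

def best_match(image, templates, levels=2):
    # Compare at the coarsest level first and prune, then refine.
    img_pyr = pyramid(image, levels)
    tpl_pyrs = [pyramid(t, levels) for t in templates]
    candidates = list(range(len(templates)))
    for level in range(levels - 1, -1, -1):  # coarsest first
        dists = {i: hamming(img_pyr[level], tpl_pyrs[i][level]) for i in candidates}
        best = min(dists.values())
        candidates = [i for i in candidates if dists[i] == best]
    return candidates[0]

t0 = [[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]]
t1 = [[0,0,1,1],[0,0,1,1],[0,0,0,0],[0,0,0,0]]
print(best_match(t1, [t0, t1]))  # 1
```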
Abstract:
The first-line medication for mild to moderate Alzheimer's disease (AD) is based on cholinesterase inhibitors, which prolong the effect of the neurotransmitter acetylcholine in cholinergic nerve synapses and thereby relieve the symptoms of the disease. Evidence implicating cholinesterases in disease-modifying processes has increased interest in this research area. Drug discovery and development is a long and expensive process that takes on average 13.5 years and costs approximately 0.9 billion US dollars. Drug attrition in the clinical phases is common for several reasons, e.g., poor bioavailability of compounds leading to low efficacy, or toxic effects. Thus, improvements in the early drug discovery process are needed to create highly potent, non-toxic compounds with predicted drug-like properties. Nature has been a good source for the discovery of new medicines, accounting for around half of the new drugs approved to market during the last three decades. These compounds are direct isolates from nature, their synthetic derivatives, or natural mimics. Synthetic chemistry is an alternative way to produce compounds for drug discovery purposes. Both sources have pros and cons. The screening of new bioactive compounds in vitro is based on assaying compound libraries against targets. The assay set-up has to be adapted and validated for each screen to produce high-quality data. Depending on the size of the library, miniaturization and automation are often required to reduce solvent and compound amounts and to speed up the process. In this contribution, natural extract, natural pure compound and synthetic compound libraries were assessed as sources of new bioactive compounds. The libraries were screened primarily for acetylcholinesterase inhibitory effect and secondarily for butyrylcholinesterase inhibitory effect.
To be able to screen the libraries, two assays were evaluated as screening tools and adapted to be compatible with the special features of each library. The assays were validated to ensure high-quality data. Cholinesterase inhibitors with various potencies and selectivities were found in both the natural product and the synthetic compound libraries, which indicates that the two sources complement each other. It is acknowledged that natural compounds differ structurally from compounds in synthetic compound libraries, which further supports the view of complementarity, especially if high structural diversity is the criterion for the selection of compounds in a library.
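One widely used criterion for the kind of assay validation described above is the Z' factor of Zhang et al., which compares the separation of positive and negative control signals; whether this particular metric was used in this work is an assumption, and the control readings below are hypothetical.

```python
import statistics

def z_prime(pos_controls, neg_controls):
    # Z' factor (Zhang et al., 1999): a standard screening-assay quality
    # metric; Z' > 0.5 is conventionally taken to indicate an excellent assay.
    mu_p, mu_n = statistics.mean(pos_controls), statistics.mean(neg_controls)
    sd_p, sd_n = statistics.stdev(pos_controls), statistics.stdev(neg_controls)
    return 1.0 - 3.0 * (sd_p + sd_n) / abs(mu_p - mu_n)

# Hypothetical % inhibition readings: full-inhibition vs. no-inhibitor wells.
pos = [95.0, 97.0, 96.0, 94.0]
neg = [5.0, 6.0, 4.0, 5.0]
print(round(z_prime(pos, neg), 3))  # 0.930
```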
Assessment of insect occurrence in boreal forests based on satellite imagery and field measurements.
Abstract:
The presence/absence data of twenty-seven forest insect taxa (e.g. Retinia resinella, Formica spp., Pissodes spp., several scolytids) and recorded environmental variation were used to investigate the applicability of modelling insect occurrence based on satellite imagery. The sampling was based on 1800 sample plots (25 m by 25 m) placed along the sides of 30 equilateral triangles (side 1 km) in a fragmented forest area (approximately 100 km²) in Evo, S Finland. The triangles were overlaid on land use maps interpreted from satellite images (Landsat TM 30 m multispectral scanner imagery, 1991) and digitized geological maps. Insect occurrence was explained using either environmental variables measured in the field or those interpreted from the land use and geological maps. The fit of the logistic regression models varied between species, possibly because some species are associated with the characteristics of single trees and others with stand characteristics. The occurrence of at least certain insect species, especially those associated with Scots pine, could be assessed relatively accurately and indirectly on the basis of satellite imagery and geological maps. Models based on both remotely sensed and geological data predicted the distribution of forest insects better, except in the case of Xylechinus pilosus, Dryocoetes sp. and Trypodendron lineatum, where the differences were relatively small and in favour of the models based on field measurements. The number of species was related to habitat compartment size and distance from the habitat edge calculated from the land use maps, but the logistic regressions suggested that other environmental variables generally masked the effect of these variables on species occurrence at the present scale.
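A minimal version of such a logistic occurrence model could look like the following sketch, with hypothetical predictors (pine cover interpreted from imagery, distance to the habitat edge) standing in for the study's actual variables.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    # Plain stochastic gradient ascent on the log-likelihood;
    # w[0] is the intercept, w[1:] the coefficients.
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

# Hypothetical plot data: [pine cover fraction, distance to edge (km)]
# vs. presence (1) / absence (0) of a pine-associated species.
X = [[0.9, 0.4], [0.8, 0.5], [0.7, 0.1], [0.2, 0.3], [0.1, 0.6], [0.3, 0.2]]
y = [1, 1, 1, 0, 0, 0]
w = fit_logistic(X, y)
p = sigmoid(w[0] + w[1] * 0.85 + w[2] * 0.4)  # predict for a pine-rich plot
print(p > 0.5)
```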
Abstract:
The purpose of this study was to establish the palaeoenvironmental conditions during the late Quaternary in Murchisonfjorden, Nordaustlandet, based on foraminiferal assemblage compositions, and to determine the onset and termination of the Weichselian glaciations. The foraminiferal assemblage compositions were studied in marine sediments from three different archives: sections next to the present shoreline in the Bay of Isvika, a core from the Bay of Isvika, and a core from Lake Einstaken. OSL and AMS 14C age determinations were performed on samples from the three archives, and the results show deposition of marine sediments during ice-free periods of the Early Weichselian, the Middle Weichselian and the Late Weichselian, as well as during the Holocene, in the investigated area. Marine sediments from the Early and Middle Weichselian were sampled from isostatically uplifted sections along the present shoreline. Sediments from the transition from the Late Weichselian to the early Holocene were found at the bottom of the core from Lake Einstaken. Holocene sediments were investigated in the sections and in the core from the Bay of Isvika. The marine sediments from the sections comprise five benthic foraminiferal assemblages. The Early Weichselian is represented by two foraminiferal assemblages; the Middle Weichselian, the early Holocene and the late Holocene by one each. All five foraminiferal assemblages were deposited in glacier-distal shallow-water environments with a connection to the open ocean. Changes in the composition of the assemblages can be ascribed to differences in the bottom-water currents and changes in salinity. The Middle Weichselian assemblage is of special importance, because it is the first foraminiferal assemblage described from this time interval on Svalbard.
Four benthic foraminiferal assemblages were deposited shortly before the marine to lacustrine transition at the boundary between the Late Weichselian and the Holocene in Lake Einstaken. These foraminiferal assemblages show a change from a high-arctic, normal marine shallow-water environment to an even shallower environment with highly fluctuating salinity. The analysis of the core from 100 m water depth in the Bay of Isvika resulted in the determination of four foraminiferal assemblages. These indicate changes from a glacier-proximal environment during deglaciation to a more glacier-distal environment during the early Holocene. This was followed by a marked change to a considerably cooler environment and finally to a closed-fjord environment in middle and late Holocene times. Additional sedimentological analyses of the marine and glacially derived sediments from the uplifted sections, together with observations of multiple striae and deep weathering on the bedrock and findings of tills interlayered with marine sediments, complete the investigations in the study area. They indicate weak glacial erosion in the study area. It can be concluded that marine deposition occurred in the investigated area during three time intervals in the Weichselian and during most of the Holocene. The foraminiferal assemblages in the Holocene are characterized by a transition from glacier-proximal to glacier-distal faunas. The palaeogeographical change from an open-fjord to a closed-fjord environment is a result of the isostatic uplift of the area after the LGM and is clearly reflected in the foraminiferal assemblages. Another factor influencing the foraminiferal assemblage composition is the changing inflow of warmer Atlantic waters to the study area.
Abstract:
New chemical entities with unfavorable water solubility properties are continuously emerging in drug discovery. Without pharmaceutical manipulation, insufficient concentrations of these drugs in the systemic circulation are probable. Typically, in order to be absorbed from the gastrointestinal tract, a drug has to be dissolved. Several methods have been developed to improve the dissolution of poorly soluble drugs. In this study, the applicability of different types of mesoporous (pore diameters between 2 and 50 nm) silicon- and silica-based materials as pharmaceutical carriers for poorly water-soluble drugs was evaluated. Thermally oxidized and carbonized mesoporous silicon materials, the ordered mesoporous silicas MCM-41 and SBA-15, and non-treated mesoporous silicon and silica gel were assessed in the experiments. The characteristic properties of these materials are their narrow pore diameters and their large surface areas of up to over 900 m²/g. Loading poorly water-soluble drugs into these pores restricts their crystallization and thus improves drug dissolution from the materials as compared to the bulk drug. In addition, the large surface area provides possibilities for interactions between the loaded substance and the carrier particle, allowing stabilization of the system. Ibuprofen, indomethacin and furosemide were selected as poorly soluble model drugs in this study. Their solubilities are strongly pH-dependent and are poorest (< 100 µg/ml) at low pH values. The pharmaceutical performance of the studied materials was evaluated by several methods. In this work, drug loading was performed successfully using rotavapor and fluid bed equipment, on a larger scale and in a more efficient manner than with the commonly used immersion methods. It was shown that several carrier particle properties, in particular the pore diameter, affect the loading efficiency (typically ~25-40 w-%) and the release rate of the drug from the mesoporous carriers.
A wide pore diameter provided easier loading and faster release of the drug. The ordering and length of the pores also affected the efficiency of drug diffusion. However, these properties can also compensate for each other's effects. The surface treatment of porous silicon was important in stabilizing the system, as non-treated mesoporous silicon was easily oxidized at room temperature. Different surface chemical treatments changed the hydrophilicity of the porous silicon materials as well as the potential interactions between the loaded drug and the particle, which further affected the drug release properties. In all of the studies, it was demonstrated that loading into mesoporous silicon and silica materials improved the dissolution of the poorly soluble drugs as compared to the corresponding bulk compounds (e.g. after 30 min, ~2-7 times more drug was dissolved, depending on the material). The release profiles of the loaded substances remained similar even after 3 months of storage at 30°C/56% RH. Thermally carbonized mesoporous silicon did not compromise Caco-2 monolayer integrity in the permeation studies, and improved drug permeability was observed. The loaded mesoporous silica materials were also successfully compressed into tablets without compromising their characteristic structural and drug-releasing properties. The results of this research indicate that mesoporous silicon- and silica-based materials are promising for improving the dissolution of poorly water-soluble drugs. Their feasibility in pharmaceutical laboratory-scale processes was also confirmed in this thesis.
Abstract:
We present a laser-based system to measure the refractive index of air over a long path length. In optical distance measurements it is essential to know the refractive index of air with high accuracy. Commonly, the refractive index of air is calculated from the properties of the ambient air using either the Ciddor or the Edlén equations, where the dominant uncertainty component is in most cases the air temperature. The method developed in this work utilises direct absorption spectroscopy of oxygen to measure the average temperature of air, and of water vapor to measure relative humidity. The method allows measurement of temperature and humidity over the same beam path as the optical distance measurement, providing spatially well-matching data. Indoor and outdoor measurements demonstrate the effectiveness of the method. In particular, we demonstrate effective compensation of the refractive index of air in an interferometric length measurement at a time-variant and spatially non-homogeneous temperature over a long time period. Further, we were able to demonstrate 7 mK RMS noise over a 67 m path length using a 120 s sample time. To our knowledge, this is the best temperature precision reported for a spectroscopic temperature measurement.
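A simplified Edlén-type approximation shows how the measured temperature, pressure and humidity feed the refractive index correction of a length measurement. The constants below come from a common approximate formula (accurate to roughly 1e-7), not the full Ciddor/Edlén equations the paper uses, and all measurement values are hypothetical.

```python
def refractive_index_air(p_kpa, t_c, rh_percent):
    # Simplified Edlén-type approximation for standard laser wavelengths;
    # the full Ciddor/Edlén equations add wavelength and CO2 terms.
    return (1.0 + 7.86e-4 * p_kpa / (273.0 + t_c)
            - 1.5e-11 * rh_percent * (t_c ** 2 + 160.0))

# Correct an interferometric distance using the spectroscopically measured
# average temperature along the beam path (values hypothetical).
optical_length = 67.000123  # metres, as counted in vacuum wavelengths
n = refractive_index_air(p_kpa=101.3, t_c=20.0, rh_percent=45.0)
geometric_length = optical_length / n
print(f"{n:.8f} {geometric_length:.6f}")
```

The point of the spectroscopic method is that the temperature entering this formula is a path average over the same beam, not a point reading from a wall-mounted sensor.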
Abstract:
Atomic layer deposition (ALD) is a method for depositing thin films from gaseous precursors onto a substrate layer by layer, so that the film thickness can be tailored with atomic layer accuracy. Film tailoring is taken even further with selective-area ALD, which enables the film growth to be controlled also across the substrate surface. Selective-area ALD reduces the number of process steps in preparing thin-film devices, which can be of great technological importance as ALD films come into wider use in different applications. Selective-area ALD can be achieved by passivation or activation of a surface. In this work ALD growth was prevented by octadecyltrimethoxysilane, octadecyltrichlorosilane and 1-dodecanethiol SAMs, and by PMMA (poly(methyl methacrylate)) and PVP (poly(vinyl pyrrolidone)) polymer films. SAMs were prepared from the vapor phase and by microcontact printing, and polymer films were spin coated. Microcontact printing created patterned SAMs directly. The SAMs prepared from the vapor phase and the polymer mask layers were patterned by UV lithography or a lift-off process, so that after preparation of a continuous mask layer selected areas of it were removed; on these areas the ALD film was deposited selectively. SAMs and polymer films prevented growth in several ALD processes, such as iridium, ruthenium, platinum, TiO2 and polyimide, so that the ALD films grew only on areas without a SAM or polymer mask layer. PMMA and PVP films also protected the surface against Al2O3 and ZrO2 growth. Activation of the surface for ALD of ruthenium was achieved by preparing a RuOx layer by microcontact printing. At low temperatures the RuCp2-O2 process nucleated only on this oxidative activation layer and not on bare silicon.
Abstract:
A new series of substituted perovskites of the type LaCr1−xMxO3−δ, where M = Cu or Mg, has been synthesised by the citrate gel process and characterized by means of powder X-ray diffraction, infrared spectroscopy, selected area diffraction and electron paramagnetic resonance spectroscopy. The general powder morphology was also observed using scanning electron microscopy. Substitution of 40 mole percent of Cr3+ by Cu2+ or Mg2+ has been shown to result in a single-phase perovskite structure. Beyond x = 0.5, a new phase has been identified in a narrow compositional range. The effect of Cu and Mg substitution on the sinterability of pure LaCrO3 has also been studied. With copper substitution it is possible to obtain near-theoretically dense materials in air at a temperature as low as 1200°C.
Abstract:
High-performance video standards use prediction techniques to achieve high picture quality at low bit rates. The type of prediction determines the bit rate and the image quality. Intra prediction achieves high video quality with a significant reduction in bit rate. This paper presents an area-optimized architecture for intra prediction for H.264 decoding at HDTV resolution, with a target of 60 fps; the architecture achieves a frame rate of 64 fps. The architecture was validated on a Virtex-5 FPGA based platform. It is based on a multi-level memory hierarchy to reduce latency and ensure optimal resource utilization, and it removes redundancy by reusing the same functional blocks across different modes. The proposed architecture uses only 13% of the total LUTs available on the Xilinx XC5VLX50T FPGA.
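Of the intra prediction modes such an architecture implements, the 4x4 DC mode is the simplest to show in code. The sketch below follows the H.264 prediction rule (average of the available neighbouring samples with rounding); it is a behavioural model for illustration only, not the FPGA architecture itself.

```python
def intra_4x4_dc(top, left):
    # H.264 Intra_4x4 DC prediction: average the 4 reconstructed samples
    # above and the 4 to the left of the block, with rounding.
    if top and left:
        dc = (sum(top) + sum(left) + 4) >> 3
    elif top:
        dc = (sum(top) + 2) >> 2
    elif left:
        dc = (sum(left) + 2) >> 2
    else:
        dc = 128  # no neighbours available (mid-grey for 8-bit samples)
    return [[dc] * 4 for _ in range(4)]

print(intra_4x4_dc(top=[100, 102, 98, 100], left=[96, 98, 100, 102]))
# every predicted sample is (400 + 396 + 4) >> 3 = 100
```

In hardware, the interest lies in how the neighbouring samples are buffered in the memory hierarchy so that this computation never stalls the 4x4 block pipeline.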
Abstract:
Reactions between the various species in the slag and metal phases are usually mass transfer controlled. There have been continuous efforts to increase the reaction efficiency in slag-metal systems, especially during the decarburization of steel to produce ultra-low carbon steel (ULCS) in secondary steelmaking. It has been found that the surface reaction is the dominant factor in the final stage of decarburization, while in the initial stage the inner-site reaction is the major factor in the refining process. The mixing of the bath affects the latter (inner-site) reaction, whereas the former (surface) reaction is affected by the plume area at the top of the metal surface. Therefore, a computational study has been made of the fluid dynamics of a new secondary steelmaking process called the Revolutionary Degasser Activator (REDA), in order to study bath mixing and plume area. The REDA process was chosen because it is claimed to reduce the carbon content in steel below 10 ppm in less time than other existing processes such as the RH and tank degassers. This study shows that both bath mixing and plume area are increased in the REDA process, enabling it to reach the desired carbon content in less time. Qualitative comments on the slag-metal reaction system are made based on this finding.
Abstract:
In the area of testing communication systems, the interfaces between the systems to be tested and their testers have a great impact on test generation and fault detectability. Several types of such interfaces have been standardized by the International Organization for Standardization (ISO). A general distributed test architecture, containing distributed interfaces, has been presented in the literature for testing distributed systems based on the Open Distributed Processing (ODP) Basic Reference Model (BRM); it is a generalized version of the ISO distributed test architecture. In this paper we study the issue of test selection with respect to such a test architecture. In particular, we consider communication systems that can be modeled by finite state machines with several distributed interfaces, called ports. A test generation method is developed for generating test sequences for such finite state machines, based on the idea of synchronizable test sequences. Starting from the initial effort by Sarikaya, a certain amount of work has been done on generating test sequences for finite state machines with respect to the ISO distributed test architecture, all based on the idea of modifying existing test generation methods to generate synchronizable test sequences. However, none of these works studies the fault coverage provided by its method. We investigate the issue of fault coverage and point out that the methods given in the literature for the distributed test architecture cannot ensure the same fault coverage as the corresponding original testing methods. We also study the limitations of fault detectability in the distributed test architecture.
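The synchronizability condition on multi-port test sequences can be checked with a small sketch: the tester that supplies each input must have taken part in the preceding transition (sent its input or received one of its outputs), otherwise it cannot know when to act. The transition encoding below is an assumption made for illustration.

```python
def is_synchronizable(seq):
    """seq: list of transitions (input_port, output_ports).

    A multi-port test sequence is synchronizable if, for every pair of
    consecutive transitions, the port sending the next input was involved
    in the previous transition; otherwise a synchronization problem occurs.
    """
    for prev, curr in zip(seq, seq[1:]):
        involved = {prev[0]} | set(prev[1])
        if curr[0] not in involved:
            return False
    return True

# Two-port example with an upper tester 'U' and a lower tester 'L'.
ok  = [('U', ['U', 'L']), ('L', ['L'])]  # L observed an output, may send next
bad = [('U', ['U']), ('L', ['L'])]       # L observed nothing: sync problem
print(is_synchronizable(ok), is_synchronizable(bad))  # True False
```

Test generation methods for the distributed architecture restrict their search to sequences passing exactly this check, which is why their fault coverage can fall short of the original single-tester methods.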
Abstract:
Service discovery is vital in ubiquitous applications, where a large number of devices and software components collaborate unobtrusively and provide numerous services without user intervention. Existing service discovery schemes use a service matching process in order to offer services of interest to the users. Potentially, the context information of the users and the surrounding environment can be used to improve the quality of service matching. To make use of context information in service matching, a service discovery technique needs to address certain challenges. Firstly, the context information must have an unambiguous representation. Secondly, the devices in the environment must be able to disseminate high-level and low-level context information seamlessly across different networks. Thirdly, the dynamic nature of the context information must be taken into account. We propose a C-IOB (Context-Information, Observation and Belief) based service discovery model which deals with the above challenges by processing the context information and formulating beliefs based on the observations. With these formulated beliefs, the required services are provided to the users. The method has been tested with a typical ubiquitous museum guide application over different cases. The simulation results show that the method is time-efficient and are quite encouraging.
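A toy rendering of the pipeline (context information, observations, beliefs, service matching) might look as follows; the aggregation rule, thresholds and all names are hypothetical, chosen only to illustrate belief-based filtering, and do not reproduce the actual C-IOB model.

```python
# Hypothetical belief-driven service matching in the spirit of C-IOB.
def form_beliefs(observations, threshold=0.6):
    # A proposition is believed when its supporting observations are,
    # on average, confident enough.
    return {prop: sum(support) / len(support) >= threshold
            for prop, support in observations.items()}

def match_services(services, beliefs):
    # A service matches when every context proposition it requires is believed.
    return [name for name, required in services.items()
            if all(beliefs.get(p, False) for p in required)]

# Museum-guide style example: proposition -> confidences of observations.
observations = {
    "visitor_near_exhibit": [0.9, 0.8, 0.7],
    "audio_allowed": [0.2, 0.4],
}
services = {
    "audio_guide": ["visitor_near_exhibit", "audio_allowed"],
    "text_guide": ["visitor_near_exhibit"],
}
beliefs = form_beliefs(observations)
print(match_services(services, beliefs))  # ['text_guide']
```

Because beliefs are recomputed as new observations arrive, the set of matched services tracks the dynamic context rather than a static service description.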