876 results for Engineering, Electronics and Electrical | Computer Science


Relevance:

100.00%

Publisher:

Abstract:

We have used various computational methodologies, including molecular dynamics, density functional theory, virtual screening, ADMET predictions and molecular interaction field studies, to design and analyze four novel potential inhibitors of farnesyltransferase (FTase). Evaluation of two of the proposals, with respect to both their drug and lead potential, indicates that they are promising novel FTase inhibitors with theoretically interesting pharmacotherapeutic profiles when compared to the most active and most cited FTase inhibitors with reported activity data, which are launched drugs or compounds in clinical trials. One of the two proposals appears to be the more promising drug candidate and FTase inhibitor, but both derivative molecules indicate potentially very good pharmacotherapeutic profiles in comparison with Tipifarnib and Lonafarnib, two reference pharmaceuticals. Two further proposals were selected by virtual screening approaches and investigated by LIS; these suggest novel and alternative scaffolds for designing future potential FTase inhibitors. Such compounds can be explored as promising molecules with which to initiate a research protocol for discovering novel anticancer drug candidates targeting farnesyltransferase, in the fight against cancer. (C) 2009 Elsevier Inc. All rights reserved.

The demand for more pixels is beginning to be met as manufacturers increase the native resolution of projector chips. Tiling several projectors still offers a solution to augment the pixel capacity of a display. However, problems of color and illumination uniformity across projectors need to be addressed as well as the computer software required to drive such devices. We present the results obtained on a desktop-size tiled projector array of three D-ILA projectors sharing a common illumination source. A short throw lens (0.8:1) on each projector yields a 21-in. diagonal for each image tile; the composite image on a 3×1 array is 3840×1024 pixels with a resolution of about 80 dpi. The system preserves desktop resolution, is compact, and can fit in a normal room or laboratory. The projectors are mounted on precision six-axis positioners, which allow pixel level alignment. A fiber optic beamsplitting system and a single set of red, green, and blue dichroic filters are the key to color and illumination uniformity. The D-ILA chips inside each projector can be adjusted separately to set or change characteristics such as contrast, brightness, or gamma curves. The projectors were then matched carefully: photometric variations were corrected, leading to a seamless image. Photometric measurements were performed to characterize the display and are reported here. This system is driven by a small PC cluster fitted with graphics cards and running Linux. It can be scaled to accommodate an array of 2×3 or 3×3 projectors, thus increasing the number of pixels of the final image. Finally, we present current uses of the display in fields such as astrophysics and archaeology (remote sensing).
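The geometry reported above can be checked with a few lines of arithmetic. This sketch assumes each tile is 1280×1024 pixels (inferred from the 3×1 composite of 3840×1024) and derives the composite size and the approximate dpi from the 21-inch tile diagonal:

```python
import math

# Assumed tile resolution (inferred from the 3840x1024 composite of a 3x1 array).
tile_px_w, tile_px_h = 1280, 1024
tiles_x, tiles_y = 3, 1
diag_in = 21.0  # 21-inch diagonal per image tile

composite_w = tile_px_w * tiles_x  # 3840
composite_h = tile_px_h * tiles_y  # 1024

# Physical tile width from the diagonal and the 5:4 aspect ratio,
# then dots per inch along the horizontal axis.
aspect = tile_px_w / tile_px_h  # 1.25
tile_w_in = diag_in / math.sqrt(1 + 1 / aspect**2)
dpi = tile_px_w / tile_w_in

print(composite_w, composite_h, round(dpi, 1))  # roughly 78 dpi, i.e. "about 80 dpi"
```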

We present a review of perceptual image quality metrics and their application to still image compression. The review describes how image quality metrics can be used to guide an image compression scheme and outlines the advantages, disadvantages and limitations of a number of quality metrics. We examine a broad range of metrics ranging from simple mathematical measures to those which incorporate full perceptual models. We highlight some variation in the models for luminance adaptation and the contrast sensitivity function and discuss what appears to be a lack of a general consensus regarding the models which best describe contrast masking and error summation. We identify how the various perceptual components have been incorporated in quality metrics, and identify a number of psychophysical testing techniques that can be used to validate the metrics. We conclude by illustrating some of the issues discussed throughout the paper with a simple demonstration. (C) 1998 Elsevier Science B.V. All rights reserved.
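To make the contrast concrete, here is a minimal sketch of two of the "simple mathematical measures" such a review compares against perceptual models: mean squared error (MSE) and peak signal-to-noise ratio (PSNR). The 8-bit test images are synthetic assumptions, not data from the paper:

```python
import numpy as np

def mse(a, b):
    # Mean squared error between two images of equal shape.
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    # Peak signal-to-noise ratio in dB; infinite for identical images.
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * np.log10(peak**2 / e)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.integers(-5, 6, size=(64, 64))
noisy = np.clip(original.astype(int) + noise, 0, 255).astype(np.uint8)
print(round(psnr(original, noisy), 1))
```

Measures like these are cheap but, as the review argues, correlate poorly with perceived quality because they ignore luminance adaptation, contrast sensitivity, and masking.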

The task of segmenting cell nuclei from cytoplasm in conventional Papanicolaou (Pap) stained cervical cell images is a classical image analysis problem which may prove to be crucial to the development of successful systems which automate the analysis of Pap smears for detection of cancer of the cervix. Although simple thresholding techniques will extract the nucleus in some cases, accurate unsupervised segmentation of very large image databases is elusive. Conventional active contour models as introduced by Kass, Witkin and Terzopoulos (1988) offer a number of advantages in this application, but suffer from the well-known drawbacks of initialisation and minimisation. Here we show that a Viterbi search-based dual active contour algorithm is able to overcome many of these problems and achieve over 99% accurate segmentation on a database of 20 130 Pap stained cell images. (C) 1998 Elsevier Science B.V. All rights reserved.
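As a point of reference for the "simple thresholding" baseline the abstract mentions, the sketch below applies Otsu's method (a common unsupervised threshold choice, not the paper's Viterbi dual-contour algorithm) to a synthetic image of a dark nucleus on brighter cytoplasm:

```python
import numpy as np

def otsu_threshold(image):
    # Exhaustive search for the threshold maximizing between-class variance.
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic cell image: dark "nucleus" disc (grey level 60) on brighter
# "cytoplasm" (grey level 180) -- an idealization of a Pap-stained cell.
yy, xx = np.mgrid[:64, :64]
img = np.full((64, 64), 180, dtype=np.uint8)
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 60
t = otsu_threshold(img)
nucleus_mask = img < t
```

On this idealized image the threshold is trivial; the abstract's point is precisely that real Pap-smear images (variable stain, overlapping cells, debris) defeat such global thresholds, motivating the active-contour approach.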

In this paper, we describe a model of the human visual system (HVS) based on the wavelet transform. This model is largely based on a previously proposed model, but has a number of modifications that make it more amenable to potential integration into a wavelet based image compression scheme. These modifications include the use of a separable wavelet transform instead of the cortex transform, the application of a wavelet contrast sensitivity function (CSF), and a simplified definition of subband contrast that allows us to predict noise visibility directly from wavelet coefficients. Initially, we outline the luminance, frequency, and masking sensitivities of the HVS and discuss how these can be incorporated into the wavelet transform. We then outline a number of limitations of the wavelet transform as a model of the HVS, namely the lack of translational invariance and poor orientation sensitivity. In order to investigate the efficacy of this wavelet based model, a wavelet visible difference predictor (WVDP) is described. The WVDP is then used to predict visible differences between an original and compressed (or noisy) image. Results are presented to emphasize the limitations of commonly used measures of image quality and to demonstrate the performance of the WVDP. The paper concludes with suggestions on how the WVDP can be used to determine a visually optimal quantization strategy for wavelet coefficients and produce a quantitative measure of image quality.
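The idea of reading subband contrast directly off wavelet coefficients can be sketched with a one-level 2-D Haar transform: detail coefficients normalized by the co-located approximation (local luminance). The normalization used here is an illustrative assumption, not the paper's exact contrast definition:

```python
import numpy as np

def haar2d(x):
    # One-level separable 2-D Haar transform (average/difference filters),
    # returning approximation (LL) and detail (LH, HL, HH) subbands.
    x = x.astype(np.float64)
    a = (x[0::2, :] + x[1::2, :]) / 2.0  # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0  # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

rng = np.random.default_rng(1)
image = rng.uniform(50, 200, size=(8, 8))
ll, lh, hl, hh = haar2d(image)

# Subband contrast sketch: detail magnitude relative to local mean luminance,
# so visibility thresholds can be applied coefficient by coefficient.
eps = 1e-9
contrast_lh = np.abs(lh) / (ll + eps)
```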

We present a numerical methodology for the study of convective pore-fluid, thermal and mass flow in fluid-saturated porous rock basins. In particular, we investigate the occurrence and distribution pattern of temperature gradient driven convective pore-fluid flow and hydrocarbon transport in the Australian North West Shelf basin. The related numerical results have demonstrated that: (1) The finite element method combined with the progressive asymptotic approach procedure is a useful tool for dealing with temperature gradient driven pore-fluid flow and mass transport in fluid-saturated hydrothermal basins; (2) Convective pore-fluid flow generally becomes focused in more permeable layers, especially when the layers are thick enough to accommodate the appropriate convective cells; (3) Large dislocation of strata has a significant influence on the distribution patterns of convective pore-fluid flow, thermal flow and hydrocarbon transport in the North West Shelf basin; (4) As a direct consequence of the formation of convective pore-fluid cells, the hydrocarbon concentration is highly localized in the range bounded by two major faults in the basin.
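A quick way to see when temperature-gradient-driven convection can occur at all is the porous-medium Rayleigh number compared against the classical critical value 4π² for a horizontal layer heated from below. This is a textbook onset criterion, not the paper's finite element method, and every property value below is an illustrative assumption rather than North West Shelf data:

```python
import math

# Porous-medium Rayleigh number: Ra = rho*g*beta*dT*k*H / (mu*kappa_e).
# All values are illustrative assumptions for a permeable sedimentary layer.
rho = 1000.0      # pore-fluid density, kg/m^3
g = 9.81          # gravitational acceleration, m/s^2
beta = 2.1e-4     # fluid thermal expansion coefficient, 1/K
dT = 100.0        # temperature difference across the layer, K
k = 1e-13         # permeability, m^2
H = 5000.0        # layer thickness, m
mu = 1e-3         # dynamic viscosity, Pa*s
kappa_e = 1e-6    # effective thermal diffusivity, m^2/s

Ra = rho * g * beta * dT * k * H / (mu * kappa_e)
Ra_critical = 4 * math.pi ** 2  # onset of convection, horizontal layer
convects = Ra > Ra_critical
print(round(Ra, 1), convects)
```

The dependence of Ra on permeability and thickness mirrors finding (2) above: convection focuses in layers that are both permeable and thick enough.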

This paper is devoted to the problems of finding the load flow feasibility, saddle node, and Hopf bifurcation boundaries in the space of power system parameters. The first part contains a review of the existing relevant approaches including not-so-well-known contributions from Russia. The second part presents a new robust method for finding the power system load flow feasibility boundary on the plane defined by any three vectors of dependent variables (nodal voltages), called the Delta plane. The method exploits some quadratic and linear properties of the load flow equations and state matrices written in rectangular coordinates. An advantage of the method is that it does not require an iterative solution of nonlinear equations (except the eigenvalue problem). In addition to benefits for visualization, the method is a useful tool for topological studies of power system multiple solution structures and stability domains. Although the power system application is developed, the method can be equally efficient for any quadratic algebraic problem.
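The quadratic structure the method exploits is easy to exhibit: with bus voltages written in rectangular coordinates, V_i = e_i + j·f_i, the complex power injections S_i = V_i · conj(Σ_k Y_ik V_k) are quadratic in (e, f). The toy 2-bus system below (slack bus plus one line; all numbers are illustrative assumptions) demonstrates this by checking that scaling all voltages by α scales every injection by α²:

```python
import numpy as np

# Toy 2-bus network: slack bus 1 and bus 2 joined by a line of
# series admittance y (illustrative per-unit value).
y = 1.0 - 5.0j
Y = np.array([[y, -y],
              [-y, y]])            # bus admittance matrix

e = np.array([1.0, 0.95])          # real parts of bus voltages (rectangular coords)
f = np.array([0.0, -0.05])         # imaginary parts
V = e + 1j * f

# Complex power injections: quadratic forms in the rectangular variables (e, f).
S = V * np.conj(Y @ V)
P, Q = S.real, S.imag
```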

This note considers continuous-time Markov chains whose state space consists of an irreducible class, C, and an absorbing state which is accessible from C. The purpose is to provide results on mu-invariant and mu-subinvariant measures where absorption occurs with probability less than one. In particular, the well-known premise that the mu-invariant measure, m, for the transition rates be finite is replaced by the more natural premise that m be finite with respect to the absorption probabilities. The relationship between mu-invariant measures and quasi-stationary distributions is discussed. (C) 2000 Elsevier Science Ltd. All rights reserved.
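For orientation, the standard definition being generalized can be written out. With Q = (q_{ij}) the matrix of transition rates restricted to the irreducible class C, a measure m = (m_j, j ∈ C) is μ-invariant for Q if (in this sketch of the usual formulation; the note's precise conditions may differ)

```latex
\sum_{i \in C} m_i \, q_{ij} \;=\; -\mu \, m_j , \qquad j \in C,
```

and μ-subinvariant if equality is relaxed to

```latex
\sum_{i \in C} m_i \, q_{ij} \;\le\; -\mu \, m_j , \qquad j \in C.
```

The note's refinement replaces the finiteness condition \(\sum_{i \in C} m_i < \infty\) by finiteness weighted by the absorption probabilities \(a_i\), i.e. \(\sum_{i \in C} m_i a_i < \infty\), which is the natural condition when absorption occurs with probability less than one.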

The reported experimental work on the systems Fe-Zn-O and Fe-Zn-Si-O in equilibrium with metallic iron is part of a wider research program that combines experimental and thermodynamic computer modeling techniques to characterize zinc/lead industrial slags and sinters in the system PbO-ZnO-SiO2-CaO-FeO-Fe2O3. Extensive experimental investigations using high-temperature equilibration and quenching techniques followed by electron probe X-ray microanalysis (EPMA) were carried out. Special experimental procedures were developed to enable accurate measurements in these ZnO-containing systems to be performed in equilibrium with metallic iron. The systems Fe-Zn-O and Fe-Zn-Si-O were experimentally investigated in equilibrium with metallic iron in the temperature ranges 900 degrees C to 1200 degrees C (1173 to 1473 K) and 1000 degrees C to 1350 degrees C (1273 to 1623 K), respectively. The liquidus surface in the system Fe-Zn-Si-O in equilibrium with metallic iron was characterized in the composition ranges 0 to 33 wt pct ZnO and 0 to 40 wt pct SiO2. The wustite (Fe,Zn)O, zincite (Zn,Fe)O, willemite (Zn,Fe)2SiO4, and fayalite (Fe,Zn)2SiO4 solid solutions in equilibrium with metallic iron were measured.

The simultaneous design of the steady-state and dynamic performance of a process can satisfy much more demanding dynamic performance criteria than designing the dynamics only through the connection of a control system. A method for designing process dynamics based on the eigenvalues of a linearised system has been developed. The eigenvalues are associated with system states using the unit perturbation spectral resolution (UPSR), characterising the dynamics of each state. The design method uses a homotopy approach to determine a final design which satisfies both steady-state and dynamic performance criteria. A highly interacting single stage forced circulation evaporator system, including control loops, was designed by this method with the goal of reducing the time taken for the liquid composition to reach steady-state. Initially the system was successfully redesigned to speed up the eigenvalue associated with the liquid composition state, but this did not result in an improved startup performance. Further analysis showed that the integral action of the composition controller was the source of the limiting eigenvalue. Design changes made to speed up this eigenvalue did result in an improved startup performance. The proposed approach provides a structured way to address the design-control interface, giving significant insight into the dynamic behaviour of the system such that a systematic design or redesign of an existing system can be undertaken with confidence.
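The core diagnostic step (which eigenvalue limits which state?) can be sketched numerically. The snippet below uses a participation-factor-style measure built from right and left eigenvectors as a stand-in for the paper's unit perturbation spectral resolution, and the linearised system matrix A is an illustrative assumption:

```python
import numpy as np

# Illustrative linearised model dx/dt = A x with three states of
# well-separated time scales (assumed numbers, not the evaporator model).
A = np.array([[-0.2, 0.05, 0.0],
              [0.1, -1.0, 0.2],
              [0.0, 0.3, -5.0]])

eigvals, R = np.linalg.eig(A)   # columns of R are right eigenvectors
L = np.linalg.inv(R)            # rows of L are left eigenvectors

# participation[i, k]: contribution of eigenvalue k to state i
# (|R_ik * L_ki|, the usual participation-factor construction).
participation = np.abs(R * L.T)

slowest = np.argmax(eigvals.real)                 # least-negative eigenvalue
limiting_state = np.argmax(participation[:, slowest])
```

The limiting state identified this way is the one whose dynamics a redesign should target, mirroring the paper's diagnosis that the composition controller's integral action owned the limiting eigenvalue.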

The movement of chemicals through the soil to the groundwater, or their discharge to surface waters, represents a degradation of these resources. In many cases, serious human and stock health implications are associated with this form of pollution. The chemicals of interest include nutrients, pesticides, salts, and industrial wastes. Recent studies have shown that current models and methods do not adequately describe the leaching of nutrients through soil, often underestimating the risk of groundwater contamination by surface-applied chemicals, and overestimating the concentration of resident solutes. This inaccuracy results primarily from ignoring soil structure and nonequilibrium between soil constituents, water, and solutes. A multiple sample percolation system (MSPS), consisting of 25 individual collection wells, was constructed to study the effects of localized soil heterogeneities on the transport of nutrients (NO3-, Cl-, PO43-) in the vadose zone of a predominantly clay agricultural soil. Very significant variations in drainage patterns across a small spatial scale were observed (one-way ANOVA, p < 0.001), indicating considerable heterogeneity in water flow patterns and nutrient leaching. Using data collected from the multiple sample percolation experiments, this paper compares the performance of two mathematical models for predicting solute transport: the advective-dispersion model with a reaction term (ADR), and a two-region preferential flow model (TRM) suitable for modelling nonequilibrium transport. These results have implications for modelling solute transport and predicting nutrient loading on a larger scale. (C) 2001 Elsevier Science Ltd. All rights reserved.
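The first of the two models can be sketched in a few lines. This is a minimal explicit finite-difference solution of the 1-D advective-dispersion equation with a first-order reaction term, dC/dt = D·d²C/dx² − v·dC/dx − k·C; all parameter values are illustrative assumptions, not values fitted to the MSPS data:

```python
import numpy as np

# ADR parameters (illustrative): dispersion, pore-water velocity, reaction rate.
D, v, k = 1e-2, 0.1, 1e-3
dx, dt = 0.05, 0.01          # grid spacing and time step (stable: D*dt/dx^2 = 0.04)
n_x, n_t = 100, 2000

C = np.zeros(n_x)
C[0] = 1.0                    # constant-concentration inlet boundary
for _ in range(n_t):
    # central difference for dispersion, upwind difference for advection
    dCdt = (C[:-2] - 2 * C[1:-1] + C[2:]) * D / dx**2 \
         - (C[1:-1] - C[:-2]) * v / dx \
         - k * C[1:-1]
    C[1:-1] += dt * dCdt
    C[0], C[-1] = 1.0, C[-2]  # fixed inlet, zero-gradient outlet
```

A smooth breakthrough front like this is exactly what the MSPS data contradict: preferential flow paths deliver solute ahead of the front, which is why the two-region model is needed.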

Environmental processes have been modelled for decades. However, the need for integrated assessment and modelling (IAM) has grown as the extent and severity of environmental problems in the 21st Century worsens. The scale of IAM is not restricted to the global level as in climate change models, but includes local and regional models of environmental problems. This paper discusses various definitions of IAM and identifies five different types of integration that are needed for the effective solution of environmental problems. The future is then depicted in the form of two brief scenarios: one optimistic and one pessimistic. The current state of IAM is then briefly reviewed. The issues of complexity and validation in IAM are recognised as more complex than in traditional disciplinary approaches. Communication is identified as a central issue, both internally among team members and externally with decision-makers, stakeholders and other scientists. Finally, it is concluded that the process of integrated assessment and modelling is as important as the product for any particular project. By learning to work together and recognising the contribution of all team members and participants, it is believed that we will have a strong scientific and social basis to address the environmental problems of the 21st Century. (C) 2002 Elsevier Science Ltd. All rights reserved.

Field and laboratory observations have shown that a relatively low beach groundwater table enhances beach accretion. These observations have led to the beach dewatering technique (artificially lowering the beach water table) for combating beach erosion. Here we present a process-based numerical model that simulates the interacting processes of wave motion on the beach, coastal groundwater flow, swash sediment transport and beach profile changes. Results of model simulations demonstrate that the model replicates the accretionary effects of a low beach water table on beach profile changes and has the potential to become a tool for assessing the effectiveness of beach dewatering systems. (C) 2002 Elsevier Science Ltd. All rights reserved.

We present finite element simulations of the mixing of reactive mineral-carrying fluids and the resulting mineralization in pore-fluid saturated hydrothermal/sedimentary basins. In particular, we explore the mixing of reactive sulfide and sulfate fluids and the relevant patterns of mineralization for lead, zinc and iron minerals in the regime of temperature-gradient-driven convective flow. Since mineralization and ore body formation may last a long period of time in a hydrothermal basin, it is commonly assumed in geochemistry that the solutions of minerals are in, or near, an equilibrium state. Therefore, the mineralization rate of a particular mineral can be expressed as the product of the pore-fluid velocity and the equilibrium concentration of that mineral. Using this mineralization rate, the potential of the modern mineralization theory is illustrated by means of finite element studies of reactive mineral-carrying fluid mixing problems in materially homogeneous and inhomogeneous porous rock basins.
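The rate expression stated above can be sketched directly. The velocity field and the temperature-dependent equilibrium concentration below are illustrative assumptions (a simple Arrhenius-like form in which hotter fluid holds more dissolved mineral), not the paper's geochemical data:

```python
import numpy as np

# Temperature along an assumed flow path from a hot source to cooler rock.
n = 50
T_celsius = np.linspace(150.0, 50.0, n)
T_kelvin = T_celsius + 273.15

# Assumed pore-fluid (Darcy) velocity magnitude along the path, m/s.
u = np.full(n, 1e-8)

# Assumed Arrhenius-like equilibrium concentration: higher when hotter,
# so mineral drops out of solution as the fluid cools along the path.
C_eq = 1e-3 * np.exp(-2000.0 / T_kelvin)

# Mineralization rate as described in the abstract: the product of the
# pore-fluid velocity and the equilibrium concentration.
rate = u * C_eq
```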

This paper is concerned with the use of scientific visualization methods for the analysis of feedforward neural networks (NNs). Inevitably, the kinds of data associated with the design and implementation of neural networks are of very high dimensionality, presenting a major challenge for visualization. A method is described using the well-known statistical technique of principal component analysis (PCA). This is found to be an effective and useful method of visualizing the learning trajectories of many learning algorithms such as back-propagation and can also be used to provide insight into the learning process and the nature of the error surface.
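The PCA-based visualization can be sketched with nothing but an SVD: snapshots of the network's weight vector recorded during training are centered and projected onto their first two principal components, giving a 2-D learning trajectory to plot. The "trajectory" below is synthetic (a drift plus noise standing in for back-propagation snapshots), not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
steps, n_weights = 200, 50

# Synthetic learning trajectory: steady drift along one direction in weight
# space plus small noise, standing in for weight snapshots during training.
direction = rng.normal(size=n_weights)
snapshots = np.outer(np.linspace(0.0, 1.0, steps), direction) \
          + 0.01 * rng.normal(size=(steps, n_weights))

# PCA via SVD of the mean-centered snapshot matrix.
centered = snapshots - snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

projection = centered @ Vt[:2].T          # 2-D coordinates for plotting
explained = s[:2] ** 2 / (s ** 2).sum()   # variance captured by each component
```

When, as here, most of the movement lies along a few directions, two components capture almost all the variance, which is why such low-dimensional views of learning trajectories are informative.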