865 results for Filmic approach methods


Relevance: 30.00%

Abstract:

The sinusoidal structured light projection (SSLP) technique, specifically the phase-stepping method, is in widespread use to obtain accurate, dense 3-D data. However, if the object under investigation possesses surface discontinuities, the phase unwrapping stage (an intermediate step in SSLP) necessarily requires several additional images of the object with projected fringes of different spatial frequencies as input to generate a reliable 3-D shape. The color-coded structured light projection (CSLP) technique, on the other hand, is known to require only a single image as input, but generates sparse 3-D data. We therefore propose the use of CSLP in conjunction with SSLP to obtain dense 3-D data from a minimum number of input images. This approach is shown to be significantly faster and more reliable than the temporal phase unwrapping procedure that uses a complete exponential sequence. For example, for a measurement with the accuracy obtained by interrogating the object with 32 fringes in the projected pattern, the proposed strategy requires only 5 frames, compared to the 24 frames required by the latter method.
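
The abstract does not specify which phase-stepping variant is used; as a minimal sketch of the SSLP building block, here is the common four-step phase-shifting formula for recovering the wrapped phase from four fringe images (synthetic data; the helper name is ours):

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Standard 4-step phase shifting: fringes shifted by 0, pi/2, pi, 3*pi/2.
    Returns the phase wrapped to (-pi, pi]; unwrapping comes next."""
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic demo: a linear phase ramp imaged under four shifted fringe patterns.
phi_true = np.linspace(0, 4 * np.pi, 256)
frames = [1 + np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_wrapped = wrapped_phase(*frames)
```

The wrapped phase still needs unwrapping, which is exactly the step the proposed CSLP/SSLP combination accelerates for discontinuous surfaces.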

Relevance: 30.00%

Abstract:

A complete vibrational analysis was performed on the molecular structure of boldine hydrochloride using a QM/MM method. The equilibrium geometry, harmonic vibrational frequencies and infrared intensities were calculated with the QM/MM method using the ONIOM code with a B3LYP/6-31G(d) and universal force field (UFF) combination. We found the geometry obtained by the QM/MM method to be very accurate, so this rapid method can be used in place of time-consuming ab initio methods for large molecules. A detailed interpretation of the infrared spectra of boldine hydrochloride is reported. The scaled theoretical wavenumbers are in excellent agreement with the experimental values. The FT-IR spectra of boldine hydrochloride in the region 4000-500 cm(-1) were recorded in CsI (solid phase) and in chloroform at concentrations of 5 and 10 mg/ml.
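
As a worked illustration of the harmonic-frequency step (not the authors' ONIOM workflow), the sketch below computes a harmonic wavenumber from a force constant and reduced mass and applies the kind of empirical scaling typically used with B3LYP frequencies; the 0.96 factor is an assumed, representative value:

```python
import numpy as np

C = 2.99792458e10          # speed of light, cm/s
AMU = 1.66053906660e-27    # atomic mass unit, kg

def harmonic_wavenumber(k, mu_amu):
    """Harmonic wavenumber (cm^-1) of a diatomic oscillator:
    nu_tilde = sqrt(k / mu) / (2*pi*c), with k in N/m and mu in amu."""
    mu = mu_amu * AMU
    return np.sqrt(k / mu) / (2 * np.pi * C)

# Example: CO stretch, k ~ 1857 N/m, mu = 12*16/28 amu -> about 2140 cm^-1.
nu = harmonic_wavenumber(1857.0, 12.0 * 16.0 / 28.0)
scaled = 0.96 * nu  # assumed empirical scaling factor, typical for B3LYP
```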

Relevance: 30.00%

Abstract:

Metabolomics is a rapidly growing research field that studies the response of biological systems to environmental factors, disease states and genetic modifications. It aims at measuring the complete set of endogenous metabolites, i.e. the metabolome, in a biological sample such as plasma or cells. Because metabolites are the intermediates and end products of biochemical reactions, metabolite compositions and metabolite levels in biological samples can provide a wealth of information on ongoing processes in a living system. Due to the complexity of the metabolome, metabolomic analysis poses a challenge to analytical chemistry. Adequate sample preparation is critical to accurate and reproducible analysis, and the analytical techniques must have high resolution and sensitivity to allow detection of as many metabolites as possible. Furthermore, as the information contained in the metabolome is immense, the data sets collected in metabolomic studies are very large, and efficient data processing and multivariate data analysis methods are needed to extract the relevant information. In the research presented in this thesis, metabolomics was used to study mechanisms of polymeric gene delivery to retinal pigment epithelial (RPE) cells. The aim of the study was to detect differences in metabolomic fingerprints between transfected cells and non-transfected controls, and thereafter to identify the metabolites responsible for the discrimination. The plasmid pCMV-β was introduced into RPE cells using the vector polyethyleneimine (PEI). The samples were analyzed using high performance liquid chromatography (HPLC) and ultra performance liquid chromatography (UPLC) coupled to a triple quadrupole (QqQ) mass spectrometer (MS). The software MZmine was used for raw data processing, and principal component analysis (PCA) was used for statistical data analysis. The results revealed differences in metabolomic fingerprints between transfected cells and non-transfected controls. However, reliable fingerprinting data could not be obtained because of low analysis repeatability, so no attempt was made to identify the metabolites responsible for discrimination between sample groups. Repeatability and accuracy of analyses can be improved by protocol optimization, but in this study optimization of the analytical methods was hindered by the very small number of samples available. In conclusion, this study demonstrates that obtaining reliable fingerprinting data is technically demanding, and protocols need to be thoroughly optimized before the goal of gaining information on mechanisms of gene delivery can be approached.
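
A minimal sketch of the fingerprinting statistics described here (PCA on an aligned peak-intensity table), with synthetic data standing in for the MZmine output; sample sizes and preprocessing choices are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows are samples (transfected vs. control), columns are aligned LC-MS
# peak intensities; here a synthetic table replaces real data.
rng = np.random.default_rng(0)
X = rng.lognormal(mean=2.0, sigma=0.5, size=(12, 200))
labels = np.array([0] * 6 + [1] * 6)   # 0 = control, 1 = transfected

Xs = StandardScaler().fit_transform(np.log(X))  # log-transform, then autoscale
scores = PCA(n_components=2).fit_transform(Xs)  # PC1/PC2 scores per sample
# Separation of the two label groups in the score plot is the "fingerprint"
# difference the thesis looks for.
```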

Relevance: 30.00%

Abstract:

In the context of health care, information technology (IT) has an important role in the operational infrastructure, ranging from business management to patient care. An essential part of the system is medication management in inpatient and outpatient care. Community pharmacists' strategy has been to extend practice responsibilities beyond dispensing towards patient care services. Few studies have evaluated the strategic development of IT systems to support this vision. The objectives of this study were to assess and compare independent Finnish community pharmacy owners' and staff pharmacists' priorities concerning the content and structure of the next generation of community pharmacy IT systems; to explore international experts' visions and strategic views on IT development needs in relation to services provided in community pharmacies; to identify IT innovations facilitating patient care services and to evaluate their development and implementation processes; and to assess community pharmacists' readiness to adopt innovations. This study applied both qualitative and quantitative methods: qualitative personal interviews of 14 experts in community pharmacy services and related IT from eight countries, a national survey of Finnish community pharmacy owners (mail survey, response rate 53%, n=308), and a survey of a representative sample of staff pharmacists (online survey, response rate 22%, n=373). Finnish independent community pharmacy owners gave priority to logistical functions but also to those related to medication information and patient care. Managers and staff pharmacists held different views of the importance of IT features, reflecting their different professional duties in the community pharmacy; this indicates the need to involve different occupational groups in planning new IT systems for community pharmacies. A majority of the international experts shared the vision of community pharmacy adopting a patient care orientation, supported by IT-based documentation, new technological solutions, access to information, and shared patient data. Community pharmacy IT innovations were rare, which is paradoxical given that owners and staff pharmacists perceived their own innovativeness as high. Community pharmacy IT system development processes usually had not undergone systematic needs assessment research beforehand or evaluation after implementation, and were most often coordinated by national governments without subsequent commercialization. In particular, community pharmacy IT development lacks research, organization, leadership and user involvement. Those responsible for IT development in the community pharmacy sector should create long-term IT development strategies that are in line with community pharmacy service development strategies. This could provide systematic guidance for future projects, ensuring that potential innovations are based on a sufficient understanding of the pharmacy practice problems they are intended to solve, encouraging strong leadership in research and in the development of innovations so that community pharmacists' potential innovativeness is used, and ensuring that professional needs and strategic priorities are considered even when the development process is led by those outside the profession.

Relevance: 30.00%

Abstract:

This thesis studies the effect of income inequality on economic growth by analyzing panel data from several countries with both short and long time dimensions. Two of the chapters study the direct effect of inequality on growth, and one chapter also looks at a possible indirect effect by assessing the effect of inequality on savings. In Chapter 2, the effect of inequality on growth is studied using a panel of 70 countries and the new EHII2008 inequality measure. The chapter addresses two problems that panel econometric studies on the economic effects of inequality have recently encountered: the comparability problem associated with the commonly used Deininger and Squire's Gini index, and the problem of estimating group-related elasticities in panel data. A simple way to 'bypass' the vagueness related to the use of parametric methods to estimate group-related parameters is presented: the group-related elasticities are estimated implicitly using a set of group-related instrumental variables. The estimation results with the new data and method indicate that the relationship between income inequality and growth is likely to be non-linear. Chapter 3 uses the EHII2.1 inequality measure and a panel of annual time series observations from 38 countries to test the existence of long-run equilibrium relations between inequality and the level of GDP. Panel unit root tests indicate that both the logarithmic EHII2.1 inequality series and the logarithmic GDP per capita series are I(1) nonstationary processes. They are also found to be cointegrated of order one, which implies a long-run equilibrium relation between them. The long-run growth elasticity of inequality is found to be negative in middle-income and rich economies, but the results for poor economies are inconclusive. In Chapter 4, macroeconomic data on nine developed economies spanning four decades from 1960 are used to study the effect of changes in the top income share on national and private savings, with the income share of the top 1% of the population used as a proxy for the distribution of income. The effect of inequality on private savings is found to be positive in the Nordic and Central European countries, but for the Anglo-Saxon countries the direction of the effect (positive vs. negative) remains somewhat ambiguous. Inequality is found to affect national savings only in the Nordic countries, where the effect is positive.
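
As a sketch of the kind of test used in Chapter 3 (the thesis uses panel unit root and cointegration tests; the single-series Engle-Granger analogue is shown here with synthetic data):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

# Synthetic stand-ins for one country's annual log-inequality and log-GDP
# series sharing a stochastic trend, so they are I(1) and cointegrated.
rng = np.random.default_rng(1)
trend = np.cumsum(rng.normal(size=200))
log_gdp = trend + rng.normal(scale=0.1, size=200)
log_ineq = 0.5 * trend + rng.normal(scale=0.1, size=200)

adf_p = adfuller(log_gdp)[1]      # high p-value: cannot reject a unit root, I(1)
t_stat, p_value, _ = coint(log_ineq, log_gdp)  # low p-value: cointegrated
```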

Relevance: 30.00%

Abstract:

Genetic Algorithms are robust search and optimization techniques. A Genetic Algorithm based approach for determining the optimal input distributions for generating random test vectors is proposed in the paper. A cost function based on the COP testability measure for determining the efficacy of the input distributions is discussed. A brief overview of Genetic Algorithms (GAs) and the specific details of our implementation are described. Experimental results based on ISCAS-85 benchmark circuits are presented. The performance of our GA-based approach is compared with previous results. While the GA generates more efficient input distributions than the previous methods, which are based on gradient descent search, the overheads of the GA in computing the input distributions are larger. To account for the relatively quick convergence of the gradient descent methods, we analyze the landscape of the COP-based cost function. We prove that the cost function is unimodal in the search space. This feature makes the cost function amenable to optimization by gradient-descent techniques as compared to random search methods such as Genetic Algorithms.
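
A minimal sketch of the GA loop described above, evolving input signal probabilities; the `cost` function is a stand-in, since the real COP-based cost propagates signal probabilities through the circuit under test:

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(p):
    # Placeholder for the COP-based testability cost (assumed here just to
    # make the sketch runnable); lower is better.
    target = np.array([0.2, 0.8, 0.5, 0.9])
    return np.sum((p - target) ** 2)

def ga(n_inputs=4, pop_size=30, gens=100, mut=0.05):
    pop = rng.random((pop_size, n_inputs))      # candidate input distributions
    for _ in range(gens):
        fit = np.array([cost(p) for p in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]   # truncation selection
        a = parents[rng.integers(len(parents), size=pop_size)]
        b = parents[rng.integers(len(parents), size=pop_size)]
        cut = rng.integers(1, n_inputs, size=pop_size)    # one-point crossover
        pop = np.where(np.arange(n_inputs) < cut[:, None], a, b)
        pop = np.clip(pop + rng.normal(scale=mut, size=pop.shape), 0.0, 1.0)
    return pop[np.argmin([cost(p) for p in pop])]

best_distribution = ga()
```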

Relevance: 30.00%

Abstract:

In the past few years there have been attempts to develop subspace methods for DoA (direction of arrival) estimation using a fourth-order cumulant, which is known to de-emphasize Gaussian background noise. To gauge the relative performance of cumulant MUSIC (MUltiple SIgnal Classification) (c-MUSIC) and the standard MUSIC based on the covariance function, an extensive numerical study has been carried out in which a narrow-band signal source is considered and Gaussian noise sources, which produce a spatially correlated background noise, are distributed. These simulations indicate that, even though the cumulant approach is capable of de-emphasizing the Gaussian noise, both the bias and the variance of the DoA estimates are higher than those for MUSIC. To achieve comparable results, the cumulant approach requires much more data, three to ten times that for MUSIC, depending upon the number of sources and how close they are. This is attributed to the fact that estimating the cumulant requires averaging a product of four random variables. Therefore, compared with the evaluation of the covariance function, there are more cross terms, which do not go to zero unless the data length is very large. It is felt that these cross terms contribute to the large bias and variance observed in c-MUSIC. However, the ability to de-emphasize Gaussian noise, white or colored, is of great significance, since standard MUSIC fails when there is colored background noise. Through simulation it is shown that c-MUSIC does yield good results, but only at the cost of more data.
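
For reference, a compact sketch of standard covariance-based MUSIC for a uniform linear array; in c-MUSIC only the matrix fed to the eigendecomposition changes (a fourth-order cumulant matrix replaces the sample covariance):

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum for a uniform linear array (generic sketch).
    X: (n_sensors, n_snapshots) complex snapshots; d: spacing in wavelengths."""
    n_sensors, n_snap = X.shape
    R = X @ X.conj().T / n_snap                      # sample covariance
    _, V = np.linalg.eigh(R)                         # eigenvalues ascending
    En = V[:, : n_sensors - n_sources]               # noise subspace
    theta = np.deg2rad(np.asarray(angles_deg))
    k = np.arange(n_sensors)[:, None]
    A = np.exp(-2j * np.pi * d * k * np.sin(theta))  # steering vectors
    return 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
```

Peaks of the returned pseudospectrum over `angles_deg` give the DoA estimates.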

Relevance: 30.00%

Abstract:

This paper deals with the development of a new model for the cooling process on the runout table of hot strip mills. The suitability of different numerical methods for solving the proposed model equation, from the point of view of accuracy and computation time, is studied, and parallel solutions for the model equation are proposed.
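
The model equation itself is not given in the abstract; as a stand-in, here is a minimal explicit finite-difference solver for one-dimensional transient conduction through the strip thickness, the kind of equation such cooling models reduce to (all values illustrative):

```python
import numpy as np

alpha, L, nx, dt = 5e-6, 0.004, 41, 1e-4   # diffusivity m^2/s, strip thickness m
dx = L / (nx - 1)
assert alpha * dt / dx**2 <= 0.5            # FTCS explicit stability limit

T = np.full(nx, 900.0)                      # initial strip temperature, deg C
for _ in range(2000):
    T[0] = T[-1] = 30.0                     # crude water-cooled surface condition
    # forward-time, centered-space update of dT/dt = alpha * d2T/dx2
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
```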

Relevance: 30.00%

Abstract:

We discuss three methods to correct spherical aberration for a point-to-point imaging system. First, results obtained using Fermat's principle and the ray tracing method are described briefly. Next, we obtain solutions using Lie algebraic techniques. Even though one cannot always obtain analytical results using this method, it is often more powerful than the first method. The result obtained with this approach is compared and found to agree with the exact result of the first method.
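
A toy numerical illustration of the aberration being corrected (not the Lie algebraic method): tracing meridional rays through a single refracting spherical surface shows marginal rays crossing the axis short of the paraxial focus n2*R/(n2 - n1):

```python
import numpy as np

def axis_crossing(h, R=1.0, n1=1.0, n2=1.5):
    """Trace a ray parallel to the axis at height h through one refracting
    spherical surface (vertex at z=0, center at z=R); return where it
    crosses the axis. Coordinates are (y, z)."""
    z = R - np.sqrt(R**2 - h**2)             # hit point on the surface
    n_hat = np.array([h, z - R]) / R         # unit normal, opposing the ray
    d = np.array([0.0, 1.0])                 # incident direction
    cos_i, r = -n_hat @ d, n1 / n2
    cos_t = np.sqrt(1 - r**2 * (1 - cos_i**2))
    t = r * d + (r * cos_i - cos_t) * n_hat  # Snell's law in vector form
    return z - h * t[1] / t[0]               # march to y = 0

paraxial = axis_crossing(1e-6)   # -> 3.0 = n2*R/(n2 - n1)
marginal = axis_crossing(0.4)    # -> about 2.89: spherical aberration
```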

Relevance: 30.00%

Abstract:

Perfect or even mediocre weather predictions over a long period are almost impossible because a small initial error ultimately grows into a significant one. Even though sensitivity to initial conditions limits predictability in chaotic systems, an ensemble of predictions from different possible initial conditions, together with a prediction algorithm capable of resolving the fine structure of the chaotic attractor, can reduce the prediction uncertainty to some extent. Traditional chaotic prediction methods in hydrology are based on single-optimum-initial-condition local models, which can model the sudden divergence of trajectories with different local functions; conceptually, global models are ineffective in modeling the highly unstable structure of the chaotic attractor. This paper focuses on an ensemble prediction approach that reconstructs the phase space using different combinations of the chaotic parameters, i.e., embedding dimension and delay time, to quantify the uncertainty in initial conditions. The ensemble approach is implemented through a local-learning wavelet network model with a global feed-forward neural network structure for phase space prediction of chaotic streamflow series. Uncertainties in future predictions are quantified by creating an ensemble of predictions with the wavelet network using a range of plausible embedding dimensions and delay times. The ensemble approach proves to be 50% more efficient than single prediction for both the local approximation and wavelet network approaches, and the wavelet network approach proves 30%-50% superior to the local approximation approach. Compared to the traditional local approximation approach with a single initial condition, the total predictive uncertainty in streamflow is reduced when modeled with ensemble wavelet networks for different lead times. The localization property of wavelets, utilizing different dilation and translation parameters, helps capture most of the statistical properties of the observed data. The need to take into account all plausible initial conditions, and to bring together the characteristics of both local and global approaches to model the unstable yet ordered chaotic attractor of a hydrologic series, is clearly demonstrated.
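
A minimal sketch of the ensemble idea: reconstruct the phase space for several plausible (embedding dimension, delay time) pairs and pool one-step forecasts; a nearest-neighbour predictor stands in for the paper's wavelet network:

```python
import numpy as np

def embed(x, m, tau):
    """Time-delay reconstruction: row t is (x[t], x[t-tau], ..., x[t-(m-1)*tau])."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[(m - 1 - j) * tau : (m - 1 - j) * tau + n]
                            for j in range(m)])

def nn_forecast(x, m, tau):
    """One-step prediction from the nearest phase-space neighbour (a crude
    stand-in for the local-learning wavelet network)."""
    X = embed(x, m, tau)
    query, history = X[-1], X[:-1]
    targets = x[(m - 1) * tau + 1 :]          # successor of each history row
    i = np.argmin(np.linalg.norm(history - query, axis=1))
    return targets[i]

# Ensemble over plausible (m, tau) pairs quantifies prediction uncertainty.
rng = np.random.default_rng(3)
x = np.sin(np.linspace(0, 60, 600)) + 0.05 * rng.normal(size=600)
ensemble = [nn_forecast(x, m, tau) for m in (2, 3, 4) for tau in (1, 2, 3)]
mean, spread = float(np.mean(ensemble)), float(np.std(ensemble))
```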

Relevance: 30.00%

Abstract:

Instruction scheduling with an automaton-based resource conflict model is well-established for normal scheduling. Such models have been generalized to software pipelining in the modulo-scheduling framework. One weakness of existing methods is that a distinct automaton must be constructed for each combination of a reservation table and initiation interval. In this work, we present a different approach to modeling conflicts. We construct one automaton for each reservation table which acts as a compact encoding of all the conflict automata for this table, and from which they can be recovered for use in modulo-scheduling. The basic premise of the construction is to move away from the Proebsting-Fraser model of conflict automaton to the Muller model of automaton, which models issue sequences. The latter turns out to be useful and efficient in this situation. Having constructed this automaton, we show how to improve the estimate of the resource-constrained initiation interval. Such a bound is always better than the average-use estimate, and we show that our bound is safe: it is always lower than the true initiation interval. This use of the automaton is orthogonal to its use in modulo-scheduling. Once we generate the required information during pre-processing, we can compute the lower bound for a program without any further reference to the automaton.
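
For context, the average-use estimate that the automaton-derived bound is shown to dominate can be computed directly from the reservation tables; a minimal sketch (the data layout is an assumption):

```python
import math

def res_mii(reservation_tables, num_units):
    """Classical average-use lower bound on the initiation interval: for
    each resource, total uses across all operations divided by the number
    of units, rounded up. The paper's automaton-based bound is tighter;
    its construction is not sketched here."""
    uses = {r: 0 for r in num_units}
    for table in reservation_tables:     # one table per operation in the loop
        for cycle_resources in table:    # resources the op uses at each cycle
            for r in cycle_resources:
                uses[r] += 1
    return max(math.ceil(uses[r] / num_units[r]) for r in num_units)

# Two ops: a load holding MEM for 2 cycles, an add using ALU for 1 cycle.
mii = res_mii([[["MEM"], ["MEM"]], [["ALU"]]], {"MEM": 1, "ALU": 2})  # -> 2
```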

Relevance: 30.00%

Abstract:

Satisfiability algorithms for propositional logic have improved enormously in recent years. This improvement increases the attractiveness of satisfiability methods for first-order logic that reduce the problem to a series of ground-level satisfiability problems. R. Jeroslow introduced a partial instantiation method of this kind that differs radically from the standard resolution-based methods. This paper lays the theoretical groundwork for an extension of his method that is general enough and efficient enough for general logic programming with indefinite clauses. In particular, we improve Jeroslow's approach by (1) extending it to logic with functions, (2) accelerating it through the use of satisfiers, as introduced by Gallo and Rago, and (3) simplifying it to obtain further speedup. We provide a similar development for a "dual" partial instantiation approach defined by Hooker and suggest a primal-dual strategy. We prove correctness of the primal and dual algorithms for full first-order logic with functions, as well as termination on unsatisfiable formulas. We also report some preliminary computational results.
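
A toy illustration of the underlying reduction, not Jeroslow's procedure itself: ground a quantified clause set over a finite constant set and test the resulting propositional problem (brute force here; a SAT solver would be used in practice):

```python
from itertools import product

constants = ["a", "b"]
# Clauses over variable X: {P(X) v Q(X)} and {~P(X)}; each literal is a
# (sign, predicate, argument) triple.
clauses = [[(True, "P", "X"), (True, "Q", "X")], [(False, "P", "X")]]

# Ground each clause once per constant substituted for X.
ground = [[(s, p, c) for (s, p, _) in clause]
          for clause in clauses for c in constants]
atoms = sorted({(p, c) for cl in ground for (_, p, c) in cl})

def satisfiable(ground, atoms):
    for bits in product([False, True], repeat=len(atoms)):
        val = dict(zip(atoms, bits))
        if all(any(val[(p, c)] == s for (s, p, c) in cl) for cl in ground):
            return True
    return False

print(satisfiable(ground, atoms))  # True: Q(a), Q(b) hold with P false everywhere
```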

Relevance: 30.00%

Abstract:

We develop a model of the solar dynamo in which, on the one hand, we follow the Babcock-Leighton approach to include surface processes, such as the production of poloidal field from the decay of active regions, and, on the other hand, we attempt to develop a mean field theory that can be studied in quantitative detail. One of the main challenges in developing such models is to treat the buoyant rise of the toroidal field and the production of poloidal field from it near the surface. A previous paper by Choudhuri, Schüssler, & Dikpati in 1995 did not incorporate buoyancy. We extend this model by two contrasting methods. In one method, we incorporate the generation of the poloidal field near the solar surface by Durney's procedure of double-ring eruption. In the second method, the poloidal field generation is treated by a positive α-effect concentrated near the solar surface coupled with an algorithm for handling buoyancy. The two methods are found to give qualitatively similar results.
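
A typical form of the poloidal-field equation in such axisymmetric mean-field models, with the α-effect of the second method concentrated near the surface through an error-function profile (the profile and the parameters α₀, r_c, d are illustrative assumptions, not necessarily the paper's):

```latex
\frac{\partial A}{\partial t} + \frac{1}{s}\,(\mathbf{v}_p \cdot \nabla)(sA)
  = \eta\!\left(\nabla^2 - \frac{1}{s^2}\right)\! A + \alpha B,
\qquad s = r \sin\theta,
\qquad \alpha(r) = \frac{\alpha_0}{2}
  \left[1 + \operatorname{erf}\!\left(\frac{r - r_c}{d}\right)\right],
```

where A is the poloidal potential, B the toroidal field, v_p the meridional flow, and r_c is placed near the solar surface so that poloidal field is generated only there.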

Relevance: 30.00%

Abstract:

Delineation of homogeneous precipitation regions (regionalization) is necessary for investigating the frequency and spatial distribution of meteorological droughts. Conventional regionalization methods use precipitation statistics as attributes to establish homogeneous regions; they therefore cannot be used to form regions in ungauged areas, and they may not yield meaningful regions in areas with sparse rain gauge density. Further, validating the regions for homogeneity in precipitation is not possible, since using precipitation statistics both to form the regions and subsequently to test regional homogeneity is not appropriate. To alleviate this problem, an approach based on fuzzy cluster analysis is presented. It allows delineation of homogeneous precipitation regions in data-sparse areas using, as attributes, large-scale atmospheric variables (LSAV) that influence precipitation in the study area. The LSAV, location parameters (latitude, longitude and altitude) and seasonality of precipitation are suggested as features for regionalization. The approach allows independent validation of the identified regions for homogeneity using statistics computed from observed precipitation, and it can form regions even in ungauged areas, owing to the use of attributes that can be reliably estimated even when no at-site precipitation data are available. The approach was applied to delineate homogeneous annual rainfall regions in India, and its effectiveness is illustrated by comparing the results with those obtained using rainfall statistics, regionalization based on hard cluster analysis, and the meteorological subdivisions of India.
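
A plain fuzzy c-means sketch of the clustering step (the paper's attributes are the LSAV, location parameters and seasonality; random features stand in for them here):

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means. X: (n_sites, n_features).
    Returns cluster centers and the (c, n_sites) membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                       # memberships sum to 1 per site
    for _ in range(iters):
        W = U ** m
        centers = W @ X / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None] - centers[:, None], axis=2) + 1e-12
        U = d ** (-2 / (m - 1))              # standard membership update
        U /= U.sum(axis=0)
    return centers, U

X = np.random.default_rng(1).random((50, 5))  # synthetic site attributes
centers, U = fuzzy_cmeans(X, c=3)             # soft assignment of sites to regions
```

Soft memberships are what allow sites near region boundaries to be handled gracefully, unlike hard cluster analysis.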

Relevance: 30.00%

Abstract:

Numerical modeling of several turbulent nonreacting and reacting spray jets is carried out using a fully stochastic separated flow (FSSF) approach. As is widely done, the carrier phase is treated in an Eulerian framework, while the dispersed phase is tracked in a Lagrangian framework following the stochastic separated flow (SSF) model. Interactions between the two phases are taken into account by means of two-way coupling. Spray evaporation is described using a thermal model with infinite conductivity in the liquid phase. The gas-phase turbulence terms are closed using the k-epsilon model. A novel mixture-fraction-based approach is used to stochastically model the fluctuating temperature and composition in the gas phase, and these are then used to refine the estimates of the heat and mass transfer rates between the droplets and the surrounding gas phase. In classical SSF (CSSF) methods, stochastic fluctuations of only the gas-phase velocity are modeled. Successful implementation of the FSSF approach for turbulent nonreacting and reacting spray jets is demonstrated. Results are compared against experimental measurements as well as with predictions using the CSSF approach for both nonreacting and reacting spray jets. The FSSF approach differs little from the CSSF predictions for nonreacting spray jets, but the differences are significant for reacting spray jets. In general, the FSSF approach gives good predictions of the flame length and structure, but further modeling improvements may be needed to increase the accuracy of some details of the predictions.
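
A minimal sketch of the classical SSF/eddy-interaction idea that underlies both CSSF and FSSF: the droplet sees a gas velocity whose fluctuating part is resampled from the k-epsilon turbulence statistics at each eddy interval (the FSSF extension to stochastic temperature and mixture fraction is not shown; all values are toy):

```python
import numpy as np

rng = np.random.default_rng(4)
k, U_mean = 1.5, 10.0                 # turbulence kinetic energy, mean gas velocity
tau_p, dt = 2e-3, 1e-4                # droplet response time, time step

u_d, x = 0.0, 0.0                     # droplet velocity and position
sigma = np.sqrt(2.0 * k / 3.0)        # rms of one fluctuating velocity component
for step in range(1000):
    if step % 20 == 0:                # resample the eddy the droplet is in
        u_gas = U_mean + rng.normal(scale=sigma)
    u_d += (u_gas - u_d) / tau_p * dt # linear (Stokes-type) drag relaxation
    x += u_d * dt
```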