892 results for Shadow and Highlight Invariant Algorithm.


Abstract:

We propose a Bayesian framework for regression problems, covering areas usually addressed by function approximation. An online learning algorithm is derived that solves regression problems with a Kalman filter. Its solution always improves with increasing model complexity, without the risk of over-fitting. In the infinite-dimension limit it approaches the true Bayesian posterior. The issues of prior selection and over-fitting are also discussed, showing that some commonly held beliefs are misleading. The practical implementation is summarised. Simulations using 13 popular publicly available data sets demonstrate the method and highlight important issues concerning the choice of priors.
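The Kalman-filter solution of an online regression problem can be sketched as a recursive Bayesian linear-regression update: each observation updates a Gaussian posterior over the weights. This is a minimal illustration under an assumed Gaussian prior and known noise variance, not the paper's exact algorithm; all names and parameter values here are hypothetical.

```python
import numpy as np

def kalman_regression(X, y, prior_var=10.0, noise_var=0.1):
    """Online Bayesian linear regression: each (x, y) pair updates the
    Gaussian posterior over the weights with a scalar Kalman step."""
    d = X.shape[1]
    m = np.zeros(d)              # posterior mean
    P = prior_var * np.eye(d)    # posterior covariance
    for x, t in zip(X, y):
        s = x @ (P @ x) + noise_var        # innovation variance
        k = (P @ x) / s                    # Kalman gain
        m = m + k * (t - x @ m)            # mean update
        P = P - np.outer(k, x @ P)         # covariance update
    return m, P

# Usage: recover the weights of a noisy linear model from streaming data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=200)
m, P = kalman_regression(X, y)
```

Because the posterior is updated one sample at a time, model complexity (here, the prior variance and dimension) can grow without refitting from scratch.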

Abstract:

Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurements of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information about different kinds of geometric objects, such as high-resolution digital terrain models (DTMs), buildings, and trees. In the past decade, LIDAR has attracted growing interest from researchers in the fields of remote sensing and GIS. Compared to traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, the voluminous data pose a new challenge for automated extraction of geometric information, because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points. In this dissertation, a framework is proposed to automatically extract different kinds of geometric objects, such as terrain and buildings, from LIDAR measurements. These products are essential to numerous applications such as flood modeling, landslide prediction, and hurricane animation. The framework consists of several intuitive algorithms. First, a progressive morphological filter was developed to detect non-ground LIDAR measurements. By gradually increasing the window size and elevation-difference threshold of the filter, the measurements of vehicles, vegetation, and buildings are removed, while ground data are preserved. Then, building measurements are identified from the non-ground measurements using a region-growing algorithm based on a plane-fitting technique.
Raw footprints for the segmented building measurements are derived by connecting boundary points and are further simplified and adjusted by several proposed operations to remove the noise caused by irregularly spaced LIDAR measurements. To reconstruct 3D building models, the raw 2D topology of each building is first extracted and then adjusted. Since the adjusting operations for simple building models do not work well on 2D topology, a 2D snake algorithm is proposed for this task. The 2D snake algorithm consists of newly defined energy functions for topology adjustment and a linear algorithm for finding the minimal energy value of 2D snake problems. Data sets from urbanized areas including large institutional, commercial, and small residential buildings were employed to test the proposed framework. The results demonstrate that the framework achieves very good performance.
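The progressive morphological filter can be sketched on a gridded elevation raster: at each scale, a grey-scale opening estimates the ground surface, and cells rising more than the elevation-difference threshold above it are flagged as non-ground. This is a simplified illustration of the idea assuming gridded data rather than raw irregular points; the window and threshold sequences below are hypothetical.

```python
import numpy as np

def grey_opening(z, w):
    """Grey-scale opening (erosion then dilation) with a (2w+1) square window."""
    n = 2 * w + 1
    pad = np.pad(z, w, mode='edge')
    eroded = np.empty_like(z)
    for i in range(z.shape[0]):
        for j in range(z.shape[1]):
            eroded[i, j] = pad[i:i + n, j:j + n].min()
    pad = np.pad(eroded, w, mode='edge')
    opened = np.empty_like(z)
    for i in range(z.shape[0]):
        for j in range(z.shape[1]):
            opened[i, j] = pad[i:i + n, j:j + n].max()
    return opened

def progressive_morphological_filter(z, windows=(1, 2, 4), thresholds=(0.5, 1.0, 2.0)):
    """Flag non-ground cells: at each scale, cells rising more than the
    elevation-difference threshold above the opened surface are removed."""
    surface = z.copy()
    nonground = np.zeros(z.shape, dtype=bool)
    for w, t in zip(windows, thresholds):
        opened = grey_opening(surface, w)
        nonground |= (surface - opened) > t
        surface = np.where(nonground, opened, surface)  # replace removed cells
    return nonground

# Usage: a 3x3-cell, 5 m high "building" on flat terrain is flagged once the
# window grows larger than the building.
z = np.zeros((12, 12))
z[4:7, 4:7] = 5.0
mask = progressive_morphological_filter(z)
```

A small window preserves small terrain features; the larger windows remove objects wider than any plausible ground slope allows, which is why the threshold grows with the window.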

Abstract:

Supply chain operations directly affect service levels. Decisions on facility configuration are generally based on overall cost, leaving out the efficiency of each unit. By decomposing the supply chain superstructure, efficiency analysis of the facilities (warehouses or distribution centers) that serve customers can be easily implemented. With the proposed algorithm, facility selection is based on service-level maximization and not just cost minimization, as the analysis filters all feasible solutions using the Data Envelopment Analysis (DEA) technique. Through multiple iterations, solutions are filtered via DEA and only the efficient ones are selected, leading to cost minimization. In this work, the problem of optimal supply chain network design is addressed with a DEA-based algorithm. A Branch and Efficiency (B&E) algorithm is deployed for its solution. In this DEA approach, each solution (a potentially installed warehouse, plant, etc.) is treated as a Decision Making Unit and is thus characterized by inputs and outputs. Through additional constraints named "efficiency cuts", the algorithm selects only efficient solutions, providing better objective function values. The applicability of the proposed algorithm is demonstrated through illustrative examples.
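Treating each candidate facility as a Decision Making Unit can be illustrated with a standard input-oriented CCR efficiency score computed by linear programming. This is a generic DEA sketch, not the paper's B&E algorithm; the warehouse data below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.
    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Solves: min theta s.t. sum_j lam_j x_j <= theta x_o, sum_j lam_j y_j >= y_o."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                        # minimise theta; vars = [theta, lam_1..lam_n]
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[o]               # sum_j lam_j x_ij - theta x_io <= 0
    A_ub[:m, 1:] = X.T
    A_ub[m:, 1:] = -Y.T               # -sum_j lam_j y_rj <= -y_ro
    b_ub[m:] = -Y[o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

# Usage: three candidate warehouses, one input (cost) and one output (demand served).
X = np.array([[2.0], [4.0], [4.0]])
Y = np.array([[2.0], [4.0], [2.0]])
scores = [dea_efficiency(X, Y, o) for o in range(3)]
efficient = [o for o in range(3) if scores[o] > 1 - 1e-6]  # an "efficiency cut" keeps only these
```

A score of 1 marks an efficient unit; in a branch-and-bound search, an efficiency cut would discard candidate facilities scoring below 1.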

Abstract:

This paper addresses a lot-sizing and scheduling problem that minimizes inventory and backlog costs on m parallel machines with sequence-dependent set-up times over t periods. Problem solutions are represented as ordered and/or unordered product subsets for each machine m in each period t. The optimal lot sizes are determined by applying a linear program. A genetic algorithm searches either over ordered or over unordered subsets (which are implicitly ordered using a fast ATSP-type heuristic) to identify an overall optimal solution. Initial computational results are presented, comparing the speed and solution quality of the ordered and unordered genetic algorithm approaches.
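The ordered-subset search can be illustrated with a toy genetic algorithm that evolves a production sequence to reduce sequence-dependent set-up time (the ATSP-flavoured part of the approach). This is a deliberately small sketch, not the paper's hybrid GA/LP method; the set-up matrix, operators, and parameters are all hypothetical.

```python
import random

def setup_cost(order, S):
    """Total sequence-dependent set-up time of a production order."""
    return sum(S[a][b] for a, b in zip(order, order[1:]))

def ga_order(S, pop_size=30, generations=200, seed=1):
    """Tiny elitist, mutation-only GA: evolve product orderings that
    reduce sequence-dependent set-up cost."""
    rng = random.Random(seed)
    n = len(S)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: setup_cost(o, S))
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(n), 2)        # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda o: setup_cost(o, S))

# Usage: hypothetical set-up matrix where the chain 2 -> 0 -> 1 -> 3 is cheap
# (1 time unit per change-over) and every other change-over costs 10.
S = [[10] * 4 for _ in range(4)]
S[2][0] = S[0][1] = S[1][3] = 1
best = ga_order(S)
```

In the paper's scheme, each candidate ordering would additionally be priced by solving the lot-sizing linear program; here only the sequencing part is sketched.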

Abstract:

Scientific curiosity, exploration of georesources, and environmental concerns are pushing the geoscientific research community toward subsurface investigations of ever-increasing complexity. This review explores various approaches to formulating and solving inverse problems in ways that effectively integrate geological concepts with geophysical and hydrogeological data. Modern geostatistical simulation algorithms can produce multiple subsurface realizations that agree with conceptual geological models, and statistical rock physics can be used to map these realizations into the physical properties sensed by the geophysical or hydrogeological data. The inverse problem consists of finding one, or an ensemble of, such subsurface realizations that agree with the data. The most general inversion frameworks are presently often computationally intractable when applied to large-scale problems, and it is necessary to better understand the implications of simplifying (1) the conceptual geological model (e.g., using model compression); (2) the physical forward problem (e.g., using proxy models); and (3) the algorithm used to solve the inverse problem (e.g., Markov chain Monte Carlo or local optimization methods) in order to reach practical and robust solutions given today's computer resources and knowledge. We also highlight the need not only to use geophysical and hydrogeological data for parameter estimation, but also to use them to falsify or corroborate alternative geological scenarios.

Abstract:

The main objectives of this thesis are: to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition, and image detection using the principal components analysis (PCA) and IPCA algorithms; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA and IPCA algorithms in compression, recognition, and detection; and to compare the performance of the digital and optical models in recognition and detection. MATLAB software was used to simulate the models. PCA is a technique for identifying patterns in data and representing the data so as to highlight their similarities and differences. Identifying patterns in high-dimensional data (more than three dimensions) is difficult because such data cannot be represented graphically, which makes PCA a powerful method for analyzing them. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve on PCA. The JTC is an optical correlator used to synthesize a frequency-plane filter for coherent optical systems. The IPCA algorithm generally behaves better than the PCA algorithm in most applications. It is better than PCA in image compression, obtaining higher compression, more accurate reconstruction, and faster processing with acceptable errors; it is also better in real-time image detection, achieving the smallest error rate as well as remarkable speed.
On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition, offering an acceptable error rate, easy calculation, and reasonable speed. Finally, in detection and recognition, the digital model performs better than the optical model.
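The compression and reconstruction role of PCA in the thesis can be sketched with a plain eigendecomposition-based PCA (the information-theoretic IPCA refinement is not reproduced here). The synthetic data and the choice of a single component are hypothetical.

```python
import numpy as np

def pca_fit(X, k):
    """PCA via eigendecomposition of the sample covariance.
    Returns the data mean and the top-k principal directions."""
    mu = X.mean(axis=0)
    C = np.cov(X - mu, rowvar=False)
    vals, vecs = np.linalg.eigh(C)
    W = vecs[:, ::-1][:, :k]       # eigh sorts ascending; take the largest k
    return mu, W

def pca_compress(X, mu, W):
    """Project onto the principal subspace: k coefficients per sample."""
    return (X - mu) @ W

def pca_reconstruct(Z, mu, W):
    """Map coefficients back to the original space."""
    return Z @ W.T + mu

# Usage: data lying near a 1-D line in 3-D reconstructs almost exactly with k=1,
# a 3-to-1 compression of each sample.
rng = np.random.default_rng(0)
t = rng.normal(size=(500, 1))
X = t @ np.array([[1.0, 2.0, -1.0]]) + 0.01 * rng.normal(size=(500, 3))
mu, W = pca_fit(X, 1)
Xr = pca_reconstruct(pca_compress(X, mu, W), mu, W)
```

For images, each row of X would be a flattened image, and the same projection gives eigenface-style compression and recognition features.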

Abstract:

Low bone mineral density (BMD) has been found in human immunodeficiency virus (HIV)-infected patients; however, data on associated factors remain unclear, specifically in middle-aged women. This study aims to evaluate factors associated with low BMD in HIV-positive women. In this cross-sectional study, a questionnaire was administered to 206 HIV-positive women aged 40 to 60 years who were receiving outpatient care. Clinical features, laboratory test results, and BMD were assessed. Yates-corrected and Pearson χ² tests and Poisson multiple regression analysis were performed. The median age of the women was 47.7 years; 75% had nadir CD4 T-cell counts higher than 200, and 77.8% had viral loads below the detection limit. There was no association between low BMD at the proximal femur and lumbar spine (L1-L4) and risk factors related to HIV infection and highly active antiretroviral therapy. Poisson multiple regression analysis showed that the only factor associated with low BMD at the proximal femur and lumbar spine was postmenopausal status. Low BMD is present in more than one third of this population sample, in which most women are using highly active antiretroviral therapy and have well-controlled disease. The main associated factor is related to estrogen deprivation. The present data support periodic BMD assessments in HIV-infected patients and highlight the need to implement comprehensive menopausal care for these women to prevent bone loss.

Abstract:

The Aeromonas genus is considered an emerging pathogen, and its presence in drinking water supplies is a public health concern. This study investigated the occurrence of Aeromonas in samples from collective reservoirs and wells used as drinking water sources in a peri-urban area. A total of 35 water samples were collected from collective reservoirs and 32 from wells, bimonthly, from September 2007 to September 2008. Aeromonas spp. determination was carried out using a multiple-tube technique. Samples were inoculated into alkaline peptone water, and the superficial film formed was transferred to blood agar plates amended with ampicillin. Typical Aeromonas colonies were submitted to biochemical screening and then to biochemical tests for species differentiation. Aeromonas was detected in 13 (19%) of the 69 samples examined (6 from collective reservoirs and 7 from wells). Concentrations of Aeromonas ranged from <0.3 to 1.2 × 10² MPN/100 mL in collective reservoirs and from <0.3 to 2.4 × 10² MPN/100 mL in wells. The most frequent species in the collective reservoir samples was Aeromonas spp. (68%), followed by A. encheleia (14%), A. allosaccharophila (8%), and A. hydrophila (8%). Aeromonas spp. (87%) was also the most frequent species isolated from well samples, followed by A. allosaccharophila (8%), A. encheleia (2%), and A. jandaei (5%). These data show the presence and diversity of the Aeromonas genus in the samples analyzed and highlight that its presence in drinking water poses a significant public health concern.

Abstract:

Background: Community and clinical data have suggested there is an association between trauma exposure and suicidal behavior (i.e., suicide ideation, plans and attempts). However, few studies have assessed which traumas are uniquely predictive of: the first onset of suicidal behavior, the progression from suicide ideation to plans and attempts, or the persistence of each form of suicidal behavior over time. Moreover, few data are available on such associations in developing countries. The current study addresses each of these issues. Methodology/Principal Findings: Data on trauma exposure and subsequent first onset of suicidal behavior were collected via structured interviews conducted in the households of 102,245 (age 18+) respondents from 21 countries participating in the WHO World Mental Health Surveys. Bivariate and multivariate survival models tested the relationship between the type and number of traumatic events and subsequent suicidal behavior. A range of traumatic events are associated with suicidal behavior, with sexual and interpersonal violence consistently showing the strongest effects. There is a dose-response relationship between the number of traumatic events and suicide ideation/attempt; however, there is decay in the strength of the association with more events. Although a range of traumatic events are associated with the onset of suicide ideation, fewer events predict which people with suicide ideation progress to suicide plan and attempt, or the persistence of suicidal behavior over time. Associations generally are consistent across high-, middle-, and low-income countries. Conclusions/Significance: This study provides more detailed information than previously available on the relationship between traumatic events and suicidal behavior and indicates that this association is fairly consistent across developed and developing countries. 
These data reinforce the importance of psychological trauma as a major public health problem, and highlight the significance of screening for the presence and accumulation of traumatic exposures as a risk factor for suicide ideation and attempt.

Abstract:

Imprinted inactivation of the paternal X chromosome in marsupials is the primordial mechanism of dosage compensation for X-linked genes between females and males in therian mammals. In eutherian mammals, X chromosome inactivation (XCI) evolved into a random process in cells from the embryo proper, where either the maternal or the paternal X can be inactivated. However, species such as mouse and bovine have maintained imprinted XCI exclusively in extraembryonic tissues. The existence of imprinted XCI in humans remains controversial, with studies based on the analysis of only one or two X-linked genes in different extraembryonic tissues. Here we readdress this issue in human term placenta by performing a robust analysis of allele-specific expression of 22 X-linked genes, including XIST, using 27 SNPs in transcribed regions. We show that XCI is random in human placenta, and that this organ is arranged in relatively large patches of cells with either the maternal or the paternal X inactive. In addition, this analysis indicated heterogeneous maintenance of gene silencing along the inactive X, which, combined with the extensive mosaicism found in placenta, can explain the lack of agreement among previous studies. Our results illustrate the differences in the XCI mechanism between humans and mice, and highlight the importance of addressing imprinted XCI in other species in order to understand the evolution of dosage compensation in placental mammals.

Abstract:

In the Hammersley-Aldous-Diaconis process, infinitely many particles sit on the real line and at most one particle is allowed at each position. A particle at x, whose nearest neighbor to the right is at y, jumps at rate y - x to a position uniformly distributed in the interval (x, y). The basic coupling between trajectories with different initial configurations induces a process with different classes of particles. We show that the invariant measures for the two-class process can be obtained as follows. First, a stationary M/M/1 queue is constructed as a function of two homogeneous Poisson processes: the arrivals, with rate lambda, and the (attempted) services, with rate rho > lambda. Then first-class particles are put at the instants of departures (effective services) and second-class particles at the instants of unused services. The procedure is generalized to the n-class case by using n - 1 queues in tandem with n - 1 priority types of customers. A multi-line process is introduced; it consists of a coupling (different from Liggett's basic coupling) having as invariant measure the product of Poisson processes. The definition of the multi-line process involves the dual points of the space-time Poisson process used in the graphical construction of the reversed process. The coupled process is a transformation of the multi-line process, and its invariant measure is the transformation, described above, of the product measure.
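The queueing construction of the two-class measure can be sketched by simulation: run the arrival and attempted-service Poisson processes, and classify each service epoch as a departure (first class, queue nonempty) or an unused service (second class, queue empty). This is an illustrative simulation under assumed rates, not the paper's proof; starting from an empty queue only approximates stationarity.

```python
import numpy as np

def two_class_points(lam, rho, T, seed=0):
    """Build first/second-class particle positions on [0, T] from an M/M/1
    queue driven by Poisson(lam) arrivals and Poisson(rho) attempted services."""
    rng = np.random.default_rng(seed)
    arrivals = np.cumsum(rng.exponential(1 / lam, size=int(3 * lam * T)))
    services = np.cumsum(rng.exponential(1 / rho, size=int(3 * rho * T)))
    arrivals = arrivals[arrivals < T]
    services = services[services < T]
    events = sorted([(t, 'a') for t in arrivals] + [(t, 's') for t in services])
    queue = 0
    first, second = [], []
    for t, kind in events:
        if kind == 'a':
            queue += 1
        elif queue > 0:
            queue -= 1
            first.append(t)       # effective service: first-class particle
        else:
            second.append(t)      # unused service: second-class particle
    return np.array(first), np.array(second)

# Usage: with lam = 1 and rho = 2, first-class particles appear at long-run
# rate lam and second-class particles at rate rho - lam.
first, second = two_class_points(lam=1.0, rho=2.0, T=2000.0)
```

The empirical densities of the two point processes approach lambda and rho - lambda, matching the intensities of the invariant measure described above.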

Abstract:

In this paper, we propose an approach to the transient and steady-state analysis of the affine combination of one fast and one slow adaptive filter. The theoretical models are based on expressions for the excess mean-square error (EMSE) and cross-EMSE of the component filters, which allows their application to different combinations of algorithms, such as least mean-squares (LMS), normalized LMS (NLMS), and the constant modulus algorithm (CMA), considering white or colored inputs and stationary or nonstationary environments. Since the desired universal behavior of the combination depends on the correct estimation of the mixing parameter at every instant, its adaptation is also taken into account in the transient analysis. Furthermore, we propose normalized algorithms for the adaptation of the mixing parameter that exhibit good performance. Good agreement between analysis and simulation results is observed throughout.
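The combination scheme can be sketched for two LMS filters: each component filter adapts independently, and the mixing parameter is adapted by a stochastic-gradient step on the combined error. This is a simplified, unnormalized variant for illustration (the paper proposes normalized mixing-parameter updates); the filter length, step sizes, and identification scenario are hypothetical.

```python
import numpy as np

def affine_combination_lms(x, d, M=8, mu_fast=0.1, mu_slow=0.01, mu_eta=0.5):
    """Affine combination of a fast and a slow LMS filter.
    The mixing parameter eta is adapted by a gradient step on the
    combined squared error; eta is not constrained to [0, 1]."""
    w1 = np.zeros(M)
    w2 = np.zeros(M)
    eta = 0.5
    y_out = np.zeros(len(d))
    for n in range(M - 1, len(d)):
        u = x[n - M + 1:n + 1][::-1]       # regressor, most recent sample first
        y1, y2 = w1 @ u, w2 @ u
        e1, e2 = d[n] - y1, d[n] - y2
        w1 += mu_fast * e1 * u             # fast LMS update
        w2 += mu_slow * e2 * u             # slow LMS update
        y = eta * y1 + (1 - eta) * y2
        e = d[n] - y
        eta += mu_eta * e * (y1 - y2)      # gradient step on the mixing parameter
        eta = min(max(eta, -0.5), 1.5)     # keep eta in a sane range
        y_out[n] = y
    return y_out, eta

# Usage: identify a fixed FIR channel from white noise (hypothetical setup).
rng = np.random.default_rng(0)
h = rng.normal(size=8)
x = rng.normal(size=5000)
d = np.convolve(x, h)[:5000] + 0.01 * rng.normal(size=5000)
y, eta = affine_combination_lms(x, d)
```

In a stationary scenario the slow filter eventually wins (lower EMSE), while after an abrupt channel change the fast filter dominates; the adapted eta is what lets the combination behave universally.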

Abstract:

Chlorpheniramine maleate (CLOR) enantiomers were quantified by ultraviolet spectroscopy and partial least squares regression. The CLOR enantiomers were prepared as inclusion complexes with beta-cyclodextrin and 1-butanol, with mole fractions in the range from 50 to 100%. For the multivariate calibration, outliers were detected and excluded, and variable selection was performed by interval partial least squares and a genetic algorithm. Figures of merit showed root mean square errors of calibration and prediction (a measure of accuracy) of 3.63 and 2.83% (S)-CLOR, respectively. The elliptical confidence region included the point corresponding to a slope of 1 and an intercept of 0. Precision and analytical sensitivity were 0.57 and 0.50% (S)-CLOR, respectively. The sensitivity, selectivity, adjustment, and signal-to-noise ratio were also determined. The model was validated by a paired t test against the results obtained by the high-performance liquid chromatography method of the European Pharmacopoeia and by circular dichroism spectroscopy. The results showed no significant difference between the methods at the 95% confidence level, indicating that the proposed method can be used as an alternative to standard procedures for chiral analysis.
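The multivariate-calibration step can be sketched with a plain PLS1 (NIPALS) regression on synthetic two-component "spectra". This is a generic PLS sketch, not the paper's iPLS/GA variable-selection pipeline; the spectra and mole fractions below are simulated.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """PLS1 regression via the NIPALS algorithm: extract latent components
    maximally covariant with y, then form the regression vector."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)            # weight vector
        t = Xc @ w                        # score vector
        tt = t @ t
        p = Xc.T @ t / tt                 # X loading
        qk = yc @ t / tt                  # y loading
        Xc = Xc - np.outer(t, p)          # deflate X
        yc = yc - qk * t                  # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)   # regression vector
    return X.mean(0), y.mean(), B

def pls1_predict(Xn, xm, ym, B):
    return (Xn - xm) @ B + ym

# Usage: simulated mixture spectra, X = y*s1 + (1-y)*s2 + noise, where y is
# the (S)-enantiomer mole fraction and s1, s2 are pure-component spectra.
rng = np.random.default_rng(0)
s1, s2 = rng.normal(size=20), rng.normal(size=20)
y = rng.uniform(0.5, 1.0, size=40)
X = np.outer(y, s1) + np.outer(1 - y, s2) + 0.001 * rng.normal(size=(40, 20))
xm, ym, B = pls1_fit(X, y, n_comp=2)
pred = pls1_predict(X, xm, ym, B)
```

The root mean square error of such predictions on calibration and validation sets is exactly the RMSEC/RMSEP figure of merit quoted above.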

Abstract:

In the protein folding problem, solvent-mediated forces are commonly represented by intra-chain pairwise contact energies. Although this approximation has proven useful in several circumstances, it is limited in other aspects of the problem. Here we show that it is possible to construct two models of the chain-solvent system, one with implicit and the other with explicit solvent, such that both reproduce the same thermodynamic results. First, lattice models treated by analytical methods were used to show that implicit and explicit representations of solvent effects can be energetically equivalent only if local solvent properties are time and spatially invariant. Next, applying the same reasoning used for the lattice models, two inter-consistent Monte Carlo off-lattice models for implicit and explicit solvent were constructed, in the latter of which the solvent properties are allowed to fluctuate. It is then shown that the chain's configurational evolution, as well as the globule equilibrium conformation, are significantly distinct for the implicit and explicit solvent systems. Indeed, in strong contrast with the implicit solvent version, the explicit solvent model predicts: (i) a malleable globule, in agreement with the estimated large protein-volume fluctuations; (ii) thermal conformational stability, resembling the conformational heat resistance of globular proteins, whose radii of gyration are practically insensitive to thermal effects over a relatively wide range of temperatures; and (iii) smaller radii of gyration at higher temperatures, indicating that the chain conformational entropy in the unfolded state is significantly smaller than that estimated from random coil configurations. Finally, we comment on the meaning of these results for the understanding of the folding process. (C) 2009 Elsevier B.V. All rights reserved.

Abstract:

The popular Newmark algorithm, used for implicit direct integration of structural dynamics, is extended by means of a nodal partition to permit the use of different timesteps in different regions of a structural model. The algorithm developed has as a special case an explicit-explicit subcycling algorithm previously reported by Belytschko, Yen and Mullen. That algorithm has been shown, in the absence of damping or other energy dissipation, to exhibit instability over narrow timestep ranges that become narrower as the number of degrees of freedom increases, making them unlikely to be encountered in practice. The present algorithm avoids such instabilities in the case of a one-to-two timestep ratio (two subcycles), achieving unconditional stability in an exponential sense for a linear problem. With three or more subcycles, however, the stability of the trapezoidal rule becomes conditional, falling towards that of the central difference method as the number of subcycles increases. Instabilities over narrow timestep ranges, which become narrower as the model size increases, also appear with three or more subcycles. However, by moving the partition between timesteps one row of elements into the region suitable for integration with the larger timestep, the unstable timestep ranges become extremely narrow, even in simple systems with a few degrees of freedom. Accuracy is also improved. Using a version of the Newmark algorithm that dissipates high frequencies minimises or eliminates these narrow bands of instability. Viscous damping is also shown to remove them, at the expense of having a greater effect on the low-frequency response.
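The single-timestep Newmark scheme that the nodal partition extends can be sketched as follows, using the constant-average-acceleration parameters beta = 1/4, gamma = 1/2 (the trapezoidal rule discussed above). The subcycling partition itself is not reproduced; the test problem is a hypothetical undamped oscillator.

```python
import numpy as np

def newmark(M, C, K, F, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Implicit Newmark integration of M u'' + C u' + K u = F(t).
    beta=1/4, gamma=1/2 gives the unconditionally stable trapezoidal rule."""
    u, v = np.array(u0, float), np.array(v0, float)
    a = np.linalg.solve(M, F(0.0) - C @ v - K @ u)       # consistent initial accel.
    Keff = K + gamma / (beta * dt) * C + M / (beta * dt ** 2)
    hist = [u.copy()]
    for n in range(1, n_steps + 1):
        t = n * dt
        # Effective load from the standard Newmark predictor terms.
        rhs = (F(t)
               + M @ (u / (beta * dt ** 2) + v / (beta * dt) + (0.5 / beta - 1) * a)
               + C @ (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        u_new = np.linalg.solve(Keff, rhs)
        a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (0.5 / beta - 1) * a
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        hist.append(u.copy())
    return np.array(hist)

# Usage: undamped single-DOF oscillator with natural period 1; after one full
# period the displacement should return close to its initial value.
M = np.array([[1.0]])
C = np.zeros((1, 1))
K = np.array([[(2 * np.pi) ** 2]])
hist = newmark(M, C, K, lambda t: np.zeros(1), [1.0], [0.0], dt=0.001, n_steps=1000)
```

A subcycling variant would apply this update with a small dt in one nodal partition and a larger dt in the other, exchanging interface forces between the two; choosing gamma > 1/2 introduces the high-frequency dissipation mentioned above.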