932 results for Mini-scale method


Relevance:

30.00%

Publisher:

Abstract:

We present a new method to quantify substructures in clusters of galaxies, based on the analysis of the intensity of structures. This analysis is done in a residual image, obtained by subtracting from the X-ray image a surface brightness model fitted with a two-dimensional analytical model (beta-model or Sersic profile) with elliptical symmetry. Our method is applied to 34 clusters observed by the Chandra X-ray Observatory that lie in the redshift range z ∈ [0.02, 0.2] and have a signal-to-noise ratio (S/N) greater than 100. We present the calibration of the method and the relations between the substructure level and physical quantities such as mass, X-ray luminosity, temperature, and cluster redshift. We use our method to separate the clusters into two sub-samples of high and low substructure levels. We conclude, using Monte Carlo simulations, that the method recovers the true amount of substructure very well for clusters with small angular core radii (with respect to the whole image size) and good S/N observations. We find no evidence of correlation between the substructure level and physical properties of the clusters such as gas temperature, X-ray luminosity, and redshift; however, the analysis suggests a trend between the substructure level and cluster mass. The scaling relations for the two sub-samples (high- and low-substructure-level clusters) are different (they present an offset, i.e., at a fixed mass or temperature, low-substructure clusters tend to be more X-ray luminous). This is an important result for cosmological tests that use the mass-luminosity relation to obtain the cluster mass function, since they rely on the assumption that clusters do not present different scaling relations according to their dynamical state.
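A minimal sketch of the residual-image idea described above, assuming a circular beta-model (the paper uses elliptical symmetry) and quantifying substructure as the fraction of X-ray flux left in the positive residuals; the paper's exact statistic is not given in the abstract, so this is an illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

def beta_model(coords, s0, x0, y0, rc, beta):
    """Circular 2D beta-model surface-brightness profile (illustrative)."""
    x, y = coords
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return s0 * (1.0 + r2 / rc ** 2) ** (0.5 - 3.0 * beta)

def substructure_level(image):
    """Fit a smooth model, subtract it, and measure the residual flux fraction."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    coords = (xx.ravel(), yy.ravel())
    p0 = [image.max(), nx / 2, ny / 2, 10.0, 0.6]        # rough initial guess
    popt, _ = curve_fit(beta_model, coords, image.ravel(), p0=p0)
    model = beta_model(coords, *popt).reshape(ny, nx)
    residual = np.clip(image - model, 0, None)           # positive residuals only
    return residual.sum() / image.sum()                  # fraction of flux in substructure
```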

Relevance:

30.00%

Publisher:

Abstract:

This new and general method, here called overflow current switching, allows a fast, continuous, and smooth transition between scales in wide-range current measurement systems such as electrometers. This is achieved, in a hydraulic analogy, by diverting only the overflow current, so that no slow element is forced to change its state during the switching. As a result, this approach practically eliminates the long dead time in low-current (picoampere) switching. The current is measured by a composition of n adjacent linear scales, like a segmented ruler, which together behave much like a logarithmic scale. A linear wide-range system based on this technique assures fast and continuous measurement over the entire range, without blind regions during transitions, while still providing accuracy suitable for many applications. A full mathematical development of the method is given. Several realistic computer simulations demonstrate the viability of the technique.
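The overflow-switching circuit itself is not reproduced here; as an illustration of the segmented-ruler composition of n adjacent linear scales, the sketch below uses hypothetical decade-wide full-scale values and returns which scale a current falls on and the fractional reading within it:

```python
# Hypothetical wide-range reading built from n adjacent linear scales.
# Each scale covers one decade of current; a reading is the scale index
# plus the position of the current within that scale (a "segmented ruler").

FULL_SCALES = [1e-12, 1e-11, 1e-10, 1e-9, 1e-8, 1e-7]  # assumed full-scale currents (A)

def segmented_reading(current):
    """Return (scale_index, fraction_of_full_scale) for a positive current."""
    for k, full_scale in enumerate(FULL_SCALES):
        if current <= full_scale:
            return k, current / full_scale
    raise ValueError("current exceeds the widest scale")

print(segmented_reading(3.2e-10))   # -> (3, 0.32): fourth scale, 32 % of full scale
```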

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a new algebraic-graph method for the identification of islanding in power system grids is proposed. The proposed method identifies all possible cases of islanding due to the loss of a piece of equipment by means of a factorization of the bus-branch incidence matrix. The main features of this new method include: (i) simple implementation, (ii) high speed, (iii) real-time adaptability, (iv) identification of all islanding cases, and (v) identification of the buses that compose each island in case of island formation. The method was successfully tested on large-scale systems such as the reduced south Brazilian system (45 buses/72 branches) and the south-southeast Brazilian system (810 buses/1340 branches).
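The factorization of the bus-branch incidence matrix is not detailed in the abstract; as a rough illustration of the underlying question, the sketch below swaps in a plain connected-components search to check whether the loss of a branch splits the grid and to list the buses of each resulting island:

```python
from collections import defaultdict

def islands_after_outage(branches, lost_branch):
    """branches: list of (from_bus, to_bus); returns a list of bus sets (islands)."""
    adjacency = defaultdict(set)
    for from_bus, to_bus in branches:
        if (from_bus, to_bus) == lost_branch or (to_bus, from_bus) == lost_branch:
            continue                       # simulate the loss of this equipment
        adjacency[from_bus].add(to_bus)
        adjacency[to_bus].add(from_bus)
    buses = {b for branch in branches for b in branch}
    unvisited, islands = set(buses), []
    while unvisited:                       # depth-first search per component
        stack, component = [unvisited.pop()], set()
        while stack:
            bus = stack.pop()
            component.add(bus)
            stack.extend(adjacency[bus] - component)
        unvisited -= component
        islands.append(component)
    return islands

# Example: losing branch (2, 3) splits a 4-bus chain into two islands.
print(islands_after_outage([(1, 2), (2, 3), (3, 4)], (2, 3)))
```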

Relevance:

30.00%

Publisher:

Abstract:

The use of the core-annular flow pattern, in which a thin fluid surrounds a very viscous one, has been suggested as an attractive artificial-lift method for heavy oils in the current Brazilian ultra-deepwater production scenario. This paper reports the pressure drop measurements and the core-annular flow observed in a 2 7/8-inch, 300-meter-deep pilot-scale well conveying a mixture of heavy crude oil (2000 mPa·s and 950 kg/m³ at 35 °C) and water at several combinations of the individual flow rates. The two-phase pressure drop data are compared with those of single-phase oil flow to assess the gains due to water injection. Another issue is the handling of the core-annular flow once it has been established. High-frequency pressure-gradient signals were collected, and a treatment based on the Gabor transform together with neural networks is proposed as a promising solution for monitoring and control. The preliminary results are encouraging. The pilot-scale tests, including long-term experiments, were conducted to investigate the applicability of using water to transport heavy oils in actual wells. They represent an important step towards the full-scale application of the proposed artificial-lift technology. The observed improvements in oil production rate and pressure drop reduction are remarkable.
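A minimal sketch of the signal-treatment step, assuming a hypothetical sampled pressure-gradient series; the Gabor transform is realized as a Gaussian-windowed short-time Fourier transform, whose magnitude map could then feed a neural-network classifier (not shown):

```python
import numpy as np
from scipy.signal import stft, get_window

FS = 1000.0                                # assumed sampling rate, Hz

def gabor_features(pressure_gradient, nperseg=256):
    """Gaussian-windowed STFT magnitude of a pressure-gradient time series."""
    window = get_window(("gaussian", nperseg / 8), nperseg)
    freqs, times, coeffs = stft(pressure_gradient, fs=FS,
                                window=window, nperseg=nperseg)
    return freqs, times, np.abs(coeffs)    # spectrogram-like feature map

# Example with a synthetic signal: a 50 Hz oscillation plus noise.
t = np.arange(0, 10, 1 / FS)
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
freqs, times, features = gabor_features(signal)
```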

Relevance:

30.00%

Publisher:

Abstract:

The aim of solving the Optimal Power Flow problem is to determine the optimal state of an electric power transmission system, that is, the voltage magnitudes and phase angles and the transformer tap ratios that optimize the performance of a given system while satisfying its physical and operating constraints. The Optimal Power Flow problem is modeled as a large-scale mixed-discrete nonlinear programming problem. This paper proposes a method for handling the discrete variables of the Optimal Power Flow problem, based on a penalty function. Owing to the inclusion of the penalty function in the objective function, a sequence of nonlinear programming problems with only continuous variables is obtained, and the solutions of these problems converge to a solution of the mixed problem. The resulting nonlinear programming problems are solved by a Primal-Dual Logarithmic-Barrier Method. Numerical tests using the IEEE 14-, 30-, 118- and 300-bus test systems indicate that the method is efficient.
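The abstract does not give the form of the penalty function; a common choice in the literature for pushing a continuous relaxation of a discrete variable (such as a transformer tap) toward its nearest allowed step is a sinusoidal penalty, sketched below purely as an assumption, not as the paper's function:

```python
import numpy as np

def discrete_penalty(x, step):
    """Sinusoidal penalty: zero exactly at integer multiples of `step` (assumed form)."""
    return np.sin(np.pi * np.asarray(x) / step) ** 2

def penalized_objective(objective, x_cont, x_disc, step, weight):
    """Continuous subproblem objective: original cost plus weighted discrete penalty."""
    return objective(x_cont, x_disc) + weight * np.sum(discrete_penalty(x_disc, step))

# Example: a tap ratio of 1.013 with 0.0125 steps is penalized until it
# settles near 1.0125 as `weight` grows over the sequence of subproblems.
print(discrete_penalty(1.013, step=0.0125))
```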

Relevance:

30.00%

Publisher:

Abstract:

Evaluation of outcomes after aesthetic surgery is still a challenge in plastic surgery, as the evaluation is frequently based on subjective criteria. This study used a new clinical grading scale to evaluate aesthetic results of plastic surgeries to the abdomen. The method scores each of the following five parameters: volume of subcutaneous tissue, contour, excess of skin, aspect of the navel, and quality of the scar on the abdominal wall. The scale options are 0 (poor), 1 (fair), and 2 (good), and the total score can range from 0 to 10. The study included 40 women aged 18-53 years. Of these 40 women, 20 underwent traditional abdominoplasty and 20 had liposuction alone. Photographs taken preoperatively and at least 1 year postoperatively were analyzed and scored by three independent plastic surgeons. In the abdominoplasty group, the average grade rose from 2.9 ± 0.4 preoperatively to 6.8 ± 0.4 postoperatively. In the liposuction group, the average grade was 5.3 ± 0.5 preoperatively and 7.7 ± 0.4 postoperatively. In both groups, the average postoperative grade was significantly higher than the preoperative grade. The mean scores of group A (abdominoplasty) and group L (liposuction) were significantly different, demonstrating that the scale was sensitive in identifying different anatomic abnormalities of the abdomen. The rating scale used for the aesthetic evaluation of the abdomen was effective in the analysis of two different procedures, conventional abdominoplasty and liposuction, with abdominoplasty providing the greater gain according to a comparison of the pre- and postoperative scores.

Relevance:

30.00%

Publisher:

Abstract:

The current cosmological dark sector (dark matter plus dark energy) is challenging our comprehension of the physical processes taking place in the Universe. Recently, some authors tried to falsify the basic underlying assumptions of this dark matter-dark energy paradigm. In this Letter, we show that oversimplifications of the measurement process may produce false positives for any consistency test based on the globally homogeneous and isotropic Λ cold dark matter (ΛCDM) model and its expansion history based on distance measurements. In particular, when local inhomogeneity effects due to clumped matter or voids are taken into account, an apparent violation of the basic assumptions (Copernican Principle) seems to be present. Conversely, the amplitude of the deviations also probes the degree of reliability underlying the phenomenological Dyer-Roeder procedure by confronting its predictions with the accuracy of the weak lensing approach. Finally, a new method is devised to reconstruct the effects of the inhomogeneities in a ΛCDM model, and some suggestions of how to distinguish clumpiness (or void) effects from different cosmologies are discussed.
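A minimal sketch of the phenomenological Dyer-Roeder (ZKDR) distance mentioned above, integrating its standard second-order ODE in a flat ΛCDM background with smoothness parameter alpha; the fiducial Ωm and the units (c/H0 = 1) are assumptions for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

OMEGA_M = 0.3                                   # assumed fiducial flat LCDM

def E(z):
    """Dimensionless Hubble rate H(z)/H0 for flat LCDM."""
    return np.sqrt(OMEGA_M * (1 + z) ** 3 + 1 - OMEGA_M)

def dyer_roeder_distance(z_max, alpha=1.0):
    """Angular diameter distance D(z) (units of c/H0) for smoothness alpha.

    Solves D'' + (dlnE/dz + 2/(1+z)) D' + 1.5*alpha*Omega_m*(1+z)/E^2 * D = 0,
    with D(0) = 0 and D'(0) = 1.
    """
    def rhs(z, y):
        D, dD = y
        dlnE = 1.5 * OMEGA_M * (1 + z) ** 2 / E(z) ** 2     # d ln E / dz
        d2D = -(dlnE + 2 / (1 + z)) * dD \
              - 1.5 * alpha * OMEGA_M * (1 + z) / E(z) ** 2 * D
        return [dD, d2D]

    sol = solve_ivp(rhs, (0, z_max), [0.0, 1.0], dense_output=True)
    return sol.sol

# Clumpy (alpha = 0) vs. homogeneous (alpha = 1) distance to z = 1:
print(dyer_roeder_distance(1.0, alpha=0.0)(1.0)[0],
      dyer_roeder_distance(1.0, alpha=1.0)(1.0)[0])
```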

Relevance:

30.00%

Publisher:

Abstract:

The exploration of novel synthetic methodologies that control both the size and the shape of functional nanostructures opens new avenues for the functional application of nanomaterials. Here, we report a new and versatile approach to synthesize SnO2 nanocrystals (rutile-type structure) using a microwave-assisted hydrothermal method. Broad peaks in the X-ray diffraction patterns indicate the nanosized nature of the samples, which were indexed as a pure tetragonal cassiterite phase. Chemically and physically adsorbed water was estimated from TGA data, and FT-Raman spectra account for a new broad peak around 560 cm⁻¹, which is related to defective surface modes. In addition, the spherical-like morphology and the narrow size distribution, around 3-5 nm, were investigated by HR-TEM and FE-SEM microscopies. Room-temperature PL emission presents two broad bands at 438 and 764 nm, indicating the existence of different recombination centers. When the size of the nanospheres decreases, the relative intensity of the 513 nm emission increases and that of the 393 nm emission decreases. UV-Visible spectra show substantial changes in the optical absorbance of crystalline SnO2 nanoparticles, while the existence of a small tail points to the presence of localized levels inside the forbidden band gap and supplies the necessary condition for the PL emission.

Relevance:

30.00%

Publisher:

Abstract:

Augmented Lagrangian methods are effective tools for solving large-scale nonlinear programming problems. At each outer iteration, a minimization subproblem with simple constraints, whose objective function depends on updated Lagrange multipliers and penalty parameters, is approximately solved. When the penalty parameter becomes very large, solving the subproblem becomes difficult; therefore, the effectiveness of this approach is associated with the boundedness of the penalty parameters. In this paper, it is proved that, under assumptions more natural than those employed until now, the penalty parameters are bounded. To prove the new boundedness result, the original algorithm has been slightly modified. Numerical consequences of the modifications are discussed and computational experiments are presented.
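A minimal sketch of a generic augmented Lagrangian outer loop for equality constraints, with the standard first-order multiplier update and a penalty increase only when the constraint violation does not improve enough; the paper's specific modification is not reproduced, and scipy's general-purpose minimizer stands in for the subproblem solver:

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, mu0=10.0, tol=1e-8, max_outer=50):
    """Minimize f(x) subject to h(x) = 0 (h returns a numpy array)."""
    x, lam, mu = np.asarray(x0, float), np.zeros(len(h(x0))), mu0
    prev_violation = np.inf
    for _ in range(max_outer):
        def lagrangian(z):                        # augmented Lagrangian for current lam, mu
            hz = h(z)
            return f(z) + lam @ hz + 0.5 * mu * hz @ hz
        x = minimize(lagrangian, x).x             # approximate subproblem solve
        hx = h(x)
        violation = np.linalg.norm(hx, np.inf)
        if violation < tol:
            break
        lam = lam + mu * hx                       # first-order multiplier update
        if violation > 0.5 * prev_violation:      # insufficient progress:
            mu *= 10.0                            # increase the penalty parameter
        prev_violation = violation
    return x, lam

# Example: minimize x0^2 + x1^2 subject to x0 + x1 - 1 = 0  ->  x = (0.5, 0.5)
x, lam = augmented_lagrangian(lambda x: x @ x,
                              lambda x: np.array([x[0] + x[1] - 1.0]),
                              x0=[0.0, 0.0])
print(x)
```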

Relevance:

30.00%

Publisher:

Abstract:

Many engineering sectors are challenged by multi-objective optimization problems. Even though the idea behind these problems is simple and well established, implementing any procedure to solve them is not a trivial task. The use of evolutionary algorithms to find candidate solutions is widespread; usually they supply a discrete picture of the non-dominated solutions, a Pareto set. Although it is very interesting to know the non-dominated solutions, an additional criterion is needed to select the one solution to be deployed. To better support the design process, this paper presents a new method for solving non-linear multi-objective optimization problems by adding a control function that guides the optimization process over the Pareto set, which does not need to be found explicitly. The proposed methodology differs from the classical methods that combine the objective functions into a single scalar objective, and is based on a single run of non-linear single-objective optimizers.
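A minimal sketch of the Pareto-set notion the paragraph relies on, extracting the non-dominated points from a set of candidate objective vectors (minimization assumed); the paper's control-function guidance is not reproduced:

```python
import numpy as np

def non_dominated(points):
    """Return the Pareto set of a (n_points, n_objectives) array (minimization)."""
    points = np.asarray(points, float)
    keep = []
    for i, p in enumerate(points):
        # p is dominated if some other point is no worse in every objective
        # and strictly better in at least one.
        dominated = np.any(
            np.all(points <= p, axis=1) & np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return points[keep]

# Example with two objectives: (1, 5) and (3, 2) are non-dominated, (4, 6) is not.
print(non_dominated([[1, 5], [3, 2], [4, 6]]))
```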

Relevance:

30.00%

Publisher:

Abstract:

In Brazil, the large quantities of solid waste produced are out of step with public policies, technological developments, and government budgets for the sector. In small municipalities, the common lack of technological knowledge and of financial conditions for suitable waste disposal has resulted in a large number of illegal dumps. Therefore, small sanitary landfill facilities are operated with simplified procedures focused on cost reduction, meeting the economic and technological standards of the city without endangering the environment or public health. Currently, this activity is regulated at the federal level, although there is some uncertainty regarding the risk of soil and aquifer contamination, as these facilities do not employ liners. Thus, this work evaluates a small landfill to identify changes in soil and groundwater using geotechnical parameters, monitoring wells, and geophysical tests performed by electrical profiling. It is verified that, under current conditions, no contaminants have migrated into the groundwater aquifers, and overall no significant changes have occurred in the soil. It is concluded that, despite its simplicity, the method investigated is a viable alternative for the final disposal of municipal solid waste from small cities, especially in developing countries.

Relevance:

30.00%

Publisher:

Abstract:

Network reconfiguration for service restoration (SR) in distribution systems is a complex optimization problem. For large-scale distribution systems, it is computationally hard to find adequate SR plans in real time, since the problem is combinatorial and non-linear, involving several constraints and objectives. Two Multi-Objective Evolutionary Algorithms that use Node-Depth Encoding (NDE) have proved able to efficiently generate adequate SR plans for large distribution systems: (i) one of them is the hybridization of the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) with NDE, named NSGA-N; (ii) the other is a Multi-Objective Evolutionary Algorithm based on subpopulation tables that uses NDE, named MEAN. Further challenges are now faced: designing SR plans for larger systems that are as good as those for relatively smaller ones, and for multiple faults that are as good as those for a single fault. To tackle both challenges, this paper proposes a method that combines NSGA-N, MEAN, and a new heuristic. Such a heuristic focuses the application of the NDE operators on network zones in an alarm condition, according to technical constraints. The method generates SR plans of similar quality in distribution systems of significantly different sizes (from 3860 to 30,880 buses). Moreover, the number of switching operations required to implement the SR plans generated by the proposed method increases only moderately with the number of faults.
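A rough sketch of the node-depth encoding idea, following the common description in the NDE literature: each tree of the network forest is stored as a preorder list of (node, depth) pairs, so a subtree is a contiguous block whose depths exceed that of its root, which is what makes subtree-transfer (reconfiguration) operators cheap. The names below are illustrative and the paper's new heuristic is not reproduced:

```python
# Node-depth encoding of one tree of a radial distribution network (illustrative).
# Nodes are listed in depth-first (preorder) order together with their depth.
tree = [("sub", 0), ("b1", 1), ("b2", 2), ("b3", 2), ("b4", 1), ("b5", 2)]

def subtree(nde, root_index):
    """Return the contiguous NDE block rooted at nde[root_index]."""
    root_depth = nde[root_index][1]
    end = root_index + 1
    while end < len(nde) and nde[end][1] > root_depth:
        end += 1
    return nde[root_index:end]

# The subtree fed from bus b1 is the block [("b1", 1), ("b2", 2), ("b3", 2)]:
print(subtree(tree, 1))
```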

Relevance:

30.00%

Publisher:

Abstract:

The seminal work of Horn and Schunck [8] is the first variational method for optical flow estimation. It introduced a novel framework in which the optical flow is computed as the solution of a minimization problem. From the assumption that pixel intensities do not change over time, the optical flow constraint equation is derived. This equation relates the optical flow to the derivatives of the image. There are infinitely many vector fields that satisfy the optical flow constraint, so the problem is ill-posed. To overcome this, Horn and Schunck introduced an additional regularity condition that restricts the possible solutions. Their method minimizes both the optical flow constraint and the magnitude of the variations of the flow field, producing smooth vector fields. One limitation of this method is that, typically, it can only estimate small motions. In the presence of large displacements, it fails when the gradient of the image is not smooth enough. In this work, we describe an implementation of the original Horn and Schunck method and also introduce a multi-scale strategy in order to deal with larger displacements. For this multi-scale strategy, we create a pyramidal structure of downsampled images and replace the optical flow constraint equation with a nonlinear formulation. In order to tackle this nonlinear formulation, we linearize it and solve the method iteratively at each scale. In this sense, there are two common approaches: one computes the motion increment during the iterations, while the one we follow computes the full flow during the iterations. The solutions are incrementally refined over the scales. This pyramidal structure is a standard tool in many optical flow methods.
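A minimal single-scale sketch of the Horn and Schunck iteration described above, assuming two grayscale frames as numpy arrays; derivatives and the local flow average use simple finite-difference kernels, and alpha is the regularization weight:

```python
import numpy as np
from scipy.ndimage import convolve

AVG_KERNEL = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]], float) / 12.0

def horn_schunck(frame1, frame2, alpha=15.0, iterations=100):
    """Single-scale Horn-Schunck optical flow between two grayscale frames."""
    frame1 = np.asarray(frame1, float)
    frame2 = np.asarray(frame2, float)
    Ix = convolve(frame1, np.array([[-1.0, 1.0], [-1.0, 1.0]]) / 2.0)   # spatial
    Iy = convolve(frame1, np.array([[-1.0, -1.0], [1.0, 1.0]]) / 2.0)   # derivatives
    It = frame2 - frame1                                                # temporal derivative
    u = np.zeros_like(frame1)
    v = np.zeros_like(frame1)
    for _ in range(iterations):
        u_avg = convolve(u, AVG_KERNEL)          # local average of the flow field
        v_avg = convolve(v, AVG_KERNEL)
        common = (Ix * u_avg + Iy * v_avg + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_avg - Ix * common                  # Horn-Schunck update equations
        v = v_avg - Iy * common
    return u, v
```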

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an interpretation of a classic optical flow method by Nagel and Enkelmann as a tensor-driven anisotropic diffusion approach in digital image analysis. We introduce an improvement into the model formulation, and we establish well-posedness results for the resulting system of parabolic partial differential equations. Our method avoids linearizations in the optical flow constraint, and it can recover displacement fields far beyond the typical one-pixel limits that are characteristic of many differential methods for optical flow recovery. A robust numerical scheme is presented in detail. We avoid convergence to irrelevant local minima by embedding our method into a linear scale-space framework and using a focusing strategy from coarse to fine scales. The high accuracy of the proposed method is demonstrated by means of a synthetic and a real-world image sequence.
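A minimal sketch of the image-driven anisotropic idea, using the Nagel-Enkelmann diffusion tensor in the form commonly given in the literature: smoothing is steered along the direction perpendicular to the image gradient, with lambda controlling how strongly edges inhibit diffusion across them. This illustrates the tensor only, not the paper's full PDE system:

```python
import numpy as np

def nagel_enkelmann_tensor(Ix, Iy, lam=1.0):
    """Per-pixel 2x2 tensor D = (grad_perp grad_perp^T + lam^2 I) / (|grad I|^2 + 2 lam^2)."""
    norm2 = Ix ** 2 + Iy ** 2 + 2.0 * lam ** 2
    # grad_perp = (-Iy, Ix); outer product plus isotropic lam^2 term
    d11 = (Iy ** 2 + lam ** 2) / norm2
    d22 = (Ix ** 2 + lam ** 2) / norm2
    d12 = (-Ix * Iy) / norm2
    return d11, d12, d22            # symmetric tensor entries per pixel

# At a strong vertical edge (Ix large, Iy = 0) diffusion acts mostly along y:
print(nagel_enkelmann_tensor(np.array([10.0]), np.array([0.0]), lam=1.0))
```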

Relevance:

30.00%

Publisher:

Abstract:

The continuous increase in genome sequencing projects has produced a huge amount of data over the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing of a genome by itself determines only the raw nucleotide sequence. This is just the first step of the genome annotation process, which deals with assigning biological information to each sequence. The annotation process is carried out at each level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished only by in vitro analysis procedures, which are extremely expensive and time-consuming when applied at such a large scale. Thus, in silico methods need to be used to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow fast, reliable, and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine-learning-based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method is called BaCelLo and was developed in 2006. The main peculiarity of the method is that it is independent of biases present in the training dataset, which cause the over-prediction of the most represented examples in all the other predictors developed so far. This important result was achieved by a modification I made to the standard Support Vector Machine (SVM) algorithm, creating the so-called Balanced SVM. BaCelLo is able to predict the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was shown to outperform all the currently available state-of-the-art methods for this prediction task. BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genome, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work, a new machine-learning-based method was implemented for the prediction of GPI-anchored proteins. The method is able to efficiently predict, from the raw amino acid sequence, both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model, HMM). The method is called GPIPE and was shown to greatly enhance the prediction performance for GPI-anchored proteins over all previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false-positive rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis of the composition of the regions surrounding the ω-site allowed the definition of specific amino acid abundances in the different regions considered. Furthermore, the hypothesis, proposed in the literature, that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo http://gpcr.biocomp.unibo.it/bacello, eSLDB http://gpcr.biocomp.unibo.it/esldb, GPIPE http://gpcr.biocomp.unibo.it/gpipe
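The Balanced SVM modifies the SVM training algorithm itself and is not reproduced here; as a rough illustration of the same balancing idea, the sketch below uses scikit-learn's class weighting so that an over-represented class (standing in for an over-represented localization) does not dominate the decision function:

```python
import numpy as np
from sklearn.svm import SVC

# Toy, imbalanced two-class problem standing in for an over-represented
# localization class (label 0) vs. an under-represented one (label 1).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(950, 20)),
               rng.normal(0.7, 1.0, size=(50, 20))])
y = np.array([0] * 950 + [1] * 50)

# 'balanced' reweights errors inversely to class frequency, so the rare class
# is not simply overridden by the abundant one (analogous in spirit, though
# not identical, to the Balanced SVM described above).
clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)
print((clf.predict(X) == y).mean())
```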